As a Data Engineering Consultant at Optiv, you will make a difference for our clients by building, maintaining, and architecting big data pipelines within Secure Data Analytic (SDA) Environments. You will apply your technical skills and engineering experience daily to our clients' largest data challenges. You may help clients design state-of-the-art data infrastructure from the ground up, or find opportunities to enhance and upgrade legacy platforms. Through collaboration and experimentation, your technical skills will help our clients succeed. You will join a team of highly skilled and experienced big data professionals who work with the latest tools, technologies, and approaches.
· Help clients assess, design, implement, and enhance big data environments
· Deliver expertise in data platform deployment, hardening, and scaling
· Assist clients with data sourcing, transformation, enrichment, and information extraction
· Work with stakeholders and data scientists to implement and enhance feature stores and machine learning platforms
· Design, deliver, and manage consulting engagements with clients large and small, across every industry
· Create client deliverables that clearly present recommendations and strategies in compelling formats, including dashboards, presentations, and documentation
Must Have
· Experience working with big data pipelines, architectures, and data sets
· Ability to perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
· Strong analytical skills for working with unstructured datasets
· Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management
· Experience with relational SQL and NoSQL databases
· Experience with object-oriented and functional programming languages: Python, Java, C++, Scala, etc.
· Proficiency in one or more scripting languages (Python, Ruby, C#, PowerShell) and common data formats (JSON, YAML)
· Working SQL knowledge and experience with relational databases, including query authoring and familiarity with a variety of database platforms
Nice to Have
· Experience with big data tools: Elasticsearch, Hadoop, Spark, Kafka, Databricks, etc.
· Familiarity with agile practices such as Scrum, Kanban, and CI/CD preferred
· Familiarity with stream-processing systems: Logstash, StreamSets, NiFi, Storm, Spark Streaming, etc.
· Experience working with cloud security and governance tools, cloud access security brokers (CASBs), and server virtualization technologies
· Experience with deployment orchestration, automation, and security configuration management (Jenkins, Puppet, Chef, CloudFormation, Terraform, Ansible, Salt) preferred
· Strong interpersonal and communication skills; ability to work in a team environment
· Ability to work independently with minimal direction; self-starter/self-motivated
· Technical writing experience