Job Details

AI Cloud DevOps Engineer - Data Science, Senior Consultant

Advertiser
Guidehouse
Location
Washington, DC, United States
Rate
-

Responsibilities
As part of Guidehouse's Advanced Data Analytics team, you will work on high-impact and high-visibility projects, helping to shape not only Guidehouse's current business but also its long-term strategy.

Build the future of Data Science as part of the Artificial Intelligence Center of Excellence (CoE). The CoE is a unique team within Guidehouse, focused on solving our clients' most critical challenges using Data and Advanced Analytics, AI, and Automation. The CoE works on a wide variety of projects: from predictive analytics models that support our healthcare, financial, and energy services divisions, to open-source analysis for federal agencies, to applying deep learning models (e.g., NLP, image recognition) to more complex problems.

This role involves working in a multi-functional, Agile team environment with other data scientists, engineers, and UI/UX developers to develop and productionize analytics solutions. The Cloud DevOps Engineer is involved in many aspects of customer engagement: collaborating with team members and customers, helping stakeholders discover the information hidden in their vast amounts of data, supporting data-driven decision-making, and ultimately delivering better products.

Lead team initiatives to continuously refine our AWS deployment practices for improved reliability, repeatability, and security. You'll create and contribute to plans and collaborate with other team members. These high-visibility initiatives will help increase service levels, lower costs, and deliver features more quickly.
Work closely with the Data Science team to automate the deployment and configuration of infrastructure supporting the rollout of data products and projects on the AWS data stack. This includes building Machine Learning workflows in AWS that span the full stack, from front end to back end.
Design effective monitoring/alerting (for conditions such as application errors or high memory usage) and log-aggregation approaches (to quickly access logs for troubleshooting, or to generate reports for trend analysis) that proactively notify business stakeholders of issues and communicate metrics, working closely with those stakeholders and using tools including AWS CloudWatch, SageMaker, EMR, and Glue.
Write code and scripts to automate the provisioning and configuration of AWS services, using tools and languages including the AWS CLI/API, Terraform, Ansible, Chef, Python, Bash, and Git (a minimal sketch follows this list).
Configure build pipelines to support automated testing and deployments using tools including Jenkins, CircleCI, and AWS CodeDeploy. You'll configure these pipelines for specific products and help optimize them for performance and scalability.
Help refine DevSecOps security practices (including regular security patching, minimum-permission accounts and policies, and encrypt-everything defaults), implement them, and verify compliance with Health IT, government, and other standards and regulations, using tools such as SonarQube and Veracode.
Document and diagram deployment-specific aspects of architectures and environments, working closely with Software Engineers, Data Scientists, Software Engineers in Test, and others in DevOps.
Troubleshoot issues in production and other environments, applying debugging and problem-solving techniques (e.g., log analysis, non-invasive tests) and working closely with development and product teams.
Suggest improvements to deployment patterns and practices based on learnings from past deployments and production issues; collaborate with the DevOps team to implement them.
Promote a DevOps culture, including building relationships with other technical and business teams.
Work closely with InterOps to deploy and configure the platform to onboard clinics.
Work to ensure system and data security is maintained at a high standard, ensuring the confidentiality, integrity, and availability of the Navigating Cancer applications are not compromised.
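
As a rough illustration of the provisioning and monitoring responsibilities above, the Python sketch below uses boto3 (one of the AWS API options this posting names) to create an S3 bucket and attach a CloudWatch error alarm to a Lambda function. The region, bucket name, function name, and threshold are hypothetical placeholders, not details from this role.

    # Minimal sketch of scripted AWS provisioning plus a CloudWatch alarm.
    # Illustrative only: region, names, and thresholds are hypothetical.
    import boto3

    REGION = "us-east-1"  # hypothetical region

    def provision_bucket(name: str) -> None:
        """Create an S3 bucket (us-east-1 needs no location constraint)."""
        s3 = boto3.client("s3", region_name=REGION)
        s3.create_bucket(Bucket=name)

    def add_error_alarm(function_name: str) -> None:
        """Alarm whenever the Lambda function reports any error in a 5-minute window."""
        cloudwatch = boto3.client("cloudwatch", region_name=REGION)
        cloudwatch.put_metric_alarm(
            AlarmName=f"{function_name}-errors",
            Namespace="AWS/Lambda",
            MetricName="Errors",
            Dimensions=[{"Name": "FunctionName", "Value": function_name}],
            Statistic="Sum",
            Period=300,
            EvaluationPeriods=1,
            Threshold=0,
            ComparisonOperator="GreaterThanThreshold",
            TreatMissingData="notBreaching",
        )

    if __name__ == "__main__":
        provision_bucket("example-data-products-bucket")  # hypothetical name
        add_error_alarm("example-scoring-function")       # hypothetical name

In practice the same resources would more likely be declared in Terraform or Ansible so they are versioned and repeatable; the script form just keeps the example short.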
Qualifications
Minimum Security Clearance: None

Minimum Years of Experience: 4

Minimum Education: Advanced degree

Ability to automate away manual interactions, and a passion for enabling developers to write code that works
A strong understanding of Linux administration, including Bash scripting
An understanding of automation and RPA orchestration tools such as UiPath
Networking expertise, including VPCs, SDNs (e.g., Amazon/Azure), VLANs, routers, and firewalls
Familiarity with at least one IaC/CM tool such as Terraform or Ansible
Familiarity with at least one code build/deploy tool such as Jenkins or CircleCI
Familiarity with database setup, configuration, and monitoring
A mindset of enabling capabilities through a blend of process and technology
Minimum Qualifications:

Bachelor's degree in Computer Science, Engineering, Applied Mathematics, Statistics, Data Management, or a related field
2+ years of AWS administration experience/training, including provisioning EC2 instances, VPCs, Lambda functions, RDS databases, S3 storage, IAM security, ECS containers, CloudWatch metrics and logs, and AWS Cognito pools
2+ years of experience developing and/or deploying serverless functions using AWS Lambda, Azure Functions, or Google Cloud Functions (a minimal Lambda sketch follows this list)
1+ years of experience operating and administering Kubernetes deployments, clusters, or configurations
1+ years of experience using infrastructure as code tools such as Terraform or Ansible
1+ years of experience with SQL; adept with an RDBMS such as PostgreSQL
1+ years of experience designing and deploying machine learning experiments
1+ years of experience analyzing large and complex data sets, including a demonstrated thorough aptitude for conducting quantitative and qualitative analysis
Experience with monitoring/alerting tools such as New Relic, Grafana, Prometheus, Sysdig
Experience with log aggregation tools such as Datadog, ELK, Splunk
Experience in Python as well as at least one other programming language such as Ruby, Java, Scala, JavaScript/Node.js, Go, C#, or C/C++.
AWS Certified DevOps Engineer
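
For the serverless-functions line above, a minimal AWS Lambda handler in Python might look like the sketch below; the event fields and response shape are hypothetical, since a real function's contract depends on its trigger (API Gateway, S3 events, etc.).

    # Minimal AWS Lambda handler sketch; the event fields are hypothetical.
    import json

    def lambda_handler(event, context):
        """Return a JSON greeting built from a 'name' field in the event."""
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"hello, {name}"}),
        }

A function like this would typically be packaged and deployed through the same IaC and pipeline tooling (Terraform, Jenkins/CircleCI) listed elsewhere in this posting rather than by hand.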
Desired Experience:

2+ years of experience building cloud data lakes to support data analytics and machine learning tasks
1+ years of experience with AWS RDS, schema design, system performance and optimization, and capacity planning; AWS Big Data Architect certification or equivalent preferred
Demonstrable in-depth understanding of data structures and ETL processes (including SSIS)
Experience with structured and unstructured data, including relational databases (SQL Server), graph databases (Neo4j), and NoSQL databases (MongoDB)
Experience working with big data tools (Scala, Spark, Pig)
Experience with the operationalization and maintenance of analytics APIs using Plumber, Flask, Swagger, and similar tools (see the sketch after this list)
Experience in data analytics, business intelligence, or data science
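
As a sketch of the analytics-API item above, a Flask service in Python could expose a model behind a scoring endpoint as below. The averaging "model" is a hypothetical stand-in; a real service would load a trained artifact at startup, validate inputs, and document the route with Swagger/OpenAPI.

    # Minimal Flask sketch of an operationalized analytics endpoint.
    # The scoring function is a hypothetical stand-in for a trained model.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    def predict(features):
        """Placeholder score: the mean of the input features."""
        return sum(features) / len(features) if features else 0.0

    @app.route("/score", methods=["POST"])
    def score():
        payload = request.get_json(force=True)
        features = payload.get("features", [])
        return jsonify({"score": predict(features)})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)

A client would POST a body such as {"features": [1.0, 2.0]} to /score and receive a JSON score in response.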
Additional Requirements
This position requires successful completion of a background check and employment verification.
The successful candidate must not be subject to employment restrictions from a former employer (such as a non-compete) that would prevent the candidate from performing the job responsibilities as described.
