Job Details

(Remote) Big Data Engineer

Insight Global
Washington, Washington DC, United States
A client is looking for a Big Data Engineer to join their Performance Analytics group; the role is fully remote. The team is building data pipelines between its clients, which are insurance providers and hospitals, and its internal big data platform. This is an individual contributor role requiring a strong background in ETL and data flow. The team is based mainly in the Eastern and Central time zones, so the candidate must be located in one of those zones.

The engineer will be responsible for Scala/Spark development of ETL pipelines and for deploying containers to the cloud (AWS); the role is roughly 50/50 Spark/Scala development and AWS engineering. The candidate must be hands-on with AWS data pipeline work, using EMR to run big data jobs and Glue and Lambdas for automation. The client has many data sets to integrate into this big data platform, so the engineer will help with those integration efforts and ensure data accuracy along the way. The role involves hands-on engineering and development in Spark/Scala for data pipelines and Python for pipeline automation, hosted on an on-prem Hadoop cluster with a Hive schema. The candidate must also communicate clearly with the business to gather requirements and break down solutions for their needs.

Minimum Requirements
- 3-8+ years of experience as a Big Data Engineer in AWS
- 4+ years of pipeline development with Spark
- AWS cloud engineering and deployment experience (EMR, Glue, Lambdas)
- Jenkins/Maven for builds/deployments

Desired Skills
- Python, Terraform, and shell scripting
- Scala
- Healthcare
- Cloud
- ETL
- Experience managing junior developers
