- Consult with our internal customers on their requirements; document and build specific tooling to meet their needs.
- Align system designs with larger strategic initiatives.
- Create specs and test plans and get them approved by stakeholders.
- Build automation and/or tools that adhere to the specs, and build best-in-class CI/CD pipelines.
- Optimize our custom open-source tools, built in collaboration with other Fortune 500 companies, to run massive production workloads company-wide.
- Work closely with our Ops team to understand their day-to-day issues and recommend automated solutions to their problems.
- Lead and run Subject Matter Expert programs for BigQuery, Hive, and other platforms to spread knowledge of our supported platforms across our large and diverse user base.
- Build and deliver components for moving streaming data from Kafka to HDFS and cloud analytics systems, as well as tools/dashboards to report on important metrics for Big Data Infrastructure.
- Clearly communicate project status and risks to engineering and executive stakeholders.

What You Have

- 5+ years of software development experience while thinking like a data engineer
- Strong knowledge of Kafka and/or Hadoop
- Experience building and delivering data engineering systems
- Strong programming skills in Python, Scala, or Java
- Experience working with both Linux and Windows
- Strong agile SDLC background
- An analytical, creative, and innovative approach to solving problems
- Bachelor's or Master's in Computer Science, Computer Engineering, or a related field