Required Qualifications
3+ years' experience designing, building, and deploying scalable data pipelines to production on very large datasets.
3+ years' hands-on experience with Python, Spark, Hive, and shell scripting.
2+ years' experience designing, building, and deploying cloud-native solutions.
3+ years' experience working within the Hadoop ecosystem (HDFS, YARN, etc.).
2+ years' experience developing solutions using RDBMS, HBase, NiFi, etc.
2+ years' experience with at least one document-oriented database (MongoDB, CouchDB, etc.).
1+ years' hands-on experience writing Infrastructure as Code (IaC) using Terraform or similar tools.
Demonstrated strength in data modeling, ETL development, and data warehousing.
Experience monitoring and troubleshooting operational or data issues in data pipelines.
Preferred Qualifications
Experience developing solutions using messaging systems such as RabbitMQ, Kafka, etc.
Working knowledge of data structures, algorithms, and distributed processing.
Working knowledge of implementing machine learning algorithms in Spark MLlib, H2O, R, or related libraries/packages.
Demonstrated ability to drive and articulate technical challenges and solutions.
Demonstrated ability to create advanced architectures and sustainable solutions.
Demonstrated ability to deliver high-quality software through working in a dynamic, team-focused Agile/Scrum environment
Experience collaborating with product teams and non-technical partners.
Experience with at least one RDBMS (MySQL, PostgreSQL, RDS, Oracle, etc.).
Hands-on experience with DevOps pipeline tools for code integration, automated testing, and deployment (Git, Jenkins/Groovy, etc.)
Google Cloud Platform/AWS/Azure Certification(s)
Education
Bachelor's degree in Computer Science, Engineering, Machine Learning, or a related discipline, or equivalent work experience.