Big Data Developer
Hadoop, Spark, Storm, Hive, Kafka, AWS (EMR, Redshift, S3, etc.), Azure (HDInsight, Data Lake design), NoSQL platforms such as HBase and MongoDB, Oozie, Sqoop, Pig, Avro, Parquet, NiFi, Scala, Ruby
Roles & Responsibilities
* Participate in technical planning and requirements-gathering phases, including design, coding, testing, troubleshooting, and documentation of big data software applications. Responsible for ingesting, maintaining, improving, cleaning, and manipulating data in the business's operational and analytics databases, and for troubleshooting any issues that arise.
* Implement, troubleshoot, and optimize distributed solutions based on modern big data technologies such as Hive, Hadoop, Spark, Elasticsearch, Storm, and Kafka, in both on-premises and cloud deployment models, to solve large-scale processing problems.
* Define and build large-scale, near real-time streaming data processing pipelines that enable faster, better-informed, data-driven decision making within the business.
* Work within a team of industry experts on cutting-edge Big Data technologies to develop solutions for deployment at massive scale.
* Keep up with industry trends and best practices in data engineering, driving departmental performance, strengthening data governance across the business, promoting informed decision-making, and ultimately improving overall business performance.
Pleasanton, CA/ Pasadena, CA/ Denver, CO