The WBT Data Intelligence team is seeking a Senior Data Engineer who will be an extraordinary addition to our growing team. As a Data Engineer at WB, you will be responsible for building high-quality, scalable enterprise data solutions and data services that exceed the data, reporting, and analytical needs of the organization.
The ideal candidate will have the technical skills to work with multiple terabytes of data using the Apache Spark framework, Elasticsearch, and Snowflake in an AWS ecosystem. You will work closely with Product Managers, Data Architects, and Data Engineers to implement and maintain data pipelines leveraging big data technologies. You will constantly innovate and improve the ways in which we obtain value from our data. The role works closely with engineering leadership to plan and write high-quality, performant code. You will also participate in peer code reviews, mentor junior engineers, and champion a high standard of code excellence. You will manage applications and own the technology stack end-to-end to enable data to be exchanged across the studio.
Data Integration and Pipelines - Design and develop data integrations from a variety of sources, including files, database extracts, and external APIs.
Operations and Tuning - Investigate and resolve problems as required, including working with various internal teams and vendors. Proactively monitor data flows with a focus on continued performance improvement. Develop POCs and best practices for application development.
Application Ownership - Own the application/data end-to-end from requirements to post-production, working closely with other teams. Provide engineering leadership by actively advocating best practices and standards for software engineering. Share knowledge and guide junior engineers to level up the whole team.
Bachelor's degree in Computer Science or related field.
Minimum of 7 years of software engineering/development experience.
Minimum of 5 years of data analytics/data engineering and complex ETL/ELT experience.
Minimum of 3 years working with big data technologies, including Hadoop, Apache Spark, Snowflake, and the AWS suite of technologies (S3, EMR, Lambda).
Minimum of 3 years architecting NoSQL/SQL datastores.
Experience with scripting (Shell, Python) in a Linux environment.
Experience consuming and creating RESTful APIs.
Expert problem solver with strong analytical skills.
Expert in Spark (Scala/Java), Spark SQL, and Spark Streaming.
Expert in SQL (Snowflake, MySQL).
Experience using big data tools (Hadoop, MapReduce, Elasticsearch, Kinesis, Kafka, Solr).
Experience using AWS technologies (EMR, S3, Kinesis, Lambda).
Strong object-oriented and functional programming skills.
Experience writing Bash scripts.
Experience using Git or SVN and Jira.
Experience using Python is a plus.
Experience using RESTful APIs is a plus.
Experience with Ad platforms or CRM solutions is a plus.
Strong communication skills.
Ability to work independently or collaboratively.
Detail-oriented with strong organization and prioritization skills.
Entertainment and/or social media experience is a plus.
Demonstrated ability to work well under time constraints.
Must be able to work flexible hours, including possible overtime, when necessary.
Must be able to maintain confidentiality.
Management has the right to add or change duties and job requirements at any time.