NIKE, Inc., Beaverton, OR. Develop, configure, and test programs, systems, and solutions to meet defined digital product specifications and direction. Requires technical development of digital products that meet consumer needs as defined by product managers and align with architectural standards. Within Nike's Global Technology organization, engage in developing and maintaining applications that process millions of consumer data signals and turn them into meaningful insights the Marketing Team uses to target the right audience for various campaigns across different channels. Design the backend architecture for data ingestion using tools such as HBase, Spark Streaming/SQL, and Elasticsearch on Amazon AWS, so the system can capture data in real time as well as in batch updates, with high read/write throughput. Develop Terraform scripts as Infrastructure-as-Code using Amazon AWS Cloud services to deploy the backend infrastructure, which includes networks, datastores, compute nodes, code repos, and metric dashboards. Develop ETL scripts that transform consumer signals into measurable attributes by joining several datasets in S3 using big data processing frameworks such as Hadoop, MapReduce, and Spark. Capture and publish application metrics, such as time taken and number of records processed, in real time around the end-to-end data pipeline so stakeholders have high observability into the system. Identify performance bottlenecks in the processes and develop tools to optimize the delivery of campaigns with low latency and high quality. Develop APIs to serve the transformed data to other downstream applications. Employer will accept a Master's degree in Computer Science, Computer Applications, Information Technology, or Engineering and 1 year of experience in a software engineering-related occupation.
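The ETL duty described above centers on joining consumer-signal datasets into measurable attributes. A minimal sketch of that join logic in plain Python follows; the dataset shapes, field names (`consumer_id`, `region`), and the derived `engagement_count` attribute are hypothetical examples, and a real job would express this as a Spark join over data in S3 rather than in-memory lists.

```python
# Illustrative sketch: derive a measurable attribute by joining a
# profile dataset with an event dataset on consumer_id. Field names
# and the attribute itself are hypothetical, not Nike's schema.

def join_signals(profiles, events):
    """Attach an engagement_count attribute (events per consumer)
    to each profile record."""
    counts = {}
    for event in events:
        cid = event["consumer_id"]
        counts[cid] = counts.get(cid, 0) + 1
    return [
        {**profile, "engagement_count": counts.get(profile["consumer_id"], 0)}
        for profile in profiles
    ]

profiles = [{"consumer_id": 1, "region": "NA"},
            {"consumer_id": 2, "region": "EU"}]
events = [{"consumer_id": 1, "type": "click"},
          {"consumer_id": 1, "type": "view"}]

enriched = join_signals(profiles, events)
```

In a Spark job the same shape would be a `groupBy`/`count` on the event set followed by a left join back onto the profile set.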
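The observability duty, capturing metrics such as time taken and number of records processed around a pipeline stage, can be sketched as a thin wrapper around each stage. The metric names and the publish target (here an in-memory list) are assumptions for illustration; a real pipeline would push to a dashboard backend.

```python
import time

# Illustrative sketch: publish per-stage metrics (time taken, records
# processed) around a pipeline step. The metric names and in-memory
# "publish" target are hypothetical stand-ins for a metrics service.

published_metrics = []

def publish(name, value):
    """Stand-in for pushing one metric to a dashboard backend."""
    published_metrics.append((name, value))

def run_stage(records, transform):
    """Apply a transform to records, publishing timing and count metrics."""
    start = time.monotonic()
    results = [transform(record) for record in records]
    publish("records_processed", len(results))
    publish("time_taken_seconds", time.monotonic() - start)
    return results

doubled = run_stage([1, 2, 3], lambda r: r * 2)
```

Wrapping every stage the same way gives stakeholders a uniform view of throughput and latency across the end-to-end pipeline.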
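Serving transformed data to downstream applications, the last duty above, reduces to a lookup endpoint over the derived attributes. A minimal sketch follows; the in-memory store, the `get_attributes` handler name, and the response shape are hypothetical, and a production service would sit behind a real HTTP framework.

```python
import json

# Illustrative sketch: a handler that serves transformed consumer
# attributes as JSON. The store and response shape are hypothetical.

attributes = {
    1: {"engagement_count": 2},
    2: {"engagement_count": 0},
}

def get_attributes(consumer_id):
    """Return (json_body, status_code) for one consumer's attributes."""
    record = attributes.get(consumer_id)
    if record is None:
        return json.dumps({"error": "not found"}), 404
    return json.dumps({"consumer_id": consumer_id, **record}), 200

body, status = get_attributes(1)
```

The same lookup could be backed by Elasticsearch or HBase, matching the datastores named in the ingestion architecture.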
Experience must include the following: big data streaming and analytics technologies; designing, reviewing, implementing, and optimizing distributed data processing applications; Apache Hadoop, Amazon EMR, Kinesis, Spark, Apache HBase, Elasticsearch, Kafka, Sqoop, or Hive; developing web services in Java/J2EE or Scala to serve aggregated data processed in batch mode to analytics dashboards; configuring Hadoop clusters using a major Hadoop distribution such as Hortonworks or Cloudera; AWS; and DevOps and graph databases (Neptune, Jena, Neo4j, or TigerGraph).