Job Details

Associate Technical Architect / Big Data Architect

Advertiser
Virtual Networx
Location
Golden, Colorado, United States
Rate
-
Greetings,

My name is Sham and I represent Virtual Networx Inc., a staff augmentation firm providing a wide range of on-demand talent and contingent workforce solutions. We have excellent domain expertise across all verticals. Repositioning professionals is what we do, and we do it very well. I am reaching out to you today because your profile matches an immediate job opportunity we have with our premier client.

Job Title: Associate Technical Architect / Big Data Architect
Work Location: Colorado
Duration: 6+ months, contract
Primary Skill: Data Engineering

Job Duties and Responsibilities (6-7 years of experience; 10+ preferred)

Primary responsibilities fall into the following categories:
- Deploy enterprise data-oriented solutions leveraging Data Warehouse, Big Data, and Machine Learning frameworks
- Optimize data engineering and machine learning pipelines
- Support data and cloud transformation initiatives
- Contribute to our cloud strategy based on prior experience
- Understand the latest technologies in a rapidly innovating marketplace
- Work independently with stakeholders across the organization to deliver point and strategic solutions

Skills, Experience, and Requirements

A successful Technology Transformation Architect will have the following:
- Prior experience working as a Data Warehouse / Big Data architect
- Experience with the Apache Spark processing framework, including Spark programming in Scala or Python, plus knowledge of shell scripting
- Coding experience in Java and/or Scala is a must
- Experience using AWS APIs (e.g., the Java API, Boto3, etc.) to integrate different services
- Experience in both functional programming and Spark SQL programming, processing terabytes of data; specifically, experience writing Big Data engineering jobs for large-scale data integration in AWS
- Prior experience writing machine learning data pipelines in Spark is an added advantage
- Advanced SQL experience, including SQL performance tuning, is a must
- Experience in logical and physical table design in a Big Data environment to suit processing frameworks
- Knowledge of using, setting up, and tuning Spark on EMR with a resource management framework such as YARN, or standalone Spark
- Experience writing Spark Streaming jobs (producers/consumers) using Apache Kafka or AWS Kinesis is required
- Knowledge of a variety of data platforms such as Redshift, S3, DynamoDB, and MySQL/PostgreSQL
- Experience with AWS services such as EMR, Glue, Athena, IAM, Lambda, CloudWatch, and Data Pipeline
- Experience in AWS cloud transformation projects is required

Please reach out to me.
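To give candidates a concrete sense of the "SQL performance tuning" requirement above, here is a minimal sketch of plan-driven tuning: inspecting a query plan before and after adding an index. It uses SQLite from the Python standard library as a stand-in for a warehouse engine (the posting's environments would be Redshift, Athena, or similar), and the `events` table, its columns, and the index name are hypothetical examples, not part of the role description.

```python
# Sketch: compare a query plan before and after adding an index.
# SQLite (Python stdlib) stands in for a real warehouse engine here;
# the table, columns, and index name are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, event_type TEXT, ts TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(i % 100, "click", f"2024-01-{i % 28 + 1:02d}") for i in range(1000)],
)

query = "SELECT COUNT(*) FROM events WHERE user_id = 42"

# Without an index, the engine must perform a full table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_before)  # plan detail reports a scan of "events"

# An index on the filter column lets the engine seek directly to
# matching rows instead of scanning everything.
conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_after)   # plan detail reports a search using idx_events_user
```

The same workflow (read the plan, spot the scan, add or fix an index or sort/distribution key, re-check the plan) carries over to the larger engines named in the requirements, though the plan syntax differs per engine.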
