Job Details

Big Data Engineer / Hadoop Developer

Advertiser
APLOMB Technologies
Location
Princeton, New Jersey, United States
Rate
-
Visa
H1B, OPT EAD, CPT EAD, H4-EAD, L2-EAD, GC, GC-EAD, USC

Location
Multiple Locations (remote work available)

Experience
1 to 10 years

Salary
Depends on experience

Position
Big Data / Hadoop Developer

Top Skills Required
Spark, Scala/Java, SQL, Hive, Hadoop

Job Description
Our client is looking for an experienced and skilled Big Data Engineer/Developer to build an analytics and ML platform to collect, store, process, and analyze huge sets of data spread across the organization. The platform will provide frameworks for quickly rolling out new data analysis for data-driven products. The candidate should be able to assist the team in solving the challenges of large-scale data storage, low-latency retrieval, high-volume requests, and high availability in a distributed environment. This candidate will help refresh and evolve many facets of the data and analytics infrastructure for several systems, as part of an initiative to standardize the storage backends and structure data flows across our systems to improve discoverability and data provenance.

Skills and Experience Required
- Hands-on experience with Hadoop-related technologies such as Spark, HDFS, and Impala.
- Hands-on experience in Scala or Python and the ability to write reliable, manageable, high-performance code.
- Good understanding of OOP concepts, job schedulers, and data loaders.
- Critical thinking and problem-solving skills.
- Experience with Hadoop, particularly Hive and Spark or Kafka.
- Deep knowledge of Cloudera Hadoop components such as HDFS, Sentry, HBase, Impala, Hue, Spark, Hive, Kafka streaming, YARN, and ZooKeeper.
- Experience developing, enhancing, and maintaining high-throughput, low-latency Hadoop systems in a mission-critical production environment.
- Experience with Spark, Kafka, Oozie, ZooKeeper, Flume, or Storm, with good knowledge of Spark configuration and performance tuning.
- Strong in Spark/Scala pipelines (both ETL and streaming).
- Proficient in Spark architecture.
- At least 1 year of experience migrating MapReduce processes to the Spark platform.
- 3 years of experience in design and implementation using Hadoop and Hive.
- Able to optimize and performance-tune Hive queries.
- Experience in one coding language is a must: Java or Python.
- Experience designing ETL and streaming pipelines in Spark/Scala.
- Good experience in requirements gathering, design, and development.
- Experience working with cross-functional teams to meet strategic goals.
- Experience in high-volume data environments.

If you are interested in the position, you can send your updated resume to sai.v@aplombtek.com or apply through Dice.
