KPMG is currently seeking an Associate Director, Big Data Solution Delivery to join our Digital Nexus technology organization.
Responsibilities:
Design and develop multiple, diversified applications on big data platforms, leveraging hybrid clouds
Work with critical data stakeholders and technical teams to deliver optimal solutions and services; provide strategic and tactical direction in the delivery of big data solutions
Ensure the design, code, and procedural aspects of the solution are production-ready in terms of operational, security, and compliance standards; partner with cross-functional teams such as infrastructure, data, and enterprise architects to build scalable, optimal, self-service analytics solutions, both on premises and in the cloud
Utilize innovation, problem-solving, and best practices to create and execute road maps that improve efficiency, agility, performance, and stability, and lower Total Cost of Ownership (TCO)
Apply technical proficiency to configure and tune environments
Continually optimize the data pipeline with automation and tooling, applying the latest DevOps and Agile methodologies to improve cycle times, consistency, and quality; participate in day-to-day project and production delivery status meetings and provide technical support for faster resolution of issues
Qualifications:
Minimum eight years of experience in enterprise data architecture, integration, and analytics, including five years of designing, implementing, and successfully operationalizing large-scale data lake solutions in production environments using the big data stack (on-premises and Azure), and three years of experience architecting and implementing end-to-end Azure cloud big data solutions
Bachelor's degree from an accredited college or university
Experience with the big data stack and solutions, including cloud technologies (container management and cloud connect)
Deep hands-on experience with Cloudera/Hortonworks Hadoop, Hive, HBase, Spark, Kafka, Snowflake, Python, R, SQL, Java, Scala, Zeppelin, RStudio, Spark RDDs and DataFrames, machine learning algorithms, Ambari, Ranger, and Kerberos
Development and operations experience with Elasticsearch and Kibana; Informatica MDM, DQ, and ETL architecture experience is required; experience with modern data management ETL tools such as Zaloni and Talend is preferred, as is prior experience building and optimizing data pipelines, including CI/CD, integrated build and deployment automation, configuration management, and test automation solutions
US Citizenship required