The Data Analytics Senior Manager accomplishes results through the management of professional teams and departments. Integrates subject matter and industry expertise within a defined area. Contributes to standards around which others will operate. Requires an in-depth understanding of how areas collectively integrate within the sub-function, as well as coordination with and contribution to the objectives of the entire function. Requires basic commercial awareness. Developed communication and diplomacy skills are required in order to guide, influence, and convince others, in particular colleagues in other areas and occasional external customers. Has responsibility for the volume, quality, timeliness, and delivery of end results of an area. May have responsibility for planning, budgeting, and policy formulation within the area of expertise. Involved in short-term resource planning. Has full management responsibility for a team, which may include management of people, budget, and planning, including duties such as performance evaluation, compensation, hiring, disciplinary action, and termination, and may include budget approval.
- Integrates subject matter and industry expertise within a defined area.
- Contributes to data analytics standards around which others will operate.
- Applies an in-depth understanding of how data analytics areas collectively integrate within the sub-function, and coordinates with and contributes to the objectives of the entire function.
- Employs developed communication and diplomacy skills in order to guide, influence and convince others, in particular, colleagues in other areas and occasionally external entities.
- Resolves occasionally complex and highly variable issues.
- Produces detailed analysis of issues where the best course of action is not evident from the information available, but actions must be recommended/taken.
- Responsible for the volume, quality, timeliness and delivery of data science projects, along with short-term resource planning.
- Oversees management of people, budget and planning, to include duties such as performance evaluation, compensation, hiring, disciplinary action and terminations and may include budget approval.
- Appropriately assesses risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency, as well as effectively supervising the activity of others and creating accountability with those who fail to maintain these standards.
- 6-10 years of experience using tools for statistical modeling of large data sets
- Bachelor's/University degree or equivalent experience; Master's degree preferred
This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
Candidates should possess strong knowledge and interest across big data technologies and have a background in data engineering.
Build data pipeline frameworks to automate high-volume, real-time data delivery for our Spark and streaming data hub
Transform complex analytical models into scalable, production-ready solutions
Provide support and enhancements for an advanced anomaly detection machine learning platform
Continuously integrate and ship code into our cloud production environments
Develop cloud-based applications from the ground up using a modern technology stack
Work directly with Product Owners and customers to deliver data products in a collaborative and agile environment
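The anomaly detection work mentioned above can take many forms; as a minimal, hypothetical sketch (not the posting's actual platform), a rolling z-score detector flags points that deviate sharply from a trailing window of recent values:

```python
from collections import deque
from math import sqrt

def rolling_zscore_anomalies(values, window=20, threshold=3.0):
    """Return indices of points whose z-score versus a trailing
    window of prior values exceeds the threshold."""
    history = deque(maxlen=window)  # trailing window of recent values
    anomalies = []
    for i, x in enumerate(values):
        if len(history) == window:
            mean = sum(history) / window
            var = sum((v - mean) ** 2 for v in history) / window
            std = sqrt(var)
            # Flag only when the window has spread and x is an outlier
            if std > 0 and abs(x - mean) / std > threshold:
                anomalies.append(i)
        history.append(x)
    return anomalies

# A spike after 30 alternating baseline readings is flagged at index 30
print(rolling_zscore_anomalies([10, 12] * 15 + [100]))  # → [30]
```

Production platforms would apply the same idea per-key over streaming windows (e.g., in Spark Structured Streaming) rather than over an in-memory list.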
Your Responsibilities Will Include
Developing sustainable, data-driven solutions using current and next-generation data technologies to drive our business and technology strategies
Building data APIs and data delivery services to support critical operational and analytical applications
Contributing to the design of robust systems with an eye on the long-term maintenance and support of the application
Leveraging reusable code modules to solve problems across the team and organization
Handling multiple functions and roles across projects and Agile teams
Defining, executing and continuously improving our internal software architecture processes
Being a technology thought leader and strategist
At least 4 years of experience designing and developing data pipelines for data ingestion or transformation using Java, Scala, or Python
At least 4 years of experience with the following Big Data frameworks: file formats (Parquet, Avro, ORC), resource management, distributed processing, and RDBMS
At least 4 years of experience developing applications with monitoring, build tools, version control, unit testing, TDD, and change management to support DevOps
At least 2 years of experience with SQL and shell scripting
Experience designing, building, and deploying production-level data pipelines using tools from the Hadoop stack (HDFS, Hive, Spark, HBase, Kafka, NiFi, Oozie, Apache Beam, Apache Airflow, etc.).
Experience with Spark programming (PySpark, Scala, or Java).
Experience troubleshooting JVM-related issues.
Experience with strategies for handling mutable data in Hadoop.
Experience with StreamSets.
Familiarity with machine learning implementation using PySpark.
Experience with data visualization tools such as Cognos, Arcadia, and Tableau.
AngularJS 4 and ReactJS development expertise in an up-to-date Java development environment with cloud technologies
1+ years' experience with Amazon Web Services (AWS), Google Compute or another public cloud service
2+ years of experience working with streaming using Spark, Flink, Kafka, or NoSQL technologies
2+ years of experience working with dimensional data models and the pipelines that populate them
Hands-on design experience with data pipelines, including joining structured and unstructured data
Familiarity with SAS programming is a plus
Experience implementing open-source frameworks and exposure to various open-source and packaged software architectures (AngularJS, ReactJS, Node, Elasticsearch, Spark, Scala, Splunk, Apigee, Jenkins, etc.).
Experience with various NoSQL databases (Hive, MongoDB, Couchbase, Cassandra, and Neo4j) is a plus
Experience with Ab Initio technologies, including but not limited to Ab Initio graph development, EME, Co-Op, BRE, and Continuous Flows
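To make the dimensional-modeling and SQL requirements above concrete, here is a small illustrative sketch (table and column names are hypothetical, not from the posting) of a star schema: a fact table of sales joined to a product dimension and aggregated by a dimension attribute, using Python's built-in sqlite3:

```python
import sqlite3

# Hypothetical star schema: a sales fact table keyed to a product dimension.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
    CREATE TABLE fact_sales  (sale_id INTEGER PRIMARY KEY, product_key INTEGER, amount REAL,
                              FOREIGN KEY (product_key) REFERENCES dim_product(product_key));
    INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware'), (2, 'Gadget', 'Hardware');
    INSERT INTO fact_sales  VALUES (1, 1, 9.99), (2, 1, 19.99), (3, 2, 4.99);
""")

# The typical dimensional query shape: join facts to a dimension,
# then aggregate the measure grouped by a dimension attribute.
rows = conn.execute("""
    SELECT d.name, ROUND(SUM(f.amount), 2) AS total
    FROM fact_sales f
    JOIN dim_product d ON d.product_key = f.product_key
    GROUP BY d.name
    ORDER BY d.name
""").fetchall()
print(rows)  # → [('Gadget', 4.99), ('Widget', 29.98)]
```

At production scale, the same schema and query pattern would run on Hive or Spark SQL over pipeline-populated tables rather than an in-memory SQLite database.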