We're seeking a hands-on Data Engineer who can design, code, and deliver Big Data Warehouse solutions for the team. The right candidate for this role is passionate about technology, interacts well with product owners and technical stakeholders, thrives under pressure, and is focused on delivering exceptional results as part of a team. The candidate will have the opportunity to influence and collaborate with fellow technologists beyond their own team and with technology partners across the enterprise.
• Design and develop scalable Big Data Warehouse solutions across the entire data supply chain.
• Create or implement solutions for metadata management.
• Create and review technical and user-focused documentation for data solutions (data models, data dictionaries, business glossaries, process and data flows, architecture diagrams, etc.).
• Extend and enhance the business Data Lake.
• Solve complex data integration challenges across multiple systems.
• Design and execute strategies for real-time data analysis and decisioning.
• Collaborate with management, business partners, analysts, developers, architects, and engineers to support data quality efforts.
• Work closely with the Data Science team to make data more actionable.
• Be open and willing to learn new skills!
• Experience in data management and data access (Big Data, traditional Data Marts, and Data Warehousing)
• Experience with SQL (including Spark SQL and DataFrames)
• Knowledge of current data warehousing design concepts for using Redshift, Spark, Hadoop, web services, etc. to support business-driven decisioning
• Experience in data architecture and data assembly
• Experience in Data Governance and Data Security
• Experience with data integration tools (e.g., Talend (preferred), Cascading)
• Experience with data manipulation scripting languages
• Experience with Business Intelligence, MDM, XML, and SOA/web services
• Experience with Data Science toolsets and technology
• Bachelor's or Master's degree in computer science, data processing, or an equivalent field
• Experience in Data Warehousing or a similar analytic data role
• Experience with Java programming and framework development
• Experience with Hadoop and Spark
• Experience with Amazon EMR/EC2 (or equivalent)
• 2+ years' experience with Python
• Experience with Bitbucket and a solid understanding of core Git concepts
• Familiarity with Linux
• Familiarity with Jenkins and CI/CD
• A solid understanding of core computer science concepts
• Experience with AWS technologies such as Aurora, Athena, EMR, Redshift, S3
• Experience with PostgreSQL and MySQL
• Excellent organizational and project management skills
• Outstanding communication skills
This job and many more are available through The Judge Group. Find us on the web at