Job Description
Job Summary
The Senior Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent across ongoing projects. This role requires a high level of self-direction and comfort supporting the data needs of multiple teams, systems, and products. The Senior Data Engineer will collaborate with the other members of the Enterprise Data & Analytics team to define our company’s data architecture to support our next generation of products and data initiatives. This position will be part of the Information Technology team.
Duties and Responsibilities/Essential Functions
- Work with business stakeholders, data scientists, and data analysts to design and implement Data Engineering solutions.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Lead the design and implementation of infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, leveraging technologies within the Azure platform.
- Evaluate and recommend technology and frameworks for building cross product data assets to optimize for flexibility, long-term viability, and time to market.
- Guide, mentor, and influence teams in adopting a cloud-first modern data architecture, and consistently apply the associated standards and practices.
- Anticipate the need for data governance and work with designated teams to ensure that data modeling and data handling procedures are compliant with applicable laws and policies across the data pipeline.
- Lead root cause analysis in response to detected problems/anomalies; design and implement corrective actions to prevent recurrence.
Qualifications
To perform this job successfully, an individual must be able to perform each essential function satisfactorily. The requirements listed below are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable qualified individuals with disabilities to perform the essential functions.
- Bachelor’s degree in Computer Science or related field.
- 5+ years’ experience supporting data transformation, data structures, metadata, dependency, and workload management.
- Excellent knowledge and understanding of established and emerging data management technologies, including but not limited to Spark, Databricks, Parquet, and Delta Lake.
- Experience designing and implementing relational and NoSQL databases, unstructured data sets, and data streams.
- Knowledge and execution of SDLC is required.
- Knowledge and execution of Agile/Scrum is required.
- Experience with or in manufacturing is preferred.