Job Description
Senior DataOps Engineer
Location: Jersey City, New Jersey
Full-time
Position Overview:
This position is responsible for:
• Identifying opportunities for eliminating bottlenecks throughout the Bank’s operational data ecosystem
• Enhancing efficiency and effectiveness of data infrastructure and delivery processes using scripting/automation with a focus on data integrity, reliability, and consistency
• Building and supporting highly available data pipelines and related integrations
Team Overview:
As part of the Data Management Office (DMO), the Operational Data Services team streamlines, automates, hardens, and modernizes the processes and technologies by which the Bank’s operational data is stored, shared, protected, and maintained at quality levels that are fit for purpose.
Essential Job Function:
• Partner with the DevOps team to apply its standards and best practices to DataOps initiatives
• Develop DataOps-specific standards and best practices in alignment with DevOps
• Provide thought leadership in identifying reusable, flexible, and scalable automated data pipeline solutions
• Perform system analysis to identify gaps in, and impediments to, continuous data integration and delivery
• Develop, construct, test, operationalize, and maintain data pipelines for operational and analytical efficiency (see the pipeline sketch after this list)
• Automate workflows for faster delivery of data from conception to operationalization
• Set best-practice standards for data engineering in alignment with the Bank’s defined enterprise and data architecture
• Manage and enhance the Bank’s non-IT Python scripting environment for secure, automated script and package deployment
• Plan, prioritize, and manage workload and project deliverables to deliver business value on time
• Collaborate with Technology, business, and vendor stakeholders to plan, design, test, and deliver data pipelines and infrastructure
• Ensure all designs and deliverables comply with PMLC, SDLC, DMO, and Bank standards
• Monitor and optimize the Bank’s RDS instances for cost-effectiveness and compliance with Bank standards
• Provide training on supported capabilities
• Collaborate with other Data Management Office staff as needed to advance the maturity of the Bank’s overall data management
• Identify existing roadblocks to the timely delivery of business value, and avoid building new ones
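To ground the pipeline responsibilities above, here is a minimal sketch of the kind of automated, quality-checked data pipeline this role builds, written with Apache Airflow (one of the technologies named under Experience). The DAG name, task bodies, and schedule are illustrative assumptions, not references to actual Bank systems.

```python
# Minimal sketch of an extract -> validate -> load pipeline in Apache Airflow.
# All names and task logic are hypothetical placeholders; assumes Airflow 2.4+
# for the unified `schedule` argument.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_to_staging(**context):
    """Placeholder: pull the day's operational records into a staging area."""
    print(f"Extracting operational data for {context['ds']}")  # ds = run date


def validate_quality(**context):
    """Placeholder: enforce integrity/consistency checks before delivery."""
    print("Running row-count and null-rate checks")


def load_to_warehouse(**context):
    """Placeholder: publish validated data downstream (e.g., to Snowflake)."""
    print("Loading validated data")


with DAG(
    dag_id="operational_data_daily",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_to_staging)
    validate = PythonOperator(task_id="validate", python_callable=validate_quality)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)

    # Ordered dependencies: data is validated before it is delivered,
    # reflecting the focus on integrity, reliability, and consistency.
    extract >> validate >> load
```

The same shape — extract, validate, deliver, all under version control and continuous integration — is what the DataOps standards referenced above would govern.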
Skills:
Interpersonal skills:
• Team orientation, collaboration, and coaching skills
• Strong analytical, conceptual, and critical thinking abilities
• Clear and concise communication and the ability to present ideas in user-friendly language
• Effective leadership qualities to negotiate with and influence a variety of stakeholders
• Effective relationship management skills at all levels
Technical Skills:
• A passion for automation and for developing best practices
• Strong organizational and process/workflow design skills
• Keen ability to identify and analyze patterns in complex data sets
• Working knowledge of agile delivery and DevOps frameworks
• Ability to think at, and switch between, multiple levels (e.g., conceptual vs. logical vs. physical data models, strategic vs. tactical planning)
• Solid understanding of data management functions as described in the Data Management Body of Knowledge (DMBOK), and of their application to the design, development, and implementation of operational data solutions
Experience:
• Minimum of five years of experience in a Data Engineer role
• 3-5 years of hands-on experience creating continuous integration and delivery pipelines leveraging cloud-native architectures and technologies on Amazon Web Services (AWS)
• 3-5 years of hands-on experience with DevOps tools to design, build, test, release, and monitor jobs (Jenkins and Ansible preferred)
• 3-5 years of hands-on experience with SQL and NoSQL database systems, data warehouses, and streaming solutions (Postgres, Oracle, SQL Server, Snowflake, and Amazon Relational Database Service (RDS) preferred)
• 3-5 years of demonstrated experience building, combining, analyzing, and optimizing large and complex data sets
• 2+ years of experience with one or more of the following: Kafka, API Gateway, Kinesis, Docker, Apache Airflow, and/or Lambda
• 1-2 years of experience in the design and development of:
  • Optimized AWS RDS instances, including replication using AWS Database Migration Service (DMS) (see the monitoring sketch after this list)
  • Information management/data architectures
  • Multi-tier data layers and frameworks
  • Data integration methods for transactional systems
  • Cloud storage solutions
• 1-2 years of working experience with scripting languages and/or statistical tools such as Python, R, Scala, or SAS
• Working knowledge of security and monitoring tools and their integration with pipelines
• Knowledge of AI, ML, and microservices deployment and release management is a plus
• Basic understanding of machine learning and artificial intelligence concepts
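As a concrete illustration of the RDS monitoring and optimization experience referenced above, the following is a hedged sketch using boto3, the standard AWS SDK for Python. The region, look-back window, and CPU threshold are illustrative assumptions, not Bank standards.

```python
# Hedged sketch: flag RDS instances with sustained low CPU as right-sizing
# candidates. The region and threshold below are assumptions for illustration.
import datetime

import boto3

REGION = "us-east-1"  # assumption; not a stated Bank region
rds = boto3.client("rds", region_name=REGION)
cloudwatch = boto3.client("cloudwatch", region_name=REGION)

now = datetime.datetime.utcnow()
for db in rds.describe_db_instances()["DBInstances"]:
    name = db["DBInstanceIdentifier"]
    # Average hourly CPU over the past 7 days, from CloudWatch.
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": name}],
        StartTime=now - datetime.timedelta(days=7),
        EndTime=now,
        Period=3600,
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    if not points:
        continue
    avg_cpu = sum(p["Average"] for p in points) / len(points)
    if avg_cpu < 10:  # illustrative threshold for a right-sizing review
        print(f"{name} ({db['DBInstanceClass']}): avg CPU {avg_cpu:.1f}% over 7 days")
```

In practice, a check like this would run on a schedule and feed its findings into the Bank’s standard monitoring and cost-review channels.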
Education/Certifications:
• Bachelor’s degree in computer science, engineering, or a related discipline; a Master’s degree is preferred. Professional experience will be considered in lieu of education.
• Certifications in AWS DevOps, AWS RDS/DMS, data engineering, data science, automation, or database technologies are a plus; generally recognized certifications in data management, scripting, or automation are preferred.