Roles & Responsibilities:
- Understand existing workflows and underlying frameworks
- Collaborate with and across Agile teams to gather metadata that meets current data governance standards
- Write unit tests and conduct reviews with other team members to ensure your code is rigorously designed, elegantly coded, and effectively tuned for performance
- Build scripts/utilities to accelerate migration
- Analyze data & generate reports
- Learn, unlearn, and relearn concepts with an open, analytical mindset
- Troubleshoot issues and apply critical thinking
- Develop & review technical documentation for delivered artifacts
Requirements:
- Minimum 7 years of experience in Data Engineering and Data Pipelines
- Minimum 5 years of extensive experience in Python Programming
- Minimum 3 years of extensive experience in SQL, Unix/Linux Shell Scripting
- Hands-on experience writing complex SQL queries and exporting/importing large volumes of data using database utilities
- Minimum 3 years of AWS experience
- Basic knowledge of CI/CD
- Excellent communication skills and strong customer centricity
Nice-to-Haves:
- Prior experience with data migration projects
- Experience building data-intensive streaming applications with stream-processing frameworks such as Kafka Streams or Spark Streaming
- Knowledge of or experience with Scala or Java programming
- Experience with at least one cloud data warehouse, such as Snowflake
- Experience with Distributed Computing Platforms
Job Types: Full-time, Contract
Salary: $35.00 - $80.00 per hour
Expected hours: 40 per week
Experience level:
- 7 years
- 8 years
- 9 years
- 10 years
- 11+ years
Schedule:
- 8 hour shift
- Day shift
- Monday to Friday
- On call
Experience:
- Informatica: 1 year (Preferred)
- SQL: 1 year (Preferred)
- Data warehouse: 1 year (Preferred)
Work Location: On the road