At Synaptiq, we believe in building a better world for future generations through novel applications of data. We create intelligent products for our customers that monitor the health of young coral to maintain perfect coral nursery conditions, improve infrastructure in Rwanda, make banking more accessible to low-income populations in Mexico, and help to reduce the risk of strokes and heart attacks. We choose to work on products that matter.
Synaptiq values not only the work we do, but also the people we work with. At Synaptiq, our team:
- Is passionate about solving problems that improve lives
- Has the audacity to believe we can solve even the most seemingly impossible problems
- Believes that AI is a natural path to solving world-class challenges, and that we have a responsibility to pursue it ethically
- Is diverse and inclusive, fostering collaboration and work with meaning and lasting impact
If you bring this kind of mindset to the table, we want to hear from you!
As our Data Engineer, you will partner with Data Scientists, ML Engineers, and Application Developers to develop robust pipelines for ingesting, transforming, and refining data at scale.
On any given day we hope that you will:
- Partner closely with Data Scientists, Machine Learning Engineers, and Application Developers to understand data requirements and contribute to the design of data solutions.
- Design, develop, and maintain scalable data pipelines for ingesting, transforming, and refining large volumes of data.
- Contribute to the design and optimization of data models, ensuring they align with business needs and performance requirements.
- Collaborate on the development and maintenance of data architecture and storage solutions.
- Identify and address performance bottlenecks in data processing pipelines to ensure optimal system efficiency.
- Implement best practices for data storage, retrieval, and processing.
- Implement robust automated testing procedures to validate data pipelines and ensure the accuracy of transformed data.
- Collaborate with quality assurance teams to resolve issues and improve data quality.
- Create and maintain comprehensive documentation for data pipelines, data models, and architectural decisions.
- Establish monitoring solutions to proactively identify and resolve issues in data pipelines.
- Work closely with support teams to troubleshoot and resolve data-related incidents.
- Implement and enforce data security and privacy measures in accordance with industry standards and regulations.
To thrive in this role, we hope that you bring:
- Strong programming skills in languages such as Python, Java, or Scala.
- Experience with data processing frameworks such as Apache Spark.
- Experience with Databricks (required).
- Proficiency in designing and optimizing data models.
- Experience with relational and non-relational databases.
- Proven experience in designing and implementing efficient ETL and ELT processes.
- Knowledge of big data technologies such as Hadoop, Hive, or HBase.
- Strong communication skills with the ability to collaborate effectively across cross-functional teams.
- Experience working in an Agile or Scrum environment.
- Ability to troubleshoot and resolve complex data-related issues.
- Experience implementing testing procedures for data pipelines using common testing frameworks and tools.
- Detail-oriented with a commitment to creating and maintaining comprehensive documentation.
- Organized approach to managing and optimizing data processes.
- Knowledge of data security best practices and compliance requirements.
- Experience implementing security measures in data solutions.
Salary Range: Based on Experience
We are an equal opportunity employer. We welcome people of different backgrounds, experiences, abilities and perspectives. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability or protected veteran status.
Employment Type: CONTRACTOR