Join the Cambridge Technology team and grow your career. We are solving real-world problems with creative innovation at some of the most recognizable corporations in the world.
From building entire infrastructures and platforms to solving complex IT challenges, we help businesses accelerate their digital transformation and become AI-first organizations. With over 20 years of expertise as a technology services company, we enable our customers to stay ahead of the curve by helping them find the right approach, solutions, and ecosystem for their business. Our experts help customers leverage the AI, Big Data, cloud solutions, and intelligent platforms that will help them become and stay relevant in a rapidly changing world.
We are seeking a skilled and experienced Big Data Engineer to join our team. As a Big Data Engineer, you will work closely with our business and ETL teams to implement data processing procedures for new projects, maintain effective awareness of production activities according to required standards, and support existing applications. You will also work with the Data Architect(s) to help drive architecture and design approaches that result in implemented business solutions.
Big Data Engineer Duties & Responsibilities:
- Work with the Big Data development team, business stakeholders, DBAs, system administrators, and data modeling teams on data pipeline design and build.
- Translate business requirements into technical requirements.
- Apply data quality methodologies and data governance practices to ensure data standardization is maintained.
- Contribute to the requirements analysis process and assist in creating architecture and design documents.
- Design, develop, and test data flows using Spark/PySpark.
- Perform data science work in support of analytics.
- Determine optimal approach for obtaining data from diverse source systems.
- Be a key contributor to initiatives that require technical expertise.
- Work closely with the team responsible for maintaining the data model, including the data dictionary/metadata registry.
- Possess expertise in project migration and release coordination activities.
- Interface with business stakeholders to understand requirements and offer solutions.
Skill Set and Qualifications:
- At least six (6) years of experience working with Big Data tools and technologies.
- Experience with Hadoop, Python, Spark/PySpark, Hive, and Tableau/HEAVY.AI.
- Strong exposure to working with web service sources/targets, XML sources, and RESTful APIs.
- Experience with Linux/shell scripting to complement ETL tooling.
- At least two (2) years of experience in performance tuning.
- Exposure to relational databases such as Oracle, MySQL, and SQL Server, including complex SQL constructs and DDL generation.
- Experience managing and deploying applications on a cloud platform.
- Strong knowledge of relational databases and experience with SQL scripting / stored procedures in PL/SQL.
- Experience with data modeling and data mapping design.
- Bachelor’s degree in Computer Science, Information Systems, or equivalent education or work experience.
Only US-based applicants will be considered.