Select employment type: Contract (C2C/W2)
Locations: New Jersey
About the client:
Client is one of the world's leading professional services companies, transforming clients' business, operating, and technology models for the digital era. Their unique industry-based, consultative approach helps clients envision, build, and run more innovative and efficient businesses. Headquartered in the U.S. and a member of the Nasdaq-100, this mega IT services corporation has more than 300,000 employees and revenue of over $19 billion; it is ranked 194 on the Fortune 500 and is consistently listed among the most admired companies in the world.
Job Description:
Responsibilities:
- Responsible for day-to-day development of data ingestion pipelines, SQL queries, Spark coding, code refactoring, testing, debugging, and requirements documentation.
- Must have strong expertise in Azure Data Platform services such as ADF, ADLS, Azure Databricks, Azure SQL DB, Azure Synapse (dedicated and serverless), Azure Cosmos DB, Azure Functions, Azure Stream Analytics, and Azure Logic Apps.
- Good knowledge of designing and implementing batch and real-time ingestion systems using Azure Data Factory and Azure Databricks.
- Expert in writing complex SQL objects, Python scripts, and PySpark/Scala scripts, with an in-depth understanding of windowing functions, aggregate functions, and date-time functions.
- In-depth understanding of the Azure security model in terms of networking and Active Directory-level security of data assets such as ADLS, SQL DB, and SQL DW.
- Good understanding of Azure messaging services such as Event Hubs, Service Bus, and Event Grid.
- Expertise in designing data models using Azure Analysis Services or Power BI Premium.
- Understanding of Azure DevOps and Agile practices, including Azure CI/CD pipelines and Azure Repos source control.
- Azure Data Lake, Azure Data Factory, Blob Storage
- Azure messaging services
- Performance tuning of ETL processes
- API integration
- Languages: SQL, Python, PySpark
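As a quick illustration of the windowing, aggregate, and date-time SQL skills listed above, here is a minimal sketch using Python's built-in sqlite3 module (the table name, columns, and sample data are hypothetical, and SQLite stands in for Azure SQL DB / Synapse purely for portability):

```python
import sqlite3

# Hypothetical sales table; in practice this data would live in
# Azure SQL DB, Synapse, or a Databricks table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, sale_date TEXT, amount REAL);
INSERT INTO sales VALUES
  ('East', '2024-01-01', 100.0),
  ('East', '2024-01-02', 150.0),
  ('West', '2024-01-01', 200.0),
  ('West', '2024-01-03',  50.0);
""")

# Window function (running total per region), aggregate (SUM),
# and date-time function (strftime) in one query.
rows = conn.execute("""
SELECT region,
       sale_date,
       amount,
       SUM(amount) OVER (PARTITION BY region ORDER BY sale_date) AS running_total,
       strftime('%Y-%m', sale_date) AS sale_month
FROM sales
ORDER BY region, sale_date
""").fetchall()

for row in rows:
    print(row)
```

The same PARTITION BY / ORDER BY windowing pattern carries over directly to T-SQL and to PySpark's `Window` API.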
Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
- Proven experience in designing and developing data pipelines and ETL processes using Azure Data Factory.
- Strong proficiency in SQL and data modeling concepts.
- Familiarity with Azure services such as Azure Data Lake Storage, Azure SQL Database, and Azure Blob Storage.
- Experience with data integration patterns and best practices.
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.
- Azure certification (e.g., Azure Data Engineer, Azure Developer) is a plus.