Must have a valid LinkedIn profile
Duration: 12+ months
The client is deploying a dedicated integrations team focused on building the package for integrating data, derived from a variety of sources, into and out of the new Clinical Trial Management System. This includes coding complex SQL, logic development, flat file transfers, and potentially ETL transfers from source systems to the data warehouse.
Data Engineers are responsible for building and maintaining data pipelines, databases, and data warehouses using Microsoft technologies to ensure efficient and reliable data storage, processing, and accessibility.
- Integration Solutions: Developing and managing data integration solutions, including Extract, Transform, Load (ETL) processes, to ensure seamless data flow between different systems and databases.
- Microsoft Azure Services: Leveraging Microsoft Azure cloud services for data integration, storage, and processing.
- SQL and T-SQL: Strong SQL and Transact-SQL (T-SQL) skills for writing and optimizing queries, stored procedures, and triggers. Ensuring efficient data retrieval and manipulation.
- API Integration: Integrating data from external sources and applications through APIs.
- Data Governance: Implementing data governance practices to ensure data quality, security, and compliance. Using tools like Azure Purview for data discovery, classification, and maintaining metadata.
- Performance Optimization: Tuning and optimizing data pipelines for performance, reliability, and scalability. Monitoring and addressing performance bottlenecks in integration workflows.
Skills Requirements | Success Criteria for All:
• Hyper-communicative and inquisitive
• Drivers with a high level of self-motivation
• Extreme accountability and ownership
• Hands-on executors rather than theorists
• Critical thinking and problem-solving skills
Skills Requirements | Success Criteria for Data Engineer:
• Must be hands-on with development and build. Need team members who are self-motivated and driven.
• Must have deep experience with SSIS, Python, and SQL, and ideally Azure and Azure Data Factory. Primary relevant experience would be coding complex SQL, building the integration packages (including logic development), flat file transfers, etc. There is no XML work, and no experience is needed with PySpark, Spark, or any other big data platforms. The standard requirement is REST- or SOAP-based APIs, but the majority of transfers are flat files. There may be some ETL work moving data from various source systems to the data warehouse.
• MUST possess familiarity and a working knowledge of Validated Systems