Job Description
Sr Data Engineer (AWS/ETL/PySpark)
Chicago, IL
Local candidates only
On W2
Open to visa candidates: H4, L2, GC EAD, USC, and GC. No OPT, CPT, or H1B.
1-2 rounds of interviews
- Getting operational data from the FRB and loading it into JPMC systems
- Moving data into an archive to meet FDIC requirements
AWS S3 and Databricks are the top two skills needed.
If they don't know Databricks, they must be able to pick it up quickly (Python & Spark).
Hadoop, Ab Initio, or Informatica experience would be helpful.
Will be working with pipelines and archiving the data
May be moving the data into a new data lake
Required qualifications, capabilities, and skills:
BS degree in Computer Science or a certification in software engineering.
Proficient in data analysis, data engineering, data modeling, and database management.
Strong understanding of RDBMS, NoSQL, big data, SQL, and ETL tools.
Experience programming in at least one modern language such as Java, Python, or Unix shell.
Proficiency in REST APIs, microservices, distributed systems, and hybrid cloud computing.
Strong understanding of Agile methodologies, with the ability to work in at least one of the common frameworks.
Strong understanding of techniques such as CI/CD, TDD, cloud development, resiliency, and security.
Proven experience with business analysis, design, development, testing, deployment, maintenance, and improvement.
Preferred qualifications, capabilities, and skills:
Experience working with data intensive software (such as big data, data warehouses, data lakes).
AWS experience with developing on AWS, S3, Lambda, MSK, EC2, IAM, and related data products.
Experience with Databricks, Amazon RDS, Oracle, Hadoop/Cloudera, HUE, Hive, Impala.
Experience with GraphQL.
Rahul Kumar
Account Manager
M: 650-918-1927
O: +91-01662-358109
E: rahul.kumar@s3bglobal.com
Office: 8000 Avalon Blvd, Alpharetta, GA 30004
W: www.s3bglobal.com
E: info@s3bglobal.com