Job description
- Hands-on engineer with strong coding skills, preferably in Java, Scala, or Python.
- Big Data: hands-on experience with Hadoop/Spark (preferably in Scala; PySpark acceptable).
- Knowledge of and experience with Kafka streaming and containerized microservices.
- Knowledge of and experience with RDBMS (Aurora MySQL) and NoSQL (Cassandra) databases.
- Experience building data pipelines and data lakes.
- Experience with or knowledge of data formats such as Parquet and CSV.
- Familiarity with data storage architectures such as HDFS, HBase, S3, and/or Hive.
- AWS cloud experience: S3, EFS, MSK, ECS, EMR, etc.
- Data transformation concepts, including partitioning and shuffling.
- Data processing constructs such as joins and MapReduce.
- Exposure to working with cloud infrastructure such as AWS or Azure.
- Experienced Java programmer.
- Great attention to detail.
- Organizational skills.
- An analytical mind.
Refer code: 7570884. eTeam - 2024-01-02 21:52