Position: Senior Big Data Engineer
Location: Dallas, TX
Duration: 12+ Months
Job Description:
• Must be a highly hands-on lead: writing code, leading the team, reviewing other developers' code, and working closely with the architect to implement the proposed design
• 5+ years of experience writing enterprise-grade Java, Scala, or Python code (should be highly proficient)
• Solid understanding of data structures and fundamental algorithms (sort, select, search, queue)
• Solid understanding of distributed computing and/or massively parallel processing concepts and frameworks (at least one): Spark, Kafka, MapReduce, Impala
• 2+ years of experience with Big Data technologies such as Hadoop (HDFS, Hive, Impala), Kafka, or Spark
• 2+ years of experience building enterprise data platforms: Data Ingestion, Data Lake, ETL, Data Warehouse, Data Access Patterns/APIs, Reporting
• Decent data warehousing and data modeling skills
• Experience working in Linux
• Spring Boot/API implementation experience is a nice-to-have
• Azure experience is a nice-to-have
• Databricks experience is a nice-to-have
Skills: Hadoop, HDFS, Kafka, Spark, Data Lake.
Candidate Submission Details:
• Full Name
• Mobile or Google Number
• Email ID
• Address with Zip Code
• Work Authorization
• Hourly Rate
• LinkedIn Profile
• Education
• US Work Experience
• DOB
• Last 4 digits of SSN (Optional)
• Visa Copy & DL Copy
• Interview Availability
• Vendor Details