Job Description
W2 Role
Need Local Candidates
Candidates who will not require sponsorship in the future are highly preferred for this role.
Video Interviews (2-3 rounds)
Primary Work Location: Pleasanton
Interview: Zoom
Top Must-Haves:
4+ years of hands-on development, deployment, and production support experience in a Big Data environment.
4-5 years of programming experience in Java, Scala, Python, Solr, and HBase.
Proficient in SQL and relational database design and methods for data retrieval.
Hands-on experience in Cloudera Distribution 6.x
Must have experience with the Spring framework, Web Services, and REST APIs.
Project experience in Query Processing Language (QPL), a search-engine-independent technology for advanced query processing, is highly desirable.
A. DELIVERABLES OR TASKS:
The tasks for the Hadoop Engineer include, but are not limited to, the following:
1. Provide vision, gather requirements, and translate client user requirements into technical architecture.
2. Design and implement an integrated Big Data platform and analytics solution.
3. Design and implement data collectors to collect and transport data to the Big Data Platform.
4. Implement monitoring solution(s) for the Big Data platform to monitor the health of the infrastructure.
B. MENTORING & SKILL ENHANCEMENT:
Supplier Personnel will make every effort to provide skills enhancement at a satisfactory rate and report any issues that may impede the progress of training and mentoring.
Supplier Personnel resources shall provide input to the Contract Executive to develop a training and mentoring plan that includes specific skill sets, tasks, and training methodologies.
Supplier Personnel will be responsible for executing the training and mentoring plan(s) with designated Client employees and shall provide input to refine and further develop the plans as training progresses.
Supplier Personnel shall meet with Client on a monthly basis to discuss training progress.
Client Contract Executive will be responsible for documenting a training plan on the Mentoring & Skill Enhancement Planner and for monitoring the progress of training and mentoring with the Client employee(s). The Mentoring & Skill Enhancement Tracker and Planner are provided as Attachment C to this SOW.
C. RESOURCE REQUIREMENTS, SKILLS, KNOWLEDGE AND ABILITIES:
SUPPLIER SHALL ENSURE THAT ALL RESOURCES ASSIGNED TO THE PROJECT HAVE THE MINIMUM SKILLS REQUIREMENT TO RENDER THE SERVICES IN A COMPETENT AND EFFICIENT MANNER.
TECHNICAL KNOWLEDGE AND SKILLS:
Project experience in Query Processing Language (QPL), a search-engine-independent technology for advanced query processing, is highly desirable.
4+ years of hands-on development, deployment, and production support experience in a Big Data environment.
4-5 years of programming experience in Java, Scala, and Python.
Proficient in SQL and relational database design and methods for data retrieval.
Knowledge of NoSQL systems such as HBase or Cassandra.
Hands-on experience in Cloudera Distribution 6.x
Hands-on experience creating and indexing Solr collections in a SolrCloud environment.
Hands-on experience building data pipelines using Hadoop ecosystem components: Sqoop, Hive, Solr, MapReduce, Impala, Spark, and Spark SQL.
Must have experience developing HiveQL and UDFs for analyzing semi-structured/structured datasets.
Must have experience with the Spring framework, Web Services, and REST APIs.
Hands-on experience ingesting and processing various file formats such as Avro, Parquet, SequenceFile, and text files.
Must have working experience with data warehousing and Business Intelligence systems.
Expertise in Unix/Linux environments, including writing scripts and scheduling/executing jobs.
Successful track record of building automation scripts/code using Java, Bash, Python, etc., and experience with the production support issue-resolution process.
Experience building ML models using MLlib or other ML tools.
Hands-on experience with real-time analytics tools such as Spark, Kafka, or Storm.
Experience with graph databases such as Neo4j, TigerGraph, and OrientDB.
Experience with Agile development methodologies.