Company

Sriven Systems, Inc

Address: Aurora, CO
Form of work: Full-Time
Category: Information Technology

Job Description

Good morning,

Please share your updated resume, tailored to the client requirements, with arjun@srivensys.com.

Hadoop Developer

9-month contract (expected to extend)

Hybrid in Denver, CO or Chicago, IL (Work Environment)


Multiple openings!

Top Skills - Must Haves

Hadoop

Hive

Spark

Kafka

Apache Spot

ERP

Cloud DE

Jupyter

SQL

Top Skills' Details

5-8 years of Hadoop development/programming experience: engineering solutions and bringing data sets into clusters; experience working with Hadoop/Big Data and distributed systems
Spark/Scala experience - will need to develop in these, or in Impala and Hive, to pull data; experience working with Spark Structured Streaming
Basic SQL skills in one or more of MySQL, Hive, Impala, Spark SQL
Data ingestion experience from message queues, file shares, REST APIs, relational databases, etc., and experience with data formats such as JSON, CSV, XML, and Parquet
Working experience with Spark, Sqoop, Kafka, MapReduce, NoSQL databases such as HBase, SOLR, CDP or HDP (Cloudera or Hortonworks), Elasticsearch, Kibana, Jupyter Notebook, etc.
Hands-on programming experience in at least one of Scala, Python, PHP, or shell scripting, to name a few
Experience and proficiency with the Linux operating system is a must
Ability to troubleshoot platform problems and connectivity issues
Experience working with the Cloudera Manager portal and YARN
Experience with complex resource-contention issues in a shared cluster/cloud environment
Experience developing HBase backends that connect an API to JavaScript and/or Python
Experience working in an Agile methodology to deliver solutions for customers with changing requirements
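The ingestion requirement above spans message queues, file shares, REST APIs, and formats like JSON, CSV, and XML. As a minimal sketch of that format-normalization idea (standard-library Python only; the `id`/`host` field names and the record layout are illustrative assumptions, not from this posting):

```python
import csv
import io
import json
import xml.etree.ElementTree as ET

def normalize_json(text):
    """Parse a JSON array of objects into a list of dicts."""
    return [dict(rec) for rec in json.loads(text)]

def normalize_csv(text):
    """Parse CSV text (header row assumed) into a list of dicts."""
    return [dict(row) for row in csv.DictReader(io.StringIO(text))]

def normalize_xml(text, record_tag="record"):
    """Flatten each <record> element's children into a dict."""
    root = ET.fromstring(text)
    return [{child.tag: child.text for child in rec}
            for rec in root.iter(record_tag)]

# The same two hypothetical records arriving in three formats:
as_json = '[{"id": "1", "host": "web01"}, {"id": "2", "host": "web02"}]'
as_csv = "id,host\n1,web01\n2,web02\n"
as_xml = ("<events>"
          "<record><id>1</id><host>web01</host></record>"
          "<record><id>2</id><host>web02</host></record>"
          "</events>")

rows = normalize_json(as_json)
assert rows == normalize_csv(as_csv) == normalize_xml(as_xml)
print(rows[0])  # {'id': '1', 'host': 'web01'}
```

In a real pipeline the parsed records would land in Spark or Hive rather than plain lists, but the point is the same: each source format is reduced to one common row shape before it enters the cluster.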

Job Description

We invite you to join the Data Strategy & Engineering team within the Global Information Security organization at Bank of America as a Big Data Platform Developer/Engineer. We are a tight-knit, supportive community passionate about delivering the best experience for our customers while remaining sensitive to their unique needs.
In this role, you will help build new data pipelines, identify existing data gaps, and provide automated solutions that deliver advanced analytical capabilities and enriched data to the applications supporting the operations team. You will also be responsible for obtaining data from the System of Record and establishing real-time data feeds to provide analysis in an automated fashion.
For example, the Cyber Security Assurance group, which handles third-party risk assessments, uses these analytics dashboards in its assessments, and the Cyber Security Defense group, which handles monitoring and malware issues, uses the analytics as well. Those groups bring this team requirements for the analytics they need; the team builds them in Hadoop, and the results then move to a UI where they are visible for analytical purposes.
They are looking for candidates with experience using Kafka Connect, HBase, and Apache Spot to extend what the cluster does. They are building out the cluster and making sure it is highly functional. Preference is for someone with strong HBase and Spark experience, in particular experience tying HBase to an application and API so that it pulls the data. Another project will be data ingestion, using Flume, Spark, and Hive.

Additional Skills & Qualifications

Strong SQL skills in one or more of MySQL, Hive, Impala, Spark SQL
Performance-tuning experience with Spark/MapReduce or SQL jobs
Experience in the end-to-end design and build process of near-real-time and batch data pipelines
Experience using source code and version control systems such as SVN, Git, Bitbucket, etc.
Experience working with Jenkins and JAR management
Experience working in Hadoop/Big Data related fields
Working experience with tools such as Hive, Spark, HBase, Sqoop, Impala, Kafka, Flume, Oozie, MapReduce, etc.
Hands-on programming experience in at least one of Java, Scala, Python, or shell scripting, to name a few
Strong experience with SQL and data modelling
Experience working in an Agile development process and a deep understanding of the phases of the Software Development Life Cycle
Deep understanding of the Hadoop ecosystem and strong conceptual knowledge of Hadoop architecture components
Self-starter who works with minimal supervision and can work in a team with diverse skill sets
Ability to comprehend customer requests and provide the correct solution
Strong analytical mind for taking on complicated problems
Willingness to dig into and resolve potential issues
Ability to adapt and keep learning new technologies
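The SQL and data-modelling expectations above are engine-agnostic: the same join/aggregation patterns apply in MySQL, Hive, Impala, or Spark SQL. A small sketch using Python's built-in SQLite as a stand-in engine (the vendor/assessment tables are invented for illustration and are not part of this posting):

```python
import sqlite3

# In-memory SQLite as a stand-in for MySQL/Hive/Impala/Spark SQL:
# the posting names those engines; the SQL itself is the portable part.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE vendors (vendor_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE assessments (
        assessment_id INTEGER PRIMARY KEY,
        vendor_id INTEGER REFERENCES vendors(vendor_id),
        question_count INTEGER
    );
    INSERT INTO vendors VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO assessments VALUES
        (10, 1, 200), (11, 1, 300), (12, 2, 150);
""")

# Aggregate questions asked per vendor, largest total first.
rows = conn.execute("""
    SELECT v.name, COUNT(*) AS n, SUM(a.question_count) AS total
    FROM assessments AS a
    JOIN vendors AS v USING (vendor_id)
    GROUP BY v.name
    ORDER BY total DESC
""").fetchall()

print(rows)  # [('Acme', 2, 500), ('Globex', 1, 150)]
```

The two-table layout (entities plus a fact table keyed to them) is the kind of basic modelling decision the role expects candidates to make before any query tuning starts.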

Employee Value Proposition (EVP)

Salaried contract: receive 6 TEK holidays (holiday pay) and 5 bank holidays (not holiday pay, but a guaranteed 40-hour week of pay those weeks) on this contract.

Work Environment

They will be working with a team of about 30. The team is mainly based in Charlotte, NC, with some members in other markets. It is a very collaborative, close-knit team. This role is hybrid, with 3 days onsite / 2 days remote required, in one of these locations: Chicago, IL or Denver, CO. (Other core BoA markets will be considered.)

Business Drivers/Customer Impact

There are a few drivers. One is the enhancement of both their Jupyter Hub tool and their SIEM tool, as well as their TPCA (third-party control and assessments) work: identifying which questions to ask and then building a model from the questions asked previously. For example, if Home Depot is doing 3 million dollars in transactions, or TEKsystems is doing a lot of transactions with the bank and is being assessed, does that mean 200 or 300 questions? The goal is to make better use of the data they already have instead of reinventing the wheel every time. Their data is currently scattered across spreadsheets, with no single consolidated way to find it, and they need to make this more efficient for CSA; Byron's group specifically is the customer/end user here. Allan's team would provide the data and reports to Brent Bohmont's group, which would develop the UI that Byron's teams would actually see and use. The UI is internally built, not off the shelf. This team will build the application for the UI workflow and consolidate the work into a queue for them to review, improving efficiency.

External Communities Job Description

We are looking for multiple Hadoop Developers.

Thanks & Regards,

Arjun K

Accounts Manager - Recruitment

Sriven Systems, Inc

Direct: 631-315-6999 / Ext: 1005

arjun@srivensys.com

https://www.linkedin.com/in/arjun-k-2302b62a/

F: 516-908-3640

545 S. Kimball avenue, Suite 100, Southlake, TX 76092 (New Address)

www.srivensys.com

Winner of "Inc 500 & Inc 5000: Fastest Growing Companies in America" 8 years in a row: 2008 to 2016.

1999-2023: 23+ years of excellence

Refer code: 7285313. Sriven Systems, Inc - posted 2023-12-19 10:20.
