Responsibilities include:
Responsible for delivery in the area of Big Data Engineering, mainly data acquisition technology implementations (NiFi & Kafka)
Develop scalable and reliable data solutions to move data across systems from multiple sources, in both real-time and batch modes (NiFi/Kafka/Spark)
Construct data staging layers and fast real-time systems
Apply expertise in models that leverage the newest data sources, technologies, and tools, such as machine learning, Python, Hadoop, Spark, and AWS, as well as other cutting-edge Big Data tools and applications
Investigate the impact of new technologies, applications, and data sources on the future secondary mortgage business
Quickly learn new tools and paradigms to deploy cutting-edge solutions
Develop both deployment architecture and scripts for automated system deployment in AWS
Create large-scale deployments using newly researched methodologies
Work in an Agile environment
Qualifications:
Bachelor's degree in Mathematics, Statistics, Computer Science, related field, or equivalent work experience
2+ years of experience with NiFi & Kafka
4+ years of experience with Hadoop including Hive and Spark
4+ years of experience with Python & PySpark
2+ years of experience working with AWS
5+ years of experience in software development, ideally 8+ years
Key to success in this role:
Excellent development skills with Kafka, Python, and PySpark
Deep curiosity to learn new tools and technologies and apply them
Intellectual agility and interpersonal flexibility
Strong communication skills
Ability to collaborate across the team
A good "can do" attitude
We are seeking a Big Data Engineer with Python, PySpark, NiFi, and Kafka skill sets.