Job Description
- Up to 8 years of software development experience in a professional environment and/or comparable experience such as:
- Bachelor's or master's degree in computer science, computer engineering, or other technical discipline, or equivalent work experience, is preferred
- 5+ years of software development experience in big data technologies such as Python, Spark, PySpark, Spark SQL, shell scripting, and Hive
- Preferred: 5+ years of hands-on experience with data ingestion, data organization, and data consumption frameworks using the AXP Enterprise Data Platform
- Good understanding of big data technologies such as Spark, MapReduce, YARN, Hive, and ZooKeeper, preferably with some real-world experience
- Experience designing and developing batch, streaming, and real-time big data applications on the AXP EDP platform
- Experience with hierarchical data structures in JSON/XML
- Experience with AXP CI/CD frameworks for code management and deployment, such as GitHub, Jenkins, and XLR
- Experience with NoSQL technologies (column-family, key-value, or document datastores)
- Experience with and exposure to cloud-native big data technologies (preferably GCP) would be a plus
- Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores
- REST API design and implementation experience
- Good understanding of NoSQL technologies such as HBase, Cassandra, Redis, and Memcached, preferably with some real-world experience
- Ability to interpret technical and business objectives and challenges, and to articulate solutions effectively
- Understanding of SOA, microservices, and containerized application concepts would be a plus
- Strong analytical and programming skills in a production environment
- Hands-on experience with GraphQL is a plus
- Experience with production support/DevOps would be a plus
- Willingness to learn new technologies and apply them to their full potential