THE ROLE: SENIOR/STAFF/PRINCIPAL SW ENGINEER (SYSTEMS)
About us
Why d-Matrix
We want to build a company and a culture that stand the test of time. We offer candidates a unique opportunity to express themselves and become future leaders in an industry that will have enormous global influence. We strive to build a culture of transparency, inclusiveness, and intellectual honesty while ensuring all our team members are always learning and having fun on the journey. We have built the industry's first highly programmable in-memory computing architecture, which applies to a broad class of applications from cloud to edge. You will get to work on a path-breaking architecture with a highly experienced team that knows what it takes to build a successful business.
This role puts you on the team that productizes the SW stack for our AI compute engine. As part of the Software team, you will be responsible for developing, enhancing, and maintaining our next-generation AI deployment software. You have experience working across the full-stack toolchain and understand the nuances of optimizing and trading off various aspects of hardware-software co-design. You can build and scale software deliverables within a tight development window. You will work with a team of compiler experts to build out the compiler infrastructure, collaborating closely with other software (ML, systems) and hardware (mixed-signal, DSP, CPU) experts in the company.
Qualifications
- Computer Science, Engineering, Math, Physics or related degree
- Strong grasp of computer architecture, data structures, system software, and machine learning fundamentals
- Proficient in C/C++/Python development in a Linux environment, using standard development tools
- Experience with distributed, high-performance software design and implementation
- Self-motivated team player with a strong sense of ownership and leadership
- MS or PhD in Computer Science, Electrical Engineering, or related fields
- Prior startup, small team or incubation experience
- Work experience at a cloud provider or AI compute / sub-system company
- Experience implementing SIMD algorithms on vector processors
- Experience with open source ML compiler frameworks such as MLIR
- Experience with deep learning frameworks (such as PyTorch, TensorFlow)
- Experience with deep learning runtimes (such as ONNX Runtime, TensorRT, ...)
- Experience with inference servers/model-serving frameworks (such as Triton, TensorFlow Serving, Kubeflow, ...)
- Experience with distributed-systems collectives such as NCCL, Open MPI, ...
- Experience deploying ML workloads on distributed systems in a multi-tenant environment
- Experience with MLOps from definition to deployment including training, quantization, sparsity, model preprocessing, and deployment
- Experience training, tuning, and deploying ML models for CV (e.g., ResNet), NLP (e.g., BERT, GPT), and/or recommendation systems (e.g., DLRM)