Job description
The Machine Learning Systems and Evaluation Engineering (MLSEE) team develops frameworks and tools that make Siri and other AIML products more testable across the entire OS stack. This requires continuous development of our tools and frameworks to guarantee the testability and scalability of Siri and AIML products for automation. In this role, you will design architectures that keep pace with a highly dynamic environment, and build the APIs, frameworks, and libraries that Development and Evaluation teams rely on to write tests, support features, and evaluate them. You will also ensure the scalability and sustainability of the framework so it can address the challenging problems that exist today and those that will emerge as Apple's features and products continue to grow.