Job description
- Research and advance red teaming methods for LLMs and diffusion models
- Research and develop mitigations and safeguards to ensure the safe deployment of LLMs in Apple products
- Develop tools, metrics, and datasets for assessing and evaluating the safety of LLMs across the model deployment lifecycle, as well as methods and tools to help interpret and explain failures in language models