Description
We are seeking a Senior Analyst - Safety Operations (CSE) to join our team. As a Senior Analyst, you will play a critical role in ensuring the safety and integrity of our AI systems. Your primary responsibilities will include processing appeals, auditing automations, and labeling use cases in our systems. You will also provide labels, annotations, and inputs on projects involving safety protocols, risk scenarios, and policy compliance. Additionally, you will collaborate with team members, providing feedback on tasks that improve the AI's ability to detect illegal and unethical behavior and that align Grok with our enforcement policies.
To be successful in this role, you will need expertise in improving Large Language Models (LLMs), specifically in the context of CSE, to maximize the efficiency of enforcement and support. You will also need proven expertise in identifying, mitigating, and preventing Child Sexual Abuse Material (CSAM) and Child Sexual Exploitation (CSE), including grooming behaviors and risks in AI-generated content, along with strong knowledge of relevant legal obligations (such as NCMEC reporting) and industry standards for protecting minors.
You will have experience in online safety and harm reduction, protecting our users while preserving Free Speech in the global public square. You will be able to interpret and apply xAI safety policies effectively, with strong skills in ethical reasoning and risk assessment and the ability to draw on resources, guidelines, and frameworks to make accurate, safety-focused decisions and escalations.
In addition, you will have strong communication, interpersonal, analytical, and ethical decision-making skills, along with a commitment to continuously improving processes that prioritize safety and risk mitigation. You will also bring expertise in data analysis to identify emerging abuse vectors, uncover opportunities for operational efficiency, and design automations that strengthen enforcement effectiveness and platform safety.
Preferred qualifications include experience working in Trust and Safety at a social media company, leveraging AI or other automation tools. You will also have experience collaborating with child safety organizations (such as NCMEC) and using specialized detection tools or developing classifiers for CSAM/CSE on social media or generative AI platforms. Additionally, you will have expertise in red-teaming and adversarial testing of Large Language Models to proactively identify novel abuse vectors, jailbreaks, and safety failure modes, with a proven ability to translate findings into concrete improvements for enforcement systems and platform robustness.
This role may involve exposure to sensitive or graphic content, including vulgar language, violent threats, pornography, and other graphic images.