Senior Analyst - Safety Operations (CSE)

xAI
Remote · Senior · Full-time · Bastrop, TX

First indexed 18 Apr 2026

Description

About the Role

xAI is seeking a Senior Analyst - Safety Operations (CSE) to join our team. As a Senior Analyst, you will play a critical role in ensuring the safety and integrity of our AI systems.

Responsibilities

  • Process appeals, audit automations, and properly label use cases in the system.
  • Provide labels, annotations, and inputs on projects involving safety protocols, risk scenarios, and policy compliance.
  • Support the delivery of high-quality curated data that reinforces xAI's rules and ethical alignment.
  • Collaborate with team members to provide feedback on tasks that improve the AI's defenses for detecting illegal and unethical behavior and align Grok with our enforcement rules.

Basic Qualifications

  • Expertise in improving Large Language Models (LLMs) in the CSE domain to maximize enforcement and support efficiency, and the ability to propose solutions that increase the security and safety of our platform.
  • Proven expertise in identifying, mitigating, and preventing Child Sexual Abuse Material (CSAM) and Child Sexual Exploitation (CSE), including grooming behaviors and risks in AI-generated content, with strong knowledge of relevant legal obligations (such as NCMEC reporting) and industry standards for protecting minors.
  • Proven experience in online safety and reducing harm to protect our users and preserve Free Speech in the global public square.
  • Ability to interpret and apply xAI safety policies effectively.
  • Proficiency in analyzing complex scenarios, with strong skills in ethical reasoning and risk assessment.
  • Strong ability to utilize resources, guidelines, and frameworks for accurate safety-focused actions and escalations.
  • Strong communication, interpersonal, analytical, and ethical decision-making skills.
  • Commitment to continuous improvement of processes to prioritize safety and risk mitigation.
  • Expertise in data analysis to identify emerging abuse vectors, uncover opportunities for operational efficiencies, and design automations that strengthen enforcement effectiveness and platform safety.

Preferred Skills and Experience

  • Experience working in Trust and Safety at a social media company, leveraging AI or other automation tools.
  • Experience collaborating with child safety organizations (such as NCMEC) and utilizing specialized detection tools or developing classifiers for CSAM/CSE in social media or generative AI platforms.
  • Expertise in red-teaming and adversarial testing of Large Language Models to proactively identify novel abuse vectors, jailbreaks, and safety failure modes, with a proven ability to translate findings into concrete improvements for enforcement systems and platform robustness.

This role may involve exposure to sensitive or graphic content, including vulgar language, violent threats, pornography, and other graphic images.

This listing is enriched and indexed by YubHub. To apply, use the employer's original posting: https://job-boards.greenhouse.io/xai/jobs/5097907007