
Researcher, Safety & Privacy

OpenAI
Onsite · Senior · Full time · $295K – $445K · San Francisco

First indexed 24 Apr 2026

Description

We are seeking a Researcher in Privacy-Preserving Safety to help design and build the next generation of privacy-preserving safety systems for frontier AI models. This role sits at the intersection of AI safety, security, and privacy, with a focus on developing auditable, privacy-first mechanisms that enable robust harm detection and mitigation without exposing sensitive user data.

You will help define and operationalize frameworks for identifying and addressing frontier risks (e.g., bioweapon instructions, malware creation, suicide/self-harm risks, jailbreaks), while ensuring that privacy guarantees remain intact, even under adversarial conditions.

This role is central to our long-term goal of scaling our automated privacy-preserving safety systems to mitigate potential harms while minimizing human review.

You’ll work on foundational problems such as privacy-preserving monitoring, algorithmic auditing, secure enclaves, and adversarially robust safety enforcement protocols, helping ensure that safety systems scale without compromising user trust.
In this role, you will:

Design and implement privacy-first architectures for detecting and mitigating harmful model behaviors.

Build frameworks for auditable, privacy-preserving identification of high-risk content (jailbreaks, cyber threats, or weaponization instructions).

Develop strict, auditable mechanisms triggered only by harm signals.

Drive the development of automated safety systems that preserve privacy at every level.
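As a flavor of the privacy-preserving monitoring work described above, a standard tool is the Laplace mechanism from differential privacy: release an aggregate harm signal (here, a count of flagged items) with calibrated noise so no individual record is exposed. This is an illustrative sketch, not part of the posting; all function names are hypothetical.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_harm_count(flags: list[bool], epsilon: float) -> float:
    """Release the number of flagged items with epsilon-differential privacy.

    Each record contributes at most one flag, so the count has
    sensitivity 1 and Laplace noise with scale 1/epsilon suffices.
    """
    sensitivity = 1.0
    return sum(flags) + laplace_noise(sensitivity / epsilon)
```

Smaller epsilon means stronger privacy but a noisier count; the released value can drive downstream enforcement without revealing which records were flagged.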

You might thrive in this role if you:

Are a researcher with deep interest in privacy, security, and AI safety, motivated by building systems that are both trustworthy and effective at scale.

Hold a PhD or equivalent experience in Computer Science, Cryptography, Security, Machine Learning, or a related field.

Have the ability to translate ambiguous problem spaces into formal frameworks and deployable systems.

Demonstrate proficiency in one or more of:

Privacy-preserving computation (e.g., secure enclaves, MPC, differential privacy)

Security and adversarial systems

Machine learning safety or alignment

Have experience designing robust systems under adversarial threat models.

Have experience with AI safety, jailbreak detection, or model alignment.

Are familiar with privacy-preserving machine learning techniques, algorithmic auditing, and/or secure system design.
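Of the privacy-preserving computation techniques listed above, the simplest building block behind MPC is additive secret sharing: a value is split into random shares so that no single party learns anything, yet the sum of all shares recovers it, and parties can sum shares locally to compute only an aggregate. A minimal sketch (illustrative only; names and parameters are not from the posting):

```python
import random

P = 2**61 - 1  # prime modulus for the field the shares live in

def share(secret: int, n_parties: int) -> list[int]:
    """Split `secret` into additive shares modulo P, one per party."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    # Final share is chosen so that all shares sum to the secret mod P.
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Recombine the shares; requires all of them."""
    return sum(shares) % P
```

Because the scheme is additively homomorphic, parties holding shares of two secrets can add them share-by-share and reconstruct only the sum, e.g., an aggregate count of harm signals across users without exposing any individual's value.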

This listing is enriched and indexed by YubHub. To apply, use the employer's original posting: https://jobs.ashbyhq.com/openai/a0feb59d-e66b-4cc7-a685-7f9393d80fb6