OpenAI

Researcher, Misalignment Research

OpenAI
hybrid senior full-time $380K – $445K New York City; San Francisco
Apply →

First indexed 6 Mar 2026

Description

Location

New York City; San Francisco

Employment Type

Full time

Location Type

Hybrid

Department

Safety Systems

Compensation

  • $380K – $445K • Offers Equity

The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.

  • Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts
  • Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)
  • 401(k) retirement plan with employer match
  • Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)
  • Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees
  • 13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)
  • Mental health and wellness support
  • Employer-paid basic life and disability coverage
  • Annual learning and development stipend to fuel your professional growth
  • Daily meals in our offices, and meal delivery credits as eligible
  • Relocation support for eligible employees
  • Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.

More details about our benefits are available to candidates during the hiring process.

This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.

About the Team

Safety Systems sits at the forefront of OpenAI’s mission to build and deploy safe AGI, ensuring our most capable models can be released responsibly and for the benefit of society. Within Safety Systems, we are building a misalignment research team to focus on the most pressing problems for the future of AGI. Our mandate is to identify, quantify, and understand future AGI misalignment risks far in advance of when they can pose harm.

The work of this research taskforce spans four pillars:

  1. Worst‑Case Demonstrations – Craft compelling, reality‑anchored demos that reveal how AI systems can go wrong. We focus especially on high importance cases where misaligned AGI could pursue goals at odds with human well being.
  1. Adversarial & Frontier Safety Evaluations – Transform those demos into rigorous, repeatable evaluations that measure dangerous capabilities and residual risks. Topics of interest include deceptive behavior, scheming, reward hacking, deception in reasoning, and power-seeking, along with other related areas.
  1. System‑Level Stress Testing – Build automated infrastructure to probe entire product stacks, assessing end‑to‑end robustness under extreme conditions. We treat misalignment as an evolving adversary, escalating tests until we find breaking points even as systems continue to improve.
  1. Alignment Stress‑Testing Research – Investigate why mitigations break, publishing insights that shape strategy and next‑generation safeguards. We collaborate with other labs when useful and actively share misalignment findings to accelerate collective progress.

About the Role

We are seeking a Senior Researcher who is passionate about red‑teaming and AI safety. In this role you will design and execute cutting‑edge attacks, build adversarial evaluations, and advance our understanding of how safety measures can fail—and how to fix them. Your insights will directly influence OpenAI’s product launches and long‑term safety roadmap.

In this role, you will

  • Design and implement worst‑case demonstrations that make AGI alignment risks concrete for stakeholders, focused on high stakes use cases described above.
  • Develop adversarial and system‑level evaluations grounded in those demonstrations, driving adoption across OpenAI.
  • Create automated tools and infrastructure to scale automated red‑teaming and stress testing.
  • Conduct research on failure modes of alignment techniques and propose improvements.
  • Publish influential internal or external papers that shift safety strategy or industry practice. We aim to concretely reduce existential AI risk.
  • Partner with engineering, research, policy, and legal teams to integrate findings into product safeguards and governance processes.
  • Mentor engineers and researchers, fostering a culture of rigorous, impact‑oriented safety work.

You might thrive in this role if you

  • Already are thinking about these problems night and day, and share our mission to build safe, universally beneficial AGI and align with the OpenAI Charter.
  • Have 4+ years of experience in AI red‑teaming, security research, adversarial ML, or related safety fields.
  • Possess a strong research track record—publications, open‑source projects, or high‑impact internal work—demonstrating creativity in uncovering and exploiting system weaknesses.
  • Are fluent in modern ML / AI techniques and comfortable hacking on large‑scale codebases and evaluation infrastructure.
  • Communicate clearly with both technical and non‑technical audiences, translating complex findings into actionable recommendations.
  • Enjoy collaboration and can drive cross‑functional projects that span research, engineering, and policy.
  • Hold a Ph.D., master’s degree, or equivalent experience in computer science, machine learning, security, or a related field.
This listing is enriched and indexed by YubHub. To apply, use the employer's original posting: https://jobs.ashbyhq.com/openai/7055f010-99f4-4c76-8361-ba5b5f9af1d0