# Researcher, Frontier Cybersecurity Risks

**Company**: OpenAI
**Location**: San Francisco
**Work arrangement**: onsite
**Experience**: senior
**Job type**: Full-time
**Salary**: Estimated Base Salary $295K – $445K
**Category**: Engineering
**Industry**: Technology
**Wikidata**: https://www.wikidata.org/wiki/Q124605186

**Apply**: https://jobs.ashbyhq.com/openai/97a7eeae-9625-4d00-874f-e50131f98369
**Canonical**: https://yubhub.co/jobs/job_183800c4-b3d

## Compensation

The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits:

- Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts

- Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)

- 401(k) retirement plan with employer match

- Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)

- Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees

- 13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)

- Mental health and wellness support

- Employer-paid basic life and disability coverage

- Annual learning and development stipend to fuel your professional growth

- Daily meals in our offices, and meal delivery credits as eligible

- Relocation support for eligible employees

- Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.

## About the team

The Safety Systems org ensures that OpenAI’s most capable models can be responsibly developed and deployed. We build evaluations, safeguards, and safety frameworks that help our models behave as intended in real-world settings.

The Preparedness team is an important part of the [Safety Systems](https://openai.com/safety/safety-systems) org at OpenAI, and is guided by OpenAI’s [Preparedness Framework](https://openai.com/index/updating-our-preparedness-framework/).

Frontier AI models have the potential to benefit all of humanity, but they also pose increasingly severe risks. To ensure that AI promotes positive change, the Preparedness team prepares OpenAI for the development of increasingly capable frontier models and is tasked with identifying, tracking, and preparing for catastrophic risks they may pose.

The mission of the Preparedness team is to:

- Closely monitor and predict the evolving capabilities of frontier AI systems, with an eye towards risks whose impact could be catastrophic

- Ensure we have concrete procedures, infrastructure, and partnerships to mitigate these risks and safely handle the development of powerful AI systems

## About the role

Models are becoming increasingly capable, moving from tools that assist humans to agents that can plan, execute, and adapt in the real world. As we push toward AGI, cybersecurity becomes one of the most important and urgent frontiers: the same systems that can accelerate productivity can also accelerate exploitation.

As a Researcher for cybersecurity risks, you will help design and implement an end-to-end mitigation stack to reduce severe cyber misuse across OpenAI’s products. This role requires strong technical depth and close cross-functional collaboration to ensure safeguards are enforceable, scalable, and effective. You’ll contribute directly to building protections that remain robust as products, model capabilities, and attacker behaviors evolve.

## In this role, you will:

- Design and implement mitigation components for model-enabled cybersecurity misuse—spanning prevention, monitoring, detection, and enforcement—under the guidance of senior technical and risk leadership.

- Integrate safeguards across product surfaces in partnership with product and engineering teams, helping ensure protections are consistent, low-latency, and scale with usage and new model capabilities.

- Evaluate technical trade-offs within the cybersecurity risk domain (coverage, latency, model utility, and user privacy) and propose pragmatic, testable solutions.

- Collaborate closely with risk and threat modeling partners to align mitigation design with anticipated attacker behaviors and high-impact misuse scenarios.

- Execute rigorous testing and red-teaming workflows, helping stress-test the mitigation stack against evolving threats (e.g., novel exploits, tool-use chains, automated attack workflows) and across different product surfaces—then iterate based on findings.

## You might thrive in this role if you:

- Have a passion for AI safety and are motivated to make cutting-edge AI models safer for real-world use.

- Bring demonstrated experience in deep learning and transformer models.

- Are proficient with frameworks such as PyTorch or TensorFlow.

- Possess a strong foundation in data structures, algorithms, and software engineering principles.

- Are familiar with methods for training and fine-tuning large language models, including distillation, supervised fine-tuning, and policy optimization.

- Excel at working collaboratively with cross-functional teams across research, security, policy, product, and engineering.

- Have significant experience designing and deploying technical safeguards for abuse prevention, detection, and enforcement at scale.

- (Nice to have) Bring background knowledge in cybersecurity or adjacent fields.

## Skills

### Required
- Deep learning
- Transformer models
- PyTorch or TensorFlow
- Data structures
- Algorithms
- Software engineering principles
- Large language models
- Abuse prevention
- Detection
- Enforcement

### Nice to have
- Cybersecurity or adjacent fields
