Researcher, Safety Oversight
Location: San Francisco
Employment Type: Full time
Department: Safety Systems
Compensation
- $295K – $445K • Offers Equity
The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.
- Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts
- Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)
- 401(k) retirement plan with employer match
- Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)
- Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees
- 13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)
- Mental health and wellness support
- Employer-paid basic life and disability coverage
- Annual learning and development stipend to fuel your professional growth
- Daily meals in our offices, and meal delivery credits as eligible
- Relocation support for eligible employees
- Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.
More details about our benefits are available to candidates during the hiring process.
This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.
About the Team
The Safety Systems team is responsible for the safety work needed to ensure our best models can be safely deployed in the real world for the benefit of society. The team is at the forefront of OpenAI's mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency.
The Safety Oversight Research team aims to fundamentally advance our capabilities to maintain oversight over frontier AI models, and to leverage these advances to ensure OpenAI’s deployed models are safe and beneficial. This requires a breadth of new ML research in the areas of human-AI collaboration, reasoning, robustness, and scalable oversight to keep pace with model capabilities. We invest heavily in developing novel model- and system-level methods for identifying and mitigating AI misuse and misalignment.
Our goal is to learn from deployment and distribute the benefits of AI, while ensuring that this powerful tool is used responsibly and safely.
About the Role
OpenAI is seeking a senior researcher with a passion for AI safety and experience in safety research. In this role, you will set research directions for maintaining effective oversight of safe AGI and work on projects to identify and mitigate misuse and misalignment in our AI systems. You will play a critical role in defining what a safe AI system at OpenAI should look like, making a significant impact on our mission to build and deploy safe AGI.
In this role, you will:
- Develop and refine AI monitor models to detect and mitigate known and emerging patterns of misuse and misalignment.
- Set research directions and strategies to make our AI systems safer, more aligned, and more robust.
- Evaluate and design effective red-teaming pipelines to examine the end-to-end robustness of our safety systems, and identify areas for future improvement.
- Conduct research to improve models’ ability to reason about questions of human values, and apply these improved models to practical safety challenges.
- Coordinate and collaborate with cross-functional teams, including T&S, legal, policy, and other research teams, to ensure that our products meet the highest safety standards.
You might thrive in this role if you:
- Are excited about OpenAI’s mission of building safe, universally beneficial AGI and are aligned with OpenAI’s charter.
- Show enthusiasm for AI safety and dedication to enhancing the safety of cutting-edge AI models for real-world use.
- Bring 4+ years of experience in the field of AI safety, especially in areas such as RLHF, human-AI collaboration, and fairness and bias.
- Hold a Ph.D. or other degree in computer science, machine learning, or a related field.
- Thrive in environments involving large-scale AI systems.
- Possess 4+ years of research engineering experience and proficiency in Python or similar languages.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.