# Research Product Manager, Model Behaviors

**Company**: Anthropic
**Location**: San Francisco, CA | New York City, NY
**Work arrangement**: hybrid
**Experience**: senior
**Job type**: full-time
**Salary**: $305,000-$385,000 USD per year
**Category**: Engineering
**Industry**: Technology
**Wikidata**: https://www.wikidata.org/wiki/Q116758847

**Apply**: https://job-boards.greenhouse.io/anthropic/jobs/5097067008
**Canonical**: https://yubhub.co/jobs/job_fd5dc84a-c79

## Description

As a Product Manager for Model Behaviors, you will partner with the Alignment Finetuning team to define and shape Claude's character, behaviors, and reinforcement signals. This work directly influences how millions of people experience AI.

You will systematically identify high-priority behavioral improvements, coordinate across Research, Product, and Safeguards teams, and accelerate our ability to ship well-aligned models.

Key responsibilities include:

- Defining behavioral defaults and steerability constraints
- Developing and maintaining taxonomies of model behaviors across capabilities
- Identifying, triaging, and prioritizing behavior issues and opportunities
- Amplifying alignment research breakthroughs

To succeed in this role, you will need:

- A deep passion and curiosity for AI and LLMs
- A track record of delivering products and features to end users
- Strong user empathy and the ability to synthesize vague or contradictory feedback into actionable priorities
- A strong grasp of ML concepts

The ideal candidate is a first-principles thinker who can navigate and execute amid ambiguity, flexing into different domains based on the business problem at hand and finding simple, easy-to-understand solutions.

This role offers a competitive compensation range of $305,000-$385,000 USD per year, with a minimum education requirement of a Bachelor's degree or an equivalent combination of education, training, and/or experience.

## Skills

### Required
- Product management
- AI and LLMs
- User empathy
- ML concepts
- First-principles thinking

### Nice to have
- Alignment research
- Steerability constraints
- Taxonomies of model behaviors
- Behavioural improvements
- Research coordination
