Description
About the role:
As a Product Manager for Model Behaviors, you will partner with the Alignment Finetuning team to define and shape Claude's character, behaviours, and reinforcement signals—work that directly influences how millions of people experience AI. You will systematically identify high-priority behavioural improvements, coordinate across Research, Product, and Safeguards teams, and accelerate our ability to ship well-aligned models.
Responsibilities:
- Define behavioural defaults and steerability constraints
- Develop and maintain taxonomies of model behaviours across capabilities
- Identify, triage, and prioritise behaviour issues and opportunities, coordinating input from users and from the Research, Product, and Safeguards teams
- Amplify alignment research breakthroughs, translating them into product, process, and model improvements
- Deeply understand user interaction patterns to identify behaviour improvements that make Claude both more helpful and safer
- Contribute to evals that measure alignment progress
- Identify and scale initiatives and tools that help researchers ship alignment improvements faster
You might be a good fit if you:
- Have a deep passion for and curiosity about AI and LLMs, and use AI regularly
- Have 5+ years of product management experience leading conversational AI products at scale
- Are a first-principles thinker who can navigate and execute amidst ambiguity, flexing into different domains based on the business problem at hand and finding simple, easy-to-understand solutions
- Have a track record of delivering products and features to end users (consumer or end-user B2B)
- Have strong user empathy and the ability to synthesise vague or contradictory feedback into actionable priorities
- Have strong judgment and taste for model behaviour, with the ability to make tradeoffs when there is no clear right answer
- Have a strong grasp of ML concepts and are willing to go deep on technical solutions
- Have intellectual curiosity without ego—comfortable asking questions and learning independently
- Think creatively about the risks and benefits of new technologies, moving beyond past checklists and playbooks
- Have a creative, hacker spirit and love solving puzzles
Logistics
- Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.
- Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
- Visa sponsorship: We do sponsor visas! However, we aren't able to sponsor visas for every role and every candidate. If we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than working on smaller, more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.