# Principal Product Manager, AI Model Security

**Company**: Microsoft
**Location**: Mountain View
**Work arrangement**: onsite
**Experience**: senior
**Job type**: full-time
**Category**: Engineering
**Industry**: Technology
**Ticker**: MSFT
**Wikidata**: https://www.wikidata.org/wiki/Q2283

**Apply**: https://microsoft.ai/job/principal-product-manager-ai-model-security-2/
**Canonical**: https://yubhub.co/jobs/job_25c868eb-c32

## Description

Microsoft Superintelligence team’s mission is to empower every person and every organization on the planet to achieve more.

As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

This role is part of Microsoft AI’s Superintelligence Team. The MAIST is a startup-like team inside Microsoft AI, created to push the boundaries of AI toward Humanist Superintelligence , ultra-capable systems that remain controllable, safety-aligned, and anchored to human values.

Our mission is to create AI that amplifies human potential while ensuring humanity remains firmly in control. We aim to deliver breakthroughs that benefit society , advancing science, education, and global well-being.

We are hiring a Product Manager to own AI model security , the discipline of making our frontier models resilient against adversarial attack and purpose-built for security practitioners.

This role has a dual mandate: (1) harden our models against the full spectrum of LLM security threats , prompt injection, data exfiltration, jailbreaking, training data extraction, zero-day exploit generation, model poisoning, and agentic workflow exploitation , and (2) partner closely with Microsoft Security product teams (Azure Security, Security Copilot) to ensure our models deliver best-in-class capabilities for real-world security workflows.

Responsibilities:

Own the model security roadmap: Define and prioritize the security hardening strategy for our frontier models across the full OWASP LLM threat surface , prompt injection (direct and indirect), data exfiltration, jailbreak resistance, system prompt leakage, training data extraction, and adversarial manipulation of agentic workflows.

Drive zero-day and exploit defense: Work with researchers to evaluate and mitigate the risk of models being used to generate zero-day exploits, malware, or novel attack vectors.

Build and scale red-teaming frameworks: Design, run, and iterate adversarial testing programs , both automated and human-driven , to continuously probe model vulnerabilities.

Establish metrics (e.g., jailbreak success rate, injection bypass rate, exfiltration resistance) and drive measurable improvement over time.

Partner with Microsoft Security product teams: Work closely with Azure Security and Security Copilot teams to translate their product requirements into model training priorities.

Ensure our models are purpose-built for threat detection, incident triage, vulnerability assessment, log analysis, and compliance reasoning.

Define security-specific model evaluations: Build benchmark suites and evaluation frameworks that measure real-world security usefulness , not just academic performance.

Drive training data strategy to improve domain-specific model quality for security practitioners.

Shape security policy and launch readiness: Establish clear security criteria for model launches.

Own the security dimension of go/no-go decisions, with frameworks that balance capability, risk, and deployment context.

Stay at the frontier: Track the rapidly evolving LLM security landscape , new attack techniques, emerging standards (OWASP, NIST AI RMF), regulatory requirements (EU AI Act), and academic research.

Translate what you learn into actionable product priorities.

Influence model training and architecture: Partner with researchers and engineers to embed security considerations into model training, fine-tuning, RLHF, and post-training safeguards.

Qualifications:

Bachelor’s Degree AND 5+ years experience in product management, security engineering, or software development OR equivalent experience

Demonstrated hands-on experience with AI/ML systems , you have personally built, evaluated, or shipped ML-powered products or security tools

Deep familiarity with LLM security threats: prompt injection, jailbreaking, data exfiltration, adversarial attacks on generative models , through professional experience, red-teaming, or security research

Experience defining product requirements and driving decisions in partnership with researchers or ML engineers

Track record of building evaluation systems, security benchmarks, or adversarial testing frameworks , not just consuming them

Ability to operate autonomously, make decisions with incomplete information, and drive projects from ambiguity to shipped outcomes

## Skills

### Required
- AI/ML systems
- LLM security threats
- prompt injection
- jailbreaking
- data exfiltration
- adversarial attacks on generative models
- product management
- security engineering
- software development
- model training
- fine-tuning
- RLHF
- post-training safeguards
