Security Lead, Agentic Red Team

Google DeepMind
Onsite · Senior · Full-time · $248,000 – $349,000 + bonus + equity + benefits · Mountain View, California, US; New York City, New York, US

First indexed 16 Mar 2026

Description

We're a team of scientists, engineers, and machine learning experts working together to advance the state of the art in artificial intelligence. Our mission is to close the 'Agentic Launch Gap': the critical window in which novel AI capabilities outpace traditional security reviews.

As the Security Lead for the Agentic Red Team, you will direct a specialized unit of AI Researchers and Offensive Security Engineers focused on adversarial AI and agentic exploitation. Operating as a technical player-coach, you will architect complex, multi-turn attack scenarios while managing cross-functional partnerships with Product Area leads and Google security to influence launch criteria.

Key Responsibilities:

  • Direct Agile Offensive Security: Lead a specialized red team focused on rapid, high-impact engagements targeting production-level AI models and systems.
  • Perform Complex AI Exploitation: Develop and execute advanced attack chains targeting vulnerabilities unique to GenAI, such as privilege escalation through tool use, data poisoning, and multi-turn prompt injection.
  • Design Automated Validation Systems: Collaborate with Google teams to engineer 'Auto RedTeaming' solutions that transform manual vulnerability discoveries into robust, automated regression testing frameworks.
  • Engineer Technical Countermeasures: Create innovative defense-in-depth frameworks and control systems to mitigate agentic logic errors and non-deterministic model behaviors.
  • Manage Threat Intelligence Assets: Develop and oversee an evolving inventory of exploit primitives and agent-specific attack patterns used to establish release criteria and evaluate model security benchmarks.
  • Establish Security Scope: Partner with Google security teams on conventional infrastructure protection so the team can concentrate on agentic logic, model inference, and AI-centric exploits.

About You:

  • Bachelor's degree in Computer Science, Information Security, or equivalent practical experience.
  • Experience in Red Teaming, Offensive Security, or Adversarial Machine Learning.
  • Deep technical understanding of LLM architectures and agentic workflows (e.g., chain-of-thought reasoning, tool usage).
  • Proven ability to work in a consulting capacity with product teams, driving security improvements in fast-paced release cycles.
  • Experience managing or technically leading small, high-performance engineering teams.

In addition, the following would be an advantage:

  • Hands-on experience developing exploits for GenAI models (e.g., prompt injection, adversarial examples, training data extraction).
  • Familiarity with AI safety benchmarks and evaluation frameworks.
  • Experience writing code (Python, Go, or C++) to build automated security tools or fuzzers.
  • Ability to communicate complex probabilistic risks effectively to executive stakeholders and engineering teams.

The US base salary range for this full-time position is $248,000 – $349,000, plus bonus, equity, and benefits.

This listing is enriched and indexed by YubHub. To apply, use the employer's original posting: https://job-boards.greenhouse.io/deepmind/jobs/7560787