Description
We are seeking a Technical Policy Manager, Cyber Harms to lead our efforts to prevent AI misuse in the cyber domain. As a member of our Safeguards team, you will be responsible for designing and overseeing the execution of capability evaluations to assess the cyber-relevant capabilities of new models. You will also create comprehensive cyber threat models, including attack vectors, exploit chains, precursor identification, and weaponization techniques.
This is a unique opportunity to shape how frontier AI models handle dual-use cybersecurity knowledge, balancing the tremendous potential of AI to advance legitimate security research and defensive capabilities against the need to prevent misuse by malicious actors.
In this role, you will lead and grow a team of technical specialists focused on cyber threat modeling and evaluation frameworks. You will serve as the primary domain expert on cyber harms, advising cross-functional teams on threat landscapes and mitigation strategies.
You will collaborate closely with internal and external threat modeling experts to develop training data for safety systems, and with ML engineers to train these systems, optimizing for both robustness against adversarial attacks and low false-positive rates for legitimate security researchers.
You will also analyze safety system performance in production traffic, identifying gaps and proposing improvements. You will conduct regular reviews of existing policies and enforcement systems to identify and address gaps and ambiguities related to cybersecurity risks.
You will develop and run rigorous stress tests of safeguards against evolving cyber threats and new product surfaces. You will partner with Research, Product, Policy, Security, and the Frontier Red Team to ensure cybersecurity safety is embedded throughout the model development lifecycle.
You will translate cybersecurity domain knowledge into actionable safety requirements and clearly articulated policies. You will contribute to external communications, including model cards, blog posts, and policy documents related to cybersecurity safety.
You will monitor emerging technologies and the evolving threat landscape for new risks and mitigation opportunities, and address them strategically.
You will mentor and develop team members, fostering a culture of technical excellence and responsible AI development.
To be successful in this role, you will need to have:
- An MS or PhD in Computer Science, Cybersecurity, or a related technical field, OR equivalent professional experience in offensive or defensive cybersecurity
- 5+ years of hands-on experience in cybersecurity, with deep expertise in areas such as vulnerability research, exploit development, network security, malware analysis, or penetration testing
- 2+ years of experience managing technical teams or leading complex technical projects with multiple stakeholders
- Experience in scientific computing and data analysis, with proficiency in programming (Python preferred)
- Deep expertise in modern cybersecurity, including both offensive techniques (vulnerability research, exploit development, penetration testing, malware analysis) and defensive measures (detection, monitoring, incident response)
- Demonstrated ability to create threat models and translate technical cyber risks into policy frameworks
- Familiarity with responsible disclosure practices, vulnerability coordination, and cybersecurity frameworks (e.g., MITRE ATT&CK, NIST Cybersecurity Framework, CWE/CVE systems)
- Strong analytical and writing skills, with the ability to navigate ambiguity and explain complex technical concepts to non-technical stakeholders
- Experience developing policies or guidelines at scale, balancing safety concerns with enabling legitimate use cases
- A passion for learning new skills and an ability to rapidly adapt to changing techniques and technologies
- Comfort working in a fast-paced environment where priorities may shift as AI capabilities evolve
- Track record of translating specialized technical knowledge into actionable safety policies or enforcement guidelines
Preferred qualifications include:
- Background in AI/ML systems, particularly experience with large language models
- Experience developing ML-based security systems or adversarial ML research
- Experience working with defense, intelligence, or security organizations (e.g., NSA, CISA, national labs, security contractors)
- Published security research, disclosed vulnerabilities, or participation in bug bounty programs
- Understanding of Trust & Safety operations and content moderation at scale
- Certifications such as OSCP, OSCE, or GXPN, or an equivalent credential demonstrating technical depth
- Understanding of dual-use security research concerns and ethical considerations in AI safety
The annual compensation range for this role is $320,000-$405,000 USD.