OpenAI

Inference Technical Lead, On-Device Transformers


First indexed 24 Apr 2026

Description

Job Title: Inference Technical Lead, On-Device Transformers

Location: San Francisco

Department: Consumer Products

Job Type: Full time

Workplace Type: Hybrid

Compensation

The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.

  • Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts
  • Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)
  • 401(k) retirement plan with employer match
  • Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)
  • Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees
  • 13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)
  • Mental health and wellness support
  • Employer-paid basic life and disability coverage
  • Annual learning and development stipend to fuel your professional growth
  • Daily meals in our offices, and meal delivery credits as eligible
  • Relocation support for eligible employees
  • Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.

About the Team

The Future of Computing Research team is an applied research team in the Consumer Devices group, focused on developing new methods and models in support of our mission of building AGI that benefits all of humanity.

About the Role

As a Technical Lead on the Future of Computing Research team, you will work alongside both the best ML researchers in the world and the greatest design talent of our generation to push the frontier of model capabilities.

This role is based in San Francisco, CA. We follow a hybrid model with 4 days a week in the office and offer relocation assistance to new employees.

In this role, you will:

  • Evaluate and select silicon platforms (GPUs, NPUs, and specialized accelerators) for on-device and edge deployment of OpenAI models.
  • Work closely with research teams to co-design model architectures that meet real-world deployment constraints such as latency, memory, power, and bandwidth.
  • Analyze and model system performance, identifying tradeoffs between model design, memory hierarchy, compute throughput, and hardware capabilities.
  • Partner with hardware vendors and internal infrastructure teams to bring up new accelerators and ensure efficient execution of transformer workloads.
  • Build and lead a team of engineers responsible for implementing the low-level inference stack, including kernel development and runtime systems.
  • Run through the necessary walls to take nascent research capabilities and turn them into capabilities we can build on top of.
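The performance-modeling work above often starts with a back-of-envelope roofline check: does single-token decode saturate a target accelerator's compute, or its memory bandwidth? A minimal sketch, using illustrative hardware numbers (not any real chip's specs):

```python
# Roofline-style check: is batch-1 transformer decode memory-bound
# or compute-bound on a given accelerator?
# All hardware numbers below are illustrative assumptions, not real specs.

def decode_intensity(params_b: float, bytes_per_param: float = 2.0) -> float:
    """Arithmetic intensity (FLOPs/byte) of batch-1 decode.

    Each generated token does ~2 FLOPs per weight (multiply + add)
    and must stream every weight from memory once, so intensity
    reduces to 2 / bytes_per_param regardless of model size.
    """
    flops = 2.0 * params_b * 1e9
    bytes_moved = params_b * 1e9 * bytes_per_param
    return flops / bytes_moved


# Hypothetical edge accelerator: 40 TFLOP/s compute, 200 GB/s DRAM bandwidth.
peak_flops = 40e12
peak_bw = 200e9
machine_balance = peak_flops / peak_bw  # FLOPs/byte the chip can sustain

ai = decode_intensity(params_b=7, bytes_per_param=2.0)  # 7B model, fp16 weights
print(f"decode intensity: {ai:.1f} FLOP/B vs machine balance: {machine_balance:.0f} FLOP/B")
print("memory-bound" if ai < machine_balance else "compute-bound")
```

On these assumed numbers decode is heavily memory-bound, which is why the tradeoff analysis in the bullets above weighs memory hierarchy and bandwidth alongside raw compute throughput.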

You might thrive in this role if you:

  • Have experience evaluating or deploying workloads on GPUs, NPUs, or other specialized accelerators.
  • Understand the performance characteristics of transformer models, including attention, KV-cache behavior, and memory bandwidth requirements.
  • Have designed or optimized high-performance compute systems, such as inference engines, distributed runtimes, or hardware-aware ML pipelines.
  • Have experience building or leading teams working on low-level performance-critical software such as CUDA kernels, compilers, or ML runtimes.
  • Have already spent time in the weeds teaching models to speak and perceive.
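The KV-cache behavior mentioned above is often the first constraint hit on-device: cache size grows linearly with context length and can rival the weights themselves. A minimal sizing sketch, with illustrative model-shape numbers (not any specific model):

```python
# Back-of-envelope KV-cache size for a decoder-only transformer.
# Model shape numbers below are illustrative assumptions.

def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   seq_len: int, batch: int, bytes_per_elem: int = 2) -> int:
    """Bytes needed to cache keys and values (hence the factor of 2)
    for a batch of sequences at a given context length."""
    return 2 * layers * kv_heads * head_dim * seq_len * batch * bytes_per_elem


# e.g. 32 layers, 8 KV heads (grouped-query attention), head_dim 128,
# 8K context, fp16 cache entries:
size = kv_cache_bytes(layers=32, kv_heads=8, head_dim=128, seq_len=8192, batch=1)
print(f"{size / 2**30:.2f} GiB")  # 1.00 GiB per sequence
```

A gigabyte per 8K-token sequence is easily absorbed in a datacenter, but on an edge device it competes directly with the weights for DRAM, which is why architecture co-design (fewer KV heads, cache quantization) matters for the deployment constraints listed earlier.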

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

Salary

Compensation Range: $445K

This listing is enriched and indexed by YubHub. To apply, use the employer's original posting: https://jobs.ashbyhq.com/openai/a653b035-a866-4a5c-9c2a-fda3c2950eee