
Software Engineer, Inference - Performance Optimization

OpenAI
Full time $295K - $555K San Francisco

First indexed 25 Apr 2026

Description

Compensation

We offer a competitive salary range of $295K - $555K, plus generous equity, performance-related bonuses for eligible employees, and the following benefits:

  • Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts
  • Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)
  • 401(k) retirement plan with employer match
  • Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)
  • Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees
  • 13+ paid company holidays, plus multiple paid coordinated company office closures throughout the year for focus and recharge
  • Paid sick or safe time (1 hour per 30 hours worked, or more where required by applicable state or local law)
  • Mental health and wellness support
  • Employer-paid basic life and disability coverage
  • Annual learning and development stipend to fuel your professional growth
  • Daily meals in our offices, and meal delivery credits as eligible
  • Relocation support for eligible employees
  • Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided

About the Team

Our team analyzes inference stack performance across the application, model, and fleet layers to identify bottlenecks and drive faster, cheaper inference. We combine systems profiling, benchmarking, and analysis to understand where time and cost are spent, then turn that understanding into performance optimizations and models that project performance and capacity needs for future launches.

About the Role

You will model inference performance across the application, model, and fleet layers with high fidelity, build cost-to-serve estimates from microbenchmarks, and create tools that help cross-functional teams reason about latency, capacity, utilization, and cost tradeoffs.

In this role, you will:

  • Build and refine performance models that translate microbenchmark results into cost-to-serve estimates.
  • Analyze inference workloads end to end across applications, models, and fleet infrastructure.
  • Enhance tooling that pinpoints latency and throughput bottlenecks across layers.
  • Partner with other teams to turn performance insights into concrete improvements and project how future changes affect inference.

You might thrive in this role if you:

  • Enjoy reasoning from first principles about distributed systems, model inference, and hardware efficiency.
  • Are comfortable working across abstraction layers, from application behavior to kernels, accelerators, networking, and fleet scheduling.
  • Have deep expertise with performance profiling, benchmarking, analysis, and optimization.
  • Enjoy collaborating with engineering and research teams to improve real production systems.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

This listing is enriched and indexed by YubHub. To apply, use the employer's original posting: https://jobs.ashbyhq.com/openai/85fceac9-fb8a-4d71-a524-a8e5f1e9b01b