OpenAI

Software Engineer, Compute Infrastructure

Hybrid · Mid-level · Full time · $230K – $405K · San Francisco; London, UK; New York City; Seattle

First indexed 28 Apr 2026

Description

We are looking for engineers who want to build the compute platform behind OpenAI's research and products. You may be strongest in low-level systems, high-performance computing, distributed infrastructure, reliability, CaaS, agent infrastructure, developer platforms, tooling, or the user experience built around infrastructure.

The common thread is strong engineering judgment and excitement about making enormous compute systems faster, more reliable, and easier to use.

Depending on your background and interests, you might work close to hardware, close to users, on CaaS and agent infrastructure, or on the control planes and data planes in between.

We do not expect every candidate to have worked at every layer. Some engineers will go deep on systems performance, kernel or runtime behavior, large-scale networking protocols, RDMA, NCCL, GPU hardware behavior, benchmarking, scheduling, or hardware reliability; others will make the platform more usable through APIs, tools, workflows, and developer experience.

This is a general opening for Compute Infrastructure. We will consider candidates for teams across Compute Infrastructure and match you based on your strengths, the problems that motivate you, and where the infrastructure needs are highest.

In this role, you will:

  • Build and deeply optimize reliable system software for large-scale compute systems that run some of the world's most demanding AI workloads
  • Design and operate infrastructure across accelerators, CPUs, NICs, switches, networking protocols, storage, data centers, cluster orchestration, scheduling, and fleet health
  • Profile, benchmark, and optimize training workloads across compute, memory, storage, networking, NCCL and collective communication, and cluster scheduling bottlenecks
  • Create hardware-aware automation that makes provisioning, firmware and driver upgrades, incident response, and day-to-day operations faster and less error-prone
  • Build CaaS, agent infrastructure, profiling, observability, benchmarking, and platform tools that help researchers, product engineers, and operators launch, debug, and optimize workloads with less friction
  • Turn operational lessons into better systems, stronger abstractions, and clearer ownership boundaries across teams
  • Collaborate across research, engineering, security, networking, hardware, and data center teams to make compute capacity more capable and easier to use
This listing is enriched and indexed by YubHub. To apply, use the employer's original posting: https://jobs.ashbyhq.com/openai/ca300a6d-a2a7-4580-aad7-323fbdfee7b1