Description
Software Engineer, Fleet Management
Location
San Francisco
Employment Type
Full time
Department
Scaling
Compensation
- $230K – $490K • Offers Equity
The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.
Benefits
- Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts
- Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)
- 401(k) retirement plan with employer match
- Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)
- Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees
- 13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)
- Mental health and wellness support
- Employer-paid basic life and disability coverage
- Annual learning and development stipend to fuel your professional growth
- Daily meals in our offices, and meal delivery credits as eligible
- Relocation support for eligible employees
- Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided
About the Role
The Fleet team at OpenAI supports the computing environment that powers our cutting-edge research and product development. We oversee large-scale systems that span data centers, GPUs, networking, and more, ensuring high availability, performance, and efficiency. Our work enables OpenAI’s models to operate seamlessly at scale, supporting both internal research and external products like ChatGPT. We prioritize safety, reliability, and responsible AI deployment over unchecked growth.
Responsibilities
- Design and build systems to manage both cloud and bare-metal fleets at scale.
- Develop tools that integrate low-level hardware metrics with high-level job scheduling and cluster management algorithms.
- Leverage LLMs to coordinate vendor operations and optimize infrastructure workflows.
- Automate infrastructure processes, reducing repetitive toil and improving system reliability.
- Collaborate with hardware, infrastructure, and research teams to ensure seamless integration across the stack.
- Continuously improve tools, automation, processes, and documentation to enhance operational efficiency.
You might thrive in this role if you:
- Have strong software engineering skills with experience in large-scale infrastructure environments.
- Possess broad knowledge of cluster-level systems (e.g., Kubernetes, CI/CD pipelines, Terraform, cloud providers).
- Have deep expertise in server-level systems (e.g., containerization, Chef, Linux kernels, firmware management, host routing).
- Are passionate about optimizing the performance and reliability of large compute fleets.
- Thrive in dynamic environments and are eager to solve complex infrastructure challenges.
- Value automation, efficiency, and continuous improvement in everything you build.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of what AI systems can do and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core. To achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.