Staff / Senior Software Engineer, Cloud Inference

Anthropic
Hybrid · Staff · Full-time · $300,000-$485,000 USD · San Francisco, CA | Seattle, WA

First indexed 18 Apr 2026

Description

We are seeking a Staff / Senior Software Engineer to join our Cloud Inference team. The successful candidate will design and build infrastructure that serves Claude across multiple cloud service providers (CSPs), accounting for differences in compute hardware, networking, APIs, and operational models.

The ideal candidate will have significant software engineering experience, with a strong background in high-performance, large-scale distributed systems serving millions of users. They will also have experience building or operating services on at least one major cloud platform (AWS, GCP, or Azure), with exposure to Kubernetes, Infrastructure as Code, or container orchestration.

Responsibilities:

  • Design and build infrastructure that serves Claude across multiple CSPs, accounting for differences in compute hardware, networking, APIs, and operational models
  • Collaborate with CSP partner engineering teams to resolve operational issues, influence provider roadmaps, and stand up end-to-end serving on new cloud platforms
  • Design and evolve CI/CD automation systems, including validation and deployment pipelines, that reliably ship new model versions to millions of users across cloud platforms without regressions
  • Design interfaces and tooling abstractions across CSPs that enable cost-effective inference management, scale across providers, and reduce per-platform complexity
  • Contribute to capacity planning and autoscaling strategies that dynamically match supply with demand across CSP validation and production workloads
  • Optimise inference cost and performance across providers, designing workload placement and routing systems that direct requests to the most cost-effective accelerator and region
  • Contribute to inference features that must work consistently across all platforms
  • Analyse observability data across providers to identify performance bottlenecks, cost anomalies, and regressions, and drive remediation based on real-world production workloads

Requirements:

  • Significant software engineering experience, with a strong background in high-performance, large-scale distributed systems serving millions of users
  • Experience building or operating services on at least one major cloud platform (AWS, GCP, or Azure), with exposure to Kubernetes, Infrastructure as Code, or container orchestration
  • A strong interest in inference
  • The ability to thrive in cross-functional collaboration with both internal teams and external partners
  • A fast learner's aptitude for quickly ramping up on new technologies, hardware platforms, and provider ecosystems
  • High autonomy and self-direction, taking ownership of problems end-to-end with a bias toward flexibility and high-impact work
  • Willingness to pick up slack, even when it falls outside your job description

Preferred skills:

  • Direct experience working with CSP partner teams to scale infrastructure or products across multiple platforms, navigating differences in networking, security, privacy, billing, and managed service offerings
  • A background in building platform-agnostic tooling or abstraction layers that work across cloud providers
  • Hands-on experience with capacity management, cost optimisation, or resource planning at scale across heterogeneous environments
  • Strong familiarity with LLM inference optimisation, batching, caching, and serving strategies
  • Experience with machine learning infrastructure, including GPUs, TPUs, Trainium, or other AI accelerators
  • Background designing and building CI/CD systems that automate deployment and validation across cloud environments
  • Solid understanding of multi-region deployments, geographic routing, and global traffic management
  • Proficiency in Python or Rust

Salary Range: $300,000-$485,000 USD

Experience Level: Staff

Employment Type: Full-time

Workplace Type: Hybrid

Category: Engineering

Industry: Technology

Required Skills:

  • High-performance, large-scale distributed systems
  • Cloud computing (AWS, GCP, Azure)
  • Kubernetes
  • Infrastructure as Code
  • Container orchestration
  • Inference
  • Cross-functional collaboration
  • Autonomy and self-direction
  • Platform-agnostic tooling
  • Capacity management
  • Cost optimisation
  • Resource planning
  • LLM inference optimisation
  • Machine learning infrastructure
  • CI/CD systems
  • Multi-region deployments
  • Geographic routing
  • Global traffic management
  • Python
  • Rust

This listing is enriched and indexed by YubHub. To apply, use the employer's original posting: https://job-boards.greenhouse.io/anthropic/jobs/5107466008