Thinking Machines Lab

Research Engineer, Infrastructure, Kernels

Onsite · Senior · Full-time · $350,000–$475,000 USD · San Francisco

First indexed 18 Apr 2026

Description

We're looking for an infrastructure research engineer to design, optimize, and maintain the compute foundations that power large-scale language model training. You will develop high-performance ML kernels (e.g., CUDA, CuTe, Triton), enable efficient low-precision arithmetic, and improve the distributed compute stack that makes training large models possible.

This role is perfect for an engineer who enjoys working close to the metal and across the research boundary. You'll collaborate with researchers and systems architects to bridge algorithmic design with hardware efficiency. You'll prototype new kernel implementations, profile performance across hardware generations, and help define the numerical and parallelism strategies that determine how we scale next-generation AI systems.

Responsibilities

  • Design and implement custom ML kernels (e.g., CUDA, CuTe, Triton) for core LLM operations such as attention, matrix multiplication, gating, and normalization, optimized for modern GPU and accelerator architectures.
  • Design compute primitives that reduce memory-bandwidth bottlenecks and improve kernel compute efficiency.
  • Collaborate with research teams to align kernel-level optimizations with model architecture and algorithmic goals.
  • Develop and maintain a library of reusable kernels and performance benchmarks that serve as the foundation for internal model training.
  • Contribute to infrastructure stability and scalability, ensuring reproducibility, consistency across precision formats, and high utilization of compute resources.
  • Document and share insights through internal talks, technical papers, or open-source contributions to strengthen the broader ML systems community.

Skills and Qualifications

Minimum qualifications:

  • Bachelor’s degree or equivalent experience in computer science, electrical engineering, statistics, machine learning, physics, robotics, or similar.
  • Strong engineering skills: the ability to contribute performant, maintainable code and to debug complex codebases.
  • Understanding of deep learning frameworks (e.g., PyTorch, JAX) and their underlying system architectures.
  • Thrive in a highly collaborative environment with many different cross-functional partners and subject-matter experts.
  • A bias for action: you take the initiative to work across stacks and teams wherever you spot an opportunity to make sure something ships.
  • Proficiency in CUDA, CuTe, Triton, or other GPU programming frameworks.
  • Demonstrated ability to analyze, profile, and optimize compute-intensive workloads.

Preferred qualifications:

  • Experience training or supporting large-scale language models with tens of billions of parameters or more.
  • Track record of improving research productivity through infrastructure design or process improvements.
  • Experience developing or tuning kernels for deep learning frameworks such as PyTorch, JAX, or custom accelerators.
  • Familiarity with tensor parallelism, pipeline parallelism, or distributed data processing frameworks.
  • Experience implementing low-precision formats (FP8, INT8, block floating point) or contributing to related compiler stacks (e.g., XLA, TVM).
  • Contributions to open-source GPU, ML systems, or compiler optimization projects.
  • Prior research or engineering experience in numerical optimization, communication-efficient training, or scalable AI infrastructure.

This listing is enriched and indexed by YubHub. To apply, use the employer's original posting: https://job-boards.greenhouse.io/thinkingmachines/jobs/5013934008