Description
As a software engineer for GenAI inference, you will help design, develop, and optimize the inference engine that powers Databricks' Foundation Model API. You'll work at the intersection of research and production, ensuring our large language model (LLM) serving systems are fast, scalable, and efficient.
Your work will touch the full GenAI inference stack, from kernels and runtimes to orchestration and memory management. You will contribute to the design and implementation of the inference engine, and collaborate on a model-serving stack optimized for large-scale LLM inference.
Key responsibilities include:
- Collaborating with researchers to bring new model architectures or features (sparsity, activation compression, mixture-of-experts) into the engine
- Optimizing for latency, throughput, memory efficiency, and hardware utilization across GPUs and other accelerators
- Building and maintaining instrumentation, profiling, and tracing tooling to uncover bottlenecks and guide optimizations
- Developing and enhancing scalable routing, batching, scheduling, memory management, and dynamic loading mechanisms for inference workloads
- Supporting reliability, reproducibility, and fault tolerance in the inference pipelines, including A/B launches, rollbacks, and model versioning
- Integrating with federated, distributed inference infrastructure: orchestrating across nodes, balancing load, and handling communication overhead
- Collaborating cross-functionally with platform engineering, cloud infrastructure, and security/compliance teams
- Documenting and sharing learnings, contributing to internal best practices and open-source efforts when possible
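To give a flavor of the batching and scheduling work listed above, here is a minimal sketch of continuous batching for LLM serving: requests are admitted into the running batch while their token footprint fits a memory budget, and finished sequences are evicted after every decode step. All names here are hypothetical illustrations, not Databricks code.

```python
from dataclasses import dataclass
from collections import deque

@dataclass
class Request:
    rid: int              # request id
    prompt_len: int       # tokens already in the KV cache after prefill
    max_new_tokens: int   # decode budget for this request
    generated: int = 0

    def done(self) -> bool:
        return self.generated >= self.max_new_tokens

class ContinuousBatcher:
    """Admit requests up to a token budget; evict finished ones each step."""

    def __init__(self, token_budget: int):
        self.token_budget = token_budget
        self.waiting: deque[Request] = deque()
        self.running: list[Request] = []

    def submit(self, req: Request) -> None:
        self.waiting.append(req)

    def _used_tokens(self) -> int:
        # Approximate KV-cache footprint of the running batch.
        return sum(r.prompt_len + r.generated for r in self.running)

    def step(self) -> list[int]:
        # Admit waiting requests while their prompts still fit the budget.
        while (self.waiting and
               self._used_tokens() + self.waiting[0].prompt_len <= self.token_budget):
            self.running.append(self.waiting.popleft())
        # One decode step: each running sequence emits one token.
        for r in self.running:
            r.generated += 1
        # Evict finished sequences so new requests can join next step.
        finished = [r.rid for r in self.running if r.done()]
        self.running = [r for r in self.running if not r.done()]
        return finished
```

A real engine would track per-token KV-cache pages rather than raw counts, but the admit/step/evict loop is the core idea behind continuous batching.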
Requirements include:
- BS/MS/PhD in Computer Science or a related field
- Strong software engineering background (3+ years or equivalent) in performance-critical systems
- Solid understanding of ML inference internals: attention, MLPs, recurrent modules, quantization, sparse operations, etc.
- Hands-on experience with CUDA, GPU programming, and key libraries (cuBLAS, cuDNN, NCCL, etc.)
- Comfortable designing and operating distributed systems, including RPC frameworks, queuing, request batching, sharding, and memory partitioning
- Demonstrated ability to uncover and solve performance bottlenecks across layers (kernel, memory, networking, scheduler)
- Experience building instrumentation, tracing, and profiling tools for ML models
- Ability to work closely with ML researchers and translate novel model ideas into production systems
- Ownership mindset and eagerness to dive deep into complex system challenges
- Bonus: published research or open-source contributions in ML systems, inference optimization, or model serving
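As a concrete reference point for the "ML inference internals" requirement above, here is a minimal NumPy sketch of scaled dot-product attention, the building block behind the attention modules mentioned in the list. This is a textbook illustration, not production kernel code.

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    # Subtract the max for numerical stability before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q: np.ndarray, k: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)
    return softmax(scores) @ v
```

Production engines fuse these steps into custom kernels (e.g. FlashAttention-style tiling) to avoid materializing the full score matrix, which is exactly the kind of optimization this role works on.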
To apply, use the employer's original posting:
https://job-boards.greenhouse.io/databricks/jobs/8202670002