Description
We are building next-generation custom AI silicon designed to accelerate AI workloads with unprecedented efficiency. We are looking for an exceptional Senior AI Systems Engineer to bridge the gap between our custom hardware and modern AI inference frameworks.
Our work directly shapes how AI systems are designed, deployed, and scaled today and into the future. Engineers on this team operate with end-to-end ownership, deep technical rigor, and a strong bias toward real-world impact.
As a Senior AI Systems Engineer, you will own the software integration layer between our custom AI chip's proprietary SDK and SGLang, a state-of-the-art serving framework for Large Language Models (LLMs) and Vision-Language Models.
Responsibilities:
- Architect and develop the backend integration to make our custom AI chip a first-class citizen in SGLang.
- Write custom C++/PyTorch extensions that map SGLang's core operations (e.g., RadixAttention, FlashAttention, matrix multiplication) to our custom chip's proprietary software layer.
- Profile and optimize end-to-end LLM inference latency, throughput, and memory utilization (e.g., PagedAttention KV-cache management) on our hardware.
- Work closely with our hardware architecture and compiler teams, feeding framework-level bottlenecks back into our custom software stack and silicon design.
- Build robust testing pipelines to validate model accuracy and performance parity against standard GPU baselines (a minimal parity-check sketch follows this list).
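To illustrate the kind of parity validation described in the last item, here is a minimal sketch in Python. The "custom_chip" device string, the helper name, and the tolerances are assumptions for illustration only, not identifiers from our actual SDK.

    import torch

    def check_parity(op, inputs, atol=1e-2, rtol=1e-2):
        # Run the same computation on a standard GPU baseline and on the
        # custom accelerator, then compare outputs within dtype-appropriate
        # tolerances. "custom_chip" is a hypothetical device string.
        baseline = op(*[t.to("cuda") for t in inputs])
        candidate = op(*[t.to("custom_chip") for t in inputs])
        torch.testing.assert_close(candidate.cpu(), baseline.cpu(),
                                   atol=atol, rtol=rtol)

In practice, a pipeline like this would sweep representative model layers and datatypes rather than a single operation.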
Qualifications:
- BS, MS, or PhD in Computer Science, Computer Engineering, or a related field.
- Software engineering experience with a focus on systems programming, ML infrastructure, or AI compilers.
- Expertise in Python: Deep understanding of memory management and concurrent programming.
- Experience with LLM Inference Engines: Hands-on experience modifying or extending frameworks like SGLang, vLLM, DeepSpeed-FastGen, or TensorRT-LLM.
- PyTorch Internals: Strong experience writing PyTorch C++ extensions and custom operators (see the operator-registration sketch after this list).
- Hardware Interfacing: Proven track record of integrating machine learning workloads with hardware accelerators (GPUs, TPUs, NPUs) using custom SDKs, APIs, or low-level drivers.
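As a rough illustration of the custom-operator work referenced above, the sketch below registers an operator with PyTorch's dispatcher using the Python torch.library API. The "custom_chip" namespace, the "fused_matmul" schema, and the CPU reference implementation are hypothetical; a real backend would lower the call into the chip's proprietary SDK.

    import torch

    # Define a new operator in a hypothetical "custom_chip" namespace.
    lib = torch.library.Library("custom_chip", "DEF")
    lib.define("fused_matmul(Tensor a, Tensor b) -> Tensor")

    def fused_matmul_reference(a, b):
        # Reference implementation used for validation; a production backend
        # would dispatch this to the accelerator instead of running on CPU.
        return a @ b

    lib.impl("fused_matmul", fused_matmul_reference, "CPU")

    # Usage: torch.ops.custom_chip.fused_matmul(x, y)

The same schema can later be implemented in a C++ extension and registered for the accelerator's dispatch key without changing call sites.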