NVIDIA is seeking an exceptional Manager, Deep Learning Inference Software, to lead a world-class engineering team advancing the state of AI model deployment. You will shape the software powering today’s most sophisticated AI systems — from large language models to multimodal generative AI — all accelerated on NVIDIA GPUs. The Deep Learning Inference team develops and optimizes open-source frameworks that make AI deployment scalable, efficient, and accessible — including SGLang, vLLM, and FlashInfer. Our work enables developers worldwide to harness NVIDIA accelerators for real-time inference at every scale, from datacenter clusters to edge devices.
What you'll be doing:
Lead, mentor, and scale a high-performing engineering team focused on deep learning inference and GPU-accelerated software.
Drive the strategy, roadmap, and execution of NVIDIA’s inference frameworks engineering, focusing on SGLang.
Partner with internal compiler, libraries, and research teams to deliver end-to-end optimized inference pipelines across NVIDIA accelerators.
Oversee performance tuning, profiling, and optimization of large-scale models for LLM, multimodal, and generative AI applications.
Guide engineers in adopting best practices for CUDA, Triton, CUTLASS, and multi-GPU communications (NIXL, NCCL, NVSHMEM).
Represent the team in roadmap and planning discussions, ensuring alignment with NVIDIA’s broader AI and software strategies.
Foster a culture of technical excellence, open collaboration, and continuous innovation.
What we need to see:
MS, PhD, or equivalent experience in Computer Science, Electrical/Computer Engineering, or a related field.
6+ years of software development experience, including 3+ years in technical leadership or engineering management.
Strong background in C/C++ software design and development; proficiency in Python is a plus.
Hands-on experience with GPU programming (CUDA, Triton, CUTLASS) and performance optimization.
Proven record of deploying or optimizing deep learning models in production environments.
Experience leading teams using Agile or collaborative software development practices.
Ways to stand out from the crowd:
Significant open-source contributions to deep learning or inference frameworks such as PyTorch, vLLM, SGLang, Triton, or TensorRT-LLM.
Deep understanding of multi-GPU communications (NIXL, NCCL, NVSHMEM) and distributed inference architectures.
Expertise in performance modeling, profiling, and system-level optimization across CPU and GPU platforms.
Proven ability to mentor engineers, guide architectural decisions, and deliver complex projects with measurable impact.
Publications, patents, or talks on LLM serving, model optimization, or GPU performance engineering.
With highly competitive salaries and a comprehensive benefits package, NVIDIA is widely considered to be one of the technology world's most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us, and our rapid growth means endless opportunities for career advancement.
If you’re a passionate technical leader ready to shape the future of AI inference frameworks — and build the software that powers the world’s most advanced models — we’d love to hear from you.
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 224,000 USD - 356,500 USD for Level 3, and 272,000 USD - 425,500 USD for Level 4. You will also be eligible for equity and benefits.