We're now seeking a Senior AI Software Engineer for our LLM Inference Performance Analysis and Optimization team!
NVIDIA leads the generative AI revolution, and we're looking for an experienced AI Software Engineer to optimize LLM inference performance. Our team collaborates with compiler, kernel, hardware, and framework teams to identify bottlenecks, develop optimization methods, and validate improvements. If you're passionate about system-level performance, compiler IR, and GPU kernel optimization for deep learning inference, we'd love to consider you for our team.
What you'll be doing:
Analyze the performance of LLMs on NVIDIA GPUs by employing advanced profiling and projection tools.
Identify opportunities for performance improvements in the IR-based compiler middle-end optimizer and/or in precompiled kernel optimizations driven by Graph IR transformations.
Design and develop new compiler passes and optimization techniques to deliver robust, maintainable compiler infrastructure and tools.
Collaborate closely with architecture teams to influence and co-design future hardware features that improve compiler and runtime efficiency.
Work with geographically distributed teams across compiler, hardware, kernel, and framework domains to drive performance improvements and resolve complex issues.
Contribute to a core team at the forefront of deep learning and LLM inference technology, spanning hardware architecture development, kernel optimization, and integration with higher-level deep learning frameworks.
What we need to see:
Master’s or PhD in Computer Science, Computer Engineering, or a related field, or equivalent experience.
5+ years of relevant experience.
Strong hands-on programming expertise in C++ and Python, with solid software engineering fundamentals.
Deep familiarity with modern LLM architectures, including inference optimization, profiling, and compiler-level performance tuning.
Significant background in kernel optimization and code generation, including graph transformations, fusion, scheduling, and custom kernel generation frameworks such as OpenAI Triton or other compiler-based code generation pipelines.
Hands-on experience with deep learning frameworks like TensorRT-LLM, vLLM, SGLang, Jax/XLA, or related compiler/runtime environments.
Proven ability to analyze and optimize LLM performance bottlenecks across model development, kernel execution, and runtime systems.
Excellent communication and collaboration skills, with the ability to work independently and effectively across distributed teams in a fast-paced environment.
A strong drive to continuously improve software and hardware performance through profiling, analysis, and optimization.
Proficiency in CUDA programming and familiarity with GPU-accelerated deep learning frameworks and performance tuning techniques.
Ways to stand out from the crowd:
Showcase innovative applications of agentic AI tools that enhance productivity and workflow automation.
Proven background in LLVM, MLIR, and/or Clang compiler development.
Active engagement with the open-source LLVM or MLIR community, enabling tighter integration and alignment with upstream efforts.
NVIDIA is recognized as one of the world’s most desirable engineering environments, built by teams who value technical depth, innovation, and impact. We work alongside some of the best minds in GPU computing, systems software, and AI. If you’re driven by performance, enjoy solving complex problems, and thrive in an environment that rewards initiative and technical excellence, we’d love to hear from you!
#LI-Hybrid
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 148,000 USD - 235,750 USD for Level 3, and 184,000 USD - 287,500 USD for Level 4. You will also be eligible for equity and benefits.