Description
WHAT YOU DO AT AMD CHANGES EVERYTHING
At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.
Responsibilities
Develop and optimize core training operators on AMD GPUs (GEMM, GroupedGEMM, Attention, DeepEP, etc.), continuously pursuing state-of-the-art performance (a minimal HIP kernel sketch follows this list).
Conduct in-depth analysis of performance bottlenecks in large-scale model training and drive targeted end-to-end performance optimizations.
Collaborate closely with AMD's software and hardware teams to enhance the performance and stability of the ROCm ecosystem.
Participate in cutting-edge technology research, including but not limited to next-generation GPU hardware, compute-communication operator fusion, and AGI-driven automatic generation of high-performance operators.
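To ground the operator work described above, here is a minimal, purely illustrative HIP sketch: a naive single-precision GEMM (C = A x B) with one thread per output element. The kernel name, sizes, and layout are assumptions for illustration; a production training operator would add tiling, LDS staging, and MFMA instructions, all omitted here.

#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

// Naive SGEMM: each thread computes one element of C (row-major layouts).
__global__ void naive_sgemm(const float* A, const float* B, float* C,
                            int M, int N, int K) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < M && col < N) {
        float acc = 0.0f;
        for (int k = 0; k < K; ++k)
            acc += A[row * K + k] * B[k * N + col];
        C[row * N + col] = acc;
    }
}

int main() {
    const int M = 256, N = 256, K = 256;
    std::vector<float> hA(M * K, 1.0f), hB(K * N, 1.0f), hC(M * N);

    float *dA, *dB, *dC;
    hipMalloc(&dA, M * K * sizeof(float));
    hipMalloc(&dB, K * N * sizeof(float));
    hipMalloc(&dC, M * N * sizeof(float));
    hipMemcpy(dA, hA.data(), M * K * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(dB, hB.data(), K * N * sizeof(float), hipMemcpyHostToDevice);

    dim3 block(16, 16);
    dim3 grid((N + block.x - 1) / block.x, (M + block.y - 1) / block.y);
    naive_sgemm<<<grid, block>>>(dA, dB, dC, M, N, K);
    hipMemcpy(hC.data(), dC, M * N * sizeof(float), hipMemcpyDeviceToHost);

    printf("C[0] = %.1f (expected %d)\n", hC[0], K);  // all-ones inputs: each dot product sums K ones
    hipFree(dA); hipFree(dB); hipFree(dC);
    return 0;
}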
Qualifications
Solid foundation in computer architecture and high-performance computing.
Proficient in C/C++ and familiar with GPU programming (HIP/CUDA) and parallel programming languages such as Triton, with strong engineering implementation skills.
Familiar with parallel computing principles and GPU execution models, demonstrating excellent performance analysis and optimization capabilities (a timing sketch follows this list).
Understanding of large-model training workflows and hands-on experience with operator-level performance optimization during training.
Strong teamwork and cross-functional communication skills.
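As a concrete, illustrative example of the performance-analysis skill set, here is a minimal HIP sketch that times a hypothetical axpy kernel with HIP events, the usual first step before moving to counter-level profiling with tools such as rocprof or Omniperf. Kernel name and problem size are assumptions.

#include <hip/hip_runtime.h>
#include <cstdio>

// Hypothetical bandwidth-bound kernel: y = a*x + y.
__global__ void axpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 24;
    float *x, *y;
    hipMalloc(&x, n * sizeof(float));   // contents irrelevant for timing
    hipMalloc(&y, n * sizeof(float));

    hipEvent_t start, stop;
    hipEventCreate(&start);
    hipEventCreate(&stop);

    hipEventRecord(start, 0);           // bracket the kernel on the default stream
    axpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    hipEventRecord(stop, 0);
    hipEventSynchronize(stop);

    float ms = 0.0f;
    hipEventElapsedTime(&ms, start, stop);
    // axpy moves 3 floats per element (read x, read y, write y).
    printf("axpy: %.3f ms, ~%.1f GB/s\n", ms, 3.0 * n * sizeof(float) / ms / 1e6);

    hipEventDestroy(start); hipEventDestroy(stop);
    hipFree(x); hipFree(y);
    return 0;
}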
Preferred Qualifications
Familiarity with the latest GPU architectural features (e.g., AMD CDNA4 / NVIDIA Blackwell) and their performance optimization methodologies.
Experience in high-performance optimization of core operators (GEMM, Attention, GroupedGEMM, DeepEP, etc.).
Familiarity with the implementation and performance tuning of communication operators (AllReduce, AllToAll, ReduceScatter, etc.); see the AllReduce sketch after this list.
Development or research experience in low-precision computation (FP8 / FP4), compute-communication overlap (CCO), compiler optimizations, or automatic operator generation.
Experience in developing or optimizing large-model training systems (e.g., Megatron-LM, TorchTitan).
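As an illustration of the communication primitives listed above, here is a hypothetical single-process sketch of an in-place AllReduce across two GPUs using RCCL, which mirrors the NCCL API on ROCm (link with -lrccl). The two-GPU assumption and the header path are placeholders and may differ by installation.

#include <rccl/rccl.h>       // header path varies by ROCm version; may be <rccl.h>
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    const int ndev = 2;                  // assumes two visible GPUs
    const size_t count = 1 << 20;
    int devs[ndev] = {0, 1};

    ncclComm_t comms[ndev];
    ncclCommInitAll(comms, ndev, devs);  // one communicator per local GPU

    float* buf[ndev];
    hipStream_t streams[ndev];
    for (int i = 0; i < ndev; ++i) {
        hipSetDevice(i);
        hipMalloc(&buf[i], count * sizeof(float));  // contents illustrative only
        hipStreamCreate(&streams[i]);
    }

    // In-place sum-AllReduce: every device ends up with the elementwise sum.
    ncclGroupStart();
    for (int i = 0; i < ndev; ++i)
        ncclAllReduce(buf[i], buf[i], count, ncclFloat, ncclSum,
                      comms[i], streams[i]);
    ncclGroupEnd();

    for (int i = 0; i < ndev; ++i) {
        hipSetDevice(i);
        hipStreamSynchronize(streams[i]);
        ncclCommDestroy(comms[i]);
        hipFree(buf[i]);
    }
    printf("AllReduce complete\n");
    return 0;
}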
Benefits offered are described: AMD benefits at a glance.
AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.