Description
WHAT YOU DO AT AMD CHANGES EVERYTHING
We care deeply about transforming lives with AMD technology to enrich our industry, our communities, and the world. Our mission is to build great products that accelerate next-generation computing experiences – the building blocks for the data center, artificial intelligence, PCs, gaming and embedded. Underpinning our mission is the AMD culture. We push the limits of innovation to solve the world's most important challenges. We strive for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives.
AMD together we advance_
SMTS Product Applications Engineer - Data Center GPU
THE TEAM:
AMD's Data Center GPU organization is transforming the industry with our AI-based Graphics Processors. Our primary objective is to design exceptional products that drive the evolution of computing experiences, serving as the cornerstone for enterprise Data Centers, Artificial Intelligence (AI), HPC, and Embedded systems. If this resonates with you, come and join our Data Center GPU organization, where we are building amazing AI-powered products with amazing people.
THE ROLE:
The Datacenter GPU Product Applications Engineer is a key technical lead, responsible for the technical execution of AMD's Datacenter graphics hardware/software subsystem projects for AMD OEM partners and enterprise commercial end-customers. This position offers a unique opportunity to apply your strong graphics, compute, datacenter, virtualization, and AI/machine learning skills, as well as program management skills, to collaborate with customers that use AMD Instinct™ Accelerators.
THE PERSON:
An engineer, solutions architect, or site reliability engineer with experience deploying large-scale datacenter clusters. Must be self-motivated and able to work well within a team environment.
KEY RESPONSIBILITIES:
· Resolve technical issues for customers that use AMD Instinct™ products.
· Assist development teams to root cause hardware / software technical issues and help to drive them to closure in a timely manner during the entire product lifecycle (i.e. from initial hardware bring-up through product end-of-life).
· Provide technical guidance and information to our customers in support of their server graphics and compute projects for AI and Machine Learning workloads.
· Mentor more junior members of the technical staff.
· Own the customer technical relationship and technical requirements.
· Provide technical guidance to internal teams based on customer feedback.
· Partner with program manager on project schedules, maintain action items tracker, ensure deliverables are met, provide project status updates to customers and AMD management.
· Build datacenter GPU Docker images and containers for customers to test and deploy.
· Qualify and assess new software functionality to ensure customer compatibility.
PREFERRED EXPERIENCE:
· Expertise in networking and performance optimization for large-scale AI/ML networks, including network, compute, and storage cluster design, modeling, analytics, performance tuning, convergence, and scalability improvements
· Direct co-development/deployment experience working with strategic customers/partners to bring solutions to market.
· Proven leadership in engaging customers across diverse technical disciplines in avenues such as proofs of concept and competitive and customer evaluations
· Familiarity with orchestrators/resource managers such as Slurm and Kubernetes (k8s).
· Expert Linux knowledge: installation, setup, usage, and debugging in a cluster environment
· Strong knowledge of virtualized environments, including hypervisor vendors (VMware, Citrix, KVM, Microsoft, etc.) and virtual machine setup and management.
· Familiarity with datacenter GPU software stacks such as AMD ROCm™ or NVIDIA CUDA
· Knowledge of server architecture and functionality, including server remote management, network topologies, graphics software and hardware sub-systems
· Familiarity with distributed model training via NCCL/RCCL, MPI, or similar communication libraries
· Experience in implementing and optimizing parallel methods on GPU accelerators in distributed memory systems with MPI, CUDA, HIP, OpenMP, etc.
· Familiarity with AI/machine learning workloads, frameworks, and models
· Understanding of site reliability engineering best practices.
· Strong debugging, problem-solving, and analysis skills.
ACADEMIC CREDENTIALS:
· Master's or PhD in Computer Science, Computational Physics, Engineering or related subjects, or equivalent experience desired
LOCATION:
Austin, TX; open to other locations
#HYBRID
#LI-RL1
At AMD, your base pay is one part of your total rewards package. Your base pay will depend on where your skills, qualifications, experience, and location fit into the hiring range for the position. You may be eligible for incentives based upon your role such as either an annual bonus or sales incentive. Many AMD employees have the opportunity to own shares of AMD stock, as well as a discount when purchasing AMD stock if voluntarily participating in AMD's Employee Stock Purchase Plan. You'll also be eligible for competitive benefits described in more detail here.
AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.