AWS GPU Infra Principal Architect - Beijing/Shanghai/Shenzhen, 100k+
Beijing | Master's degree or above | 10+ years of experience
Do you have a builder’s mentality where “show me” means more than “tell me”? Are you passionate about AI infrastructure, do you understand cloud architectures and platforms, and are you quick to pick up emerging technologies? Are you adept at working with customers to experiment with innovative approaches and validate the technical feasibility of solutions?
Generative AI is rapidly growing in importance. We're witnessing an increasing number of remarkable GenAI models and innovations—from AI infrastructure and intelligent customer service chatbots to operational data analysis, code generation, and multimodal implementations. Given the scale required for developing GenAI workloads, the cloud is an ideal place to build them, and Amazon Web Services is the leader in this market. We’re looking for someone passionate and deeply excited about this space. Someone who is devoted to helping IC customers understand how GenAI can make a big difference to their businesses.
Key job responsibilities
- As an AIML Specialist Solutions Architect (SA) in AI Infrastructure, you will serve as the Subject Matter Expert (SME) for providing optimal solutions for model training and inference workloads that leverage Amazon Web Services accelerated computing services. As part of the Specialist Solutions Architecture team, you will work closely with other Specialist SAs to enable large-scale customer model workloads and drive the adoption of AWS EC2, EKS, ECS, SageMaker, and other computing platforms for GenAI practice.
- You will interact with other SAs in the field, providing guidance on their customer engagements, and you will develop white papers, blogs, reference implementations, and presentations to enable customers and partners to fully leverage AI Infrastructure on Amazon Web Services. You will also create field enablement materials for the broader SA population, to help them understand how to integrate Amazon Web Services GenAI solutions into customer architectures.
- You must have deep technical experience working with technologies related to Large Language Models (LLMs), Stable Diffusion, and other SOTA model architectures, spanning model design, fine-tuning, distributed training, and inference acceleration. A strong machine learning development background is preferred, in addition to experience in application building and architecture design. You should be familiar with the Nvidia ecosystem and related technical options, and will leverage this knowledge to help Amazon Web Services customers in their selection process.
- Candidates must have great communication skills and be very technical and hands-on, with the ability to impress Amazon Web Services customers at any level, from ML engineers to executives. Previous experience with Amazon Web Services is desired but not required, provided you have experience building large-scale solutions. You will get the opportunity to work directly with senior engineers at customers, partners, and Amazon Web Services service teams, influencing their roadmaps and driving innovation.
Basic qualifications
- 5+ years of hands-on experience optimizing AI infrastructure, with deep expertise in inference acceleration frameworks (e.g., vLLM, SGLang, TensorRT) and in model training and serving systems across the PyTorch and TensorFlow ecosystems (a minimal serving sketch follows this list);
- Advanced proficiency in Nvidia GPU performance optimization techniques, including memory management, kernel fusion, and quantization strategies for large-scale deep learning workloads;
- Strong foundation in parallel computing principles with practical CUDA programming experience, emphasizing efficient resource utilization and throughput maximization;
- Demonstrated success implementing and tuning distributed AI systems leveraging modern frameworks like Megatron-LM and Ray, with particular focus on LLM deployment and horizontal scaling across GPU clusters.
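To give a concrete flavor of the inference-acceleration experience described above, here is a minimal sketch of offline batch inference with vLLM. It is illustrative, not a recommended configuration: the model name is a small stand-in checkpoint, and the prompts and sampling parameters are placeholders.

```python
from vllm import LLM, SamplingParams

# Illustrative prompts; in practice these would come from a real workload.
prompts = [
    "Explain tensor parallelism in one sentence.",
    "What does PagedAttention optimize?",
]

# Placeholder sampling settings; tune per workload.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# "facebook/opt-125m" is a small stand-in model for this sketch;
# tensor_parallel_size would be raised to shard a large model across GPUs.
llm = LLM(model="facebook/opt-125m", tensor_parallel_size=1)

for output in llm.generate(prompts, sampling_params):
    print(output.prompt, "->", output.outputs[0].text)
```

In production, the same engine is typically run as an OpenAI-compatible server (`vllm serve <model>`), with tensor parallelism and quantization chosen to fit the GPU cluster.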
Preferred qualifications
- First-hand implementation experience with AI infrastructure operation and optimization (Nvidia GPUs, frameworks, servers, networking, power, etc.);
- Graduate degree in a highly quantitative field (Computer Science, Machine Learning, Operations Research, Statistics, Mathematics, etc.);
- Proficiency in performance optimization on AWS Trainium;
- Proficiency in kernel programming for accelerated hardware using programming models such as (but not limited to) CUDA;
- Solid end-to-end, hands-on experience developing Transformer-related deep learning algorithms (see the attention sketch after this list);
- Experience with patents or publications at top-tier peer-reviewed conferences or journals;
- Experience writing and speaking about complex technical concepts for broad audiences in a simplified format; strong written and verbal communication skills.
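As a small illustration of the Transformer fundamentals mentioned above, here is a minimal scaled dot-product attention in PyTorch. It is a teaching sketch, not a production kernel; real workloads would use fused implementations such as torch.nn.functional.scaled_dot_product_attention or FlashAttention.

```python
import math
import torch

def attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
              mask: torch.Tensor | None = None) -> torch.Tensor:
    """Scaled dot-product attention over (batch, heads, seq, head_dim) tensors."""
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    if mask is not None:
        # Positions where mask == 0 are excluded from attention.
        scores = scores.masked_fill(mask == 0, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

# Toy shapes for a smoke test: batch=2, heads=4, seq=8, head_dim=16.
q = torch.randn(2, 4, 8, 16)
out = attention(q, torch.randn_like(q), torch.randn_like(q))
print(out.shape)  # torch.Size([2, 4, 8, 16])
```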