Accelerate AI training workloads with Google Cloud TPUs and GPUs

Published 1 July 2024, 15:37
Training large AI models at scale requires high-performance, purpose-built infrastructure. This session will guide you through the key considerations for choosing between tensor processing units (TPUs) and graphics processing units (GPUs) for your training needs. Explore the strengths of each accelerator for workloads such as large language models and generative AI models. Discover best practices for training and for optimizing your training workflow on Google Cloud using TPUs and GPUs, and understand the performance and cost implications, along with cost-optimization strategies at scale.
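
As a minimal illustration of the kind of accelerator-agnostic workflow the session covers (a sketch added here for context, not material from the session itself), the snippet below uses JAX to report whichever backend is available on a Google Cloud VM, TPU or GPU, and runs a small JIT-compiled computation on it. It assumes JAX is installed with the matching TPU or CUDA backend.

# Minimal sketch: detect the available accelerator backend (TPU or GPU)
# with JAX and run a small JIT-compiled matrix multiplication on it.
import jax
import jax.numpy as jnp

# Report the backend JAX selected and the devices it can see.
print(f"Backend: {jax.default_backend()}")
print(f"Devices: {jax.devices()}")

@jax.jit
def matmul(a, b):
    # JIT-compiled so the operation runs on the default accelerator.
    return a @ b

a = jnp.ones((1024, 1024))
b = jnp.ones((1024, 1024))
result = matmul(a, b)
print(result.shape, result.dtype)
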

Speakers: Vaibhav Singh, Rob Martin, Amanpreet Singh, Erik Nijkamp

Watch more:
All sessions from Google Cloud Next → goo.gle/next24

#GoogleCloudNext

ARC219
Event: Google Cloud Next 2024