Accelerate AI training workloads with Google Cloud TPUs and GPUs

Published July 1, 2024, 15:37
Training large AI models at scale requires high-performance, purpose-built infrastructure. This session will guide you through the key considerations for choosing between tensor processing units (TPUs) and graphics processing units (GPUs) for your training needs. Explore the strengths of each accelerator for various workloads, such as large language models and generative AI models. Discover best practices for training and for optimizing your training workflow on Google Cloud using TPUs and GPUs, and understand the performance and cost implications, along with cost-optimization strategies at scale.
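
As a small illustration of the accelerator-agnostic workflow the session describes (this sketch is not from the session itself), the JAX snippet below runs unchanged on a Cloud TPU VM or a GPU VM: JAX reports whatever devices the machine exposes and XLA compiles the computation for that backend. The toy linear model, array shapes, and function names here are hypothetical and chosen only for illustration.

import jax
import jax.numpy as jnp

def loss_fn(w, x, y):
    # Simple least-squares loss for a toy linear model (illustrative only).
    pred = x @ w
    return jnp.mean((pred - y) ** 2)

# One jit-compiled gradient step; XLA targets TPU or GPU automatically.
grad_step = jax.jit(jax.grad(loss_fn))

def main():
    print("Backend:", jax.default_backend())  # e.g. "tpu" or "gpu"
    print("Devices:", jax.devices())

    key = jax.random.PRNGKey(0)
    x = jax.random.normal(key, (1024, 512))
    y = jax.random.normal(key, (1024,))
    w = jnp.zeros((512,))

    g = grad_step(w, x, y)
    print("Gradient shape:", g.shape)

if __name__ == "__main__":
    main()

The same pattern extends to multi-host training, where JAX shards work across all TPU or GPU devices in the slice or node pool; the session covers the performance and cost trade-offs of those choices in depth.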

Speakers: Vaibhav Singh, Rob Martin, Amanpreet Singh, Erik Nijkamp

Watch more:
All sessions from Google Cloud Next → goo.gle/next24

#GoogleCloudNext

ARC219
Event: Google Cloud Next 2024