Orchestrating ML/AI workloads with TPUs on GKE

Published April 9, 2026, 19:00
Google AI Hypercomputer → goo.gle/3ObrQLK
GKE for AI/ML inference → goo.gle/4cg4k8y
[Tutorial] Fine-tune an LLM using TPUs on GKE → goo.gle/48hT4Hu

Tensor Processing Units (TPUs) are now in their 7th generation. They allow machine learning workloads to reach massive scale, especially when running on Google Kubernetes Engine (GKE). But how does that work, and what do you need to know in order to run TPUs on GKE successfully?
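As a rough sketch of what this looks like in practice, a GKE workload requests TPU chips through the `google.com/tpu` resource and lands on a TPU slice node pool via node selectors. The accelerator type, topology, image, and chip count below are illustrative assumptions, not values from the episode:

```yaml
# Hypothetical Pod spec for a TPU workload on GKE.
# Accelerator type, topology, and chip count are placeholders; match them
# to the TPU slice your node pool actually provides.
apiVersion: v1
kind: Pod
metadata:
  name: tpu-workload
spec:
  nodeSelector:
    cloud.google.com/gke-tpu-accelerator: tpu-v5-lite-podslice  # assumed type
    cloud.google.com/gke-tpu-topology: 2x4                      # assumed topology
  containers:
  - name: trainer
    image: us-docker.pkg.dev/my-project/my-repo/trainer:latest  # placeholder image
    resources:
      limits:
        google.com/tpu: 8   # one TPU chip per requested unit
```

The node selectors ensure the Pod schedules only onto nodes backed by the matching TPU slice, while the `google.com/tpu` limit tells the scheduler how many chips the container needs.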

Join Yufeng Guo as he sits down with Kavitha Gowda, product manager for TPUs on GKE, to dig into the details of how to scale TPU workloads on GKE.

Speakers: Yufeng Guo, Kavitha Gowda
Products Mentioned: Google Kubernetes Engine, Cloud Tensor Processing Units, AI Hypercomputer