Orchestrating ML/AI workloads with TPUs on GKE

Published April 9, 2026, 19:00
Google AI Hypercomputer → goo.gle/3ObrQLK
GKE for AI/ML inference → goo.gle/4cg4k8y
[Tutorial] Fine-tune an LLM using TPUs on GKE → goo.gle/48hT4Hu

Tensor Processing Units (TPUs) are now in their 7th generation. They allow machine learning workloads to reach massive scale, especially when running on Google Kubernetes Engine (GKE). But how does that work, and what do you need to know in order to run TPUs on GKE successfully?

Join Yufeng Guo as he sits down with Kavitha Gowda, the product manager of TPUs on GKE, to get into the details of how to scale TPU workloads on GKE.
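To give a rough sense of what running a TPU workload on GKE involves in practice, here is a minimal sketch of a Pod manifest that requests TPU chips. The node-selector labels and the `google.com/tpu` resource name follow GKE's TPU scheduling conventions, but the accelerator type, topology, chip count, and container image are illustrative assumptions; the right values depend on the TPU generation and slice shape you provision.

```yaml
# Hypothetical Pod spec requesting a TPU slice on GKE.
# The accelerator label value, topology, and image are assumptions;
# adjust them to match your node pool's TPU type and slice shape.
apiVersion: v1
kind: Pod
metadata:
  name: tpu-workload
spec:
  nodeSelector:
    cloud.google.com/gke-tpu-accelerator: tpu-v5-lite-podslice
    cloud.google.com/gke-tpu-topology: 2x4
  containers:
  - name: trainer
    image: us-docker.pkg.dev/my-project/my-repo/trainer:latest  # placeholder image
    resources:
      requests:
        google.com/tpu: "8"   # all chips on the node must be requested
      limits:
        google.com/tpu: "8"
```

The scheduler uses the node-selector labels to place the Pod on a node pool backed by the matching TPU slice, and the `google.com/tpu` request makes the chips visible to the container.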

Speakers: Yufeng Guo, Kavitha Gowda
Products Mentioned: Google Kubernetes Engine, Cloud Tensor Processing Units, AI Hypercomputer