Secure your AI agents for production workloads

Published March 6, 2026, 5:00 PM
Codelab→ goo.gle/4atfPdd
Video for developers → goo.gle/3ZJ6dor
Video for data engineers → goo.gle/4tRO4CO

How do you secure your AI agents against malicious attacks while ensuring they scale under pressure? In this episode of Agentverse, we move beyond "it works on my machine" to build a battle-hardened infrastructure on Google Cloud. You'll learn how to deploy open-source models using Ollama and vLLM, secure your pipeline with Model Armor, and implement full observability—all while optimizing your architecture to squeeze maximum performance out of your GPU budget.
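As a taste of the self-hosted deployment step, here is a minimal sketch of serving Ollama on Cloud Run with an attached GPU. The service name, region, GPU type, and model are illustrative assumptions, not values from the video:

```shell
# Deploy the official Ollama container image to Cloud Run with one GPU.
# Service name, region, resources, and GPU type are assumptions for illustration.
gcloud run deploy ollama-service \
  --image=ollama/ollama \
  --region=us-central1 \
  --gpu=1 --gpu-type=nvidia-l4 \
  --cpu=4 --memory=16Gi \
  --no-cpu-throttling \
  --no-allow-unauthenticated

# Pull an open model into the running service with an authenticated request.
SERVICE_URL="$(gcloud run services describe ollama-service \
  --region=us-central1 --format='value(status.url)')"
curl -X POST "${SERVICE_URL}/api/pull" \
  -H "Authorization: Bearer $(gcloud auth print-identity-token)" \
  -d '{"name": "gemma2:2b"}'
```

Keeping the service private (`--no-allow-unauthenticated`) and calling it with an identity token is one way to avoid exposing the model endpoint publicly; the vLLM deployment in the video follows the same Cloud Run pattern.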

Chapters:
0:00 - Intro: The Guardian, the Platform Engineer
2:05 - Self-hosted LLMs: Deploy Ollama
5:07 - Deploy vLLM
8:48 - Set up Model Armor
12:04 - Build the agent pipeline
15:00 - Observability: Metrics and tracing
17:06 - Outro


🔔 Subscribe to Google Cloud Tech → goo.gle/GoogleCloudTech

#Gemini #GoogleCloud #Agentverse

Speaker: Debi Cabrera
Products Mentioned: Agentverse, Gemini, Agent Development Kit, Model Armor, Ollama, vLLM, Cloud Run