Model Armor: Protecting Generative AI from Threats and Misuse

Published May 29, 2025
Read the Model Armor documentation → goo.gle/43fWaK6

Protect your generative AI applications from threats like prompt injection and data leaks with Model Armor, the new security guard for any LLM. This video dives into how Model Armor uses centralized policies and prompt/response filtering to address risks from the OWASP Top 10 for LLM Applications. We'll explore key features and benefits, then see a live demo of Model Armor in action against unsafe prompts and jailbreaking attempts, malicious URLs, and attempts to leak sensitive data, both in user inputs and model outputs.
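The prompt/response filtering pattern described above can be sketched in plain Python. This is a conceptual illustration with made-up patterns, not the Model Armor API itself (which is configured through centralized policy templates in Google Cloud): a guard screens user prompts before they reach the LLM and redacts sensitive data from model responses before they reach the user.

```python
import re

# Hypothetical policy: patterns that signal jailbreaking attempts.
UNSAFE_PROMPT_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"pretend you have no restrictions", re.IGNORECASE),
]

# Hypothetical policy: sensitive data to redact from model outputs.
SENSITIVE_DATA_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US-SSN-like number
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the user prompt passes the policy, False if blocked."""
    return not any(p.search(prompt) for p in UNSAFE_PROMPT_PATTERNS)

def screen_response(response: str) -> str:
    """Redact sensitive data from a model response before returning it."""
    for p in SENSITIVE_DATA_PATTERNS:
        response = p.sub("[REDACTED]", response)
    return response
```

In a real deployment this screening happens in a managed service rather than in application code, so the same policy applies to every model and application centrally.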

Subscribe to Google Cloud Tech → goo.gle/GoogleCloudTech

Speakers: Aron Eidelman
Products Mentioned: AI Infrastructure