Safe RAG for LLMs

Published September 12, 2024, 16:00
Blog post → goo.gle/4gfJoQh
Code repo → goo.gle/4gnh12v
Codelab → goo.gle/3XETh2r

Large Language Models (LLMs) are pretty smart, but they don’t know everything. For example, an LLM might know why the sky is blue, but it probably doesn’t know more specific things, like which flight the user has booked. Many AI applications use Retrieval-Augmented Generation (RAG) to feed that sort of user-specific data to LLMs, so they can provide better answers.
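For a rough sense of that flow, here is a minimal Python sketch (illustrative only; retrieve_flight_records and build_prompt are made-up helpers, not the code from the repo linked above):

def retrieve_flight_records(user_id: str) -> list[str]:
    # A real app would query a database or vector store; this fakes it.
    fake_db = {"user-123": ["Flight UA 421, SFO -> ORD, departs 2024-09-20 08:15"]}
    return fake_db.get(user_id, [])

def build_prompt(user_id: str, question: str) -> str:
    # Stuff the user's own records into the prompt as grounding context.
    context = "\n".join(retrieve_flight_records(user_id))
    return ("Answer using only the context below.\n\n"
            f"Context:\n{context}\n\n"
            f"Question: {question}")

print(build_prompt("user-123", "Which flight am I booked on?"))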

However, malicious users can use specially engineered prompts to trick an LLM into revealing more data than intended. This is especially dangerous if the LLM has access to databases through RAG. In this video, Wenxin Du shows Martin Omander how to make RAG safer and reduce the risk of an LLM leaking sensitive data that it gathered via RAG.
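The core idea behind making RAG safer is to enforce authorization at retrieval time rather than trusting the prompt. Below is a hypothetical Python sketch of that pattern (not the video's code; see the repo above). The user ID comes from the verified session, never from the model or the prompt, so a prompt-injection attack cannot widen the query:

import sqlite3

def setup_demo_db() -> sqlite3.Connection:
    # In-memory table standing in for a real bookings database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE bookings (user_id TEXT, detail TEXT)")
    conn.executemany("INSERT INTO bookings VALUES (?, ?)",
                     [("user-123", "Flight UA 421, SFO -> ORD"),
                      ("user-456", "Flight DL 88, JFK -> LAX")])
    return conn

def retrieve_for_user(conn: sqlite3.Connection,
                      authenticated_user_id: str,
                      search_term: str) -> list[str]:
    # The WHERE clause is pinned to the authenticated user, and the
    # parameterized query keeps the search term from altering the SQL.
    rows = conn.execute(
        "SELECT detail FROM bookings WHERE user_id = ? AND detail LIKE ?",
        (authenticated_user_id, "%" + search_term + "%")).fetchall()
    return [r[0] for r in rows]

conn = setup_demo_db()
# Even if a malicious prompt asks for "all flights", retrieval only ever
# returns rows the caller is authorized to read.
print(retrieve_for_user(conn, "user-123", "Flight"))

Because the filter lives in the retrieval layer, the LLM never receives other users' data in the first place, so there is nothing sensitive for it to leak.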

Chapters:
0:00 - Intro
1:15 - RAG
1:57 - Making RAG safer
3:11 - Architecture review
4:47 - Questions & Answers
5:47 - How to get started
6:09 - Wrap up

Watch more Serverless Expeditions → goo.gle/ServerlessExpeditions
Subscribe to Google Cloud Tech → goo.gle/GoogleCloudTech

#ServerlessExpeditions #CloudRun

Speakers: Wenxin Du, Martin Omander
Products Mentioned: Cloud - Containers - Cloud Run, Generative AI - General