Published January 14, 2025, 17:00
Fine-tune Gemma models in Keras using LoRA → goo.gle/407Kise

Learn how to fine-tune large language models using LoRA (Low-Rank Adaptation) with Gemma, a family of open models from Google. Watch along as Googler Paige Bailey uses Google Colab and the Databricks Dolly 15k dataset to demonstrate this fine-tuning technique.
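
As a rough illustration of what the demo covers, the sketch below shows how LoRA fine-tuning of Gemma can look with the KerasNLP API. The dataset handling is simplified, Kaggle credentials are assumed to be configured, and the preset name, LoRA rank, and hyperparameters are illustrative assumptions rather than the exact values used in the video.

import os

# Pick a Keras backend before importing Keras ("jax" is an assumption;
# any supported backend works).
os.environ["KERAS_BACKEND"] = "jax"

import json
import keras
import keras_nlp

# Load a small slice of the Databricks Dolly 15k instruction dataset
# (assumes databricks-dolly-15k.jsonl is already in the working directory).
prompts = []
with open("databricks-dolly-15k.jsonl") as f:
    for line in f:
        example = json.loads(line)
        if example["context"]:  # skip examples that need extra context, for brevity
            continue
        prompts.append(
            f"Instruction:\n{example['instruction']}\n\nResponse:\n{example['response']}"
        )
prompts = prompts[:1000]  # small subset so training fits in a Colab session

# Load a pretrained Gemma causal LM and enable LoRA on its backbone.
# The preset name and rank=4 are illustrative choices.
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")
gemma_lm.backbone.enable_lora(rank=4)
gemma_lm.preprocessor.sequence_length = 256  # shorter sequences keep memory use low

# Standard causal-LM training setup; only the low-rank adapter weights are trainable.
optimizer = keras.optimizers.AdamW(learning_rate=5e-5, weight_decay=0.01)
optimizer.exclude_from_weight_decay(var_names=["bias", "scale"])
gemma_lm.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=optimizer,
    weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
gemma_lm.fit(prompts, epochs=1, batch_size=1)

# Generate from the fine-tuned model.
print(gemma_lm.generate("Instruction:\nWhat is LoRA?\n\nResponse:\n", max_length=128))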

Chapters:
0:00 - What is Gemma?
0:44 - What is Low-Rank Adaptation (LoRA)?
1:20 - [Demo] Setting up your AI environment
2:45 - [Demo] Fine-tuning with LoRA
3:33 - Conclusion

Watch more Generative AI Experiences for Developers → goo.gle/genAI4devs
Subscribe to Google Cloud Tech → goo.gle/GoogleCloudTech

#GoogleCloud #DevelopersAI

Speaker: Paige Bailey
Products Mentioned: Gemma, Google Colab, Gemini