Inside Lyria 3, Google's music generation model

Published February 18, 2026, 20:02
Jeff Chang, Myriam Hamed Torres, and Jason Baldridge from the Google DeepMind team join host Logan Kilpatrick for a deep dive into Lyria 3, Google’s latest music generation model. Their conversation explores the transition from simple audio generation to a model that acts as a collaborative instrument, providing creators with fine-grained control over mood, instrumentation, and vocals. Learn more about the technical challenges of prompt adherence in music, the importance of "vibe" in human evaluations, and the future of layered, iterative music composition.

Chapters:
0:00 - Intro
1:00 - Defining music generation models
1:40 - Lyria as a new instrument
3:05 - Connecting language and creative intent
5:08 - Guest backgrounds and musical journeys
7:57 - Demo: Instrumental funk jam
8:29 - Bridging the gap for non-musicians
12:03 - Demo: Exploring lyrics and vocals
15:07 - The magic of iterative co-creation
15:40 - Meeting users across the expertise spectrum
17:01 - Empowering new musical expressions
18:29 - Emotional and communal impact of music
19:51 - Opportunities for developers and community
21:09 - Real-time vs. song generation models
23:23 - Creating experimental sonic landscapes
25:08 - Demo: Capturing unexpectedness and energy
28:33 - Evaluating music through taste and expertise
31:30 - The diligence of music evaluation
31:52 - The future of Lyria and AI-first workflows
35:07 - Articulating creative vision through language


Listen to this podcast:
Apple Podcasts → goo.gle/3Bm7QzQ
Spotify → goo.gle/3ZL3ADl

Watch more Release Notes → goo.gle/4njokfg
Subscribe to Google for Developers → goo.gle/developers

Speakers: Jeff Chang, Myriam Hamed Torres, Jason Baldridge, Logan Kilpatrick
Products Mentioned: Google AI, Gemini, Genie