Google · 12.8M subscribers
Published February 15, 2024, 15:00
This is a demo of long context understanding, an experimental feature in our newest model, Gemini 1.5 Pro, using 100,633 lines of code and a series of multimodal prompts.
This demo is a recorded walkthrough of a single continuous interaction with Gemini 1.5 Pro.
Token count details: The input TXT file (816,511 tokens) and image (256 tokens) total 816,767 tokens. The text prompts add further tokens, yielding the 818,495-token total shown in the interface.
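(By this accounting, the text prompts themselves would contribute roughly 818,495 - 816,767 = 1,728 tokens.)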
To learn more about Gemini 1.5, visit goo.gle/3weBZhn
Subscribe to our Channel: youtube.com/google
Tweet with us on X: twitter.com/google
Follow us on Instagram: instagram.com/google
Join us on Facebook: facebook.com/Google