Demo: Optimizing Gemma inference on NVIDIA GPUs with TensorRT-LLM

Published April 2, 2024, 4:00
Even the smallest Large Language Models are compute-intensive, significantly affecting the cost of your generative AI application. Your ability to increase throughput and reduce latency can make or break many business cases. NVIDIA TensorRT-LLM is an open-source tool that can considerably speed up execution of your models, and in this talk we demonstrate its application to Gemma.

Check out more videos from Gemma Developer Day 2024 → goo.gle/440EAIV
Subscribe to Google for Developers → goo.gle/developers

#Gemma #GemmaDeveloperDay

Event: Gemma Developer Day 2024
Products Mentioned: Gemma