How to prepare data for LLMs
How can developers prepare data for use in a large language model (LLM)? How can developers ensure sensitive data is not accidentally passed around?
Introduction to Gemini on Vertex AI
Getting started with Vertex AI Gemini 2.0 Flash → goo.gle/3D7UaZB Getting started with Vertex AI Gemini 1.5 Flash → goo.gle/4eZOL58 Getting started with Vertex AI Gemini 1.5 Pro →
Introduction to grounding with Gemini on Vertex AI
Grounding overview | Google Cloud → goo.gle/3CPOnYq Try Vertex AI Search |
How to evaluate AI applications
Vertex AI Evaluation Service Tutorial Notebooks → goo.gle/4i7vdxl How do developers know if their AI applications are working effectively? How can developers measure AI performance?
How to create Looker Studio Reports in Looker
Using Studio in Looker → goo.gle/4gxTwmP Enabling Studio in Looker → goo.gle/4fcABNi AI for BI Innovation Roadmap Webinar → goo.gle/3BqpbYk Studio in Looker brings the best
Gemini Code Assist tools: Stay in the flow while coding
Tools for Gemini Code Assist → goo.gle/4fvbvZT Gemini Code Assist → goo.gle/3BfYUvU Stay in your workflow and never leave your IDE again.
Fine-tuning open AI models using Hugging Face TRL
Tutorial: Fine-tune Gemma 2 with Hugging Face TRL on GKE → goo.gle/3ZcLQiL Hugging Face Deep Learning containers → goo.gle/3Otahn3 Docs: Hugging Face TRL → goo.gle/3ZgsyJs
Multimodal AI in action
GitHub workshop → goo.gle/multimodal_use_cases_workshop_lab Multimodal Models → goo.gle/498q4RD Long Context Window → goo.gle/496aFl1 What is multimodal AI?
Your first workload with AI Hypercomputer
AI Hypercomputer → goo.gle/3OJDASw GitHub → goo.gle/3Yn5cRX Explore the AI Hypercomputer and discover how to build your own.
Ollama and Cloud Run with GPUs
Get started with Cloud Run → goo.gle/4i5oGDB Ollama is the easiest way to get up and running with large language models.
How do I know my AI app is working?
You’ve built an AI application, but how do you evaluate whether it is working effectively? How do developers measure the performance of their AI applications?
Deploy Gemma 2 with multiple LoRA adapters on GKE
Tutorial: Deploy Gemma 2 with multiple LoRA adapters using TGI on GKE → goo.gle/4f5KP1C Video: Train a LoRA adapter with your own dataset → goo.gle/4gkBLar Deep dive: A conceptual
How can developers prepare data for use in LLMs?
Create a system where you can effectively manage your LLM and its feedback, and improve the system for your use cases and users.
Cloud migration insights from banking
Christian Gorke, VP and Head of Cyber Center of Excellence at Commerzbank, shares insights from their cloud migration journey, highlighting key lessons learned and practical advice for organizations
How to autoscale a TGI deployment on GKE
Tutorial: Configure autoscaling for TGI on GKE → goo.gle/3Z9a7WK Learn more about observability on GKE → goo.gle/4951bWY Hugging Face TGI (Text Generation Inference) →
Choosing between self-hosted GKE and managed Vertex AI to host AI models
Read the blog post → goo.gle/3V41A6f Vertex AI or Google Kubernetes Engine? Which platform is the best fit for unleashing the power of LLMs in your applications? Find out in this video.
Run Hugging Face transformers on GPU enabled Cloud Run functions
See how to create a GPU-enabled Cloud Run function that directly hosts a Gemma 2 2B model.
Cloud Run functions with Gemma 2 and Ollama
Learn how to create a Cloud Run function that talks to a Gemma 2 model hosted in the Ollama service.
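The entry above describes a Cloud Run function calling a Gemma 2 model served by Ollama. A minimal Python sketch of that request flow, using Ollama's `/api/generate` endpoint, might look like the following (the service URL and model tag are assumptions — point them at your own deployment):

```python
import json
from urllib import request

# Assumed base URL for the Ollama service; in a Cloud Run deployment
# this would be the deployed Ollama service's URL instead of localhost.
OLLAMA_URL = "http://localhost:11434"


def build_generate_payload(prompt: str, model: str = "gemma2:2b") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False asks Ollama for a single JSON reply instead of a
    stream of partial chunks.
    """
    return {"model": model, "prompt": prompt, "stream": False}


def extract_response_text(reply: dict) -> str:
    """Pull the generated text out of a non-streaming Ollama reply."""
    return reply.get("response", "")


def ask_gemma(prompt: str) -> str:
    """Send a prompt to the Gemma 2 model served by Ollama."""
    body = json.dumps(build_generate_payload(prompt)).encode("utf-8")
    req = request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return extract_response_text(json.load(resp))


if __name__ == "__main__":
    print(ask_gemma("Why is the sky blue?"))
```

Inside a Cloud Run function you would call `ask_gemma` from your HTTP handler and return its result; keeping the payload-building and response-parsing steps as separate functions makes them easy to test without a live Ollama instance.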
Running Diffusion with Cloud Run GPUs
See how quickly GPU support on Cloud Run scales up on demand with Stable Diffusion.
19 videos