How to Fine-Tune a Model on AMD GPUs using LoRA

Published August 27, 2025, 13:00
In this video, we’ll show how to use Group Relative Policy Optimization (GRPO) with Unsloth AI and Low-Rank Adaptation (LoRA) to fine-tune an LLM with AMD ROCm™ software on AMD GPUs.
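
For orientation, the sketch below shows what a GRPO + LoRA training loop looks like with Unsloth and TRL on a ROCm-enabled AMD GPU. The model name, toy prompt dataset, reward function, and hyperparameters are illustrative assumptions, not the exact setup used in the video; see the tutorial links below for the full walkthrough.

```python
# Minimal sketch: GRPO + LoRA with Unsloth and TRL on ROCm (AMD GPUs appear as "cuda").
# Model name, dataset, reward function, and hyperparameters are assumptions for illustration.
from datasets import Dataset
from unsloth import FastLanguageModel
from trl import GRPOConfig, GRPOTrainer

# Load a quantized base model with Unsloth (assumption: any causal LLM works here).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.2-3B-Instruct",
    max_seq_length=1024,
    load_in_4bit=True,
)

# Attach LoRA adapters so only the low-rank update matrices are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                    # LoRA rank
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Toy prompt dataset; GRPO samples several completions per prompt and
# compares them within the group to form its advantage baseline.
train_dataset = Dataset.from_dict(
    {"prompt": ["What is 12 * 7? Think step by step.",
                "Solve 45 + 19 and explain your reasoning."]}
)

# Hypothetical reward function: favors longer, non-empty answers.
# A real reasoning recipe would reward correctness and output format instead.
def length_reward(completions, **kwargs):
    return [min(len(c), 200) / 200.0 for c in completions]

training_args = GRPOConfig(
    output_dir="grpo-lora-out",
    per_device_train_batch_size=4,   # must be divisible by num_generations
    num_generations=4,               # completions sampled per prompt
    max_completion_length=128,
    learning_rate=5e-6,
    max_steps=50,
)

trainer = GRPOTrainer(
    model=model,
    reward_funcs=length_reward,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```

Because only the LoRA adapter weights are updated, the memory footprint stays small enough to run on a single GPU; the reward function is the main piece you would swap out for your own task.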

Referenced Links:

Train your own R1 Reasoning Model with Unsloth AI: Tutorial Link (rocm.docs.amd.com/projects/ai-...
Fine-Tuning FLUX.1 to Learn Mochicat: Tutorial Link (github.com/Mahdi-CV/flux_LoRA_...
Latest AI Tutorials: Tutorial List (rocm.docs.amd.com/projects/ai-...

Find the resources you need to develop using AMD products: amd.com/en/developer.html

Have questions or ideas? Collaborate directly with developers and experts on the AMD Developer Community Discord:
discord.gg/2tYF7hqW

***

© 2025 Advanced Micro Devices, Inc. All rights reserved. AMD, the AMD Arrow logo, EPYC, ROCm, and AMD Instinct and combinations thereof are trademarks of Advanced Micro Devices, Inc.