Open Standard, Multi-vendor AI Training and Inference with LLMs | Tech Talk | Innovation Selects
Intel Software
Published October 10, 2024, 21:00
Build and deploy LLMs with SYCL. Learn about the llm.c and llama.cpp projects and their potential for efficient training and inference on Intel and Nvidia GPUs.
LLMs have become a very popular way of harnessing AI, and there are many emerging models available to developers. During this presentation we will talk about different ways to run LLMs that take advantage of the open-standard nature of SYCL. The llm.c code from Andrej Karpathy is a simple project for LLM training using CUDA or pure C, and the llama.cpp project runs fast inference on a wide range of target architectures. Both projects can use a SYCL back-end implementation, which lets them run on a wide range of targets, including Intel and Nvidia GPUs. During this session we will present these two projects, explain how they are implemented with SYCL, and demonstrate how you can use them to run LLM training and inference on targets from multiple vendors.
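To give a feel for the "single source, multiple vendors" idea behind these ports, here is a minimal SYCL 2020 sketch (not taken from llm.c or llama.cpp, and assuming a SYCL compiler such as DPC++). It lists the devices the runtime can see and runs a trivial kernel on the default one; the same source is recompiled for whichever back end (Intel, Nvidia, CPU, ...) is selected at build or run time.

// Minimal illustrative sketch, not code from the talk.
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    // Enumerate every device the SYCL runtime exposes (Intel, Nvidia, CPU, ...).
    for (const auto& dev : sycl::device::get_devices())
        std::cout << dev.get_info<sycl::info::device::name>() << "\n";

    // Run a small kernel on the default device.
    sycl::queue q{sycl::default_selector_v};
    std::vector<float> data(1024, 1.0f);
    {
        sycl::buffer<float> buf{data.data(), sycl::range<1>{data.size()}};
        q.submit([&](sycl::handler& h) {
            sycl::accessor acc{buf, h, sycl::read_write};
            h.parallel_for(sycl::range<1>{data.size()},
                           [=](sycl::id<1> i) { acc[i] *= 2.0f; });
        });
    } // buffer destruction waits for the kernel and copies results back to host
    std::cout << "data[0] = " << data[0] << "\n"; // expect 2
    return 0;
}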
Intel Innovation: intel.ly/46ixwbA
Intel Events: intel.ly/4dCbLWb
SYCL Tech: intel.ly/3YhMeNq
llama.cpp Repository and Landing Page: intel.ly/400k4YH
llm.c Repository and Landing Page: intel.ly/4ez4FDj
About Intel Software:
Intel® Developer Zone is committed to empowering and assisting software developers in creating applications for Intel hardware and software products. The Intel Software YouTube channel is an excellent resource for those seeking to enhance their knowledge. Our channel provides the latest news, helpful tips, and engaging product demos from Intel and our numerous industry partners. Our videos cover various topics; you can explore them further by following the links.
Connect with Intel Software:
INTEL SOFTWARE WEBSITE: intel.ly/2KeP1hD
INTEL SOFTWARE on FACEBOOK: bit.ly/2z8MPFF
INTEL SOFTWARE on TWITTER: bit.ly/2zahGSn
INTEL SOFTWARE GITHUB: bit.ly/2zaih6z
INTEL DEVELOPER ZONE LINKEDIN: bit.ly/2z979qs
INTEL DEVELOPER ZONE INSTAGRAM: bit.ly/2z9Xsby
INTEL GAME DEV TWITCH: bit.ly/2BkNshu
#intelsoftware #innovation