Getting started with deploying foundation models on Amazon SageMaker | Amazon Web Services
Published June 7, 2024, 16:58
In this video, you will learn how to run inference on SageMaker. To deploy FMs to production, SageMaker offers 80+ instance types and flexible deployment modes such as real-time, asynchronous, serverless, and batch transform, so you can choose the right deployment mode for your use case. SageMaker offers specialized hosting containers such as Large Model Inference (LMI), Text Generation Inference (TGI), PyTorch, and custom containers, along with the ability to optimize the container for performance and cost.
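As a minimal sketch of the real-time mode using the SageMaker Python SDK and the TGI hosting container (the model ID and instance type below are illustrative assumptions, not recommendations):

import sagemaker
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

# Assumes this runs where a SageMaker execution role is available.
role = sagemaker.get_execution_role()

# Resolve the Text Generation Inference (TGI) hosting container image.
image_uri = get_huggingface_llm_image_uri("huggingface")

# HF_MODEL_ID and the instance type are example choices; pick your own FM and size.
model = HuggingFaceModel(
    role=role,
    image_uri=image_uri,
    env={
        "HF_MODEL_ID": "tiiuae/falcon-7b-instruct",
        "SM_NUM_GPUS": "1",
    },
)

# Create a real-time endpoint; asynchronous, serverless, and batch transform
# modes are configured through other deploy/transform options.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
)

print(predictor.predict({"inputs": "What is Amazon SageMaker?"}))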
Learn more: go.aws/3Vt466O
Subscribe:
More AWS videos: go.aws/3m5yEMW
More AWS events videos: go.aws/3ZHq4BK
Do you have technical AWS questions?
Ask the community of experts on AWS re:Post: go.aws/3lPaoPb
ABOUT AWS
Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. Millions of customers — including the fastest-growing startups, largest enterprises, and leading government agencies — are using AWS to lower costs, become more agile, and innovate faster.
#AWS #AmazonWebServices #CloudComputing #GenerativeAI #Foundationmodel