Run inference on Amazon SageMaker | Step 1: Deploy models | Amazon Web Services

Published July 24, 2024, 15:37
Amazon SageMaker makes it easier to deploy foundation models (FMs) and serve inference requests at the best price performance for any use case. In this video, you will learn how to deploy a Llama 2 (7B) model to a real-time endpoint on SageMaker.
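As a rough illustration of the workflow shown in the video, here is a minimal sketch using the SageMaker Python SDK with JumpStart to stand up a Llama 2 (7B) real-time endpoint and send one request. The model ID, instance type, and payload shape are assumptions based on the public JumpStart catalog; follow the sample linked below for the exact, tested steps.

```python
# Minimal sketch (SageMaker Python SDK + JumpStart): deploy Llama 2 (7B)
# to a real-time endpoint and run a single inference request.
# model_id, instance_type, and the request payload are assumptions;
# see the linked sample notebook for the authoritative version.
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="meta-textgeneration-llama-2-7b")

# Deploying Llama 2 requires accepting Meta's EULA.
predictor = model.deploy(
    accept_eula=True,
    instance_type="ml.g5.2xlarge",  # assumed GPU instance for the 7B model
)

# Real-time inference against the deployed endpoint.
response = predictor.predict({
    "inputs": "What is Amazon SageMaker?",
    "parameters": {"max_new_tokens": 64, "temperature": 0.6},
})
print(response)

# Clean up the endpoint when finished to stop incurring charges.
predictor.delete_model()
predictor.delete_endpoint()
```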

Follow along with this sample: github.com/aws-samples/sagemak...

Learn more at: go.aws/3Vt466O

Subscribe:
More AWS videos: go.aws/3m5yEMW
More AWS events videos: go.aws/3ZHq4BK

Do you have technical AWS questions?
Ask the community of experts on AWS re:Post: go.aws/3lPaoPb

ABOUT AWS
Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. Millions of customers — including the fastest-growing startups, largest enterprises, and leading government agencies — are using AWS to lower costs, become more agile, and innovate faster.

#AWS #AmazonWebServices #CloudComputing