ML Model Deployment Techniques using Amazon SageMaker Managed Deployment

Published April 3, 2019, 17:44
Learn more about AWS Innovate Online Conference at – amzn.to/2UqptXQ
Machine Learning can be very resource intensive, and a model cannot be deployed until it has been trained. At AWS, we are constantly working to make model training more efficient, faster, and cheaper. However, model inference is where the value of Machine Learning is delivered: it is where speech is recognized, text is translated, objects are recognized in video, manufacturing defects are found, and cars get driven. This session analyzes the common pain points of running Machine Learning and Deep Learning inference workloads, and explains how AWS is addressing these pain points as you add intelligence to your applications and scale these workloads.

Speaker: Atanu Roy, AI Specialist Solution Architect, AISPL
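As a rough illustration of the managed deployment flow the title refers to, below is a minimal sketch using the SageMaker Python SDK. It is not taken from the session; the role ARN, S3 artifact path, entry point script, framework versions, and instance type are all placeholders.

```python
# Minimal sketch of SageMaker managed deployment (assumed example, not from the talk).
# Placeholders: role ARN, S3 model artifact, inference.py handler, versions, instance type.
import sagemaker
from sagemaker.pytorch import PyTorchModel

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role

# Wrap a previously trained model artifact (model.tar.gz in S3).
model = PyTorchModel(
    model_data="s3://my-bucket/models/model.tar.gz",  # placeholder artifact path
    role=role,
    entry_point="inference.py",   # hypothetical handler defining model_fn/predict_fn
    framework_version="1.13",
    py_version="py39",
    sagemaker_session=session,
)

# Create a managed HTTPS endpoint; SageMaker provisions and manages the instances.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
)

# Invoke the endpoint with a sample payload, then delete it to stop incurring cost.
result = predictor.predict([[0.1, 0.2, 0.3]])
predictor.delete_endpoint()
```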