Now you can reduce machine learning inference costs by up to 75% by using Amazon Elastic Inference (Amazon EI). This accelerated compute service for Amazon SageMaker and Amazon EC2 lets you add hardware acceleration to your machine learning inference in fractional sizes of a full GPU instance, so you can avoid over-provisioning GPU compute capacity. In this video, you'll learn about the service's benefits and key features, and see a brief demonstration.
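To make the "fractional acceleration" idea concrete, here is a minimal sketch of attaching an Elastic Inference accelerator to a CPU-backed SageMaker endpoint using the SageMaker Python SDK. The IAM role ARN, S3 model path, and framework version are placeholders, not values from the video.

```python
# Sketch: deploying a model on a CPU instance with an Amazon EI accelerator
# attached, instead of provisioning a full GPU instance.
from sagemaker.tensorflow import TensorFlowModel

role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder IAM role

model = TensorFlowModel(
    model_data="s3://my-bucket/model.tar.gz",  # placeholder model artifact
    role=role,
    framework_version="1.15",  # assumed EI-compatible TensorFlow version
)

# accelerator_type attaches a fractional GPU accelerator (Amazon EI) to the
# CPU instance hosting the endpoint.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
    accelerator_type="ml.eia2.medium",
)
```

Because the accelerator is sized independently of the host instance, you pay for only the inference acceleration you need rather than for an entire GPU instance.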