GPU provisioning time for SageMaker Async Inference and general GPU availability.


Background

I want to build an ML inference pipeline that uses SageMaker Asynchronous Inference. To reduce costs, I want to scale down all SageMaker Async Inference-related EC2 instances to zero when no jobs are waiting (for example, outside of business hours, or during working hours when there are no requests from my users), roughly as sketched below.
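For context, this is the scale-to-zero setup I have in mind: a minimal sketch using boto3 and Application Auto Scaling with a target-tracking policy on the async queue backlog. The endpoint name, variant name, instance counts, and target backlog value are placeholders for my setup, not fixed values.

```python
import boto3

# Placeholders for my endpoint -- replace with the real names.
endpoint_name = "my-async-endpoint"
variant_name = "AllTraffic"

autoscaling = boto3.client("application-autoscaling")
resource_id = f"endpoint/{endpoint_name}/variant/{variant_name}"

# Allow the async endpoint variant to scale down to zero instances.
autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=0,
    MaxCapacity=2,
)

# Scale on the queue backlog per instance, so GPU instances are only
# launched while requests are actually waiting.
autoscaling.put_scaling_policy(
    PolicyName="async-backlog-scaling",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 5.0,  # desired backlog size per instance (my assumption)
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateBacklogSizePerInstance",
            "Namespace": "AWS/SageMaker",
            "Dimensions": [{"Name": "EndpointName", "Value": endpoint_name}],
            "Statistic": "Average",
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 300,
    },
)
```

My questions below are about what happens after the endpoint has scaled to zero: how quickly a GPU instance comes back when a new request arrives.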

The questions

  1. On average, how long does it take for AWS SageMaker Asynchronous Inference to provision an EC2 instance with a GPU and have it ready to execute my ML inference tasks?
  2. What is the current availability of GPU machines on AWS? Is there any shortage?
Asked 2 months ago · 74 views

No answers
