AWS SageMaker Real-Time Inference: scaling down to 0 instances


Hello,

We would like to use AWS SageMaker to run our AI models, but the fact that real-time endpoints can't scale down to 0 instances is very problematic for us: we'll need to duplicate this infrastructure across our environments (development, staging, production) and across multiple regions, and that isn't viable cost-wise. Is there a specific reason why scaling to zero isn't possible, and can we expect this to change soon? What solutions would you suggest for this issue? We were thinking of the following:

  1. Using Kubernetes + Triton (similar to this blog post). The main issue is the complexity of the system.
  2. Using SageMaker Asynchronous Inference. The issue is that we're not sure of the impact on speed, latency, etc., and making the calls asynchronous adds complexity (see the sketch after this list).
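
For context, our rough understanding of what an asynchronous call would look like is sketched below with boto3 (the endpoint name and S3 paths are placeholders, not our real setup): the payload has to be staged in S3 first and the result lands in another S3 object, which is the extra plumbing we'd rather avoid.

```python
import boto3

runtime = boto3.client("sagemaker-runtime")

# Async endpoints take the request payload from S3 instead of the request body.
response = runtime.invoke_endpoint_async(
    EndpointName="my-async-endpoint",                      # placeholder name
    InputLocation="s3://my-bucket/requests/payload.json",  # placeholder path
    ContentType="application/json",
)

# The call returns immediately; the prediction is written to this S3 location
# (optionally with an SNS notification) once the model has processed the request.
print(response["OutputLocation"])
```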

Thank you!

Thomas
Asked 6 months ago · 510 views
1 Answer

Hi,

Why don't you try SageMaker Serverless Inference instead? It's serverless by nature, so you pay only when the endpoint is serving inference requests.

See https://docs.aws.amazon.com/sagemaker/latest/dg/serverless-endpoints.html

Wouldn't that be a better solution for your use case?
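
To illustrate, moving to a serverless endpoint mostly means replacing the instance settings in the endpoint configuration with a ServerlessConfig block. Here is a minimal boto3 sketch (the model, config, and endpoint names are placeholders for your own):

```python
import boto3

sm = boto3.client("sagemaker")

# Instead of InstanceType / InitialInstanceCount, a ServerlessConfig block lets
# SageMaker manage capacity per request, so an idle endpoint incurs no compute cost.
sm.create_endpoint_config(
    EndpointConfigName="my-serverless-config",   # placeholder
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": "my-model",             # an existing SageMaker model (placeholder)
            "ServerlessConfig": {
                "MemorySizeInMB": 2048,          # 1024-6144 MB, in 1 GB steps
                "MaxConcurrency": 5,             # max concurrent invocations
            },
        }
    ],
)

sm.create_endpoint(
    EndpointName="my-serverless-endpoint",       # placeholder
    EndpointConfigName="my-serverless-config",
)
```

You invoke it with the same invoke_endpoint call as a real-time endpoint; the main trade-off to test against your latency requirements is the cold start after the endpoint has been idle.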

Best,

Didier

AWS Expert
Answered 6 months ago
  • Hello Didier,

    Thank you for your answer. I have a few questions regarding SageMaker Serverless Inference:

    1. Does it support multiple models under one endpoint?
    2. Do the underlying instances have accelerated computing (GPU) capabilities?

    Thank you for your help!
