How to keep SageMaker inference warm

Calling SageMaker inference frequently (3–5 calls per minute) reduces the runtime duration from ~200 ms to ~50 ms, so there seems to be warm-up behaviour similar to Lambda's. Do you have any suggestions for keeping SageMaker inference consistently fast?

Asked 1 year ago · 961 views
1 Answer

To decide how to design the warm-up process, you first need to check where this speed-up comes from. In CloudWatch, SageMaker endpoints publish two relevant metrics: ModelLatency and OverheadLatency.
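
As a starting point, here is a minimal sketch of pulling both metrics with boto3 to see which one shrinks when you call the endpoint frequently. The endpoint name "my-endpoint" and variant "AllTraffic" are placeholders for your own values:

```python
import datetime

import boto3

cloudwatch = boto3.client("cloudwatch")

def latency_series(metric_name, endpoint_name, variant_name):
    """Average of a SageMaker latency metric (in microseconds),
    in 5-minute buckets over the last hour."""
    now = datetime.datetime.utcnow()
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/SageMaker",
        MetricName=metric_name,
        Dimensions=[
            {"Name": "EndpointName", "Value": endpoint_name},
            {"Name": "VariantName", "Value": variant_name},
        ],
        StartTime=now - datetime.timedelta(hours=1),
        EndTime=now,
        Period=300,
        Statistics=["Average"],
    )
    points = sorted(resp["Datapoints"], key=lambda p: p["Timestamp"])
    return [(p["Timestamp"], p["Average"]) for p in points]

# Whichever metric drops once you call the endpoint frequently is the
# one your warm-up traffic is actually helping.
for metric in ("ModelLatency", "OverheadLatency"):
    print(metric, latency_series(metric, "my-endpoint", "AllTraffic"))
```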

A SageMaker endpoint has a front-end router that maintains caches for metadata and credentials. If requests are frequent enough, the cache is retained and automatically renewed, which reduces OverheadLatency.

If instead you see a big drop in ModelLatency with warm-up requests, it may mean your algorithm container has been configured to retain some temporary data (such as a lazily loaded model) only while requests keep arriving.
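
To illustrate that kind of in-container caching, here is a minimal sketch that keeps the deserialized model at module level so repeat requests skip disk I/O. The joblib loader, the SM_MODEL_DIR variable, and the model.joblib filename are assumptions for the example, not something the answer prescribes:

```python
import os

import joblib  # assumed loader; substitute your framework's own

_MODEL = None  # module-level cache, lives as long as the worker process

def get_model():
    """Load the model once per worker; later requests reuse it."""
    global _MODEL
    if _MODEL is None:
        # Only the first request after a worker (re)start pays this cost.
        model_dir = os.environ.get("SM_MODEL_DIR", "/opt/ml/model")
        _MODEL = joblib.load(os.path.join(model_dir, "model.joblib"))
    return _MODEL

def predict(payload):
    return get_model().predict(payload)
```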

Normally, you could schedule a Lambda function that sends synthetic invocations, gated on the InvocationsPerInstance metric via CloudWatch Alarms. This ensures you always maintain a certain invocation rate when the endpoint is idle, and those fake requests settle down when real requests pick up; a sketch of such a keep-warm function follows below.
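
Here is a minimal sketch of that keep-warm Lambda, assuming it is triggered on a schedule (e.g. an EventBridge rule with rate(1 minute)). The endpoint name, variant name, threshold, and dummy payload are all placeholders:

```python
import datetime
import json
import os

import boto3

cloudwatch = boto3.client("cloudwatch")
runtime = boto3.client("sagemaker-runtime")

ENDPOINT_NAME = os.environ.get("ENDPOINT_NAME", "my-endpoint")
VARIANT_NAME = os.environ.get("VARIANT_NAME", "AllTraffic")
MIN_INVOCATIONS = 3  # minimum real calls per 5 minutes to stay warm

def recent_invocations():
    """Total endpoint invocations over the last 5 minutes."""
    now = datetime.datetime.utcnow()
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/SageMaker",
        MetricName="Invocations",
        Dimensions=[
            {"Name": "EndpointName", "Value": ENDPOINT_NAME},
            {"Name": "VariantName", "Value": VARIANT_NAME},
        ],
        StartTime=now - datetime.timedelta(minutes=5),
        EndTime=now,
        Period=300,
        Statistics=["Sum"],
    )
    return sum(p["Sum"] for p in resp["Datapoints"])

def handler(event, context):
    # Back off when real traffic is already keeping the endpoint warm.
    if recent_invocations() >= MIN_INVOCATIONS:
        return {"warmed": False}
    runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=json.dumps({"instances": [[0.0]]}),  # cheap dummy payload
    )["Body"].read()
    return {"warmed": True}
```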

The issue with warm-up traffic is that it interferes with the normal auto-scaling of endpoints: the synthetic requests can prevent the endpoint from scaling down properly.

AWS
Answered 1 year ago
