How to keep SageMaker inference warm


Calling SageMaker inference frequently (3-5 calls per minute) reduces runtime duration from ~200 ms to ~50 ms, so there appears to be warm-up behaviour similar to Lambda's. Do you have any suggestions on how to keep SageMaker inference consistently fast?

asked a year ago · 1,043 views
1 Answer

You may need to check where this acceleration comes from before deciding on a warm-up process. In CloudWatch, the endpoint publishes two relevant metrics: ModelLatency and OverheadLatency.
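
A quick way to compare the two is to query CloudWatch directly. Here is a minimal sketch using boto3; the endpoint name `my-endpoint` and variant name `AllTraffic` are placeholders for your own.

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

def get_latency(metric_name: str) -> list:
    """Fetch average latency (microseconds) for a SageMaker endpoint metric."""
    end = datetime.now(timezone.utc)
    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/SageMaker",
        MetricName=metric_name,  # "ModelLatency" or "OverheadLatency"
        Dimensions=[
            {"Name": "EndpointName", "Value": "my-endpoint"},  # placeholder
            {"Name": "VariantName", "Value": "AllTraffic"},
        ],
        StartTime=end - timedelta(hours=1),
        EndTime=end,
        Period=60,
        Statistics=["Average"],
    )
    return sorted(response["Datapoints"], key=lambda d: d["Timestamp"])

for name in ("ModelLatency", "OverheadLatency"):
    for point in get_latency(name):
        print(name, point["Timestamp"], point["Average"])
```

If the drop with frequent calls shows up mostly in OverheadLatency, the warm-up effect is on the SageMaker side; if it is mostly in ModelLatency, it is inside your container.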

A SageMaker endpoint has a front-end router that maintains caches for metadata and credentials. If requests are frequent enough, the cache is retained and automatically renewed, which reduces OverheadLatency.

If you see a big drop in ModelLatency with warm-up requests, your algorithm container may be configured to retain some temporary data longer.

Normally, you could schedule a Lambda function that invokes the endpoint, combined with CloudWatch Alarms target-tracking the InvocationsPerInstance metric. This ensures you always maintain a certain invocation rate when idle, and those synthetic requests can settle down when real requests pick up. A sketch of such a Lambda follows.
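
Here is a minimal sketch of a warm-up Lambda, assuming it is triggered on a schedule (for example, an EventBridge rule firing every minute). The endpoint name and payload below are placeholders; use a payload your model can process cheaply.

```python
import json

import boto3

runtime = boto3.client("sagemaker-runtime")

ENDPOINT_NAME = "my-endpoint"  # placeholder

def handler(event, context):
    # Send a lightweight synthetic request to keep the router caches
    # and the model container warm.
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=json.dumps({"instances": [[0.0]]}),  # placeholder payload
    )
    status = response["ResponseMetadata"]["HTTPStatusCode"]
    return {"statusCode": status}
```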

The issue with warm-up traffic is that it interferes with the endpoint's normal auto-scaling: the endpoint may not scale down properly.
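
For reference, this is roughly what the normal target-tracking auto-scaling configuration looks like; synthetic warm-up requests count toward the InvocationsPerInstance target, so they can keep the endpoint from scaling in. A sketch, again assuming an endpoint named `my-endpoint` with variant `AllTraffic` and example capacity values:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")
resource_id = "endpoint/my-endpoint/variant/AllTraffic"  # placeholder

# Register the variant's instance count as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=1,
    MaxCapacity=4,
)

# Target-track invocations per instance; warm-up traffic inflates this
# metric and can prevent scale-in.
autoscaling.put_scaling_policy(
    PolicyName="invocations-target-tracking",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 100.0,  # example target: invocations per instance
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
    },
)
```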

AWS
answered a year ago
