SageMaker Inference Recommender - ModelLatency for a streaming response

I have an inference endpoint that returns an HTTP streaming response, and I would like to load test it.

Does ModelLatency in the Inference Recommender metrics refer to the time to receive the first chunk, or the time to receive all chunks?

cf. https://docs.aws.amazon.com/sagemaker/latest/dg/inference-recommender-interpret-results.html
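For reference, both quantities can be measured client-side with boto3's `invoke_endpoint_with_response_stream`, which returns the response body as an event stream of `PayloadPart` events. A minimal sketch is below (the endpoint name and payload are placeholders for my actual setup); I would like to know which of these two timings ModelLatency corresponds to.

```python
import time
import boto3

smr = boto3.client("sagemaker-runtime")

start = time.perf_counter()
response = smr.invoke_endpoint_with_response_stream(
    EndpointName="my-streaming-endpoint",  # placeholder endpoint name
    ContentType="application/json",
    Body=b'{"inputs": "hello"}',           # placeholder payload
)

# The Body is an EventStream; each chunk arrives as a PayloadPart event.
first_chunk_latency = None
for event in response["Body"]:
    if "PayloadPart" in event:
        if first_chunk_latency is None:
            # Time until the first chunk is received
            first_chunk_latency = time.perf_counter() - start

# Time until the stream is fully consumed (all chunks received)
total_latency = time.perf_counter() - start

print(f"time to first chunk: {first_chunk_latency:.3f}s")
print(f"time to receive all chunks: {total_latency:.3f}s")
```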

Gabriel
Asked 6 months ago · 54 views

No answers
