Multi-model endpoints in SageMaker?


Is there a list documented somewhere of the containers that support multi-model endpoints?

Asked a year ago · 251 views
2 Answers

Maybe this list can help you:

https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html

First, choose your Region, then select your algorithm, and it will show you the Docker registry path.
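
If you prefer to look these registry paths up programmatically, the SageMaker Python SDK exposes them through `image_uris.retrieve`. A minimal sketch; the framework name, Region, and version below are just example values:

```python
from sagemaker import image_uris

# Look up the container registry path for a built-in algorithm / framework
# in a given Region (the values here are only examples).
uri = image_uris.retrieve(
    framework="xgboost",
    region="us-east-1",
    version="1.5-1",
)
print(uri)
```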

Expert
Answered a year ago

Hi,

For information about the algorithms, frameworks, and instance types that you can use with multi-model endpoints, see the official documentation. Note that multi-model endpoints support both CPU and GPU backed instances. You can always bring your own container if your framework is not already supported.
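
If it helps, here is a minimal sketch of deploying a multi-model endpoint with the SageMaker Python SDK's `MultiDataModel` class; the S3 prefix, role ARN, and image URI below are placeholders you would replace with your own:

```python
import sagemaker
from sagemaker.multidatamodel import MultiDataModel

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/MySageMakerRole"  # placeholder role ARN

# All model archives (model-a.tar.gz, model-b.tar.gz, ...) live under this prefix.
model_data_prefix = "s3://my-bucket/mme-models/"          # placeholder prefix

mme = MultiDataModel(
    name="my-multi-model",
    model_data_prefix=model_data_prefix,
    image_uri="<an MME-capable container image URI>",     # e.g. from image_uris.retrieve()
    role=role,
    sagemaker_session=session,
)

predictor = mme.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",  # CPU-backed; GPU instance types also work with supported containers
)
```

Once deployed, individual models are addressed per request with the `target_model` argument (or `TargetModel` in the runtime API).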

Hope it helps.

AWS
Answered a year ago
  • @Tina_Qian - thanks, I went through the documentation, but one thing is not clear to me: are we compressing two trained models into one zipped/tar file?

  • Actually, no. Each model is packaged as its own model.tar.gz under a common S3 prefix; you don't combine them into a single archive. SageMaker manages the lifecycle of models hosted on a multi-model endpoint in the container's memory. Instead of downloading all of the models from the Amazon S3 bucket to the container when you create the endpoint, SageMaker dynamically loads and caches them when you invoke them. When SageMaker receives an invocation request for a particular model, it first routes the request to an instance behind the endpoint, then downloads the model from the S3 bucket to that instance's storage volume. Finally, it loads the model into the container's memory (CPU or GPU, depending on whether the instances are CPU or GPU backed). If the model is already loaded in the container's memory, invocation is faster because SageMaker doesn't need to download and load it again. See the sketch after this thread for how the models are laid out in S3 and invoked.

    The pictures in https://aws.amazon.com/cn/blogs/machine-learning/save-on-inference-costs-by-using-amazon-sagemaker-multi-model-endpoints/ may be easier to follow and understand.

    Hope it helps.
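
To make the loading behavior concrete, here is a minimal sketch of invoking one specific model on an existing multi-model endpoint with boto3; the bucket, prefix, and endpoint name are placeholders:

```python
import json

import boto3

runtime = boto3.client("sagemaker-runtime")

# Each model is its own archive under the shared prefix, for example:
#   s3://my-bucket/mme-models/model-a.tar.gz
#   s3://my-bucket/mme-models/model-b.tar.gz
# (bucket, prefix, and endpoint name are placeholders)

response = runtime.invoke_endpoint(
    EndpointName="my-multi-model-endpoint",
    ContentType="application/json",
    TargetModel="model-a.tar.gz",  # path relative to the endpoint's S3 model prefix
    Body=json.dumps({"instances": [[1.0, 2.0, 3.0]]}),
)
print(response["Body"].read().decode("utf-8"))
```

On the first invocation of model-a.tar.gz, SageMaker downloads and loads it; subsequent invocations hit the cached copy in the container's memory.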
