https://github.com/aws/amazon-sagemaker-examples/blob/main/advanced_functionality/multi_model_sklearn_home_value/sklearn_multi_model_endpoint_home_value.ipynb
https://sagemaker-examples.readthedocs.io/en/latest/advanced_functionality/kmeans_bring_your_own_model/kmeans_bring_your_own_model.html
The notebooks above show how to seed a pre-existing model into an already-built container. The same approach can be replicated with other Amazon SageMaker algorithms, as well as the TensorFlow and MXNet containers. Although this is certainly an easy way to bring your own model, it does not provide the flexibility of bringing your own scoring container. Please refer to the other example notebooks, which show how to Dockerize your own training and scoring containers; those can be adapted to your use case.
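The "seeding" step boils down to packaging your already-trained model artifact as the `model.tar.gz` archive that SageMaker downloads from the `ModelDataUrl` S3 location. A minimal sketch, assuming a pickle-serializable model object and an artifact filename of `model.pkl` (the exact filename the container looks for is framework-specific):

```python
import io
import pickle
import tarfile

def package_model(model, artifact_name="model.pkl", tar_path="model.tar.gz"):
    """Serialize a trained model and pack it into the model.tar.gz
    layout that SageMaker expects at the S3 ModelDataUrl.
    `artifact_name` is an assumption here; check the filename your
    target container expects (e.g. model.joblib for some images)."""
    payload = pickle.dumps(model)
    with tarfile.open(tar_path, "w:gz") as tar:
        info = tarfile.TarInfo(name=artifact_name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))
    return tar_path

# Stand-in for a real pre-trained estimator:
path = package_model({"coef": [0.5, 1.2]})
```

You would then upload the archive to S3 and reference it when creating the SageMaker model, exactly as the notebooks do.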
In general, it is recommended to bring your own Docker container along with your custom model. The SageMaker Inference Toolkit is a library that bootstraps Multi Model Server (MMS) in a way that is compatible with SageMaker multi-model endpoints, while still allowing you to tune important performance parameters, such as the number of workers per model. The inference container in this example uses the Inference Toolkit to start MMS, as can be seen in the container/dockerd-entrypoint.py file.
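The key behavior a multi-model endpoint adds on top of a single-model container is lazy, on-demand loading: a model is fetched and loaded the first time it is invoked, cached for later invocations, and evicted when memory pressure requires it. A minimal sketch of that pattern (the `loader` callable and the eviction policy are simplifications for illustration, not the actual MMS internals):

```python
import threading

class ModelCache:
    """Sketch of multi-model serving: models are loaded lazily on first
    invocation and cached, roughly the way a SageMaker multi-model
    endpoint loads artifacts on demand. `loader` is a hypothetical
    callable mapping a model name to a ready-to-use model."""

    def __init__(self, loader, max_models=2):
        self._loader = loader
        self._max = max_models
        self._models = {}          # model name -> loaded model
        self._lock = threading.Lock()

    def invoke(self, name, payload):
        with self._lock:
            model = self._models.get(name)
            if model is None:
                if len(self._models) >= self._max:
                    # Evict the oldest entry to stay within the budget
                    # (MMS uses its own memory-based eviction).
                    self._models.pop(next(iter(self._models)))
                model = self._loader(name)
                self._models[name] = model
        return model(payload)

# Each "model" here is just a function tagged with its name.
cache = ModelCache(loader=lambda name: (lambda x: f"{name}:{x}"))
print(cache.invoke("model-a", 1))  # -> model-a:1
```

In the real container, the Inference Toolkit wires your handler service into MMS, which manages this per-model lifecycle and the worker pool for you.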
To dive deeper, I would recommend opening a ticket with AWS Premium Support so that the CloudWatch logs and the specific endpoint resource can be investigated. For security reasons, we cannot discuss account-specific issues in public posts, so please include your account details and CloudWatch logs in the support case.
Thank you