2 Answers
Hello Nikos,
If possible, please try building a custom image from the official base image, with the size reduced further, for deployment to serverless inference. You can exclude lines L90-L116 of the base image to reduce the size, then use the resulting custom image to deploy the serverless inference endpoint. The steps for building the image are here.
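Once the custom image is pushed to ECR, a serverless endpoint differs from a real-time one only in the endpoint configuration: the production variant carries a `ServerlessConfig` block instead of `InstanceType`/`InitialInstanceCount`. Below is a minimal sketch of the three request payloads you would pass to boto3's `create_model`, `create_endpoint_config`, and `create_endpoint`; all names, ARNs, and S3 paths are placeholders, not values from this thread:

```python
# Sketch of the boto3 request payloads for a serverless endpoint.
# All identifiers below are placeholders -- substitute your own.
IMAGE_URI = "<account>.dkr.ecr.<region>.amazonaws.com/custom-sklearn:latest"
ROLE_ARN = "arn:aws:iam::<account>:role/SageMakerExecutionRole"

model_request = {
    "ModelName": "sklearn-serverless-model",
    "PrimaryContainer": {
        "Image": IMAGE_URI,
        "ModelDataUrl": "s3://<bucket>/model.tar.gz",
    },
    "ExecutionRoleArn": ROLE_ARN,
}

endpoint_config_request = {
    "EndpointConfigName": "sklearn-serverless-config",
    "ProductionVariants": [
        {
            "VariantName": "AllTraffic",
            "ModelName": "sklearn-serverless-model",
            # ServerlessConfig replaces InstanceType/InitialInstanceCount.
            "ServerlessConfig": {
                "MemorySizeInMB": 2048,  # 1024-6144, in 1 GB increments
                "MaxConcurrency": 5,
            },
        }
    ],
}

endpoint_request = {
    "EndpointName": "sklearn-serverless-endpoint",
    "EndpointConfigName": "sklearn-serverless-config",
}

# With a boto3 client you would then call, in order:
# sm = boto3.client("sagemaker")
# sm.create_model(**model_request)
# sm.create_endpoint_config(**endpoint_config_request)
# sm.create_endpoint(**endpoint_request)
```

With the SageMaker Python SDK the equivalent is passing a `ServerlessInferenceConfig(memory_size_in_mb=..., max_concurrency=...)` to `model.deploy(...)` instead of an instance type.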
answered 3 days ago
Hi,
See part 5 of https://tutorialsdojo.com/train-and-deploy-a-scikit-learn-model-in-amazon-sagemaker/
It explains how to deploy a model trained with scikit-learn on AWS SageMaker.
Best,
Didier
Thank you, but this doesn't answer my question. I specifically asked about a serverless endpoint; I already know how to create a real-time endpoint as described in the link you provided.
Thanks, I will try it when I find some time. I didn't know about mlio. Will I lose any functionality during inference if I omit those lines?
Well, I tried to build locally but ran into many errors. Shouldn't there be a ready-to-use sklearn image for serverless deployment? Or, failing that, shouldn't the documentation mention that serverless endpoints are not supported with sagemaker-scikit-learn-container?