Yes, per the thread on this open issue: https://github.com/aws/sagemaker-pytorch-inference-toolkit/issues/86

In the issue thread they note that the new PyTorch 1.6 image requires the model file to be named model.pth; the relevant code where this default is set is here: https://github.com/aws/sagemaker-pytorch-inference-toolkit/blob/9a6869e/src/sagemaker_pytorch_serving_container/torchserve.py#L121

Also noted in the thread: users have successfully adapted their code to TorchServe on PyTorch 1.6 by saving their model in a file named model.pth. Once renamed, they were still able to use custom inference scripts to load the model by defining a custom model_fn: https://github.com/data-science-on-aws/workshop/blob/374329adf15bf1810bfc4a9e73501ee5d3b4e0f5/09_deploy/wip/pytorch/code/inference.py
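The workaround above amounts to two steps: make sure the packaged artifact is named model.pth, and (optionally) keep a custom model_fn in code/inference.py to control loading. A minimal sketch, with illustrative helper and directory names that are not from the issue thread:

```python
import os
import tempfile

# The PyTorch 1.6 inference image expects the weights file inside the
# model directory to be named model.pth (see torchserve.py#L121 linked above).
EXPECTED_NAME = "model.pth"

def ensure_model_pth(model_dir):
    """Rename a model.pt artifact to model.pth if needed (illustrative helper)."""
    src = os.path.join(model_dir, "model.pt")
    dst = os.path.join(model_dir, EXPECTED_NAME)
    if os.path.exists(src) and not os.path.exists(dst):
        os.rename(src, dst)
    return dst

# Demo with a placeholder file standing in for real weights:
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "model.pt"), "wb").close()
    path = ensure_model_pth(d)
    print(os.path.basename(path))  # model.pth

# In code/inference.py, a custom model_fn can still control how the
# renamed file is loaded (MyModel is a placeholder for your architecture):
#
# def model_fn(model_dir):
#     import torch
#     model = MyModel()
#     state = torch.load(os.path.join(model_dir, "model.pth"),
#                        map_location="cpu")
#     model.load_state_dict(state)
#     return model.eval()
```

The rename matters only for the container's default loading path; once the file name matches, the custom model_fn behaves as before.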
But here, they are assuming it is model.pt. Can you please recheck it...