2 Answers
Hi Alex,
I just tried to replicate this using the following example:
https://github.com/aws/amazon-sagemaker-examples/blob/main/sagemaker-python-sdk/pytorch_batch_inference/sagemaker_batch_inference_torchserve.ipynb
I did not encounter the issue. I tested framework versions "1.9.0" and "1.13.1" with the instance type "ml.g4dn.xlarge". Can you try a more recent framework version, starting with 1.13.1?
If you are still facing the issue, please share the code example you are following on your end.
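For reference, a minimal batch transform setup along the lines of the linked notebook looks like the sketch below. The S3 paths, role, and entry point script are placeholders you would replace with your own; the framework and instance settings are the ones I tested with.

```python
from sagemaker.pytorch import PyTorchModel

# Hypothetical model artifact and IAM role -- substitute your own values.
model = PyTorchModel(
    model_data="s3://your-bucket/model/model.tar.gz",
    role="arn:aws:iam::123456789012:role/YourSageMakerRole",
    entry_point="inference.py",          # your custom inference handler
    framework_version="1.13.1",          # version that worked in my test
    py_version="py39",
)

# Create a batch transformer on the instance type I tested with.
transformer = model.transformer(
    instance_count=1,
    instance_type="ml.g4dn.xlarge",
    output_path="s3://your-bucket/batch-output/",
)

# Run the batch transform job against your input data in S3.
transformer.transform(
    data="s3://your-bucket/batch-input/",
    content_type="application/json",
)
transformer.wait()
```

This is a deployment configuration sketch, not a runnable standalone script; it assumes valid AWS credentials, an existing model artifact, and an IAM role with SageMaker permissions.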
answered a year ago
I am also encountering this issue with py_version="py38" and framework_version="1.12" on an ml.p2.xlarge instance. Any solution would be appreciated.
answered 4 months ago