Run inference with a PyTorch model (.pth) stored in an S3 bucket from an app deployed on an EC2 instance
Hi all,
I am building a web application to showcase my NLP projects. I have several PyTorch models stored in an S3 bucket. Can I run inference on them from my EC2 instance? Note that the app on EC2 is built with the Streamlit framework.
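One common approach is to download the checkpoint from S3 at startup and load it with `torch.load`. Below is a minimal sketch, assuming boto3 and torch are installed on the instance, that AWS credentials are available (e.g. via the EC2 instance's IAM role), and that the bucket and key names (`my-nlp-models`, `sentiment/model.pth`) are placeholders for your own:

```python
import io

def load_model_from_s3(bucket: str, key: str, device: str = "cpu"):
    """Download a .pth object from S3 into memory and load it with torch.

    Sketch only: assumes boto3/torch are installed and AWS credentials
    are provided by the EC2 instance profile or ~/.aws/credentials.
    """
    import boto3  # lazy imports: only needed when the function is called
    import torch

    s3 = boto3.client("s3")
    buf = io.BytesIO()
    s3.download_fileobj(bucket, key, buf)  # stream the object into memory
    buf.seek(0)
    # map_location lets a GPU-trained checkpoint load on a CPU-only instance
    return torch.load(buf, map_location=device)

if __name__ == "__main__":
    # hypothetical bucket/key for illustration
    model = load_model_from_s3("my-nlp-models", "sentiment/model.pth")
```

In a Streamlit app you would typically wrap this call in `@st.cache_resource` so the model is downloaded and loaded once per process rather than on every rerun.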