Run inference with a PyTorch model (.pth) stored in an S3 bucket from an app deployed on an EC2 instance


Hi all,

I am building a web application to present my NLP projects. I have several PyTorch models stored in an S3 bucket. Can I run inference with them on my EC2 instance? Note that the app on EC2 uses the Streamlit framework.

The code below does not work:

import boto3

s3 = boto3.resource('s3')
s3_object = s3.Bucket('nlp-gpt-models').Object('mod_v1.pth').get()
model_path = s3_object
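For context, here is one minimal sketch of what I am trying to achieve: download the object to local disk and then load it with torch. This is an assumption on my part, not working code; the bucket and key names are the ones above, and it presumes the instance's IAM role grants s3:GetObject on that bucket.

```python
def load_model_from_s3(bucket="nlp-gpt-models", key="mod_v1.pth",
                       local_path="/tmp/mod_v1.pth"):
    """Download a .pth object from S3 to disk, then deserialize it with torch.

    Assumes the EC2 instance profile allows s3:GetObject on the bucket.
    """
    import boto3
    import torch

    s3 = boto3.client("s3")
    # download_file streams the object to a local file instead of
    # holding the whole model in memory
    s3.download_file(bucket, key, local_path)
    # torch.load reconstructs whatever was saved with torch.save;
    # map_location="cpu" avoids requiring a GPU on the instance
    return torch.load(local_path, map_location="cpu")
```

In a Streamlit app I would presumably want this to run only once rather than on every rerun (perhaps via Streamlit's caching), but I am not sure this is the right approach at all.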

Please help: how can I communicate with the S3 bucket that holds the model files? The IAM role is already set up and grants access between S3 and EC2.

Thanks,
Basem