Running inference with a PyTorch model (.pth) stored in an S3 bucket from an app deployed on an EC2 instance

Hi all,

I am building a web application to showcase my NLP projects. I have several PyTorch models stored in an S3 bucket. Can I run inference with them on my EC2 instance? Note that the EC2 application uses the Streamlit framework.
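Because the app is Streamlit, I also want the model loaded once per server process rather than on every rerun; my understanding is that st.cache_resource is the idiomatic way to do that. A minimal sketch of what I have in mind, using a placeholder local path until the S3 part works:

import streamlit as st
import torch

@st.cache_resource  # cache across reruns: load the model only once
def load_model(path):
    model = torch.load(path, map_location='cpu')
    model.eval()
    return model

model = load_model('/tmp/mod_v1.pth')  # placeholder path; the real file should come from S3
st.write('Model loaded:', type(model).__name__)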

The code below does not work:

import boto3

s3 = boto3.resource('s3')
s3_object = s3.Bucket('nlp-gpt-models').Object('mod_v1.pth').get()
model_path = s3_object

Please help: how can I communicate with the S3 bucket that holds the model files? The EC2 instance already has an IAM role set up that grants it access to S3.
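From the boto3 documentation I understand that get() returns a response dict rather than a file path, so I suspect the fix looks something like the sketch below (assuming the instance's IAM role grants s3:GetObject on the bucket; bucket and key names as above), but I would appreciate confirmation:

import io

import boto3
import torch

# The EC2 instance profile (IAM role) is assumed to grant s3:GetObject,
# so no explicit credentials are passed here.
s3 = boto3.client('s3')
response = s3.get_object(Bucket='nlp-gpt-models', Key='mod_v1.pth')

# get_object returns a dict; the file bytes are in the 'Body' stream.
buffer = io.BytesIO(response['Body'].read())

# torch.load accepts a file-like object; map_location='cpu' so this also
# works on a CPU-only instance.
model = torch.load(buffer, map_location='cpu')
model.eval()

Alternatively, s3.download_file('nlp-gpt-models', 'mod_v1.pth', '/tmp/mod_v1.pth') would save the object to local disk first, and torch.load('/tmp/mod_v1.pth', map_location='cpu') would load it from there. Is one approach preferable for large models?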

Thanks, Basem
