How do I serve a pretrained model from Hugging Face on SageMaker without a custom script?


I have been working from an example where I write my own custom script (sample below) and override predict_fn and the other handler functions. I have also tested my model without the custom script / inference.py. When we don't provide a custom script, how does the model get called? What does the default predict_fn look like when I don't override it?

inference.py

import json
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def model_fn(model_dir):
    # SageMaker passes the directory where model.tar.gz was extracted;
    # load both the tokenizer and the model from it.
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_dir).to(device).eval()

    return {'model': model, 'tokenizer': tokenizer}


def predict_fn(input_data, model_dict):
    text = input_data.pop('inputs')

    tokenizer = model_dict['tokenizer']
    model = model_dict['model']

    input_ids = tokenizer(text, truncation=True, return_tensors="pt").input_ids.to(device)
    # ... (generation was elided in the original; an illustrative completion:)
    with torch.no_grad():
        output_ids = model.generate(input_ids)
    return tokenizer.batch_decode(output_ids, skip_special_tokens=True)


def input_fn(request_body, request_content_type):
    return json.loads(request_body)
Asked 2 years ago · Viewed 229 times
1 Answer

Usually you have to write your own inference code. The default predict_fn simply calls the model on the deserialized input. You may want to check the documentation here: https://docs.aws.amazon.com/sagemaker/latest/dg/adapt-inference-container.html
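To answer the question more directly: when you deploy with the Hugging Face Deep Learning Container and provide no entry point, the container's built-in handler (from the open-source sagemaker-huggingface-inference-toolkit) takes over. As a rough, simplified sketch of its behavior, not its verbatim source, it loads a transformers pipeline for the task named in the HF_TASK environment variable and calls that pipeline in place of a user-supplied predict_fn:

import os
from transformers import pipeline

def default_model_fn(model_dir):
    # Build a pipeline for the task named in HF_TASK, loading weights
    # from the directory where SageMaker extracted model.tar.gz.
    return pipeline(task=os.environ["HF_TASK"], model=model_dir)

def default_predict_fn(data, hf_pipeline):
    # The decoded JSON request is expected to carry "inputs" and,
    # optionally, "parameters" for the pipeline call.
    inputs = data.pop("inputs")
    parameters = data.pop("parameters", None)
    return hf_pipeline(inputs, **parameters) if parameters else hf_pipeline(inputs)

So serving without a custom script is mostly a matter of setting the right environment variables. A minimal deployment sketch with the SageMaker Python SDK follows; the model ID, task, framework versions, and instance type are illustrative assumptions, not values from this thread:

import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()  # assumes you run inside SageMaker

huggingface_model = HuggingFaceModel(
    env={
        "HF_MODEL_ID": "google/flan-t5-small",   # hypothetical Hub model
        "HF_TASK": "text2text-generation",       # picks the default pipeline
    },
    role=role,
    transformers_version="4.26",  # example pins; use versions your DLC supports
    pytorch_version="1.13",
    py_version="py39",
)

predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.g4dn.xlarge",
)

# No input_fn/predict_fn/output_fn needed: the default handler decodes the
# JSON body, runs the pipeline, and encodes the result.
print(predictor.predict({"inputs": "Translate English to German: Hello, world."}))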

AWS
Answered 2 years ago
