How to serve a pretrained Hugging Face model in SageMaker without a custom script?


I have been working from an example where I write my own custom script (sample below), overriding predict_fn and the other handler functions. I have also tested my model without the custom script (no inference.py). When we don't provide a custom script, how is the model called? What does the default predict_fn look like when I don't override it?

inference.py


import json
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def model_fn(model_dir):
    # model_dir is the directory SageMaker extracts model.tar.gz into;
    # load the tokenizer and weights from there. Don't overwrite it with
    # a path to pytorch_model.bin, since from_pretrained expects a directory.
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_dir).to(device).eval()
    return {'model': model, 'tokenizer': tokenizer}

def predict_fn(input_data, model_dict):
    inputs = input_data.pop('inputs')
    tokenizer = model_dict['tokenizer']
    model = model_dict['model']

    input_ids = tokenizer(inputs, truncation=True, return_tensors="pt").input_ids.to(device)

    # Generate and decode (completing the step elided in the original post).
    with torch.no_grad():
        output_ids = model.generate(input_ids)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

def input_fn(request_body, request_content_type):
    return json.loads(request_body)
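
For reference, a custom script like this is typically packaged under code/ in the model archive and attached via entry_point. A minimal sketch of the deploy call; the S3 path, role, versions, and instance type below are all placeholders to substitute:

from sagemaker.huggingface import HuggingFaceModel

# Sketch only: every value below is a placeholder.
huggingface_model = HuggingFaceModel(
    model_data="s3://my-bucket/model.tar.gz",  # archive with weights + code/inference.py
    role="arn:aws:iam::111122223333:role/MySageMakerRole",
    entry_point="inference.py",
    source_dir="code",
    transformers_version="4.26",
    pytorch_version="1.13",
    py_version="py39",
)
predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.g4dn.xlarge",
)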
Asked 2 years ago · 234 views
1 Answer

Usually you have to write your own inference code. The default predict_fn does no task-specific processing; it simply runs the loaded model on the deserialized input and returns the raw output. You may want to check the documentation here: https://docs.aws.amazon.com/sagemaker/latest/dg/adapt-inference-container.html
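
For the Hugging Face DLC specifically, the default handler comes from the sagemaker-huggingface-inference-toolkit: when no inference.py is provided, it builds a transformers pipeline for the task named in the HF_TASK environment variable (or inferred from the model config) and calls it on the deserialized JSON payload. A rough sketch of that default behavior, paraphrased rather than the verbatim toolkit source:

import os
from transformers import pipeline

def model_fn(model_dir):
    # Build a pipeline for the task named in HF_TASK,
    # e.g. "text2text-generation", loading weights from model_dir.
    return pipeline(task=os.environ["HF_TASK"], model=model_dir)

def predict_fn(data, pipe):
    # The JSON body is expected to look like {"inputs": ..., "parameters": {...}}.
    inputs = data.pop("inputs", data)
    parameters = data.pop("parameters", None)
    return pipe(inputs, **parameters) if parameters else pipe(inputs)

So you can serve a pretrained Hub model with no custom script at all by pointing the container at the model through environment variables. A minimal sketch; the model id, task, role, versions, and instance type are placeholders:

from sagemaker.huggingface import HuggingFaceModel

hub_env = {
    "HF_MODEL_ID": "google/flan-t5-base",  # placeholder: any Hub model id
    "HF_TASK": "text2text-generation",     # tells the default handler which pipeline to build
}

huggingface_model = HuggingFaceModel(
    env=hub_env,
    role="arn:aws:iam::111122223333:role/MySageMakerRole",  # placeholder role
    transformers_version="4.26",
    pytorch_version="1.13",
    py_version="py39",
)

predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.g4dn.xlarge",
)

print(predictor.predict({"inputs": "translate English to German: Hello, world"}))

With HF_MODEL_ID set, the container downloads the model from the Hub at startup, so no model_data archive or inference.py is needed.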

AWS
Answered 2 years ago
