How do you serve a pretrained Hugging Face model in SageMaker without a custom script?


I have been working from an example where I write my own custom script (sample below), overriding predict_fn and the other handler functions. I have also tested my model without the custom script / inference.py. When we don't provide a custom script, how is the model called? What does the default predict_fn look like when I don't override it?

inference.py


import json

import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")


def model_fn(model_dir):
    # model_dir is the directory where SageMaker unpacks model.tar.gz.
    # (The original reassigned it to './pytorch_model.bin', which points at a
    # weights file rather than a directory and would break from_pretrained.)
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_dir).to(device).eval()
    return {"model": model, "tokenizer": tokenizer}


def predict_fn(input_data, model_dict):
    text = input_data.pop("inputs")
    tokenizer = model_dict["tokenizer"]
    model = model_dict["model"]

    input_ids = tokenizer(text, truncation=True, return_tensors="pt").input_ids.to(device)
    # The original elided the rest ("...."); a typical seq2seq completion
    # (an assumption, not the asker's actual code) would be:
    with torch.no_grad():
        output_ids = model.generate(input_ids)
    return tokenizer.batch_decode(output_ids, skip_special_tokens=True)


def input_fn(request_body, request_content_type):
    # Deserialize the JSON request body into a dict like {"inputs": "..."}.
    return json.loads(request_body)
asked 2 years ago, 238 views
1 Answer

Usually you have to write your own inference code. If you don't provide an inference.py, the Hugging Face Inference Toolkit falls back to its default handlers: it loads the model into a transformers pipeline (selected by the HF_TASK environment variable), and the default predict_fn simply calls that pipeline on the deserialized input. You may want to check the documentation here: https://docs.aws.amazon.com/sagemaker/latest/dg/adapt-inference-container.html
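To make that concrete: the sketch below is a paraphrase of what the Hugging Face Inference Toolkit's defaults amount to, not its exact source. Without an inference.py, the toolkit builds a transformers pipeline for the task named in the HF_TASK environment variable, and the default predict_fn just calls that pipeline on the deserialized request.

import os
from transformers import pipeline

def default_model_fn(model_dir):
    # HF_TASK selects which pipeline to build, e.g. "text2text-generation".
    task = os.environ.get("HF_TASK")
    return pipeline(task=task, model=model_dir)

def default_predict_fn(input_data, pipe):
    # The deserialized JSON body, e.g. {"inputs": "...", "parameters": {...}},
    # is handed straight to the pipeline.
    inputs = input_data.pop("inputs")
    parameters = input_data.pop("parameters", None)
    return pipe(inputs, **parameters) if parameters else pipe(inputs)

Deploying without a custom script is then just a matter of pointing HuggingFaceModel at your model archive and setting HF_TASK. The S3 path, IAM role, and framework versions below are illustrative placeholders:

from sagemaker.huggingface import HuggingFaceModel

huggingface_model = HuggingFaceModel(
    model_data="s3://my-bucket/model.tar.gz",      # hypothetical S3 path
    role="arn:aws:iam::111122223333:role/MyRole",  # hypothetical IAM role
    transformers_version="4.26",
    pytorch_version="1.13",
    py_version="py39",
    env={"HF_TASK": "text2text-generation"},       # drives the default pipeline
)

predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.g4dn.xlarge",
)

print(predictor.predict({"inputs": "translate English to German: Hello"}))

The request format ({"inputs": ...}) is the same one the custom predict_fn above consumes, so switching between the default handlers and your own script doesn't change the client side.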

AWS
answered 2 years ago
