How to pass inference parameters to a Hugging Face model hosted in SageMaker?


I created a model resource in SageMaker. The model is a tar file, downloaded from Hugging Face and fine-tuned. Based on the documentation provided (sample code below), the sample passes the HF_TASK inference parameter, which I assume is Hugging Face specific. Is it possible to pass other parameters like padding, truncation, and max_length, e.g. padding=True, truncation=True, max_length=512?

How do I pass these values?

import sagemaker
from sagemaker.huggingface import HuggingFaceModel  # import needed for HuggingFaceModel

hub = {
    'HF_TASK': 'text2text-generation'
}
role = sagemaker.get_execution_role()

huggingface_model = HuggingFaceModel(
    transformers_version='4.6.1',
    env=hub,
    role=role,
    # ... remaining arguments elided in the original sample
)

predictor = huggingface_model.deploy(
    # ... deployment arguments elided in the original sample
)
  • If you are using a pretrained model you may not be able to tweak params such as padding. I am not sure why you want to do that while inferencing.
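A possible direction, hedged: with the SageMaker Hugging Face Inference Toolkit, request payloads may include an optional "parameters" object alongside "inputs", and those keys are forwarded to the underlying transformers pipeline call. Under that assumption, values like padding, truncation, and max_length would go in the request body rather than in the model's env variables. The input text below is made up for illustration; with a deployed endpoint the payload would be sent via predictor.predict(payload).

```python
import json

# Sketch of a request payload for a text2text-generation endpoint.
# The "parameters" keys are assumed to be forwarded to the pipeline;
# verify against the inference toolkit version actually deployed.
payload = {
    "inputs": "summarize: The quick brown fox jumps over the lazy dog.",
    "parameters": {
        "padding": True,
        "truncation": True,
        "max_length": 512,
    },
}

# With a live endpoint this would be: predictor.predict(payload)
# Here we only show that the payload serializes to valid JSON.
print(json.dumps(payload))
```

If the deployed container does not forward these keys, an alternative is a custom inference.py with your own input_fn/predict_fn, where you control tokenization explicitly.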

Asked 2 years ago · 95 views
No answers
