How do I pass inference parameters to a Hugging Face model hosted in SageMaker?


I created a model resource in SageMaker. The model is a tar file, downloaded from Hugging Face and fine-tuned, and I deployed it based on the documentation provided (sample code below). The code sample passes the HF_TASK inference parameter, which I assume is Hugging Face specific. Is it possible to pass other parameters such as padding, truncation, and max_length, e.g. padding: True, truncation: True, max_length: 512?

How do I pass these values?

import sagemaker
from sagemaker.huggingface import HuggingFaceModel

hub = {
    'HF_TASK': 'text2text-generation'
}
role = sagemaker.get_execution_role()

huggingface_model = HuggingFaceModel(transformers_version='4.6.1', env=hub, ...)

predictor = huggingface_model.deploy(...)
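
What I am hoping for is something like the call below. This is only a sketch: I am assuming the Hugging Face Inference Toolkit forwards a "parameters" object from the request JSON to the pipeline as keyword arguments, which I have not verified against the SageMaker documentation:

# Hypothetical predict call against the endpoint deployed above.
# The keys under "parameters" mirror the tokenizer/pipeline kwargs I want to set.
response = predictor.predict({
    "inputs": "summarize: the quick brown fox jumped over the lazy dog",
    "parameters": {
        "padding": True,
        "truncation": True,
        "max_length": 512
    }
})
print(response)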
  • If you are using a pretrained model, you may not be able to tweak parameters such as padding. I am not sure why you would want to do that at inference time. (One possible workaround is sketched below.)
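
One possible direction, sketched here purely as an assumption (the model_fn/predict_fn hook names follow the SageMaker Hugging Face Inference Toolkit's override mechanism; the file layout and values are not from the original post), is to bundle a custom code/inference.py inside the model archive and hard-code the tokenizer settings there:

# code/inference.py -- hypothetical override packaged inside model.tar.gz
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

def model_fn(model_dir):
    # Load the fine-tuned model and tokenizer from the extracted archive
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_dir)
    return model, tokenizer

def predict_fn(data, model_and_tokenizer):
    model, tokenizer = model_and_tokenizer
    # Apply the fixed padding/truncation/max_length the question asks about
    inputs = tokenizer(
        data["inputs"],
        padding=True,
        truncation=True,
        max_length=512,
        return_tensors="pt",
    )
    output_ids = model.generate(**inputs)
    texts = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
    return {"generated_text": texts}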

Asked 2 years ago, 98 views
No answers
