1 answer
One of the best ways to debug a custom inference script is to start with SageMaker "local mode". Once you are sure that your script works correctly, move over to hosting on a SageMaker endpoint. Here is an example to get started.
For a TF Serving model with a custom inference script, I would use local mode for testing as shown below:
from sagemaker.tensorflow.model import TensorFlowModel
from sagemaker.local import LocalSession

# model_data is the S3 URI (or local path) of the model artifact;
# sagemaker_role is the IAM execution role ARN.
tensorflow_serving_model = TensorFlowModel(
    model_data=model_data,
    role=sagemaker_role,
    framework_version="2.6",
    # Pass a LocalSession to run in local mode; swap in a regular
    # sagemaker.Session() when moving to a hosted endpoint.
    sagemaker_session=LocalSession(),
)
answered 2 years ago