If you want to create a batch transform job from a model in the registry, you can follow the documentation you already linked to, but instead of calling model.deploy(...) you need to:
- create a Transformer object
- initiate a transform job
For example:
```python
transformer = model.transformer(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge"
)
# transform() starts the job; it does not return a job object
transformer.transform("s3://my-bucket/batch-transform-input")
```
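A side note on outputs: when the job completes, batch transform writes one result object per input object to the transformer's output path, appending `.out` to the input object's name. A minimal sketch of that naming convention (the helper `expected_output_uri` is hypothetical, not part of the SageMaker SDK):

```python
# Hypothetical helper: predicts where a batch transform job writes the
# result for a given input object, given the job's S3 output path.
# Batch transform names each output after the input object plus ".out".
def expected_output_uri(output_path: str, input_key: str) -> str:
    object_name = input_key.split("/")[-1]
    return f"{output_path.rstrip('/')}/{object_name}.out"

print(expected_output_uri(
    "s3://my-bucket/batch-transform-output",
    "batch-transform-input/data.csv",
))
```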
You might also find this code example useful as a starting point: using a model registry name as input, it creates a pipeline whose first step is a LambdaStep that loads the model, then creates a Model object, and finally creates the Transformer.
Hope it helps.
Thanks all for the feedback.
I figured out a couple of ways to do this. One option is, after approving the model as indicated here, to instantiate a ModelPackage object as shown here; then, instead of calling model.deploy, we can create a transformer object from it. Below is a snippet of the code.
```python
# Step 1: Get the latest model
model_package_group_name = "my_model_package"
models_list = sm.list_model_packages(ModelPackageGroupName=model_package_group_name)
model_package_arn = models_list['ModelPackageSummaryList'][0]['ModelPackageArn']

# Step 2: Approve the model
model_package_update_input_dict = {
    "ModelPackageArn": model_package_arn,
    "ModelApprovalStatus": "Approved"
}
model_package_update_response = sm.update_model_package(**model_package_update_input_dict)

# Step 3: Create a model package
model = ModelPackage(
    role=role,
    model_package_arn=model_package_arn,
    sagemaker_session=sagemaker_session
)

# Step 4: Create a transformer object
transformer = model.transformer(
    instance_type=transform_instance_type,
    instance_count=transform_instance_count,
    strategy=strategy,
    accept=accept_type,
    output_path=f"s3://{bucket}/output_location",
)

# Step 5: Finally, transform
transformer.transform(data=data_location)
```
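One detail worth noting: Step 1 above takes the first entry of ModelPackageSummaryList without checking its approval status, and which package comes first depends on the SortBy/SortOrder parameters you pass to list_model_packages. A minimal sketch of selecting the newest approved package instead, using a mocked response shaped like the boto3 output (the sample entries and the helper `latest_approved_arn` are made up for illustration):

```python
from datetime import datetime

# Mocked response shaped like boto3's list_model_packages output;
# the entries themselves are made-up sample data.
response = {
    "ModelPackageSummaryList": [
        {
            "ModelPackageArn": "arn:aws:sagemaker:us-east-1:111122223333:model-package/my_model_package/1",
            "ModelApprovalStatus": "Rejected",
            "CreationTime": datetime(2023, 1, 1),
        },
        {
            "ModelPackageArn": "arn:aws:sagemaker:us-east-1:111122223333:model-package/my_model_package/2",
            "ModelApprovalStatus": "Approved",
            "CreationTime": datetime(2023, 2, 1),
        },
    ]
}

def latest_approved_arn(resp):
    """Return the ARN of the most recently created approved package, or None."""
    approved = [
        p for p in resp["ModelPackageSummaryList"]
        if p.get("ModelApprovalStatus") == "Approved"
    ]
    if not approved:
        return None
    return max(approved, key=lambda p: p["CreationTime"])["ModelPackageArn"]

print(latest_approved_arn(response))
```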
Here is the documentation on how to use a Model Registry model in an inference pipeline: https://docs.aws.amazon.com/sagemaker/latest/dg/build-and-manage-steps.html#step-type-register-model
Here is the documentation for batch transform in an inference pipeline: https://docs.aws.amazon.com/sagemaker/latest/dg/inference-pipeline-batch.html
Are these what you are looking for?
Hello, thank you for the snippet. I currently have a similar case, but the difference is that I already trained my model using a training script of my own, and I suppose I have to use my own inference.py script as well. When I tried your approach I got this error: AttributeError: 'NoneType' object has no attribute 'startswith'. Did you use your own inference script later on? Or is there a way to add my own inference script to your method? Thank you!