SageMaker Pipelines - Batch Transform job using generated predictions as input for the model
Hi all! So, we're trying to implement a very simple SageMaker Pipeline with three steps:
- ETL: for now it only runs a simple query
- Batch transform: uses the ETL's result and generates predictions with a batch transform job
- Report: generates an HTML report
The thing is, when we run the batch transform job alone in the Pipeline, everything runs OK. But when we run all the steps together, the batch transform job fails. From the logs we can see that the job takes the dataset generated in the ETL step, produces the predictions, and saves them correctly to S3 (this is where we would expect the job to stop), but it then resends those predictions to the endpoint as if they were new input. The step then fails because the model receives an array with a single column, which doesn't match the number of features it was trained with.
There's not much info out there on this, and SageMaker is painfully hard to debug. Has anyone experienced anything like this?
Our model and transformer code:
model = XGBoostModel(
    model_data=f"s3://{BUCKET}/{MODEL_ARTIFACTS_PATH}/artifacts.gzip",
    role=get_execution_role(),
    entry_point="predict.py",
    framework_version="1.3-1",
)

transformer = model.transformer(
    instance_count=1,
    instance_type="ml.m5.large",
    output_path=f"s3://{BUCKET}/{PREDICTIONS_PATH}/",
    accept="text/csv",
)

step = TransformStep(
    name="Batch",
    transformer=transformer,
    inputs=TransformInput(
        data=etl_step.properties.ProcessingOutputConfig.Outputs[
            "dataset"
        ].S3Output.S3Uri,
        content_type="text/csv",
        split_type="Line",
    ),
    depends_on=[etl_step],
)
And our inference script:
from io import StringIO

import pandas as pd


def input_fn(request_body, content_type):
    """Deserialize the incoming CSV request body into a numpy array."""
    return pd.read_csv(StringIO(request_body), header=None).values


def predict_fn(input_obj, model):
    """Take the result of input_fn and generate predictions."""
    return model.predict_proba(input_obj)[:, 1]


def output_fn(predictions, content_type):
    """Serialize the predictions as a single line of comma-separated values."""
    return ",".join(str(pred) for pred in predictions)
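As a side note, these three hooks can be sanity-checked locally by chaining them the way the serving container would (CSV in, CSV out). A minimal sketch, where StubModel is a hypothetical stand-in for the trained XGBoost model:

import numpy as np


class StubModel:
    """Hypothetical stand-in for the trained model: always predicts 0.7."""

    def predict_proba(self, X):
        n = X.shape[0]
        # Two-column output, mimicking binary-classification probabilities.
        return np.column_stack([np.full(n, 0.3), np.full(n, 0.7)])


payload = "1.0,2.0,3.0\n4.0,5.0,6.0"  # two rows, three features
arr = input_fn(payload, "text/csv")
preds = predict_fn(arr, StubModel())
print(output_fn(preds, "text/csv"))  # -> "0.7,0.7"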
Hi,
The issue you describe can happen if the prediction files are written to the same S3 location as the input files, which triggers one more round of prediction on the freshly generated output.
Can you check that the
etl_step.properties.ProcessingOutputConfig.Outputs[
"dataset"
].S3Output.S3Uri
and
f"s3://{BUCKET}/{PREDICTIONS_PATH}/"
point to different paths in your S3 bucket?
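For example, you can pin both locations to clearly distinct prefixes and fail fast if one is nested inside the other. A minimal sketch, assuming your existing BUCKET variable (the ETL_OUTPUT_PATH and PREDICTIONS_PATH values here are hypothetical):

# Hypothetical prefixes; adjust to your bucket layout.
ETL_OUTPUT_PATH = "pipeline/etl-output"    # where the ETL step writes the dataset
PREDICTIONS_PATH = "pipeline/predictions"  # where the transform job writes results

etl_output_uri = f"s3://{BUCKET}/{ETL_OUTPUT_PATH}/"
predictions_uri = f"s3://{BUCKET}/{PREDICTIONS_PATH}/"

# If either prefix contains the other, the transform job can pick up
# its own freshly written predictions as new input.
assert not etl_output_uri.startswith(predictions_uri)
assert not predictions_uri.startswith(etl_output_uri)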
Did this work?
Thank you
Hey! Thanks for the answer. Yes, I've tried that and had no success at all. Still the same error.