SageMaker Batch Transform for a TensorFlow model


I have trained a model on our own data that takes two embedding vectors as input and outputs a probability score. So far I have hosted the model as a real-time endpoint and queried it periodically from Lambda functions. However, the dataset has grown rapidly (around 2.2 million rows now) and I need to set the model up as a batch transform job. I can't find any good examples or details about how to do this for my particular case. My input data has four columns: user_id, user_embedding, post_id, post_embedding, in .parquet or .json format. The model takes the user_embedding and post_embedding as input and outputs the probability score. Can someone please point me in the right direction or tell me if there's a better solution?

The model is a TensorFlow deep learning model whose artefacts are saved in an S3 bucket. The input data is also in an S3 bucket.
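For context, here is a minimal sketch of how rows like these could be serialised to the JSON Lines layout that batch transform can split line by line. The input names `user_embedding` and `post_embedding` are assumptions; they would have to match the model's actual serving signature:

```python
import json

# Hypothetical sample rows matching the four-column layout described above.
rows = [
    {"user_id": "u1", "user_embedding": [0.1, 0.2], "post_id": "p1", "post_embedding": [0.3, 0.4]},
    {"user_id": "u2", "user_embedding": [0.5, 0.6], "post_id": "p2", "post_embedding": [0.7, 0.8]},
]

def to_jsonlines(rows):
    """Emit one JSON request per line. Only the two embedding inputs go
    into the payload; the ID columns are kept out of the model input."""
    lines = []
    for r in rows:
        instance = {
            "user_embedding": r["user_embedding"],
            "post_embedding": r["post_embedding"],
        }
        lines.append(json.dumps(instance))
    return "\n".join(lines)

print(to_jsonlines(rows))
```

With `SplitType=Line` on the transform job, each line becomes one record, so the IDs can be joined back to the output scores by row order afterwards.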

Sarath
asked a year ago · 483 views
1 Answer
Accepted Answer

Hi Sarath,

  1. Create the model in the SageMaker console or with the CreateModel API. Specify the right inference container image for the model's framework, along with the S3 location of the model artefacts (including the inference code).
  2. Create a batch transform job in the SageMaker console or with the CreateTransformJob API. Parallelise the predictions across multiple instances and use the MultiRecord batch strategy to speed up batch inference at this dataset volume.
  3. Start the transform job.
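The steps above can be sketched as CreateModel and CreateTransformJob requests. This is only an illustration: the image URI, bucket names, role ARN, and instance choices below are placeholders you would replace with your own values, and the actual API calls are left commented out:

```python
# import boto3  # uncomment to actually submit the requests

# Step 1: CreateModel — TF Serving inference image matching the model's
# framework version, plus the S3 location of the model artefacts.
create_model_request = {
    "ModelName": "user-post-scorer",                      # placeholder name
    "PrimaryContainer": {
        "Image": "<account>.dkr.ecr.<region>.amazonaws.com/tensorflow-inference:<version>",
        "ModelDataUrl": "s3://my-bucket/model/model.tar.gz",
    },
    "ExecutionRoleArn": "arn:aws:iam::123456789012:role/SageMakerExecutionRole",
}

# Step 2: CreateTransformJob — parallelise across instances and batch
# multiple records per request with the MultiRecord strategy.
create_transform_job_request = {
    "TransformJobName": "user-post-scores-batch",
    "ModelName": "user-post-scorer",
    "BatchStrategy": "MultiRecord",
    "MaxPayloadInMB": 6,
    "TransformInput": {
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://my-bucket/batch-input/",
            }
        },
        "ContentType": "application/jsonlines",
        "SplitType": "Line",        # one record per input line
    },
    "TransformOutput": {
        "S3OutputPath": "s3://my-bucket/batch-output/",
        "AssembleWith": "Line",     # one prediction per output line
    },
    "TransformResources": {"InstanceType": "ml.m5.xlarge", "InstanceCount": 4},
}

# Step 3: submit both requests (with real ARNs/URIs filled in):
# sm = boto3.client("sagemaker")
# sm.create_model(**create_model_request)
# sm.create_transform_job(**create_transform_job_request)
```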

Check an example here.

AWS
answered a year ago

