I trained my model in a SageMaker notebook running in us-west-2, using TensorFlow v2.13.1. I compressed the trained model into a tar.gz archive and saved it to S3, but when I try to create a model under SageMaker/Models/CreateModel, it asks for a Container Input Option.
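For context, this is roughly how I exported, packaged, and uploaded the model (the bucket name and paths are placeholders, and I'm assuming the TensorFlow Serving container wants the numbered-version directory layout):

```python
import tarfile
import boto3
import tensorflow as tf  # v2.13.1 in my notebook

# Stand-in for my actual regression model (trained earlier in the notebook).
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# Export as a SavedModel under a numbered version directory, since (as far as
# I can tell) TensorFlow Serving expects "1/saved_model.pb", "1/variables/", etc.
model.save("export/1")

with tarfile.open("model.tar.gz", "w:gz") as tar:
    tar.add("export/1", arcname="1")

# Placeholder bucket/key for my actual S3 location.
boto3.client("s3").upload_file("model.tar.gz", "my-bucket", "models/regression/model.tar.gz")
```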
Under Create model, I am stuck on the Container Input Option. I selected "Provide model artifacts and inference image location", and it asks for the location of the inference code image. I assume this should point to a prebuilt AWS Docker image that supports TensorFlow and would serve my saved model artifacts, but I have no idea where to find that image location, or whether this is even the correct option for me.
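From what I can tell, the SageMaker Python SDK can look up the prebuilt image URI for me. This is my guess at the call (I haven't confirmed every argument, and the instance type is just an example):

```python
import sagemaker

# Look up the prebuilt TensorFlow inference image for my region and version.
image_uri = sagemaker.image_uris.retrieve(
    framework="tensorflow",
    region="us-west-2",
    version="2.13",
    image_scope="inference",
    instance_type="ml.m5.large",  # example; only used to pick a CPU vs GPU image
)
print(image_uri)  # I'd paste this into "Location of inference code image"
```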
I can't seem to find a tutorial that explains the end-to-end process for deploying a TensorFlow model. There are lots of generative AI examples, but they all use prebuilt models. I trained my own regression model in SageMaker notebooks, but I don't see how to define a model from it.
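Here's what I've pieced together so far for defining the model in code rather than the console, assuming the SDK's TensorFlowModel wraps my artifacts with the matching serving image (the role ARN and S3 path are placeholders):

```python
from sagemaker.tensorflow import TensorFlowModel

model = TensorFlowModel(
    model_data="s3://my-bucket/models/regression/model.tar.gz",            # my artifact
    role="arn:aws:iam::123456789012:role/MySageMakerExecutionRole",        # placeholder
    framework_version="2.13",
)
```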
My final goal is to create a REST API via API Gateway (easy enough) that invokes a Lambda (also easy), but the part about deploying the model so my Lambda can invoke it escapes me. My use case maps to a serverless inference architecture.
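If I understand the serverless option correctly, the deployment step would look something like this sketch, building on the TensorFlowModel above (the memory and concurrency values are just guesses):

```python
from sagemaker.serverless import ServerlessInferenceConfig

predictor = model.deploy(
    serverless_inference_config=ServerlessInferenceConfig(
        memory_size_in_mb=2048,  # guessed value
        max_concurrency=5,       # guessed value
    ),
    endpoint_name="tf-regression-serverless",  # placeholder name
)
```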
Update: I have created a model. Next question: how do I invoke this model from a Lambda to test it?
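This is the Lambda handler sketch I have in mind, assuming a TensorFlow Serving endpoint that accepts and returns JSON (the endpoint name is a placeholder and I haven't tested this yet):

```python
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

def lambda_handler(event, context):
    # Expecting event["instances"] to be a list of feature vectors.
    payload = json.dumps({"instances": event["instances"]})
    response = runtime.invoke_endpoint(
        EndpointName="tf-regression-serverless",  # placeholder endpoint name
        ContentType="application/json",
        Body=payload,
    )
    result = json.loads(response["Body"].read())  # e.g. {"predictions": [...]}
    return {"statusCode": 200, "body": json.dumps(result)}
```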