1 Answer
Hello. Based on your description, I assume you will use SageMaker notebooks to develop the inference code and then export it as a script.
If your model is lightweight, you can cache it inside your server application and run predictions there directly:
import anvil.server

model = load_model()  # load once at import time so every call reuses the cached model

@anvil.server.callable
def predict(foo):
    # pass the caller's input through to the cached model
    return model.predict(foo)
Alternatively, you can create a SageMaker endpoint, invoke it from the same server function, and return the result to the client as the response to the WebSocket invocation.
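A minimal sketch of that second approach, using the boto3 `sagemaker-runtime` client's `invoke_endpoint` call. The endpoint name and the JSON payload shape are placeholders here; adjust them to whatever your deployed model expects:

```python
import json

def invoke_sagemaker_endpoint(payload, endpoint_name="my-inference-endpoint"):
    """Send a JSON payload to a deployed SageMaker endpoint and decode the reply.

    endpoint_name is a hypothetical placeholder; use your real endpoint's name.
    """
    import boto3  # imported inside so the module loads without AWS dependencies

    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=json.dumps(payload),
    )
    # The endpoint's response body is a stream; read and parse it
    return json.loads(response["Body"].read())
```

You would call this helper from inside the `@anvil.server.callable` function shown above, returning its result to the client. Keeping the heavy model behind an endpoint lets the web server stay small, at the cost of one extra network hop per prediction.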