1 Answer
Hello. Based on the description, I assume you are going to use SageMaker notebooks to develop the inference code and then export it as a script.
If your model is lightweight, you can cache it inside your server application and run predictions there directly:
import anvil.server

model = load_model()  # load once at server start-up and keep it cached in memory

@anvil.server.callable
def predict(foo):
    return model.predict(foo)  # reuse the cached model on every call
Alternatively, you can create a SageMaker endpoint, invoke it from the same callable function shown above, and return the prediction to the client as the response to the WebSocket invocation.