1 Answer
Hello. Based on the description, I assume you plan to use SageMaker notebooks to develop the inference code and then export it as a script.

If your model is lightweight, you can cache it inside your server application and call predictions on it directly:
model = load_model()  # load once at startup and keep cached in the server process

@anvil.server.callable
def predict(foo):
    return model.predict(foo)
Alternatively, you can create a SageMaker endpoint, call that endpoint from inside the same server function, and pass the result back to the client as the response to the WebSocket invocation.
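A minimal sketch of that second approach, using boto3's SageMaker runtime client. The endpoint name `my-model-endpoint` and the JSON payload shape (`{"instances": [...]}` in, `{"predictions": [...]}` out) are assumptions; adjust them to match the serializer and deserializer your endpoint actually uses.

```python
import json

ENDPOINT_NAME = "my-model-endpoint"  # hypothetical endpoint name

def build_payload(features):
    # Serialize one row of features into the JSON body the endpoint expects
    # (assumed format -- match your model's input schema).
    return json.dumps({"instances": [features]})

def parse_prediction(body_bytes):
    # Extract the first prediction from the endpoint's JSON response
    # (assumed format -- match your model's output schema).
    return json.loads(body_bytes)["predictions"][0]

def predict_via_endpoint(features):
    # boto3 is imported lazily so the helpers above can be used and tested
    # without AWS credentials configured.
    import boto3

    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=build_payload(features),
    )
    return parse_prediction(response["Body"].read())
```

You would then call `predict_via_endpoint(foo)` from the `@anvil.server.callable` function instead of the cached in-process model, which keeps heavy model weights off the web server at the cost of one extra network hop per prediction.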