1 Answer
For notebook instances it's mostly trial and error, at least for now. Once your model is ready to be deployed, there is the SageMaker Inference Recommender, which can run automated load tests and give you a recommendation on instance size (a rough sketch of starting such a job follows).
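A minimal sketch of kicking off a "Default" Inference Recommender job with boto3, assuming the model has already been registered as a model package in the SageMaker Model Registry. The job name, role ARN, and model package ARN are placeholders, not values from this answer.

```python
import boto3

sm_client = boto3.client("sagemaker")

# A "Default" job runs a standard set of load tests and returns
# recommended instance types for real-time inference.
sm_client.create_inference_recommendations_job(
    JobName="my-model-recommendation-job",  # hypothetical job name
    JobType="Default",
    RoleArn="arn:aws:iam::111122223333:role/SageMakerExecutionRole",  # placeholder
    InputConfig={
        # Assumes a model package already exists in the Model Registry.
        "ModelPackageVersionArn": (
            "arn:aws:sagemaker:us-east-1:111122223333:model-package/my-model/1"
        ),
    },
)

# When the job finishes, retrieve the recommendations with
# sm_client.describe_inference_recommendations_job(JobName="my-model-recommendation-job")
```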
It's hard to give a recommendation for the notebook instance itself because you might test with a 100 MB dataset today but switch to a 500 GB dataset tomorrow, at which point the recommendation is no longer valid.
You might want to experiment with a smaller dataset sampled from the original large dataset; once you are confident in the model training code, use distributed training to run it on the complete dataset. A sketch of that workflow follows.
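A rough sketch of the workflow described above: iterate on a small sample first, then scale out with a distributed SageMaker training job instead of sizing up the notebook instance. The S3 paths, entry-point script, role, and instance choices are all assumptions for illustration.

```python
import pandas as pd
from sagemaker.pytorch import PyTorch

# 1) Sample a small fraction of the big dataset for fast experimentation.
#    (Reading/writing S3 paths with pandas requires the s3fs package.)
df = pd.read_csv("s3://my-bucket/full-dataset.csv")          # placeholder location
sample = df.sample(frac=0.01, random_state=42)               # ~1% sample
sample.to_csv("s3://my-bucket/sample-dataset.csv", index=False)

# 2) Once the training code works on the sample, launch a distributed
#    training job on the complete dataset.
estimator = PyTorch(
    entry_point="train.py",                                  # hypothetical training script
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",  # placeholder
    framework_version="2.1",
    py_version="py310",
    instance_count=2,                                        # scale out across instances
    instance_type="ml.g5.2xlarge",
    distribution={"torch_distributed": {"enabled": True}},   # PyTorch DDP via torchrun
)
estimator.fit({"train": "s3://my-bucket/full-dataset/"})
```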
Answered 2 years ago