1 Answer
Hi,
For datasets of this size, Amazon SageMaker Data Wrangler is well suited to the preparation step. In https://aws.amazon.com/blogs/machine-learning/process-larger-and-wider-datasets-with-amazon-sagemaker-data-wrangler/ it is benchmarked on a dataset of around 100 GB, with 80 million rows and 300 columns.
For training large models with Amazon SageMaker, see this video: https://www.youtube.com/watch?v=XKLIhIeDSCY
Also, regarding the training of your model, this post helps you choose the best data source for your training job: https://aws.amazon.com/blogs/machine-learning/choose-the-best-data-source-for-your-amazon-sagemaker-training-job/
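As a rough illustration of where that data-source choice lands, here is a sketch (not from the answer itself) of a `sagemaker:CreateTrainingJob` request payload built in plain Python. The input mode (`File`, `Pipe`, or `FastFile`) and the `S3DataSource` block are the knobs the linked post discusses; the bucket names, role ARN, and image URI below are placeholders, not real resources.

```python
# Sketch: building a CreateTrainingJob request payload. The image URI, role
# ARN, and S3 paths are placeholders. "FastFile" streams S3 objects on
# demand instead of downloading the whole dataset up front, which matters
# for ~100 GB inputs.

def training_job_request(job_name, s3_train_uri, input_mode="FastFile"):
    """Return a request dict for sagemaker:CreateTrainingJob."""
    return {
        "TrainingJobName": job_name,
        "AlgorithmSpecification": {
            # Placeholder training image URI:
            "TrainingImage": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest",
            "TrainingInputMode": input_mode,  # "File", "Pipe", or "FastFile"
        },
        "RoleArn": "arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
        "InputDataConfig": [
            {
                "ChannelName": "train",
                "DataSource": {
                    "S3DataSource": {
                        "S3DataType": "S3Prefix",
                        "S3Uri": s3_train_uri,
                        "S3DataDistributionType": "FullyReplicated",
                    }
                },
            }
        ],
        "OutputDataConfig": {"S3OutputPath": "s3://my-bucket/output/"},  # placeholder
        "ResourceConfig": {
            "InstanceType": "ml.m5.4xlarge",
            "InstanceCount": 1,
            "VolumeSizeInGB": 50,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 86400},
    }

req = training_job_request("large-dataset-job", "s3://my-bucket/train/")
print(req["AlgorithmSpecification"]["TrainingInputMode"])  # FastFile
```

In practice you would pass a dict like this to `boto3`'s SageMaker client (`create_training_job(**req)`), or set the equivalent `input_mode` argument on an `Estimator` in the SageMaker Python SDK.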
Best,
Didier