Hi,
If you are referring to the metrics mentioned on this page ( https://docs.aws.amazon.com/sagemaker/latest/dg/training-metrics.html ), they are stored in CloudWatch.
CloudWatch has a generic way to export its metrics to CSV: see https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/publish-amazon-cloudwatch-metrics-to-a-csv-file.html
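You can also pull the data points yourself with the AWS SDK. Here is a minimal sketch with boto3, assuming the job publishes a metric named `train:loss` under the `/aws/sagemaker/TrainingJobs` namespace with a `TrainingJobName` dimension; the job name is a placeholder, and you should check the exact metric names and dimensions in the CloudWatch console, since they depend on your algorithm's metric definitions:

```python
import csv
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
job_name = "my-training-job"  # placeholder: your actual training job name
now = datetime.now(timezone.utc)

# Fetch one averaged data point per minute for the last 24 hours
response = cloudwatch.get_metric_statistics(
    Namespace="/aws/sagemaker/TrainingJobs",  # assumption: verify in the CW console
    MetricName="train:loss",                  # assumption: depends on your metric definitions
    Dimensions=[{"Name": "TrainingJobName", "Value": job_name}],
    StartTime=now - timedelta(days=1),
    EndTime=now,
    Period=60,
    Statistics=["Average"],
)

# Sort by timestamp and write to CSV
datapoints = sorted(response["Datapoints"], key=lambda d: d["Timestamp"])
with open("train_loss.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "average"])
    for dp in datapoints:
        writer.writerow([dp["Timestamp"].isoformat(), dp["Average"]])
```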
Best,
Didier
If you are interested in baselining the model and comparing its performance against other models, you may want to consider using SageMaker Experiments, which lets you create, manage, analyze, and compare your machine learning experiments. SageMaker Experiments automatically tracks the inputs, parameters, configurations, and results of your iterations as runs. You can assign, group, and organize these runs into experiments. SageMaker Experiments is integrated with Amazon SageMaker Studio, providing a visual interface to browse your active and past experiments, compare runs on key performance metrics, and identify the best performing models.
https://docs.aws.amazon.com/sagemaker/latest/dg/experiments.html
It's specifically designed to address the use case you're describing, and the cost is driven by the volume of metrics that are ingested, stored, and queried.
https://aws.amazon.com/sagemaker/pricing/
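As a rough sketch of what logging a run looks like, assuming a recent version of the SageMaker Python SDK v2 (where the `Run` API from `sagemaker.experiments` is available); the experiment and run names, parameters, and metric values below are all placeholders:

```python
from sagemaker.experiments.run import Run

# Each "with" block records one run; runs sharing an experiment_name are
# grouped together for comparison in SageMaker Studio.
with Run(experiment_name="my-experiment", run_name="baseline-v1") as run:
    # Hyperparameters and other inputs for this iteration (placeholder values)
    run.log_parameter("learning_rate", 0.01)
    run.log_parameter("epochs", 10)

    # ... train and evaluate the model here ...

    # Key performance metric to compare across runs (placeholder value)
    run.log_metric(name="test:accuracy", value=0.93)
```

Runs logged this way appear under the experiment in SageMaker Studio, where you can compare them side by side on the metrics you logged.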
Cheers, David