Using Hyperparameter Tuning Jobs over Training and Preprocessing


Some data science teams want to tune the hyperparameters of their preprocessing jobs alongside ML model training jobs.

Does AWS have a recommended approach to achieve this using SageMaker hyperparameter tuning?

Asked 3 years ago · 484 views
1 Answer
Accepted Answer

It depends on the dataset and on the question the ML model is meant to answer.

Yes, it is feasible to do HPO over preprocessing. However, to run an HPO job you must define a specific objective to achieve, i.e. a metric to maximize or minimize over the whole tuning process. So the key question is whether the preprocessing step has such a measurable target. If it does, the team should be able to leverage Hyperparameter Tuning Jobs.

Here is how HPO works in SageMaker. First, you package the training code in a container; for each training job, SageMaker passes that job's hyperparameter values to the container via /opt/ml/input/config/hyperparameters.json. When you launch a tuning job with HyperparameterTuner, SageMaker runs a series of training jobs with different hyperparameter combinations, extracts the objective metric from each job, and returns the model with the best score.
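As a concrete illustration, here is a minimal sketch of that flow using the SageMaker Python SDK. The image URI, IAM role, S3 path, hyperparameter names, and the metric name/regex are all placeholders, not values from the question:

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.tuner import (
    ContinuousParameter,
    HyperparameterTuner,
    IntegerParameter,
)

session = sagemaker.Session()

# Training container; SageMaker writes the hyperparameters chosen for each
# job to /opt/ml/input/config/hyperparameters.json inside this container.
estimator = Estimator(
    image_uri="<your-training-image-uri>",
    role="<your-sagemaker-execution-role>",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:accuracy",
    objective_type="Maximize",
    hyperparameter_ranges={
        "learning_rate": ContinuousParameter(1e-4, 1e-1),
        "num_layers": IntegerParameter(2, 8),
    },
    # Regex that pulls the objective metric out of each training job's logs.
    metric_definitions=[
        {"Name": "validation:accuracy", "Regex": "validation-accuracy: ([0-9\\.]+)"}
    ],
    max_jobs=20,
    max_parallel_jobs=4,
)

# Launches the tuning job; the best model is the one with the best score.
tuner.fit({"train": "s3://<your-bucket>/train/"})
```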

Option 1: if there is a clearly defined target for preprocessing to achieve, you can also do HPO separately on the data preprocessing, by packaging the preprocessing function and its outputs in a container and using HyperparameterTuner's fit to tune the preprocessing, as sketched below.
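A sketch of Option 1, assuming the preprocessing container prints a line such as "preproc-score: 0.87" that the tuner's regex can capture; the image URI, role, hyperparameter name, and S3 path are illustrative:

```python
from sagemaker.estimator import Estimator
from sagemaker.tuner import ContinuousParameter, HyperparameterTuner

# Container that runs only the preprocessing code and logs its own quality
# metric (e.g. a downstream validation score with fixed model settings, or
# a data-quality measure).
preproc_estimator = Estimator(
    image_uri="<your-preprocessing-image-uri>",
    role="<your-sagemaker-execution-role>",
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

preproc_tuner = HyperparameterTuner(
    estimator=preproc_estimator,
    objective_metric_name="preproc-score",
    objective_type="Maximize",
    hyperparameter_ranges={
        "outlier_threshold": ContinuousParameter(1.0, 5.0),
    },
    metric_definitions=[
        {"Name": "preproc-score", "Regex": "preproc-score: ([0-9\\.]+)"}
    ],
    max_jobs=10,
    max_parallel_jobs=2,
)

preproc_tuner.fit({"raw": "s3://<your-bucket>/raw/"})
```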

Option 2: include the preprocessing and training code together in a single SageMaker training job, so one tuning job can search over preprocessing and model hyperparameters at the same time. The trade-off is that you can't use separate infrastructure for preprocessing and training; see the sketch below.
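A minimal sketch of an entry-point script for Option 2. The preprocessing and training bodies are toy stand-ins on synthetic data, and the argument names are illustrative; the SageMaker-specific pieces are the hyperparameters arriving as script arguments, the SM_CHANNEL_TRAIN environment variable, and printing the metric line for the tuner's regex:

```python
import argparse
import os

import numpy as np


def preprocess(X, outlier_threshold):
    # Toy preprocessing step: clip values beyond N standard deviations.
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    return np.clip(X, mu - outlier_threshold * sigma, mu + outlier_threshold * sigma)


def train_and_validate(X, y, learning_rate):
    # Toy "training": logistic regression fit with a few gradient steps.
    w = np.zeros(X.shape[1])
    for _ in range(100):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= learning_rate * X.T @ (p - y) / len(y)
    return ((X @ w > 0).astype(int) == y).mean()


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--outlier_threshold", type=float, default=3.0)  # preprocessing HP
    parser.add_argument("--learning_rate", type=float, default=0.1)      # model HP
    parser.add_argument("--train", default=os.environ.get("SM_CHANNEL_TRAIN", "."))
    args = parser.parse_args()

    # Synthetic placeholder data; a real script would load from args.train.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)

    X = preprocess(X, args.outlier_threshold)
    val_acc = train_and_validate(X, y, args.learning_rate)

    # Emit the objective metric so the tuner's metric regex can capture it.
    print(f"validation-accuracy: {val_acc:.4f}")


if __name__ == "__main__":
    main()
```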

So it depends on what exactly they are looking for, but they can likely use SageMaker HPO.

Answered 3 years ago
