Using Hyperparameter Tuning Jobs over Training and Preprocessing

Some data science teams want to tune the hyperparameters of their preprocessing jobs alongside ML model training jobs.

Does AWS have a recommended approach to achieve this using SageMaker hyperparameter tuning?

Asked 3 years ago · Viewed 484 times
1 Answer
Accepted Answer

It depends on the dataset and the question the ML model is meant to answer.

Yes, it is feasible to include preprocessing in HPO. However, to run an HPO job, you must define a specific objective to optimize, i.e., a metric to maximize or minimize across the whole HPO process. So the key question is whether there is such a measurable target for the preprocessing step. If there is, the team should be able to leverage Hyperparameter Tuning Jobs.

Here is how HPO works in SageMaker. First, we package the training code in a container; for each trial, SageMaker passes the sampled hyperparameters to the container via /opt/ml/input/config/hyperparameters.json. When we launch a tuning job with the HyperparameterTuner, SageMaker runs a series of training jobs over the hyperparameter search space and returns the job whose model achieved the best objective metric score.
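
As a minimal sketch of what this looks like with the SageMaker Python SDK (the image URI, IAM role, S3 paths, metric name, and regex below are placeholders, not a prescribed setup):

```python
from sagemaker.estimator import Estimator
from sagemaker.tuner import (
    HyperparameterTuner,
    ContinuousParameter,
    IntegerParameter,
)

# Training container; SageMaker writes each trial's sampled hyperparameters
# to /opt/ml/input/config/hyperparameters.json inside this container.
estimator = Estimator(
    image_uri="<your-training-image-uri>",   # placeholder
    role="<your-sagemaker-execution-role>",  # placeholder
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:auc",  # must match a metric your job logs
    metric_definitions=[
        {"Name": "validation:auc", "Regex": "validation-auc: ([0-9.]+)"}
    ],
    hyperparameter_ranges={
        "learning_rate": ContinuousParameter(0.001, 0.1),
        "num_round": IntegerParameter(50, 500),
    },
    objective_type="Maximize",
    max_jobs=20,
    max_parallel_jobs=4,
)

tuner.fit({"train": "s3://<your-bucket>/train"})  # placeholder S3 path
print(tuner.best_training_job())  # name of the trial with the best score
```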

Option 1: if there is a clearly defined target for preprocessing to achieve, we can run HPO separately on the preprocessing step by packaging the preprocessing code and its outputs in a container and calling the HyperparameterTuner's fit method to tune the preprocessing hyperparameters, as sketched below.
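
A hedged sketch of Option 1, assuming the preprocessing container logs a numeric quality metric (say, a reconstruction error) that the tuner can parse; the image URI, metric name, and the n_components knob are all hypothetical:

```python
from sagemaker.estimator import Estimator
from sagemaker.tuner import HyperparameterTuner, IntegerParameter

# Hypothetical container that runs only the preprocessing step and prints a
# line like "reconstruction-error: 0.123" for the tuner to parse from the logs.
preprocess_estimator = Estimator(
    image_uri="<your-preprocessing-image-uri>",  # placeholder
    role="<your-sagemaker-execution-role>",      # placeholder
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

preprocess_tuner = HyperparameterTuner(
    estimator=preprocess_estimator,
    objective_metric_name="reconstruction-error",
    metric_definitions=[
        {"Name": "reconstruction-error", "Regex": "reconstruction-error: ([0-9.]+)"}
    ],
    hyperparameter_ranges={
        "n_components": IntegerParameter(2, 100),  # hypothetical knob
    },
    objective_type="Minimize",  # lower error is better
    max_jobs=10,
    max_parallel_jobs=2,
)

preprocess_tuner.fit({"train": "s3://<your-bucket>/raw-data"})  # placeholder
```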

Option 2: include both the preprocessing and the training code in a single SageMaker Training Job, so one tuning job searches over the hyperparameters of both stages at once (a sketch follows below). The trade-off is that you cannot use separate infrastructure for preprocessing and training.
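
A sketch of what Option 2 might look like, assuming a single container whose entry point runs preprocessing and then training back to back and emits one objective metric at the end; all names, ranges, and paths here are illustrative:

```python
from sagemaker.estimator import Estimator
from sagemaker.tuner import (
    HyperparameterTuner,
    ContinuousParameter,
    IntegerParameter,
)

# One container runs preprocessing + training, so a single tuner can search
# over the hyperparameters of both stages jointly.
combined_estimator = Estimator(
    image_uri="<your-preprocess-plus-train-image-uri>",  # placeholder
    role="<your-sagemaker-execution-role>",              # placeholder
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

combined_tuner = HyperparameterTuner(
    estimator=combined_estimator,
    objective_metric_name="validation:auc",
    metric_definitions=[
        {"Name": "validation:auc", "Regex": "validation-auc: ([0-9.]+)"}
    ],
    hyperparameter_ranges={
        "n_components": IntegerParameter(2, 100),           # preprocessing knob
        "learning_rate": ContinuousParameter(0.001, 0.1),   # model knob
    },
    objective_type="Maximize",
    max_jobs=30,
    max_parallel_jobs=4,
)

combined_tuner.fit({"train": "s3://<your-bucket>/raw-data"})  # placeholder
```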

So it depends on what exactly they are looking for, but they can likely use SageMaker HPO.

Answered 3 years ago
