h2o: different stopping metrics lead to different optimal hyperparameters

data-mining  machine-learning  r  random-forest  optimization  hyperparameters
2022-03-09 12:38:54

I want to choose the "best" hyperparameters for a GBM, so I run the following code using the h2o package:

# load packages and start (or connect to) an H2O cluster
library(h2o)          # h2o.grid(), h2o.getGrid()
library(data.table)   # %like%, as.data.table()
h2o.init()

# create hyperparameter grid
hyper_params = list(ntrees = c(10, 20, 50, 100, 200, 500),
                    max_depth = c(5, 10, 20, 30),
                    min_rows = c(5, 10, 20, 30),
                    learn_rate = c(0.01, 0.05, 0.08, 0.1),
                    balance_classes = c(TRUE, FALSE))

# random subset of hyperparameters
search_criteria = list(strategy = "RandomDiscrete", max_models = 192, seed = 42,
                       stopping_rounds = 15, stopping_tolerance = 1e-3, stopping_metric = "mean_per_class_error")

# run grid
gbm.grid <- h2o.grid("gbm", grid_id = "gbm.grid",
                     x = names(td.train.h2o)[!names(td.train.h2o) %like% "s_sd_segment"],
                     y = "s_sd_segment",
                     seed = 42, distribution = "multinomial",
                     training_frame = td.train.hyper.h2o, nfolds = 3,
                     hyper_params = hyper_params, search_criteria = search_criteria)

# sort results
gbm.sorted.grid <- h2o.getGrid(grid_id = "gbm.grid", sort_by = "mean_per_class_error", decreasing=FALSE)

hyperparameters_gbm <- as.data.table(gbm.sorted.grid@summary_table)

This gives the following best combination of hyperparameters:

learn_rate  max_depth  min_rows  ntrees
0.08        10         5         200
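
(As an aside, once the grid is sorted you can pull the winning model straight out of the grid object. A minimal sketch; the @model_ids slot and h2o.getModel() are standard h2o accessors, and best_gbm is just a name chosen here:)

# model ids in a sorted grid are ordered by the sort_by metric,
# so the first id belongs to the best model under that metric
best_gbm <- h2o.getModel(gbm.sorted.grid@model_ids[[1]])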

Then I tried to do the same thing but with a different stopping_metric: above I used mean_per_class_error, below I use logloss. So I ran the following code:

# create hyperparameter grid
hyper_params = list(ntrees = c(10, 20, 50, 100, 200, 500),
                    max_depth = c(5, 10, 20, 30),
                    min_rows = c(5, 10, 20, 30),
                    learn_rate = c(0.01, 0.05, 0.08, 0.1),
                    balance_classes = c(TRUE, FALSE))

# random subset of hyperparameters
search_criteria = list(strategy = "RandomDiscrete", max_models = 192, seed = 42,
                       stopping_rounds = 15, stopping_tolerance = 1e-3, stopping_metric = "logloss")

# run grid (fresh grid_id, so models aren't appended to the first grid)
gbm.grid <- h2o.grid("gbm", grid_id = "gbm.grid2",
                     x = names(td.train.h2o)[!names(td.train.h2o) %like% "s_sd_segment"],
                     y = "s_sd_segment",
                     seed = 42, distribution = "multinomial",
                     training_frame = td.train.hyper.h2o, nfolds = 3,
                     hyper_params = hyper_params, search_criteria = search_criteria)

# sort results
gbm.sorted.grid <- h2o.getGrid(grid_id = "gbm.grid2", sort_by = "logloss", decreasing = FALSE)

hyperparameters_gbm <- as.data.table(gbm.sorted.grid@summary_table)

which gives the following best combination of hyperparameters:

learn_rate  max_depth  min_rows  ntrees
0.1         20         5         500

I know that I am using strategy = "RandomDiscrete", but still: for example, the best combination for the gbm using stopping_metric = "logloss" is only the "50th-best combination" for the gbm using stopping_metric = "mean_per_class_error", and the "second-best combination" under stopping_metric = "logloss" is the "14th-best combination" under stopping_metric = "mean_per_class_error".
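
For reference, this is roughly how such a rank comparison can be made. A sketch, assuming the two summary tables from the runs above were kept under separate names, hp_mpce and hp_logloss (names chosen here for illustration):

library(data.table)

# assumed: hp_mpce is the summary table sorted by mean_per_class_error,
# hp_logloss the one sorted by logloss (from the two runs above)
hp_mpce[,    rank_mpce    := .I]  # row order = rank under mean_per_class_error
hp_logloss[, rank_logloss := .I]  # row order = rank under logloss

# match the two runs on their hyperparameter settings and compare ranks
key_cols <- c("learn_rate", "max_depth", "min_rows", "ntrees", "balance_classes")
ranks <- merge(hp_mpce[, c(key_cols, "rank_mpce"), with = FALSE],
               hp_logloss[, c(key_cols, "rank_logloss"), with = FALSE],
               by = key_cols)

# a Spearman correlation of 1 would mean both metrics order the models identically
cor(ranks$rank_mpce, ranks$rank_logloss, method = "spearman")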

Why is this the case?

1 Answer

First, you are using different metrics to determine how well you are doing, so it is not surprising that different metrics find different hyperparameter settings to work better.

Second, some of the hyperparameters may be irrelevant to the problem you are trying to solve, which means that any signal you get from those hyperparameters is noise.

Third, most machine learning algorithms are stochastic: there is randomness involved in training them, and sometimes in evaluating them, so even starting the same grid or random search twice can lead to different hyperparameters. That said, this is only likely when the actual performances are close to each other.
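
To make the first point concrete, you can score one and the same model under both metrics and see that they measure different things. A minimal sketch, assuming best_gbm is a model pulled from one of the grids above; h2o.performance(), h2o.logloss() and h2o.mean_per_class_error() are standard h2o functions:

# cross-validated performance of a single model
perf <- h2o.performance(best_gbm, xval = TRUE)

h2o.logloss(perf)               # what stopping_metric = "logloss" tracks
h2o.mean_per_class_error(perf)  # what "mean_per_class_error" tracks

Logloss rewards well-calibrated class probabilities, while mean_per_class_error only looks at the hard class assignments, so two models can easily swap places between the two rankings.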