I want to pick the "best" hyperparameters for a GBM, so I ran the following code using the h2o package:
# create hyperparameter grid
hyper_params = list(ntrees = c(10, 20, 50, 100, 200, 500),
                    max_depth = c(5, 10, 20, 30),
                    min_rows = c(5, 10, 20, 30),
                    learn_rate = c(0.01, 0.05, 0.08, 0.1),
                    balance_classes = c(TRUE, FALSE))
# random subset of hyperparameters
search_criteria = list(strategy = "RandomDiscrete", max_models = 192, seed = 42,
                       stopping_rounds = 15, stopping_tolerance = 1e-3,
                       stopping_metric = "mean_per_class_error")
# run grid
gbm.grid <- h2o.grid("gbm", grid_id = "gbm.grid",
                     x = names(td.train.h2o)[!names(td.train.h2o) %like% "s_sd_segment"],
                     y = "s_sd_segment",
                     seed = 42, distribution = "multinomial",
                     training_frame = td.train.hyper.h2o, nfolds = 3,
                     hyper_params = hyper_params, search_criteria = search_criteria)
# sort results
gbm.sorted.grid <- h2o.getGrid(grid_id = "gbm.grid", sort_by = "mean_per_class_error", decreasing=FALSE)
hyperparameters_gbm <- as.data.table(gbm.sorted.grid@summary_table)
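For reference, the model ids in the sorted grid come back best-first, so the winning model can be pulled out directly (a minimal sketch using the standard h2o accessors h2o.getModel and the @model_ids slot):

# inspect the winner of the sorted grid
best_gbm <- h2o.getModel(gbm.sorted.grid@model_ids[[1]])  # ids are sorted best-first
h2o.performance(best_gbm, xval = TRUE)                    # cross-validated metrics of the top model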
This gives the following best combination of hyperparameters:
learn_rate max_depth min_rows ntrees
0.08 10 5 200
Then I tried to do the same thing, but with a different stopping_metric: above I used mean_per_class_error, below I use logloss. So I ran the following code:
# create hyperparameter grid
hyper_params = list(ntrees = c(10, 20, 50, 100, 200, 500),
                    max_depth = c(5, 10, 20, 30),
                    min_rows = c(5, 10, 20, 30),
                    learn_rate = c(0.01, 0.05, 0.08, 0.1),
                    balance_classes = c(TRUE, FALSE))
# random subset of hyperparameters
search_criteria = list(strategy = "RandomDiscrete", max_models = 192, seed = 42,
                       stopping_rounds = 15, stopping_tolerance = 1e-3,
                       stopping_metric = "logloss")
# run grid
gbm.grid <- h2o.grid("gbm", grid_id = "gbm.grid.logloss",
                     x = names(td.train.h2o)[!names(td.train.h2o) %like% "s_sd_segment"],
                     y = "s_sd_segment",
                     seed = 42, distribution = "multinomial",
                     training_frame = td.train.hyper.h2o, nfolds = 3,
                     hyper_params = hyper_params, search_criteria = search_criteria)
# sort results
gbm.sorted.grid <- h2o.getGrid(grid_id = "gbm.grid.logloss", sort_by = "logloss", decreasing = FALSE)
hyperparameters_gbm <- as.data.table(gbm.sorted.grid@summary_table)
And this gives a different best combination of hyperparameters:
learn_rate max_depth min_rows ntrees
0.1 20 5 500
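To line the two rankings up, I rank each grid's summary table and join them on the hyperparameter columns (a sketch; it assumes both grids are still live in the cluster, that the second grid got its own id gbm.grid.logloss, and that the summary tables expose the hyperparameters as columns, as in the tables above):

# rank each summary table, then merge on the hyperparameter columns
library(data.table)
rank.mpce    <- as.data.table(h2o.getGrid("gbm.grid",
                                          sort_by = "mean_per_class_error", decreasing = FALSE)@summary_table)
rank.logloss <- as.data.table(h2o.getGrid("gbm.grid.logloss",
                                          sort_by = "logloss", decreasing = FALSE)@summary_table)
rank.mpce[,    rank_mpce    := .I]  # position in the mean_per_class_error ranking
rank.logloss[, rank_logloss := .I]  # position in the logloss ranking
key_cols <- c("ntrees", "max_depth", "min_rows", "learn_rate", "balance_classes")
comparison <- merge(rank.mpce[, c(key_cols, "rank_mpce"), with = FALSE],
                    rank.logloss[, c(key_cols, "rank_logloss"), with = FALSE],
                    by = key_cols)
comparison[order(rank_logloss)][1:5]  # where the top logloss models sit in the other ranking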
I know that I used strategy = "RandomDiscrete", but still: the best combination from the grid with stopping_metric = "logloss" is only the "50th-best combination" of the grid with stopping_metric = "mean_per_class_error", and the "second-best combination" of the logloss grid is only the "14th-best combination" under stopping_metric = "mean_per_class_error".

Why does this happen?