How do I tune the hyperparameters of xgboost trees?

machine-learning  cross-validation  boosting
2022-01-18 01:42:04

I have class-imbalanced data, and I want to tune the hyperparameters of a boosted tree using xgboost.

Questions

  1. Is there an equivalent of gridsearchcv or randomsearchcv for xgboost?
  2. If not, what is the recommended approach for tuning the parameters of xgboost?
4 Answers

Since the interface to xgboost in caret has recently changed, here is a script that provides a fully commented walkthrough of using caret to tune xgboost hyperparameters.

For this, I will use the training data from the Kaggle competition "Give Me Some Credit".

1. Fitting an xgboost model

In this section, we:

  • fit an xgboost model with arbitrary hyperparameters
  • evaluate the loss (AUC-ROC) using cross-validation (xgb.cv)
  • plot the training versus testing evaluation metric

Here is some code to do this.

library(caret)
library(xgboost)
library(readr)
library(dplyr)
library(tidyr)

# load in the training data
df_train = read_csv("04-GiveMeSomeCredit/Data/cs-training.csv") %>%
  na.omit() %>%                                                                # listwise deletion 
  select(-`[EMPTY]`) %>%
  mutate(SeriousDlqin2yrs = factor(SeriousDlqin2yrs,                           # factor variable for classification
                                   labels = c("Failure", "Success")))

# xgboost fitting with arbitrary parameters
xgb_params_1 = list(
  objective = "binary:logistic",                                               # binary classification
  eta = 0.01,                                                                  # learning rate
  max.depth = 3,                                                               # max tree depth
  eval_metric = "auc"                                                          # evaluation/loss metric
)

# fit the model with the arbitrary parameters specified above
xgb_1 = xgboost(data = as.matrix(df_train %>%
                                   select(-SeriousDlqin2yrs)),
                label = as.numeric(df_train$SeriousDlqin2yrs) - 1,             # convert factor to 0/1 labels for binary:logistic
                params = xgb_params_1,
                nrounds = 100,                                                 # max number of trees to build
                verbose = TRUE,                                         
                print.every.n = 1,
                early.stop.round = 10                                          # stop if no improvement within 10 trees
)

# cross-validate xgboost to get the accurate measure of error
xgb_cv_1 = xgb.cv(params = xgb_params_1,
                  data = as.matrix(df_train %>%
                                     select(-SeriousDlqin2yrs)),
                  label = as.numeric(df_train$SeriousDlqin2yrs) - 1,           # convert factor to 0/1 labels for binary:logistic
                  nrounds = 100, 
                  nfold = 5,                                                   # number of folds in K-fold
                  prediction = TRUE,                                           # return the prediction using the final model 
                  showsd = TRUE,                                               # standard deviation of loss across folds
                  stratified = TRUE,                                           # sample is unbalanced; use stratified sampling
                  verbose = TRUE,
                  print.every.n = 1, 
                  early.stop.round = 10
)

# plot the AUC for the training and testing samples
xgb_cv_1$dt %>%
  select(-contains("std")) %>%
  mutate(IterationNum = 1:n()) %>%
  gather(TestOrTrain, AUC, -IterationNum) %>%
  ggplot(aes(x = IterationNum, y = AUC, group = TestOrTrain, color = TestOrTrain)) + 
  geom_line() + 
  theme_bw()

Here is what the testing versus training AUC looks like:

[plot: training vs. testing AUC across boosting iterations]

2. Hyperparameter search using train

For the hyperparameter search, we perform the following steps:

  • create a data.frame with the unique combinations of parameters for which we want to train models
  • specify the control parameters that apply to each model's training, including the cross-validation parameters, and specify that probabilities be computed so that the AUC can be computed
  • cross-validate and train a model for each parameter combination, saving the AUC of each model

Here is some code showing how to do this.

# set up the cross-validated hyper-parameter search
xgb_grid_1 = expand.grid(
  nrounds = 1000,
  eta = c(0.01, 0.001, 0.0001),
  max_depth = c(2, 4, 6, 8, 10),
  gamma = 1
)

# pack the training control parameters
xgb_trcontrol_1 = trainControl(
  method = "cv",
  number = 5,
  verboseIter = TRUE,
  returnData = FALSE,
  returnResamp = "all",                                                        # save losses across all models
  classProbs = TRUE,                                                           # set to TRUE for AUC to be computed
  summaryFunction = twoClassSummary,
  allowParallel = TRUE
)

# train the model for each parameter combination in the grid, 
#   using CV to evaluate
xgb_train_1 = train(
  x = as.matrix(df_train %>%
                  select(-SeriousDlqin2yrs)),
  y = as.factor(df_train$SeriousDlqin2yrs),
  trControl = xgb_trcontrol_1,
  tuneGrid = xgb_grid_1,
  method = "xgbTree"
)

# scatter plot of the AUC against max_depth and eta
ggplot(xgb_train_1$results, aes(x = as.factor(eta), y = max_depth, size = ROC, color = ROC)) + 
  geom_point() + 
  theme_bw() + 
  scale_size_continuous(guide = "none")

Finally, you can create a bubble plot of the AUC over the variations of eta and max_depth:

[bubble plot: cross-validated AUC against eta and max_depth]
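
Once train has finished, the winning combination from the grid search can also be read directly off the fitted object. A minimal sketch, using the xgb_train_1 object from the code above (bestTune and results are standard fields of a caret train object):

# best hyperparameter combination found by the grid search
xgb_train_1$bestTune

# resampled performance (the ROC column is the AUC) for every combination in the grid
xgb_train_1$results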

The caret package has now incorporated xgboost.

cv.ctrl <- trainControl(method = "repeatedcv", repeats = 1, number = 3, 
                        #summaryFunction = twoClassSummary,
                        classProbs = TRUE,
                        allowParallel = TRUE)

xgb.grid <- expand.grid(nrounds = 1000,
                        eta = c(0.01, 0.05, 0.1),
                        max_depth = c(2, 4, 6, 8, 10, 14)
)

set.seed(45)
xgb_tune <- train(formula,
                  data = train,
                  method = "xgbTree",
                  trControl = cv.ctrl,
                  tuneGrid = xgb.grid,
                  verbose = TRUE,
                  metric = "Kappa",
                  nthread = 3
)

Sample output

eXtreme Gradient Boosting 

32218 samples
   41 predictor
    2 classes: 'N', 'Y' 

No pre-processing
Resampling: Cross-Validated (3 fold, repeated 1 times) 
Summary of sample sizes: 21479, 21479, 21478 
Resampling results

  Accuracy   Kappa      Accuracy SD   Kappa SD   
  0.9324911  0.1094426  0.0009742774  0.008972911

One downside I see is that other parameters of xgboost, such as subsample etc., are not currently supported by caret.

Edit

Gamma, colsample_bytree, min_child_weight and subsample etc. can now (June 2017) be tuned directly with caret. Just add them to the grid portion of the code above to make it work. Thanks to usεr11852 for highlighting it in the comments.
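
For reference, a minimal sketch of what the expanded grid might look like; the parameter names are the tuning parameters caret's xgbTree method expects, and the specific values below are only placeholders to adapt to your problem:

xgb.grid <- expand.grid(nrounds = 1000,
                        eta = c(0.01, 0.05, 0.1),
                        max_depth = c(2, 4, 6, 8, 10, 14),
                        gamma = 0,                           # placeholder values, tune as needed
                        colsample_bytree = c(0.6, 0.8, 1.0),
                        min_child_weight = c(1, 5),
                        subsample = c(0.75, 1.0))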

I know this is an old question, but I use a different method from the ones above. I use the BayesianOptimization function from the rBayesianOptimization package to find optimal parameters. To do this, you first create cross-validation folds, then create a function xgb.cv.bayes that takes as arguments the boosting hyperparameters you want to change. In this example I am tuning max.depth, min_child_weight, subsample, colsample_bytree and gamma. You then call xgb.cv inside that function with the hyperparameters set to the arguments of xgb.cv.bayes. Then you call BayesianOptimization with xgb.cv.bayes and the desired ranges of the boosting hyperparameters. init_points is the number of initial models whose hyperparameters are drawn at random from the specified ranges, and n_iter is the number of rounds of models after the initial points. The function outputs all the boosting parameters and the test AUC.

library(xgboost)
library(rBayesianOptimization)    # provides KFold() and BayesianOptimization()

cv_folds <- KFold(as.matrix(df.train[,target.var]), nfolds = 5, 
                  stratified = TRUE, seed = 50)
xgb.cv.bayes <- function(max.depth, min_child_weight, subsample, colsample_bytree, gamma){
  cv <- xgb.cv(params = list(booster = 'gbtree', eta = 0.05,
                             max_depth = max.depth,
                             min_child_weight = min_child_weight,
                             subsample = subsample,
                             colsample_bytree = colsample_bytree,
                             gamma = gamma,
                             lambda = 1, alpha = 0,
                             objective = 'binary:logistic',
                             eval_metric = 'auc'),
                 data = data.matrix(df.train[,-target.var]),
                 label = as.matrix(df.train[, target.var]),
                 nround = 500, folds = cv_folds, prediction = TRUE,
                 showsd = TRUE, early.stop.round = 5, maximize = TRUE,
                 verbose = 0
  )
  list(Score = cv$dt[, max(test.auc.mean)],
       Pred = cv$pred)
}

xgb.bayes.model <- BayesianOptimization(
  xgb.cv.bayes,
  bounds = list(max.depth = c(2L, 12L),
                min_child_weight = c(1L, 10L),
                subsample = c(0.5, 1),
                colsample_bytree = c(0.1, 0.4),
                gamma = c(0, 10)
  ),
  init_grid_dt = NULL,
  init_points = 10,  # number of random points to start search
  n_iter = 20, # number of iterations after initial random points are set
  acq = 'ucb', kappa = 2.576, eps = 0.0, verbose = TRUE
)
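
The returned object can then be inspected for the best parameter set and its cross-validated AUC. A minimal sketch, assuming the standard return fields of rBayesianOptimization (Best_Par, Best_Value, History):

# best hyperparameter set found by the Bayesian search
xgb.bayes.model$Best_Par

# test AUC achieved with that set
xgb.bayes.model$Best_Value

# history of every parameter combination evaluated
xgb.bayes.model$History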

This is an older question, but I thought I would share how I tune xgboost parameters. I originally thought I would use caret for this, but recently found issues handling all of the parameters as well as missing values. I also considered writing an iterating loop through different combinations of parameters, but wanted it to run in parallel, and that would have required too much time. Using gridSearch from the NMOF package gave the best of both worlds (all parameters as well as parallel processing). Here is example code for binary classification (works on Windows and Linux):

library(xgboost)
library(NMOF)        # gridSearch()
library(parallel)    # makeCluster(), detectCores(), clusterExport(), clusterEvalQ()
library(plyr)        # ldply()

# xgboost task parameters
nrounds <- 1000
folds <- 10
obj <- 'binary:logistic'
eval <- 'logloss'

# Parameter grid to search
params <- list(
  eval_metric = eval,
  objective = obj,
  eta = c(0.1,0.01),
  max_depth = c(4,6,8,10),
  max_delta_step = c(0,1),
  subsample = 1,
  scale_pos_weight = 1
)

# Table to track performance from each worker node
res <- data.frame()

# Simple cross validated xgboost training function (returning minimum error for grid search)
xgbCV <- function (params) {
  fit <- xgb.cv(
    data = data.matrix(train), 
    label = trainLabel, 
    param =params, 
    missing = NA, 
    nfold = folds, 
    prediction = FALSE,
    early.stop.round = 50,
    maximize = FALSE,
    nrounds = nrounds
  )
  rounds <- nrow(fit)
  metric = paste('test.',eval,'.mean',sep='')
  idx <- which.min(fit[,fit[[metric]]]) 
  val <- fit[idx,][[metric]]
  res <<- rbind(res,c(idx,val,rounds))
  colnames(res) <<- c('idx','val','rounds')
  return(val)
}

# Find minimal testing error in parallel
cl <- makeCluster(round(detectCores()/2)) 
clusterExport(cl, c("xgb.cv",'train','trainLabel','nrounds','res','eval','folds'))
sol <- gridSearch(
  fun = xgbCV,
  levels = params,
  method = 'snow',
  cl = cl,
  keepNames = TRUE,
  asList = TRUE
)

# Combine all model results
comb=clusterEvalQ(cl,res)
results <- ldply(comb,data.frame)
stopCluster(cl)

# Train model given solution above
params <- c(sol$minlevels,objective = obj, eval_metric = eval)
xgbModel <- xgboost(
  data = xgb.DMatrix(data.matrix(train),missing=NaN, label = trainLabel),
  param = params,
  nrounds = results[which.min(results[,2]),1]
)

print(params)
print(results)