I. Three types of parameters that must be set before running XGBoost
1. General parameters: choose which booster is used during boosting. The commonly used boosters are the tree model (tree) and the linear model (linear model).
booster [default=gbtree] Two boosters are available: gbtree and gblinear (tree model vs. linear model).
silent [default=0] 0 prints run-time messages; 1 suppresses them.
nthread [default to maximum number of threads available if not set] Number of threads used at run time; defaults to the maximum the system provides.
num_pbuffer and num_feature are set automatically by XGBoost and need not be set manually.
2. Booster parameters (separate sets for the tree and linear boosters)
2.1 Tree booster parameters
eta [default=0.3] (range [0,1]). Shrinkage step size used in updates to prevent overfitting. After each boosting step the algorithm obtains the weights of the new features directly; eta shrinks these feature weights to make the boosting process more conservative.
gamma [default=0] (range [0,∞]). Minimum loss reduction required to make a further partition on a leaf node; larger values make the algorithm more conservative.
max_depth [default=6] (range [1,∞]). Maximum depth of a tree.
min_child_weight [default=1] (range [0,∞]). Minimum sum of instance weights required in a child node.
max_delta_step [default=0] (range [0,∞]). Maximum delta step allowed for each tree's weight estimation.
subsample [default=1] (range (0,1]). Fraction of the training instances used to build each tree. Setting it to 0.5 means XGBoost randomly draws 50% of the training set to grow each tree, which helps prevent overfitting.
colsample_bytree [default=1] (range (0,1]). Fraction of features (columns) sampled when constructing each tree.
2.2 Linear booster parameters
lambda [default=0] L2 regularization penalty coefficient
alpha [default=0] L1 regularization penalty coefficient
lambda_bias [default=0] L2 regularization on the bias term
3. Task parameters
- objective [ default=reg:linear ]
- Defines the learning task and the corresponding learning objective. The available objectives are:
- "reg:linear" – linear regression.
- "reg:logistic" – logistic regression.
- "binary:logistic" – logistic regression for binary classification; outputs probabilities.
- "binary:logitraw" – logistic regression for binary classification; outputs the raw score wTx before the logistic transformation.
- "count:poisson" – Poisson regression for count data; outputs the mean of the Poisson distribution. In Poisson regression, the default value of max_delta_step is 0.7.
- "multi:softmax" – multiclass classification using the softmax objective; the parameter num_class (number of classes) must also be set.
- "multi:softprob" – same as softmax, but outputs a vector of ndata * nclass values, which can be reshaped into an ndata-row, nclass-column matrix. Each row gives the probability that the sample belongs to each class.
- "rank:pairwise" – set XGBoost to do ranking tasks by minimizing the pairwise loss.
- base_score [ default=0.5 ]
- The initial prediction score of all instances (global bias).
- eval_metric [ default according to objective ]
- Evaluation metrics for validation data; a default metric is assigned according to the objective (rmse for regression, error for classification, mean average precision for ranking).
- Users can add multiple evaluation metrics. Python users should pass the metrics as a list of parameter pairs instead of a map, so that a later 'eval_metric' does not override an earlier one.
- The choices are listed below:
- "rmse": root mean square error
- "mae": mean absolute error
- "logloss": negative log-likelihood
- "error": Binary classification error rate. It is calculated as #(wrong cases)/#(all cases). For the predictions, the evaluation will regard the instances with prediction value larger than 0.5 as positive instances, and the others as negative instances.
- "merror": Multiclass classification error rate. It is calculated as #(wrong cases)/#(all cases).
- "mlogloss": Multiclass logloss
- "auc": Area under the curve for ranking evaluation.
- "ndcg": Normalized Discounted Cumulative Gain
- "map": Mean Average Precision
- "ndcg@n", "map@n": n can be assigned as an integer to cut off the top positions in the lists for evaluation.
- "ndcg-", "map-", "ndcg@n-", "map@n-": In XGBoost, NDCG and MAP evaluate the score of a list without any positive samples as 1. Adding "-" to the metric name makes XGBoost evaluate such lists as 0, to keep the evaluation consistent under some conditions.
- seed [ default=0 ] Random number seed.