We use GridSearchCV to tune the hyperparameters of XGBoost. First, import the packages we need.
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import KFold, cross_val_score
from sklearn.model_selection import train_test_split
import numpy as np
import xgboost as xgb
Based on prior judgment and past experience, we pick out some important parameters and set them to initial default values.
other_params = {
'eta': 0.3, 'n_estimators': 850, 'gamma': 0, 'max_depth': 5, 'min_child_weight': 3,
'colsample_bytree': 1, 'colsample_bylevel': 1, 'subsample': 1, 'reg_lambda': 1, 'reg_alpha': 0,
'seed': 33}
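A dictionary like this is passed to the estimator constructor with `**`, which unpacks each key as a named keyword argument. A minimal sketch of that unpacking pattern (the `make_model` function is a hypothetical stand-in for `xgb.XGBRegressor`, used here so the sketch runs without an XGBoost install):

```python
other_params = {'eta': 0.3, 'n_estimators': 850, 'max_depth': 5, 'seed': 33}

def make_model(**kwargs):
    # Stand-in for xgb.XGBRegressor(**other_params): just records the
    # keyword arguments it receives, exactly as the constructor would.
    return dict(kwargs)

model_cfg = make_model(**other_params)
print(model_cfg['n_estimators'])  # 850
```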
Then we split the dataset into a training set and a validation set.
X = train
y = X.pop('tradeMoney')
X_train_full,X_valid_full,y_train,y_valid = train_test_split(X,y,train_size=0.8,test_size=0.2,
random_state=0)
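As a quick illustration of the 80/20 split above, on small synthetic data (the original `train` DataFrame is not reproduced here):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: 10 samples, 3 features.
X_demo = np.arange(30).reshape(10, 3)
y_demo = np.arange(10)

# Same arguments as above: 80% train, 20% validation, fixed seed.
X_tr, X_va, y_tr, y_va = train_test_split(
    X_demo, y_demo, train_size=0.8, test_size=0.2, random_state=0)

print(len(X_tr), len(X_va))  # 8 2
```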
1. Searching for the best n_estimators
cv_params = {
'n_estimators': np.linspace(100, 1000, 10, dtype=int)}
regress_model = xgb.XGBRegressor(**other_params)
gs = GridSearchCV(regress_model, cv_params, verbose=2, refit=True, cv=5, n_jobs=-1)
gs.fit(X_train_full, y_train)
print("Best parameter values:", gs.best_params_)
print("Best model score:", gs.best_score_)
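The same search pattern can be run end to end on synthetic data; the sketch below uses sklearn's `Ridge` as a stand-in estimator (an assumption made so the example runs without the competition data or an XGBoost install), but the dict-of-candidates, `cv=5`, `refit=True` structure is identical:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

# Synthetic regression data with a known linear signal.
rng = np.random.RandomState(0)
X_demo = rng.randn(100, 5)
y_demo = X_demo @ np.array([1.0, 2.0, 0.0, -1.0, 0.5]) + 0.1 * rng.randn(100)

# Same pattern as above: grid of candidate values, 5-fold CV, refit the best.
cv_params = {'alpha': np.linspace(0.1, 1.0, 10)}
gs = GridSearchCV(Ridge(), cv_params, refit=True, cv=5, n_jobs=-1)
gs.fit(X_demo, y_demo)

print("Best parameter values:", gs.best_params_)
print("Best model score:", gs.best_score_)
```

After each search converges, a common next step is to write the winning value back into `other_params` before tuning the next group of parameters, so later searches build on earlier ones.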