Using early stopping to avoid overfitting in XGBoost

https://machinelearningmastery.com/avoid-overfitting-by-early-stopping-with-xgboost-in-python/

1. Baseline: split the data into train and test sets and monitor performance on the test set

# monitor training performance
from numpy import loadtxt
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
# load data
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=",")
# split data into X and y
X = dataset[:,0:8]
Y = dataset[:,8]
# split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=7)
# fit model on the training data, monitoring error on the test set
model = XGBClassifier()
eval_set = [(X_test, y_test)]
model.fit(X_train, y_train, eval_metric="error", eval_set=eval_set, verbose=True)
# make predictions for test data
y_pred = model.predict(X_test)
predictions = [round(value) for value in y_pred]
# evaluate predictions
accuracy = accuracy_score(y_test, predictions)
print("Accuracy: %.2f%%" % (accuracy * 100.0))

The output looks like this:
[89]    validation_0-error:0.204724
[90]    validation_0-error:0.208661
[91]    validation_0-error:0.208661
[92]    validation_0-error:0.208661
[93]    validation_0-error:0.208661
[94]    validation_0-error:0.208661
[95]    validation_0-error:0.212598
[96]    validation_0-error:0.204724
[97]    validation_0-error:0.212598
[98]    validation_0-error:0.216535
[99]    validation_0-error:0.220472 ----- per-iteration error on the eval_set (here the test set), printed during training
Accuracy: 77.95%                         ----- final accuracy on the test set
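
Note: the fit() arguments used throughout this post (eval_metric, and later early_stopping_rounds) follow the older xgboost scikit-learn API. On newer xgboost releases (roughly 1.6 and later) these are expected in the XGBClassifier constructor instead. A minimal sketch of the same baseline under that assumption:

# variant for newer xgboost versions (assumption: xgboost >= 1.6),
# where eval_metric moves from fit() to the constructor
model = XGBClassifier(eval_metric="error")
model.fit(X_train, y_train, eval_set=[(X_test, y_test)], verbose=True)
y_pred = model.predict(X_test)
print("Accuracy: %.2f%%" % (accuracy_score(y_test, y_pred) * 100.0))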

2. Visualization: plot learning curves for train and test

# plot learning curve
from numpy import loadtxt
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from matplotlib import pyplot
# load data
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=",")
# split data into X and y
X = dataset[:,0:8]
Y = dataset[:,8]
# split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=7)
# fit model on the training data, tracking error and log loss on train and test
model = XGBClassifier()
eval_set = [(X_train, y_train), (X_test, y_test)]
model.fit(X_train, y_train, eval_metric=["error", "logloss"], eval_set=eval_set, verbose=True)
# make predictions for test data
y_pred = model.predict(X_test)
predictions = [round(value) for value in y_pred]
# evaluate predictions
accuracy = accuracy_score(y_test, predictions)
print("Accuracy: %.2f%%" % (accuracy * 100.0))
# retrieve performance metrics
results = model.evals_result()
epochs = len(results['validation_0']['error'])
x_axis = range(0, epochs)
# plot log loss
fig, ax = pyplot.subplots()
ax.plot(x_axis, results['validation_0']['logloss'], label='Train')
ax.plot(x_axis, results['validation_1']['logloss'], label='Test')
ax.legend()
pyplot.ylabel('Log Loss')
pyplot.title('XGBoost Log Loss')
pyplot.show()
# plot classification error
fig, ax = pyplot.subplots()
ax.plot(x_axis, results['validation_0']['error'], label='Train')
ax.plot(x_axis, results['validation_1']['error'], label='Test')
ax.legend()
pyplot.ylabel('Classification Error')
pyplot.title('XGBoost Classification Error')
pyplot.show()

Key points:

1. eval_set now contains both the train and the test set,

so results = model.evals_result() holds two entries,

which look like this:
{
    'validation_0': {'error': [0.259843, 0.26378, 0.26378, ...]},
    'validation_1': {'error': [0.22179, 0.202335, 0.196498, ...]}
}

so results['validation_0']['error'] refers to the training set and results['validation_1']['error'] to the test set.

2. You can pass two metrics with model.fit(eval_metric=["error", "logloss"]); the corresponding metric key can then be picked from the dict as well, e.g.

results['validation_0']['logloss']

3. verbose=False turns off the per-iteration output.

[Figure: XGBoost learning curve - log loss (train vs. test)]

[Figure: XGBoost learning curve - classification error (train vs. test)]

A quick look at the two plots shows that the test curves stop improving somewhere around iterations 20-40.
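
To put a rough number on where the test curves flatten out, you can scan the stored metrics for their minimum; a small sketch reusing the results dict retrieved above:

# locate the iteration with the lowest test log loss and test error,
# using the results = model.evals_result() dict from the code above
test_logloss = results['validation_1']['logloss']
test_error = results['validation_1']['error']
best_ll = min(range(len(test_logloss)), key=lambda i: test_logloss[i])
best_err = min(range(len(test_error)), key=lambda i: test_error[i])
print("Lowest test logloss %.4f at iteration %d" % (test_logloss[best_ll], best_ll))
print("Lowest test error   %.4f at iteration %d" % (test_error[best_err], best_err))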

3. Using early stopping

# early stopping
from numpy import loadtxt
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
# load data
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=",")
# split data into X and y
X = dataset[:,0:8]
Y = dataset[:,8]
# split data into train and test sets
seed = 7
test_size = 0.33
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=test_size, random_state=seed)
# fit model on the training data with early stopping on the test set
model = XGBClassifier()
eval_set = [(X_test, y_test)]
model.fit(X_train, y_train, early_stopping_rounds=10, eval_metric="logloss", eval_set=eval_set, verbose=True)
# make predictions for test data
y_pred = model.predict(X_test)
predictions = [round(value) for value in y_pred]
# evaluate predictions
accuracy = accuracy_score(y_test, predictions)
print("Accuracy: %.2f%%" % (accuracy * 100.0))

early_stopping_rounds=10 means training stops if the eval metric has not improved for 10 consecutive rounds.

Output:

...
[35]    validation_0-logloss:0.487962
[36]    validation_0-logloss:0.488218
[37]    validation_0-logloss:0.489582
[38]    validation_0-logloss:0.489334
[39]    validation_0-logloss:0.490969
[40]    validation_0-logloss:0.48978
[41]    validation_0-logloss:0.490704
[42]    validation_0-logloss:0.492369
Stopping. Best iteration:
[32]    validation_0-logloss:0.487297
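
After stopping, the fitted model remembers the best round; a short sketch of how to inspect it (best_iteration and best_score are set by the scikit-learn wrapper when early stopping triggers):

# inspect the best round found by early stopping
print("Best iteration: %d" % model.best_iteration)
print("Best logloss:   %f" % model.best_score)
# on recent xgboost versions, predict() on a model fitted with early stopping
# already uses the best iteration; on older versions you can be explicit,
# e.g. model.predict(X_test, ntree_limit=model.best_ntree_limit)
# (assumption: ntree_limit is still accepted by your installed version)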

Recommendation (from the original article): It is generally a good idea to select the early_stopping_rounds as a reasonable function of the total number of training epochs (10% in this case) or attempt to correspond to the period of inflection points as might be observed on plots of learning curves.
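
As a rough illustration of that rule of thumb (about 10% of the planned boosting rounds), using the same fit()-based API as the rest of this post; the 100-round default is an assumption, adjust to your own n_estimators:

# rule-of-thumb sketch: early_stopping_rounds ~ 10% of n_estimators
n_estimators = 100                                   # planned boosting rounds
early_stopping_rounds = max(1, n_estimators // 10)   # -> 10 here
model = XGBClassifier(n_estimators=n_estimators)
model.fit(X_train, y_train, early_stopping_rounds=early_stopping_rounds,
          eval_metric="logloss", eval_set=[(X_test, y_test)], verbose=False)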

 
