sklearn Discrete AdaBoost vs Real AdaBoost

Ensemble learners in scikit-learn generally expose a `learning_rate` parameter. It takes a value in (0, 1] and shrinks the contribution of each weak learner added at every boosting round (some articles describe it as setting the effective range of the iterations). Too large a value can lead to overfitting, meaning the fitted function oscillates and becomes unstable, which makes sense intuitively.
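A minimal sketch of the shrinkage effect, on hypothetical synthetic data from `make_classification` (not the Hastie data used below): with a smaller `learning_rate`, each weak learner contributes less, so the staged training error falls more slowly.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

# Toy binary-classification data, purely for illustration.
X, y = make_classification(n_samples=500, random_state=0)

# Same number of boosting rounds, different shrinkage.
fast = AdaBoostClassifier(n_estimators=50, learning_rate=1.0, random_state=0).fit(X, y)
slow = AdaBoostClassifier(n_estimators=50, learning_rate=0.1, random_state=0).fit(X, y)

# staged_score gives accuracy after each round; convert to error rate.
fast_train_err = [1.0 - s for s in fast.staged_score(X, y)]
slow_train_err = [1.0 - s for s in slow.staged_score(X, y)]
print(fast_train_err[-1], slow_train_err[-1])
```

The exact error values depend on the data; the point is only that shrinkage slows down how quickly the ensemble fits the training set.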

Calling `staged_predict` on a fitted AdaBoost ensemble yields the predictions after each boosting iteration.
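A quick illustration of the method, again on hypothetical `make_classification` data: `staged_predict` is a generator that yields one prediction array per boosting round, i.e. the ensemble's prediction after 1, 2, ..., n rounds.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=200, random_state=0)
clf = AdaBoostClassifier(n_estimators=10, random_state=0).fit(X, y)

# One prediction array per fitted weak learner; each array covers all samples.
stages = list(clf.staged_predict(X))
print(len(stages), stages[0].shape)
```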

`sklearn.metrics.zero_one_loss` directly measures the distance between the predictions and the true labels, i.e. the fraction of misclassified samples.
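For example, with one mismatch out of four labels:

```python
import numpy as np
from sklearn.metrics import zero_one_loss

y_true = np.array([1, 1, -1, -1])
y_pred = np.array([1, -1, -1, -1])

# Fraction of misclassified samples (normalize=True is the default):
print(zero_one_loss(y_true, y_pred))                  # 0.25
# Absolute count of misclassifications:
print(zero_one_loss(y_true, y_pred, normalize=False)) # 1
```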

Below, Discrete AdaBoost and Real AdaBoost are compared on both the training set and the test set.
import numpy as np
import matplotlib.pyplot as plt

from sklearn import datasets
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import zero_one_loss
from sklearn.tree import DecisionTreeClassifier

n_estimators = 400
learning_rate = 1.0

# Hastie et al. (ESL, Example 10.2) synthetic data: 10 Gaussian features,
# labels in {-1, +1}. Train on 2,000 samples, test on the remaining 10,000.
X, y = datasets.make_hastie_10_2(n_samples=12000, random_state=1)
X_test, y_test = X[2000:], y[2000:]
X_train, y_train = X[:2000], y[:2000]

# Baseline 1: a single decision stump (depth-1 tree).
dt_stump = DecisionTreeClassifier(max_depth=1, min_samples_leaf=1)
dt_stump.fit(X_train, y_train)
dt_stump_err = 1.0 - dt_stump.score(X_test, y_test)

# Baseline 2: a single deeper decision tree.
dt = DecisionTreeClassifier(max_depth=9, min_samples_leaf=1)
dt.fit(X_train, y_train)
dt_err = 1.0 - dt.score(X_test, y_test)

# Discrete AdaBoost: weak learners vote with class labels (SAMME).
ada_discrete = AdaBoostClassifier(base_estimator=dt_stump,
                                  learning_rate=learning_rate,
                                  n_estimators=n_estimators,
                                  algorithm="SAMME")
ada_discrete.fit(X_train, y_train)

# Real AdaBoost: weak learners vote with class probabilities (SAMME.R).
ada_real = AdaBoostClassifier(base_estimator=dt_stump,
                              learning_rate=learning_rate,
                              n_estimators=n_estimators,
                              algorithm="SAMME.R")
ada_real.fit(X_train, y_train)

fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot([1, n_estimators], [dt_stump_err] * 2, "k-", label="Decision Stump Error")
ax.plot([1, n_estimators], [dt_err] * 2, "k--", label="Decision Tree Error")

# Misclassification rate after each boosting round, on test and train sets.
ada_discrete_err = np.zeros((n_estimators,))
for i, y_pred in enumerate(ada_discrete.staged_predict(X_test)):
    ada_discrete_err[i] = zero_one_loss(y_test, y_pred)

ada_discrete_err_train = np.zeros((n_estimators,))
for i, y_pred in enumerate(ada_discrete.staged_predict(X_train)):
    ada_discrete_err_train[i] = zero_one_loss(y_train, y_pred)

ada_real_err = np.zeros((n_estimators,))
for i, y_pred in enumerate(ada_real.staged_predict(X_test)):
    ada_real_err[i] = zero_one_loss(y_test, y_pred)

ada_real_err_train = np.zeros((n_estimators,))
for i, y_pred in enumerate(ada_real.staged_predict(X_train)):
    ada_real_err_train[i] = zero_one_loss(y_train, y_pred)

ax.plot(np.arange(n_estimators) + 1, ada_discrete_err, label="Discrete AdaBoost Test Error", color="red")
ax.plot(np.arange(n_estimators) + 1, ada_discrete_err_train, label="Discrete AdaBoost Train Error", color="blue")
ax.plot(np.arange(n_estimators) + 1, ada_real_err, label="Real AdaBoost Test Error", color="orange")
ax.plot(np.arange(n_estimators) + 1, ada_real_err_train, label="Real AdaBoost Train Error", color="green")

ax.set_ylim((0.0, 0.5))
ax.set_xlabel("n_estimators")
ax.set_ylabel("error rate")

leg = ax.legend(loc="upper right", fancybox=True)
leg.get_frame().set_alpha(0.7)

plt.show()
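Beyond plotting, the same staged errors can be used to pick the boosting round with the lowest held-out error. A hedged sketch of that idea, on small hypothetical `make_classification` data rather than the full Hastie set above:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import zero_one_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)

# Test error after each boosting round.
test_err = np.array([zero_one_loss(y_te, y_pred)
                     for y_pred in clf.staged_predict(X_te)])

# Stage i corresponds to an ensemble of i + 1 weak learners.
best_round = int(np.argmin(test_err)) + 1
print(best_round, test_err.min())
```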



