The principle behind AdaBoost comes from a result in the PAC learning framework: a concept is strongly learnable if and only if it is weakly learnable. This suggests a strategy for binary classification: train weak classifiers sequentially, then combine them into a strong classifier by weighted majority voting. Viewed more abstractly, AdaBoost is also an additive model with an exponential loss function, fitted by the forward stagewise algorithm.
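In the forward stagewise view, round m fits a weak classifier G_m to the weighted training data and computes the following quantities (standard notation, with w_mi the sample weights and Z_m a normalizer):

```latex
f(x) = \sum_{m=1}^{M} \alpha_m G_m(x), \qquad G(x) = \operatorname{sign}\bigl(f(x)\bigr)
```

```latex
e_m = \sum_{i=1}^{N} w_{mi}\, I\bigl(G_m(x_i) \neq y_i\bigr),
\qquad
\alpha_m = \frac{1}{2}\ln\frac{1-e_m}{e_m}
```

```latex
w_{m+1,i} = \frac{w_{mi}}{Z_m}\exp\bigl(-\alpha_m\, y_i\, G_m(x_i)\bigr),
\qquad
Z_m = \sum_{i=1}^{N} w_{mi}\exp\bigl(-\alpha_m\, y_i\, G_m(x_i)\bigr)
```

These are exactly the quantities the code below computes: `alpha` is the classifier coefficient and `D` is the weight vector updated each round.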
The logic is shown directly in code below; this code is adapted from Machine Learning in Action (《机器学习实战》).
import numpy as np
# Classify by thresholding a single feature
def stumpClassify(dataMatrix, dimen, threshVal, threshIneq):
    retArray = np.ones((np.shape(dataMatrix)[0], 1))
    if threshIneq == "lt":
        retArray[dataMatrix[:, dimen] <= threshVal] = -1.0
    else:
        retArray[dataMatrix[:, dimen] > threshVal] = -1.0
    return retArray
# Build the best decision stump for the current weight vector D
def buildStump(dataArr, classLabels, D):
    dataMatrix = np.mat(dataArr)
    labelMat = np.mat(classLabels).T
    m, n = np.shape(dataMatrix)
    numSteps = 10.0
    bestStump = {}
    bestClasEst = np.mat(np.zeros((m, 1)))
    minError = np.inf
    for i in range(n):
        rangeMin = dataMatrix[:, i].min()
        rangeMax = dataMatrix[:, i].max()
        stepSize = (rangeMax - rangeMin) / numSteps
        for j in range(-1, int(numSteps) + 1):
            for inequal in ["lt", "gt"]:
                threshVal = rangeMin + float(j) * stepSize
                predictedVals = stumpClassify(dataMatrix, i, threshVal, inequal)
                errArr = np.mat(np.ones((m, 1)))
                # Zero out correctly classified samples, then compute
                # the weighted error rate
                errArr[predictedVals == labelMat] = 0
                weightedError = D.T * errArr
                print("split: dim %d, thresh %.2f, thresh inequal: %s, the weighted error is %.3f" % (i, threshVal, inequal, weightedError))
                if weightedError < minError:
                    minError = weightedError
                    bestClasEst = predictedVals.copy()
                    bestStump["dim"] = i
                    bestStump["ineq"] = inequal
                    bestStump["thresh"] = threshVal
    return bestStump, minError, bestClasEst
# Train AdaBoost using decision stumps (DS) as weak learners
def adaBoostTrainDS(dataArr, classLabels, numIt=40):
    weakClassArr = []
    m = np.shape(dataArr)[0]
    D = np.mat(np.ones((m, 1)) / m)
    aggClassEst = np.mat(np.zeros((m, 1)))
    for i in range(numIt):
        bestStump, error, classEst = buildStump(dataArr, classLabels, D)
        print("D:", D.T)
        # Compute the classifier coefficient alpha; max() guards
        # against division by zero
        alpha = float(0.5 * np.log((1.0 - error) / max(error, 1e-16)))
        bestStump["alpha"] = alpha
        weakClassArr.append(bestStump)
        print("classEst:", classEst.T)
        # Update the sample weights
        expon = np.multiply(-1 * alpha * np.mat(classLabels).T, classEst)
        D = np.multiply(D, np.exp(expon))
        D = D / D.sum()
        # Accumulate the weighted predictions
        aggClassEst += alpha * classEst
        print("aggClassEst:", aggClassEst.T)
        # Compute the training error of the combined classifier so far
        aggErrors = np.multiply(np.sign(aggClassEst) != np.mat(classLabels).T, np.ones((m, 1)))
        errorRate = aggErrors.sum() / m
        print("total error:", errorRate, "\n")
        if errorRate == 0.0:
            break
    return weakClassArr
# Classify new data with the combined (strong) classifier
def adaClassify(datToClass, classifierArr):
    dataMatrix = np.mat(datToClass)
    m = np.shape(dataMatrix)[0]
    aggClassEst = np.mat(np.zeros((m, 1)))
    for i in range(len(classifierArr)):
        classEst = stumpClassify(dataMatrix, classifierArr[i]["dim"], classifierArr[i]["thresh"], classifierArr[i]["ineq"])
        aggClassEst += classifierArr[i]["alpha"] * classEst
        print(aggClassEst)
    return np.sign(aggClassEst)
For exercise 1 at the end of the chapter, input the data:
dataMat = np.mat([[0,1,3],[0,3,1],[1,2,2],[1,1,3],[1,2,3],[0,1,2],[1,1,2],[1,1,1],[1,3,1],[0,2,1]])
classLabels = [-1.0,-1.0,-1.0,-1.0,-1.0,-1.0,1.0,1.0,-1.0,-1.0]
classifierArr = adaBoostTrainDS(dataMat,classLabels,30)
print(classifierArr)
This yields a strong classifier.
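To see the weight-update step concretely, here is a minimal hand computation of one boosting round on this exercise data. For illustration it assumes the first stump is the one that predicts -1 for every sample (its weighted error under the initial uniform weights is 0.2); the stump buildStump actually selects first may differ.

```python
import numpy as np

# Exercise data: 10 samples, 3 features; only samples 6 and 7 (0-indexed) are positive
X = np.array([[0,1,3],[0,3,1],[1,2,2],[1,1,3],[1,2,3],
              [0,1,2],[1,1,2],[1,1,1],[1,3,1],[0,2,1]])
y = np.array([-1,-1,-1,-1,-1,-1,1,1,-1,-1], dtype=float)

m = len(y)
D = np.full(m, 1.0 / m)    # initial uniform weights, 0.1 each
pred = -np.ones(m)         # illustrative stump: predict -1 everywhere

err = D[pred != y].sum()   # weighted error: the two positives -> 0.2
alpha = 0.5 * np.log((1 - err) / max(err, 1e-16))  # 0.5*ln(4) = ln 2 ~ 0.693

D = D * np.exp(-alpha * y * pred)  # up-weight mistakes, down-weight hits
D = D / D.sum()                    # normalize

print(err)    # -> 0.2
print(alpha)  # -> 0.6931... (= ln 2)
print(D)      # positives now 0.25 each, negatives 0.0625 each
```

Since exp(ln 2) = 2, the two misclassified positives are doubled to 0.2 and the eight correct negatives halved to 0.05 before normalizing by the sum 0.8, giving 0.25 and 0.0625 exactly.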
In sklearn, the corresponding class is imported as:
from sklearn.ensemble import AdaBoostClassifier
The AdaBoostClassifier framework has these main parameters:
base_estimator (the base classifier; renamed to estimator in recent sklearn releases),
algorithm (the boosting algorithm, either SAMME or SAMME.R),
n_estimators (the maximum number of weak learners),
learning_rate (the weight-shrinkage coefficient applied to each weak learner).
(The loss parameter, with its linear, square, and exponential options, belongs to AdaBoostRegressor, not AdaBoostClassifier.)
The weak-learner parameters (those of the default DecisionTreeClassifier base estimator) include six of note:
max_features (maximum number of features considered when splitting; default None, can also be "log2" or "sqrt"),
max_depth (maximum tree depth),
min_samples_split (minimum number of samples required to split an internal node),
min_samples_leaf (minimum number of samples required at a leaf node),
min_weight_fraction_leaf (minimum weighted fraction of the samples required at a leaf node),
max_leaf_nodes (maximum number of leaf nodes).
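As a sketch of the sklearn usage on the same exercise data: the default base estimator is a depth-1 decision tree, i.e. exactly the kind of stump built above. (The algorithm argument is left at its default here, since its allowed values differ across sklearn versions.)

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Same exercise data as above
X = np.array([[0,1,3],[0,3,1],[1,2,2],[1,1,3],[1,2,3],
              [0,1,2],[1,1,2],[1,1,1],[1,3,1],[0,2,1]])
y = np.array([-1,-1,-1,-1,-1,-1,1,1,-1,-1])

# Default base estimator is DecisionTreeClassifier(max_depth=1), i.e. a stump
clf = AdaBoostClassifier(n_estimators=30)
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy of the boosted ensemble
```

This reproduces in a few lines what the hand-rolled adaBoostTrainDS / adaClassify pair does above.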