Algorithm Introduction
When making an important decision, most of us would consider the opinions of several experts rather than just one person. Machine learning can take the same approach, and this is the idea behind meta-algorithms. A meta-algorithm is a way of combining other algorithms. Here we focus on the most popular meta-algorithm, called AdaBoost. Because some people regard AdaBoost as the best supervised learning method available, it is considered one of the most powerful tools in the machine learning toolbox.
How the Algorithm Works
AdaBoost is short for adaptive boosting, and it runs as follows. Each sample in the training data is assigned a weight, and these weights form a vector D. Initially, all the weights are set to the same value. A weak classifier is first trained on the training data and its error rate is computed; the weak classifier is then trained again on the same dataset. In this second round of training, the weight of every sample is re-adjusted: the weights of samples that were classified correctly in the first round are decreased, while the weights of samples that were misclassified are increased. To combine the outputs of all the weak classifiers into a final result, AdaBoost assigns each classifier a weight alpha, computed from that classifier's error rate. The error rate ε is defined as:

$$\varepsilon = \frac{\text{number of misclassified samples}}{\text{total number of samples}}$$
alpha is then computed as:

$$\alpha = \frac{1}{2}\ln\left(\frac{1-\varepsilon}{\varepsilon}\right)$$
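As a quick numeric check (an illustrative calculation, not part of the original code): a weak classifier with ε = 0.2 gets α = 0.5 ln(0.8/0.2) ≈ 0.693, which is exactly the magnitude of the first value printed in the test results below.

import math

eps = 0.2  # example: a weak classifier that misclassifies 1 of 5 samples
alpha = 0.5 * math.log((1 - eps) / eps)
print(alpha)  # 0.6931471805599453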
(Figure: overall computation flow of the AdaBoost algorithm.)
After alpha has been computed, the weight vector D can be updated so that the weights of correctly classified samples decrease while the weights of misclassified samples increase. D is updated as follows.
If a sample was classified correctly, its weight is changed to:

$$D_i^{(t+1)} = \frac{D_i^{(t)}\,e^{-\alpha}}{\operatorname{Sum}(D)}$$
If a sample was misclassified, its weight is changed to:

$$D_i^{(t+1)} = \frac{D_i^{(t)}\,e^{\alpha}}{\operatorname{Sum}(D)}$$
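Because the label times the prediction is +1 for a correct classification and -1 for an error, both cases collapse into one vectorized expression. A minimal sketch with made-up toy values (this is the same update that adaBoostTrainDS performs below):

from numpy import exp, mat, multiply, ones

D = mat(ones((5, 1)) / 5.0)         # five equal starting weights
y = mat([1., 1., -1., -1., 1.]).T   # true labels
h = mat([-1., 1., -1., -1., 1.]).T  # stump predictions: sample 0 is wrong
alpha = 0.693
D = multiply(D, exp(multiply(-alpha * y, h)))  # e^-alpha if correct, e^alpha if wrong
D = D / D.sum()                                # renormalize so the weights sum to 1
print(D.T)  # sample 0's weight grows to ~0.5, the rest shrink to ~0.125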
After D has been computed, AdaBoost moves on to the next iteration. The algorithm keeps repeating this train-and-reweight process until the training error rate reaches 0 or the number of weak classifiers reaches a user-specified value.
Code Implementation
from numpy import *


def loadSimpData():
    # A tiny 2-D toy dataset: five points with labels +1 / -1.
    datMat = matrix([[1., 2.1],
                     [2., 1.1],
                     [1.3, 1.],
                     [1., 1.],
                     [2., 1.]])
    classLabels = [1.0, 1.0, -1.0, -1.0, 1.0]
    return datMat, classLabels
def loadDataSet(fileName):
    # General function to parse tab-delimited floats; the last column
    # of each line is taken as the class label.
    numFeat = len(open(fileName).readline().split('\t'))  # get number of fields
    dataMat = []
    labelMat = []
    fr = open(fileName)
    for line in fr.readlines():
        lineArr = []
        curLine = line.strip().split('\t')
        for i in range(numFeat - 1):
            lineArr.append(float(curLine[i]))
        dataMat.append(lineArr)
        labelMat.append(float(curLine[-1]))
    return dataMat, labelMat
def stumpClassify(dataMatrix, dimen, threshVal, threshIneq):
    # Classify every sample by thresholding a single feature; samples on
    # the chosen side of the threshold get -1, all the others get +1.
    retArray = ones((shape(dataMatrix)[0], 1))
    if threshIneq == 'lt':
        retArray[dataMatrix[:, dimen] <= threshVal] = -1.0
    else:
        retArray[dataMatrix[:, dimen] > threshVal] = -1.0
    return retArray
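A quick check (illustrative, not in the original listing): thresholding the toy data on feature 0 at 1.5 with the 'lt' rule labels every sample whose first coordinate is at most 1.5 as -1.

datMat, classLabels = loadSimpData()
print(stumpClassify(datMat, 0, 1.5, 'lt').T)  # [[-1.  1. -1. -1.  1.]]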
def buildStump(dataArr, classLabels, D):
    # Find the best decision stump (one-split tree) under weight vector D.
    dataMatrix = mat(dataArr)
    labelMat = mat(classLabels).T
    m, n = shape(dataMatrix)
    numSteps = 10.0
    bestStump = {}
    bestClasEst = mat(zeros((m, 1)))
    minError = inf  # init minimum error to +infinity
    for i in range(n):  # loop over all features
        rangeMin = dataMatrix[:, i].min()
        rangeMax = dataMatrix[:, i].max()
        stepSize = (rangeMax - rangeMin) / numSteps
        for j in range(-1, int(numSteps) + 1):  # loop over thresholds in this feature's range
            for inequal in ['lt', 'gt']:  # try both inequality directions
                threshVal = rangeMin + float(j) * stepSize
                predictedVals = stumpClassify(dataMatrix, i, threshVal, inequal)
                errArr = mat(ones((m, 1)))
                errArr[predictedVals == labelMat] = 0
                weightedError = D.T * errArr  # total error weighted by D
                # print("split: dim %d, thresh %.2f, thresh ineqal: %s, "
                #       "the weighted error is %.3f" % (i, threshVal, inequal, weightedError))
                if weightedError < minError:
                    minError = weightedError
                    bestClasEst = predictedVals.copy()
                    bestStump['dim'] = i
                    bestStump['thresh'] = threshVal
                    bestStump['ineq'] = inequal
    return bestStump, minError, bestClasEst
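With equal initial weights on the toy data, the search settles on feature 0, threshold 1.3, rule 'lt', which misclassifies only the first sample (weighted error 0.2). A quick illustrative run:

datMat, classLabels = loadSimpData()
D = mat(ones((5, 1)) / 5.0)  # equal starting weights
bestStump, minError, bestClasEst = buildStump(datMat, classLabels, D)
print(bestStump)  # e.g. {'dim': 0, 'thresh': 1.3, 'ineq': 'lt'}
print(minError)   # [[0.2]]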
def adaBoostTrainDS(dataArr, classLabels, numIt=40):
    # DS = "decision stump"; trains up to numIt weak classifiers.
    weakClassArr = []
    m = shape(dataArr)[0]
    D = mat(ones((m, 1)) / m)  # init D so all weights are equal
    aggClassEst = mat(zeros((m, 1)))
    for i in range(numIt):
        bestStump, error, classEst = buildStump(dataArr, classLabels, D)  # build stump
        # print("D:", D.T)
        # max(error, 1e-16) guards against division by zero when error == 0
        alpha = float(0.5 * log((1.0 - error) / max(error, 1e-16)))
        bestStump['alpha'] = alpha
        weakClassArr.append(bestStump)  # store stump params in array
        # print("classEst: ", classEst.T)
        # update D: misclassified samples get e^alpha, correct ones e^-alpha
        expon = multiply(-1 * alpha * mat(classLabels).T, classEst)
        D = multiply(D, exp(expon))
        D = D / D.sum()
        # training error of the aggregate classifier; quit early if it hits 0
        aggClassEst += alpha * classEst
        # print("aggClassEst: ", aggClassEst.T)
        aggErrors = multiply(sign(aggClassEst) != mat(classLabels).T, ones((m, 1)))
        errorRate = aggErrors.sum() / m
        print("total error:", errorRate)
        if errorRate == 0.0:
            break
    return weakClassArr, aggClassEst
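On the toy data the loop converges after three stumps; with the debug prints left commented out, each round reports only the aggregate training error (this matches the test results further down):

datMat, classLabels = loadSimpData()
weakClassArr, aggClassEst = adaBoostTrainDS(datMat, classLabels, 9)
# prints: total error: 0.2, total error: 0.2, total error: 0.0
print(len(weakClassArr))  # 3 weak classifiers were enough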
def adaClassify(datToClass, classifierArr):
    # Apply each trained stump, weight its vote by alpha, and take the sign,
    # mirroring the aggClassEst accumulation in adaBoostTrainDS.
    dataMatrix = mat(datToClass)
    m = shape(dataMatrix)[0]
    aggClassEst = mat(zeros((m, 1)))
    for i in range(len(classifierArr)):
        classEst = stumpClassify(dataMatrix, classifierArr[i]['dim'],
                                 classifierArr[i]['thresh'],
                                 classifierArr[i]['ineq'])
        aggClassEst += classifierArr[i]['alpha'] * classEst
        print(aggClassEst)
    return sign(aggClassEst)
Test Code
if __name__ == '__main__':
    dataMat, labels = loadSimpData()
    classifierArr, aggClassEst = adaBoostTrainDS(dataMat, labels, 30)
    result0 = adaClassify([0, 0], classifierArr)
    print(result0)
Test Results
total error: 0.2
total error: 0.2
total error: 0.0
[[-0.69314718]]
[[-1.66610226]]
[[-2.56198199]]
[[-1.]]
Summary
AdaBoost uses a weak learner as its base classifier and feeds it input data weighted by the weight vector. In the first iteration all data points are weighted equally, but in subsequent iterations the weights of points misclassified in the previous round are increased. This ability to adapt to its own errors is the strength of AdaBoost.
Above, a single-level decision tree (a decision stump) was used as the weak learner to build the AdaBoost classifier. In fact, AdaBoost can be built on top of any classifier, as long as that classifier can handle weighted data. AdaBoost is remarkably powerful and can quickly produce good results on datasets that other classifiers struggle with.
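For comparison (a sketch assuming scikit-learn is installed; it is not part of the original code), sklearn's AdaBoostClassifier also uses depth-1 decision trees as its default weak learner:

from numpy import asarray
from sklearn.ensemble import AdaBoostClassifier

datMat, classLabels = loadSimpData()
clf = AdaBoostClassifier(n_estimators=30)
clf.fit(asarray(datMat), classLabels)
print(clf.predict([[0., 0.]]))  # should agree with adaClassify above: [-1.]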