To understand the decision tree algorithm, we first need to be clear about the concepts of information entropy and information gain:
For a class $x_i$ in a classification set, its information is defined as

$$l(x_i) = -\log_2 p(x_i)$$

where $p(x_i)$ is the proportion of records belonging to class $x_i$. The information entropy of the whole set is the expected information over all $n$ classes:

$$H = -\sum_{i=1}^{n} p(x_i) \log_2 p(x_i)$$
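For example, a set split evenly between two classes has $H = -(0.5\log_2 0.5 + 0.5\log_2 0.5) = 1$ bit, the maximum for two classes, while a set containing only one class has $H = 0$.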
A function to compute the Shannon entropy of a dataset:
from math import log

def calcShannonEnt(dataSet):
    numEntries = len(dataSet)          # number of records in the dataset
    labelCount = {}
    for featVec in dataSet:            # for each record
        currentLabel = featVec[-1]     # the class label is the last column
        if currentLabel not in labelCount:  # count occurrences of each label
            labelCount[currentLabel] = 0
        labelCount[currentLabel] += 1
    shannonEnt = 0.0
    for key in labelCount:
        prob = float(labelCount[key]) / numEntries
        shannonEnt -= prob * log(prob, 2)   # accumulate the Shannon entropy
    return shannonEnt
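As a quick sanity check, here is the function applied to a small toy dataset (the dataset itself is assumed here for illustration; any list of records with the class label in the last column works):

>>> myDat = [[1, 1, 'yes'], [1, 1, 'yes'], [1, 0, 'no'], [0, 1, 'no'], [0, 1, 'no']]
>>> calcShannonEnt(myDat)
0.9709505944546686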
Splitting the dataset on a given feature: return the records whose value on that feature equals the given value, with that feature column removed:
def splitDataSet(dataSet, axis, value):
    retDataSet = []
    for featVec in dataSet:
        if featVec[axis] == value:
            reducedFeatVec = featVec[:axis]          # clever use of slicing:
            reducedFeatVec.extend(featVec[axis+1:])  # drop column `axis` by joining the slices around it
            retDataSet.append(reducedFeatVec)
    return retDataSet
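Continuing with the toy dataset above, splitting on feature 0 with value 1 keeps the three records whose first column is 1 and removes that column:

>>> splitDataSet(myDat, 0, 1)
[[1, 'yes'], [1, 'yes'], [0, 'no']]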
Note the difference between extend() and append() in this function:
>>> a = [1, 2, 3]
>>> b = [4, 5, 6]
>>> a.append(b)    # append adds b as a single nested element
>>> a
[1, 2, 3, [4, 5, 6]]
>>> a = [1, 2, 3]
>>> a.extend(b)    # extend appends the elements of b one by one
>>> a
[1, 2, 3, 4, 5, 6]
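This is why splitDataSet uses extend() to flatten the two slices into a single record, but append() to add the finished record to retDataSet as one element.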
Choosing the best feature to split the dataset on:
def chooseBestFeatureToSplit(dataSet):
    numFeatures = len(dataSet[0]) - 1   # number of features; the last column is the class label
    baseEntropy = calcShannonEnt(dataSet)
    bestInfoGain = 0.0
    bestFeature = -1
    for i in range(numFeatures):
        featList = [example[i] for example in dataSet]  # all values in feature column i
        uniqueVals = set(featList)      # set() builds a collection without duplicates
        newEntropy = 0.0
        for value in uniqueVals:        # entropy after splitting on feature i
            subDataSet = splitDataSet(dataSet, i, value)
            prob = len(subDataSet) / float(len(dataSet))
            newEntropy += prob * calcShannonEnt(subDataSet)
        infoGain = baseEntropy - newEntropy  # information gain of feature i
        if infoGain > bestInfoGain:          # keep the feature with the largest gain
            bestInfoGain = infoGain
            bestFeature = i
    return bestFeature
Requirements for the input:
- The data must be stored as a list of lists, and all records must have the same length.
- The last column of each record is the class label.
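On the toy dataset assumed above, feature 0 yields an information gain of about 0.42 bits versus about 0.17 for feature 1, so the function picks feature 0:

>>> chooseBestFeatureToSplit(myDat)
0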
Selecting the class that occurs most often:
import operator

def majorityCnt(classList):
    classCount = {}
    for vote in classList:
        if vote not in classCount:
            classCount[vote] = 0
        classCount[vote] += 1
    sortedClassCount = sorted(classCount.items(),
        key=operator.itemgetter(1), reverse=True)  # note the usage of sorted(): order (label, count) pairs by count, descending
    return sortedClassCount[0][0]
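In tree construction this is typically called when all features have been consumed but a leaf still contains a mix of classes; it returns the most frequent label:

>>> majorityCnt(['yes', 'no', 'no'])
'no'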