ID3 Decision Tree Code (Python)
The previous section introduced the recursive framework for building a decision tree (the create procedure); this section implements a small program in Python. The code and data are taken, with some modifications and omissions, from Machine Learning in Action by Peter Harrington (Chinese edition published by Posts & Telecom Press).
Part of the code is given below:
1. Computing the Shannon entropy
from math import log

def calcShannonEnt(dataSet):
    numEntries = len(dataSet)                    # total number of examples
    labelCounts = {}
    for featVec in dataSet:                      # take each example in turn
        currentLabel = featVec[-1]               # the class label is the last column
        if currentLabel not in labelCounts.keys():
            labelCounts[currentLabel] = 0        # count occurrences of each class
        labelCounts[currentLabel] += 1
    shannonEnt = 0.0                             # initialize the entropy
    for key in labelCounts:
        prob = float(labelCounts[key]) / numEntries
        shannonEnt -= prob * log(prob, 2)        # log base 2
    return shannonEnt
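The interactive session below also calls trees.createDataSet(), which is not reproduced in this excerpt. Judging from the output shown, it is essentially the following (a minimal sketch; the exact version is in the book's trees.py):

def createDataSet():
    # Toy dataset from the book: two binary features plus a class label.
    dataSet = [[1, 1, 'yes'],
               [1, 1, 'yes'],
               [1, 0, 'no'],
               [0, 1, 'no'],
               [0, 1, 'no']]
    labels = ['no surfacing', 'flippers']        # names of the two features
    return dataSet, labels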
At the Python interactive prompt, enter the following:
>>> import trees
>>> myDat, labels = trees.createDataSet()
>>> myDat
[[1, 1, 'yes'], [1, 1, 'yes'], [1, 0, 'no'], [0, 1, 'no'], [0, 1, 'no']]
>>> labels
['no surfacing', 'flippers']
>>> trees.calcShannonEnt(myDat)
0.9709505944546686
The entropy of this dataset is therefore about 0.97.
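As a quick sanity check: the dataset contains 2 'yes' labels and 3 'no' labels, so the entropy is -(2/5)*log2(2/5) - (3/5)*log2(3/5) ≈ 0.971, which matches the value printed above.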
2. Choosing the best feature to split the dataset (the ID3 criterion)
def chooseBestFeatureToSplit(dataSet):
    numFeatures = len(dataSet[0]) - 1            # number of features (the last column is the class label)
    baseEntropy = calcShannonEnt(dataSet)
    bestInfoGain = 0.0; bestFeature = -1
    for i in range(numFeatures):                 # evaluate each feature in turn
        featList = [example[i] for example in dataSet]    # all values of feature i
        uniqueVals = set(featList)               # deduplicate the feature values
        newEntropy = 0.0
        for value in uniqueVals:                 # for each distinct value of feature i
            subDataSet = splitDataSet(dataSet, i, value)  # subset of rows where feature i == value
            prob = len(subDataSet) / float(len(dataSet))
            newEntropy += prob * calcShannonEnt(subDataSet)   # weighted entropy after the split
        infoGain = baseEntropy - newEntropy      # information gain of splitting on feature i
        if infoGain > bestInfoGain:              # keep the feature with the largest gain so far
            bestInfoGain = infoGain
            bestFeature = i
    return bestFeature
The helper function that splits the dataset on a given feature:
def splitDataSet(dataSet, axis, value):
    retDataSet = []
    for featVec in dataSet:
        if featVec[axis] == value:
            reducedFeatVec = featVec[:axis]      # chop out the axis used for splitting
            reducedFeatVec.extend(featVec[axis+1:])
            retDataSet.append(reducedFeatVec)
    return retDataSet
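For example, with the dataset above, splitDataSet(myDat, 0, 1) keeps the three rows whose feature 0 equals 1 and removes that column, so it should return:
>>> trees.splitDataSet(myDat, 0, 1)
[[1, 'yes'], [1, 'yes'], [0, 'no']]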
At the Python interactive prompt, enter the following:
>>> trees.chooseBestFeatureToSplit(myDat)
0
>>> myDat
[[1, 1, 'yes'], [1, 1, 'yes'], [1, 0, 'no'], [0, 1, 'no'], [0, 1, 'no']]
This shows that splitting on feature 0 is currently the optimal choice.
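It is easy to verify this by hand. Splitting on feature 0 yields the subsets {yes, yes, no} and {no, no}, so the weighted entropy after the split is (3/5)*0.918 + (2/5)*0 ≈ 0.551 and the information gain is 0.971 - 0.551 ≈ 0.420. Splitting on feature 1 yields {yes, yes, no, no} and {no}, a weighted entropy of (4/5)*1.0 + (1/5)*0 = 0.8 and a gain of only about 0.171, so feature 0 wins.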
3. Recursively building the decision tree
def createTree(dataSet, labels):
    classList = [example[-1] for example in dataSet]
    if classList.count(classList[0]) == len(classList):   # stop splitting: all examples belong to the same class (exit 1)
        return classList[0]                                # return that class label
    if len(dataSet[0]) == 1:                               # stop splitting: only the class column is left, no features remain (exit 2)
        return majorityCnt(classList)                      # label the node with the majority class
    bestFeat = chooseBestFeatureToSplit(dataSet)
    bestFeatLabel = labels[bestFeat]                       # name of the best feature
    myTree = {bestFeatLabel: {}}                           # create the node
    del(labels[bestFeat])                                  # remove the feature from the feature list (each feature is used at most once)
    featValues = [example[bestFeat] for example in dataSet]
    uniqueVals = set(featValues)                           # distinct values of the best feature
    for value in uniqueVals:                               # for each value of the best feature
        subLabels = labels[:]                              # copy the remaining feature names for the subtree
        myTree[bestFeatLabel][value] = createTree(splitDataSet(dataSet, bestFeat, value), subLabels)   # recurse
    return myTree
The helper function that returns the class with the most examples:
import operator

def majorityCnt(classList):
    classCount = {}
    for vote in classList:
        if vote not in classCount.keys():
            classCount[vote] = 0
        classCount[vote] += 1
    # sort classes by count, descending (the book uses iteritems(), which is Python 2 only)
    sortedClassCount = sorted(classCount.items(), key=operator.itemgetter(1), reverse=True)
    return sortedClassCount[0][0]
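Note that majorityCnt is only reached when every feature has been used up but the remaining examples still carry conflicting labels; in that case the node is labelled with the most frequent class. For instance, majorityCnt(['yes', 'no', 'no']) returns 'no'.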
At the Python interactive prompt, enter the following:
>>> myTree = trees.createTree(myDat,labels)
>>> myTree
{'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}
The decision tree myTree is stored as the nested dictionary shown above: {'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}. The key 'no surfacing' is the feature tested at the root, and each of its values leads either to a class label ('no') or to a further subtree ('flippers').
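The book only uses such a tree for classification in a later section, but a minimal sketch of how this nested dictionary could be traversed to predict the class of a new example looks like the following (the name classify and its signature are my own here, not code from the book):

def classify(inputTree, featLabels, testVec):
    featLabel = list(inputTree.keys())[0]        # feature tested at this node
    featIndex = featLabels.index(featLabel)      # its column index in the test vector
    subTree = inputTree[featLabel][testVec[featIndex]]
    if isinstance(subTree, dict):                # internal node: keep descending
        return classify(subTree, featLabels, testVec)
    return subTree                               # leaf: the class label

>>> classify(myTree, ['no surfacing', 'flippers'], [1, 0])
'no'

Note that createTree deletes entries from the labels list it is given (the del above), so pass a fresh copy of the feature names when classifying.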
4. Plotting the tree
Python's Matplotlib can be used to draw the tree as a diagram; this is left as an exercise for the reader!
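As one possible starting point, here is a minimal sketch (this is not the book's treePlotter.py; the layout logic and function names are my own, and only standard Matplotlib calls such as annotate and text are used). It lays the leaves out evenly along the x-axis, draws one tree level per step down the y-axis, and labels each edge with the feature value it corresponds to:

import matplotlib.pyplot as plt

def numLeafs(tree):
    # Number of leaf nodes under this (sub)tree; anything that is not a dict is a leaf.
    if not isinstance(tree, dict):
        return 1
    feat = list(tree.keys())[0]
    return sum(numLeafs(child) for child in tree[feat].values())

def plotNode(ax, text, xy, parentXy):
    if parentXy is None:                         # root node: no arrow to draw
        ax.text(xy[0], xy[1], text, ha='center', va='center',
                bbox=dict(boxstyle='round', fc='0.9'))
    else:                                        # draw the node and an arrow from its parent
        ax.annotate(text, xy=parentXy, xytext=xy, ha='center', va='center',
                    bbox=dict(boxstyle='round', fc='0.9'),
                    arrowprops=dict(arrowstyle='<-'))

def plotTree(tree, ax, xLeft, xRight, y, dy, parentXy=None):
    x = (xLeft + xRight) / 2.0                   # center this node over its leaves
    if not isinstance(tree, dict):               # leaf: just draw the class label
        plotNode(ax, str(tree), (x, y), parentXy)
        return
    feat = list(tree.keys())[0]
    plotNode(ax, feat, (x, y), parentXy)
    total = numLeafs(tree)
    left = xLeft
    for value, child in tree[feat].items():
        width = (xRight - xLeft) * numLeafs(child) / float(total)
        childX = left + width / 2.0
        ax.text((x + childX) / 2.0, y - dy / 2.0, str(value))   # edge label (feature value)
        plotTree(child, ax, left, left + width, y - dy, dy, parentXy=(x, y))
        left += width

myTree = {'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}
fig, ax = plt.subplots()
ax.set_axis_off()
ax.set_xlim(0, 1)
ax.set_ylim(-0.2, 1.2)
plotTree(myTree, ax, 0.0, 1.0, 1.0, 0.5)
plt.show()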