Machine Learning - Decision Tree

Section I: Brief Introduction on Decision Tree

Decision tree classifiers are attractive models when model interpretability is a concern. As the name suggests, we can think of this model as breaking down the data by making decisions based on a series of questions. Notably, the data do not need to be standardized before the model is applied: each split thresholds a single feature, so monotonic rescaling of a feature does not change the resulting splits.
From: Sebastian Raschka, Vahid Mirjalili. Python Machine Learning, 2nd Edition. Nanjing: Southeast University Press, 2018.

Section II: The Definition of Three Criteria
  • Part 1: Entropy
  • Part 2: Gini impurity
  • Part 3: Misclassification error
    The full definitions and derivations of these criteria can be found in the reference cited above; the binary-case formulas are summarized below.
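For a binary node with class-membership probability $p$, the three criteria take the following standard forms, which match the Python functions defined in the code below:

$$I_H(p) = -p\log_2 p - (1-p)\log_2(1-p)$$

$$I_G(p) = 1 - p^2 - (1-p)^2 = 2p(1-p)$$

$$I_E(p) = 1 - \max(p,\ 1-p)$$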
import matplotlib.pyplot as plt
import numpy as np
plt.rcParams['figure.dpi']=200
plt.rcParams['savefig.dpi']=200
font = {'family': 'Times New Roman',
        'weight': 'light'}
plt.rc("font", **font)

#Section I: Mathematical formulations of the three criteria (binary case)
def gini(p):
    # Gini impurity for a binary node: 1 - p^2 - (1-p)^2
    return 1-p*p-(1-p)*(1-p)

def entropy(p):
    # Shannon entropy in bits; undefined at p = 0 or p = 1
    return -p*np.log2(p)-(1-p)*np.log2((1-p))

def error(p):
    # Misclassification error: 1 - max(p, 1-p)
    return 1-np.max([p,1-p])

#Section II: Calculate three criteria
x=np.arange(0.0,1.0,0.01)
ent=[entropy(p) if p!=0 else None for p in x]  # skip p=0, where log2 is undefined
sc_ent=[e*0.5 if e else None for e in ent]     # entropy scaled by 0.5 so its peak matches Gini
err=[error(i) for i in x]

#Section III: Visualize the criteria distribution
fig,ax=plt.subplots()
for i,lab,ls,c in zip([ent,sc_ent,gini(x),err],
                      ['Entropy',"Entropy Scaled","Gini Impurity","Misclassification Error"],
                      ['-','-','--','-.'],
                      ['black','lightgray','red','green']):
    line=ax.plot(x,i,label=lab,linestyle=ls,lw=2,color=c)
ax.legend(loc='upper right')
ax.axhline(y=0.5,linewidth=1,color='k',linestyle='--')
ax.axhline(y=1.0,linewidth=1,color='k',linestyle='--')
plt.ylim([0,1.1])
plt.xlabel('p(i=1)')
plt.ylabel('Impurity Index')
plt.savefig('./fig1.png')
plt.show()

[Figure 1: fig1.png — entropy, scaled entropy, Gini impurity, and misclassification error versus p(i=1)]
As the figure shows, the closer the predicted probability gets to either extreme (0 or 1), the lower every criterion, i.e. the purer the node. Conversely, the more ambiguous the predicted probability, the higher the impurity and the lower the node purity.
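A quick numerical check with the functions defined above illustrates this, comparing a maximally impure node (p = 0.5) with a nearly pure one (p = 0.9):

for p in [0.5, 0.9]:
    # All three criteria peak at p = 0.5 and shrink toward the extremes
    print(f'p={p}: entropy={entropy(p):.3f}, '
          f'gini={gini(p):.3f}, error={error(p):.3f}')
# p=0.5: entropy=1.000, gini=0.500, error=0.500
# p=0.9: entropy=0.469, gini=0.180, error=0.100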

Section III: Decision Tree via sklearn
import matplotlib.pyplot as plt
from sklearn import datasets
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from DecisionTrees.visualize_test_idx import plot_decision_regions  # local helper for plotting decision boundaries

plt.rcParams['figure.dpi']=200
plt.rcParams['savefig.dpi']=200
font = {'family': 'Times New Roman',
        'weight': 'light'}
plt.rc("font", **font)

#Section 1: Load data and split it into train/test dataset
iris=datasets.load_iris()
X=iris.data[:,[2,3]]
y=iris.target
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.3,random_state=1,stratify=y)

#Section 2: Build a decision tree
tree=DecisionTreeClassifier(criterion='gini',
                            max_depth=4,
                            random_state=1)
tree.fit(X_train,y_train)
X_combined=np.vstack([X_train,X_test])
y_combined=np.hstack([y_train,y_test])

plot_decision_regions(X=X_combined,
                      y=y_combined,
                      classifier=tree,
                      test_idx=range(105,150))
plt.xlabel('petal length [cm]')
plt.ylabel('petal width [cm]')
plt.legend(loc='upper left')
plt.savefig('./fig2.png')
plt.show()

[Figure 2: fig2.png — decision regions of the fitted tree on the combined train/test data, with the test samples highlighted]
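As a quick sanity check not included in the original, the accuracy of the fitted tree on the held-out split can be printed with sklearn's built-in score method:

# Accuracy on the training split and the held-out 30% test split
print('Train accuracy: %.3f' % tree.score(X_train, y_train))
print('Test accuracy:  %.3f' % tree.score(X_test, y_test))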

Section IV: Visualize Decision Tree in PNG Format
#Section 3: Visualize tree in PNG format
from pydotplus import graph_from_dot_data
from sklearn.tree import export_graphviz

dot_data=export_graphviz(tree,
                         filled=True,
                         rounded=True,
                         class_names=['Setosa','Versicolor','Virginica'],
                         feature_names=['petal length','petal width'],
                         out_file=None)
graph=graph_from_dot_data(dot_data)
graph.write_png('./tree.png')

[Figure 3: tree.png — the exported decision tree]
The figure above shows the tree's decision process: each node reports its Gini impurity, predicted class label, sample count, and so on.
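If Graphviz/pydotplus is not available, newer scikit-learn versions (0.21+) can render or print the same node information directly; a minimal sketch, reusing the tree fitted above:

from sklearn.tree import plot_tree, export_text

# Render the fitted tree with pure matplotlib (no Graphviz required)
plot_tree(tree,
          filled=True,
          rounded=True,
          class_names=['Setosa','Versicolor','Virginica'],
          feature_names=['petal length','petal width'])
plt.show()

# Or dump the same splits as plain text
print(export_text(tree, feature_names=['petal length','petal width']))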

References
Sebastian Raschka, Vahid Mirjalili. Python Machine Learning, 2nd Edition. Nanjing: Southeast University Press, 2018.
