A decision tree makes decisions based on a tree structure: it consists of one root node, several internal nodes, and several leaf nodes.
How do we choose the attribute to split on?
1. The most common approach is to compute the information gain.
Information entropy is the most widely used measure of the purity of a sample set: the smaller the entropy, the higher the purity. It quantifies uncertainty.
Information gain is used to select the splitting attribute: choose the attribute with the largest gain. For example, if the root node's entropy is 0.998, its uncertainty is 0.998; if an attribute's information gain is 0.109, splitting on it reduces that uncertainty by 0.109.
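As a concrete check of these numbers, entropy and information gain can be computed directly. The 0.998 figure matches, for instance, a set of 17 samples with 8 positive and 9 negative labels (those counts are an assumption chosen for illustration, not stated above):

```python
import numpy as np

def entropy(labels):
    """Information entropy of a label array, in bits (log base 2)."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def info_gain(labels, groups):
    """Entropy of the whole set minus the weighted entropy after splitting."""
    n = len(labels)
    weighted = sum(len(g) / n * entropy(g) for g in groups)
    return entropy(labels) - weighted

# 8 positive and 9 negative samples -> entropy of about 0.998
y = np.array([1] * 8 + [0] * 9)
print(round(entropy(y), 3))  # 0.998

# A split that separates the classes perfectly recovers all the entropy as gain
print(round(info_gain(y, [y[:8], y[8:]]), 3))  # 0.998
```

A split's gain can never exceed the parent's entropy, which is why the perfect split above attains it exactly.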
The basic steps to build a decision tree are:
1. Start by treating all records as a single node;
2. Enumerate every candidate split of every variable and, using the chosen splitting criterion, find the best split point;
3. Split the node into two child nodes N1 and N2;
4. Repeat steps 2-3 on N1 and N2 until every node is sufficiently "pure".
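The steps above can be sketched as a minimal recursive splitter. This is a toy illustration using information gain over exhaustively tried thresholds, not the algorithm scikit-learn uses internally:

```python
import numpy as np

def entropy(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_split(X, y):
    """Step 2: try every threshold of every feature and keep the split
    with the highest information gain."""
    best = (None, None, 0.0)  # (feature index, threshold, gain)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            if len(left) == 0 or len(right) == 0:
                continue
            gain = entropy(y) - (len(left) * entropy(left)
                                 + len(right) * entropy(right)) / len(y)
            if gain > best[2]:
                best = (j, t, gain)
    return best

def build(X, y):
    """Steps 1, 3, 4: start with all records in one node, split into
    N1 and N2, and recurse until every node is pure."""
    if len(np.unique(y)) == 1:          # node is pure -> leaf
        return y[0]
    j, t, gain = best_split(X, y)
    if j is None:                       # no split improves purity
        return np.bincount(y).argmax()
    mask = X[:, j] <= t
    return {'feature': j, 'threshold': t,
            'N1': build(X[mask], y[mask]),
            'N2': build(X[~mask], y[~mask])}
```

On a toy dataset such as `X = [[1], [2], [3], [4]]` with labels `[0, 0, 1, 1]`, `build` finds the threshold 2 on feature 0 and produces two pure leaves in a single split.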
# -*- coding: utf-8 -*-
import numpy as np

# Load the iris dataset
from sklearn.datasets import load_iris
iris = load_iris()
'''
print(iris.data)        # feature matrix
print(iris.target)      # true labels
print(len(iris.target))
print(iris.data.shape)  # 150 samples, 4 features each
'''
'''
Key step: split the dataset into a training set and a test set, 120/30.
80% training: rows 0-40, 50-90, 100-140
20% testing:  rows 40-50, 90-100, 140-150
'''
# Training set
train_data = np.concatenate((iris.data[0:40, :], iris.data[50:90, :], iris.data[100:140, :]), axis=0)
# Training labels
train_target = np.concatenate((iris.target[0:40], iris.target[50:90], iris.target[100:140]), axis=0)
# Test set
test_data = np.concatenate((iris.data[40:50, :], iris.data[90:100, :], iris.data[140:150, :]), axis=0)
# Test labels
test_target = np.concatenate((iris.target[40:50], iris.target[90:100], iris.target[140:150]), axis=0)

# Import the decision tree classifier
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
# Fit on the training data and training labels only
clf.fit(train_data, train_target)
print(clf)

# Predict on the test set
predict_target = clf.predict(test_data)
print(predict_target)
# Count how many predictions match the true labels
print(sum(predict_target == test_target))

# Precision, recall, and F1 score
from sklearn import metrics
print(metrics.classification_report(test_target, predict_target))
print(metrics.confusion_matrix(test_target, predict_target))

# Take the first two feature columns of the test set for plotting
X = test_data
L1 = [n[0] for n in X]
print(L1)
L2 = [n[1] for n in X]
print(L2)

# Scatter plot colored by predicted class
import matplotlib.pyplot as plt
plt.scatter(L1, L2, c=predict_target, marker='x')  # cmap=plt.cm.Paired
plt.title("DecisionTreeClassifier")
plt.show()
Run results:
DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=None,
max_features=None, max_leaf_nodes=None,
min_impurity_split=1e-07, min_samples_leaf=1,
min_samples_split=2, min_weight_fraction_leaf=0.0,
presort=False, random_state=None, splitter='best')
[0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2]
30
precision recall f1-score support
0 1.00 1.00 1.00 10
1 1.00 1.00 1.00 10
2 1.00 1.00 1.00 10
avg / total 1.00 1.00 1.00 30
[[10 0 0]
[ 0 10 0]
[ 0 0 10]]
[5.0, 4.5, 4.4, 5.0, 5.1, 4.8, 5.1, 4.6, 5.3, 5.0, 5.5, 6.1, 5.8, 5.0, 5.6, 5.7, 5.7, 6.2, 5.1, 5.7, 6.7, 6.9, 5.8, 6.8, 6.7, 6.7, 6.3, 6.5, 6.2, 5.9]
[3.5, 2.3, 3.2, 3.5, 3.8, 3.0, 3.8, 3.2, 3.7, 3.3, 2.6, 3.0, 2.6, 2.3, 2.7, 3.0, 2.9, 2.9, 2.5, 2.8, 3.1, 3.1, 2.7, 3.2, 3.3, 3.0, 2.5, 3.0, 3.4, 3.0]