Computing the decision-tree impurity for each alpha
The CCP algorithm
For a subtree T_t, CCP defines a cost, a complexity, and a parameter α that trades the two off. The cost is the increase in misclassified samples (i.e. in weighted impurity) incurred when the subtree T_t is replaced by a single leaf; the complexity is the number of leaves removed by that pruning, |T_t| - 1; and α measures how much reduction in complexity is obtained per unit of added cost, α = (R(t) - R(T_t)) / (|T_t| - 1). After the tree has been fully grown, it is pruned to minimize the loss function R_α(T) = R(T) + α|T|, where R(T) is the total weighted impurity of the leaves and |T| is the number of leaves. Because the loss accounts for both the cost (accuracy) and the complexity of the tree, the method is called cost-complexity pruning; in essence it strikes a balance between tree complexity and accuracy. Note: in sklearn, if criterion is set to gini, each leaf's impurity L_i is its Gini index; if set to entropy, it is the entropy.
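The α formula above can be sketched as a small helper with hypothetical numbers (the function name and the example values are illustrative, not part of sklearn):

```python
# Sketch: the effective alpha at which collapsing an internal node t
# into a leaf becomes worthwhile under cost-complexity pruning:
# alpha_eff(t) = (R(t) - R(T_t)) / (|T_t| - 1), where R is the weighted
# impurity and |T_t| is the number of leaves in the subtree rooted at t.
def effective_alpha(node_impurity, subtree_impurity, n_subtree_leaves):
    return (node_impurity - subtree_impurity) / (n_subtree_leaves - 1)

# Hypothetical numbers: collapsing a 3-leaf subtree raises the weighted
# impurity from 0.02 to 0.05, so alpha_eff = (0.05 - 0.02) / 2 = 0.015.
print(effective_alpha(0.05, 0.02, 3))
```

For any alpha below this threshold the subtree is kept; at or above it, pruning the subtree yields a lower loss R_α(T).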
Procedure
Import the required packages
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_iris
import matplotlib.pyplot as plt
from sklearn import tree
from sklearn.metrics import accuracy_score
Split the data into features and labels
X,y=load_iris(return_X_y=True)
Split the data into a training set and a test set
X_train,X_test,Y_train,Y_test=train_test_split(X,y,random_state=3)
Instantiate a decision tree with all parameters at their defaults
clf=DecisionTreeClassifier(random_state=0)
Fit the training data
clf.fit(X_train,Y_train)
Predict on the test set
pred=clf.predict(X_test)
Score the predictions
print(accuracy_score(Y_test,pred))
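Before pruning, it helps to see how large the default, fully grown tree is. A minimal self-contained sketch (same data and random seeds as above; the exact numbers depend on the split):

```python
# Sketch: measure the size of the unpruned tree to see how complex the
# default fit is before cost-complexity pruning is applied.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, Y_train, Y_test = train_test_split(X, y, random_state=3)

clf = DecisionTreeClassifier(random_state=0).fit(X_train, Y_train)
# Depth and leaf count of the fully grown tree
print(clf.get_depth(), clf.get_n_leaves())
```

Larger depth and leaf counts here are exactly what ccp_alpha will trade away against accuracy.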
Obtain the candidate ccp_alpha values
path=clf.cost_complexity_pruning_path(X_train,Y_train)
### ccp_alphas holds the effective alphas at which pruning occurs; impurities holds the total leaf impurity of the tree pruned at each alpha
ccp_alphas, impurities = path.ccp_alphas, path.impurities
Compute the total leaf impurity of the tree fitted at each ccp_alpha
clfs = []
tree_impurities = []
for ccp_alpha in ccp_alphas:
    clf = DecisionTreeClassifier(min_samples_split=10, random_state=0, ccp_alpha=ccp_alpha)
    clf.fit(X_train, Y_train)
    clfs.append(clf)
    # A node is a leaf when it has no left child
    is_leaf = clf.tree_.children_left == -1
    # Sample-weighted sum of the leaf impurities, normalized by the size of
    # the training set the trees were fitted on (not the full dataset)
    tree_impurity = (clf.tree_.impurity[is_leaf] * clf.tree_.n_node_samples[is_leaf] / len(Y_train)).sum()
    tree_impurities.append(tree_impurity)
print(tree_impurities)
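A natural follow-up is to pick the ccp_alpha whose pruned tree generalizes best. A self-contained sketch (selecting by test-set accuracy for brevity; in practice cross-validation on the training set is preferable):

```python
# Sketch: refit a pruned tree at each candidate alpha and keep the one
# that scores highest on held-out data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, Y_train, Y_test = train_test_split(X, y, random_state=3)

base = DecisionTreeClassifier(random_state=0).fit(X_train, Y_train)
path = base.cost_complexity_pruning_path(X_train, Y_train)

scores = []
for a in path.ccp_alphas:
    pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=a).fit(X_train, Y_train)
    scores.append(pruned.score(X_test, Y_test))

best = path.ccp_alphas[scores.index(max(scores))]
print(best, max(scores))
```

The largest alpha prunes the tree down to its root, so accuracy typically rises and then collapses as alpha grows; the best value sits somewhere in between.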