Decision trees are models widely used for both classification and regression tasks.
If tree construction is not constrained, the tree keeps splitting until every leaf node is pure, which causes overfitting.
There are two ways to prevent overfitting: pre-pruning and post-pruning. scikit-learn's trees rely mainly on pre-pruning, for example limiting the maximum depth of the tree or the maximum number of leaf nodes (recent versions also offer cost-complexity post-pruning via the ccp_alpha parameter).
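Besides max_depth, DecisionTreeClassifier exposes several other pre-pruning parameters. A minimal sketch (the values below are purely illustrative, not tuned):
from sklearn.tree import DecisionTreeClassifier

pruned_tree = DecisionTreeClassifier(
    max_depth=4,          # stop splitting beyond depth 4
    max_leaf_nodes=20,    # cap the total number of leaf nodes
    min_samples_leaf=5,   # each leaf must contain at least 5 samples
    random_state=0)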
1. Classification
Below we train a decision tree on the breast cancer dataset and pre-prune it by setting the maximum depth to 4:
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_graphviz
from sklearn.model_selection import train_test_split
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    cancer['data'], cancer['target'], stratify=cancer['target'], random_state=42)
DecTree = DecisionTreeClassifier(max_depth=4, random_state=0)
DecTree.fit(X_train, y_train)
print("Accuracy on training set: {:.3f}".format(DecTree.score(X_train, y_train)))
print("Accuracy on test set: {:.3f}".format(DecTree.score(X_test, y_test)))
The training and test accuracy:
Accuracy on training set: 0.988
Accuracy on test set: 0.951
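For comparison, a tree grown with the default settings keeps splitting until every leaf is pure, so it typically scores 1.000 on the training set but somewhat lower on the test set (the exact numbers depend on the split). A minimal sketch reusing the same data:
full_tree = DecisionTreeClassifier(random_state=0)  # no pre-pruning
full_tree.fit(X_train, y_train)
print("Accuracy on training set: {:.3f}".format(full_tree.score(X_train, y_train)))
print("Accuracy on test set: {:.3f}".format(full_tree.score(X_test, y_test)))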
Next, visualize the tree with graphviz:
import graphviz
export_graphviz(DecTree, out_file="cancer_tree.dot", class_names=["malignant", "benign"],
feature_names=cancer['feature_names'], impurity=False, filled=True)
with open("cancer_tree.dot") as f:
    dot_graph = f.read()
dot = graphviz.Source(dot_graph)
dot.view()
From the tree we can see that some features are never used, while others, such as worst radius, turn out to be very important. Below we plot the importance of each feature:
import numpy as np
import matplotlib.pyplot as plt

n_features = cancer['feature_names'].shape[0]
plt.figure(figsize=(6, 4))
plt.barh(range(n_features), DecTree.feature_importances_)
plt.yticks(np.arange(n_features), cancer['feature_names'])
plt.xlabel("Feature importance")
plt.show()
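To read the importances numerically as well, a small sketch that prints the largest ones (feature_importances_ sums to 1, and features the tree never splits on get 0):
order = np.argsort(DecTree.feature_importances_)[::-1]  # indices sorted by importance, descending
for i in order[:5]:
    print("{:<25s} {:.3f}".format(cancer['feature_names'][i], DecTree.feature_importances_[i]))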
2. Regression
Decision-tree-based regression models have one major drawback: they cannot make predictions outside the range of the training data, i.e. they cannot extrapolate.
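A minimal sketch on synthetic data illustrates this (the sine function and the value ranges are assumptions chosen for illustration). The tree only sees x in [0, 6]; for any x beyond that it simply repeats the prediction of its last leaf, producing a flat line:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeRegressor

X_reg = np.linspace(0, 6, 100).reshape(-1, 1)    # training range: [0, 6]
y_reg = np.sin(X_reg).ravel()
X_all = np.linspace(0, 10, 200).reshape(-1, 1)   # extends beyond the training range

reg_tree = DecisionTreeRegressor(max_depth=5).fit(X_reg, y_reg)

plt.plot(X_all, np.sin(X_all), label="true function")
plt.plot(X_all, reg_tree.predict(X_all), label="tree prediction")  # constant for x > 6
plt.axvline(6, linestyle="--", color="gray")
plt.legend()
plt.show()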