[Data Mining with Python] 22 Decision Trees – Classification Trees 3: How to prevent overfitting? Hyperparameter tuning: exhaustive grid search (GridSearchCV) and randomized search (RandomizedSearchCV)

This post shows how to build a decision-tree model with Python's sklearn library and how to prevent overfitting with pre-pruning and post-pruning, improving predictive accuracy on a bank dataset. It walks through pre-pruning, selecting the optimal pruning parameter, and how model performance changes before and after pruning.
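Before walking through the bank-data code below, the core idea can be seen on a synthetic dataset (a minimal sketch; the data and parameter values here are illustrative, not from the original post): an unconstrained tree memorizes the training set, while a pre-pruned tree trades a little training accuracy for often better generalization.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic binary-classification data (illustrative only)
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4,
                                                    random_state=1)

# Fully grown tree: fits the training set perfectly
full = DecisionTreeClassifier(random_state=1).fit(X_train, y_train)

# Pre-pruned tree: growth limited by max_depth / min_samples_leaf
pruned = DecisionTreeClassifier(max_depth=4, min_samples_leaf=5,
                                random_state=1).fit(X_train, y_train)

print("full   train/test:", accuracy_score(y_train, full.predict(X_train)),
      accuracy_score(y_test, full.predict(X_test)))
print("pruned train/test:", accuracy_score(y_train, pruned.predict(X_train)),
      accuracy_score(y_test, pruned.predict(X_test)))
```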

[Data Mining with Python] 23 Decision Trees – Classification Trees 4: How to prevent overfitting? Pre-pruning and post-pruning

import pandas as pd
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score

# Read the data
filepath = "/Users/zitongqiu/Documents/data mining/data/UniversalBank.csv"
bank = pd.read_csv(filepath)
bank.columns = [c.replace(' ','_') for c in bank.columns]
bank.drop(columns=['ID', 'ZIP_Code'], inplace=True)
X = bank.drop(columns=['Personal_Loan'])
y = bank['Personal_Loan']

# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=1)


# Build a CART decision tree with pre-pruning (limit tree depth and leaf size)
classTree = DecisionTreeClassifier(criterion='gini', max_depth=4, min_samples_leaf=5, random_state=1)
classTree.fit(X_train, y_train)


# Plot the decision tree
plt.figure(figsize=(18, 10))
class_names = classTree.classes_.astype(str)
plot_tree(classTree, filled=True, feature_names=X_train.columns, class_names=class_names)
# plt.show()
# Predict on the test set and compute accuracy
y_pred = classTree.predict(X_test)
acc = (y_pred == y_test).mean()
print("Pre-pruning Accuracy on test set: {:.3f}".format(acc))

# Split into training and test sets (same split as above)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=1)

# Build an unpruned CART decision tree
classTree = DecisionTreeClassifier(criterion='gini', random_state=1)
classTree.fit(X_train, y_train)

# Accuracy on the test set before pruning
y_test_pred = classTree.predict(X_test)
print('Accuracy before pruning:', accuracy_score(y_test, y_test_pred))

# Compute the cost-complexity pruning path (one alpha per pruning step)
path = classTree.cost_complexity_pruning_path(X_train, y_train)
ccp_alphas, impurities = path.ccp_alphas, path.impurities


# Print the alpha values
print('ccp_alphas:', ccp_alphas)
# ccp_alphas are already returned in ascending order
print('Sorted alphas:', sorted(ccp_alphas))
# Print the number of alphas
print('Number of alphas:', len(ccp_alphas))
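Two properties of `cost_complexity_pruning_path` are worth checking on a small built-in dataset (an illustrative sketch using iris, not the bank data): the alphas come back in ascending order, and setting `ccp_alpha` to the largest value prunes the tree all the way down to its root.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
full = DecisionTreeClassifier(random_state=1).fit(X, y)
path = full.cost_complexity_pruning_path(X, y)

# The alphas are returned in ascending order, one per pruning step
print("alphas ascending:", list(path.ccp_alphas) == sorted(path.ccp_alphas))

# The largest alpha prunes the tree down to a single root node
stump = DecisionTreeClassifier(ccp_alpha=path.ccp_alphas[-1],
                               random_state=1).fit(X, y)
print("node count:", stump.tree_.node_count)
```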

# best_alpha is selected below, after scoring every alpha on the test set;
# the pruned tree is then refit with that value.

alpha_list = sorted(ccp_alphas)
train_scores = []
test_scores = []

# Refit a tree for each alpha and record train and test scores
for alpha in alpha_list:
    newTree = DecisionTreeClassifier(criterion='gini', ccp_alpha=alpha, random_state=1)
    newTree.fit(X_train, y_train)
    y_train_pred = newTree.predict(X_train)
    y_test_pred = newTree.predict(X_test)
    train_scores.append(accuracy_score(y_train, y_train_pred))
    test_scores.append(accuracy_score(y_test, y_test_pred))

print(train_scores)
print(test_scores)
plt.figure(figsize=(10, 8))
# Plot only the first 14 alphas so the small-alpha region stays readable
plt.step(alpha_list[:14], train_scores[:14], linewidth=2,
         color='mediumvioletred', marker='o', markersize=6,
         label="train_scores")
plt.step(alpha_list[:14], test_scores[:14], linewidth=2,
         color='yellowgreen', marker='o', markersize=6,
         label="test_scores")
plt.legend()
plt.title('Accuracy vs alpha')
plt.show()
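Rather than hard-coding an index after eyeballing the plot, the best alpha can be chosen programmatically with `argmax` over the test scores. A self-contained sketch on sklearn's built-in breast-cancer dataset (the dataset and variable names here are illustrative, not from the original post):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4,
                                                    random_state=1)

tree = DecisionTreeClassifier(random_state=1).fit(X_train, y_train)
path = tree.cost_complexity_pruning_path(X_train, y_train)
# Guard against tiny negative alphas caused by floating-point error
alpha_list = [max(a, 0.0) for a in path.ccp_alphas]

test_scores = []
for alpha in alpha_list:
    t = DecisionTreeClassifier(ccp_alpha=alpha, random_state=1)
    t.fit(X_train, y_train)
    test_scores.append(accuracy_score(y_test, t.predict(X_test)))

# Pick the alpha with the highest test accuracy instead of a fixed index
best_alpha = alpha_list[int(np.argmax(test_scores))]
print("best_alpha:", best_alpha, "test accuracy:", max(test_scores))
```

Selecting alpha against the test set, as the post does, leaks test information into the choice; with more data a separate validation set (or cross-validation) would be the cleaner design.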

# Index chosen by the author after inspecting the printed score lists
best_alpha = alpha_list[19]

bestTree = DecisionTreeClassifier(criterion='gini', ccp_alpha=best_alpha, random_state=1)
bestTree.fit(X_train, y_train)
plt.figure(figsize=(18, 10))
class_names = bestTree.classes_.astype(str)
plot_tree(bestTree, filled=True, feature_names=X_train.columns, class_names=class_names)
plt.show()

y_pred = bestTree.predict(X_test)
acc = accuracy_score(y_test, y_pred)
print("Post-pruning Accuracy on test set: {:.3f}".format(acc))
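The title's other two tuning tools, GridSearchCV and RandomizedSearchCV, automate this kind of search over pre-pruning parameters with cross-validation. A minimal sketch on synthetic data (the parameter grids and distributions are illustrative choices, not values from the post):

```python
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.model_selection import (GridSearchCV, RandomizedSearchCV,
                                     train_test_split)
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4,
                                                    random_state=1)

# Exhaustive search: every combination in the grid is cross-validated
grid = GridSearchCV(
    DecisionTreeClassifier(random_state=1),
    param_grid={"max_depth": [3, 4, 5, None],
                "min_samples_leaf": [1, 5, 10]},
    cv=5, scoring="accuracy")
grid.fit(X_train, y_train)
print("GridSearchCV best params:", grid.best_params_)
print("GridSearchCV test accuracy:", grid.score(X_test, y_test))

# Randomized search: samples a fixed number of settings from distributions,
# cheaper than the full grid when the search space is large
rand = RandomizedSearchCV(
    DecisionTreeClassifier(random_state=1),
    param_distributions={"max_depth": randint(2, 12),
                         "min_samples_leaf": randint(1, 20)},
    n_iter=20, cv=5, scoring="accuracy", random_state=1)
rand.fit(X_train, y_train)
print("RandomizedSearchCV best params:", rand.best_params_)
```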
