Decision Trees: Overfitting and Underfitting

This article explores the overfitting and underfitting problems that come up when using scikit-learn's decision tree algorithm. By tuning parameters such as the tree depth, min_samples_split, and min_samples_leaf, and by applying pruning strategies, we can improve the model's performance on both the training and test sets. Overfitting shows up as an inflated AUC on the training set, while underfitting drives AUC down across the board. Understanding and managing the model's generalization ability through the bias-variance tradeoff is the key idea.

This article uses the same dataset as the previous post: US individual income data.

Using Decision Trees With Scikit-Learn

scikit-learn implements decision trees as DecisionTreeClassifier for classification and DecisionTreeRegressor for regression.
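The two estimators share the same fit/predict interface; as a minimal sketch, the regressor can be exercised on a toy one-feature dataset (the numbers below are made up for illustration):

```python
from sklearn.tree import DecisionTreeRegressor

# Toy data: one numeric feature, one continuous target.
reg = DecisionTreeRegressor(random_state=1)
reg.fit([[1], [2], [3], [4]], [1.0, 2.0, 3.0, 4.0])

# A fully grown tree stores one training sample per leaf,
# so querying a training point returns its own target.
print(reg.predict([[1]]))
```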

from sklearn.tree import DecisionTreeClassifier

# All columns have been converted to numeric.
columns = ["age", "workclass", "education_num", "marital_status", "occupation", "relationship", "race", "sex", "hours_per_week", "native_country"]

# Set random_state to 1 to keep results consistent.
clf = DecisionTreeClassifier(random_state=1)
clf.fit(income[columns], income["high_income"])
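If the `income` DataFrame isn't at hand, the same fit pattern can be sketched on a small synthetic frame (the column values below are stand-ins, not the real census data):

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Hypothetical stand-in for the income data: two numeric features
# and a binary high_income target, mirroring the call above.
data = pd.DataFrame({
    "age":            [25, 38, 46, 52, 29, 61, 33, 44],
    "hours_per_week": [40, 50, 45, 60, 35, 40, 55, 38],
    "high_income":    [0,  1,  1,  1,  0,  1,  0,  1],
})

clf = DecisionTreeClassifier(random_state=1)
clf.fit(data[["age", "hours_per_week"]], data["high_income"])

# With no depth limit, the tree memorizes this tiny training set,
# which is exactly the overfitting behavior discussed later.
print(clf.score(data[["age", "hours_per_week"]], data["high_income"]))
```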

Splitting The Data Into Train And Test Sets

import numpy
import math

# Set a random seed so the shuffle is the same every time.
numpy.random.seed(1)

# Shuffle the rows.  This first permutes the index randomly using numpy.random.permutation.
# Then, it reindexes the dataframe with this.
# The net effect is to put the rows into random order.
income = income.reindex(numpy.random.permutation(income.index))

train_max_row = math.floor(income.shape[0] * .8)
train = income.iloc[:train_max_row]
test = income.iloc[train_max_row:]
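The permute-reindex-slice pattern above can be checked on a toy frame; with ten rows and an 80% cutoff, the split comes out to eight training rows and two test rows, with no overlap:

```python
import math
import numpy
import pandas as pd

numpy.random.seed(1)

# Toy frame standing in for `income`; ten rows make the 80/20 split visible.
df = pd.DataFrame({"x": range(10)})

# Same pattern as above: permute the index, reindex, then slice.
df = df.reindex(numpy.random.permutation(df.index))
train_max_row = math.floor(df.shape[0] * .8)
train = df.iloc[:train_max_row]
test = df.iloc[train_max_row:]

print(train.shape[0], test.shape[0])  # 8 2
```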

Evaluating Error

from sklearn.metrics import roc_auc_score
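Since the article diagnoses over- and underfitting by comparing train and test AUC, here is a hedged sketch of that comparison on synthetic data (the features, noise level, and parameter values are assumptions, not the article's actual settings). An unconstrained tree reaches a perfect training AUC; constraining it with max_depth and min_samples_split trades some training fit for generalization:

```python
import numpy
import pandas as pd
from sklearn.metrics import roc_auc_score
from sklearn.tree import DecisionTreeClassifier

numpy.random.seed(1)

# Hypothetical noisy data: 200 rows, two features, label loosely tied to x1.
n = 200
X = pd.DataFrame({"x1": numpy.random.randn(n), "x2": numpy.random.randn(n)})
y = (X["x1"] + numpy.random.randn(n) * 0.8 > 0).astype(int)

train_X, test_X = X.iloc[:160], X.iloc[160:]
train_y, test_y = y.iloc[:160], y.iloc[160:]

# Unconstrained tree: memorizes the training set (train AUC = 1.0),
# but the memorized noise does not carry over to the test set.
deep = DecisionTreeClassifier(random_state=1)
deep.fit(train_X, train_y)
print("deep train AUC:", roc_auc_score(train_y, deep.predict(train_X)))
print("deep test AUC: ", roc_auc_score(test_y, deep.predict(test_X)))

# Constrained tree: max_depth and min_samples_split stop it from
# fitting the noise, so train AUC drops below 1.0.
shallow = DecisionTreeClassifier(random_state=1, max_depth=3, min_samples_split=20)
shallow.fit(train_X, train_y)
print("shallow train AUC:", roc_auc_score(train_y, shallow.predict(train_X)))
print("shallow test AUC: ", roc_auc_score(test_y, shallow.predict(test_X)))
```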