Learn: Overfitting and Underfitting, and one mitigation: setting max_leaf_nodes in a regression decision tree

Overfitting and underfitting

Take decision trees as an example.
The dataset gets partitioned down to the leaves. If the tree is too shallow, say the data are split into only 2 groups (a very coarse partition), then each group inevitably contains a huge number of houses. If the tree is very deep, say the data are split into 1024 groups (a very fine partition), then there are many leaves and each leaf holds only a few houses.
In short: a tree that is too deep tends to overfit;
a tree that is too shallow tends to underfit.

Overfitting: capturing spurious patterns that won’t recur in the future, leading to less accurate predictions, or
Underfitting: failing to capture relevant patterns, again leading to less accurate predictions.

Overfitting: the model captures **spurious patterns** that will not recur in the future. It fits the training data almost perfectly, but its predictions on the validation set and on new data are very poor.
Underfitting: the model fails to capture the important distinctions and patterns in the data. Because we care about accuracy on new data, we evaluate the model on the validation set and look for the sweet spot between overfitting and underfitting.
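The sweet spot can be seen directly by comparing training error with validation error. The sketch below uses synthetic data (a noisy sine curve, not the Melbourne set) and controls tree size with max_depth; the variable names and sizes are illustrative assumptions. A very deep tree drives training MAE toward zero while its validation MAE stays much worse, i.e. it memorizes noise:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# synthetic stand-in data: noisy sine target
rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(400, 1))
y = np.sin(X.ravel()) + rng.normal(scale=0.3, size=400)

train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=0)

results = {}
for depth in [2, 20]:  # shallow vs. deep tree
    model = DecisionTreeRegressor(max_depth=depth, random_state=0)
    model.fit(train_X, train_y)
    # record (training MAE, validation MAE) for each depth
    results[depth] = (
        mean_absolute_error(train_y, model.predict(train_X)),
        mean_absolute_error(val_y, model.predict(val_X)),
    )
    print(f"max_depth={depth}: train MAE={results[depth][0]:.3f}, "
          f"val MAE={results[depth][1]:.3f}")
```

The deep tree fits the training data far better, but the gap between its training and validation error is much larger than the shallow tree's, which is the signature of overfitting.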

One remedy: set max_leaf_nodes in the regression decision tree

For example, with a decision tree we can cap the number of leaves: DecisionTreeRegressor accepts a max_leaf_nodes argument.
Try different max_leaf_nodes values, compare the resulting validation error, and pick the best one.

import pandas as pd
from sklearn.model_selection import train_test_split

# read csv
melbourne_data = pd.read_csv(r'G:\kaggle\melb_data.csv')
# drop rows containing NaN (pandas DataFrame.dropna with axis=0)
filtered_melbourne_data = melbourne_data.dropna(axis=0)

#target
y= filtered_melbourne_data.Price
#choosing features
melbourne_features=['Rooms', 'Bathroom', 'Landsize', 'BuildingArea', 'YearBuilt', 'Lattitude', 'Longtitude']
#X
X= filtered_melbourne_data[melbourne_features]

# split data into two pieces: training data and validation data
train_X, val_X, train_y, val_y = train_test_split(X, y, test_size=0.33, random_state=0)  # fixing random_state makes the split reproducible: rerunning with the same arguments gives the same split every time
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error

#compare MAE scores from different values for max_leaf_nodes
def get_mae(max_leaf_nodes, train_X, val_X, train_y, val_y ):
    model= DecisionTreeRegressor(max_leaf_nodes= max_leaf_nodes, random_state=1)
    model.fit(train_X, train_y)
    val_prediction_y= model.predict(val_X)
    mae= mean_absolute_error(val_y, val_prediction_y)
    return mae
# help(DecisionTreeRegressor)  # uncomment to inspect all parameters
# compare MAE across candidate values of max_leaf_nodes
for max_leaf_nodes in [5, 50, 500, 5000]:
    my_mae = get_mae(max_leaf_nodes, train_X, val_X, train_y, val_y)
    print(f"When max_leaf_nodes is {max_leaf_nodes}, MAE is {my_mae}")

When max_leaf_nodes is 5, MAE is 345357.4675454862
When max_leaf_nodes is 50, MAE is 258991.08393549395
When max_leaf_nodes is 500, MAE is 244241.84628754357
When max_leaf_nodes is 5000, MAE is 253299.98630806847

As the results show, 500 is the best of these candidate leaf counts: it gives the lowest validation MAE.
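Rather than reading the best value off the printout, the selection can be done programmatically, and the winning setting can then be refit on all of the data so the final model sees every example. The sketch below is a hypothetical follow-up; synthetic data stands in for melb_data.csv so the snippet runs on its own:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# synthetic stand-in for the Melbourne data
rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(500, 2))
y = X[:, 0] * 3 + np.sin(X[:, 1]) + rng.normal(scale=0.5, size=500)

train_X, val_X, train_y, val_y = train_test_split(X, y, test_size=0.33, random_state=0)

def get_mae(max_leaf_nodes, train_X, val_X, train_y, val_y):
    model = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes, random_state=1)
    model.fit(train_X, train_y)
    return mean_absolute_error(val_y, model.predict(val_X))

candidates = [5, 50, 500, 5000]
scores = {n: get_mae(n, train_X, val_X, train_y, val_y) for n in candidates}
best_size = min(scores, key=scores.get)  # leaf count with the lowest validation MAE
print("best max_leaf_nodes:", best_size)

# refit on all rows (train + validation) with the winning setting
final_model = DecisionTreeRegressor(max_leaf_nodes=best_size, random_state=1)
final_model.fit(X, y)
```

Refitting on the full dataset is a common final step once the hyperparameter has been chosen, since the held-out validation rows are no longer needed for model selection.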
