How to solve the overfitting problem in a Python sklearn random forest?

I am using the RandomForestClassifier implemented in the Python sklearn package to build a binary classification model. Below are the cross-validation results:

Fold  Train  Test  Train Accuracy   Test Accuracy
1     164    40    0.914634146341   0.55
2     163    41    0.871165644172   0.707317073171
3     163    41    0.889570552147   0.585365853659
4     163    41    0.871165644172   0.756097560976
5     163    41    0.883435582822   0.512195121951

I am using "Price" feature to predict "quality" which is a ordinal value. In each cross validation, there are 163 training examples and 41 test examples.

Apparently, overfitting occurs here. Are there any parameters provided by sklearn that can be used to overcome this problem? I found some parameters, e.g. min_samples_split and min_samples_leaf, but I do not quite understand how to tune them.

Thanks in advance!

Solution

I would agree with @Falcon w.r.t. the dataset size. It's likely that the main problem is the small size of the dataset. If possible, the best thing you can do is get more data: the more data, the less likely (in general) the model is to overfit, as random patterns that appear predictive get drowned out as the dataset size increases.

That said, I would look at the following params:

n_estimators: @Falcon is wrong here; in general, the more trees, the less likely the algorithm is to overfit, so try increasing this. The lower this number, the closer the model is to a single decision tree with a restricted feature set.

max_features: try reducing this number (to, say, 30-50% of the number of features). This determines how many features each tree is randomly assigned. The smaller it is, the less likely the model is to overfit, but too small a value will start to introduce underfitting.

max_depth: experiment with this. It reduces the complexity of the learned models, lowering the risk of overfitting. Start small, say 5-10, and increase it until you get the best result.

min_samples_leaf: try setting this to values greater than one. It has a similar effect to the max_depth parameter: a branch stops splitting once its leaves each hold that number of samples.
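Putting those four parameters together, a hedged starting point might look like the sketch below. The specific values are illustrative guesses, not tuned settings:

```python
from sklearn.ensemble import RandomForestClassifier

# Illustrative starting values only -- tune them on a development set.
clf = RandomForestClassifier(
    n_estimators=500,    # more trees: averaging reduces variance
    max_features=0.4,    # each split considers ~40% of the features
    max_depth=8,         # cap tree depth to limit model complexity
    min_samples_leaf=5,  # require at least 5 samples per leaf
    random_state=0,
)
```

Each knob trades variance for bias: more trees and fewer features per split reduce variance, while the depth and leaf-size limits cap each individual tree's complexity.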

Note: be scientific when doing this work. Use three datasets: a training set, a separate 'development' set for tweaking your parameters, and a test set that evaluates the final model with the optimal parameters. Change only one parameter at a time and evaluate the result, or experiment with sklearn's grid search to search across these parameters all at once.
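A minimal sketch of that grid-search approach, using sklearn's GridSearchCV and the same placeholder data as above (the grid values are assumptions, not recommendations):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X = np.random.rand(204, 1)        # placeholder data, as above
y = np.random.randint(0, 2, 204)

# Hypothetical grid; pick ranges that make sense for your own data.
param_grid = {
    "n_estimators": [100, 300, 500],
    "max_depth": [5, 8, None],
    "min_samples_leaf": [1, 3, 5],
}
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    cv=5,                # 5-fold cross-validation over the training data
    scoring="accuracy",
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```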
