GridSearchCV and cross_val_score

I. GridSearchCV

  1. Combines grid search and cross-validation in a single step.
  2. Grid search is used for hyperparameter tuning.
  3. Cross-validation is used to assess the model's generalization performance; cross-validation itself does not improve model accuracy.
from sklearn import svm
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, RepeatedKFold, train_test_split

# Example data (iris) so the snippet is self-contained; substitute your own
X, y = load_iris(return_X_y=True)
X_train, X_test, Y_train, Y_test = train_test_split(X, y, random_state=12)

# Initialize the cross-validation splitter
rbk = RepeatedKFold(n_splits=5, n_repeats=1, random_state=12)

# Hyperparameter grid to search over (example values)
param_grid = {'C': [0.1, 1, 10], 'gamma': ['scale', 0.01, 0.1]}

# Run the grid search with cross-validation; scoring defaults to the
# estimator's score method (accuracy for classifiers), or pass
# e.g. scoring='f1_macro' for imbalanced data
clf_svm = GridSearchCV(svm.SVC(class_weight='balanced',
                               decision_function_shape='ovo',
                               probability=True),
                       param_grid, scoring="accuracy", cv=rbk)
clf_svm.fit(X_train, Y_train)
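
Once fitted, GridSearchCV exposes the best hyperparameter combination and the model refitted with it. A minimal sketch continuing the snippet above (the iris data and test split are the example assumptions from that snippet):

# Best hyperparameter combination found by the search
print(clf_svm.best_params_)
# Mean cross-validated score achieved by that combination
print(clf_svm.best_score_)
# Best model, refitted on the whole training set (refit=True is the default)
best_model = clf_svm.best_estimator_
print(best_model.score(X_test, Y_test))  # accuracy on the held-out test split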

II. cross_val_score


cross_val_score is generally used to obtain the score of each cross-validation fold, which indicates the model's general generalization performance. These scores can then guide the choice of suitable hyperparameters, which usually means writing a loop to carry out the cross-validation process yourself (see the sketch after the code below).

import matplotlib.pyplot as plt
from sklearn import svm
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold, cross_val_score



# Example data and model so the snippet is self-contained; a plain SVC here
# (passing the GridSearchCV object from section I would also work, giving
# nested cross-validation)
X, y = load_iris(return_X_y=True)
clf_svm = svm.SVC(class_weight='balanced')

n = 5
kf = KFold(n_splits=n, shuffle=True, random_state=42)
score_SVM = cross_val_score(clf_svm, X, y, cv=kf)  # K-fold CV; set n = 10 for 10-fold
print('--> cross-validation scores:', score_SVM)
# Rough confidence interval: mean +/- 2 * std, e.g. "Accuracy: 0.98 (+/- 0.03)"
# print("--> 10-fold CV Mean Accuracy: %0.4f (+/- %0.4f)" % (score_SVM.mean(), score_SVM.std() * 2))
print("--> %d-fold CV Mean Accuracy: %0.4f" % (n, score_SVM.mean()))
mean_score = [score_SVM.mean()] * n  # horizontal line at the mean score
plt.plot(range(1, n + 1), score_SVM, label='K-Score')
plt.plot(range(1, n + 1), mean_score, label='MeanScore')
plt.legend()
# plt.savefig('../figureResult/train_svm/{}-fold.jpg'.format(n), dpi=800)
plt.show()
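
As noted above, selecting hyperparameters with cross_val_score means looping over candidates yourself. A minimal sketch under the same example setup (the candidate C values are illustrative, not from the original):

# Manual hyperparameter selection: evaluate each candidate with
# cross_val_score and keep the one with the best mean CV score.
best_C, best_mean = None, -1.0
for C in [0.1, 1, 10, 100]:  # illustrative candidate grid
    scores = cross_val_score(svm.SVC(C=C, class_weight='balanced'), X, y, cv=kf)
    print("C=%g  mean accuracy=%.4f" % (C, scores.mean()))
    if scores.mean() > best_mean:
        best_C, best_mean = C, scores.mean()
print("--> best C:", best_C)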


III. Summary

GridSearchCV:
In addition to carrying out the cross-validation itself, it also returns the optimal hyperparameters and the corresponding best model.
Compared with cross_val_score, GridSearchCV is therefore more convenient to use; but for understanding the details, implementing the loop around cross_val_score by hand is more instructive.

cross_val_score:
Generally used to obtain the score of each cross-validation fold, and then to choose suitable hyperparameters based on those scores; this usually requires writing a loop to carry out the cross-validation process yourself.
