The approach in 8-4 and 8-5 judges a model by its score on a test set. That is more trustworthy than evaluating on the training data alone, but strictly speaking it still has a flaw: the final model ends up overfitting the test set. Whenever the test score is poor, we adjust the hyperparameters and retrain, so the search for a model revolves around the test set; in effect we are tuning hyperparameters against the test data, and the resulting model is likely to overfit it.
The fix: split the data into three parts:
1. Training set: used to fit the model.
2. Validation set: plays the role the test set played above (finding good hyperparameter values); it is the data used for tuning.
3. Test set: takes no part in building the model; it measures the final model's performance.
A problem remains: the validation set is a single random slice of the original data, so a few extreme samples in it can make the evaluation, and hence the chosen model, unreliable. Cross Validation was introduced to solve this.
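The three-way split above can be sketched by calling train_test_split twice; the 60/20/20 ratios and the toy arrays here are arbitrary choices for illustration.

```python
# Sketch: carve the data into train / validation / test with two splits.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(100).reshape(50, 2)  # toy features, 50 samples
y = np.arange(50)                  # toy labels

# First hold out the test set, then split the rest into train / validation.
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=666)
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=666)

print(len(X_train), len(X_val), len(X_test))  # 30 10 10
```

Hyperparameters are then tuned against (X_val, y_val), and (X_test, y_test) is touched only once, at the very end.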
import numpy as np
from sklearn import datasets

# Load the handwritten digits dataset
digits = datasets.load_digits()
X = digits.data
y = digits.target
Testing train_test_split
# Tune hyperparameters using a single train_test_split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=666)
# Recognize the handwritten digits with kNN
from sklearn.neighbors import KNeighborsClassifier

best_score, best_p, best_k = 0, 0, 0
for k in range(2, 11):     # number of neighbors to consider
    for p in range(1, 6):  # exponent of the Minkowski distance
        knn_clf = KNeighborsClassifier(weights="distance", n_neighbors=k, p=p)
        knn_clf.fit(X_train, y_train)
        score = knn_clf.score(X_test, y_test)
        if score > best_score:
            best_score, best_p, best_k = score, p, k
print("Best K =", best_k)
print("Best P =", best_p)
print("Best Score =", best_score)
Output:
Best K = 3
Best P = 4
Best Score = 0.9860917941585535
Using cross-validation
# Tune hyperparameters with cross-validation
from sklearn.model_selection import cross_val_score

knn_clf = KNeighborsClassifier()
# cross_val_score(knn_clf, X_train, y_train, cv=5)  # cv controls how many folds the training data is split into
cross_val_score(knn_clf, X_train, y_train)  # returns the accuracy of each of the k fitted models
Output: array([0.99537037, 0.98148148, 0.97685185, 0.97674419, 0.97209302])
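Under the hood, each of those five scores comes from holding one fold out for scoring and fitting on the other four. A minimal sketch of that loop follows; note it uses a plain KFold for clarity, whereas cross_val_score on a classifier actually defaults to stratified folds, so the exact numbers differ.

```python
# Sketch of what cross_val_score does internally, assuming 5 plain folds.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import KFold
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)

scores = []
for train_idx, val_idx in KFold(n_splits=5).split(X):
    clf = KNeighborsClassifier()
    clf.fit(X[train_idx], y[train_idx])          # fit on 4 folds
    scores.append(clf.score(X[val_idx], y[val_idx]))  # score on the held-out fold
print(np.mean(scores))  # the average accuracy across the 5 folds
```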
best_score, best_p, best_k = 0, 0, 0
for k in range(2, 11):     # number of neighbors to consider
    for p in range(1, 6):  # exponent of the Minkowski distance
        knn_clf = KNeighborsClassifier(weights="distance", n_neighbors=k, p=p)
        scores = cross_val_score(knn_clf, X_train, y_train)
        score = np.mean(scores)
        if score > best_score:
            best_score, best_p, best_k = score, p, k
print("Best K =", best_k)
print("Best P =", best_p)
print("Best Score =", best_score)
Output:
Best K = 2
Best P = 2
Best Score = 0.9851507321274763
The cross-validated result is more trustworthy, because it greatly reduces the influence of overfitting. Note that best_score is not the final accuracy; it only serves to pick the best values of k and p.
best_knn_clf = KNeighborsClassifier(weights="distance",n_neighbors=2,p=2)
best_knn_clf.fit(X_train,y_train)
best_knn_clf.score(X_test, y_test)  # the final classification accuracy
Output: 0.980528511821975
Revisiting grid search
from sklearn.model_selection import GridSearchCV

param_grid = [
    {
        'weights': ['distance'],
        'n_neighbors': [i for i in range(2, 11)],
        'p': [i for i in range(1, 6)]
    }
]
# GridSearchCV(knn_clf, param_grid, verbose=1, cv=5)  # cv sets how many folds each cross-validation uses
grid_search = GridSearchCV(knn_clf, param_grid, verbose=1)
grid_search.fit(X_train, y_train)
Output:
Fitting 5 folds for each of 45 candidates, totalling 225 fits
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 225 out of 225 | elapsed: 58.5s finished
GridSearchCV(estimator=KNeighborsClassifier(n_neighbors=10, p=5,
                                            weights='distance'),
             param_grid=[{'n_neighbors': [2, 3, 4, 5, 6, 7, 8, 9, 10],
                          'p': [1, 2, 3, 4, 5], 'weights': ['distance']}],
             verbose=1)
grid_search.best_score_
Output: 0.9851507321274763
grid_search.best_params_  # the best hyperparameters found
Output: {'n_neighbors': 2, 'p': 2, 'weights': 'distance'}
best_knn_clf = grid_search.best_estimator_  # the classifier built with the best hyperparameters
best_knn_clf.score(X_test,y_test)
Output: 0.980528511821975
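The grid search above took almost a minute because it runs single-threaded. GridSearchCV also accepts an n_jobs argument to parallelize the fits; the sketch below shows the same workflow on a deliberately small grid (the two-value parameter lists are an arbitrary choice to keep it fast, not the grid used above).

```python
# Sketch: parallel grid search over a reduced grid, assuming n_jobs=-1
# (all CPU cores) is available on the machine.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=666)

param_grid = [{'weights': ['distance'], 'n_neighbors': [2, 3], 'p': [1, 2]}]
grid = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5, n_jobs=-1)
grid.fit(X_train, y_train)  # refits the best model on all of X_train (refit=True by default)

print(grid.best_params_)
print(grid.score(X_test, y_test))  # scores with the refit best_estimator_
```

Because refit=True is the default, grid.score already uses best_estimator_, so the manual refit shown earlier is optional.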