Cross-validation: evaluating estimator performance

Recently, while reading code, I kept running into the terms training set, validation set, and test set. I had seen them before but never knew exactly what they were, so I am organizing the concepts here for future reference.

Training set, validation set, and test set

  • Training set (train set): the data samples used to fit the model.

  • Validation set: a set of samples held out during model training, used to tune the model's hyperparameters and to make a preliminary assessment of the model's ability. It is typically used while the model is trained iteratively, to check the current model's generalization performance (accuracy, recall, etc.) and to decide whether to stop training.
    In neural networks, the validation set is used to find the best network depth (number of hidden layers), to decide where to stop the backpropagation algorithm, or to choose the number of neurons in the hidden layers;
    in classical machine learning, the commonly used cross-validation (Cross Validation) does exactly this: it repeatedly re-splits the training data itself into different validation subsets on which the model is evaluated.

  • Test set: used to evaluate the generalization ability of the final model. It must not be used as the basis for algorithm-level choices such as hyperparameter tuning or feature selection.

How the validation set and the test set compare:

  • Trained on? Neither the validation set nor the test set is used for training.
  • Purpose: the validation set is used to (1) tune hyperparameters and (2) monitor whether the model is overfitting (to decide when to stop training); the test set is used to evaluate the generalization ability of the final model.
  • Number of uses: the validation set is used many times, for repeated tuning; the test set is used only once.
  • Drawback: after round after round of manual re-tuning and continued training, the model gradually fits the validation set, which may represent only a part of the non-training data, so the final model's generalization may still fall short.

An intuitive analogy:

Training set ----------- the student's textbook; the student masters the material by studying the textbook.

Validation set ----------- homework; homework shows how well different students are learning and how quickly they are progressing.

Test set ----------- the exam; the exam questions have never been seen before and test the student's ability to apply what was learned to new problems.

Cross-validation (Cross Validation)

Concept

A model-validation technique that, put simply, reuses the data: after the test set has been set aside, the remaining data are repeatedly split and recombined into different pairs of training and validation sets, so a sample that appears in the training set in one round may appear in the validation set in another round, which is what the word "cross" refers to. The average of the validation errors over all rounds is taken as the model's final validation error.

Why use cross-validation?

The holdout method sets aside part of the dataset as a validation set, which is then no longer used for training. If the validation set is large, the training set becomes small, and if the dataset itself is not large to begin with, the resulting model will clearly be poor. If the validation set is small, its validation error will not reflect the generalization error well. Moreover, the validation errors of models trained under different splits can fluctuate a lot (high variance), and there is no way to know which split's error to trust. But if this validation procedure is repeated over many different splits, the resulting average error can be a good approximation of the generalization error.
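
As a rough illustration of this point (a sketch only; the iris data and the linear SVM below are just convenient placeholders, introduced properly later in this post), the score of a single holdout split can vary noticeably with the particular random split, while the cross-validated average is a steadier estimate:

import numpy as np
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split, cross_val_score

X, y = datasets.load_iris(return_X_y=True)
clf = svm.SVC(kernel='linear', C=1)

# Holdout: the validation score depends on which random split we happen to draw.
holdout_scores = []
for seed in range(5):
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=seed)
    holdout_scores.append(clf.fit(X_tr, y_tr).score(X_val, y_val))
print("holdout scores:", np.round(holdout_scores, 3))

# Cross-validation: average over several different splits for a steadier estimate.
cv_scores = cross_val_score(clf, X, y, cv=5)
print("5-fold CV mean:", cv_scores.mean())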

Several cross-validation methods

Leave-one-out cross-validation (Leave One Out Cross Validation, LOOCV)

Suppose the dataset contains m samples. In turn, each single sample is taken out as the validation set while the remaining m-1 samples form the training set. This gives m separate rounds of model training and validation, and the average of the m validation results is taken as the model's validation error.

The advantage of LOOCV is that the result is nearly unbiased, because almost all of the samples are used to fit the model. The drawback is the computational cost: if m = 1000, then 1000 models must be trained and 1000 validation errors computed. When the dataset is large the computation is therefore enormous and very time-consuming, so unless the data are very scarce, LOOCV is rarely used in practice.
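
A minimal sketch of LOOCV with scikit-learn (the iris dataset is used only to make the procedure concrete; with m = 150 samples, 150 model fits are still cheap):

from sklearn import datasets, svm
from sklearn.model_selection import LeaveOneOut, cross_val_score

X, y = datasets.load_iris(return_X_y=True)
clf = svm.SVC(kernel='linear', C=1)

# One model per sample: m training runs, each validated on the single held-out point.
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(len(scores), "models trained; mean accuracy:", scores.mean())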

K-fold cross-validation (K-Fold Cross Validation)

The dataset is split into K mutually disjoint subsets of equal size. In turn, each of the K folds is used as the validation set while the remaining K-1 folds form the training set. This gives K separate rounds of model training and validation, and the average of the K validation results is taken as the model's validation error. When K = m this reduces to the leave-one-out method, so LOOCV is a special case of K-fold cross-validation.

Empirically, K is usually set to 10. (Experiments on a variety of real datasets have found that 10-fold cross-validation strikes the best balance between bias and variance.)
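
For example, a 10-fold estimate can be obtained by passing cv=10 to cross_val_score (a sketch; the data and estimator are placeholders):

from sklearn import datasets, svm
from sklearn.model_selection import cross_val_score

X, y = datasets.load_iris(return_X_y=True)
clf = svm.SVC(kernel='linear', C=1)

# 10 rounds of training/validation; the mean is the cross-validated error estimate.
scores = cross_val_score(clf, X, y, cv=10)
print(scores.mean(), scores.std())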

Repeated K-fold cross-validation (Repeated K-Fold Cross Validation)

The dataset is partitioned with a different random split each time, and everything after the split proceeds exactly as in ordinary K-fold cross-validation. For example, 10 times 10-fold cross-validation performs 10 rounds of training and validation per repetition and repeats this 10 times, for 100 rounds of training and validation in total, and the results are averaged. The purpose is to make the estimate more precise. (Studies have found that repeated K-fold cross-validation improves the precision of the model estimate while keeping the bias small.)
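
A sketch of 10 x 10-fold cross-validation with scikit-learn's RepeatedKFold (again with placeholder data and estimator):

from sklearn import datasets, svm
from sklearn.model_selection import RepeatedKFold, cross_val_score

X, y = datasets.load_iris(return_X_y=True)
clf = svm.SVC(kernel='linear', C=1)

# 10 repetitions of 10-fold CV = 100 train/validate rounds in total.
rkf = RepeatedKFold(n_splits=10, n_repeats=10, random_state=0)
scores = cross_val_score(clf, X, y, cv=rkf)
print(len(scores), "scores; mean:", scores.mean())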

Cross-validation: evaluating estimator performance

Learning the parameters of a prediction function and testing it on the same data is a methodological mistake: a model that would just repeat the labels of the samples that it has just seen would have a perfect score but would fail to predict anything useful on yet-unseen data. This situation is called overfitting.

To avoid it, it is common practice when performing a (supervised) machine learning experiment to hold out part of the available data as a test set X_test, y_test.

Here is a flowchart of typical cross validation workflow in model training. The best parameters can be determined by grid search techniques.

[Figure: typical cross-validation workflow in model training]
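
A minimal sketch of that workflow with GridSearchCV (the grid of C values below is purely illustrative): the hyperparameters are chosen by cross-validation on the training data, and the test set is touched only once at the end.

from sklearn import datasets, svm
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = datasets.load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=0)

# Cross-validated grid search on the training data only.
param_grid = {'C': [0.1, 1, 10]}  # illustrative grid, not a recommendation
search = GridSearchCV(svm.SVC(kernel='linear'), param_grid, cv=5)
search.fit(X_train, y_train)
print(search.best_params_)

# Final, one-off evaluation on the held-out test set.
print(search.score(X_test, y_test))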

In scikit-learn a random split into training and test sets can be quickly computed with the train_test_split helper function. Let’s load the iris data set to fit a linear support vector machine on it:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn import datasets
from sklearn import svm

X, y = datasets.load_iris(return_X_y=True)
X.shape, y.shape

We can now quickly sample a training set while holding out 40% of the data for testing (evaluating) our classifier:

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=0)

X_train.shape, y_train.shape, X_test.shape, y_test.shape
clf = svm.SVC(kernel='linear', C=1).fit(X_train, y_train)
clf.score(X_test, y_test)

When evaluating different settings (“hyperparameters”) for estimators, such as the C setting that must be manually set for an SVM, there is still a risk of overfitting on the test set because the parameters can be tweaked until the estimator performs optimally. This way, knowledge about the test set can “leak” into the model and evaluation metrics no longer report on generalization performance.

PS: My understanding of the paragraph above: some hyperparameters in a model must be set by hand, so on what grounds do we set them? Usually because the model's performance on the test set is unsatisfactory, so we adjust the hyperparameters to make the model perform better. In this tuning process we use information from the test set, which amounts to a "leak" of test-set data. At the same time, because we keep tuning for a higher score measured on that particular test set, we end up in a kind of overfitting: the model becomes better suited to the data we currently have, while its ability to generalize to new data falls short.

To solve this problem, yet another part of the dataset can be held out as a so-called “validation set”: training proceeds on the training set, after which evaluation is done on the validation set, and when the experiment seems to be successful, final evaluation can be done on the test set.

However, by partitioning the available data into three sets, we drastically reduce the number of samples which can be used for learning the model, and the results can depend on a particular random choice for the pair of (train, validation) sets.

A solution to this problem is a procedure called cross-validation (CV for short). A test set should still be held out for final evaluation, but the validation set is no longer needed when doing CV. In the basic approach, called k-fold CV, the training set is split into k smaller sets (other approaches are described below, but generally follow the same principles). The following procedure is followed for each of the k “folds”:

  • A model is trained using k-1 of the folds as training data;
  • the resulting model is validated on the remaining part of the data (i.e., it is used as a test set to compute a performance measure such as accuracy).

The performance measure reported by k-fold cross-validation is then the average of the values computed in the loop. This approach can be computationally expensive, but does not waste too much data (as is the case when fixing an arbitrary validation set), which is a major advantage in problems such as inverse inference where the number of samples is very small.
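
The loop described above can also be written out by hand with KFold, just to make the mechanics explicit (a sketch only; cross_val_score, shown in the next section, does the same thing in a single call):

import numpy as np
from sklearn import datasets, svm
from sklearn.model_selection import KFold

X, y = datasets.load_iris(return_X_y=True)
clf = svm.SVC(kernel='linear', C=1)

scores = []
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, val_idx in kf.split(X):
    # Train on k-1 folds, validate on the remaining fold.
    clf.fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[val_idx], y[val_idx]))

# The reported performance is the average over the k folds.
print(np.mean(scores))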

[Figure: k-fold cross-validation splits of the training data]

1 Computing cross-validated metrics

The simplest way to use cross-validation is to call the cross_val_score helper function on the estimator and the dataset.

The following example demonstrates how to estimate the accuracy of a linear kernel support vector machine on the iris dataset by splitting the data, fitting a model and computing the score 5 consecutive times (with different splits each time):

from sklearn.model_selection import cross_val_score
clf = svm.SVC(kernel='linear', C=1, random_state=42)
scores = cross_val_score(clf, X_train, y_train, cv=5)
scores

The mean score and the standard deviation are hence given by:

print("%0.2f accuracy with a standard deviation of %0.2f" % (scores.mean(), scores.std()))

By default, the score computed at each CV iteration is the score method of the estimator. It is possible to change this by using the scoring parameter:

from sklearn import metrics
scores = cross_val_score(
    clf, X, y, cv=5, scoring='f1_macro')
scores

See The scoring parameter: defining model evaluation rules for details. In the case of the Iris dataset, the samples are balanced across target classes hence the accuracy and the F1-score are almost equal.

When the cv argument is an integer, cross_val_score uses the KFold or StratifiedKFold strategies by default, the latter being used if the estimator derives from ClassifierMixin.
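
As a quick check of this statement (continuing with the clf, X and y defined above), passing an unshuffled StratifiedKFold explicitly should give the same scores as cv=5 for a classifier such as SVC:

from sklearn.model_selection import StratifiedKFold

# For a classifier, an integer cv defaults to (unshuffled) stratified folds.
cross_val_score(clf, X, y, cv=5)
cross_val_score(clf, X, y, cv=StratifiedKFold(n_splits=5))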

It is also possible to use other cross validation strategies by passing a cross validation iterator instead, for instance:

from sklearn.model_selection import ShuffleSplit
n_samples = X.shape[0]
cv = ShuffleSplit(n_splits=5, test_size=0.3, random_state=0)
cross_val_score(clf, X, y, cv=cv)

Data transformation with held out data

Just as it is important to test a predictor on data held-out from training, preprocessing (such as standardization, feature selection, etc.) and similar data transformations similarly should be learnt from a training set and applied to held-out data for prediction:

from sklearn import preprocessing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=0)

scaler = preprocessing.StandardScaler().fit(X_train)
X_train_transformed = scaler.transform(X_train)
X_test_transformed = scaler.transform(X_test)

clf = svm.SVC(C=1).fit(X_train_transformed, y_train)
clf.score(X_test_transformed, y_test)

A Pipeline makes it easier to compose estimators, providing this behavior under cross-validation:

from sklearn.pipeline import make_pipeline
clf = make_pipeline(preprocessing.StandardScaler(), svm.SVC(C=1))
cross_val_score(clf, X, y, cv=cv)

See Pipelines and composite estimators.

Pipeline can be used to chain multiple estimators into one. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification. Pipeline serves multiple purposes here:

  • Convenience and encapsulation
    You only have to call fit and predict once on your data to fit a whole sequence of estimators.

  • Joint parameter selection
    You can grid search over parameters of all estimators in the pipeline at once (see the sketch after this list).

  • Safety
    Pipelines help avoid leaking statistics from your test data into the trained model in cross-validation, by ensuring that the same samples are used to train the transformers and predictors.
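
As a sketch of the joint parameter selection mentioned above (the svc__C grid is illustrative), pipeline steps expose their parameters under the naming scheme <step name>__<parameter>, so a single grid search can tune any step of the pipeline:

from sklearn import datasets, preprocessing, svm
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline

X, y = datasets.load_iris(return_X_y=True)
pipe = make_pipeline(preprocessing.StandardScaler(), svm.SVC())

# make_pipeline names the SVC step 'svc', so its C parameter is addressed as 'svc__C'.
param_grid = {'svc__C': [0.1, 1, 10]}  # illustrative grid
search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)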

1.1 The cross_validate function and multiple metric evaluation

The cross_validate function differs from cross_val_score in two ways:

  • It allows specifying multiple metrics for evaluation.

  • It returns a dict containing fit-times, score-times (and optionally training scores as well as fitted estimators) in addition to the test score.

For single metric evaluation, where the scoring parameter is a string, callable or None, the keys will be ['test_score', 'fit_time', 'score_time'].

return_train_score is set to False by default to save computation time. To evaluate the scores on the training set as well you need to set it to True.

You may also retain the estimator fitted on each training set by setting return_estimator=True.

The multiple metrics can be specified either as a list, tuple or set of predefined scorer names:

from sklearn.model_selection import cross_validate
from sklearn.metrics import recall_score
scoring = ['precision_macro', 'recall_macro']
clf = svm.SVC(kernel='linear', C=1, random_state=0)
scores = cross_validate(clf, X, y, scoring=scoring)
sorted(scores.keys())

scores['test_recall_macro']
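
Continuing the example above (clf, X, y and scoring as defined there), and as a sketch of the options just described, training scores and the per-split fitted estimators can be requested explicitly:

scores = cross_validate(clf, X, y, scoring=scoring,
                        return_train_score=True, return_estimator=True)
# 'train_*' scores and an 'estimator' entry now appear alongside the test scores and timings.
sorted(scores.keys())

scores['train_recall_macro']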

1.2 Obtaining predictions by cross-validation

The function cross_val_predict has a similar interface to cross_val_score, but returns, for each element in the input, the prediction that was obtained for that element when it was in the test set. Only cross-validation strategies that assign all elements to a test set exactly once can be used (otherwise, an exception is raised).

The function cross_val_predict is appropriate for:

  • Visualization of predictions obtained from different models.
  • Model blending: When predictions of one supervised estimator are used to train another estimator in ensemble methods.
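
A minimal sketch (reusing the clf, X and y from the examples above): cross_val_predict returns one out-of-fold prediction per sample, i.e. each prediction is made by a model that never saw that sample during training, and the resulting vector can then be scored or visualized.

from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score

y_pred = cross_val_predict(clf, X, y, cv=5)
accuracy_score(y, y_pred)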

2 Cross validation iterators

The following sections list utilities to generate indices that can be used to generate dataset splits according to different cross validation strategies.

Cross-validation iterators for i.i.d. data

Assuming that some data is Independent and Identically Distributed (i.i.d.) is making the assumption that all samples stem from the same generative process and that the generative process is assumed to have no memory of past generated samples.

The following cross-validators can be used in such cases.

K-fold

KFold divides all the samples into k groups of samples, called folds (if k = n, this is equivalent to the Leave One Out strategy), of equal sizes (if possible). The prediction function is learned using k-1 folds, and the fold left out is used for test.

Example of 3-fold cross-validation on a dataset with 6 samples:

import numpy as np
from sklearn.model_selection import KFold

X = ["a", "b", "c", "d", "e", "f"]
kf = KFold(n_splits=3)
for train, test in kf.split(X):
    print("%s %s" % (train, test))

Here is a visualization of the cross-validation behavior. Note that KFold is not affected by classes or groups.

[Figure: KFold cross-validation behavior]

Each fold is constituted by two arrays: the first one is related to the training set, and the second one to the test set. Thus, one can create the training/test sets using numpy indexing:

X = np.array([[0., 0.], [1., 1.], [-1., -1.], [2., 2.]])
y = np.array([0, 1, 0, 1])
X_train, X_test, y_train, y_test = X[train], X[test], y[train], y[test]

Repeated K-Fold

RepeatedKFold repeats K-Fold n times. It can be used when one requires to run KFold n times, producing different splits in each repetition.

Example of 2-fold K-Fold repeated 4 times:

import numpy as np
from sklearn.model_selection import RepeatedKFold
X = np.array([[1, 2], [3, 4], [1, 2], [3, 4]])
random_state = 12883823
rkf = RepeatedKFold(n_splits=2, n_repeats=4, random_state=random_state)
for train, test in rkf.split(X):
    print("%s %s" % (train, test))

Leave One Out (LOO)

LeaveOneOut (LOO) is a simple cross-validation: each learning set is created by taking all the samples except one, the test set being the sample left out.

from sklearn.model_selection import LeaveOneOut

X = [1, 2, 3, 4]
loo = LeaveOneOut()
for train, test in loo.split(X):
    print("%s %s" % (train, test))

Leave P Out (LPO)

LeavePOut is very similar to LeaveOneOut, as it creates all the possible training/test sets by removing p samples from the complete set.

from sklearn.model_selection import LeavePOut

X = np.ones(4)
lpo = LeavePOut(p=2)
for train, test in lpo.split(X):
    print("%s %s" % (train, test))

Cross-validation iterators with stratification based on class labels

Stratified k-fold

StratifiedKFold is a variation of k-fold which returns stratified folds: each set contains approximately the same percentage of samples of each target class as the complete set.

import numpy as np
from sklearn.model_selection import StratifiedKFold
X = np.array([[1, 2], [3, 4], [1, 2], [3, 4]])
y = np.array([0, 0, 1, 1])
skf = StratifiedKFold(n_splits=2)
skf.get_n_splits(X, y)

for train_index, test_index in skf.split(X, y):
    print("TRAIN:", train_index, "TEST:", test_index)
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]

Here is an example of stratified 3-fold cross-validation on a dataset with 50 samples from two unbalanced classes; we show the number of samples in each class and compare with KFold:

from sklearn.model_selection import StratifiedKFold, KFold
import numpy as np
X, y = np.ones((50, 1)), np.hstack(([0] * 45, [1] * 5))
skf = StratifiedKFold(n_splits=3)
for train, test in skf.split(X, y):
    print('train -  {}   |   test -  {}'.format(
        np.bincount(y[train]), np.bincount(y[test])))

kf = KFold(n_splits=3)
for train, test in kf.split(X, y):
    print('train -  {}   |   test -  {}'.format(
        np.bincount(y[train]), np.bincount(y[test])))

We can see that StratifiedKFold preserves the class ratios (approximately 1/10) in both train and test datasets, while KFold does not.

References

训练集、验证集、测试集以及交验验证的理解 (Understanding training, validation, and test sets, and cross-validation)

验证和交叉验证(Validation & Cross Validation)

3.1. Cross-validation: evaluating estimator performance (scikit-learn user guide)
