scikit-learn Learning Series: Linear and Quadratic Discriminant Analysis

Linear Discriminant Analysis (discriminant_analysis.LinearDiscriminantAnalysis) and Quadratic Discriminant Analysis (discriminant_analysis.QuadraticDiscriminantAnalysis) are two classic classifiers with, as their names suggest, a linear and a quadratic decision surface, respectively.

These classifiers are attractive because they have closed-form solutions that can be computed easily, are inherently multiclass, have proven to work well in practice, and have no hyperparameters to tune.
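As a concrete starting point, here is a minimal sketch (not part of the original text; the synthetic dataset and its parameters are arbitrary choices) that fits both classifiers with their default settings and compares their test accuracy:

```python
# Minimal sketch: fit LDA and QDA with default settings on synthetic data.
# The dataset and its parameters are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,
    QuadraticDiscriminantAnalysis,
)
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=4, n_informative=3,
                           n_redundant=0, n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
qda = QuadraticDiscriminantAnalysis().fit(X_train, y_train)

# No hyperparameter tuning is needed to obtain a reasonable baseline.
print("LDA accuracy:", lda.score(X_test, y_test))
print("QDA accuracy:", qda.score(X_test, y_test))
```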

[Figure: decision boundaries of Linear Discriminant Analysis vs. Quadratic Discriminant Analysis]

The figure shows the decision boundaries of Linear Discriminant Analysis and Quadratic Discriminant Analysis. The bottom row demonstrates that LDA can only learn linear boundaries, while QDA can learn quadratic boundaries and is therefore more flexible.

Examples:

Linear and Quadratic Discriminant Analysis with covariance ellipsoid: comparison of LDA and QDA on synthetic data.

1. Dimensionality reduction using Linear Discriminant Analysis

discriminant_analysis.LinearDiscriminantAnalysis can be used to perform supervised dimensionality reduction, by projecting the input data to a linear subspace consisting of the directions which maximize the separation between classes (in a precise sense discussed in the mathematics section below). The dimension of the output is necessarily less than the number of classes, so this is, in general, a rather strong dimensionality reduction, and only makes sense in a multiclass setting.

This is implemented in discriminant_analysis.LinearDiscriminantAnalysis.transform. The desired dimensionality can be set using the n_components constructor parameter. This parameter has no influence on discriminant_analysis.LinearDiscriminantAnalysis.fit or discriminant_analysis.LinearDiscriminantAnalysis.predict.
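For illustration, a minimal sketch of this supervised dimensionality reduction, assuming the Iris dataset (3 classes, so at most 2 components are available):

```python
# Minimal sketch: supervised dimensionality reduction with LDA on Iris
# (3 classes, so at most 2 components). The dataset choice is illustrative.
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)

lda = LinearDiscriminantAnalysis(n_components=2)
X_reduced = lda.fit(X, y).transform(X)

print(X_reduced.shape)   # (150, 2): the output dimension equals n_components
# n_components only affects transform; fit and predict are unchanged.
print(lda.predict(X[:5]))
```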

Examples:

Comparison of LDA and PCA 2D projection of Iris dataset: comparing the dimensionality-reduction results of LDA and PCA on the Iris dataset.

2. Mathematical formulation of the LDA and QDA classifiers

Both LDA and QDA can be derived from simple probabilistic models which model the class conditional distribution of the data P(X|y=k) for each class k. Predictions can then be obtained by using Bayes’ rule:

$$P(y=k|X) = \frac{P(X|y=k)\,P(y=k)}{P(X)} = \frac{P(X|y=k)\,P(y=k)}{\sum_{l} P(X|y=l)\,P(y=l)}$$

and we select the class k which maximizes this conditional probability.
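As a quick illustration of this decision rule (the choice of the Iris dataset and of QDA here is only an assumption for the example), predict_proba returns the posterior probabilities and predict returns the class that maximizes them:

```python
# Sketch of the decision rule: predict_proba returns the posteriors
# P(y=k|X) and predict selects the class that maximizes them.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

X, y = load_iris(return_X_y=True)
qda = QuadraticDiscriminantAnalysis().fit(X, y)

proba = qda.predict_proba(X[:5])
print(proba.sum(axis=1))                         # each row sums to 1
print(qda.classes_[np.argmax(proba, axis=1)])    # class with largest posterior
print(qda.predict(X[:5]))                        # matches the line above
```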

More specifically, for linear and quadratic discriminant analysis, P(X|y) is modeled as a multivariate Gaussian distribution with density:

$$P(X|y=k) = \frac{1}{(2\pi)^{d/2} |\Sigma_k|^{1/2}} \exp\left(-\frac{1}{2}(X-\mu_k)^t \Sigma_k^{-1} (X-\mu_k)\right)$$

where d is the number of features.

To use this model as a classifier, we just need to estimate from the training data the class priors P(y=k) (by the proportion of instances of class k), the class means μk (by the empirical sample class means) and the covariance matrices (either by the empirical sample class covariance matrices, or by a regularized estimator: see the section on shrinkage below).
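The following sketch (the dataset choice is illustrative, not from the original text) recomputes the class priors and class means by hand and checks them against the fitted priors_ and means_ attributes:

```python
# Sketch: the fitted priors_ and means_ are the class proportions and the
# empirical class means. The dataset choice is illustrative.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)
lda = LinearDiscriminantAnalysis(store_covariance=True).fit(X, y)

priors = np.array([np.mean(y == k) for k in lda.classes_])
means = np.array([X[y == k].mean(axis=0) for k in lda.classes_])

print(np.allclose(priors, lda.priors_))   # expected: True
print(np.allclose(means, lda.means_))     # expected: True
# lda.covariance_ holds the estimated (shared) within-class covariance matrix.
```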

In the case of LDA, the Gaussians for each class are assumed to share the same covariance matrix: Σk=Σ for all k. This leads to linear decision surfaces, which can be seen by comparing the log-probability ratios log⁡[P(y=k|X)/P(y=l|X)]:

$$\log\left(\frac{P(y=k|X)}{P(y=l|X)}\right) = \log\left(\frac{P(X|y=k)\,P(y=k)}{P(X|y=l)\,P(y=l)}\right) = 0 \;\Leftrightarrow\; (\mu_k-\mu_l)^t \Sigma^{-1} X = \frac{1}{2}\left(\mu_k^t \Sigma^{-1} \mu_k - \mu_l^t \Sigma^{-1} \mu_l\right) - \log\frac{P(y=k)}{P(y=l)}$$
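Because the boundary between any two classes is linear in X, the fitted LDA model exposes coef_ and intercept_, and its decision_function is an affine function of the input. A small check (using Iris as an arbitrary example dataset):

```python
# Sketch: with a shared covariance the log-posterior ratio is linear in X,
# so LDA exposes coef_ and intercept_, and decision_function is affine in X.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)
lda = LinearDiscriminantAnalysis().fit(X, y)

scores = lda.decision_function(X)               # one column per class
print(np.allclose(scores, X @ lda.coef_.T + lda.intercept_))  # expected: True
```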

In the case of QDA, there are no assumptions on the covariance matrices Σk of the Gaussians, leading to quadratic decision surfaces. See [3] for more details.

Note


Relation with Gaussian Naive Bayes

If in the QDA model one assumes that the covariance matrices are diagonal, then the inputs are assumed to be conditionally independent in each class, and the resulting classifier is equivalent to the Gaussian Naive Bayes classifier naive_bayes.GaussianNB.
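A rough sketch of this relation, assuming the Iris dataset: per-class Gaussians with diagonal covariances combined with Bayes' rule should reproduce the GaussianNB predictions (up to GaussianNB's small variance-smoothing term):

```python
# Sketch of the relation: per-class Gaussians with diagonal covariances plus
# Bayes' rule should reproduce GaussianNB (up to its variance smoothing).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
classes = np.unique(y)

scores = np.empty((X.shape[0], classes.size))
for i, k in enumerate(classes):
    Xk = X[y == k]
    mean, var = Xk.mean(axis=0), Xk.var(axis=0)   # diagonal covariance
    log_lik = -0.5 * np.sum(np.log(2 * np.pi * var) + (X - mean) ** 2 / var,
                            axis=1)
    scores[:, i] = log_lik + np.log(Xk.shape[0] / X.shape[0])  # + log prior

gnb = GaussianNB().fit(X, y)
# Fraction of identical predictions; expected to be (close to) 1.0.
print(np.mean(classes[scores.argmax(axis=1)] == gnb.predict(X)))
```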

3. Mathematical formulation of LDA dimensionality reduction

To understand the use of LDA in dimensionality reduction, it is useful to start with a geometric reformulation of the LDA classification rule explained above. We write K for the total number of target classes. Since in LDA we assume that all classes have the same estimated covariance Σ, we can rescale the data so that this covariance is the identity:

$$X^* = D^{-1/2} U^t X \quad \text{where } \Sigma = U D U^t$$

Then one can show that classifying a data point after scaling is equivalent to finding the estimated class mean μk∗ which is closest to the data point in Euclidean distance. But this can be done just as well after projecting onto the affine subspace HK of dimension (at most) K−1 generated by all the μk∗ for all classes. This shows that, implicit in the LDA classifier, there is a dimensionality reduction by linear projection onto a (K−1)-dimensional space.

We can reduce the dimension even more, to a chosen L, by projecting onto the linear subspace HL which maximizes the variance of the μk∗ after projection (in effect, we are doing a form of PCA on the transformed class means μk∗). This L corresponds to the n_components parameter used in the discriminant_analysis.LinearDiscriminantAnalysis.transform method. See [3] for more details.
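The geometric view can also be checked numerically. In the sketch below (Iris, which happens to have equal class priors, is an assumption of the example), the data is sphered using the estimated shared covariance and each point is assigned to the nearest rescaled class mean:

```python
# Sketch of the geometric view: sphere the data with the estimated shared
# covariance, then classify by the nearest rescaled class mean. Iris has
# equal class priors, so this should reproduce the LDA predictions.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)
lda = LinearDiscriminantAnalysis(store_covariance=True).fit(X, y)

# X* = D^{-1/2} U^t X from the eigendecomposition Sigma = U D U^t,
# written here for row vectors.
D, U = np.linalg.eigh(lda.covariance_)
X_star = X @ U / np.sqrt(D)
means_star = lda.means_ @ U / np.sqrt(D)

# Assign each point to the nearest class mean in the sphered space.
dists = ((X_star[:, None, :] - means_star[None, :, :]) ** 2).sum(axis=-1)
nearest = lda.classes_[dists.argmin(axis=1)]
print(np.array_equal(nearest, lda.predict(X)))   # expected: True
```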

4. Shrinkage

Shrinkage is a tool to improve estimation of covariance matrices in situations where the number of training samples is small compared to the number of features. In this scenario, the empirical sample covariance is a poor estimator. Shrinkage LDA can be used by setting the shrinkage parameter of the discriminant_analysis.LinearDiscriminantAnalysis class to ‘auto’. This automatically determines the optimal shrinkage parameter in an analytic way following the lemma introduced by Ledoit and Wolf [4]. Note that currently shrinkage only works when setting the solver parameter to ‘lsqr’ or ‘eigen’.

The shrinkage parameter can also be manually set between 0 and 1. In particular, a value of 0 corresponds to no shrinkage (which means the empirical covariance matrix will be used) and a value of 1 corresponds to complete shrinkage (which means that the diagonal matrix of variances will be used as an estimate for the covariance matrix). Setting this parameter to a value between these two extrema will estimate a shrunk version of the covariance matrix.
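A small sketch comparing no shrinkage, automatic (Ledoit-Wolf) shrinkage, and a manually chosen value; the dataset sizes are illustrative assumptions chosen so that the number of samples is small relative to the number of features:

```python
# Sketch: shrinkage of the covariance estimate when n_samples is small
# relative to n_features. Dataset sizes are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=60, n_features=50, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for shrinkage in [None, "auto", 0.5]:   # None = plain empirical covariance
    clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage=shrinkage)
    clf.fit(X_train, y_train)
    print(shrinkage, clf.score(X_test, y_test))
```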


5. Estimation algorithms

The default solver is 'svd'. It can perform both classification and transform, and it does not rely on the calculation of the covariance matrix. This can be an advantage in situations where the number of features is large. However, the 'svd' solver cannot be used with shrinkage.

The ‘lsqr’ solver is an efficient algorithm that only works for classification. It supports shrinkage.

The ‘eigen’ solver is based on the optimization of the between class scatter to within class scatter ratio. It can be used for both classification and transform, and it supports shrinkage. However, the ‘eigen’ solver needs to compute the covariance matrix, so it might not be suitable for situations with a high number of features.
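A minimal sketch contrasting the three solvers (the Iris dataset is an arbitrary choice for the example):

```python
# Sketch contrasting the three solvers on Iris. 'svd' (default) avoids the
# covariance computation; 'lsqr' supports classification only; 'eigen'
# supports classification and transform; 'lsqr' and 'eigen' accept shrinkage.
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)

svd = LinearDiscriminantAnalysis(solver="svd").fit(X, y)
lsqr = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(X, y)
eigen = LinearDiscriminantAnalysis(solver="eigen", shrinkage="auto").fit(X, y)

print(svd.transform(X).shape)     # (150, 2)
print(eigen.transform(X).shape)   # (150, 2)
print(lsqr.score(X, y))           # 'lsqr' has no transform, only predict/score
```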

Examples:

Normal and Shrinkage Linear Discriminant Analysis for classification: Comparison of LDA classifiers with and without shrinkage.
