Overview

PCA (principal component analysis) uses the idea of dimensionality reduction to transform a large number of indicators into a small number of composite indicators.

Principal component analysis is a technique for simplifying a data set, and it is a linear transformation. This transformation maps the data into a new coordinate system such that the first coordinate carries the largest variance of any projection of the data (called the first principal component), the second coordinate carries the second largest variance (the second principal component), and so on. PCA is often used to reduce the dimensionality of a data set while retaining the features that contribute most to its variance. This is achieved by keeping the leading principal components and discarding the trailing ones; the leading components usually preserve the most important aspects of the data.

The principle of PCA is to project the original sample data into a new space. The original data are mapped into new coordinates by a coordinate-transformation matrix whose axes are the eigenvectors corresponding to the largest eigenvalues of the covariance matrix between the dimensions of the original data. The eigenvectors with small eigenvalues are discarded as non-principal components, so the extracted principal components represent the characteristics of the original data while reducing its complexity.
Algorithm Steps

- Organize the data from $n$ samplings of $m$-dimensional measurements into a matrix $X\in R^{n\times m}$:

$$X=\begin{pmatrix}x_{11}&x_{12}&\cdots&x_{1m}\\x_{21}&x_{22}&\cdots&x_{2m}\\\vdots&\vdots&\ddots&\vdots\\x_{n1}&x_{n2}&\cdots&x_{nm}\end{pmatrix}$$

- Zero-center every column of the sample matrix $X$ to obtain a new matrix $X^{\prime}$:

$$\boldsymbol{x}_{i} \leftarrow \boldsymbol{x}_{i}-\frac{1}{m} \sum_{i=1}^{m} \boldsymbol{x}_{i}$$

- Measure the correlation between the data dimensions; here the covariance matrix $C$ is used:

$$C=\frac{1}{m}X^{\prime}{X^{\prime}}^{T}$$

- Compute the eigenvalues of the covariance matrix $C$ and their corresponding eigenvectors, and sort the eigenvalues in descending order:

$$\begin{pmatrix}p_{11}&p_{12}&\cdots&p_{1t}\\p_{21}&p_{22}&\cdots&p_{2t}\\\vdots&\vdots&\ddots&\vdots\\p_{n1}&p_{n2}&\cdots&p_{nt}\end{pmatrix}=\left(\boldsymbol{P}_{1},\boldsymbol{P}_{2},\ldots,\boldsymbol{P}_{t}\right),\qquad \lambda_1>\lambda_2>\cdots>\lambda_t$$

where $\boldsymbol{P}_{j}$ is the eigenvector corresponding to the eigenvalue $\lambda_j$.

- According to the dimensionality-reduction requirement, say reducing to $k$ dimensions, take the first $k$ eigenvectors to form the reduction matrix $P$:

$$P=\left(\boldsymbol{P}_{1},\boldsymbol{P}_{2},\ldots,\boldsymbol{P}_{k}\right)^{T},\quad P\in R^{k\times n}$$

- Apply the transformation matrix $P$ to the original sample data $X$ to change its coordinates, which achieves the dimensionality reduction and extracts the principal components (a MATLAB sketch of these steps is given below):

$$Y=P\bullet X,\quad Y\in R^{k\times m}$$
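The steps above can be condensed into a short MATLAB sketch. This is a minimal illustration rather than the experiment code shown later: the variable names (X0, Xc, C, P, Y) and the random placeholder data are chosen here for illustration only, and it assumes each row of the data matrix is one sample, so the projection is written as Xc*P rather than P·X.

% X0: data matrix with one sample per row (num_samples x num_dims); placeholder data for illustration
X0 = randn(100, 8);
k  = 3;                                            % target dimensionality
Xc = X0 - repmat(mean(X0, 1), size(X0, 1), 1);     % zero-center every column (feature)
C  = Xc' * Xc / size(Xc, 1);                       % covariance matrix between dimensions
[V, D] = eig(C);                                   % eigenvalues and eigenvectors of C
[~, idx] = sort(diag(D), 'descend');               % sort eigenvalues from largest to smallest
P  = V(:, idx(1:k));                               % keep the eigenvectors of the k largest eigenvalues
Y  = Xc * P;                                       % project the centered data onto the principal components

In this row-sample convention the columns of P play the role of the rows of the reduction matrix $P$ in the derivation above.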
Computing the Reconstruction Error

After the projection, the data are reconstructed from the projection and the reconstruction error is computed, which measures how much information is lost by the dimensionality reduction. It is generally computed with the following formulas.
$${error}_1=\frac{1}{m}\sum_{i=1}^{m}\left\|x^{\left(i\right)}-x_{approx}^{\left(i\right)}\right\|^2$$
$${error}_2=\frac{1}{m}\sum_{i=1}^{m}\left\|x^{\left(i\right)}\right\|^2$$
where:

- the $m$ samples are denoted $\left(x^{\left(1\right)},x^{\left(2\right)},\cdots,x^{\left(m\right)}\right)$;
- the corresponding data after projection and reconstruction are denoted $\left(x_{approx}^{\left(1\right)},x_{approx}^{\left(2\right)},\cdots,x_{approx}^{\left(m\right)}\right)$.

The ratio $\eta$ is then

$$\eta=\frac{{error}_1}{{error}_2}$$

and $\eta$ is used to measure the information lost after dimensionality reduction.
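As a minimal sketch of these formulas (not the experiment code below), the ratio η can be computed in MATLAB from the centered data Xc and projection matrix P of the previous sketch. It follows the squared-norm definition above; note that the experiment code later accumulates unsquared norms instead.

Y    = Xc * P;                          % project onto the first k principal components
Xrec = Y * P';                          % reconstruct x_approx in the original space (P has orthonormal columns)
err1 = mean(sum((Xc - Xrec).^2, 2));    % average squared reconstruction error, error_1
err2 = mean(sum(Xc.^2, 2));             % average squared norm of the samples, error_2
eta  = err1 / err2;                     % fraction of information lost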
Algorithm Description

The algorithm can therefore be summarized as follows:

Input: sample set $D=\left\{\boldsymbol{x}_{1}, \boldsymbol{x}_{2}, \ldots, \boldsymbol{x}_{m}\right\}$;
the target dimensionality $k$ of the low-dimensional space.

Process:

- Zero-center all samples: $\boldsymbol{x}_{i} \leftarrow \boldsymbol{x}_{i}-\frac{1}{m} \sum_{i=1}^{m} \boldsymbol{x}_{i}$;
- Compute the sample covariance matrix $\mathbf{X X}^{\mathrm{T}}$;
- Perform an eigenvalue decomposition of the covariance matrix $\mathbf{X X}^{\mathrm{T}}$;
- Take the eigenvectors $\left(\boldsymbol{P}_{1}, \boldsymbol{P}_{2}, \ldots, \boldsymbol{P}_{k}\right)$ corresponding to the $k$ largest eigenvalues;
- Apply the matrix transformation $Y=P\bullet X,\ Y\in R^{k\times m}$.

Output: the transformed matrix $Y=P\bullet X,\ Y\in R^{k\times m}$.
Algorithm Implementation

Data Set

The data set used is the EMG1, EMG2, …, EMG8 channels under Imported Analog EMG – Voltage.

Experiment Code
fileName = 'c:\Users\Administrator\Desktop\机器学习作业\PCA\pcaData1.csv';
X = csvread(fileName);
m = size(X,1);          % number of samples (rows)
meanLine = mean(X,2);   % mean of each row across the 8 channels
R = size(X,2);          % number of dimensions (columns)
% Center the original data: subtract the row-wise mean from every column
A = [];
for i = 1:R
    temp = X(:,i) - meanLine;
    A = [A temp];
end
% Compute the covariance matrix
C = A'*A/R;
% Compute the eigenvalues and eigenvectors of the covariance matrix (via SVD)
[U,S,V] = svd(C);
% Set the reduced dimensionality k (the experiment is run for k from 1 to R-1)
k = 8;
% Compute the projected sample data Y
P = [];
for x = 1:k
    P = [P U(:,x)];
end
Y = X*P;
% Compute the reconstruction error and the error ratio
err1 = 0;
% Reconstruct the sample matrix XR from the projection
XR = Y * pinv(P);
for i = 1:m
    err1 = norm(X(i,:)-XR(i,:)) + err1;
end
% Accumulate the norms of the original samples
err2 = 0;
for i = 1:m
    err2 = norm(X(i,:)) + err2;
end
eta = err1/err2
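The script above evaluates a single value of k. To obtain the error ratio for every k (as in the table below), the projection and error computation can be wrapped in a loop; the following is a minimal sketch that reuses the variables X, U, m and R from the script above and keeps its unsquared-norm error convention.

for k = 1:R-1
    P  = U(:,1:k);                % top-k eigenvectors as the projection matrix
    Y  = X*P;                     % project the samples
    XR = Y*pinv(P);               % reconstruct the samples from the projection
    err1 = 0; err2 = 0;
    for i = 1:m
        err1 = err1 + norm(X(i,:)-XR(i,:));   % reconstruction error
        err2 = err2 + norm(X(i,:));           % magnitude of the original sample
    end
    fprintf('k = %d, eta = %.4f\n', k, err1/err2);
end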
Results and Analysis

The computation yields the following eigenvalues and their corresponding projection directions:
- $\lambda_1=1.8493$, projection direction $(-0.0164, 0.0300, -0.2376, 0.4247, -0.6717, 0.2356, -0.2196, 0.4551)$
- $\lambda_2=1.3836$, projection direction $(0.0910, 0.1724, -0.0097, -0.8267, -0.1464, 0.3599, 0.0025, 0.3570)$
- $\lambda_3=0.5480$, projection direction $(-0.1396, -0.4457, -0.1668, 0.0870, 0.2812, 0.7696, -0.1742, -0.2115)$
- $\lambda_4=0.4135$, projection direction $(0.0622, 0.1782, 0.3136, -0.0080, -0.5387, 0.2841, 0.3300, -0.6214)$
- $\lambda_5=0.3218$, projection direction $(0.2126, -0.7813, 0.3136, -0.0080, -0.5387, 0.2841, 0.3300, -0.6214)$
- $\lambda_6=0.1322$, projection direction $(-0.0959, 0.0340, -0.6943, 0.0068, 0.0269, 0.0042, 0.7119, 0.0064)$
- $\lambda_7=0.0620$, projection direction $(0.8881, -0.0497, -0.3407, -0.0198, -0.0103, -0.0424, -0.2075, -0.2176)$
- $\lambda_8=9.5959\times 10^{-17}$, projection direction $(0.3536, 0.3536, 0.3536, 0.3536, 0.3536, 0.3536, 0.3536, 0.3536)$
The error ratios for different values of k are shown below:

| k | Reconstruction error ratio η |
|---|---|
| 1 | 0.8265 |
| 2 | 0.7105 |
| 3 | 0.6499 |
| 4 | 0.5940 |
| 5 | 0.5521 |
| 6 | 0.5294 |
| 7 | 0.5162 |
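As a complementary check, the fraction of variance retained by the first k components can also be read directly from the eigenvalues listed above as a cumulative eigenvalue ratio. The sketch below simply copies those eigenvalues into a vector; since this standard measure is based on squared errors of centered data, its values need not coincide exactly with the η reported in the table.

lambda   = [1.8493 1.3836 0.5480 0.4135 0.3218 0.1322 0.0620 9.5959e-17];
retained = cumsum(lambda) / sum(lambda);   % variance retained by the first k components
lost     = 1 - retained;                   % corresponding fraction of variance lost for each k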
References

- Selecting the number of principal components (reduced dimensionality) in PCA
- PCA/NMF dimensionality reduction of the EMG1, EMG2, …, EMG8 channels under Imported Analog EMG – Voltage