Machine Learning Categories
![figure](https://i-blog.csdnimg.cn/blog_migrate/19f1191ae3d17a3c4627d6c916fbf172.png)
![figure](https://i-blog.csdnimg.cn/blog_migrate/cfd92224162733b045977300eb4b27d1.png)
1 PCA
Consider linear combinations:
$$
\begin{aligned}
Y_1 &= a_{11}X_1 + a_{12}X_2 + \dots + a_{1p}X_p = \mathrm{a}_1^\mathsf{T}X \\
Y_2 &= a_{21}X_1 + a_{22}X_2 + \dots + a_{2p}X_p = \mathrm{a}_2^\mathsf{T}X \\
&\;\;\vdots \\
Y_p &= a_{p1}X_1 + a_{p2}X_2 + \dots + a_{pp}X_p = \mathrm{a}_p^\mathsf{T}X
\end{aligned}
$$
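As a minimal sketch (the toy data values are my own assumption), the loading vectors $\mathrm{a}_j$ in the combinations above can be obtained as the eigenvectors of the sample covariance matrix:

```python
import numpy as np

# Toy data: n = 6 observations of p = 3 variables (illustrative values).
X = np.array([[2.5, 2.4, 1.1],
              [0.5, 0.7, 0.9],
              [2.2, 2.9, 1.0],
              [1.9, 2.2, 1.3],
              [3.1, 3.0, 0.8],
              [2.3, 2.7, 1.2]])

Xc = X - X.mean(axis=0)           # centre each variable
Sigma = np.cov(Xc, rowvar=False)  # sample covariance matrix
lam, A = np.linalg.eigh(Sigma)    # eigenvalues/eigenvectors (ascending order)
order = np.argsort(lam)[::-1]     # re-sort so the largest eigenvalue comes first
lam, A = lam[order], A[:, order]
# Column j of A is the loading vector a_j, so Y_j = a_j^T X.
```

`np.linalg.eigh` is used rather than `eig` because the covariance matrix is symmetric, which guarantees real eigenvalues and orthonormal eigenvectors.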
PCA
- The linear combinations $Y_1, Y_2, \dots, Y_p$ are the principal components
- $\mathrm{a}_j$ is the eigenvector of $\Sigma$ associated with the $j^{th}$ principal component
- $a_{j1},\dots,a_{jp}$ are the loadings of the $j^{th}$ principal component; they make up the principal component loading vector $\mathrm{a}_j=(a_{j1},\dots,a_{jp})^\mathsf{T}$
- Score: $y_{ij}=a_{j1}x_{i1}+a_{j2}x_{i2}+\dots+a_{jp}x_{ip}$, the coordinate of the $i^{th}$ observation in the new coordinate system of the principal components
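In matrix form, all scores at once are the centred data multiplied by the loading matrix. A small NumPy sketch (the simulated data is an assumption for illustration):

```python
import numpy as np

# Scores y_ij = a_j1 x_i1 + ... + a_jp x_ip for centred observations.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 4))            # 10 observations, 4 variables
Xc = X - X.mean(axis=0)                 # centre the data first
lam, A = np.linalg.eigh(np.cov(Xc, rowvar=False))
A = A[:, np.argsort(lam)[::-1]]         # loading vectors, largest eigenvalue first
scores = Xc @ A                         # row i holds observation i's PC coordinates
```

Because the columns of `A` are eigenvectors of the covariance matrix, the columns of `scores` are pairwise uncorrelated.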
Properties
- $Y_1, Y_2, \dots, Y_p$ are pairwise uncorrelated: $Var(Y)=\mathrm{diag}(\lambda_1,\dots,\lambda_p)=\Lambda$, where $\lambda_j$ is the $j^{th}$ eigenvalue of $\Sigma$, and $Var(Y_j)=\mathrm{a}_j^\mathsf{T}\Sigma\,\mathrm{a}_j=\lambda_j$
- The total variance is preserved under the principal component transformation: $\sum_{j=1}^{p}Var(Y_j)=\sum_{j=1}^{p}Var(X_j)$
- The first $k$ principal components account for the proportion $\frac{\sum_{j=1}^{k}\lambda_j}{\sum_{j=1}^{p}\lambda_j}$ of the total variance
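Both properties can be checked numerically. A sketch on simulated correlated data (the data-generating step is an assumption):

```python
import numpy as np

# Check variance preservation and the explained-variance proportion.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))  # correlated variables
Sigma = np.cov(X, rowvar=False)
lam = np.sort(np.linalg.eigvalsh(Sigma))[::-1]

total_pc_var = lam.sum()          # sum_j Var(Y_j), the eigenvalues of Sigma
total_x_var = np.trace(Sigma)     # sum_j Var(X_j); equal by the property above
prop_first_2 = lam[:2].sum() / lam.sum()  # share explained by the first 2 PCs
```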
PCA practice
- Proportion of Variation: retain enough components to explain a chosen share of the total variance
- Cattell's method: plot the eigenvalues (scree plot) and keep components before the "elbow"
- Kaiser's method: keep components whose eigenvalue exceeds the average eigenvalue
For the detailed steps, see https://blog.csdn.net/MINGRAN_JIA/article/details/123242755
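The proportion and Kaiser rules above can be sketched as simple functions of the eigenvalues (the `threshold=0.8` cutoff and the function name are my own assumptions; Cattell's scree method is visual and is omitted):

```python
import numpy as np

def n_components(eigvals, threshold=0.8):
    """Pick the number of PCs by two rules; threshold is an assumed cutoff."""
    lam = np.sort(np.asarray(eigvals, dtype=float))[::-1]
    # Proportion of Variation: smallest k whose cumulative share >= threshold.
    cum_share = np.cumsum(lam) / lam.sum()
    k_prop = int(np.searchsorted(cum_share, threshold) + 1)
    # Kaiser's method: keep components with eigenvalue above the average
    # (equivalently > 1 when Sigma is a correlation matrix).
    k_kaiser = int((lam > lam.mean()).sum())
    return k_prop, k_kaiser
```

For eigenvalues `[4.0, 2.0, 1.0, 0.5, 0.5]`, the proportion rule with an 80% cutoff keeps 3 components, while Kaiser's rule keeps 2 — the rules need not agree, which is why they are usually compared side by side.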