Implementing PCA and LDA dimensionality reduction on the Iris dataset
Algorithm principle
Principal Component Analysis (PCA) is a widely used linear dimensionality-reduction technique. Its essence is to apply a linear transformation that maps the original features into a lower-dimensional space while representing the original features as faithfully as possible.
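In standard terms (the post does not spell this out, but it is the textbook formulation): with $C$ the covariance matrix of the centered data, PCA looks for an orthonormal projection $T$ that keeps as much variance as possible,
$$\max_{T}\ \mathrm{tr}(T^T C T)\quad\text{s.t.}\quad T^T T = I,$$
and the optimum is reached when the columns of $T$ are the eigenvectors of $C$ belonging to the $k$ largest eigenvalues, which is exactly what the implementation below computes.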
Algorithm implementation
First, center the data:
meanVal = np.mean(X, axis=0)  # per-feature mean
W = X - meanVal               # centered data matrix
A note on centering: it shifts the sample matrix so that its centroid sits at the origin of the coordinate system, which simplifies the covariance computation that follows.
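A minimal numeric sketch of what centering does (the small matrix A is made up purely for illustration):
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 9.0]])
A_centered = A - np.mean(A, axis=0)   # subtract the per-column mean
print(np.mean(A_centered, axis=0))    # -> [0. 0.]: every feature now has zero mean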
Compute the covariance matrix of the centered data:
covMat = np.cov(W, rowvar=0)  # rowvar=0: each column is one feature
Compute the eigenvalues and eigenvectors of the covariance matrix:
eigVals, eigVects = np.linalg.eig(covMat)
Select the eigenvectors corresponding to the k largest eigenvalues and stack them into the projection matrix T:
E = np.argsort(eigVals)   # indices that sort the eigenvalues ascending
k_E = E[:-(k + 1):-1]     # indices of the k largest eigenvalues
T = eigVects[:, k_E]      # d x k projection matrix
$Y = WT$ is then the data reduced to k dimensions:
Y = np.dot(W, T)
Results
The PCA algorithm:
def MyPCA(X, k):
    meanVal = np.mean(X, axis=0)
    W = X - meanVal                      # center the data
    covMat = np.cov(W, rowvar=0)         # covariance matrix
    eigVals, eigVects = np.linalg.eig(covMat)
    E = np.argsort(eigVals)
    k_E = E[:-(k + 1):-1]                # k largest eigenvalues
    T = eigVects[:, k_E]                 # projection matrix
    Y = np.dot(W, T)                     # project to k dimensions
    return Y
Load the Iris dataset:
data = load_iris()
y = data.target
X = data.data
Run PCA on the data and plot the result:
reduced_X = np.array(MyPCA(X, 2))
show2(reduced_X)
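As a quick sanity check that is not in the original post, the result can be compared against scikit-learn's PCA. Eigenvectors are only determined up to sign, so individual axes may come out mirrored:
from sklearn.decomposition import PCA
sk_X = PCA(n_components=2).fit_transform(X)
# should agree with reduced_X up to per-column sign and numerical tolerance
print(np.allclose(np.abs(sk_X), np.abs(reduced_X), atol=1e-6))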
LDA
Algorithm principle
Linear Discriminant Analysis (LDA) is a supervised dimensionality-reduction technique. After projection, the within-class variance should be as small as possible and the between-class variance as large as possible: the projected points of each class stay close together, while the centers of different classes stay as far apart as possible.
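In matrix form this objective is the generalized Rayleigh quotient of standard LDA theory (using the scatter matrices $S_w$ and $S_b$ defined below):
$$J(W)=\frac{\mathrm{tr}(W^T S_b W)}{\mathrm{tr}(W^T S_w W)},$$
and it is maximized by taking the columns of $W$ to be the leading eigenvectors of $S_w^{-1}S_b$.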
Algorithm implementation
First, group the samples by class label:
yi = set(y)                                        # distinct labels
xi = np.array([X[np.where(y == i)] for i in yi])   # samples of each class
Compute the overall sample mean $\mu=\frac{1}{m}\sum\limits_{i=1}^{m}x_i$ and the per-class means $\mu_i=\frac{1}{n_i}\sum\limits_{x\in X_i}x$, where $m$ is the total number of samples and $n_i$ is the number of samples in class $i$:
u = np.array([np.mean(X, axis=0)])                                   # overall mean (1 x d)
ui = np.array([np.mean(xi[i], axis=0) for i in range(xi.shape[0])])  # class means
Compute the within-class scatter matrix $S_w=\sum\limits_{i=1}^{c}\sum\limits_{x\in X_i}(x-\mu_i)(x-\mu_i)^T$ and the between-class scatter matrix $S_b=\sum\limits_{i=1}^{c}n_i(\mu_i-\mu)(\mu_i-\mu)^T$, where $c$ is the number of classes:
Sw = sum(np.dot((xi[i] - ui[i]).T, (xi[i] - ui[i])) for i in range(len(yi)))
Sb = sum(len(xi[i]) * np.dot((ui[i].reshape(1, 4) - u).T, (ui[i].reshape(1, 4) - u))
         for i in range(len(yi)))
Compute $S_w^{-1}S_b$. Since the rank of $S_b$ is at most $c-1$, LDA can reduce the data to at most $c-1$ dimensions:
S = np.linalg.inv(Sw).dot(Sb)
r = np.linalg.matrix_rank(S)   # upper bound on the target dimension
if k > r:
    print("k_max =", r)
    k = r
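A side note beyond the original post: if $S_w$ is singular or badly conditioned, np.linalg.inv fails or amplifies noise. A more stable sketch solves the generalized symmetric eigenproblem with SciPy instead of inverting $S_w$ explicitly:
from scipy.linalg import eigh
# solves Sb v = lambda * Sw v; eigenvalues come back in ascending order
gen_vals, gen_vecs = eigh(Sb, Sw)
W_alt = gen_vecs[:, -k:]   # eigenvectors of the k largest generalized eigenvalues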
Select the eigenvectors corresponding to the k largest eigenvalues and stack them into the projection matrix $W$; $Y=XW$ is then the data reduced to k dimensions:
eigVals, eigVects = np.linalg.eig(S)
E = np.argsort(eigVals)
k_E = E[:-(k + 1):-1]
W = eigVects[:, k_E]
Y = np.dot(X, W)
Results
The LDA algorithm:
def MyLDA(X, y, k):
    yi = set(y)
    xi = np.array([X[np.where(y == i)] for i in yi])   # samples grouped by class
    u = np.array([np.mean(X, axis=0)])                 # overall mean
    ui = np.array([np.mean(xi[i], axis=0) for i in range(xi.shape[0])])  # class means
    # within-class scatter
    Sw = sum(np.dot((xi[i] - ui[i]).T, (xi[i] - ui[i]))
             for i in range(len(yi)))
    # between-class scatter (4 = number of Iris features)
    Sb = sum(len(xi[i]) * np.dot((ui[i].reshape(1, 4) - u).T, (ui[i].reshape(1, 4) - u))
             for i in range(len(yi)))
    S = np.linalg.inv(Sw).dot(Sb)
    r = np.linalg.matrix_rank(S)   # caps the target dimension at c - 1
    if k > r:
        print("k_max =", r)
        k = r
    eigVals, eigVects = np.linalg.eig(S)
    E = np.argsort(eigVals)
    k_E = E[:-(k + 1):-1]          # k largest eigenvalues
    W = eigVects[:, k_E]           # projection matrix
    Y = np.dot(X, W)
    return Y
Run LDA on the data and plot the result:
LDA_2D = np.array(MyLDA(X, y, 2))
show2(LDA_2D)
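For comparison (again not in the original code), scikit-learn's LDA gives a reference projection; its scaling conventions differ from the hand-rolled version, so only the class structure of the plot is expected to match:
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
sk_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)
show2(np.array(sk_lda))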
This reduces the data to two dimensions. But how much information survives the reduction? A common measure is the fraction of eigenvalue mass retained by the top $k$ components, $\eta_k=\frac{\sum_{j=1}^{k}\lambda_j}{\sum_{j=1}^{d}\lambda_j}$, where $d$ is the original dimensionality.
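This is what the variable n in the full code below computes; as a standalone sketch, reusing the eigVals array from the eigendecomposition step:
eigVals_desc = np.sort(eigVals)[::-1]               # eigenvalues, descending
eta_k = np.sum(eigVals_desc[:k]) / np.sum(eigVals)  # fraction of retained information
print("retained:", eta_k * 100.0, "%")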
Summary
Overall, PCA and LDA are simple to implement, but their underlying principles and derivations require solid mathematical foundations; in LDA in particular, pay attention to how the rank of the scatter matrices constrains the result. Although both methods reduce dimensionality, they differ fundamentally. First, the supervision differs: LDA is a supervised method, while PCA is unsupervised. Second, the goals differ: PCA removes redundant dimensions from the original data by making the variance along each axis of the projected subspace as large as possible, whereas LDA reduces dimensionality so that the different classes in the original data become as separable as possible.
Full code:
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
from sklearn.datasets import load_iris
def MyPCA(X, k):
    meanVal = np.mean(X, axis=0)
    W = X - meanVal                      # center the data
    covMat = np.cov(W, rowvar=0)
    # np.mat was removed in NumPy 2.0; plain ndarrays with np.dot do the same job
    eigVals, eigVects = np.linalg.eig(covMat)
    E = np.argsort(eigVals)
    k_E = E[:-(k + 1):-1]                # k largest eigenvalues
    T = eigVects[:, k_E]
    n = sum(eigVals[k_E]) / sum(eigVals)
    print("Reduced to", k, "dimensions; retained", n * 100.0, "% of the original information")
    Y = np.dot(W, T)
    return Y
def MyLDA(X, y, k):
    yi = set(y)
    xi = np.array([X[np.where(y == i)] for i in yi])   # samples grouped by class
    u = np.array([np.mean(X, axis=0)])                 # overall mean
    ui = np.array([np.mean(xi[i], axis=0) for i in range(xi.shape[0])])  # class means
    Sw = sum(np.dot((xi[i] - ui[i]).T, (xi[i] - ui[i]))
             for i in range(len(yi)))
    Sb = sum(len(xi[i]) * np.dot((ui[i].reshape(1, 4) - u).T, (ui[i].reshape(1, 4) - u))
             for i in range(len(yi)))
    S = np.linalg.inv(Sw).dot(Sb)
    r = np.linalg.matrix_rank(S)   # caps the target dimension at c - 1
    if k > r:
        print("k_max =", r)
        k = r
    eigVals, eigVects = np.linalg.eig(S)
    E = np.argsort(eigVals)
    k_E = E[:-(k + 1):-1]
    W = eigVects[:, k_E]
    n = sum(eigVals[k_E]) / sum(eigVals)
    print("Reduced to", k, "dimensions; retained", n * 100.0, "% of the original information")
    Y = np.dot(X, W)
    return Y
def show2(reduced_X):
    # 2-D scatter plot coloured by class (uses the global labels y)
red_x, red_y = [], []
blue_x, blue_y = [], []
green_x, green_y = [], []
for i in range(len(reduced_X)):
if y[i] == 0:
red_x.append(reduced_X[i][0])
red_y.append(reduced_X[i][1])
elif y[i] == 1:
blue_x.append(reduced_X[i][0])
blue_y.append(reduced_X[i][1])
else:
green_x.append(reduced_X[i][0])
green_y.append(reduced_X[i][1])
plt.scatter(red_x, red_y, c='r', marker='x')
plt.scatter(blue_x, blue_y, c='b', marker='*')
plt.scatter(green_x, green_y, c='g', marker='.')
# plt.show()
def show3(X):
    # 3-D scatter plot coloured by class (uses the global labels y)
red_x, red_y, red_z = [], [], []
blue_x, blue_y, blue_z = [], [], []
green_x, green_y, green_z = [], [], []
for i in range(len(X)):
if y[i] == 0:
red_x.append(X[i][0])
red_y.append(X[i][1])
red_z.append(X[i][2])
elif y[i] == 1:
blue_x.append(X[i][0])
blue_y.append(X[i][1])
blue_z.append(X[i][2])
else:
green_x.append(X[i][0])
green_y.append(X[i][1])
green_z.append(X[i][2])
    fig = plt.figure()
    # fig.gca(projection='3d') was removed in Matplotlib 3.6; use add_subplot instead
    ax = fig.add_subplot(projection='3d')
ax.scatter(red_x, red_y, red_z, c='r', marker='x')
ax.scatter(blue_x, blue_y, blue_z, c='b', marker='*')
ax.scatter(green_x, green_y, green_z, c='g', marker='.')
# plt.show()
if __name__ == "__main__":
data = load_iris()
y = data.target
X = data.data
    # PCA
    # reduce to 2 dimensions
PCA_2D = np.array(MyPCA(X, 2))
show2(PCA_2D)
    # reduce to 3 dimensions
PCA_3D = np.array(MyPCA(X, 3))
show3(PCA_3D)
plt.show()
    # LDA
    # reduce to 2 dimensions
LDA_2D = np.array(MyLDA(X,y,2))
show2(LDA_2D)
plt.show()
If you spot an error, corrections are welcome; if you have a better approach, please share it.