2 Perceptron
2.1 The Perceptron Model
Suppose the input space (feature space) is $\mathcal{X} \subseteq \mathbf{R}^n$ and the output space is $\mathcal{Y} = \{+1, -1\}$. An input $x \in \mathcal{X}$ is the feature vector of an instance, corresponding to a point in the input space; an output $y \in \mathcal{Y}$ is the class of the instance. The following function from input to output
$$f(x) = \operatorname{sign}(w \cdot x + b)$$
is called the perceptron, where $w$ and $b$ are the parameters of the perceptron model.
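As a minimal NumPy sketch of the model above (the parameter values $w = (1, 1)$, $b = -3$ are my own illustrative choice, not learned values):

```python
import numpy as np

def perceptron_predict(w, b, x):
    # f(x) = sign(w . x + b); sign(0) is taken as -1 here
    return 1 if np.dot(w, x) + b > 0 else -1

# illustrative parameters (assumed, not learned)
w = np.array([1.0, 1.0])
b = -3.0
print(perceptron_predict(w, b, np.array([3.0, 3.0])))  # → 1
print(perceptron_predict(w, b, np.array([1.0, 1.0])))  # → -1
```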
2.2 Learning Strategy
- Linearly separable data set: if there exists a hyperplane $S$: $w \cdot x + b = 0$ that divides the positive and negative instance points of the data set completely correctly onto the two sides of the hyperplane, the data set is called a linearly separable data set.
- Since the distance from any point $x_0$ in the space $\mathbf{R}^n$ to the hyperplane $S$ is $\frac{1}{\|w\|}|w \cdot x_0 + b|$, the total distance from all misclassified points to the hyperplane is
$$-\frac{1}{\|w\|} \sum_{x_i \in M} y_i(w \cdot x_i + b)$$
where $M$ is the set of misclassified points (a misclassified point satisfies $y_i(w \cdot x_i + b) \leq 0$, so the minus sign makes the sum non-negative). Dropping the factor $\frac{1}{\|w\|}$ gives the perceptron loss function $L(w, b) = -\sum_{x_i \in M} y_i(w \cdot x_i + b)$.
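The misclassification-driven quantity above can be sketched directly; the data and parameter values below are arbitrary illustrations (rows of `X` are samples):

```python
import numpy as np

def total_misclassified_margin(w, b, X, y):
    # sum of -y_i (w . x_i + b) over the misclassified set M,
    # i.e. over points with y_i (w . x_i + b) <= 0 (non-negative by construction)
    margins = y * (X @ w + b)
    return -np.sum(margins[margins <= 0])

X = np.array([[3.0, 3.0], [4.0, 3.0], [1.0, 1.0]])
y = np.array([1, 1, -1])
print(total_misclassified_margin(np.array([1.0, 0.0]), 0.0, X, y))  # → 1.0
```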
2.3 Learning Algorithm
2.3.1 Primal Form
Input: training data set $T = \{(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)\}$, where $x_i \in \mathcal{X} = \mathbf{R}^n$, $y_i \in \mathcal{Y} = \{-1, +1\}$, $i = 1, 2, \ldots, N$; learning rate $\eta$ ($0 < \eta \leq 1$).
Output: $w, b$; the perceptron model $f(x) = \operatorname{sign}(w \cdot x + b)$.
(1) Choose initial values $w_0, b_0$.
(2) Select a data point $(x_i, y_i)$ from the training set.
(3) If $y_i(w \cdot x_i + b) \leq 0$, update
$$w \leftarrow w + \eta y_i x_i$$
$$b \leftarrow b + \eta y_i$$
(4) Go back to (2) until the training data contain no misclassified points.
2.3.2 Proof of Convergence
First, absorb the bias $b$ into the weight vector $w$, writing $\hat w = (w^T, b)^T$; likewise extend the input vector with a constant 1, writing $\hat x = (x^T, 1)^T$. Then $\hat x \in \mathbf{R}^{n+1}$, $\hat w \in \mathbf{R}^{n+1}$, and clearly $\hat w \cdot \hat x = w \cdot x + b$.
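A quick sketch of this augmentation trick (the example values are arbitrary):

```python
import numpy as np

w = np.array([1.0, 2.0])
b = 0.5
x = np.array([3.0, -1.0])

w_hat = np.append(w, b)    # \hat{w} = (w^T, b)^T
x_hat = np.append(x, 1.0)  # \hat{x} = (x^T, 1)^T

# the identity \hat{w} . \hat{x} = w . x + b
print(np.dot(w_hat, x_hat), np.dot(w, x) + b)  # → 1.5 1.5
```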
Novikoff's theorem
Suppose the training data set $T = \{(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)\}$ is linearly separable, where $x_i \in \mathcal{X} = \mathbf{R}^n$, $y_i \in \mathcal{Y} = \{-1, +1\}$, $i = 1, 2, \ldots, N$. Then:
(1) There exists a hyperplane $\hat w_{opt} \cdot \hat x = w_{opt} \cdot x + b_{opt} = 0$ satisfying $\|\hat w_{opt}\| = 1$ that separates the training data completely correctly; moreover, there exists $\gamma > 0$ such that for all $i = 1, 2, \ldots, N$,
$$y_i(\hat w_{opt} \cdot \hat x_i) = y_i(w_{opt} \cdot x_i + b_{opt}) \geq \gamma$$
(2) Let $R = \max_{1 \leq i \leq N} \|\hat x_i\|$. Then the number of misclassifications $k$ made by the perceptron algorithm on the training data set satisfies
$$k \leq \left(\frac{R}{\gamma}\right)^2$$
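The bound can be sanity-checked numerically on a toy data set. The separating hyperplane below ($w_{opt} = (1, 1)$, $b_{opt} = -3$, scaled so $\|\hat w_{opt}\| = 1$) is an illustrative choice of mine, not something the theorem prescribes:

```python
import numpy as np

X = np.array([[3.0, 3.0], [4.0, 3.0], [1.0, 1.0]])   # rows are samples
y = np.array([1, 1, -1])
X_hat = np.hstack([X, np.ones((len(X), 1))])         # augmented inputs

# a separating hyperplane, scaled so ||w_hat_opt|| = 1 (illustrative choice)
w_hat_opt = np.array([1.0, 1.0, -3.0])
w_hat_opt /= np.linalg.norm(w_hat_opt)

gamma = np.min(y * (X_hat @ w_hat_opt))              # margin of this hyperplane
R = np.max(np.linalg.norm(X_hat, axis=1))
bound = (R / gamma) ** 2

# run the primal perceptron, counting updates (misclassifications)
w_hat, eta, k = np.zeros(3), 1.0, 0
while True:
    mis = [i for i in range(len(y)) if y[i] * (w_hat @ X_hat[i]) <= 0]
    if not mis:
        break
    w_hat += eta * y[mis[0]] * X_hat[mis[0]]
    k += 1

print(k, k <= bound)  # → 7 True
```

Note that `gamma` is computed for one particular separating hyperplane, so `bound` may be loose; the theorem only guarantees that some choice of $\hat w_{opt}$ makes the bound hold, and any valid choice still upper-bounds $k$.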
Proof:
(1) Since the data set is linearly separable, by definition there exists a hyperplane that separates it completely correctly; take this hyperplane to be $\hat w_{opt} \cdot \hat x = w_{opt} \cdot x + b_{opt} = 0$ and scale it so that $\|\hat w_{opt}\| = 1$. Then for each of the finitely many $i$,
$$y_i(\hat w_{opt} \cdot \hat x_i) = y_i(w_{opt} \cdot x_i + b_{opt}) > 0$$
Hence there exists
$$\gamma = \min_i \{ y_i(w_{opt} \cdot x_i + b_{opt}) \}$$
satisfying
$$y_i(\hat w_{opt} \cdot \hat x_i) = y_i(w_{opt} \cdot x_i + b_{opt}) \geq \gamma$$
(2) Let $\hat w_{k-1}$ be the extended weight vector before the $k$-th misclassified instance, i.e.
$$\hat w_{k-1} = (w_{k-1}^T, b_{k-1})^T$$
The condition for the $k$-th misclassified instance is
$$y_i(\hat w_{k-1} \cdot \hat x_i) = y_i(w_{k-1} \cdot x_i + b_{k-1}) \leq 0$$
If $(x_i, y_i)$ is a misclassified instance, the update is
$$\hat w_k = \hat w_{k-1} + \eta y_i \hat x_i$$
From this, two inequalities are derived. The first is
$$\hat w_k \cdot \hat w_{opt} \geq k \eta \gamma \tag{2.12}$$
By the preceding equations,
$$\hat w_k \cdot \hat w_{opt} = \hat w_{k-1} \cdot \hat w_{opt} + \eta y_i \hat w_{opt} \cdot \hat x_i \geq \hat w_{k-1} \cdot \hat w_{opt} + \eta \gamma$$
Applying this recursively,
$$\hat w_k \cdot \hat w_{opt} \geq \hat w_{k-1} \cdot \hat w_{opt} + \eta \gamma \geq \hat w_{k-2} \cdot \hat w_{opt} + 2\eta\gamma \geq \cdots \geq k\eta\gamma$$
The second inequality to prove is
$$\|\hat w_k\|^2 \leq k \eta^2 R^2 \tag{2.13}$$
By the preceding equations,
$$\|\hat w_k\|^2 = \|\hat w_{k-1}\|^2 + 2\eta y_i \hat w_{k-1} \cdot \hat x_i + \eta^2 \|\hat x_i\|^2 \leq \|\hat w_{k-1}\|^2 + \eta^2 \|\hat x_i\|^2 \leq \|\hat w_{k-1}\|^2 + \eta^2 R^2 \leq \|\hat w_{k-2}\|^2 + 2\eta^2 R^2 \leq \cdots \leq k \eta^2 R^2$$
Combining (2.12) and (2.13),
$$k\eta\gamma \leq \hat w_k \cdot \hat w_{opt} \leq \|\hat w_k\|\,\|\hat w_{opt}\| = \|\hat w_k\| \leq \sqrt{k}\,\eta R$$
$$k^2\gamma^2 \leq kR^2$$
Hence
$$k \leq \left(\frac{R}{\gamma}\right)^2$$
This proves that the number of misclassifications has an upper bound: after a finite number of iterations, a separating hyperplane that classifies the training data completely correctly can be found. In other words, when the training data are linearly separable, the perceptron learning algorithm converges.
Exercises
2.1
Consider four points in the plane: (1,1), (1,-1), (-1,1), (-1,-1). By the definition of XOR, (1,1) and (-1,-1) belong to one class and the other two points to the other. No line can separate the two classes: if the positive points satisfy $w_1 + w_2 + b > 0$ and $-w_1 - w_2 + b > 0$, adding the two gives $b > 0$; but the negative points satisfy $w_1 - w_2 + b < 0$ and $-w_1 + w_2 + b < 0$, which add to give $b < 0$, a contradiction. Hence XOR is not linearly separable.
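This can be spot-checked numerically: the sketch below scans a grid of candidate parameters $(w_1, w_2, b)$ and confirms that none of them classifies all four points correctly (a finite grid scan is only a sanity check, not a proof):

```python
import numpy as np
from itertools import product

X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
y = np.array([1, -1, -1, 1])   # XOR-style labels: same-sign points are positive

grid = np.linspace(-2, 2, 21)
separable = any(
    all(yi * (w1 * x1 + w2 * x2 + b) > 0 for (x1, x2), yi in zip(X, y))
    for w1, w2, b in product(grid, grid, grid)
)
print(separable)  # → False
```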
2.3
Necessity:
If the samples are linearly separable, the convex hull of the positive instance points and the convex hull of the negative instance points do not intersect.
Proof by contradiction.
Suppose the sample set is linearly separable and the two convex hulls intersect, i.e. there exists an element $s$ satisfying both $s \in \text{conv}(S_+)$ and $s \in \text{conv}(S_-)$.
Linear separability means there exists a hyperplane $w \cdot x + b = 0$ that puts the positive and negative instances on opposite sides; in particular, every positive instance satisfies
$$w \cdot x_i + b = \epsilon_i > 0, \quad i = 1, 2, \ldots, |S_+|$$
By the definition of the convex hull, every element $s_+ = \sum_{i=1}^{|S_+|} \lambda_i x_i$ of $\text{conv}(S_+)$ (with $\lambda_i \geq 0$ and $\sum_i \lambda_i = 1$) satisfies
$$w \cdot s_+ + b = w \cdot \sum_{i=1}^{|S_+|} \lambda_i x_i + b = \sum_{i=1}^{|S_+|} \lambda_i w \cdot x_i + b = \sum_{i=1}^{|S_+|} \lambda_i (\epsilon_i - b) + b = \sum_{i=1}^{|S_+|} \lambda_i \epsilon_i > 0$$
where the last equality uses $\sum_i \lambda_i = 1$.
Similarly, every element $s_-$ of $\text{conv}(S_-)$ satisfies
$$w \cdot s_- + b = \sum_{i=1}^{|S_-|} \lambda_i \epsilon_i < 0$$
since $w \cdot x_i + b = \epsilon_i < 0$ for the negative instances.
By this reasoning, no $s$ can satisfy both $s \in \text{conv}(S_+)$ and $s \in \text{conv}(S_-)$, which contradicts the assumption; hence the two convex hulls do not intersect.
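The key step of the argument, that any convex combination of correctly classified positive points keeps a positive margin, can be sanity-checked with random convex combinations; the points and hyperplane below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
S_pos = np.array([[3.0, 3.0], [4.0, 3.0], [3.0, 4.0]])   # positive instances
w, b = np.array([1.0, 1.0]), -3.0                        # w . x_i + b > 0 for all of them

for _ in range(1000):
    lam = rng.random(len(S_pos))
    lam /= lam.sum()            # lambda_i >= 0, sum_i lambda_i = 1
    s = lam @ S_pos             # a point of conv(S_pos)
    assert w @ s + b > 0        # it stays strictly on the positive side
print("ok")
```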
Sufficiency is trickier; I have not found an easy-to-understand proof.
Implementation of the primal form of the perceptron:
# Primal form of the perceptron
import numpy as np
X = np.array([[3, 3], [4, 3], [1, 1]]).T  # columns are samples
y = np.array([1, 1, -1])
# build the model
def predict(w, b, x):
    f = np.dot(w, x) + b
    return 1 if f > 0 else -1
# train the model
def train(lr=1):
    # initialization
    w = np.zeros(X.shape[0])
    b = 0
    false_count = X.shape[1]
    iter_count = 0
    # iterate until no training point is misclassified
    while false_count != 0:
        print(f"this is the {iter_count}th iter")
        false_count = X.shape[1]
        for x_p, y_p in zip(X.T, y):
            print(f'the training sample is {x_p}, {y_p}')
            # a point is misclassified when y (w . x + b) <= 0
            if y_p * (np.dot(w, x_p) + b) <= 0:
                w = w + lr * y_p * x_p
                b = b + lr * y_p
                print(w, b)
            else:
                false_count -= 1
        print(f'false_count: {false_count}')
        iter_count += 1
    return w, b

w, b = train()
Implementation of the dual form of the perceptron (reuses `X` and `y` from above):
# build the Gram matrix
def calculate_gram_matrix(X):
    # samples are column vectors, so gram_matrix[i, j] = x_i . x_j
    return np.dot(X.T, X)
# dual-form decision value for sample i, given its row of the Gram matrix
def duality_predict(alpha, b, gram_row):
    res = 0
    for j, x_j_dot_x_i in enumerate(gram_row):
        res += alpha[j] * y[j] * x_j_dot_x_i
    return res + b
def duality_train(lr=1):
    # initialization
    alpha = np.zeros(X.shape[1])
    b = 0
    false_count = X.shape[1]
    iter_count = 0
    gram_matrix = calculate_gram_matrix(X)
    # iterate until no training point is misclassified
    while false_count != 0:
        print(f"this is the {iter_count}th iter")
        false_count = X.shape[1]
        for i, y_i in enumerate(y):
            pre_y = duality_predict(alpha, b, gram_matrix[i])
            # misclassified when y_i (sum_j alpha_j y_j x_j . x_i + b) <= 0
            if y_i * pre_y <= 0:
                alpha[i] += lr
                b += lr * y_i
                print(alpha, b)
            else:
                false_count -= 1
        print(f'false_count: {false_count}')
        iter_count += 1
    # recover w = sum_i alpha_i y_i x_i; loop over samples (columns of X)
    w = np.zeros(X.shape[0])
    for i in range(X.shape[1]):
        w += alpha[i] * y[i] * X[:, i]
    return w, b

w, b = duality_train()