Perceptron and Statistical Learning Methods
Perceptron
1. The input is the feature vector of an instance; the output is the class of the instance, taking values +1 and -1.
2. The perceptron corresponds to a separating hyperplane in the input space that divides instances into positive and negative classes; it is a discriminative model.
3. A loss function based on misclassification is introduced.
4. Gradient descent is used to minimize the loss function.
The perceptron learning algorithm is simple and easy to implement, and comes in a primal form and a dual form. Proposed by Rosenblatt in 1957, it is the foundation of neural networks and support vector machines.
Definition (Perceptron):
Suppose the input space (feature space) is $X \subseteq R^{n}$ and the output space is $Y=\{+1,-1\}$. The input $x \in X$ is the feature vector of an instance, corresponding to a point in the input space (feature space); the output $y \in Y$ is the class of the instance. The function from the input space to the output space
$$f(x)=sign(w \bullet x+b)$$
is called the perceptron. Model parameters: the weight vector $w$, the bias $b$, and the inner product $w \bullet x$; the sign function is:
$$sign(x) = \begin{cases} +1, & x\geq 0 \\ -1, & x<0 \end{cases}$$
Geometric interpretation of the perceptron:
The linear equation
$$w \bullet x+b=0$$
corresponds to a hyperplane S in the input space, where $w$ is the normal vector and $b$ the intercept. The hyperplane divides the points into positive and negative classes and is called the separating hyperplane.
Perceptron learning strategy
How should the loss function be defined?
A natural choice is the number of misclassified points, but this loss function is not continuously differentiable in $w, b$ and is hard to optimize.
Another choice is the total distance from the misclassified points to the hyperplane.
The distance from a point $x_{0}$ to the hyperplane is:
$$\frac{1}{||w||}|w\bullet x_{0}+b|$$
For a misclassified point $(x_{i}, y_{i})$: when $w \bullet x_{i}+b>0$ we have $y_{i}=-1$, and when $w \bullet x_{i}+b<0$ we have $y_{i}=+1$, so
$$-y_{i}(w \bullet x_{i}+b)>0$$
The distance from a misclassified point to the hyperplane is therefore:
$$-\frac{1}{||w||}y_{i}(w\bullet x_{i}+b)$$
Total distance:
$$-\frac{1}{||w||}\sum_{x_{i} \in M}y_{i}(w\bullet x_{i}+b)$$
Dropping the factor $\frac{1}{||w||}$ gives the loss function:
$$L(w,b)=-\sum_{x_{i} \in M}y_{i}(w\bullet x_{i}+b)$$
where $M$ is the set of misclassified points.
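The loss above is easy to compute directly. Below is a minimal NumPy sketch; the helper name `perceptron_loss` and the sample points are illustrative, not from a library:

```python
import numpy as np

def perceptron_loss(w, b, X, y):
    """L(w,b) = -sum over misclassified points of y_i * (w . x_i + b)."""
    margins = y * (X @ w + b)      # y_i * (w . x_i + b) for every point
    misclassified = margins <= 0   # points on the wrong side (or on the plane)
    return -margins[misclassified].sum()

X = np.array([[3.0, 3.0], [4.0, 3.0], [1.0, 1.0]])
y = np.array([1.0, 1.0, -1.0])

print(perceptron_loss(np.array([2.0, 2.0]), 0.0, X, y))   # only (1,1) is misclassified: loss 4.0
print(perceptron_loss(np.array([1.0, 1.0]), -3.0, X, y))  # all points correct: loss 0.0
```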
Perceptron learning algorithm
Solve the optimization problem:
$$\min_{w,b}L(w,b)=-\sum_{x_{i} \in M}y_{i}(w\bullet x_{i}+b)$$
Stochastic gradient descent:
First choose an arbitrary hyperplane $w, b$, then keep minimizing the objective function. Rather than performing gradient descent on all points of $M$ at once, one misclassified point is randomly selected at a time. The gradients of the loss function $L$ are:
$$\nabla_{w}L(w,b)=-\sum_{x_{i} \in M}y_{i}x_{i}$$
$$\nabla_{b}L(w,b)=-\sum_{x_{i} \in M}y_{i}$$
Pick a misclassified point and update:
$$w\leftarrow w+\eta y_{i}x_{i}$$
$$b\leftarrow b+\eta y_{i}$$
Primal form of the perceptron learning algorithm:
Input: training data set $T=\{(x_{1},y_{1}),(x_{2},y_{2}),...,(x_{N},y_{N})\}$, where $x_{i} \in X = R^{n}$, $y_{i} \in Y=\{+1,-1\}$, $i=1,2,...,N$; learning rate $\eta$ $(0 < \eta \leq 1)$;
Output: $w, b$; perceptron model $f(x)=sign(w\bullet x+b)$
(1) Choose initial values $w_{0}, b_{0}$
(2) Pick a data point $(x_{i},y_{i})$ from the training set
(3) If $y_{i}(w \bullet x_{i}+b) \leq 0$, update
$$w\leftarrow w+\eta y_{i}x_{i}$$
$$b\leftarrow b+\eta y_{i}$$
(4) Go to (2) until there are no misclassified points in the training set.
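The steps above can be sketched in Python as follows. This is a minimal implementation, not the book's code; the function name `train_perceptron` and the rescan-from-the-start selection order are my own choices:

```python
import numpy as np

def train_perceptron(X, y, eta=1.0, max_epochs=1000):
    """Primal-form perceptron: repeatedly pick a misclassified point and update w, b."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(max_epochs):
        updated = False
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) <= 0:   # step (3): misclassified (or on the hyperplane)
                w = w + eta * yi * xi    # w <- w + eta * y_i * x_i
                b = b + eta * yi         # b <- b + eta * y_i
                updated = True
                break                    # step (4): rescan from the first point
        if not updated:                  # no misclassified points left
            break
    return w, b

# The worked example below: positives (3,3), (4,3); negative (1,1)
X = np.array([[3.0, 3.0], [4.0, 3.0], [1.0, 1.0]])
y = np.array([1.0, 1.0, -1.0])
w, b = train_perceptron(X, y)
print(w, b)  # with this scan order: w = [1. 1.], b = -3.0, after 7 updates
```

Note that a different selection order of misclassified points can converge to a different separating hyperplane, as the convergence discussion below points out.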
Example: positive points $x_{1}=(3,3)^T$, $x_{2}=(4,3)^T$; negative point $x_{3}=(1,1)^T$.
Solution: set up the optimization problem:
$$\min_{w,b}L(w,b)=-\sum_{x_{i} \in M}y_{i}(w\bullet x_{i}+b)$$
Solve for $w, b$, with $\eta =1$.
(1) Take initial values $w_{0}=0, b_{0}=0$
(2) For $x_{1}=(3,3)^T$: $y_{1}(w_{0} \bullet x_{1}+b_{0})=0$, so $x_{1}$ is not correctly classified; update $w, b$:
$$w_{1}=w_{0}+y_{1}x_{1}=(3,3)^T, \quad b_{1}=b_{0}+y_{1}=1$$
This gives the linear model:
$$w_{1}\bullet x+b_{1}=3x^{(1)}+3x^{(2)}+1$$
(3) For $x_{1}$ and $x_{2}$, clearly $y_{i}(w_{1}\bullet x_{i}+b_{1})>0$, so they are correctly classified. For $x_{3}=(1,1)^T$: $y_{3}(w_{1}\bullet x_{3}+b_{1})<0$, so $x_{3}$ is misclassified; update:
$$w_{2}=w_{1}+y_{3}x_{3}=(2,2)^T, \quad b_{2}=b_{1}+y_{3}=0$$
This gives the linear model:
$$w_{2}\bullet x+b_{2}=2x^{(1)}+2x^{(2)}$$
Continuing in this way gives:
$$w_{7}=(1,1)^T, \quad b_{7}=-3$$
$$w_{7} \bullet x+ b_{7}=x^{(1)}+x^{(2)}-3$$
Separating hyperplane:
$$x^{(1)}+x^{(2)}-3=0$$
Perceptron model:
$$f(x)=sign(x^{(1)}+x^{(2)}-3)$$
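As a quick numerical check (assuming NumPy, and taking $x_{2}=(4,3)^T$ as the second positive point, as listed in the dual-form example later), the final model classifies all three training points correctly:

```python
import numpy as np

# Final parameters from the worked example: w = (1,1)^T, b = -3
w, b = np.array([1.0, 1.0]), -3.0

points = {(3.0, 3.0): 1, (4.0, 3.0): 1, (1.0, 1.0): -1}   # instance -> label
results = {x: (1 if w @ np.array(x) + b >= 0 else -1) == label  # f(x) = sign(w.x + b)
           for x, label in points.items()}
print(results)  # every point maps to True
```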
Convergence of the algorithm: we prove that after a finite number of iterations the algorithm yields a separating hyperplane that classifies the training data set completely correctly, together with the corresponding perceptron model.
Absorb $b$ into the weight vector and write:
$$\hat w=(w^T,b)^T, \quad \hat x=(x^T,1)^T, \quad \hat x \in R^{n+1}, \quad \hat w \in R^{n+1}, \quad \hat w \bullet \hat x=w \bullet x+b$$
Theorem:
Suppose the training data set $T=\{(x_{1},y_{1}),(x_{2},y_{2}),...,(x_{N},y_{N})\}$ is linearly separable, where $x_{i} \in X=R^n$, $y_{i} \in Y = \{-1,+1\}$, $i=1,2,...,N$.
Then:
(1) There exists a hyperplane $\hat w_{opt} \bullet \hat x=w_{opt} \bullet x +b_{opt}=0$ satisfying $||\hat w_{opt}||=1$ that separates the training data completely correctly, and there exists $\gamma >0$ such that for all $i=1,2,...,N$:
$$y_{i}(\hat w_{opt} \bullet \hat x_{i})=y_{i}(w_{opt} \bullet x_{i} +b_{opt}) \geq \gamma$$
Proof:
Since the data set is linearly separable, there exists a hyperplane
$$\hat w_{opt} \bullet \hat x=w_{opt} \bullet x +b_{opt}=0$$
that separates it completely correctly; take $||\hat w_{opt}||=1$. Since there are finitely many points, for each of them:
$$y_{i}(\hat w_{opt} \bullet \hat x_{i})=y_{i}(w_{opt} \bullet x_{i} +b_{opt}) >0$$
So there exists
$$\gamma = \min_{i}\{y_{i}(w_{opt} \bullet x_{i} +b_{opt})\}$$
such that
$$y_{i}(\hat w_{opt} \bullet \hat x_{i})=y_{i}(w_{opt} \bullet x_{i} +b_{opt}) \geq \gamma$$
(2) Let $R=\max_{1 \leq i \leq N}||\hat x_{i}||$. Then the number of misclassifications $k$ of the algorithm on the training set satisfies:
$$k \leq \left( \frac{R}{\gamma} \right)^2$$
Proof: Let $\hat w_{k-1}$ be the extended weight vector before the k-th misclassified instance, i.e.
$$\hat w_{k-1}=(w_{k-1}^T,b_{k-1})^T$$
The condition for the k-th misclassified instance is:
$$y_{i}(\hat w_{k-1} \bullet \hat x_{i}) = y_{i}(w_{k-1} \bullet x_{i}+b_{k-1}) \leq 0$$
Then $w$ and $b$ are updated:
$$w_{k} \leftarrow w_{k-1}+\eta y_{i}x_{i}$$
$$b_{k} \leftarrow b_{k-1}+\eta y_{i}$$
i.e.
$$\hat w_{k} = \hat w_{k-1}+\eta y_{i} \hat x_{i}$$
We derive two inequalities:
(1) $\hat w_{k} \bullet \hat w_{opt} \geq k \eta \gamma$
From
$$\hat w_{k} \bullet \hat w_{opt} =\hat w_{k-1} \bullet \hat w_{opt}+ \eta y_{i} \hat w_{opt} \bullet \hat x_{i} \geq \hat w_{k-1} \bullet \hat w_{opt}+ \eta \gamma$$
we get by induction:
$$\hat w_{k} \bullet \hat w_{opt} \geq \hat w_{k-1} \bullet \hat w_{opt}+ \eta \gamma \geq \hat w_{k-2} \bullet \hat w_{opt}+ 2\eta \gamma \geq...\geq k \eta \gamma$$
(2) $||\hat w_{k}||^2 \leq k \eta^2 R^2$
Indeed:
$$||\hat w_{k}||^2=||\hat w_{k-1}+\eta y_{i}\hat x_{i}||^2=||\hat w_{k-1}||^2+2 \eta y_{i} \hat w_{k-1} \bullet \hat x_{i}+\eta^2||\hat x_{i}||^2$$
Since the middle term is non-positive by the misclassification condition:
$$\leq ||\hat w_{k-1}||^2+\eta^2||\hat x_{i}||^2$$
$$\leq ||\hat w_{k-1}||^2+\eta^2 R^2$$
$$\leq ||\hat w_{k-2}||^2+2\eta^2 R^2 \leq...$$
$$\leq k\eta^2 R^2$$
Combining the two inequalities:
$$k \eta \gamma\leq \hat w_{k} \bullet \hat w_{opt} \leq ||\hat w_{k}||\,||\hat w_{opt}||\leq \sqrt{k}\,\eta R$$
$$k^2 \gamma^2 \leq kR^2$$
which gives:
$$k \leq \left( \frac{R}{\gamma} \right)^2$$
The theorem shows:
- The number of misclassifications k is bounded: when the training data set is linearly separable, the primal form of the perceptron learning algorithm converges after finitely many iterations.
- The perceptron algorithm has many solutions, which depend both on the initial values and on the order in which misclassified points are selected during iteration.
- To obtain a unique separating hyperplane, additional constraints are needed, as in SVM.
- On a linearly inseparable data set, the iteration oscillates.
Dual form of the perceptron algorithm:
Basic idea: express $w$ and $b$ as linear combinations of the instances $x_{i}$ and labels $y_{i}$, and obtain $w$ and $b$ by solving for the coefficients. For a misclassified point:
$$w \leftarrow w +\eta y_{i}x_{i} ,\quad b \leftarrow b+\eta y_{i}$$
If point $(x_{i}, y_{i})$ is used $n_{i}$ times in updates, then with $\alpha_{i}=n_{i}\eta$ the learned parameters are:
$$w=\sum_{i=1}^{N} \alpha_{i}y_{i}x_{i} ,\quad b=\sum_{i=1}^{N} \alpha_{i}y_{i}$$
Dual form of the perceptron learning algorithm:
Input: training data set $T=\{(x_{1},y_{1}),(x_{2},y_{2}),...,(x_{N},y_{N})\}$, where $x_{i} \in X = R^{n}$, $y_{i} \in Y=\{+1,-1\}$, $i=1,2,...,N$; learning rate $\eta$ $(0 < \eta \leq 1)$;
Output: $\alpha, b$; perceptron model $f(x)=sign\left(\sum_{j=1}^{N} \alpha_{j}y_{j}x_{j}\bullet x+b\right)$, where $\alpha=(\alpha_{1},\alpha_{2},...,\alpha_{N})^T$
(1) $\alpha \leftarrow 0, b \leftarrow 0$
(2) Pick a data point $(x_{i},y_{i})$ from the training set
(3) If $y_{i}\left(\sum_{j=1}^{N} \alpha_{j} y_{j} x_{j} \bullet x_{i} + b\right) \leq 0$, update
$$\alpha_{i} \leftarrow \alpha_{i}+\eta, \quad b \leftarrow b+\eta y_{i}$$
(4) Go to (2) until there is no misclassified data.
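The dual form only ever needs inner products of training instances, so the Gram matrix can be precomputed. A minimal sketch (the function name `train_perceptron_dual` and the rescan-from-the-start selection order are my own choices):

```python
import numpy as np

def train_perceptron_dual(X, y, eta=1.0, max_epochs=1000):
    """Dual-form perceptron: learn alpha, b using the precomputed Gram matrix."""
    N = X.shape[0]
    alpha = np.zeros(N)
    b = 0.0
    G = X @ X.T                        # Gram matrix: G[i, j] = x_i . x_j
    for _ in range(max_epochs):
        updated = False
        for i in range(N):
            # step (3): y_i * (sum_j alpha_j*y_j*(x_j . x_i) + b) <= 0
            if y[i] * (np.sum(alpha * y * G[:, i]) + b) <= 0:
                alpha[i] += eta        # alpha_i <- alpha_i + eta
                b += eta * y[i]        # b <- b + eta * y_i
                updated = True
                break                  # step (4): rescan from the first point
        if not updated:
            break
    return alpha, b

# The worked example below: positives (3,3), (4,3); negative (1,1)
X = np.array([[3.0, 3.0], [4.0, 3.0], [1.0, 1.0]])
y = np.array([1.0, 1.0, -1.0])
alpha, b = train_perceptron_dual(X, y)
print(alpha, b)        # with this scan order: alpha = [2. 0. 5.], b = -3.0
w = (alpha * y) @ X    # recover w = sum_i alpha_i * y_i * x_i
print(w)               # [1. 1.] -- the same hyperplane as the primal form
```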
Example: the positive points are $x_{1}=(3,3)^T$, $x_{2}=(4,3)^T$; the negative point is $x_{3}=(1,1)^T$.
Solution:
(1) Take $\alpha_{i}=0$, $i=1,2,3$, $b=0$, $\eta = 1$
(2) Compute the Gram matrix
$$G=\begin{bmatrix} 18 & 21 & 6 \\ 21 & 25 & 7 \\ 6 & 7 & 2 \end{bmatrix}$$
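The Gram matrix above can be reproduced in a line of NumPy (a quick check, not part of the original solution):

```python
import numpy as np

# Gram matrix of the three training points: G[i, j] = x_i . x_j
X = np.array([[3, 3], [4, 3], [1, 1]])
G = X @ X.T
print(G)
# [[18 21  6]
#  [21 25  7]
#  [ 6  7  2]]
```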
(3) Misclassification condition:
$$y_{i}\left(\sum_{j=1}^{N} \alpha_{j} y_{j} x_{j} \bullet x_{i}+b\right) \leq 0$$
Parameter update:
$$\alpha_{i} \leftarrow \alpha_{i}+1, \quad b \leftarrow b+ y_{i}$$
(4) Iterate until no point is misclassified.
(5) The result is:
$$w=2x_{1}+0x_{2}-5x_{3}=(1,1)^T, \quad b=-3$$
Separating hyperplane:
$$x^{(1)}+x^{(2)}-3=0$$
Perceptron model:
$$f(x)=sign(x^{(1)}+x^{(2)}-3)$$