Machine Learning Notes: A Smooth SVM Derivation and Implementation, from Hard Margin to Soft Margin to Kernel Functions

Preface

Finally tackling the machine learning lab I'd put off for ages, I worked through the SVM derivation from definition to solution: deriving the optimization objective, posing the dual problem, recovering the primal solution via the KKT conditions, and the SMO algorithm.

Note: in a few places, such as the KKT conditions and the SMO algorithm, this post skips the proofs (too tedious) and uses the results directly; interested readers can consult the references.

References:

Video walkthrough of the derivation: (系列六) 支持向量机1-硬间隔SVM-模型定义_哔哩哔哩_bilibili

Conditions for Lagrangian duality: 拉格朗日对偶性详解(手推笔记)-CSDN博客

A geometric view of the KKT conditions and duality: 机器学习笔记(8)-对偶关系和KKT条件 - Epir - 博客园 (cnblogs.com)

Code reference: 统计学习:线性可分支持向量机(Cvxpy实现) - orion-orion - 博客园 (cnblogs.com)

I. The Optimization Problem

When a classification problem is linearly separable, we want to find the optimal hyperplane that separates the classes, where "optimal" means:

the margin between the hyperplane and the sample points nearest to it should be as large as possible.

A simple example:

For a binary classification problem whose samples have two features (so each point lies in the 2D plane), the hyperplane is a one-dimensional line. We want the separating line whose distance to its nearest points is as large as possible.

The optimization problem can then be stated as follows.

Given sample points $X = (x_1, x_2, \ldots, x_n)^T$, $X \in \mathbb{R}^{n \times d}$, $x_i \in \mathbb{R}^d$ (writing $n$ for the number of samples and $d$ for the feature dimension, consistent with the summations below),
and labels $y = (y_1, y_2, \ldots, y_n)^T$, $y \in \mathbb{R}^n$, $y_i \in \{-1, 1\}$, where $y_i$ is the class of sample $x_i$,
we seek an optimal hyperplane $w^T x + b = 0$ that separates the samples with different values of $y_i$, i.e. we solve

$$\max_{w, b} \min_{x_i} \frac{|w^T x_i + b|}{\|w\|} \quad \text{s.t.} \quad y_i(w^T x_i + b) > 0$$

Since $y_i(w^T x_i + b) = |w^T x_i + b|$ under the constraint, the problem is equivalent to

$$\max_{w, b} \min_{x_i} \frac{y_i(w^T x_i + b)}{\|w\|} \quad \text{s.t.} \quad y_i(w^T x_i + b) > 0$$

The inner minimization is over $x_i$, so $\|w\|$ is a constant there and can be pulled out front:

$$\max_{w, b} \frac{1}{\|w\|} \min_{x_i} y_i(w^T x_i + b) \quad \text{s.t.} \quad y_i(w^T x_i + b) > 0$$

Since $y_i(w^T x_i + b) > 0$, there exists some $\gamma > 0$ such that $\min_{x_i} y_i(w^T x_i + b) = \gamma$, i.e. $y_i(w^T x_i + b) \ge \gamma$ for all $i$.

Because rescaling $w$ and $b$ does not change the geometric distance from a sample point to the hyperplane, we may take $\gamma = 1$ for convenience.

Substituting $\min_{x_i} y_i(w^T x_i + b) = 1$ into the problem above and updating the constraint accordingly:

$$\max_{w, b} \frac{1}{\|w\|} \quad \text{s.t.} \quad y_i(w^T x_i + b) \ge 1$$

This is equivalent to

$$\min_{w, b} \frac{1}{2} w^T w \quad \text{s.t.} \quad y_i(w^T x_i + b) \ge 1$$

$$\min_{w, b} \frac{1}{2} w^T w \quad \text{s.t.} \quad 1 - y_i(w^T x_i + b) \le 0$$
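As a sanity check, this primal problem is a small quadratic program that can be handed straight to a solver. A minimal sketch with cvxpy on hypothetical toy data (not part of the original code in this post):

```python
import numpy as np
import cvxpy as cp

# hypothetical linearly separable toy data: two clusters in 2D
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

w = cp.Variable(2)
b = cp.Variable()
objective = cp.Minimize(0.5 * cp.sum_squares(w))        # (1/2) w^T w
constraints = [cp.multiply(y, X @ w + b) >= 1]          # y_i (w^T x_i + b) >= 1
cp.Problem(objective, constraints).solve()
print(w.value, b.value)
```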

II. The Dual Problem

1. Deriving the Dual Problem

Construct the generalized Lagrangian of the problem above:

$$\mathcal{L}(w, b, \alpha) = \frac{1}{2} w^T w + \sum_{i=1}^n \alpha_i \left(1 - y_i(w^T x_i + b)\right), \quad \alpha_i \ge 0$$

By Lagrangian duality:

Lagrangian duality

For the problem

$$\min_x f(x) \quad \text{s.t.} \quad c_i(x) \le 0, \; i = 1, \ldots, k, \qquad h_j(x) = 0, \; j = 1, \ldots, l$$

construct the generalized Lagrangian:

$$\mathcal{L}(x, \alpha, \beta) = f(x) + \sum_{i=1}^k \alpha_i c_i(x) + \sum_{j=1}^l \beta_j h_j(x), \quad \alpha_i \ge 0$$

Then the original problem is equivalent to

$$\min_x \max_{\alpha, \beta} \mathcal{L}(x, \alpha, \beta)$$

Proof:

$$\max_{\alpha, \beta} \mathcal{L}(x, \alpha, \beta) = \begin{cases} f(x), & x \text{ satisfies the constraints} \\ \infty, & x \text{ violates the constraints} \end{cases}$$

$$\min_x \max_{\alpha, \beta} \mathcal{L}(x, \alpha, \beta) \;\Leftrightarrow\; \min_x f(x), \; x \text{ subject to the constraints}$$

For any minimax problem $\min_a \max_b f(a, b)$,

the corresponding maximin problem $\max_b \min_a f(a, b)$ is its weak dual, i.e.

$$\max_b \min_a f(a, b) \le \min_a \max_b f(a, b)$$

Proof:

Since $\min_a f(a, b) \le f(a, b) \le \max_b f(a, b)$,

$$\min_a f(a, b) \le \max_b f(a, b)$$

This inequality holds for every $a$ and $b$, so it still holds when the left side is maximized over $b$ and the right side is minimized over $a$:

$$\max_b \min_a f(a, b) \le \min_a \max_b f(a, b)$$
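Weak duality is easy to see numerically: on any finite payoff table, max-min never exceeds min-max. A quick sketch (hypothetical random matrix, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.normal(size=(5, 7))       # f[a, b]: rows index a, columns index b

max_min = f.min(axis=0).max()     # max over b of (min over a of f)
min_max = f.max(axis=1).min()     # min over a of (max over b of f)
print(max_min <= min_max)         # prints True for any matrix
```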

When the problem satisfies the following two conditions, the inequality above becomes an equality, i.e. strong duality holds:

  1. The original problem is a convex optimization problem.
  2. The original problem satisfies Slater's condition.

Convex optimization problem: the objective function is convex and the feasible set is a convex set.
Slater's condition: there exists a strictly feasible point, i.e. one at which every inequality constraint holds strictly.

Since $\min_x \max_{\alpha, \beta} \mathcal{L}(x, \alpha, \beta)$ is an optimization problem over $x$,
while the dual problem $\max_{\alpha, \beta} \min_x \mathcal{L}(x, \alpha, \beta)$ is a problem over $\alpha$ and $\beta$,
when strong duality holds we can use the KKT conditions to map the dual solution back to the primal solution.

The KKT conditions are stated as follows:

Let $\alpha^*, \beta^*$ be the solution of the dual problem and $x^*$ the solution of the primal problem. Then they satisfy:

1. Feasibility:

$$c_i(x^*) \le 0, \; i = 1, \ldots, k; \qquad h_j(x^*) = 0, \; j = 1, \ldots, l; \qquad \alpha_i^* \ge 0$$

2. Complementary slackness:

$$\alpha_i^* c_i(x^*) = 0, \quad i = 1, \ldots, k$$

3. Stationarity (zero gradient):

$$\nabla_x \mathcal{L}(x, \alpha^*, \beta^*) \big|_{x = x^*} = 0$$

In summary, the dual of our problem is

$$\max_{\alpha} \min_{w, b} \mathcal{L}(w, b, \alpha)$$

For the inner problem $\min_{w, b} \mathcal{L}(w, b, \alpha)$, take partial derivatives with respect to $w$ and $b$:

$$\nabla_w \mathcal{L}(w, b, \alpha) = w - \sum_{i=1}^n \alpha_i y_i x_i = 0$$

$$\nabla_b \mathcal{L}(w, b, \alpha) = -\sum_{i=1}^n \alpha_i y_i = 0$$

$$w = \sum_{i=1}^n \alpha_i y_i x_i$$

$$\sum_{i=1}^n \alpha_i y_i = 0$$
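Spelling out the substitution (the $b$ term vanishes because $\sum_{i=1}^n \alpha_i y_i = 0$, and $w = \sum_i \alpha_i y_i x_i$ gives $w^T w = \sum_{i,j} \alpha_i \alpha_j y_i y_j x_i^T x_j$):

$$
\begin{aligned}
\mathcal{L}(w, b, \alpha) &= \frac{1}{2} w^T w + \sum_{i=1}^n \alpha_i - \sum_{i=1}^n \alpha_i y_i w^T x_i - b \sum_{i=1}^n \alpha_i y_i \\
&= \frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n \alpha_i \alpha_j y_i y_j x_i^T x_j - \sum_{i=1}^n \sum_{j=1}^n \alpha_i \alpha_j y_i y_j x_i^T x_j + \sum_{i=1}^n \alpha_i \\
&= -\frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n \alpha_i \alpha_j y_i y_j x_i^T x_j + \sum_{i=1}^n \alpha_i
\end{aligned}
$$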

Substituting both results into the dual problem thus gives

$$\max_{\alpha} -\frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n \alpha_i \alpha_j y_i y_j x_i^T x_j + \sum_{i=1}^n \alpha_i \quad \text{s.t.} \quad \sum_{i=1}^n \alpha_i y_i = 0, \quad \alpha_i \ge 0$$

This maximization is equivalent to the following minimization:

$$\min_{\alpha} \frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n \alpha_i \alpha_j y_i y_j x_i^T x_j - \sum_{i=1}^n \alpha_i \quad \text{s.t.} \quad \sum_{i=1}^n \alpha_i y_i = 0, \quad \alpha_i \ge 0$$
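This dual is itself a QP and can be written compactly with the Gram matrix $G_{ij} = y_i y_j x_i^T x_j$. A vectorized sketch, equivalent to (and much faster than) the double-loop objective in the implementation below; `psd_wrap` just tells cvxpy to trust that $G$ is positive semidefinite despite floating-point noise:

```python
import numpy as np
import cvxpy as cp

def solve_dual(X, y, C=None):
    # X: (n, d) samples, y: (n,) labels in {-1, +1}
    n = X.shape[0]
    G = (X @ X.T) * np.outer(y, y)                # G_ij = y_i y_j x_i^T x_j
    alpha = cp.Variable(n)
    objective = cp.Minimize(0.5 * cp.quad_form(alpha, cp.psd_wrap(G)) - cp.sum(alpha))
    constraints = [alpha >= 0, y @ alpha == 0]
    if C is not None:                             # soft margin: box constraint
        constraints.append(alpha <= C)
    cp.Problem(objective, constraints).solve()
    return alpha.value
```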

2. Solving the Dual

Find the optimal solution $\alpha^*$ of the problem above (using the SMO algorithm).

Then, by the KKT conditions:

Stationarity:

$$\nabla_w \mathcal{L}(w^*, b^*, \alpha^*) = w^* - \sum_{i=1}^n \alpha_i^* y_i x_i = 0$$

$$w^* = \sum_{i=1}^n \alpha_i^* y_i x_i$$

Complementary slackness:

$$\alpha_i^* \left(1 - y_i((w^*)^T x_i + b^*)\right) = 0, \quad i = 1, 2, \ldots, n$$

If every $\alpha_i^* = 0$, then by the above $w^* = 0$; but $w^* = 0$ is not a solution of the primal problem, so this cannot happen and there exists at least one $\alpha_j^* > 0$.

Picking any such $\alpha_j^* > 0$, complementary slackness gives

$$y_j((w^*)^T x_j + b^*) = 1$$

Since $y_j^2 = 1$, multiplying both sides by $y_j$ yields

$$(w^*)^T x_j + b^* = y_j$$

$$b^* = y_j - (w^*)^T x_j = y_j - \sum_{i=1}^n \alpha_i^* y_i x_i^T x_j$$

The optimal hyperplane we obtain can therefore be written as

$$\sum_{i=1}^n \alpha_i^* y_i x_i^T x + b^* = 0$$

and the decision function as

$$f(x) = \operatorname{sign}\left(\sum_{i=1}^n \alpha_i^* y_i x_i^T x + b^*\right)$$
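Once $w^*$ and $b^*$ are known, prediction is a one-liner; a minimal sketch, assuming `X` holds one sample per row:

```python
import numpy as np

def predict(X, w, b):
    # f(x) = sign(w^T x + b), applied row-wise
    return np.sign(X @ w + b)
```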

III. Soft Margin

1. Deriving the Optimization Problem

When samples of different classes are slightly mixed together, the data are no longer linearly separable and the original optimization problem has no solution. So we modify the problem to tolerate some error:

$$\min_{w, b} \frac{1}{2} w^T w + \text{loss}$$

Suppose we defined the loss as

$$\text{loss} = \begin{cases} 1, & y_i(w^T x_i + b) < 1 \\ 0, & y_i(w^T x_i + b) \ge 1 \end{cases}$$

This is discontinuous, though, and has poor mathematical properties.

So define the loss instead as

$$\text{loss} = \begin{cases} 1 - y_i(w^T x_i + b), & y_i(w^T x_i + b) < 1 \\ 0, & y_i(w^T x_i + b) \ge 1 \end{cases}$$

$$\text{loss} = \max\{0,\; 1 - y_i(w^T x_i + b)\}$$
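This is the hinge loss; vectorized with numpy it is a one-liner (a sketch, with hypothetical argument names):

```python
import numpy as np

def hinge_loss(X, y, w, b):
    # max{0, 1 - y_i (w^T x_i + b)} for every sample, as a vector
    return np.maximum(0.0, 1.0 - y * (X @ w + b))
```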

ξ i = 1 − y i ( w T x i + b ) \xi_i = 1 - y_i(w^T x_i + b) ξi=1yi(wTxi+b),由于样本无法完全满足原问题的约束 y i ( w T x i + b ) > = 1 y_i(w^T x_i + b) >= 1 yi(wTxi+b)>=1,修改其约束为:

y i ( w T x i + b ) > = 1 − ξ i , ξ i > = 0 y_i(w^T x_i + b) >= 1 - \xi_i, \quad \xi_i >= 0 yi(wTxi+b)>=1ξi,ξi>=0

因此,软间隔的SVM的优化问题为:

m i n w , b , ξ 1 2 w T w + C Σ i = 1 n ξ i s . t . y i ( w T x i + b ) > = 1 − ξ i , i = 1 , 2 , . . . , n ξ i > = 0 , i = 1 , 2 , . . . , n min_{w, b, \xi} \frac{1}{2} w^T w + C \Sigma_{i=1}^n \xi_i \\ s.t. \quad y_i(w^T x_i + b) >= 1 - \xi_i, \quad i = 1, 2, ..., n\\ \quad \quad \quad \xi_i >= 0, \quad i = 1, 2, ..., n minw,b,ξ21wTw+CΣi=1nξis.t.yi(wTxi+b)>=1ξi,i=1,2,...,nξi>=0,i=1,2,...,n
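This primal, too, can be solved directly; a minimal cvxpy sketch with explicit slack variables (a hypothetical helper, not part of the implementation below):

```python
import numpy as np
import cvxpy as cp

def soft_margin_primal(X, y, C=1.0):
    n, d = X.shape
    w, b, xi = cp.Variable(d), cp.Variable(), cp.Variable(n)
    # (1/2) w^T w + C * sum(xi), with relaxed margin constraints
    objective = cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi))
    constraints = [cp.multiply(y, X @ w + b) >= 1 - xi, xi >= 0]
    cp.Problem(objective, constraints).solve()
    return w.value, b.value
```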

2. Solving the Dual

As before, construct the generalized Lagrangian:

$$\mathcal{L}(w, b, \xi, \alpha, \beta) = \frac{1}{2} w^T w + C \sum_{i=1}^n \xi_i - \sum_{i=1}^n \alpha_i \left(y_i(w^T x_i + b) - 1 + \xi_i\right) - \sum_{i=1}^n \beta_i \xi_i, \quad \alpha_i \ge 0, \; \beta_i \ge 0$$

Taking partial derivatives with respect to $w$, $b$, and $\xi_i$:

$$\nabla_w \mathcal{L} = w - \sum_{i=1}^n \alpha_i y_i x_i = 0$$

$$\nabla_b \mathcal{L} = -\sum_{i=1}^n \alpha_i y_i = 0$$

$$\nabla_{\xi_i} \mathcal{L} = C - \alpha_i - \beta_i = 0$$

As before, substituting these into the dual problem $\max_{\alpha, \beta} \min_{w, b, \xi} \mathcal{L}$ gives

$$\max_{\alpha} -\frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n \alpha_i \alpha_j y_i y_j x_i^T x_j + \sum_{i=1}^n \alpha_i \quad \text{s.t.} \quad \sum_{i=1}^n \alpha_i y_i = 0, \quad C - \alpha_i - \beta_i = 0, \quad \alpha_i \ge 0, \quad \beta_i \ge 0$$

C − α i − β i = 0 C - \alpha_i - \beta_i = 0 Cαiβi=0 β i > = 0 \beta_i >= 0 βi>=0,得 C − α i > = 0 C - \alpha_i >= 0 Cαi>=0,即 α i < = C \alpha_i <= C αi<=C

对偶问题表示如下:

m a x α − 1 2 Σ i = 1 n Σ j = 1 n α i α j y i y j x i T x j + Σ i = 1 n α i s . t . Σ i = 1 n α i y i = 0 0 < = α i < = C max_{\alpha} -\frac{1}{2} \Sigma_{i=1}^n \Sigma_{j=1}^n \alpha_i \alpha_j y_i y_j x_i^T x_j + \Sigma_{i = 1}^n \alpha_i \\ s.t. \quad \Sigma_{i=1}^n \alpha_i y_i = 0 \\ \quad \quad \quad 0 <= \alpha_i <= C maxα21Σi=1nΣj=1nαiαjyiyjxiTxj+Σi=1nαis.t.Σi=1nαiyi=00<=αi<=C

求上述问题的最优解 α ∗ \alpha^* α

同理,由KKT条件,取一 α j ∗ \alpha_j^* αj满足 0 < α j ∗ < C 0 < \alpha_j^* < C 0<αj<C

得到

w ∗ = Σ i = 1 n α i ∗ y i x i w^* = \Sigma_{i=1}^n \alpha_i^* y_i x_i w=Σi=1nαiyixi

b ∗ = y j − ( w ∗ ) T x j = y j − Σ i = 1 n α i y i x i T x j b^* = y_j - (w^*)^T x_j = y_j - \Sigma_{i=1}^n \alpha_i y_i x_i^T x_j b=yj(w)Txj=yjΣi=1nαiyixiTxj

IV. Kernel Functions

When the samples are not linearly separable, we can map the low-dimensional data into a higher-dimensional space, where a separating hyperplane can be found.

The dual problem requires the inner product between pairs of samples, and when the number of features, i.e. the dimension, is large, computing these inner products is expensive.

So we introduce a kernel function that directly yields the inner product of two samples after they are mapped into the high-dimensional space:

$$K(x_i, x_j) = \Phi(x_i)^T \Phi(x_j)$$
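For concreteness, two standard kernels, written so they could be passed as the `KF` argument of the `SVM` class below (the parameter values are illustrative):

```python
import numpy as np

def rbf_kernel(xi, xj, gamma=0.001):
    # Gaussian / RBF kernel: exp(-gamma * ||xi - xj||^2)
    return np.exp(-gamma * np.sum((xi - xj) ** 2))

def poly_kernel(xi, xj, degree=3, c=1.0):
    # polynomial kernel: (xi . xj + c)^degree
    return (xi @ xj + c) ** degree
```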

The objective of the dual problem can then be written as:

$$W(\alpha) = \frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n \alpha_i \alpha_j y_i y_j K(x_i, x_j) - \sum_{i=1}^n \alpha_i$$

The optimal hyperplane can be written as:

$$\sum_{i=1}^n \alpha_i^* y_i K(x_i, x) + b^* = 0$$

and the decision function as:

$$f(x) = \operatorname{sign}\left(\sum_{i=1}^n \alpha_i^* y_i K(x_i, x) + b^*\right)$$

V. Code Implementation

1. Implementation with cvxpy

```python
import copy

import numpy as np
import cvxpy as cp
from sklearn import datasets
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from random import choice


class SVM:
    def __init__(self, C=None, KF=None):
        # when self.C is not None, there is the soft-margin SVM
        self.C = C
        self.KF = KF

    def K(self, i, j):
        if self.KF and callable(self.KF):
            return self.KF(self.X_train[i], self.X_train[j])
        else:
            return self.X_train[i].T @ self.X_train[j]

    def object_func(self, alpha):
        # dual objective: (1/2) * sum_ij alpha_i alpha_j y_i y_j K(x_i, x_j) - sum_i alpha_i
        obj = 0
        for i in range(self.n):
            print("building objective row %d / %d" % (i + 1, self.n))
            for j in range(self.n):
                obj += alpha[i] * alpha[j] * self.y_train[i] * self.y_train[j] * self.K(i, j)
        return 0.5 * obj - cp.sum(alpha)

    def fit(self, X_train, y_train):
        self.X_train = copy.deepcopy(X_train)
        self.y_train = copy.deepcopy(y_train)
        self.n = self.X_train.shape[0]
        print("begin to construct the convex problem...")
        alpha = cp.Variable(self.n)
        objective = cp.Minimize(self.object_func(alpha))
        constraint = []
        if self.C:
            constraint = [alpha >= 0, alpha <= self.C, self.y_train @ alpha == 0]
        else:
            constraint = [alpha >= 0, self.y_train @ alpha == 0]

        print("convex problem have built...")
        prob = cp.Problem(objective, constraint)
        prob.solve(solver='CVXOPT')
        self.alpha_star = alpha.value

        print("dual problem have been solved!")
        # recover w and b from the KKT conditions
        self.w = np.zeros(self.X_train.shape[1])
        for i in range(self.n):
            self.w += X_train[i] * (self.alpha_star[i] * y_train[i])

        S_with_idx = None
        if self.C:
            S_with_idx = [(alpha_star_i, idx)
                          for idx, alpha_star_i in enumerate(self.alpha_star) if 0 < alpha_star_i < self.C]
        else:
            S_with_idx = [(alpha_star_i, idx)
                          for idx, alpha_star_i in enumerate(self.alpha_star) if alpha_star_i > 0]

        (_, s) = choice(S_with_idx)
        self.b = y_train[s]
        for i in range(self.n):
            # use self.K so b is also correct when a kernel function is supplied
            self.b -= self.alpha_star[i] * y_train[i] * self.K(i, s)

    def pred(self, x):
        if self.KF and callable(self.KF):
            y = 0.0
            for i in range(self.n):
                y += self.alpha_star[i] * self.y_train[i] * self.KF(self.X_train[i], x)
            y += self.b
            return y
        else:
            return self.w.T @ x + self.b

    def acc(self, X_test, y_test):
        y_pred = []
        for x in X_test:
            y_hat = np.sign(self.pred(x))
            y_pred.append(y_hat)
        y_pred = np.array(y_pred)
        acc = accuracy_score(y_pred, y_test)
        return acc
```

Load the dataset and create the SVM instances:

```python
X, y = datasets.load_digits(return_X_y=True)
# X, y = datasets.load_breast_cancer(return_X_y=True)
# X, y = datasets.load_wine(return_X_y=True)
# X, y = datasets.load_iris(return_X_y=True)


y = np.where(y == 1, y, -1)

print(X.shape, y.shape)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

classicSVM = SVM()
classicSVM.fit(X_train, y_train)
print("acc of classic SVM: ", classicSVM.acc(X_test, y_test))

C = 0.1
softSVM = SVM(C=C)
softSVM.fit(X_train, y_train)
print("acc of soft SVM: ", softSVM.acc(X_test, y_test))


# sweep over C to pick the best value
# for i in range(-10, 10):
#     C = pow(10, i)
#     softSVM = SVM(C=C)
#     softSVM.fit(X_train, y_train)
#     print("when C = %e, acc of soft SVM: %.4f"
#           % (C, softSVM.acc(X_test, y_test)))
```

I didn't implement SMO, so this runs painfully slowly... I gave up on tuning it further.

2. Implementation with sklearn

Ha, yes, I'm just calling the library here.

```python
import numpy as np
from sklearn import datasets
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = datasets.load_digits(return_X_y=True)
# X, y = datasets.load_breast_cancer(return_X_y=True)
# X, y = datasets.load_wine(return_X_y=True)
# X, y = datasets.load_iris(return_X_y=True)
y = np.where(y == 1, y, -1)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# create the SVM models
svm_classic = SVC(kernel='linear')  # "classic" SVM (note: sklearn's SVC is always soft-margin; C defaults to 1.0)
svm_soft = SVC(kernel='linear', C=0.1)  # soft-margin SVM with a smaller C
svm_kernel = SVC(kernel='rbf', gamma=0.001)  # SVM with an RBF kernel

# fit the models
svm_classic.fit(X_train, y_train)
svm_soft.fit(X_train, y_train)
svm_kernel.fit(X_train, y_train)

y_test_classic = svm_classic.predict(X_test)
y_test_soft = svm_soft.predict(X_test)
y_test_kernel = svm_kernel.predict(X_test)

accuracy_classic = accuracy_score(y_test_classic, y_test)
accuracy_soft = accuracy_score(y_test_soft, y_test)
accuracy_kernel = accuracy_score(y_test_kernel, y_test)

print("accuracy of classic svm: %.2f\n"
      "accuracy of soft svm: %.2f\n"
      "accuracy of kernel svm: %.2f\n"
      % (accuracy_classic, accuracy_soft, accuracy_kernel))
```

The results:

```
D:\.py\PythonProject\ml2024\svm\mySVM>python svm.py
accuracy of classic svm: 0.98
accuracy of soft svm: 0.99
accuracy of kernel svm: 1.00
```