Machine Learning: Soft-Margin SVM


Everything below is my own understanding; if you spot an error, please point it out.


The Problem with Hard-Margin SVM

A hard margin requires that no point at all lie between the margins. This requirement is very strict and makes the hard-margin SVM extremely sensitive to outliers: because of noise, a point that belongs to class A may fall among the points of class B (an outlier), and the hard-margin SVM then cannot find any separating hyperplane. This motivates the soft-margin SVM.


Soft-Margin SVM


What Is the Soft-Margin SVM?

The soft-margin SVM allows some points to lie inside the margin, which resolves the hard-margin problem (the outliers can simply be absorbed into the margin). Since a point inside the margin counts as a mistake from the SVM's point of view, we want as few such points as possible. This is really a trade-off: we accept a small number of mistakes in exchange for a good separating hyperplane.


The Mathematical Formulation of the Soft-Margin SVM

Recall the hard-margin SVM:
$$\begin{aligned} &\min_{w,b}\ \frac{\|w\|^2}{2} \\ &\text{s.t.}\ \ y_i(w^Tx_i+b)\geq 1,\quad i=1,2,\dots,m \end{aligned}$$

A point inside the margin satisfies $y_i(w^Tx_i+b)<1$. We introduce a slack variable $\varepsilon_i$ for each point $(x_i,y_i)$; a point inside the margin satisfies
$$\varepsilon_i>0,\qquad y_i(w^Tx_i+b)+\varepsilon_i\geq 1$$
while a point outside the margin, or a support vector, only needs $\varepsilon_i=0$. In summary, the constraints of the soft-margin SVM become
$$\varepsilon_i\geq0,\qquad y_i(w^Tx_i+b)+\varepsilon_i\geq 1,\quad i=1,2,\dots,m$$

Since we want as few points inside the margin as possible, we change the objective accordingly:
$$\begin{aligned} &\min_{w,b,\varepsilon}\ \frac{\|w\|^2}{2}+C\sum_{i=1}^m\varepsilon_i\\ &\text{s.t.}\ \ \varepsilon_i\geq0,\quad y_i(w^Tx_i+b)+\varepsilon_i\geq 1,\quad i=1,2,\dots,m \end{aligned}\tag{1.0}$$

Here $C>0$ is a penalty parameter that we fix in advance; the larger $C$ is, the fewer points end up inside the margin. An intuitive explanation: the objective $\frac{\|w\|^2}{2}+C\sum_{i=1}^m\varepsilon_i$ has two parts, $\frac{\|w\|^2}{2}$ and $C\sum_{i=1}^m\varepsilon_i$. The larger $C$ is, the more the second term dominates the value, so minimizing the objective pushes as many $\varepsilon_i$ as possible toward 0.
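As a small numeric illustration of this trade-off, here is a minimal NumPy sketch (the data, `w`, and `b` are made-up values, not fitted ones) that evaluates the primal objective of Eq. 1.0 with each slack set to its optimal value $\varepsilon_i=\max(0,\,1-y_i(w^Tx_i+b))$:

```python
import numpy as np

def primal_objective(w, b, X, y, C):
    """Soft-margin primal value ||w||^2/2 + C * sum(slacks).

    Given (w, b), the optimal slack of point i is
    eps_i = max(0, 1 - y_i(w.x_i + b)): points outside the margin
    contribute 0, margin violators contribute their shortfall.
    """
    margins = y * (X @ w + b)
    slacks = np.maximum(0.0, 1.0 - margins)
    return 0.5 * w @ w + C * slacks.sum()

# Hypothetical data: two clean points and one margin violator
X = np.array([[2.0, 0.0], [-2.0, 0.0], [0.5, 0.0]])
y = np.array([1.0, -1.0, 1.0])          # third point lies inside the margin
w, b = np.array([1.0, 0.0]), 0.0
# slacks are [0, 0, 0.5], so the objective is 0.5*1 + C*0.5
print(primal_objective(w, b, X, y, C=1.0))   # -> 1.0
print(primal_objective(w, b, X, y, C=10.0))  # -> 5.5 (larger C punishes the violator more)
```

Increasing `C` inflates the contribution of the single violator, which is exactly why a large `C` drives the optimizer toward solutions with fewer margin violations.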


Solving the Soft-Margin SVM

Since the constraints of Eq. 1.0 are affine in $(w,b,\varepsilon_i)$, we can again apply Lagrangian duality to Eq. 1.0, which gives
$$L(w,b,\alpha,\beta,\varepsilon)=\frac{\|w\|^2}{2}+C\sum_{i=1}^m\varepsilon_i+\sum_{i=1}^m\alpha_i\bigl(1-\varepsilon_i-y_i(w^Tx_i+b)\bigr)-\sum_{i=1}^m\beta_i\varepsilon_i\tag{1.1}$$

The Lagrangian dual problem is $\max\limits_{\alpha,\beta}\min\limits_{w,b,\varepsilon}L(w,b,\alpha,\beta,\varepsilon)$. With $\alpha,\beta$ fixed, Eq. 1.1 is a convex function of $(w,b,\varepsilon)$, so we set its derivatives with respect to $w,b,\varepsilon$ to zero:
$$\frac{\partial L(w,b,\alpha,\beta,\varepsilon)}{\partial w}=\begin{pmatrix} w_1-\sum_{i=1}^m \alpha_iy_ix_{i1} \\ w_2-\sum_{i=1}^m \alpha_iy_ix_{i2} \\ \vdots\\ w_n-\sum_{i=1}^m \alpha_iy_ix_{in} \end{pmatrix}=w-\sum_{i=1}^m\alpha_iy_ix_i=0$$
$$\frac{\partial L(w,b,\alpha,\beta,\varepsilon)}{\partial b}=-\sum_{i=1}^m \alpha_iy_i=0$$
$$\frac{\partial L(w,b,\alpha,\beta,\varepsilon)}{\partial \varepsilon_i}=C-\alpha_i-\beta_i=0\ \Longrightarrow\ \alpha_i+\beta_i=C,\quad i=1,2,\dots,m$$
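The stationarity condition $w=\sum_{i=1}^m\alpha_iy_ix_i$ is easy to sanity-check numerically; a minimal sketch with made-up multipliers:

```python
import numpy as np

def recover_w(alpha, X, y):
    """w = sum_i alpha_i * y_i * x_i, from the condition dL/dw = 0."""
    return (alpha * y) @ X

# Hypothetical multipliers for 3 points in R^2
X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
y = np.array([1.0, -1.0, 1.0])
alpha = np.array([0.5, 0.5, 0.0])
assert np.isclose((alpha * y).sum(), 0.0)  # dL/db = 0 requires sum(alpha*y) = 0
print(recover_w(alpha, X, y))              # -> [-1. -1.]
```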

Substituting these back into Eq. 1.1 yields
$$\begin{aligned} & \min_\alpha\ \frac{1}{2}\sum_{i=1}^m\sum_{j=1}^m\alpha_i\alpha_jy_iy_jx_i^Tx_j-\sum_{i=1}^m\alpha_i\\ & \text{s.t.}\ \ \alpha_i\geq0,\ \ \beta_i\geq0,\ \ \alpha_i+\beta_i=C,\quad i=1,2,\dots,m \end{aligned}$$
Since $\beta_i=C-\alpha_i$ and $\beta_i\geq0$, we have $C-\alpha_i\geq0$, i.e. $\alpha_i\leq C$, so the final optimization problem is

$$\begin{aligned} & \min_\alpha\ \frac{1}{2}\sum_{i=1}^m\sum_{j=1}^m\alpha_i\alpha_jy_iy_jx_i^Tx_j-\sum_{i=1}^m\alpha_i\\ & \text{s.t.}\ \ 0 \leq \alpha_i\leq C,\quad i=1,2,\dots,m \end{aligned}\tag{1.2}$$
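Eq. 1.2 can be evaluated compactly in matrix form as $\frac{1}{2}\alpha^TQ\alpha-\mathbf{1}^T\alpha$ with $Q_{ij}=y_iy_jx_i^Tx_j$; a sketch on a hypothetical two-point problem:

```python
import numpy as np

def dual_objective(alpha, X, y):
    """Value of Eq. 1.2's objective (the quantity to be minimized)."""
    Z = y[:, None] * X   # row i is y_i * x_i
    Q = Z @ Z.T          # Q_ij = y_i y_j x_i^T x_j
    return 0.5 * alpha @ Q @ alpha - alpha.sum()

# Hypothetical 2-point problem: symmetric points, opposite labels
X = np.array([[1.0, 0.0], [-1.0, 0.0]])
y = np.array([1.0, -1.0])
alpha = np.array([0.5, 0.5])   # feasible: 0 <= alpha_i <= C, sum(alpha*y) = 0
print(dual_objective(alpha, X, y))  # -> -0.5
```

For this toy problem the objective reduces to $2a^2-2a$ with $\alpha_1=\alpha_2=a$, which is indeed minimized at $a=0.5$ with value $-0.5$.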

Next we solve this with the SMO algorithm; the details are spelled out below.


Solving the Soft-Margin SVM with SMO

We pick $\alpha_1,\alpha_2$ as the two free variables and hold the remaining $\alpha_i\ (i=3,\dots,m)$ fixed. Expanding the objective of Eq. 1.2 gives
$$\begin{aligned} &\frac{1}{2}\Bigl(\sum_{i=1}^m\alpha_iy_ix_i\Bigr)^T\Bigl(\sum_{j=1}^m\alpha_jy_jx_j\Bigr)-\sum_{i=1}^m\alpha_i\\ =\ &\frac{1}{2}(\alpha_1y_1x_1+\alpha_2y_2x_2+\dots+\alpha_my_mx_m)^T(\alpha_1y_1x_1+\alpha_2y_2x_2+\dots+\alpha_my_mx_m)-\sum_{i=1}^m\alpha_i\\ =\ &\frac{1}{2}\alpha_1^2y_1^2x_1^Tx_1+\frac{1}{2}\alpha_2^2y_2^2x_2^Tx_2+\alpha_1\alpha_2y_1y_2x_1^Tx_2+\alpha_1y_1x_1^T\sum_{i=3}^m\alpha_iy_ix_i\\ &+\alpha_2y_2x_2^T\sum_{i=3}^m\alpha_iy_ix_i-\alpha_1-\alpha_2+H+K \end{aligned}\tag{1.3}$$
where
$$H=\frac{1}{2}\sum_{i=3}^m\sum_{j=3}^m\alpha_i\alpha_jy_iy_jx_i^Tx_j,\qquad K=-\sum_{i=3}^m\alpha_i$$

Writing $k_{ij}=x_i^Tx_j$, Eq. 1.3 becomes
$$\frac{1}{2}k_{11}\alpha_1^2+\frac{1}{2}k_{22}\alpha_2^2+\alpha_1\alpha_2y_1y_2k_{12}+\alpha_1y_1\sum_{i=3}^m\alpha_iy_ik_{i1}+\alpha_2y_2\sum_{i=3}^m\alpha_iy_ik_{i2}+H+K-\alpha_1-\alpha_2\tag{1.4}$$
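With the linear kernel used throughout this post, the numbers $k_{ij}$ are simply the Gram matrix of the data; a quick sketch on made-up points:

```python
import numpy as np

def gram(X):
    """k_ij = x_i^T x_j for the linear kernel."""
    return X @ X.T

X = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
K = gram(X)
assert np.allclose(K, K.T)   # a Gram matrix is symmetric: k_ij = k_ji
print(K[0, 2], K[1, 2])      # -> 1.0 2.0
```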

Since $\alpha_1y_1+\alpha_2y_2+\sum_{i=3}^m\alpha_iy_i=0$, we have $\alpha_1y_1+\alpha_2y_2=-\sum_{i=3}^m\alpha_iy_i$. Let $\xi=-\sum_{i=3}^m\alpha_iy_i$; then $\alpha_1=\xi y_1-\alpha_2y_1y_2$. Substituting this into Eq. 1.4 gives
$$\frac{1}{2}k_{11}(\xi-\alpha_2y_2)^2+\frac{1}{2}k_{22}\alpha_2^2+\alpha_2y_2k_{12}(\xi-\alpha_2y_2)+\alpha_2y_2v_2+(\xi-\alpha_2y_2)v_1-(\xi y_1-\alpha_2y_1y_2+\alpha_2)\tag{1.5}$$
where
$$v_1=\sum_{i=3}^m\alpha_iy_ik_{i1},\qquad v_2=\sum_{i=3}^m\alpha_iy_ik_{i2}$$
Eq. 1.5 is a quadratic in $\alpha_2$ with leading coefficient $\frac{1}{2}k_{11}+\frac{1}{2}k_{22}-k_{12}$; from here on it is essentially a high-school exercise. Restating the problem:
$$\begin{aligned} \min_{\alpha_2}\ & \frac{1}{2}k_{11}(\xi-\alpha_2y_2)^2+\frac{1}{2}k_{22}\alpha_2^2+\alpha_2y_2k_{12}(\xi-\alpha_2y_2)+\alpha_2y_2v_2+(\xi-\alpha_2y_2)v_1-(\xi y_1-\alpha_2y_1y_2+\alpha_2)\\ \text{s.t.}\ & 0 \leq \alpha_1\leq C \Longrightarrow 0 \leq \xi y_1-\alpha_2y_1y_2 \leq C\\ &0 \leq \alpha_2\leq C \end{aligned}$$
That is, we must find the extremum of a quadratic over an interval. We treat the leading coefficient case by case (most references draw pictures at this point; here we just solve it the way one would solve a high-school problem).

1. $\frac{1}{2}k_{11}+\frac{1}{2}k_{22}-k_{12}>0$: Eq. 1.5 is an upward-opening parabola

Setting the derivative of Eq. 1.5 to zero gives the new value of $\alpha_2$:
$$\alpha_2^{new}=\frac{y_2(k_{11}\xi-k_{12}\xi+v_1-v_2+y_2-y_1)}{k_{11}+k_{22}-2k_{12}}$$
Since $v_1=\sum_{i=3}^m\alpha_iy_ik_{i1}$, $v_2=\sum_{i=3}^m\alpha_iy_ik_{i2}$ and $\xi=\alpha_1y_1+\alpha_2y_2$, define $g(x_i)=\sum_{j=1}^m\alpha_jy_jk_{ji}+b$; then
$$\begin{aligned} &y_2(k_{11}\xi-k_{12}\xi+v_1-v_2+y_2-y_1)\\ =\ &y_2\Bigl(y_2-y_1+g(x_1)-g(x_2)-\sum_{j=1}^2\alpha_jy_jk_{j1}+\sum_{j=1}^2\alpha_jy_jk_{j2}+(k_{11}-k_{12})(\alpha_1y_1+\alpha_2y_2)\Bigr)\\ =\ &y_2\bigl(y_2-y_1+g(x_1)-g(x_2)+\alpha_2y_2k_{11}+\alpha_2y_2k_{22}-2\alpha_2y_2k_{12}\bigr) \end{aligned}$$
so that
$$\begin{aligned} \alpha_2^{new}&=\frac{y_2\bigl(y_2-y_1+g(x_1)-g(x_2)+\alpha_2y_2k_{11}+\alpha_2y_2k_{22}-2\alpha_2y_2k_{12}\bigr)}{k_{11}+k_{22}-2k_{12}}\\ &=\frac{y_2(E_1-E_2)}{k_{11}+k_{22}-2k_{12}}+\alpha_2 \end{aligned}$$
where $E_i=g(x_i)-y_i$.
Take $y_1=1,y_2=1$ as an example. First determine the feasible interval: from $0\leq\alpha_1=\xi-\alpha_2\leq C$ and $0\leq\alpha_2\leq C$ we get
$$\xi-C\leq\alpha_2\leq\xi,\qquad 0\leq\alpha_2\leq C$$
so the interval is $[\max(0,\xi-C),\min(C,\xi)]$. Therefore
$$\alpha_2^{new}=\begin{cases} \max(0,\xi-C) & \text{if }\dfrac{y_2(E_1-E_2)}{k_{11}+k_{22}-2k_{12}}+\alpha_2<\max(0,\xi-C)\\[2mm] \dfrac{y_2(E_1-E_2)}{k_{11}+k_{22}-2k_{12}}+\alpha_2 & \text{if }\max(0,\xi-C)\leq\dfrac{y_2(E_1-E_2)}{k_{11}+k_{22}-2k_{12}}+\alpha_2\leq\min(C,\xi)\\[2mm] \min(C,\xi) & \text{if }\dfrac{y_2(E_1-E_2)}{k_{11}+k_{22}-2k_{12}}+\alpha_2>\min(C,\xi) \end{cases}$$
y 1 , y 2 y_1,y_2 y1,y2取其他值时,也具有类似的结构,在此不赘述

2. $\frac{1}{2}k_{11}+\frac{1}{2}k_{22}-k_{12}=0$: Eq. 1.5 is linear in $\alpha_2$; take the endpoint of the feasible interval indicated by the slope

3. $\frac{1}{2}k_{11}+\frac{1}{2}k_{22}-k_{12}<0$: Eq. 1.5 is a downward-opening parabola; the approach mirrors the case $\frac{1}{2}k_{11}+\frac{1}{2}k_{22}-k_{12}>0$, so we omit it.


How to Choose the Two α's (steps only, without justification)

The following is excerpted from 关于SVM数学细节逻辑的个人理解(三):SMO算法理解.
The first parameter is found as follows:

① Sweep the whole dataset once; any point that violates the KKT conditions is chosen as the first parameter to update.
② After that full sweep, iterate over the subset of points whose multipliers satisfy $0<\alpha_i<C$ (the non-bound points); whenever one violates the KKT conditions, take it as the first parameter, find the second parameter and update both, then restart the sweep of this subset.
③ After finishing the subset, go back to ① and ②, and stop once a pass through ① and ② makes no update.

The second parameter is found as follows:

① Heuristically: choose the $\alpha_2$ that maximizes $|E_1-E_2|$.
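The heuristic in ① can be sketched directly (`E` is a hypothetical cache of the errors $E_i$ kept by the outer SMO loop):

```python
import numpy as np

def pick_second_alpha(i, E):
    """Pick j != i maximizing |E_i - E_j|, i.e. the largest expected step."""
    E = np.asarray(E, dtype=float)
    gaps = np.abs(E[i] - E)
    gaps[i] = -np.inf          # exclude i itself
    return int(np.argmax(gaps))

print(pick_second_alpha(0, [0.9, -0.4, 0.1, 1.0]))  # -> 1, since |0.9-(-0.4)| is largest
```

The rationale is visible in the update formula above: the step taken on $\alpha_2$ is proportional to $E_1-E_2$, so a large $|E_1-E_2|$ promises the biggest progress per iteration.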
