Python · SVM (II) · LinearSVM

Many people (myself included) felt, the first time they heard of SVM, that it must be something extraordinarily powerful. In fact, SVM itself is "only" a linear model; only after the kernel trick is applied does SVM get "upgraded" into a nonlinear model.

That said, since "SVM" usually implies the kernelized version, we will follow convention and call the original version LinearSVM. Even though it is "only" a linear model, the "only" deserves its quotation marks: it is still powerful, and on a great many problems (text classification, for example) it can even outperform the kernelized SVM.

A Review of the Perceptron

Before getting to the main topic, let us first review the perceptron, because at its simplest, LinearSVM just swaps out the perceptron's loss function, and the new loss still looks quite similar to the old one.

The perceptron model has only two parameters, $w$ and $b$, which determine a hyperplane $\Pi: w \cdot x + b = 0$. The perceptron's ultimate goal is to achieve

$y(w \cdot x + b) > 0,\ \forall (x, y) \in D$

where $D$ is the training set and $y$ can only take the values $\pm 1$.

Training is done by gradient descent, with gradients

$\frac{\partial L}{\partial w}(x_i, y_i) = -y_i x_i$

$\frac{\partial L}{\partial b}(x_i, y_i) = -y_i$

In the actual implementation we used what this article calls "maximum-gradient descent", i.e., at each step we pick only the sample that maximizes the loss and take a gradient step on it (note: this is not a widely accepted term, just a nickname used in this article):

for _ in range(epoch):
    # Compute w·x + b
    y_pred = x.dot(self._w) + self._b
    # Pick the sample that maximizes the loss
    idx = np.argmax(np.maximum(0, -y_pred * y))
    # If that sample is correctly classified, training is done
    if y[idx] * y_pred[idx] > 0:
        break
    # Otherwise, take one step along the negative gradient
    delta = lr * y[idx]
    self._w += delta * x[idx]
    self._b += delta

It can be proven that, as long as the dataset is linearly separable, this procedure is guaranteed to converge.

The Perceptron's Problem and LinearSVM's Solution

From the form of the perceptron's loss function, we can see that the perceptron only requires samples to be correctly classified, not to be "well" correctly classified. As a result, the hyperplane it produces (usually called the "decision surface") often looks rather uncomfortable:

It looks uncomfortable because the decision surface is too close to both clusters of samples. Intuitively, what we would like is a decision surface more like this:

(The art style suddenly changes because matplotlib's default style changed and I was too lazy to fix it...) (hey)

So how do we translate this ideal decision surface into something a machine can learn? Intuitively, we want the decision surface to have as large a margin as possible from both the positive and the negative samples; translated into math, this "margin" is simply:

$d((x, y), \Pi) = \frac{1}{\|w\|} y(w \cdot x + b)$ (an explanation is given at the end of this article)

With the margin of a sample to the decision surface defined, the margin of the dataset to the decision surface is easy to define:

$d(D, \Pi) = \min_{(x, y) \in D} d((x, y), \Pi)$
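As a quick illustration, here is a minimal NumPy sketch that computes both margins; the parameters $w$, $b$ and the samples below are made up, purely for illustration:

import numpy as np

# hypothetical parameters and toy samples, purely for illustration
w, b = np.array([1., 1.]), -3.
x = np.array([[0., 0.], [1., 1.], [3., 3.]])
y = np.array([-1., -1., 1.])

# d((x, y), Π) = y(w·x + b) / ||w|| for each sample
sample_margins = y * (x.dot(w) + b) / np.linalg.norm(w)
# d(D, Π) = the smallest sample margin
dataset_margin = sample_margins.min()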

So our goal now becomes twofold. First, have every sample correctly classified:

$y(w \cdot x + b) > 0,\ \forall (x, y) \in D$

Second, make the margin of the decision surface to both positive and negative samples as large as possible:

$\max \min_{(x, y) \in D} \frac{1}{\|w\|} y(w \cdot x + b)$

Note that the property $y(w \cdot x + b) > 0$ and the value of $\frac{1}{\|w\|} y(w \cdot x + b)$ do not change when $w$ and $b$ are scaled by the same factor $k$, so we are free to assume that if

$(x^*, y^*) = \arg\min_{(x, y) \in D} \frac{1}{\|w\|} y(w \cdot x + b)$

then

$y^*(w \cdot x^* + b) = 1$

(otherwise, suppose $y^*(w \cdot x^* + b) = c$ and simply set $w \leftarrow \frac{w}{c},\ b \leftarrow \frac{b}{c}$).

Note that since $w$ is fixed during the minimization $(x^*, y^*) = \arg\min_{(x, y) \in D} \frac{1}{\|w\|} y(w \cdot x + b)$, we can drop the factor $\frac{1}{\|w\|}$, so that

$(x^*, y^*) = \arg\min_{(x, y) \in D} y(w \cdot x + b)$

Therefore

$y^*(w \cdot x^* + b) = 1 \Rightarrow y(w \cdot x + b) \ge 1,\ \forall (x, y) \in D$

The optimization problem thus becomes $\max_{w, b} \frac{1}{\|w\|}$, subject to

$y_i(w \cdot x_i + b) \ge 1,\ \forall (x_i, y_i) \in D$

which is equivalent to $\min_{w, b} \frac{\|w\|^2}{2}$, subject to

$y_i(w \cdot x_i + b) \ge 1,\ \forall (x_i, y_i) \in D$
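To make this formulation concrete, here is a hedged sketch that hands the hard-margin problem to a generic constrained solver; the toy data and the choice of scipy.optimize.minimize are my own and not part of the original article:

import numpy as np
from scipy.optimize import minimize

# tiny linearly separable toy set, purely for illustration
x = np.array([[0., 0.], [1., 0.], [3., 3.], [4., 3.]])
y = np.array([-1., -1., 1., 1.])

def objective(p):  # p = [w_1, w_2, b]
    return 0.5 * p[:2].dot(p[:2])

# one inequality constraint y_i(w·x_i + b) - 1 >= 0 per sample
cons = [{'type': 'ineq', 'fun': lambda p, i=i: y[i] * (p[:2].dot(x[i]) + p[2]) - 1}
        for i in range(len(y))]
res = minimize(objective, np.zeros(3), constraints=cons)
w, b = res.x[:2], res.x[2]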

But this leads to another problem: when the dataset is not linearly separable, the above optimization problem has no solution at all, which makes the model oscillate (in other words, the constraint $y_i(w \cdot x_i + b) \ge 1$ is too "hard"). So, to let the model still perform reasonably on linearly non-separable data, intuitively we should "relax" the constraint (make it a bit "softer"): $\min_{w, b} \frac{\|w\|^2}{2}$, subject to

$y_i(w \cdot x_i + b) \ge 1 - \xi_i,\ \forall (x_i, y_i) \in D$

where $\xi_i \ge 0$. Of course, merely relaxing the constraint would make the model lazy (huh?), so we also need to penalize the relaxation: $\min_{w, b} \left[\frac{\|w\|^2}{2} + C\sum_{i=1}^N \xi_i\right]$, subject to

$y_i(w \cdot x_i + b) \ge 1 - \xi_i,\ \forall (x_i, y_i) \in D$

$\xi_i \ge 0$

Here $C$ is a constant that can be understood as the "penalty strength" (the rationale for this is given at the end of the article). If the dataset is $D = \{(x_1, y_1), \dots, (x_N, y_N)\}$, then after some algebra the above optimization problem can be shown to be equivalent to (the derivation is given at the end of the article):

$\min_{w, b} \left[\frac{\|w\|^2}{2} + C\sum_{i=1}^N [1 - y_i(w \cdot x_i + b)]_+\right]$

where "$[\,\cdot\,]_+$" is just the ReLU function:

$[x]_+ = \begin{cases} 0, & x \le 0 \\ x, & x > 0 \end{cases}$

Recall that the perceptron's loss function is

$\sum_{i=1}^N [-y(w \cdot x + b)]_+$

so, putting everything together, LinearSVM differs from the perceptron in form only in the loss function, and the two loss functions do look very much alike.
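To make the comparison concrete, here is a small sketch that evaluates both losses on the same data; the parameters $w$, $b$, $C$ and the samples are made up, purely for illustration:

import numpy as np

# hypothetical parameters and toy samples, purely for illustration
w, b, C = np.array([1., -1.]), 0.5, 1.0
x = np.array([[2., 0.], [0., 2.], [1., 1.]])
y = np.array([1., -1., 1.])

margins = y * (x.dot(w) + b)                        # y(w·x + b) for each sample
perceptron_loss = np.maximum(0., -margins).sum()    # Σ [-y(w·x + b)]_+
linear_svm_loss = 0.5 * w.dot(w) + C * np.maximum(0., 1. - margins).sum()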

Training a LinearSVM

[Although it is fairly simple, tuning the training of a LinearSVM is quite instructive. As the old saying goes: the sparrow may be small, but it has all the vital organs. We will first show that "maximum-gradient descent" works, then expose its problems, and finally show how to train with mini-batch gradient descent (MBGD).]

To use gradient descent we first need the derivatives. We know that

$L(D) = \frac{\|w\|^2}{2} + C\sum_{i=1}^N [1 - y_i(w \cdot x_i + b)]_+$

so we may take the per-sample loss to be

$L(x, y) = \frac{\|w\|^2}{2} + C[1 - y(w \cdot x + b)]_+$

Thus, when $y(w \cdot x + b) \ge 1$:

$\frac{\partial L(x, y)}{\partial w} = w$

$\frac{\partial L(x, y)}{\partial b} = 0$

and when $y(w \cdot x + b) < 1$:

$\frac{\partial L(x, y)}{\partial w} = w - Cyx$

$\frac{\partial L(x, y)}{\partial b} = -Cy$

So the maximum-gradient-descent update can be written as (with learning rate $\eta$):

$w \leftarrow (1 - \eta) w$

and, if some sample violates the margin, i.e. $y(w \cdot x + b) < 1$, pick such a sample $(x, y)$ and update:

$w \leftarrow w + \eta C y x$

$b \leftarrow b + \eta C y$

We can reuse the perceptron code almost verbatim to implement this (since the idea is basically the same, most comments are omitted here):

import numpy as np

class LinearSVM:
    def __init__(self):
        self._w = self._b = None

    def fit(self, x, y, c=1, lr=0.01, epoch=10000):
        x, y = np.asarray(x, np.float32), np.asarray(y, np.float32)
        self._w = np.zeros(x.shape[1])
        self._b = 0.
        for _ in range(epoch):
            self._w *= 1 - lr
            err = 1 - y * self.predict(x, True)
            idx = np.argmax(err)
            # Note: even if every sample satisfies y(w·x + b) >= 1,
            # the loss still contains the squared norm of w,
            # so we cannot stop training; we can only skip this gradient step
            if err[idx] <= 0:
                continue
            delta = lr * c * y[idx]
            self._w += delta * x[idx]
            self._b += delta

    def predict(self, x, raw=False):
        x = np.asarray(x, np.float32)
        y_pred = x.dot(self._w) + self._b
        if raw:
            return y_pred
        return np.sign(y_pred).astype(np.float32)
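For completeness, here is a quick usage sketch on a made-up two-dimensional toy dataset; the data-generating code is mine, not from the original article:

import numpy as np

np.random.seed(0)
x_pos = np.random.randn(100, 2) + [2, 2]
x_neg = np.random.randn(100, 2) - [2, 2]
x_toy = np.vstack([x_pos, x_neg]).astype(np.float32)
y_toy = np.array([1.] * 100 + [-1.] * 100, np.float32)

svm = LinearSVM()
svm.fit(x_toy, y_toy, c=1, lr=0.01, epoch=10000)
print("training accuracy:", (svm.predict(x_toy) == y_toy).mean())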

The following animation shows this LinearSVM's training process:

Although it looks decent, a problem remains: the training process is actually quite unstable.

Intuitively, since LinearSVM's loss function is more complicated than the perceptron's, its loss surface is also more complicated. This means that once the dataset gets slightly worse, naively applying maximum-gradient descent may cause trouble; for instance, the model may get stuck in some strange place and never escape (what the heck).

We can reproduce the problem by moving the "centers" of the positive and negative samples from the origin (0, 0) (the default) to (5, 5) (thereby breaking some of the symmetry) and pulling the two clusters a little closer together:

I will not pretend to know the underlying theory; here is only a far-fetched intuitive explanation: each step takes a gradient step on only the single sample that maximizes the loss $\rightarrow$ around some region the model may be pushed back and forth by just those few samples $\rightarrow$ an endless loop (what the heck!).

I will leave the rigorous theory to more knowledgeable readers ( σ'ω')σ

As for the fix, the main idea is to borrow from improvements to stochastic gradient descent (SGD), since maximum-gradient descent is really just a special form of SGD. We know that SGD's "upgraded" version is MBGD, i.e. replacing a single random sample with a random mini-batch, and we can do exactly the same here. The corresponding code follows (only the core part is shown):

self._w *= 1 - lr
# Randomly draw batch_size samples
batch = np.random.choice(len(x), batch_size)
x_batch, y_batch = x[batch], y[batch]
err = 1 - y_batch * self.predict(x_batch, True)
if np.max(err) <= 0:
    continue
# Note that only the samples violating the margin can be used for the gradient step,
# because this part of the gradient is 0 at correctly classified samples
mask = err > 0
delta = lr * c * y_batch[mask]
# Average the gradients and take one descent step
self._w += np.mean(delta[..., None] * x_batch[mask], axis=0)
self._b += np.mean(delta)

This generally works better than plain SGD.
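For reference, here is one way the core snippet above might be wired into a complete fit method; the subclassing, the batch_size parameter, and its default value are my additions rather than the article's exact code:

import numpy as np

class LinearSVMMBGD(LinearSVM):
    def fit(self, x, y, c=1, lr=0.01, batch_size=128, epoch=10000):
        x, y = np.asarray(x, np.float32), np.asarray(y, np.float32)
        batch_size = min(batch_size, len(y))
        self._w = np.zeros(x.shape[1])
        self._b = 0.
        for _ in range(epoch):
            self._w *= 1 - lr
            # Randomly draw batch_size samples
            batch = np.random.choice(len(x), batch_size)
            x_batch, y_batch = x[batch], y[batch]
            err = 1 - y_batch * self.predict(x_batch, True)
            if np.max(err) <= 0:
                continue
            # Only samples violating the margin contribute a non-zero hinge gradient
            mask = err > 0
            delta = lr * c * y_batch[mask]
            self._w += np.mean(delta[..., None] * x_batch[mask], axis=0)
            self._b += np.mean(delta)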

But a problem still remains: both versions use only the naive vanilla update rule, which makes the model extremely sensitive to the parameters when the data's scale is large, leading to persistent oscillation (a large "scale" can simply be understood as large magnitudes; for two-dimensional data it just means the coordinates take large values). The following animation may provide some intuition:

Again, I will not pretend to know the theory, so here is another possibly right (more likely wrong) intuition: the scale is large $\rightarrow$ the gradients are large $\rightarrow$ the parameters bounce around happily (what the heck!).

I will leave the rigorous theory to more knowledgeable readers ( σ'ω')σ

A very direct fix is to normalize the data:

$X \leftarrow \frac{X - \bar X}{\sqrt{\mathrm{Var}(X)}}$

It turns out that after doing this, even the most basic maximum-gradient descent can handle all the problems shown above.
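In NumPy this is just the following two lines (a minimal sketch, assuming x is an (N, d) array):

# normalize each feature to zero mean and unit variance
x = (x - x.mean(axis=0)) / x.std(axis=0)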

A slightly "lazier" approach is to replace the naive vanilla update with a better gradient-descent algorithm. For example, the training process with Adam looks like this (the animation got somewhat mangled by Zhihu... please bear with it ( σ'ω')σ):
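Here is a hedged sketch of what swapping the vanilla update for Adam could look like on the same mini-batch hinge-loss gradients; the function itself is my illustration rather than the article's implementation, and the hyper-parameter values are Adam's common defaults:

import numpy as np

def fit_adam(x, y, c=1., lr=0.01, batch_size=128, epoch=10000,
             beta1=0.9, beta2=0.999, eps=1e-8):
    x, y = np.asarray(x, np.float32), np.asarray(y, np.float32)
    w, b = np.zeros(x.shape[1]), 0.
    m = np.zeros(x.shape[1] + 1)   # first-moment estimate for [w, b]
    v = np.zeros(x.shape[1] + 1)   # second-moment estimate for [w, b]
    for t in range(1, epoch + 1):
        batch = np.random.choice(len(x), min(batch_size, len(x)))
        x_b, y_b = x[batch], y[batch]
        err = 1 - y_b * (x_b.dot(w) + b)
        mask = err > 0
        # gradient of ||w||²/2 + C·mean([1 - y(w·x + b)]_+) over the batch
        grad_w = w - c * (y_b[mask][:, None] * x_b[mask]).sum(axis=0) / len(batch)
        grad_b = -c * y_b[mask].sum() / len(batch)
        g = np.append(grad_w, grad_b)
        # Adam moment updates with bias correction
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        m_hat, v_hat = m / (1 - beta1 ** t), v / (1 - beta2 ** t)
        step = lr * m_hat / (np.sqrt(v_hat) + eps)
        w, b = w - step[:-1], b - step[-1]
    return w, b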

For the definitions and properties of the various gradient-descent algorithms, see this article; for implementations and their application to LinearSVM, see here and here.

Related Mathematical Theory

There are three questions we have not yet resolved, but they are all fairly intuitive, so it is mostly fine not to dig too deeply into them:

* Why the margin of a correctly classified sample $(x, y)$ to the decision surface $\Pi: w \cdot x + b = 0$ can be written as $d((x, y), \Pi) = \frac{1}{\|w\|} y(w \cdot x + b)$

* Why it is reasonable to turn the optimization problem

  $\min_{w, b} \frac{\|w\|^2}{2}$, subject to $y_i(w \cdot x_i + b) \ge 1,\ \forall (x_i, y_i) \in D$

  into

  $\min_{w, b} \left[\frac{\|w\|^2}{2} + C\sum_{i=1}^N \xi_i\right]$, subject to $y_i(w \cdot x_i + b) \ge 1 - \xi_i,\ \forall (x_i, y_i) \in D$ and $\xi_i \ge 0$

* Why the latter optimization problem is equivalent to

  $\min_{w, b} \left[\frac{\|w\|^2}{2} + C\sum_{i=1}^N [1 - y_i(w \cdot x_i + b)]_+\right]$

These three questions build on one another, so let us take them one at a time.

1) Definition of the Margin

When defining the margin of a point $(x, y)$ to a (hyper)plane $\Pi$, we usually proceed as follows: project $(x, y)$ (perpendicularly) onto $\Pi$; letting the projected point be $(x^*, y^*)$, define

$d((x, y), \Pi) = \begin{cases} \|x - x^*\|^2, & y(w \cdot x + b) \ge 0 \\ -\|x - x^*\|^2, & y(w \cdot x + b) < 0 \end{cases}$

Note that here we allow the margin to be negative (when the sample is misclassified), so strictly speaking the margin is not a distance in the usual sense.

To find the perpendicular projection, we first need a direction perpendicular to the hyperplane $\Pi$. It is not hard to see that $w$ is perpendicular to $\Pi$: for any $x_1, x_2 \in \Pi$, from

$\begin{cases} w \cdot x_1 + b = 0 \\ w \cdot x_2 + b = 0 \end{cases}$

we get $w \cdot (x_1 - x_2) = 0$ (just subtract the two equations), so $w$ is perpendicular to the vector $x_1 - x_2$ and hence to $\Pi$.

Combining this with the earlier figure, we may write $x - x^* = \lambda w$ (where $\lambda$ can be positive or negative), so that (noting that $x^* \in \Pi$ implies $w \cdot x^* + b = 0$):

$\begin{aligned} \|x - x^*\|^2 &= (x - x^*) \cdot (x - x^*) = \lambda w \cdot (x - x^*) \\ &= \lambda\left[w \cdot (x - x^*) + (b - b)\right] \\ &= \lambda\left[w \cdot x + b - (w \cdot x^* + b)\right] \\ &= \lambda(w \cdot x + b) \end{aligned}$

Therefore

$d((x, y), \Pi) = \begin{cases} \lambda(w \cdot x + b), & y(w \cdot x + b) \ge 0 \\ -\lambda(w \cdot x + b), & y(w \cdot x + b) < 0 \end{cases}$

Note that this definition of the margin has a big problem: when $w$ and $b$ are both scaled by a factor of $k$, the new hyperplane $\tilde\Pi: (kw) \cdot x + (kb) = 0$ is actually the same as the original hyperplane $\Pi$:

$x \in \tilde\Pi \Leftrightarrow (kw) \cdot x + (kb) = 0 \Leftrightarrow w \cdot x + b = 0 \Leftrightarrow x \in \Pi$

yet $d((x, y), \Pi)$ gets multiplied by $k$. In the extreme case, scaling $w$ and $b$ by an infinite factor leaves the hyperplane unchanged while the margin blows up to infinity, which is clearly unreasonable.

So we need to remove the effect of scale, and the usual way is a kind of normalization:

$d((x, y), \Pi) = \begin{cases} \frac{1}{\|w\|}|w \cdot x + b|, & y(w \cdot x + b) \ge 0 \\ -\frac{1}{\|w\|}|w \cdot x + b|, & y(w \cdot x + b) < 0 \end{cases}$

(Note: since the effect of scale has been removed, $\lambda$ disappears as well; and since $0 \le \|x - x^*\|^2 = \lambda(w \cdot x + b)$, when dropping $\lambda$ we need to wrap $w \cdot x + b$ in an absolute value.)

It is not hard to see that this can be rewritten as

$d((x, y), \Pi) = \frac{1}{\|w\|} y(w \cdot x + b)$

which is exactly the result we wanted.
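As a quick numeric sanity check (with a made-up $w$, $b$ and point, purely for illustration), the formula agrees with the signed Euclidean distance from the point to its perpendicular projection onto $\Pi$:

import numpy as np

# made-up hyperplane and point, purely for illustration
w, b = np.array([3., 4.]), -5.
x, y = np.array([4., 3.]), 1.

d_formula = y * (w.dot(x) + b) / np.linalg.norm(w)

# perpendicular projection: x* = x - λw with λ = (w·x + b) / ||w||²
lam = (w.dot(x) + b) / w.dot(w)
x_star = x - lam * w
d_projection = np.sign(y * (w.dot(x) + b)) * np.linalg.norm(x - x_star)

assert np.isclose(d_formula, d_projection)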

2) Why the Transformation of the Optimization Problem Is Reasonable

We know the original problem is $\min_{w, b} \frac{\|w\|^2}{2}$, subject to

$y_i(w \cdot x_i + b) \ge 1,\ \forall (x_i, y_i) \in D$

and from 1) we know that the $y_i(w \cdot x_i + b)$ in the constraint is exactly the margin (with the scale effect not removed). So, to relax the constraint on the model, a natural idea is to require this margin to be no less than $1 - \xi_i$ instead of no less than 1, where $\xi_i$ is a number no less than 0. As mentioned earlier, relaxing the constraint alone will not do; the relaxation must be penalized, which we do by adding a term $C\xi_i$ to the loss, where $C$ is a positive constant understood as the penalty strength.

Putting this together, the optimization problem can reasonably be transformed into $\min_{w, b} \left[\frac{\|w\|^2}{2} + C\sum_{i=1}^N \xi_i\right]$, subject to

$y_i(w \cdot x_i + b) \ge 1 - \xi_i,\ \forall (x_i, y_i) \in D$

$\xi_i \ge 0$

3) Equivalence of the Two Optimization Problems

For convenience, call the optimization problem $\min_{w, b} \left[\frac{\|w\|^2}{2} + C\sum_{i=1}^N \xi_i\right]$, subject to

$y_i(w \cdot x_i + b) \ge 1 - \xi_i,\ \forall (x_i, y_i) \in D$

$\xi_i \ge 0$

Problem One, and call

$\min_{w, b} \left[\frac{\|w\|^2}{2} + C\sum_{i=1}^N [1 - y_i(w \cdot x_i + b)]_+\right]$

Problem Two. We need to show that the two problems are equivalent.

First, let us see how Problem One turns into Problem Two. It is not hard to see that

$y_i(w \cdot x_i + b) \ge 1 - \xi_i,\ \forall (x_i, y_i) \in D \Rightarrow \xi_i \ge 1 - y_i(w \cdot x_i + b)$

Note that Problem One optimizes over $w$ and $b$, and with $w$ and $b$ fixed, minimizing $\frac{\|w\|^2}{2} + C\sum_{i=1}^N \xi_i$ forces:

when $1 - y_i(w \cdot x_i + b) \ge 0$, $\xi_i = 1 - y_i(w \cdot x_i + b)$;

when $1 - y_i(w \cdot x_i + b) < 0$, $\xi_i = 0$ (because we require $\xi_i \ge 0$).

In other words, $\xi_i = [1 - y_i(w \cdot x_i + b)]_+$, and the loss then becomes

$\frac{\|w\|^2}{2} + C\sum_{i=1}^N [1 - y_i(w \cdot x_i + b)]_+$

which means Problem One has been turned into Problem Two.

Next, let us see how Problem Two turns into Problem One. Simply set $\xi_i = [1 - y_i(w \cdot x_i + b)]_+$; then the model's loss is

$\frac{\|w\|^2}{2} + C\sum_{i=1}^N \xi_i$

and the constraints are

$\xi_i \ge 1 - y_i(w \cdot x_i + b)$

$\xi_i \ge 0$

which is exactly Problem One.

4) The Dual Problem of LinearSVM

The primal problem $\min_{w, b} \left[\frac{\|w\|^2}{2} + C\sum_{i=1}^N \xi_i\right]$, subject to

$y_i(w \cdot x_i + b) \ge 1 - \xi_i,\ \forall (x_i, y_i) \in D$

$\xi_i \ge 0$

has the dual problem $\min_{\alpha} \left[\frac12 \sum_{i=1}^N \sum_{j=1}^N \alpha_i \alpha_j y_i y_j (x_i \cdot x_j) - \sum_{i=1}^N \alpha_i\right]$, subject to

$\sum_{i=1}^N \alpha_i y_i = 0$

$0 \le \alpha_i \le C$

This can be proven fairly easily with Lagrange multipliers. The Lagrangian of the primal problem is

$L = \frac{\|w\|^2}{2} + C\sum_{i=1}^N \xi_i - \sum_{i=1}^N \alpha_i[y_i(w \cdot x_i + b) - 1 + \xi_i] - \sum_{i=1}^N \beta_i \xi_i$

where $\alpha_i \ge 0$ and $\beta_i \ge 0$. The primal problem is then

$\min_{w, b, \xi} \max_{\alpha, \beta} L$

and so the dual problem is

$\max_{\alpha, \beta} \min_{w, b, \xi} L$

We therefore take the partial derivatives and set them to 0. With respect to $w$:

$\frac{\partial L}{\partial w} = w - \sum_{i=1}^N \alpha_i y_i x_i = 0 \Rightarrow w = \sum_{i=1}^N \alpha_i y_i x_i$

With respect to $b$:

$\frac{\partial L}{\partial b} = -\sum_{i=1}^N \alpha_i y_i = 0 \Rightarrow \sum_{i=1}^N \alpha_i y_i = 0$

With respect to $\xi_i$:

$\frac{\partial L}{\partial \xi_i} = C - \alpha_i - \beta_i = 0 \Rightarrow \alpha_i + \beta_i = C$

Note that among these conditions $\beta_i$ has no constraint other than $\beta_i \ge 0$, so the constraint $\alpha_i + \beta_i = C$ can be turned into $\alpha_i \le C$. Substituting all of this back into the Lagrangian $L$ then gives:

$\begin{aligned} L &= \frac{\left\|\sum_{i=1}^N \alpha_i y_i x_i\right\|^2}{2} + \sum_{i=1}^N (C - \alpha_i - \beta_i)\xi_i - \sum_{i=1}^N \alpha_i y_i \left(\sum_{j=1}^N \alpha_j y_j x_j\right) \cdot x_i - b\sum_{i=1}^N \alpha_i y_i + \sum_{i=1}^N \alpha_i \\ &= -\frac12 \sum_{i=1}^N \sum_{j=1}^N \alpha_i \alpha_j y_i y_j (x_i \cdot x_j) + \sum_{i=1}^N \alpha_i \end{aligned}$

So the dual problem is $\max_{\alpha} \left[-\frac12 \sum_{i=1}^N \sum_{j=1}^N \alpha_i \alpha_j y_i y_j (x_i \cdot x_j) + \sum_{i=1}^N \alpha_i\right]$, subject to

$\sum_{i=1}^N \alpha_i y_i = 0$

$0 \le \alpha_i \le C$

or equivalently $\min_{\alpha} \left[\frac12 \sum_{i=1}^N \sum_{j=1}^N \alpha_i \alpha_j y_i y_j (x_i \cdot x_j) - \sum_{i=1}^N \alpha_i\right]$, subject to

$\sum_{i=1}^N \alpha_i y_i = 0$

$0 \le \alpha_i \le C$

Notice that in the dual form the samples appear only through inner products ($x_i \cdot x_j$), which makes introducing kernel methods simple and natural.
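To make the dual concrete, here is a hedged sketch that solves it for a tiny toy dataset with a generic solver and then recovers $w$ and $b$ from the stationarity conditions; the data, the solver choice (scipy.optimize.minimize), and the tolerance values are all my own, purely for illustration:

import numpy as np
from scipy.optimize import minimize

# tiny toy dataset, purely for illustration
x = np.array([[0., 0.], [1., 1.], [3., 3.], [4., 4.]])
y = np.array([-1., -1., 1., 1.])
C = 10.0

# Q_ij = y_i y_j (x_i · x_j)
Q = (y[:, None] * y[None, :]) * (x @ x.T)

def dual_objective(alpha):
    return 0.5 * alpha @ Q @ alpha - alpha.sum()

constraints = ({'type': 'eq', 'fun': lambda a: a @ y},)   # Σ α_i y_i = 0
bounds = [(0., C)] * len(y)                               # 0 ≤ α_i ≤ C
alpha = minimize(dual_objective, np.zeros(len(y)),
                 bounds=bounds, constraints=constraints).x

# w = Σ α_i y_i x_i  (from ∂L/∂w = 0)
w = (alpha * y) @ x
# b from a support vector with 0 < α_i < C: y_i(w·x_i + b) = 1
sv = np.argmax((alpha > 1e-5) & (alpha < C - 1e-5))
b = y[sv] - w @ x[sv]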

5) Extra

To wrap up, let me name some concepts that were used above without being given their proper names (assume a sample $(x, y)$ and a hyperplane $\Pi: w \cdot x + b = 0$). The functional margin of the sample to the hyperplane is

$y(w \cdot x + b)$

and the geometric margin of the sample to the hyperplane is

$\frac{1}{\|w\|} y(w \cdot x + b)$

Solving the optimization problem

$\min_{w, b} \frac{\|w\|^2}{2}$, subject to $y_i(w \cdot x_i + b) \ge 1,\ \forall (x_i, y_i) \in D$

is usually called hard-margin maximization, and the resulting hyperplane is usually called the maximum hard-margin separating hyperplane.

Solving the optimization problem

$\min_{w, b} \left[\frac{\|w\|^2}{2} + C\sum_{i=1}^N \xi_i\right]$, subject to $y_i(w \cdot x_i + b) \ge 1 - \xi_i,\ \forall (x_i, y_i) \in D$ and $\xi_i \ge 0$

is usually called soft-margin maximization, and the resulting hyperplane is usually called the maximum soft-margin separating hyperplane.

Finally, allow me to state two results without proof (the conclusions are intuitive and the proofs are long): if the dataset is linearly separable, the maximum hard-margin separating hyperplane exists and is unique; if the dataset is not linearly separable, the maximum soft-margin separating hyperplane exists but is not unique, where the normal vector ($w$) is unique while the bias ($b$) may not be unique (thanks to @shuyu cheng in the comments for pointing this out).

In the next article we will introduce kernel methods and show how to apply them to the perceptron and to SVM.

Hope you enjoyed it~
