Support Vector Machines (SVM)
- The problem to solve: what kind of decision boundary is best?
Computing distances
Consider two points x' and x'' on the hyperplane:
$$w^Tx'=-b,\quad w^Tx''=-b$$
Subtracting shows that w is perpendicular to the hyperplane:
$$w^T(x''-x')=0$$
Using two points x', x'' on the hyperplane we obtain the plane's normal vector; the distance from a point x to the hyperplane is then the projection of (x - x') onto that normal:
$$\mathrm{distance}(x,b,w)=\left|\frac{w^T}{||w||}(x-x')\right|=\frac{|w^Tx+b|}{||w||}$$
where $\frac{w}{||w||}$ (the vector divided by its norm) is the unit vector giving the direction.
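As a quick sanity check, here is a minimal numpy sketch of this distance formula (the vectors are hypothetical, chosen only for illustration):

import numpy as np

w = np.array([1.0, 2.0])  # hypothetical normal vector
b = -3.0                  # hypothetical offset
x = np.array([4.0, 5.0])  # hypothetical query point
distance = abs(w @ x + b) / np.linalg.norm(w)  # |w^T x + b| / ||w||
print(distance)  # |4 + 10 - 3| / sqrt(5) ≈ 4.92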
Defining the data and labels
- SVM is a supervised learning method, so labels are required. Dataset: $(X_1,y_1),(X_2,y_2),\dots,(X_n,y_n)$
- y is the class label of a sample: y = +1 when X is a positive example, and y = -1 when X is a negative example.
- Decision function (where $\Phi(x)$ is a transformation applied to the data):
$$y(x)=w^T\Phi(x)+b \tag{1}$$
$$\Rightarrow \begin{matrix} y(x_i)>0 \Leftrightarrow y_i=+1 \\ y(x_i)<0 \Leftrightarrow y_i=-1 \end{matrix} \Rightarrow y_iy(x_i)>0 \tag{2}$$
The optimization objective
- Intuitive explanation: find a line (a w and b) such that the points closest to that line are as far from it as possible.
- Simplifying the point-to-line distance (by (1) and (2) the numerator is always positive, so the absolute value can be dropped):
$$\frac{y_i(w^T\Phi(x_i)+b)}{||w||}$$
The objective function
- Rescaling: the decision function (w, b) can be rescaled so that $|y(x)| \geq 1$ on the training samples, which gives:
$$y_i(w^T\Phi(x_i)+b)\geq1 \tag{3}$$
(previously we only required this to be positive; now it is slightly stricter)
- Optimization objective (maximize the distance to the nearest point):
$$\mathop{argmax}_{w,b}\left\{\frac{1}{||w||}\min_i\left[y_i(w^T\Phi(x_i)+b)\right]\right\}$$
By (3) the inner minimum equals 1 (with equality for the closest points), so it suffices to consider:
$$\mathop{argmax}_{w,b}\frac{1}{||w||} \tag{4}$$
- Current objective: (4), subject to constraint (3).
- The usual trick: convert the maximization into a minimization:
$$\min_{w,b}\frac{1}{2}||w||^2$$
- How to solve it: the method of Lagrange multipliers.
The method of Lagrange multipliers
- A constrained optimization problem
$$\min_x f_0(x),\quad \text{subject to } f_i(x)\leq 0\ (i=1,\dots,m),\quad h_i(x)=0\ (i=1,\dots,q)$$
is converted to:
$$\min L(x,\lambda,\nu)=f_0(x)+\sum^m_{i=1}\lambda_if_i(x)+\sum^q_{i=1}\nu_ih_i(x)$$
- In our case:
$$\min L(w,b,\alpha)=\frac{1}{2}||w||^2-\sum^n_{i=1}\alpha_i\left(y_i(w^T\Phi(x_i)+b)-1\right) \tag{5}$$
Solving the SVM
- Take partial derivatives with respect to w and b to obtain two conditions (using duality we may swap min and max):
$$\min_{w,b}\max_\alpha L(w,b,\alpha) \Rightarrow \max_\alpha \min_{w,b} L(w,b,\alpha)$$
- Partial derivative with respect to w:
$$\frac{\partial L}{\partial w}=0 \Rightarrow w=\sum^n_{i=1}\alpha_iy_i\Phi(x_i)$$
- Partial derivative with respect to b:
$$\frac{\partial L}{\partial b}=0 \Rightarrow 0=\sum^n_{i=1}\alpha_iy_i$$
- Substituting back into (5) gives:
$$\sum^n_{i=1}\alpha_i-\frac{1}{2}\sum^n_{i=1,j=1}\alpha_i\alpha_jy_iy_j\Phi(x_i)^T\Phi(x_j)$$
- The problem thus becomes a maximization over α, which we convert to a minimization:
$$\min_\alpha \frac{1}{2}\sum^n_{i=1}\sum^n_{j=1}\alpha_i\alpha_jy_iy_j\left(\Phi(x_i)^T\Phi(x_j)\right)-\sum^n_{i=1}\alpha_i$$
subject to (the second condition is required by the Lagrange multiplier method):
$$\sum^n_{i=1}\alpha_iy_i=0,\quad \alpha_i \geq 0$$
A worked SVM example
Data: 3 points, with positive examples X1(3,3) and X2(4,3), and negative example X3(1,1).
Solve:
$$\frac{1}{2}\sum^n_{i=1}\sum^n_{j=1}\alpha_i\alpha_jy_iy_j(x_i^Tx_j)-\sum^n_{i=1}\alpha_i$$
Subject to:
$$\alpha_1+\alpha_2-\alpha_3=0 \tag{6}$$
$$\alpha_i \geq 0,\quad i=1,2,3$$
Substituting the data in gives:
$$\frac{1}{2}(18\alpha_1^2+25\alpha_2^2+2\alpha_3^2+42\alpha_1\alpha_2-12\alpha_1\alpha_3-14\alpha_2\alpha_3)-\alpha_1-\alpha_2-\alpha_3$$
Using (6) to eliminate $\alpha_3$, this simplifies to:
$$4\alpha_1^2+\frac{13}{2}\alpha_2^2+10\alpha_1\alpha_2-2\alpha_1-2\alpha_2$$
Taking partial derivatives with respect to $\alpha_1$ and $\alpha_2$ and setting them to zero gives $(\alpha_1,\alpha_2)=(1.5,-1)$, which violates $\alpha_i\geq0$; the constrained minimum therefore lies on the boundary, at $\alpha_1=0.25$, $\alpha_2=0$. Hence the minimum is attained at $(0.25,0,0.25)$.
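A quick numerical check of this constrained minimum (a scipy sketch, not part of the original derivation):

from scipy.optimize import minimize

def reduced_dual(a):
    a1, a2 = a  # alpha_3 = alpha_1 + alpha_2 has already been eliminated
    return 4*a1**2 + 13/2*a2**2 + 10*a1*a2 - 2*a1 - 2*a2

# minimize subject to alpha_1 >= 0, alpha_2 >= 0
res = minimize(reduced_dual, x0=[0.1, 0.1], bounds=[(0, None), (0, None)])
print(res.x)  # approximately [0.25, 0.]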
Substituting the α values back in gives:
$$w=\sum^n_{i=1}\alpha_iy_i\Phi(x_i)=\frac{1}{4}\times1\times(3,3)+\frac{1}{4}\times(-1)\times(1,1)=\left(\frac{1}{2},\frac{1}{2}\right)$$
$$b=y_i-\sum^n_{j=1}\alpha_jy_j(x_j^Tx_i)=1-\left(\frac{1}{4}\times1\times18+\frac{1}{4}\times(-1)\times6\right)=-2$$
(evaluated at the support vector $x_1=(3,3)$, where $x_1^Tx_1=18$ and $x_3^Tx_1=6$)
The separating hyperplane is therefore:
$$0.5x_1+0.5x_2-2=0$$
The points on the margin (i.e., the points with α ≠ 0) are the support vectors. Only the support vectors influence the final result.
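This result can be reproduced with sklearn (a verification sketch; the variable names are ours):

import numpy as np
from sklearn.svm import SVC

X3 = np.array([[3, 3], [4, 3], [1, 1]])
y3 = np.array([1, 1, -1])
clf = SVC(kernel='linear', C=float('inf'))  # hard margin
clf.fit(X3, y3)
print(clf.coef_, clf.intercept_)  # approximately [[0.5 0.5]] [-2.]
print(clf.support_vectors_)       # [[1. 1.] [3. 3.]]: only X3 and X1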
Soft margin
Soft margin: data often contain noisy points, and a boundary forced to respect them can be very poor. To address this, we introduce slack variables $\xi_i$:
$$y_i(wx_i+b)\geq1-\xi_i$$
This yields a new objective function:
$$\min \frac{1}{2}||w||^2+C\sum^n_{i=1}\xi_i$$
When C is very large: classification is strict and errors are essentially not tolerated.
When C is very small: larger errors are tolerated.
Solving with Lagrange multipliers:
$$L(w,b,\xi,\alpha,\mu)=\frac{1}{2}||w||^2+C\sum^n_{i=1}\xi_i-\sum^n_{i=1}\alpha_i\left(y_i(wx_i+b)-1+\xi_i\right)-\sum^n_{i=1}\mu_i\xi_i$$
Subject to:
$$\sum^n_{i=1}\alpha_iy_i=0,\quad C-\alpha_i-\mu_i=0,\quad \alpha_i \geq 0,\quad \mu_i \geq 0$$
Solving in the same way as before yields:
$$\sum^n_{i=1}\alpha_iy_i=0,\quad 0\leq\alpha_i\leq C$$
Kernel transformations (when the data are not linearly separable in low dimensions)
- Kernel transformation: if the data cannot be separated in a low-dimensional space, map them to a higher-dimensional one.
- Goal: find a suitable transformation, i.e., Φ(X).
- Gaussian kernel:
$$K(X,Y)=\exp\left\{-\frac{||X-Y||^2}{2\sigma^2}\right\}$$
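A minimal numpy sketch of this kernel (illustrative only; note that sklearn's gamma parameter corresponds to $1/(2\sigma^2)$):

import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # squared Euclidean distance between every pair of rows of X and Y
    sq_dists = np.sum((X[:, None, :] - Y[None, :, :])**2, axis=-1)
    return np.exp(-sq_dists / (2 * sigma**2))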
SVM: experimental analysis
- The effect of an SVM
- The role of the soft margin: preventing overfitting
- The role of kernel functions: where SVM's power comes from
import numpy as np
import os
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
import warnings
warnings.filterwarnings('ignore')
The effect of a support vector machine
from sklearn.svm import SVC
from sklearn import datasets
iris = datasets.load_iris()
X = iris['data'][:,(2,3)]
y = iris['target']
setosa_or_versicolor = (y==0)|(y==1)
X = X[setosa_or_versicolor]
y = y[setosa_or_versicolor]
svm_clf = SVC(kernel='linear',C=float('inf'))
# kernel defaults to 'rbf' (the Gaussian kernel); other options include 'linear' (no transformation) and 'poly' (set degree to add polynomial feature dimensions)
svm_clf.fit(X,y)
SVC(C=inf, cache_size=200, class_weight=None, coef0=0.0,
decision_function_shape='ovr', degree=3, gamma='auto_deprecated',
kernel='linear', max_iter=-1, probability=False, random_state=None,
shrinking=True, tol=0.001, verbose=False)
# a few generic hand-picked models, for comparison
x0 = np.linspace(0, 5.5, 200)
pred_1 = 5*x0 - 20
pred_2 = x0 - 1.8
pred_3 = 0.1 * x0 + 0.5
def plot_svc_decision_boundary(svm_clf, xmin, xmax, sv=True):
    w = svm_clf.coef_[0]
    b = svm_clf.intercept_[0]
    print(w)
    x0 = np.linspace(xmin, xmax, 200)
    decision_boundary = - w[0]/w[1] * x0 - b/w[1]
    margin = 1/w[1]
    gutter_up = decision_boundary + margin
    gutter_down = decision_boundary - margin
    if sv:
        svs = svm_clf.support_vectors_  # support_vectors_ holds all of the model's support vectors
        plt.scatter(svs[:,0], svs[:,1], s=180, facecolors='#FFAAAA')
    plt.plot(x0, decision_boundary, 'k-', linewidth=2)
    plt.plot(x0, gutter_up, 'k--', linewidth=2)
    plt.plot(x0, gutter_down, 'k--', linewidth=2)
plt.figure(figsize=(14,4))
plt.subplot(121)
plt.plot(X[:,0][y==1],X[:,1][y==1],'bs')
plt.plot(X[:,0][y==0],X[:,1][y==0],'ys')
plt.plot(x0,pred_1,'g--',linewidth=2)
plt.plot(x0,pred_2,'m-',linewidth=2)
plt.plot(x0,pred_3,'r-',linewidth=2)
plt.axis([0,5.5,0,2])
plt.subplot(122)
plot_svc_decision_boundary(svm_clf, 0, 5.5)
plt.plot(X[:,0][y==1],X[:,1][y==1],'bs')
plt.plot(X[:,0][y==0],X[:,1][y==0],'ys')
plt.axis([0,5.5,0,2])
[1.29411744 0.82352928]
[0, 5.5, 0, 2]
The impact of data standardization
Data standardization really is especially important!
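A small illustration of why (the toy data below are assumed for demonstration): when one feature lives on a much larger scale, the unscaled fit is dominated by it, while standardization lets both features contribute:

import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler

Xs = np.array([[1., 50.], [5., 20.], [3., 80.], [5., 60.]])
ys = np.array([0, 0, 1, 1])

svm_clf = SVC(kernel='linear', C=100)
svm_clf.fit(Xs, ys)            # unscaled: the large-scale feature dominates
print(svm_clf.coef_)

scaler = StandardScaler()
X_scaled = scaler.fit_transform(Xs)
svm_clf.fit(X_scaled, ys)      # standardized: both features contribute
print(svm_clf.coef_)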
Soft margin
- What problems do we run into without a soft margin?
The hyperparameter C controls the degree of softness:
import numpy as np
from sklearn import datasets
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
iris=datasets.load_iris()
X = iris["data"][:,(2,3)] # petal length, petal width
y = (iris["target"] == 2).astype(np.float64) # Iris-Virginica
svm_clf = Pipeline((
('std',StandardScaler()),
('linear_svc',LinearSVC(C=1))
))
svm_clf.fit(X,y)
Pipeline(memory=None,
steps=[('std', StandardScaler(copy=True, with_mean=True, with_std=True)), ('linear_svc', LinearSVC(C=1, class_weight=None, dual=True, fit_intercept=True,
intercept_scaling=1, loss='squared_hinge', max_iter=1000,
multi_class='ovr', penalty='l2', random_state=None, tol=0.0001,
verbose=0))])
svm_clf.predict([[5.5,1.7]])
array([1.])
Comparing the effect of different C values:
scaler = StandardScaler()
svm_clf1 = LinearSVC(C=1,random_state = 42)
svm_clf2 = LinearSVC(C=100,random_state = 42)
scaled_svm_clf1 = Pipeline((
('std',scaler),
('linear_svc',svm_clf1)
))
scaled_svm_clf2 = Pipeline((
('std',scaler),
('linear_svc',svm_clf2)
))
scaled_svm_clf1.fit(X,y)
scaled_svm_clf2.fit(X,y)
Pipeline(memory=None,
steps=[('std', StandardScaler(copy=True, with_mean=True, with_std=True)), ('linear_svc', LinearSVC(C=100, class_weight=None, dual=True, fit_intercept=True,
intercept_scaling=1, loss='squared_hinge', max_iter=1000,
multi_class='ovr', penalty='l2', random_state=42, tol=0.0001,
verbose=0))])
# map the parameters learned on standardized data back to the original scale
b1 = svm_clf1.decision_function([-scaler.mean_ / scaler.scale_])
b2 = svm_clf2.decision_function([-scaler.mean_ / scaler.scale_])
w1 = svm_clf1.coef_[0] / scaler.scale_
w2 = svm_clf2.coef_[0] / scaler.scale_
svm_clf1.intercept_ = np.array([b1])
svm_clf2.intercept_ = np.array([b2])
svm_clf1.coef_ = np.array([w1])
svm_clf2.coef_ = np.array([w2])
plt.figure(figsize=(14,4.2))
plt.subplot(121)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^", label="Iris-Virginica")
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs", label="Iris-Versicolor")
plot_svc_decision_boundary(svm_clf1, 4, 6,sv=False)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="upper left", fontsize=14)
plt.title("$C = {}$".format(svm_clf1.C), fontsize=16)
plt.axis([4, 6, 0.8, 2.8])
plt.subplot(122)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^")
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs")
plot_svc_decision_boundary(svm_clf2, 4, 6,sv=False)
plt.xlabel("Petal length", fontsize=14)
plt.title("$C = {}$".format(svm_clf2.C), fontsize=16)
plt.axis([4, 6, 0.8, 2.8])
[0.86508935 2.24726149]
[1.72273715 3.20298118]
[4, 6, 0.8, 2.8]
- On the right, with a high C value, the classifier makes fewer margin violations but ends up with a smaller margin.
- On the left, with a low C value, the margin is much larger, but many instances end up inside it.
- Treat C as a hyperparameter and select it by cross-validation, as sketched below.
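A sketch of that selection (the parameter grid is illustrative, not from the original):

from sklearn.model_selection import GridSearchCV

param_grid = {'linear_svc__C': [0.01, 0.1, 1, 10, 100]}  # hypothetical grid
grid = GridSearchCV(svm_clf, param_grid, cv=5)  # svm_clf: the Pipeline defined above
grid.fit(X, y)
print(grid.best_params_)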
Nonlinear support vector machines
X1D = np.linspace(-4, 4, 9).reshape(-1, 1)
X2D = np.c_[X1D, X1D**2]
y = np.array([0, 0, 1, 1, 1, 1, 1, 0, 0])
plt.figure(figsize=(11, 4))
plt.subplot(121)
plt.grid(True, which='both')
plt.axhline(y=0, color='k')
plt.plot(X1D[:, 0][y==0], np.zeros(4), "bs")
plt.plot(X1D[:, 0][y==1], np.zeros(5), "g^")
plt.gca().get_yaxis().set_ticks([])
plt.xlabel(r"$x_1$", fontsize=20)
plt.axis([-4.5, 4.5, -0.2, 0.2])
plt.subplot(122)
plt.grid(True, which='both')
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.plot(X2D[:, 0][y==0], X2D[:, 1][y==0], "bs")
plt.plot(X2D[:, 0][y==1], X2D[:, 1][y==1], "g^")
plt.xlabel(r"$x_1$", fontsize=20)
plt.ylabel(r"$x_2$", fontsize=20, rotation=0)
plt.gca().get_yaxis().set_ticks([0, 4, 8, 12, 16])
plt.plot([-4.5, 4.5], [6.5, 6.5], "r--", linewidth=3)
plt.axis([-4.5, 4.5, -1, 17])
plt.subplots_adjust(right=1)
plt.show()
A somewhat harder dataset:
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, noise=0.15, random_state=42)
def plot_dataset(X, y, axes):
    plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs")
    plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^")
    plt.axis(axes)
    plt.grid(True, which='both')
    plt.xlabel(r"$x_1$", fontsize=20)
    plt.ylabel(r"$x_2$", fontsize=20, rotation=0)
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
plt.show()
from sklearn.datasets import make_moons
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
# PolynomialFeatures adds polynomial feature dimensions
polynomial_svm_clf=Pipeline((("poly_features",PolynomialFeatures(degree=3)),
("scaler",StandardScaler()),
("svm_clf",LinearSVC(C=10,loss="hinge"))
))
polynomial_svm_clf.fit(X,y)
Pipeline(memory=None,
steps=[('poly_features', PolynomialFeatures(degree=3, include_bias=True, interaction_only=False)), ('scaler', StandardScaler(copy=True, with_mean=True, with_std=True)), ('svm_clf', LinearSVC(C=10, class_weight=None, dual=True, fit_intercept=True,
intercept_scaling=1, loss='hinge', max_iter=1000, multi_class='ovr',
penalty='l2', random_state=None, tol=0.0001, verbose=0))])
def plot_predictions(clf, axes):
    x0s = np.linspace(axes[0], axes[1], 100)
    x1s = np.linspace(axes[2], axes[3], 100)
    x0, x1 = np.meshgrid(x0s, x1s)
    X = np.c_[x0.ravel(), x1.ravel()]
    y_pred = clf.predict(X).reshape(x0.shape)
    plt.contourf(x0, x1, y_pred, cmap=plt.cm.brg, alpha=0.2)
plot_predictions(polynomial_svm_clf,[-1.5,2.5,-1,1.5])
plot_dataset(X,y,[-1.5,2.5,-1,1.5])
The kernel trick in SVMs
from sklearn.svm import SVC
poly_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="poly", degree=3, coef0=1, C=5))
])
# The difference between kernel='poly' and PolynomialFeatures: PolynomialFeatures actually adds feature dimensions to the data, while the poly kernel only computes as if the data had been mapped to a higher-dimensional space, without ever constructing it
poly_kernel_svm_clf.fit(X, y)
Pipeline(memory=None,
steps=[('scaler', StandardScaler(copy=True, with_mean=True, with_std=True)), ('svm_clf', SVC(C=5, cache_size=200, class_weight=None, coef0=1,
decision_function_shape='ovr', degree=3, gamma='auto_deprecated',
kernel='poly', max_iter=-1, probability=False, random_state=None,
shrinking=True, tol=0.001, verbose=False))])
poly100_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="poly", degree=10, coef0=100, C=5))
])
poly100_kernel_svm_clf.fit(X, y)
Pipeline(memory=None,
steps=[('scaler', StandardScaler(copy=True, with_mean=True, with_std=True)), ('svm_clf', SVC(C=5, cache_size=200, class_weight=None, coef0=100,
decision_function_shape='ovr', degree=10, gamma='auto_deprecated',
kernel='poly', max_iter=-1, probability=False, random_state=None,
shrinking=True, tol=0.001, verbose=False))])
plt.figure(figsize=(11, 4))
plt.subplot(121)
plot_predictions(poly_kernel_svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
plt.title(r"$d=3, r=1, C=5$", fontsize=18)
plt.subplot(122)
plot_predictions(poly100_kernel_svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
plt.title(r"$d=10, r=100, C=5$", fontsize=18)
plt.show()
coef0 is the constant (bias) term of the polynomial kernel.
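A quick check of that (sklearn's polynomial kernel computes $(\gamma\,x^Ty+\mathrm{coef0})^{\mathrm{degree}}$; the values below are illustrative):

import numpy as np
from sklearn.metrics.pairwise import polynomial_kernel

a, b = np.array([[1.0, 2.0]]), np.array([[3.0, 4.0]])
K = polynomial_kernel(a, b, degree=3, gamma=1.0, coef0=1.0)
print(K, (a @ b.T * 1.0 + 1.0) ** 3)  # both give 1728.0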
Gaussian kernel:
- transforms features using similarity
- Take a one-dimensional dataset and add two landmarks to it, at $x_1 = -2$ and $x_1 = 1$.
- Next, define the similarity function as a radial basis function (RBF) with $\gamma = 0.3$:
$$\phi_\gamma(x,\ell)=\exp(-\gamma||x-\ell||^2)$$
For example, take $x_1 = -1$: it is a distance of 1 from the first landmark and 2 from the second. Its new features are therefore $x_2 = \exp(-0.3\times1^2)\approx0.74$ and $x_3 = \exp(-0.3\times2^2)\approx0.30$.
These can be regarded as similarity features after the transformation.
def gaussian_rbf(x, landmark, gamma):
    return np.exp(-gamma * np.linalg.norm(x - landmark, axis=1)**2)
gamma = 0.3
x1s = np.linspace(-4.5, 4.5, 200).reshape(-1, 1)
x2s = gaussian_rbf(x1s, -2, gamma)
x3s = gaussian_rbf(x1s, 1, gamma)
XK = np.c_[gaussian_rbf(X1D, -2, gamma), gaussian_rbf(X1D, 1, gamma)]
yk = np.array([0, 0, 1, 1, 1, 1, 1, 0, 0])
plt.figure(figsize=(11, 4))
plt.subplot(121)
plt.grid(True, which='both')
plt.axhline(y=0, color='k')
plt.scatter(x=[-2, 1], y=[0, 0], s=150, alpha=0.5, c="red")
plt.plot(X1D[:, 0][yk==0], np.zeros(4), "bs")
plt.plot(X1D[:, 0][yk==1], np.zeros(5), "g^")
plt.plot(x1s, x2s, "g--")
plt.plot(x1s, x3s, "b:")
plt.gca().get_yaxis().set_ticks([0, 0.25, 0.5, 0.75, 1])
plt.xlabel(r"$x_1$", fontsize=20)
plt.ylabel(r"Similarity", fontsize=14)
plt.annotate(r'$\mathbf{x}$',
xy=(X1D[3, 0], 0),
xytext=(-0.5, 0.20),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.1),
fontsize=18,
)
plt.text(-2, 0.9, "$x_2$", ha="center", fontsize=20)
plt.text(1, 0.9, "$x_3$", ha="center", fontsize=20)
plt.axis([-4.5, 4.5, -0.1, 1.1])
plt.subplot(122)
plt.grid(True, which='both')
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.plot(XK[:, 0][yk==0], XK[:, 1][yk==0], "bs")
plt.plot(XK[:, 0][yk==1], XK[:, 1][yk==1], "g^")
plt.xlabel(r"$x_2$", fontsize=20)
plt.ylabel(r"$x_3$ ", fontsize=20, rotation=0)
plt.annotate(r'$\phi\left(\mathbf{x}\right)$',
xy=(XK[3, 0], XK[3, 1]),
xytext=(0.65, 0.50),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.1),
fontsize=18,
)
plt.plot([-0.1, 1.1], [0.57, -0.1], "r--", linewidth=3)
plt.axis([-0.1, 1.1, -0.1, 1.1])
plt.subplots_adjust(right=1)
plt.show()
How many features would this give in theory? One can create a landmark at every instance (every training sample), which turns an m×n training set into an m×m one.
The SVM exploits the kernel trick here, which greatly reduces the computational cost (see the sketch after this list):
- Increasing gamma (γ) makes the Gaussian curve narrower, so each instance's range of influence is smaller: the decision boundary becomes more irregular and wiggles around individual instances.
- Decreasing gamma (γ) makes the Gaussian curve wider, so instances have a larger range of influence and the decision boundary is smoother.
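A sketch of that trick: the m×m Gram matrix of pairwise kernel values $\phi(x_i)\cdot\phi(x_j)$ can be computed directly from the m×n data, without ever materializing the high-dimensional features:

from sklearn.metrics.pairwise import rbf_kernel

K = rbf_kernel(X, X, gamma=0.3)  # shape (m, m): one implicit landmark per instance
print(K.shape)                   # (100, 100) for the 100-sample moons data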
rbf_kernel_svm_clf = Pipeline((
("scaler",StandardScaler()),
("svm_clf",SVC(kernel="rbf",gamma=5,C=0.001))
))
rbf_kernel_svm_clf.fit(X,y)
Pipeline(memory=None,
steps=[('scaler', StandardScaler(copy=True, with_mean=True, with_std=True)), ('svm_clf', SVC(C=0.001, cache_size=200, class_weight=None, coef0=0.0,
decision_function_shape='ovr', degree=3, gamma=5, kernel='rbf',
max_iter=-1, probability=False, random_state=None, shrinking=True,
tol=0.001, verbose=False))])
from sklearn.svm import SVC
gamma1, gamma2 = 0.1, 5
C1, C2 = 0.001, 1000
hyperparams = (gamma1, C1), (gamma1, C2), (gamma2, C1), (gamma2, C2)
svm_clfs = []
for gamma, C in hyperparams:
    rbf_kernel_svm_clf = Pipeline([
        ("scaler", StandardScaler()),
        ("svm_clf", SVC(kernel="rbf", gamma=gamma, C=C))
    ])
    rbf_kernel_svm_clf.fit(X, y)
    svm_clfs.append(rbf_kernel_svm_clf)
plt.figure(figsize=(11, 7))
for i, svm_clf in enumerate(svm_clfs):
    plt.subplot(221 + i)
    plot_predictions(svm_clf, [-1.5, 2.5, -1, 1.5])
    plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
    gamma, C = hyperparams[i]
    plt.title(r"$\gamma = {}, C = {}$".format(gamma, C), fontsize=16)
plt.show()
As the plots show: the smaller $\gamma$ is, the lower the risk of overfitting.