Machine Learning Notes: Kernelized Support Vector Machines

(The book covers this part very briefly, which left me quite confused.)

  • Kernelized support vector machines are an extension of linear SVMs that allows more complex models, i.e. decision boundaries that are not simply hyperplanes in the input space.

  • Adding nonlinear features to the representation of the data can make
    linear models more powerful.

  • There are mainly two ways to map the data into a higher-dimensional space (see the sketch after this list):

    Polynomial kernel: computes all possible polynomials of the original features up to a certain degree.
    Radial basis function (RBF) kernel: also considers all polynomial terms, but the importance of the features decreases for higher degrees.

  • Only a subset of the training points is needed to define the decision boundary: the ones that lie on the border between the classes. These are called support vectors.
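
To make the kernel idea concrete, here is a minimal NumPy sketch of my own (not from the book) that evaluates the two kernels directly, instead of building the high-dimensional feature representation explicitly:

import numpy as np

def polynomial_kernel(x1, x2, degree=3, coef0=1.0):
    # k(x1, x2) = (x1 . x2 + coef0) ** degree, the dot product in the
    # space of all polynomial terms of the features up to `degree`
    return (np.dot(x1, x2) + coef0) ** degree

def rbf_kernel(x1, x2, gamma=0.1):
    # k(x1, x2) = exp(-gamma * ||x1 - x2||^2), the dot product in an
    # infinite-dimensional space in which higher-degree terms get
    # exponentially smaller weights
    return np.exp(-gamma * np.sum((x1 - x2) ** 2))

x1 = np.array([1.0, 2.0])
x2 = np.array([2.0, 0.5])
print(polynomial_kernel(x1, x2))  # 64.0
print(rbf_kernel(x1, x2))         # ~0.72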

In Python this can be implemented with SVC from scikit-learn:

from sklearn.svm import SVC
import mglearn
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

x, y = mglearn.tools.make_handcrafted_dataset()
# gamma sets the width (inverse radius) of the Gaussian kernel;
# C controls the model's complexity (regularization)
svm = SVC(kernel='rbf', C=10, gamma=0.1).fit(x, y)

mglearn.plots.plot_2d_separator(svm, x, eps=.5)  # plot the decision boundary
mglearn.discrete_scatter(x[:, 0], x[:, 1], y)
# highlight the support vectors; the sign of the dual coefficient
# tells us which class a support vector belongs to
sv = svm.support_vectors_
sv_labels = svm.dual_coef_.ravel() > 0
mglearn.discrete_scatter(sv[:, 0], sv[:, 1], sv_labels, s=15, markeredgewidth=3)
plt.xlabel('Feature 0')
plt.ylabel('Feature 1')
plt.show()
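
Continuing from the snippet above, you can also inspect the fitted model directly; support_vectors_ holds the support vectors and dual_coef_ their coefficients:

# each row of support_vectors_ is one support vector; the sign of its
# dual coefficient indicates which class it belongs to
print(svm.support_vectors_)
print(svm.dual_coef_)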

The figure below shows the resulting classification with C=10 and gamma=0.1:

[Figure: decision boundary of the RBF-kernel SVM; support vectors are drawn with thick outlines]

Let's vary C and gamma to see their effect in detail:

fig, axes = plt.subplots(3, 3, figsize=(15, 10))

# vary C and gamma on a log scale (powers of ten)
for ax, C in zip(axes, [-1, 0, 3]):
    for a, gamma in zip(ax, range(-1, 2)):
        mglearn.plots.plot_svm(log_C=C, log_gamma=gamma, ax=a)
axes[0, 0].legend(['class 0', 'class 1', 'sv class 0', 'sv class 1'],
                  ncol=4, loc=(.9, 1.2))
plt.show()

As C and gamma increase, the decision boundary becomes more and more curved: a larger gamma gives the Gaussian kernel a smaller radius, so each training point has a more local influence, while a larger C means less regularization, so individual points carry more weight.

[Figure: 3×3 grid of decision boundaries for log C ∈ {-1, 0, 3} and log gamma ∈ {-1, 0, 1}]
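
A rough way to see the effect of gamma without plots (a sketch of my own, on the same handcrafted dataset) is to count the support vectors: as gamma grows, each point's influence becomes more local, and typically more training points end up defining the boundary.

for gamma in [0.1, 1, 10]:
    model = SVC(kernel='rbf', C=1, gamma=gamma).fit(x, y)
    print('gamma=%s, support vectors: %d' % (gamma, len(model.support_vectors_)))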

Let's again use the Breast Cancer dataset as an example:

cancer = load_breast_cancer()
x_train, x_test, y_train, y_test = train_test_split(cancer.data, cancer.target, random_state=0)
svc = SVC().fit(x_train, y_train)
print('Test score: %s' % svc.score(x_test, y_test))
# boxplot of each feature on a symmetric log scale to compare magnitudes
plt.boxplot(x_train)
plt.yscale('symlog')
plt.xlabel('Feature Index')
plt.ylabel('Feature Magnitude')
plt.show()

The feature magnitudes in this dataset differ by several orders of magnitude, which hurts the kernel SVM badly (the test score is only about 0.67), so we need to do some preprocessing first.

[Figure: boxplot of the feature magnitudes on a symlog scale]

# rescale each feature to the range [0, 1], using statistics
# computed on the training set only
min_on_training = x_train.min(axis=0)
range_on_training = (x_train - min_on_training).max(axis=0)
x_train_scaled = (x_train - min_on_training) / range_on_training
x_test_scaled = (x_test - min_on_training) / range_on_training

svc = SVC()
svc.fit(x_train_scaled, y_train)
print('Training score: %s' % svc.score(x_train_scaled, y_train),
      'Test score: %s' % svc.score(x_test_scaled, y_test))

After this preprocessing, the score on the test set improves dramatically.
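
The manual rescaling above is exactly what scikit-learn's MinMaxScaler does; an equivalent version (my own sketch) would be:

from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler().fit(x_train)      # learn min/range on the training set only
x_train_scaled = scaler.transform(x_train)
x_test_scaled = scaler.transform(x_test)  # reuse the training statistics
svc = SVC().fit(x_train_scaled, y_train)
print('Test score: %s' % svc.score(x_test_scaled, y_test))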

Strengths: allow for complex decision boundaries; work well on both low-dimensional and high-dimensional data.

Weaknesses: don't scale very well with the number of samples; require careful preprocessing of the data and tuning of the parameters.
