Machine Learning - Support Vector Machines (Nonlinear Classification)

1. Introduction

In practice, most of the data we want to classify is not linearly separable. Instead, we map the data into a higher-dimensional space and classify it there, and this transformation is carried out with the help of a kernel function.

We have already seen that in the linear case the hyperplane can be written as:

$$f(x) = w^\top x + b = \sum_{i=1}^{n} \alpha_i y_i \langle x_i, x \rangle + b$$

For linearly non-separable data, we use a nonlinear mapping $\phi$ to send the data into a feature space and run a linear learner there; the classification function becomes:

$$f(x) = \sum_{i=1}^{n} \alpha_i y_i \langle \phi(x_i), \phi(x) \rangle + b = \sum_{i=1}^{n} \alpha_i y_i K(x_i, x) + b$$

where $K(x_i, x_j) = \langle \phi(x_i), \phi(x_j) \rangle$ is the kernel: it gives the feature-space inner product without ever computing $\phi(x)$ explicitly.
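As a quick sanity check of this trick, the sketch below (my own illustration, not from the original code) verifies that the degree-2 polynomial kernel $(x^\top z)^2$ on 2-D inputs equals an explicit inner product under the feature map $\phi(x) = (x_1^2, \sqrt{2}\,x_1 x_2, x_2^2)$:

import numpy as np

def phi(x):
    # explicit degree-2 feature map: phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2)
    return np.array([x[0]**2, np.sqrt(2)*x[0]*x[1], x[1]**2])

x = np.array([0.3, -0.7])
z = np.array([1.2, 0.5])

print(np.dot(x, z)**2)          # kernel evaluation, never leaves 2-D
print(np.dot(phi(x), phi(z)))   # explicit inner product in 3-D; same value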

2. Mapping Nonlinear Data to Higher Dimensions

Here is a simple linearly non-separable example. Suppose there are two classes satisfying:

$$\text{class } 1:\; x_1^2 + x_2^2 \le r_1^2, \qquad \text{class } 2:\; x_1^2 + x_2^2 \ge r_2^2 \quad (r_1 < r_2)$$

As shown in the figure below:

[Figure: the two classes in the plane; class 1 fills an inner disk and class 2 an outer ring, so no straight line separates them.]

No linear function can separate a picture like this; but after we transform the data into three dimensions, as in the figure below:

[Figure: the same points after being lifted into three dimensions, where the two classes sit at different heights.]

You can see that the red and blue points are mapped to different levels; in the higher-dimensional space they are linearly separable (a single plane can split them).
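Here is a minimal sketch of this lift, assuming the mapping (x1, x2) -> (x1, x2, x1^2 + x2^2), which is consistent with the figure though the original post does not spell it out. After adding the squared radius as a third coordinate, a horizontal plane z = c separates the two classes:

import numpy as np

rng = np.random.default_rng(0)

# class 1: points inside radius 1; class 2: points outside radius 2
theta = rng.uniform(0, 2*np.pi, 100)
r1 = rng.uniform(0.0, 1.0, 100)
r2 = rng.uniform(2.0, 3.0, 100)
inner = np.c_[r1*np.cos(theta), r1*np.sin(theta)]
outer = np.c_[r2*np.cos(theta), r2*np.sin(theta)]

def lift(X):
    # map (x1, x2) -> (x1, x2, x1^2 + x2^2)
    return np.c_[X, (X**2).sum(axis=1)]

# in 3-D the plane z = 2.5 separates the classes perfectly
print((lift(inner)[:, 2] < 2.5).all())   # True
print((lift(outer)[:, 2] > 2.5).all())   # True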

3. Kernel Functions: the Radial Basis Function

There are many kernel functions in common use, for example the linear kernel $K(x_i, x_j) = x_i^\top x_j$, the polynomial kernel $K(x_i, x_j) = (x_i^\top x_j)^d$, the Laplacian kernel $K(x_i, x_j) = \exp(-\|x_i - x_j\| / \sigma)$, and so on.

Here we focus on the radial basis function (Gaussian) kernel:

$$K(x_i, x_j) = \exp\!\left(-\frac{\|x_i - x_j\|^2}{2\sigma^2}\right)$$

Here σ is a user-specified parameter that controls the reach of the kernel, i.e. how fast the function value falls off toward 0. If σ is chosen very large, the weights on the high-order features decay very quickly, so numerically the model behaves roughly like one in a low-dimensional subspace. Conversely, if σ is chosen very small, arbitrary data can be mapped so that it becomes linearly separable; that is not necessarily a good thing, since it can bring severe overfitting. On the whole, though, by tuning σ the Gaussian kernel is quite flexible, and it is one of the most widely used kernel functions.
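The short sketch below (my own illustration) makes the reach concrete: for a fixed pair of points, the kernel value, and hence how much one point influences predictions near the other, collapses toward 0 as σ shrinks and rises toward 1 as σ grows.

import numpy as np

def rbf(x, z, sigma):
    # Gaussian kernel K(x, z) = exp(-||x - z||^2 / (2 * sigma^2))
    return np.exp(-np.sum((x - z)**2) / (2 * sigma**2))

x = np.array([0.0, 0.0])
z = np.array([1.0, 1.0])    # squared distance 2

for sigma in (0.1, 0.5, 1.0, 5.0):
    print(sigma, rbf(x, z, sigma))
# sigma=0.1 -> ~0 (the points barely interact); sigma=5.0 -> ~0.96 (nearly 1)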

4. Python Implementation

Training set data (testSetRBF.txt):

-0.214824   0.662756   -1.000000
-0.061569  -0.091875  1.000000
0.406933   0.648055   -1.000000
0.223650   0.130142   1.000000
0.231317   0.766906   -1.000000
-0.748800  -0.531637  -1.000000
-0.557789  0.375797   -1.000000
0.207123   -0.019463  1.000000
0.286462   0.719470   -1.000000
0.195300   -0.179039  1.000000
-0.152696  -0.153030  1.000000
0.384471   0.653336   -1.000000
-0.117280  -0.153217  1.000000
-0.238076  0.000583   1.000000
-0.413576  0.145681   1.000000
0.490767   -0.680029  -1.000000
0.199894   -0.199381  1.000000
-0.356048  0.537960   -1.000000
-0.392868  -0.125261  1.000000
0.353588   -0.070617  1.000000
0.020984   0.925720   -1.000000
-0.475167  -0.346247  -1.000000
0.074952   0.042783   1.000000
0.394164   -0.058217  1.000000
0.663418   0.436525   -1.000000
0.402158   0.577744   -1.000000
-0.449349  -0.038074  1.000000
0.619080   -0.088188  -1.000000
0.268066   -0.071621  1.000000
-0.015165  0.359326   1.000000
0.539368   -0.374972  -1.000000
-0.319153  0.629673   -1.000000
0.694424   0.641180   -1.000000
0.079522   0.193198   1.000000
0.253289   -0.285861  1.000000
-0.035558  -0.010086  1.000000
-0.403483  0.474466   -1.000000
-0.034312  0.995685   -1.000000
-0.590657  0.438051   -1.000000
-0.098871  -0.023953  1.000000
-0.250001  0.141621   1.000000
-0.012998  0.525985   -1.000000
0.153738   0.491531   -1.000000
0.388215   -0.656567  -1.000000
0.049008   0.013499   1.000000
0.068286   0.392741   1.000000
0.747800   -0.066630  -1.000000
0.004621   -0.042932  1.000000
-0.701600  0.190983   -1.000000
0.055413   -0.024380  1.000000
0.035398   -0.333682  1.000000
0.211795   0.024689   1.000000
-0.045677  0.172907   1.000000
0.595222   0.209570   -1.000000
0.229465   0.250409   1.000000
-0.089293  0.068198   1.000000
0.384300   -0.176570  1.000000
0.834912   -0.110321  -1.000000
-0.307768  0.503038   -1.000000
-0.777063  -0.348066  -1.000000
0.017390   0.152441   1.000000
-0.293382  -0.139778  1.000000
-0.203272  0.286855   1.000000
0.957812   -0.152444  -1.000000
0.004609   -0.070617  1.000000
-0.755431  0.096711   -1.000000
-0.526487  0.547282   -1.000000
-0.246873  0.833713   -1.000000
0.185639   -0.066162  1.000000
0.851934   0.456603   -1.000000
-0.827912  0.117122   -1.000000
0.233512   -0.106274  1.000000
0.583671   -0.709033  -1.000000
-0.487023  0.625140   -1.000000
-0.448939  0.176725   1.000000
0.155907   -0.166371  1.000000
0.334204   0.381237   -1.000000
0.081536   -0.106212  1.000000
0.227222   0.527437   -1.000000
0.759290   0.330720   -1.000000
0.204177   -0.023516  1.000000
0.577939   0.403784   -1.000000
-0.568534  0.442948   -1.000000
-0.011520  0.021165   1.000000
0.875720   0.422476   -1.000000
0.297885   -0.632874  -1.000000
-0.015821  0.031226   1.000000
0.541359   -0.205969  -1.000000
-0.689946  -0.508674  -1.000000
-0.343049  0.841653   -1.000000
0.523902   -0.436156  -1.000000
0.249281   -0.711840  -1.000000
0.193449   0.574598   -1.000000
-0.257542  -0.753885  -1.000000
-0.021605  0.158080   1.000000
0.601559   -0.727041  -1.000000
-0.791603  0.095651   -1.000000
-0.908298  -0.053376  -1.000000
0.122020   0.850966   -1.000000
-0.725568  -0.292022  -1.000000
Test set data (testSetRBF2.txt):
0.676771    -0.486687  -1.000000
0.008473   0.186070   1.000000
-0.727789  0.594062   -1.000000
0.112367   0.287852   1.000000
0.383633   -0.038068  1.000000
-0.927138  -0.032633  -1.000000
-0.842803  -0.423115  -1.000000
-0.003677  -0.367338  1.000000
0.443211   -0.698469  -1.000000
-0.473835  0.005233   1.000000
0.616741   0.590841   -1.000000
0.557463   -0.373461  -1.000000
-0.498535  -0.223231  -1.000000
-0.246744  0.276413   1.000000
-0.761980  -0.244188  -1.000000
0.641594   -0.479861  -1.000000
-0.659140  0.529830   -1.000000
-0.054873  -0.238900  1.000000
-0.089644  -0.244683  1.000000
-0.431576  -0.481538  -1.000000
-0.099535  0.728679   -1.000000
-0.188428  0.156443   1.000000
0.267051   0.318101   1.000000
0.222114   -0.528887  -1.000000
0.030369   0.113317   1.000000
0.392321   0.026089   1.000000
0.298871   -0.915427  -1.000000
-0.034581  -0.133887  1.000000
0.405956   0.206980   1.000000
0.144902   -0.605762  -1.000000
0.274362   -0.401338  1.000000
0.397998   -0.780144  -1.000000
0.037863   0.155137   1.000000
-0.010363  -0.004170  1.000000
0.506519   0.486619   -1.000000
0.000082   -0.020625  1.000000
0.057761   -0.155140  1.000000
0.027748   -0.553763  -1.000000
-0.413363  -0.746830  -1.000000
0.081500   -0.014264  1.000000
0.047137   -0.491271  1.000000
-0.267459  0.024770   1.000000
-0.148288  -0.532471  -1.000000
-0.225559  -0.201622  1.000000
0.772360   -0.518986  -1.000000
-0.440670  0.688739   -1.000000
0.329064   -0.095349  1.000000
0.970170   -0.010671  -1.000000
-0.689447  -0.318722  -1.000000
-0.465493  -0.227468  -1.000000
-0.049370  0.405711   1.000000
-0.166117  0.274807   1.000000
0.054483   0.012643   1.000000
0.021389   0.076125   1.000000
-0.104404  -0.914042  -1.000000
0.294487   0.440886   -1.000000
0.107915   -0.493703  -1.000000
0.076311   0.438860   1.000000
0.370593   -0.728737  -1.000000
0.409890   0.306851   -1.000000
0.285445   0.474399   -1.000000
-0.870134  -0.161685  -1.000000
-0.654144  -0.675129  -1.000000
0.285278   -0.767310  -1.000000
0.049548   -0.000907  1.000000
0.030014   -0.093265  1.000000
-0.128859  0.278865   1.000000
0.307463   0.085667   1.000000
0.023440   0.298638   1.000000
0.053920   0.235344   1.000000
0.059675   0.533339   -1.000000
0.817125   0.016536   -1.000000
-0.108771  0.477254   1.000000
-0.118106  0.017284   1.000000
0.288339   0.195457   1.000000
0.567309   -0.200203  -1.000000
-0.202446  0.409387   1.000000
-0.330769  -0.240797  1.000000
-0.422377  0.480683   -1.000000
-0.295269  0.326017   1.000000
0.261132   0.046478   1.000000
-0.492244  -0.319998  -1.000000
-0.384419  0.099170   1.000000
0.101882   -0.781145  -1.000000
0.234592   -0.383446  1.000000
-0.020478  -0.901833  -1.000000
0.328449   0.186633   1.000000
-0.150059  -0.409158  1.000000
-0.155876  -0.843413  -1.000000
-0.098134  -0.136786  1.000000
0.110575   -0.197205  1.000000
0.219021   0.054347   1.000000
0.030152   0.251682   1.000000
0.033447   -0.122824  1.000000
-0.686225  -0.020779  -1.000000
-0.911211  -0.262011  -1.000000
0.572557   0.377526   -1.000000
-0.073647  -0.519163  -1.000000
-0.281830  -0.797236  -1.000000
-0.555263  0.126232   -1.000000
Code:
import numpy as np

# Load the dataset: each tab-separated line holds x1, x2, label
def loadDataSet(fileName):
    dataMat = []; labelMat = []
    fr = open(fileName)
    for line in fr.readlines():                                     # read line by line, stripping whitespace
        lineArr = line.strip().split('\t')
        dataMat.append([float(lineArr[0]), float(lineArr[1])])      # feature vector
        labelMat.append(float(lineArr[2]))                          # class label
    return dataMat,labelMat

# Kernel transformation: returns the column vector K(X[j], A) for every row X[j] of X
def kernelTrans(X, A, kTup):
    m,n = np.shape(X)
    K = np.mat(np.zeros((m,1)))
    if kTup[0]=='lin': K = X * A.T   # linear kernel: plain inner products
    elif kTup[0]=='rbf':
        for j in range(m):
            deltaRow = X[j,:] - A
            K[j] = deltaRow*deltaRow.T               # squared Euclidean distance
        K = np.exp(K/(-1*kTup[1]**2))                # note: divides by sigma^2 rather than 2*sigma^2, which just rescales sigma
    else: raise NameError('Houston We Have a Problem -- That Kernel is not recognized')
    return K

# Pick a random j different from i, so we have a second multiplier to optimize
def selectJrand(i,m):
    j=i
    while (j==i):
        j = int(np.random.uniform(0,m))
    return j

# Clip alpha into [L, H]
# aj - alpha value
# H - upper bound
# L - lower bound
def clipAlpha(aj, H, L):
    if aj > H:
        aj = H
    if L > aj:
        aj = L
    return aj

class optStruct:
    def __init__(self,dataMatIn, classLabels, C, toler, kTup):  # initialize the structure with the parameters
        self.X = dataMatIn                            # training data
        self.labelMat = classLabels                   # class labels
        self.C = C                                    # regularization constant
        self.tol = toler                              # KKT tolerance
        self.m = np.shape(dataMatIn)[0]
        self.alphas = np.mat(np.zeros((self.m,1)))    # Lagrange multipliers alpha
        self.b = 0
        self.eCache = np.mat(np.zeros((self.m,2)))    # error cache: (valid flag, Ek)
        self.K = np.mat(np.zeros((self.m,self.m)))    # precomputed kernel matrix K(Xi, Xj)
        for i in range(self.m):
            self.K[:,i] = kernelTrans(self.X, self.X[i,:], kTup)

# Compute the prediction error Ek = f(xk) - yk for sample k
def calcEk(oS, k):
    fXk = float(np.multiply(oS.alphas,oS.labelMat).T*oS.K[:,k] + oS.b)
    Ek = fXk - float(oS.labelMat[k])
    return Ek

# Choose the second multiplier: among cached errors, return the j maximizing |Ei - Ej|, along with Ej
def selectJ(i, oS, Ei):
    maxK = -1; maxDeltaE = 0; Ej = 0
    oS.eCache[i] = [1,Ei]  # mark Ei as valid in the cache
    # indices whose error is cached (.A converts the first column to an array)
    validEcacheList = np.nonzero(oS.eCache[:,0].A)[0]
    if (len(validEcacheList)) > 1:
        for k in validEcacheList:   # loop to find the largest step size
            if k == i: continue     # skip the first multiplier itself
            Ek = calcEk(oS, k)
            deltaE = abs(Ei - Ek)
            if (deltaE > maxDeltaE):
                maxK = k; maxDeltaE = deltaE; Ej = Ek          # keep the j with the largest |Ei - Ej|
        return maxK, Ej
    else:   # first pass: no cached errors yet, pick j at random
        j = selectJrand(i, oS.m)
        Ej = calcEk(oS, j)
    return j, Ej

# Recompute and cache the error after alpha k changes
def updateEk(oS, k):
    Ek = calcEk(oS, k)
    oS.eCache[k] = [1,Ek]

def innerL(i, oS):
    Ei = calcEk(oS, i)     # step 1: compute the error for sample i
    if ((oS.labelMat[i]*Ei < -oS.tol) and (oS.alphas[i] < oS.C)) or ((oS.labelMat[i]*Ei > oS.tol) and (oS.alphas[i] > 0)):
        j,Ej = selectJ(i, oS, Ei)                                               # choose the second multiplier via the max-|Ei - Ej| heuristic
        alphaIold = oS.alphas[i].copy(); alphaJold = oS.alphas[j].copy()        # save the old alphas
        # step 2: compute the box bounds L and H
        if (oS.labelMat[i] != oS.labelMat[j]):
            L = max(0, oS.alphas[j] - oS.alphas[i])
            H = min(oS.C, oS.C + oS.alphas[j] - oS.alphas[i])
        else:
            L = max(0, oS.alphas[j] + oS.alphas[i] - oS.C)
            H = min(oS.C, oS.alphas[j] + oS.alphas[i])
        if L==H: print ("L==H"); return 0
        eta = 2.0 * oS.K[i,j] - oS.K[i,i] - oS.K[j,j] # step 3: compute eta, the second derivative along the constraint direction
        if eta >= 0: print ("eta>=0"); return 0
        oS.alphas[j] -= oS.labelMat[j]*(Ei - Ej)/eta  # step 4: update alpha j
        oS.alphas[j] = clipAlpha(oS.alphas[j],H,L)    # step 5: clip alpha j into [L, H]
        updateEk(oS, j) # refresh the cached error
        if (abs(oS.alphas[j] - alphaJold) < 0.00001): print ("j not moving enough"); return 0
        oS.alphas[i] += oS.labelMat[j]*oS.labelMat[i]*(alphaJold - oS.alphas[j]) # step 6: update alpha i by the opposite amount
        updateEk(oS, i) # refresh the cached error
        # step 7: compute the candidate thresholds b1 and b2
        b1 = oS.b - Ei- oS.labelMat[i]*(oS.alphas[i]-alphaIold)*oS.K[i,i] - oS.labelMat[j]*(oS.alphas[j]-alphaJold)*oS.K[i,j]
        b2 = oS.b - Ej- oS.labelMat[i]*(oS.alphas[i]-alphaIold)*oS.K[i,j]- oS.labelMat[j]*(oS.alphas[j]-alphaJold)*oS.K[j,j]
        # step 8: update b
        if (0 < oS.alphas[i]) and (oS.C > oS.alphas[i]): oS.b = b1
        elif (0 < oS.alphas[j]) and (oS.C > oS.alphas[j]): oS.b = b2
        else: oS.b = (b1 + b2)/2.0
        return 1
    else: return 0

def smoP(dataMatIn, classLabels, C, toler, maxIter,kTup=('lin', 0)):    # full Platt SMO
    oS = optStruct(np.mat(dataMatIn),np.mat(classLabels).transpose(),C,toler, kTup)
    iter = 0
    entireSet = True; alphaPairsChanged = 0
    while (iter < maxIter) and ((alphaPairsChanged > 0) or (entireSet)):
        alphaPairsChanged = 0
        if entireSet:   # go over all samples
            for i in range(oS.m):
                alphaPairsChanged += innerL(i,oS)
                print ("full pass, iter: %d i: %d, pairs changed: %d" % (iter,i,alphaPairsChanged))
            iter += 1
        else:           # go over non-bound (0 < alpha < C) alphas
            nonBoundIs = np.nonzero((oS.alphas.A > 0) * (oS.alphas.A < C))[0]
            for i in nonBoundIs:
                alphaPairsChanged += innerL(i,oS)
                print ("non-bound, iter: %d i: %d, pairs changed: %d" % (iter,i,alphaPairsChanged))
            iter += 1
        if entireSet: entireSet = False # toggle between full passes and non-bound passes
        elif (alphaPairsChanged == 0): entireSet = True
        print ("iteration number: %d" % iter)
    return oS.b,oS.alphas

# Recover the weight vector w = sum_i alpha_i * y_i * x_i (meaningful for the linear kernel)
def calcWs(alphas,dataArr,classLabels):
    X = np.mat(dataArr); labelMat = np.mat(classLabels).transpose()
    m,n = np.shape(X)
    w = np.zeros((n,1))
    for i in range(m):
        w += np.multiply(alphas[i]*labelMat[i],X[i,:].T)
    return w

def testRbf(k1=1.3):
    dataArr,labelArr = loadDataSet('testSetRBF.txt')
    b,alphas = smoP(dataArr, labelArr, 200, 0.0001, 10000, ('rbf', k1))   # C=200, tol=0.0001
    datMat=np.mat(dataArr); labelMat = np.mat(labelArr).transpose()
    svInd=np.nonzero(alphas.A>0)[0]            # indices of the support vectors
    sVs=datMat[svInd]
    labelSV = labelMat[svInd]
    print ("number of support vectors: %d" % np.shape(sVs)[0])
    m,n = np.shape(datMat)
    errorCount = 0
    for i in range(m):
        kernelEval = kernelTrans(sVs,datMat[i,:],('rbf', k1))
        predict=kernelEval.T * np.multiply(labelSV,alphas[svInd]) + b
        if np.sign(predict)!=np.sign(labelArr[i]): errorCount += 1
    print ("training error rate: %f" % (float(errorCount)/m))
    dataArr,labelArr = loadDataSet('testSetRBF2.txt')
    errorCount = 0
    datMat=np.mat(dataArr); labelMat = np.mat(labelArr).transpose()
    m,n = np.shape(datMat)
    for i in range(m):
        kernelEval = kernelTrans(sVs,datMat[i,:],('rbf', k1))
        predict=kernelEval.T * np.multiply(labelSV,alphas[svInd]) + b
        if np.sign(predict)!=np.sign(labelArr[i]): errorCount += 1
    print ("test error rate: %f" % (float(errorCount)/m))

if __name__ == '__main__':
    testRbf()
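For a quick cross-check, here is a sketch of the same experiment with scikit-learn (my own addition, not part of the original post; it assumes scikit-learn is installed). Since kernelTrans above divides by sigma^2 rather than 2*sigma^2, the matching SVC parameter is gamma = 1/k1**2:

# minimal scikit-learn cross-check (assumption: sklearn available)
import numpy as np
from sklearn.svm import SVC

trainX, trainY = loadDataSet('testSetRBF.txt')
testX,  testY  = loadDataSet('testSetRBF2.txt')

k1 = 1.3
clf = SVC(kernel='rbf', C=200, gamma=1.0/k1**2)   # gamma = 1/sigma^2 matches kernelTrans above
clf.fit(np.array(trainX), np.array(trainY))

print("support vectors:", clf.n_support_.sum())
print("training error rate:", 1 - clf.score(np.array(trainX), np.array(trainY)))
print("test error rate:", 1 - clf.score(np.array(testX), np.array(testY)))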