【Particle Swarm Optimization of a Support Vector Machine: Regression Prediction】

  1. First, an overview of optimizing a classification SVM
    Import the required libraries:
import numpy as np
import random
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
import matplotlib.pyplot as plt
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_score

Main program:

data = np.loadtxt(r'E:/12k3/pailieshang.txt', dtype=float, delimiter=',')
y, x = np.split(data, (1,), axis=1)

data_train_x, data_test_x, data_train_y, data_test_y = train_test_split(x, y, random_state=1, test_size=0.25)
x_train = np.array(data_train_x)
x_test = np.array(data_test_x)


# Initialize parameters
W = 0.5                                # inertia weight
c1 = 0.2                               # cognitive learning factor
c2 = 0.5                               # social learning factor
n_iterations = 50                      # number of iterations
n_particles = 30                       # swarm size
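These parameters plug into the standard PSO update rule, which the main loop below implements. A minimal sketch of one update step for a single hypothetical two-dimensional particle (all state values here are made up, and the random draws are seeded for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(42)
W, c1, c2 = 0.5, 0.2, 0.5           # same values as above

# Hypothetical state for one 2-D particle (position = (gamma, C))
position = np.array([10.0, 20.0])
velocity = np.array([0.0, 0.0])
pbest = np.array([12.0, 18.0])      # this particle's personal best
gbest = np.array([15.0, 25.0])      # the swarm's global best

# v <- W*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  then  x <- x + v
new_velocity = (W * velocity
                + c1 * rng.random() * (pbest - position)
                + c2 * rng.random() * (gbest - position))
new_position = position + new_velocity
print(new_position)
```

Both difference terms pull the particle toward the personal and global bests; the inertia term `W * velocity` carries over momentum from the previous step.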


# Fitness function
def fitness_function(position):

    # gamma and the penalty parameter C are encoded as a real-valued
    # vector and used as the PSO particle's position
    svclassifier = SVC(kernel='rbf', gamma=position[0], C=position[1])
    svclassifier.fit(data_train_x, data_train_y.ravel())
    score = cross_val_score(svclassifier, data_test_x, data_test_y.ravel(), cv=5).mean()   # cross-validation
    print(score)                                                            # classification accuracy
    Y_pred = cross_val_predict(svclassifier, data_test_x, data_test_y.ravel(), cv=5)       # predictions
    print(svclassifier.score(data_test_x, data_test_y))
    # This is a four-class problem; the fitness is the number of
    # misclassified samples, i.e. the sum of the off-diagonal entries of
    # the confusion matrix. Compute the matrix once instead of twelve
    # times, and return the value twice to keep the (train, test)
    # tuple shape expected by the PSO loop below.
    cm = confusion_matrix(data_test_y, Y_pred)
    misclassified = cm.sum() - np.trace(cm)
    return misclassified, misclassified
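The long twelve-term sum in the original fitness function is simply the sum of all off-diagonal entries of the 4x4 confusion matrix, which equals the matrix total minus its trace. A quick check of that identity with a made-up matrix:

```python
import numpy as np

# Hypothetical confusion matrix for a four-class problem
cm = np.array([[5, 1, 0, 2],
               [0, 7, 1, 0],
               [1, 0, 6, 1],
               [0, 2, 0, 8]])

# Sum of off-diagonal entries == all entries minus the diagonal
misclassified = cm.sum() - np.trace(cm)

# The same value computed term by term, as the original code does
manual = sum(cm[i][j] for i in range(4) for j in range(4) if i != j)
print(misclassified, manual)  # both 8
```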

# Initialize particle positions and velocities, then iterate
particle_position_vector = np.array([np.array([random.random() * 100, random.random() * 100]) for _ in range(n_particles)])
pbest_position = particle_position_vector.copy()   # must be a copy: position updates below would otherwise overwrite the personal bests
pbest_fitness_value = np.array([float('inf') for _ in range(n_particles)])
gbest_fitness_value = np.array([float('inf'), float('inf')])
gbest_position = np.array([float('inf'), float('inf')])
velocity_vector = [np.array([0.0, 0.0]) for _ in range(n_particles)]
iteration = 0
while iteration < n_iterations:
    for i in range(n_particles):
        fitness_candidate = fitness_function(particle_position_vector[i])
        print("error of particle-", i, "is (training, test)", fitness_candidate, " At (gamma, c): ",
              particle_position_vector[i])

        if pbest_fitness_value[i] > fitness_candidate[1]:
            pbest_fitness_value[i] = fitness_candidate[1]
            pbest_position[i] = particle_position_vector[i]

        if gbest_fitness_value[1] > fitness_candidate[1]:
            gbest_fitness_value = fitness_candidate
            gbest_position = particle_position_vector[i]

        elif gbest_fitness_value[1] == fitness_candidate[1] and gbest_fitness_value[0] > fitness_candidate[0]:
            gbest_fitness_value = fitness_candidate
            gbest_position = particle_position_vector[i]

    for i in range(n_particles):
        new_velocity = (W * velocity_vector[i]) + (c1 * random.random()) * (
                    pbest_position[i] - particle_position_vector[i]) + (c2 * random.random()) * (
                                   gbest_position - particle_position_vector[i])
        new_position = new_velocity + particle_position_vector[i]
        particle_position_vector[i] = new_position

    iteration = iteration + 1


# Print the final result
print("The best position is ", gbest_position, "in iteration number", iteration, "with error (train, test):",
      fitness_function(gbest_position))
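One caveat: the position update above is unconstrained, so a particle can drift to gamma <= 0 or C <= 0, which SVC rejects. A common fix (not in the original code) is to clip each position back into a valid range after the update; the bounds below are hypothetical and should match your own search range:

```python
import numpy as np

# Hypothetical search bounds for (gamma, C); SVC requires both > 0
LOWER, UPPER = 1e-4, 100.0

def clip_position(position):
    """Clamp each coordinate into [LOWER, UPPER]."""
    return np.clip(position, LOWER, UPPER)

print(clip_position(np.array([-3.2, 150.0])))  # clamps to the bounds
```

In the main loop, this would be applied right after `new_position` is computed, before the fitness evaluation.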

2. Differences between optimizing SVR and optimizing SVC
2.1 Different number of parameters to optimize
SVR requires optimizing three parameters: C, gamma, and epsilon.

svclassifier = SVC(kernel='rbf', gamma=position[0], C=position[1]) 
svr = SVR(kernel='rbf', gamma=position[0], C=position[1], epsilon=position[2])
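A sketch of what the SVR fitness function might look like, using synthetic data and MSE as the regression metric; the dataset, train/test split, and starting parameters here are all hypothetical stand-ins for the author's data:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

# Hypothetical synthetic regression data
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(120, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(0.0, 0.1, size=120)
X_train, X_test = X[:100], X[100:]
y_train, y_test = y[:100], y[100:]

def fitness_function(position):
    # position = (gamma, C, epsilon): all three tuned by PSO
    svr = SVR(kernel='rbf', gamma=position[0], C=position[1], epsilon=position[2])
    svr.fit(X_train, y_train)
    mse = mean_squared_error(y_test, svr.predict(X_test))
    return mse, mse  # same tuple shape as the classification fitness

err = fitness_function(np.array([0.5, 10.0, 0.1]))
print(err)
```

The PSO loop itself is unchanged; only the fitness function and the particle dimensionality (three instead of two) differ.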

2.2 Subsequent code changes

return mean_squared_error(y[730:833], predict), mean_squared_error(y[730:833], predict)
particle_position_vector = np.array([np.array([random.random() * 10, random.random() * 10, random.random() * 10]) for _ in range(n_particles)])
velocity_vector = ([np.array([0, 0, 0]) for _ in range(n_particles)])

Following the program above, change the particle dimensionality from two parameters to three throughout, and change the fitness function to return a regression evaluation metric (such as MSE); the program will then run.
