Optimizing Neural Networks with PSO (Particle Swarm Optimization)

Contents

1. A Brief Introduction to PSO (Particle Swarm Optimization)

2. Key Code Implementation

3. Related Information


This post focuses on the engineering implementation; plenty of material on the underlying theory is already available online.

1. A Brief Introduction to PSO (Particle Swarm Optimization)

Particle Swarm Optimization (PSO) is an evolutionary computation technique that originated from research on the foraging behavior of bird flocks. Its basic idea is to find the optimal solution through cooperation and information sharing among the individuals of a swarm.
PSO's main advantages are that it is simple to implement and has few parameters to tune. It has been widely applied to function optimization, neural network training, fuzzy system control, and other application areas of genetic algorithms.
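
To make the mechanics concrete, here is a minimal sketch of one canonical PSO step (illustrative only; the function name pso_step and the coefficient values w, c1, c2 are my own choices, not part of any library): each particle keeps a velocity and is pulled toward both its personal best position (pbest) and the swarm-wide best position (gbest).

import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    # x, v, pbest have shape (n_particles, n_dims); gbest has shape (n_dims,)
    r1 = np.random.rand(*x.shape)  # random cognitive (self) weights
    r2 = np.random.rand(*x.shape)  # random social (swarm) weights
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # velocity update
    x = x + v                      # position update
    return x, v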


2. Key Code Implementation

First, install the pyswarm package:

pip install pyswarm

from pyswarm import pso

# Candidate hyperparameters: fine-tuning fraction, NN units, dropout, learning rate.
# Two of them are tuned here; lb and ub are their lower and upper bounds.
lb = [0.2, 0.0008]
ub = [0.5, 0.003]
xopt, fopt = pso(best_model, lb, ub)
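
If you want to sanity-check the pyswarm API before wiring in the network, a self-contained toy run looks like this (swarmsize and maxiter are pyswarm's standard keyword arguments; the values here are illustrative):

from pyswarm import pso

def sphere(p):
    # convex toy objective with its minimum at the origin
    return sum(v ** 2 for v in p)

xopt, fopt = pso(sphere, [-5, -5], [5, 5],
                 swarmsize=20,  # number of particles
                 maxiter=50)    # number of PSO iterations
print(xopt, fopt)               # xopt should land near [0, 0]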

Here best_model is the objective function: it builds and trains the network, and its return value is the fitness that PSO optimizes. In this program the fitness is based on test accuracy; since pyswarm minimizes its objective, the function returns the negative test accuracy. Part of the neural network code is shown below; the model itself is a ResNet.
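
The code below calls model_design(x), which is not shown in this post (the full version in the repository builds the ResNet). Purely for orientation, here is a hypothetical minimal sketch of what such a builder could look like; the layer sizes, the placeholder shapes, and the meaning assigned to x[0] are my assumptions, not the original code (the 0.2 to 0.5 bounds suggest a rate-like quantity, although the logging labels x[0] "Units").

import keras

input_dim, num_classes = 784, 10  # placeholder shapes; replace with your data's

def model_design(x):
    # Hypothetical mapping: x[0] -> dropout rate, x[1] -> learning rate.
    model = keras.Sequential([
        keras.layers.Dense(128, activation='relu', input_shape=(input_dim,)),
        keras.layers.Dropout(x[0]),
        keras.layers.Dense(num_classes, activation='softmax'),
    ])
    model.compile(optimizer=keras.optimizers.Adam(x[1]),
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model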

import keras
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Assumes X_train, Y_train, X_test, Y_test are already loaded.
filepath = "./model/best_weights.h5"  # checkpoint path for the best weights
results = []                          # global log of runs that clear the accuracy threshold

def best_model(x, count=1):
    model = model_design(x)
    history = model.fit(X_train,
        Y_train,
        batch_size=1000,
        epochs=50,
        verbose=2,
        validation_data=(X_test, Y_test),
        callbacks=[
            # keep only the best weights seen so far
            keras.callbacks.ModelCheckpoint(filepath, monitor='val_loss', verbose=0, save_best_only=True, mode='auto'),
            # stop training when validation loss stops improving
            keras.callbacks.EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='auto')
        ])
    train_loss, train_acc = model.evaluate(X_train, Y_train, verbose=0)
    print(f"Train Accuracy: {train_acc} Train Loss: {train_loss}")

    test_loss, test_acc = model.evaluate(X_test, Y_test, verbose=0)
    print(f"Test Accuracy: {test_acc} Test Loss: {test_loss}")

    model.save(f"./model/model-{count}-{round(test_acc, 3)}-{round(test_loss, 3)}--Units-{x[0]}--Learning_rate-{x[1]}")
    np.savetxt(f"data-{count}.csv", x, delimiter=',')

    # Log any run whose test accuracy clears the threshold. (The original
    # version re-created empty lists on every call and guarded the append
    # with `count < 0`, which never fired; a module-level list fixes that.)
    if test_acc > 0.99:
        results.append({
            "count_no": count,
            "Test_Acc": test_acc,
            "Test_Loss": test_loss,
            "Units": x[0],
            "Learning_rate": x[1],
        })
    global result
    result = pd.DataFrame(results)

    # Plot training vs. validation loss for this run
    val_loss_list = history.history['val_loss']
    loss_list = history.history['loss']
    plt.plot(range(len(loss_list)), val_loss_list, label='val_loss')
    plt.plot(range(len(loss_list)), loss_list, label='loss')
    plt.legend()
    plt.show()

    # pyswarm minimizes the objective, so return the negative test
    # accuracy to make PSO maximize accuracy.
    return -test_acc

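Once pso returns, xopt holds the best hyperparameters found and fopt the best objective value, which under the sign convention above is the negative of the best test accuracy. A small usage sketch (the swarmsize and maxiter values are illustrative):

xopt, fopt = pso(best_model, lb, ub, swarmsize=20, maxiter=30)
print(f"best hyperparameters: {xopt}, best test accuracy: {-fopt}")
final_model = model_design(xopt)  # rebuild the network with the winning hyperparameters
print(result)                     # log of runs whose test accuracy exceeded 0.99
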
The above is the implementation of PSO-based neural network optimization. The complete code is available on my GitHub:

https://github.com/cyjack/pso-deeplearning.git

3. Related Information

If you have questions about optimizing model parameters with PSO, feel free to contact me. WeChat: Paper_pass_a

For MATLAB users, there is also a related add-in; its description follows.

This add-in to the PSO Research Toolbox (Evers 2009) aims to allow an artificial neural network (ANN, or simply NN) to be trained using the Particle Swarm Optimization (PSO) technique (Kennedy, Eberhart et al. 2001). The add-in acts as a bridge or interface between MATLAB's NN toolbox and the PSO Research Toolbox: MATLAB's NN functions can call the NN add-in, which in turn calls the PSO Research Toolbox for NN training. This approach treats each PSO particle as one possible solution of weight and bias combinations for the NN (Settles and Rylander; Rui Mendes 2002; Venayagamoorthy 2003); the particles therefore move about in the search space aiming to minimise the output of the NN performance function. The author acknowledges that code for PSO training of a NN already exists (Birge 2005); however, that code was found to work only with MATLAB version 2005 and older, whereas this NN add-in works with newer versions of MATLAB up to 2010a.

HELPFUL LINKS:

1. This NN add-in only works when used with the PSORT found at http://www.mathworks.com/matlabcentral/fileexchange/28291-particle-swarm-optimization-research-toolbox.
2. The author acknowledges the modification of code used in an old PSO toolbox for NN training found at http://www.mathworks.com.au/matlabcentral/fileexchange/7506.
3. User support and contact information for the author of this NN add-in can be found at http://www.tricia-rambharose.com/

ACKNOWLEDGEMENTS

The author acknowledges the support of advisors and fellow researchers who helped in various ways to better her understanding of PSO and NN, which led to the creation of this add-in:

* Dr. Alexander Nikov - Senior Lecturer and Head of Usability Lab, UWI, St. Augustine, Trinidad, W.I. http://www2.sta.uwi.edu/~anikov/
* Dr. Sabine Graf - Assistant Professor, Athabasca University, Alberta, Canada. http://scis.athabascau.ca/scis/staff/faculty.jsp?id=sabineg
* Dr. Kinshuk - Professor, Athabasca University, Alberta, Canada. http://scis.athabascau.ca/scis/staff/faculty.jsp?id=kinshuk
* Members of the iCore group at Athabasca University, Edmonton, Alberta, Canada.