Grey Wolf Optimizer (GWO) in Python and Matlab

A simple implementation of the Grey Wolf Optimizer (GWO) in both Matlab and Python; the code is short and easy to read.
Matlab:
See the GWO.mlx file in the GitHub repo: https://github.com/ZYunfeii/GreyWolfOptimization-GWO. It was written with the Matlab Live Editor, so opening it in Matlab displays the annotations properly.
Python:

#!/usr/bin/python
# -*- coding: UTF-8 -*-
"""
author: Y. F. Zhang
"""

import numpy as np
import matplotlib.pyplot as plt

class GWO:
    def __init__(self):
        self.wolf_num = 15
        self.max_iter = 150
        self.dim = 30
        self.lb = -30*np.ones((self.dim,))
        self.ub = 30*np.ones((self.dim,))
        self.alpha_pos = np.zeros((1,self.dim))
        self.beta_pos = np.zeros((1, self.dim))
        self.delta_pos = np.zeros((1, self.dim))
        self.alpha_score = np.inf
        self.beta_score = np.inf
        self.delta_score = np.inf
        self.convergence_curve = np.zeros((self.max_iter,))
        self.position = np.zeros((self.wolf_num,self.dim))

    def run(self):
        count = 0
        self.init_pos()
        while count < self.max_iter:
            for i in range(self.wolf_num):
                # Clamp wolves that stepped outside the search bounds
                self.position[i,:] = np.clip(self.position[i,:], self.lb, self.ub)
                fitness = self.func(self.position[i,:])
                # Track the three best wolves (alpha > beta > delta).
                # Copy the rows: plain slicing returns a view that would be
                # silently overwritten by the position update below.
                if fitness < self.alpha_score:
                    self.alpha_score = fitness
                    self.alpha_pos = self.position[i,:].copy()
                elif fitness < self.beta_score:
                    self.beta_score = fitness
                    self.beta_pos = self.position[i,:].copy()
                elif fitness < self.delta_score:
                    self.delta_score = fitness
                    self.delta_pos = self.position[i,:].copy()
            a = 2 - count*(2/self.max_iter)  # a decays linearly from 2 to 0
            for i in range(self.wolf_num):
                for j in range(self.dim):
                    # Candidate steps driven by each of the three leaders;
                    # the new position is their average
                    alpha = self.update_pos(self.alpha_pos[j], self.position[i,j], a)
                    beta = self.update_pos(self.beta_pos[j], self.position[i,j], a)
                    delta = self.update_pos(self.delta_pos[j], self.position[i,j], a)
                    self.position[i,j] = (alpha + beta + delta)/3
            count += 1
            self.convergence_curve[count-1] = self.alpha_score
        self.plot_results()

    def init_pos(self):
        for i in range(self.wolf_num):
            for j in range(self.dim):
                self.position[i,j] = np.random.rand()*(self.ub[j]-self.lb[j])+self.lb[j]

    @staticmethod
    def update_pos(v1, v2, a):
        """One-dimensional GWO step toward a leader: X' = X_p - A*|C*X_p - X|."""
        A = 2*np.random.rand()*a - a   # A ~ U(-a, a)
        C = 2*np.random.rand()         # C ~ U(0, 2)
        temp = np.abs(C*v1 - v2)       # distance to the leader
        return v1 - A*temp

    def plot_results(self):
        # 'seaborn-darkgrid' was renamed in matplotlib 3.6; try both names
        try:
            plt.style.use('seaborn-v0_8-darkgrid')
        except OSError:
            plt.style.use('seaborn-darkgrid')
        plt.plot(range(1,self.max_iter+1),self.convergence_curve,'g.--')
        plt.xlabel('iteration')
        plt.ylabel('fitness')
        plt.title('GWO fitness curve')
        plt.show()

    @staticmethod
    def func(x):
        """Rosenbrock function; global minimum 0 at x = (1, ..., 1)."""
        s = 0
        for i in range(len(x)-1):
            s += 100*(x[i+1]-x[i]**2)**2 + (x[i]-1)**2
        return s

if __name__ == "__main__":
    gwo = GWO()
    gwo.run()
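In update_pos, the per-dimension step follows the standard GWO encircling equations, with A = 2a·r1 − a and C = 2·r2, and a decaying linearly from 2 to 0 over the iterations. A small standalone sketch (independent of the class above; the helper name sample_A is mine) shows why that decay matters: early on |A| can exceed 1, so wolves may step away from the leaders and explore, while late in the run A shrinks toward 0 and the pack converges on the alpha/beta/delta average.

```python
import numpy as np

def sample_A(a, n=10000, seed=0):
    """Draw n samples of the GWO coefficient A = 2*a*r - a, r ~ U(0, 1)."""
    rng = np.random.default_rng(seed)
    return 2 * a * rng.random(n) - a

# Early iterations: a is near 2, so A ranges over (-2, 2) and |A| > 1
# is possible -- wolves can step past the leaders (exploration).
early = sample_A(a=2.0)

# Late iterations: a is near 0, so A stays close to 0 -- wolves move
# tightly toward the alpha/beta/delta average (exploitation).
late = sample_A(a=0.1)

print(early.min(), early.max())  # spread over roughly (-2, 2)
print(late.min(), late.max())    # confined to roughly (-0.1, 0.1)
```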

Objective function:
$f(x)=\sum_{i=1}^{n-1}\left[100\left(x_{i+1}-x_{i}^{2}\right)^{2}+\left(x_{i}-1\right)^{2}\right]$

  • Dimension: 30
  • Search range: $[-30, 30]$
  • $f_{min}=0$
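As a quick sanity check of the minimum quoted above, a vectorized version of the same Rosenbrock sum (a standalone sketch, not part of the class) evaluates to 0 at the all-ones vector:

```python
import numpy as np

def rosenbrock(x):
    """Rosenbrock function: sum of 100*(x[i+1]-x[i]^2)^2 + (x[i]-1)^2."""
    x = np.asarray(x, dtype=float)
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (x[:-1] - 1.0)**2)

print(rosenbrock(np.ones(30)))   # 0.0 -- the global minimum
print(rosenbrock(np.zeros(30)))  # 29.0 -- each of the 29 terms contributes 1
```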

Results:
The left plot is the Matlab version and the right plot is the Python version; both are available at https://github.com/ZYunfeii/GreyWolfOptimization-GWO
