This is my own understanding; if anything is wrong, feel free to point it out in the comments! After all, I don't want to lead anyone astray! Haha!
Basic Principles
Overview:
PSO is a global optimization algorithm that iteratively updates candidate solutions using the best positions found so far by each individual particle and by the swarm as a whole, converging toward the global optimum.
Formulas you must understand:
v_j^{i}(k+1)=w(k)v_j^{i}(k)+\varphi_1 rand(0,a_1)(p_j^{i}-x_j^{i}(k))+\varphi_2 rand(0,a_2)(p_j^{g}-x_j^{i}(k))
x_j^{i}(k+1)=x_j^{i}(k)+v_j^{i}(k+1)
i=1,2,3,\dots,m;\quad j=1,2,3,\dots,n (m particles searching an n-dimensional space)
Note:
\varphi_1 rand(0,a_1)(p_j^{i}-x_j^{i}(k)) is the cognitive (individual) component.
\varphi_2 rand(0,a_2)(p_j^{g}-x_j^{i}(k)) is the social (swarm) component.
Understanding the parameters:
w is the inertia weight, controlling how much of its previous velocity a particle retains.
\varphi_1, \varphi_2 are learning factors weighting the relative contributions of the cognitive and social components.
rand(0,a_1), rand(0,a_2) inject randomness into the cognitive and social search directions, increasing the algorithm's diversity.
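To make the two update formulas concrete, here is a minimal one-dimensional update step for a single particle; all of the numeric values (position, velocity, personal and global bests) are made up purely for illustration, with a_1 = a_2 = 1 as in the code later in this post:

```python
import random

w, c1, c2 = 0.4, 2.0, 2.0   # inertia weight and the two learning factors
x, v = 1.0, 0.5             # current position and velocity (illustrative values)
p_i, p_g = 2.0, 3.0         # personal best and global best positions (illustrative)

r1 = random.uniform(0, 1)   # rand(0, a1) with a1 = 1
r2 = random.uniform(0, 1)   # rand(0, a2) with a2 = 1

# Velocity update: inertia term + cognitive pull toward p_i + social pull toward p_g
v = w * v + c1 * r1 * (p_i - x) + c2 * r2 * (p_g - x)
# Position update: step along the new velocity
x = x + v
```

Because both bests lie to the right of x, every term in the new velocity is non-negative here, so the particle is guaranteed to move right; the random factors only scale how far.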
Code in Practice
Problem to solve: over the Cartesian region
x1: -3 to 12.1;
x2: 4.1 to 5.8,
the fitness (objective) is
y=21.5+x_1\sin(4\pi x_1)+x_2\sin(20\pi x_2)
and we seek its optimum (maximum).
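Before running PSO, it helps to probe this objective directly. The sketch below (not part of the original script) evaluates the function at random points within the stated bounds; the best value found is a crude baseline that a working PSO run should match or beat:

```python
import math
import random

def fitness(x1, x2):
    # The objective from the problem statement
    return 21.5 + x1 * math.sin(4 * math.pi * x1) + x2 * math.sin(20 * math.pi * x2)

random.seed(42)  # fixed seed so the sample is reproducible
# Pure random search over x1 in [-3, 12.1], x2 in [4.1, 5.8]
best = max(
    fitness(random.uniform(-3.0, 12.1), random.uniform(4.1, 5.8))
    for _ in range(10000)
)
print(best)  # a lower bound on the true maximum
```

The function is highly multimodal (the sin(20πx2) factor oscillates rapidly), which is exactly why a population-based method like PSO is a reasonable choice here.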
# coding: utf-8
import numpy as np
import math
import random
import matplotlib.pyplot as plt

pop_size = 100             # number of particles in the swarm
max_gen = 100              # number of iterations
dec_num = 2                # two decision variables (search-space dimension)
obj_num = 1                # a single objective value is returned
dec_min_val = (-3, 4.1)    # lower bounds of (x1, x2)
dec_max_val = (12.1, 5.8)  # upper bounds of (x1, x2)
w = 0.4                    # inertia weight
c1 = 2                     # cognitive learning factor
c2 = 2                     # social learning factor
pop_x = np.zeros((pop_size, dec_num))   # positions of all particles
pop_v = np.zeros((pop_size, dec_num))   # velocities of all particles
p_best = np.zeros((pop_size, dec_num))  # best position each particle has ever visited
g_best = np.zeros(dec_num)              # global best position
popobj = []                             # history of the best fitness per generation


def init_population(pop_size, dec_num, dec_min_val, dec_max_val, pop_x, pop_v, p_best):
    for i in range(pop_size):
        for j in range(dec_num):
            # random.uniform(s, e) draws a uniformly distributed float in [s, e]
            pop_x[i][j] = random.uniform(dec_min_val[j], dec_max_val[j])
            pop_v[i][j] = random.uniform(0, 1)
        p_best[i] = pop_x[i]  # each particle's historical best starts at its initial position


def fitness(s):
    # Objective (fitness) function: the score being maximized
    x1 = s[0]
    x2 = s[1]
    y = 21.5 + x1 * math.sin(4 * math.pi * x1) + x2 * math.sin(20 * math.pi * x2)
    return y


if __name__ == '__main__':
    init_population(pop_size, dec_num, dec_min_val, dec_max_val, pop_x, pop_v, p_best)
    temp = -1
    for i in range(pop_size):  # initialize the global optimum
        fit = fitness(p_best[i])
        if fit > temp:
            g_best = p_best[i].copy()  # copy, so later p_best updates cannot mutate g_best
            temp = fit
    print(fitness(g_best))
    for i in range(max_gen):
        for j in range(pop_size):
            # ---------------- Update particle velocity and position ----------------
            # Code counterpart of the PSO update formulas above
            pop_v[j] = w * pop_v[j] + c1 * random.uniform(0, 1) * (p_best[j] - pop_x[j]) + \
                       c2 * random.uniform(0, 1) * (g_best - pop_x[j])
            pop_x[j] = pop_x[j] + pop_v[j]  # position update
            for k in range(dec_num):  # clamp to avoid crossing the search bounds
                if pop_x[j][k] < dec_min_val[k]:
                    pop_x[j][k] = dec_min_val[k]
                if pop_x[j][k] > dec_max_val[k]:
                    pop_x[j][k] = dec_max_val[k]
            # ---------------- Update p_best and g_best ----------------
            if fitness(pop_x[j]) > fitness(p_best[j]):  # update the individual best
                p_best[j] = pop_x[j]
            if fitness(pop_x[j]) > fitness(g_best):     # update the global best
                g_best = pop_x[j].copy()  # copy to avoid aliasing a row of pop_x
        popobj.append(fitness(g_best))  # record this generation's best fitness
    print(fitness(g_best))
    # ---------------- Draw the result ----------------
    plt.figure(1)
    plt.title("PSO")
    plt.xlabel("iterators", size=14)
    plt.ylabel("fitness", size=14)
    t = list(range(max_gen))
    plt.plot(t, popobj, color='b', linewidth=3)
    plt.show()
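As a possible refinement (not part of the original article), the per-particle inner loop can be vectorized with NumPy, updating the whole swarm at once. This sketch uses the same parameters as above, but draws randomness from np.random.default_rng rather than the random module, so its results will not match the script's exactly:

```python
import numpy as np

rng = np.random.default_rng(0)       # fixed seed for reproducibility
pop_size, dec_num = 100, 2
w, c1, c2 = 0.4, 2.0, 2.0
lo = np.array([-3.0, 4.1])           # per-variable lower bounds
hi = np.array([12.1, 5.8])           # per-variable upper bounds

pop_x = rng.uniform(lo, hi, size=(pop_size, dec_num))
pop_v = rng.uniform(0, 1, size=(pop_size, dec_num))
p_best = pop_x.copy()

def fitness(x):
    # Vectorized objective: x has shape (pop_size, 2), result has shape (pop_size,)
    return 21.5 + x[:, 0] * np.sin(4 * np.pi * x[:, 0]) \
                + x[:, 1] * np.sin(20 * np.pi * x[:, 1])

for _ in range(100):
    g_best = p_best[np.argmax(fitness(p_best))]   # current global best position
    r1 = rng.uniform(0, 1, size=(pop_size, dec_num))
    r2 = rng.uniform(0, 1, size=(pop_size, dec_num))
    # Velocity and position updates for the whole swarm at once
    pop_v = w * pop_v + c1 * r1 * (p_best - pop_x) + c2 * r2 * (g_best - pop_x)
    pop_x = np.clip(pop_x + pop_v, lo, hi)        # enforce the search bounds
    improved = fitness(pop_x) > fitness(p_best)   # boolean mask of improved particles
    p_best[improved] = pop_x[improved]

print(fitness(p_best).max())
```

Replacing the Python-level loop over 100 particles with array operations is noticeably faster and makes the correspondence between the code and the vector form of the update equations more direct.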