NSGA and NSGA-II

1 NSGA

1.1 Classical multi-objective optimization methods

  1. Use a weight vector to convert the multi-objective problem into a single-objective one: $Z = \sum_{i = 1}^{N} w_i f_i(x)$, where $w_i$ is the importance of objective $i$ and $f_i(x)$ is the objective function.

Giving all objectives equal weights leads to conflicts between them; in practice the weights have to be adjusted, lowering the priority of some objectives, so that the formulation matches the real requirements.

  2. Distance function:

The distance function is similar to objective weighting; the difference is that the distance function requires a target (demand level) for every objective function, whereas the weighting method requires a relative importance to be assigned to every objective.

  3. Min-max formulation: $\min F(x) = \max_j [Z_j(x)]$ (see the sketch after this list)
    1. Suitable when the objectives have equal priority.
    2. Can be combined with dimensionless weights to change the priorities.
    3. Can be combined with a vector of demand levels.
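A quick illustration of the min-max formulation, as a minimal sketch: the two objectives below are placeholders chosen for this example (they also reappear as the test problem in section 2.4), not definitions taken from the text.

```python
import numpy as np

# Two toy objectives, both to be minimized (placeholders for this illustration).
def z1(x):
    return x ** 2

def z2(x):
    return (x - 2) ** 2

x = np.linspace(-2, 4, 601)            # grid of candidate solutions
worst = np.maximum(z1(x), z2(x))       # F(x) = max_j Z_j(x)
print("min-max solution x =", x[np.argmin(worst)])   # ~1.0, where the two objectives intersect
```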

1.2 Drawbacks of converting multiple objectives into a single objective

  • Single-objective optimization can guarantee a Pareto-optimal solution, but the result is only a single point.
  • If some objectives are noisy or the variable space is discontinuous, these methods may not work effectively. Some of them are also expensive, because they require knowledge of each objective's individual optimum before the vector optimization.

1.3 Weight-vector illustration

(figure: the two objectives f11 and f12 plotted against x)

  1. With weight vector $(0.5, 0.5)$, both $f_{11}$ and $f_{12}$ should be kept small at the same time, so the solution is $x = 1$.
  2. With weight vector $(1, 0)$, more emphasis is placed on making $f_{11}$ small, so the solution is $x = 0$.
  3. With weight vector $(0, 1)$, more emphasis is placed on making $f_{12}$ small, so the solution is $x = 2$.

To obtain one of the particular Pareto-optimal solutions between $0$ and $2$ in the figure, the corresponding weights must be known, but the weights for that interval are not easy to obtain; a small numeric check is sketched below.
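A small numeric check of the three cases above. The original figure is unavailable, so the objective functions are assumed here, for illustration only, to be $f_{11}(x) = x^2$ and $f_{12}(x) = (x-2)^2$ (the same test problem as in section 2.4); this assumption reproduces the quoted solutions.

```python
import numpy as np

# Assumed objectives (the figure that defines f11 and f12 is not available).
f11 = lambda x: x ** 2
f12 = lambda x: (x - 2) ** 2

x = np.linspace(-1, 3, 4001)
for w in [(0.5, 0.5), (1.0, 0.0), (0.0, 1.0)]:
    Z = w[0] * f11(x) + w[1] * f12(x)      # weighted-sum scalarization
    print(f"weights {w}: minimizer x = {x[np.argmin(Z)]:.2f}")
# weights (0.5, 0.5): minimizer x = 1.00
# weights (1.0, 0.0): minimizer x = 0.00
# weights (0.0, 1.0): minimizer x = 2.00
```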

1.4 The NSGA method

1.4.1 Procedure

(figure: NSGA flowchart)

1.4.2 Key steps

  1. With $f_1$ and $f_2$ both being minimized, if $f_1(i) < f_1(j)$ and $f_2(i) < f_2(j)$, then $i$ dominates $j$. (This is the strict form used by the code below; the general Pareto-dominance definition only requires $i$ to be no worse in every objective and strictly better in at least one.)
# Non-dominated sorting: split the population into successive non-dominated fronts
def non_sort(matrix):
    realValue = binaryDecode(matrix)
    # realValue = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I']
    # y1[j] < y1[i] and y2[j] < y2[i] means that j dominates i
    ranks = []  # list of fronts
    y1 = np.array(f1(realValue))
    # y1 = np.array([2, 3, 3, 4, 4, 5, 5, 5, 6])
    y2 = np.array(f2(realValue))
    # y2 = np.array([7.5, 6, 7.5, 5, 6.5, 4.5, 6, 7, 6.5])
    while True:
        rank = []     # elements belonging to the current front
        indexs = []   # indices of the current non-dominated elements
        for i in range(y1.size):
            nondominated = True  # assume the current element is not dominated by any other element
            for j in range(y1.size):
                if y1[j] < y1[i] and y2[j] < y2[i] and i != j:
                    nondominated = False     # i is dominated by j
                    break
            if nondominated:
                rank.append(realValue[i])
                indexs.append(i)

        if len(rank) > 0:
            ranks.append(rank)

        # Remove the elements that have already been assigned to a front
        y1 = np.delete(y1, indexs)
        y2 = np.delete(y2, indexs)
        realValue = np.delete(realValue, indexs)

        if y1.size == 0:
            break

    return ranks
  2. Compute the niche count:
    $$Sh(d_{ij}) = \begin{cases} 1 - \left(\dfrac{d_{ij}}{\sigma_{share}}\right)^2 & d_{ij} < \sigma_{share} \\ 0 & \text{otherwise} \end{cases}$$

A niche count is calculated by adding the above sharing-function values over all other individuals in the current front:

$$nicheCount_i = \sum_{j \ne i} Sh(d_{ij})$$

# Niche count of each element in the current front (sharing in objective space)
def niche(rank):
    l = len(rank)
    counts = [0] * l
    y1 = f1(np.array(rank))
    y2 = f2(np.array(rank))
    # x = np.array(rank)   # (decode like this for sharing in parameter space)
    for i in range(l):
        for j in range(l):
            if i == j:
                continue
            # Euclidean distance between i and j in objective space
            dij = math.pow((y1[i] - y1[j]) ** 2 + (y2[i] - y2[j]) ** 2, 0.5)
            # dij = math.fabs((x[i] - x[j]))   # distance in parameter space instead
            if dij < shareDistance:
                counts[i] += 1 - (dij / shareDistance) ** 2
    return counts

Sharing can be done either in parameter (decision-variable) space or in objective space.

  3. Compute each individual's fitness value:
    $$fitness = \frac{dummyFitness}{nicheCount}$$

def fitness(ranks):
    dummyFitness = 1     # dummy fitness assigned to the first (best) front
    fits = []
    for rank in ranks:
        counts = niche(rank)
        for count in counts:
            if count == 0:
                # no neighbours within sigma_share: keep the dummy fitness unchanged
                fit = dummyFitness
            else:
                fit = dummyFitness / count   # shared fitness
            fits.append(fit)
        dummyFitness = dummyFitness - 1/len(ranks)   # lower the dummy fitness for the next (worse) front
    return fits
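The snippets above rely on a few helpers that are not shown (binaryDecode, f1, f2, shareDistance, and the numpy/math imports). A minimal set of stand-ins, assuming the same test problem as section 2.4 and a simple binary-chromosome decoding, could look like the following sketch; the decoding scheme and the value of shareDistance are assumptions for illustration only.

```python
import math
import numpy as np

shareDistance = 0.5            # sigma_share, chosen by hand (see the notes in 1.5)

def f1(x):
    return np.asarray(x) ** 2              # first objective, minimized

def f2(x):
    return (np.asarray(x) - 2) ** 2        # second objective, minimized

def binaryDecode(matrix, lb=-2.0, ub=3.0):
    """Decode each row of a 0/1 chromosome matrix into a real value in [lb, ub]."""
    matrix = np.asarray(matrix)
    n_bits = matrix.shape[1]
    weights = 2 ** np.arange(n_bits - 1, -1, -1)        # most significant bit first
    ints = matrix @ weights
    return lb + ints * (ub - lb) / (2 ** n_bits - 1)

# One NSGA evaluation pass using non_sort() and fitness() from above:
# rank the population into fronts, then assign shared fitness values.
population = np.random.randint(0, 2, size=(20, 16))     # 20 chromosomes of 16 bits
ranks = non_sort(population)
fits = fitness(ranks)
print("front sizes:", [len(r) for r in ranks])
print("fitness values:", np.round(fits, 3))
```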

1.5 Notes

  1. $\sigma_{share}$ has to be specified; a poorly chosen value makes convergence difficult.
  2. Sharing in parameter space tends to cause premature convergence; sharing in objective space keeps the objective values diverse, but the decision variables may lack diversity.
  3. If the population is too small there are too few non-dominated solutions; if it is too large, premature convergence is likely.
  4. As the $rank$ increases, $dummyFitness$ decreases. $dummyFitness$ is specified by the user.
  5. In NSGA-II, real-valued encoding for the genetic algorithm works better than binary encoding.

2 NSGA-II

2.1 Drawbacks of NSGA

  1. The non-dominated sorting has a high computational complexity.
  2. There is no elitism.
  3. A suitable sharing parameter $\sigma_{share}$ has to be specified.

2.2 Changes of NSGA-II relative to NSGA

  1. Uses fast non-dominated sorting (with elitism) to achieve faster convergence.
  2. Uses the crowding distance so that the Pareto solutions stay diverse.

2.3 NSGA-II procedure

  1. Merge the parent and offspring populations.
    (figures)

  2. For each individual $p$ in the merged population, compute two values: $n_p$ (the number of individuals that dominate $p$) and $Set_p$ (the set of individuals that $p$ dominates); a plain-Python sketch of this bookkeeping follows the list.
    (figure)

  3. Perform non-dominated sorting on the merged population.
    (figure)

  4. Compute the crowding distance.
    $i <_n j$ means that $i$ crowding-dominates $j$; $i <_n j$ holds if at least one of the following conditions is satisfied:
    Condition 1: $i_{rank} < j_{rank}$
    Condition 2: $i_{rank} = j_{rank}$ and $i_{distance} > j_{distance}$
    Sort the individuals of the current front by each objective function.
    (figure)
    For two objectives, the crowding distance of an individual is half the perimeter of the rectangle spanned by its two neighbours.
    1. Not every crowding distance has to be computed: process the fronts in rank order, and once N individuals have been collected the crowding-distance computation can stop.
    2. All objective functions must be normalized.
    3. The first and the last individual in the sorted order get an infinite crowding distance, because they only have a neighbour on one side.
    Pseudocode:
    (figure)
    $\mathcal{I}[i].m$: the value of the $m$-th objective function of the $i$-th individual in the set.
    $f_m^{max}$: the maximum of the $m$-th objective function over the population.
    $f_m^{min}$: the minimum of the $m$-th objective function over the population.
    A plain-Python sketch of this computation is given after this list.

  5. Individuals with the same $rank$ are compared by crowding distance using a tournament selection operator (see the sketch after this list).

    • Decide the number of individuals N to select. (Binary tournament selection draws K = 2 individuals per tournament.)
    • Randomly draw K individuals from the population (each with equal probability) and, based on their fitness values, add the individual with the best fitness to the next-generation set.
    • Repeat the previous step (as many times as the population size) until the new population reaches size N.
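A minimal plain-Python sketch of the fast non-dominated sort used in steps 2 and 3, independent of the geatpy-based code in section 2.4. Here `objs` is assumed to be an (N, M) numpy array of objective values, all of which are minimized.

```python
import numpy as np

def fast_non_dominated_sort(objs):
    """Return a list of fronts (lists of row indices into objs), best front first."""
    n = objs.shape[0]
    S = [[] for _ in range(n)]        # S_p: indices of individuals dominated by p
    n_dom = np.zeros(n, dtype=int)    # n_p: number of individuals that dominate p
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if p == q:
                continue
            # p dominates q: no worse in every objective, strictly better in at least one
            if np.all(objs[p] <= objs[q]) and np.any(objs[p] < objs[q]):
                S[p].append(q)
            elif np.all(objs[q] <= objs[p]) and np.any(objs[q] < objs[p]):
                n_dom[p] += 1
        if n_dom[p] == 0:
            fronts[0].append(p)       # p belongs to the first (best) front
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in S[p]:
                n_dom[q] -= 1
                if n_dom[q] == 0:     # q is only dominated by earlier fronts
                    nxt.append(q)
        i += 1
        fronts.append(nxt)
    return fronts[:-1]                # drop the trailing empty front
```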
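A minimal stand-alone sketch of steps 4 and 5: per-objective crowding distance with the boundary individuals set to infinity, and a binary tournament using the crowded-comparison operator $<_n$. The names and signatures here are illustrative, not taken from the geatpy code in section 2.4; `rank[p]` is the front index of individual `p` (for example from `fast_non_dominated_sort`) and `dist[p]` its crowding distance.

```python
import random
import numpy as np

def crowding_distance(objs, front):
    """Crowding distance of each individual in one front (front = list of row indices)."""
    dist = {p: 0.0 for p in front}
    for m in range(objs.shape[1]):
        order = sorted(front, key=lambda p: objs[p, m])     # sort the front by objective m
        f_min, f_max = objs[order[0], m], objs[order[-1], m]
        dist[order[0]] = dist[order[-1]] = np.inf           # boundary individuals
        if f_max == f_min:
            continue                                        # degenerate objective, avoid division by zero
        for k in range(1, len(order) - 1):
            # normalized distance between the two neighbours along objective m
            dist[order[k]] += (objs[order[k + 1], m] - objs[order[k - 1], m]) / (f_max - f_min)
    return dist

def crowded_compare(i, j, rank, dist):
    """Crowded-comparison operator: True if i <_n j, i.e. i is preferred over j."""
    return rank[i] < rank[j] or (rank[i] == rank[j] and dist[i] > dist[j])

def binary_tournament(candidates, rank, dist, n_select):
    """Binary tournament selection (K = 2) based on the crowded-comparison operator."""
    chosen = []
    for _ in range(n_select):
        i, j = random.sample(candidates, 2)        # draw two distinct candidates
        chosen.append(i if crowded_compare(i, j, rank, dist) else j)
    return chosen
```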

2.4 NSGA-II code

$$f_1(x) = x^2, \qquad f_2(x) = (x - 2)^2, \qquad f(x) = \min \{f_1(x)\} \;\text{and}\; \{f_2(x)\}$$

import torch
from geatpy.core.ndsortESS import ndsortESS    # non-dominated sorting
from geatpy.operators.mutation.Mutpolyn import Mutpolyn      # polynomial mutation
from geatpy.operators.recombination.Recsbx import Recsbx     # simulated binary crossover
from geatpy.visualization.PointScatter import PointScatter   # plotting

import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler   # data normalization


def func1(population):
    res = population[:, 0] ** 2          # f1(x) = x^2
    return res


def func2(population):
    res = (population[:, 0] - 2) ** 2    # f2(x) = (x - 2)^2
    return res

# Dominance check: returns True if x1 dominates x2 (both objectives minimized)
def is_domination(x1, x2):
    weak_domination = False       # x1 is no worse than x2 in every objective
    absolute_condition = False    # x1 is strictly better than x2 in at least one objective

    if x1[0] <= x2[0] and x1[1] <= x2[1]:
        weak_domination = True
    if x1[0] < x2[0] or x1[1] < x2[1]:
        absolute_condition = True

    return weak_domination and absolute_condition

def rank_element(population, k = 1, return_n_level = False):
    levels, n_level = ndsortESS(population)   # levels: front index of each individual; n_level: number of fronts
    if return_n_level:
        return n_level
    levels = levels.astype('i4')              # float -> int
    pd_levels = pd.DataFrame(levels)          # numpy -> pandas
    rank_group = pd_levels.groupby(0).groups  # group the indices by front
    k_element = rank_group[k].values          # all individuals in front k
    return k_element

def crowded_distance(func, k):
    func_copy = func.copy()
    size = func_copy.shape[0]

    indexs = func_copy[:, -1]            # individual indices
    cuboid = np.full(size, np.inf)       # crowding distances (boundary points stay infinite)

    # Normalize the objective values
    scaler = MinMaxScaler()
    normal_func = scaler.fit_transform(func_copy[:, 0:2])
    # Columns: f1, f2 (normalized), index, cuboid
    func_extend = np.column_stack((normal_func, indexs, cuboid))
    func_extend = func_extend[np.argsort(func_extend[:, 0])]   # sort by the first objective

    for i in range(1, size - 1):
        func_extend[i, -1] = np.abs(func_extend[i + 1, 0] - func_extend[i - 1, 0]) + np.abs(
            func_extend[i + 1, 1] - func_extend[i - 1, 1])

    cuboid = torch.tensor(func_extend[:, -1])
    indices = cuboid.topk(k, dim=0, largest=True, sorted=True)[1]   # keep the k individuals with the largest distances
    survive = func_extend[indices, -2]   # indices of the surviving individuals
    return survive.astype('i4')



if __name__ == '__main__':
    size = 50          # population size
    dim = 1            # number of decision variables
    lb = -2            # lower bound of the decision variable
    ub = 3             # upper bound of the decision variable
    iteration = 500    # number of generations
    np.random.seed(1)
    xvars_parents = xvars_children = np.random.uniform(lb, ub, (size, dim))
    PS = PointScatter(2, True, True, "NSGA-II", ['F1', 'F2'], saveName=None)
    for i in range(iteration):
        pop = np.row_stack((xvars_parents, xvars_children))        # merge parents and offspring
        func = np.column_stack((func1(pop), func2(pop)))           # objective values of the merged population

        n_level = rank_element(func, return_n_level=True)
        xvars_parents = np.copy(xvars_children)
        xvars_children = np.full((1, dim), np.nan)                 # NaN placeholder row, removed later

        # Copy whole fronts into the next population until the next front no longer fits
        for j in range(1, n_level + 1):
            F = rank_element(func, j)
            # func_F columns: f1, f2, index
            func_F = np.column_stack((func[F, :], F))
            if np.sum(~np.isnan(xvars_children)) + len(F) > size:
                break
            survive_pop = pop.copy()
            xvars_children = np.row_stack((xvars_children, survive_pop[F, :]))

        # Fill the remaining slots from the last (partially fitting) front by crowding distance
        rest_size = size - np.sum(~np.isnan(xvars_children))
        survive = crowded_distance(func_F, rest_size)
        survive_pop = pop.copy()
        xvars_children = np.row_stack((xvars_children, survive_pop[survive, :]))
        xvars_children = xvars_children[1:, :]   # drop the leading NaN placeholder row

        recsbx = Recsbx(XOVR=1 - 1/size, Half_N=False, n=20, Parallel=True)   # simulated binary crossover
        xvars_children = recsbx.do(xvars_children)

        mutation = Mutpolyn(Pm=1/size, DisI=20)   # polynomial mutation
        # FieldDR rows: lower bound, upper bound, 0 = continuous variable
        xvars_children = mutation.do(Encoding='RI', OldChrom=xvars_children, FieldDR=np.array([[lb], [ub], [0]]))

    func = np.column_stack((func1(xvars_children), func2(xvars_children)))    # objective values of the final population
    right_label = 'size: ' + str(size) + '\n' + 'iteration: ' + str(iteration)
    PS.add(func, color = 'green', label=right_label)
    PS.draw()
        

2.5 Results

(figure: final objective values F1 vs. F2 of the evolved population)

NSGA-II (Non-dominated Sorting Genetic Algorithm II) is a popular multi-objective optimization algorithm that is widely used in fields such as engineering, finance, and biology. It is an extension of the standard genetic algorithm and uses a non-dominated sorting technique to rank solutions based on their dominance relationships. To implement NSGA-II in Python, we can also use the DEAP (Distributed Evolutionary Algorithms in Python) library. DEAP provides a comprehensive set of tools for implementing various evolutionary algorithms, including NSGA-II. Here is a simple example of how to use DEAP to implement NSGA-II in Python:

```python
import random
from deap import base, creator, tools, algorithms

# Define the fitness (minimize two objectives)
creator.create("FitnessMin", base.Fitness, weights=(-1.0, -1.0))
# Define the individual class (a list of floats)
creator.create("Individual", list, fitness=creator.FitnessMin)

# Initialize the toolbox
toolbox = base.Toolbox()

# Range of the decision variables
BOUND_LOW, BOUND_UP = 0.0, 1.0

# Evaluation function (two objectives)
def evaluate(individual):
    return individual[0], individual[1]

# Register the evaluation function, the attribute generator, and the individual/population builders
toolbox.register("evaluate", evaluate)
toolbox.register("attr_float", random.uniform, BOUND_LOW, BOUND_UP)
toolbox.register("individual", tools.initRepeat, creator.Individual, toolbox.attr_float, 2)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)

# Genetic operators
toolbox.register("mate", tools.cxSimulatedBinaryBounded, low=BOUND_LOW, up=BOUND_UP, eta=20.0)
toolbox.register("mutate", tools.mutPolynomialBounded, low=BOUND_LOW, up=BOUND_UP, eta=20.0, indpb=1.0/2)
toolbox.register("select", tools.selNSGA2)

def main(seed=0):
    random.seed(seed)

    # Initialize and evaluate the population
    pop = toolbox.population(n=100)
    fitnesses = [toolbox.evaluate(ind) for ind in pop]
    for ind, fit in zip(pop, fitnesses):
        ind.fitness.values = fit

    # Run the algorithm (eaMuPlusLambda returns the final population and a logbook)
    pop, _log = algorithms.eaMuPlusLambda(pop, toolbox, mu=100, lambda_=100,
                                          cxpb=0.9, mutpb=0.1, ngen=100, verbose=False)

    # Print the final population
    print("Final population:")
    for ind in pop:
        print(ind, ind.fitness.values)

if __name__ == "__main__":
    main()
```

This code defines a simple two-objective optimization problem and uses NSGA-II to find the Pareto front. The `creator` module is used to define the fitness and individual classes. The `toolbox` is used to register the genetic operators and the evaluation function. Finally, the `algorithms` module is used to run the algorithm and obtain the final population.