[Project Summary] Paper Reproduction and Improvement: An Assortment Optimization Algorithm for General Choice Models (Research @ Revenue Management)

  • Paper title: Assortment Optimization Under General Choice
  • Chinese title: 一般选择下的产品组合优化
  • Paper download link: SSRN

Background

This post builds on my earlier annotated reading of the paper above, published in a previous blog post: https://caoyang.blog.csdn.net/article/details/121246506. The problem setting: you have many products to sell, and customers arrive wanting to buy. You could simply show them every product, but offering only a subset may yield higher revenue, because the probability that a customer chooses any given product depends closely on the set of products you display (called the offer set). Finding the optimal offer set is the problem that choice models aim to solve.

Familiarity with revenue-management research methods will help in understanding the paper. The algorithm Jagabathula proposes in it is in fact quite simple: start the search from an empty product subset and, in each step, add, delete, or exchange one product so as to increase expected revenue the most. Yet even for the simplest families of choice models, proving the convergence of this simple algorithm (i.e., whether it reaches the global optimum) requires extremely involved arguments.

This post involves no theoretical derivation. It builds further improvements purely on the paper above and its three related versions (the link above gives the 2016 version, which differs substantially from the version Jagabathula uploaded in 2014; in addition, Jagabathula wrote a paper with the same title in 2011 whose algorithm also differs slightly — Section 2.1 of this post covers this in detail; all three versions can be downloaded from the link below). I propose two heuristic improvements and test their effectiveness in simulation experiments.

Link: https://pan.baidu.com/s/1F0eC4ZgQlMzjxUWjpeBQPw 
Access code: xap2

The earlier annotated post (https://caoyang.blog.csdn.net/article/details/121246506) was prepared for an in-class presentation; this post is the report submitted for the final. Although revenue management is not my main research area (I rarely work on OM or OP these days), I find the ideas interesting enough to share 😀. The simulation code for this post is available from my GitHub repository, and a backup copy is included in Appendix 3. The code logic is clear and thoroughly commented; read alongside this post, it should not be hard to follow.

PS: At last Saturday's team time trial I ran 5000 m in 19:59.73, finally breaking the 20-minute barrier — a near-miraculous breakthrough for someone who never broke 4:10 in the undergraduate fitness test. Honestly, nothing is impossible; believe in your own ability and don't be bound by conventional expectations. Although most marathons were cancelled or postponed this year, that is not entirely bad for me: a casual 10 km this afternoon came in at 42:30, and I feel I can now hold an aerobic pace of about 4:15/km. Next year I may take a shot at the 90-minute barrier in the half marathon; had I actually raced in Zhoushan this year, even 95 minutes would have been a stretch. Train for the long haul; no need to rush.



1 Introduction

For this course I presented the ADXOpt assortment-optimization algorithm proposed by Srikanth Jagabathula in 2014. The algorithm does not depend on the parametric structure of the choice model: it applies to any general choice model for which the expected revenue of a given offer set can be computed efficiently. Jagabathula proves convergence properties and time complexity (Table 1) for the MNL model, the robust MNL model, the nested logit model (a two-level nest with dissimilarity parameter less than 1 and only two nests, the first containing only the no-purchase option and the second containing all products), and the mixed logit model (customer classes differ only in their no-purchase valuation and share the same product valuations), and concludes from simulations that the algorithm converges to the global optimum in the vast majority of cases.

[Image: Table 1]

ADXOpt is essentially a refinement of the greedy algorithm; its logic is simple and easy to implement, so I reproduced the simulations from the original paper using its parameters to verify the reported numerical results. Inspired while coding by Jagabathula's related work from 2011 and 2016, I adopt a unified logical framework that generalizes the algorithm further. Moreover, since Jagabathula's theory and simulations cover only the simplest configuration of the two-level nested logit model, I also apply the generalized framework to more general two-level nested logit models and compare it experimentally with the original ADXOpt algorithm. Based on these experimental results, I offer new observations on assortment-optimization algorithms for general choice models. The default meanings of the mathematical notation used in this post are listed in Table 2; the code is in the attached materials or my GitHub repository.

[Image: Table 2]


2 Research Review and Reflections

As far as I can verify, the paper's algorithmic framework evolved over three versions: in 2011 Jagabathula et al. proposed the GreedyOpt algorithm, in 2014 Jagabathula proposed the ADXOpt algorithm, and in 2016 Jagabathula refined ADXOpt. To distinguish the 2014 and 2016 versions, they are denoted ADXOpt2014 and ADXOpt2016 below; the paper presented in class corresponds to the 2014 version, i.e., ADXOpt2014. This section briefly reviews the similarities and differences among the three versions' logic, then discusses their shortcomings and the concrete improvements I make.

2.1 Overview and Analysis of Jagabathula's Three Algorithm Versions

Pseudocode for all three versions is given in Appendix 2. All of them refine the greedy policy of the trivial greedy algorithm (hereafter NaiveGreedy). NaiveGreedy searches for the optimal offer set in a very simple way: initialize the candidate solution as the empty set, and in each iteration add the single product that increases revenue the most, terminating when no addition improves revenue or the candidate reaches the offer-set capacity.
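The loop just described can be sketched in a few lines. Here `revenue` is a hypothetical callback standing in for the model's expected-revenue computation (it must accept the empty set), and `capacity` is the offer-set capacity limit $C$; this is an illustration of the idea, not code from the paper:

```python
def naive_greedy(products, revenue, capacity=None):
    """Plain greedy search: repeatedly add the single product that raises
    expected revenue the most; stop when no addition helps or the
    capacity limit is reached."""
    offerset = set()
    best = revenue(offerset)  # revenue of the empty offer set (usually 0)
    while capacity is None or len(offerset) < capacity:
        candidates = [(revenue(offerset | {p}), p)
                      for p in products if p not in offerset]
        if not candidates:
            break
        gain, product = max(candidates)
        if gain <= best:  # no single addition improves revenue
            break
        offerset.add(product)
        best = gain
    return offerset, best
```

For example, under an MNL model where all products are identical, the loop keeps adding products until the whole universe is offered.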

NaiveGreedy is guaranteed to terminate after $O(n)$ iterations, but even on the simplest MNL model it cannot guarantee convergence to the global optimum. Here is a counterexample:

  • Three products:
    $\mathcal{N}=\{1,2,3\}$

  • The first two products are identical in price and valuation:
    $p_1=p_2=p,\ p_3=p' \quad\text{and}\quad v_1=v_2=v,\ v_3=v'$

  • Assume the no-purchase valuation is $v_0=1$. If
    $\frac{pv}{1+v}\le \frac{p'v'}{1+v'}\le \frac{2pv}{1+2v}\quad \text{and}\quad \frac{pv+p'v'}{1+v+v'}\le\frac{2pv}{1+2v}$
    then $S^*=\{3\}$ when $C=1$ and $S^*=\{1,2\}$ when $C=2$.

  • The construction $p=2,\ p'=1.9,\ v=e^{0.5},\ v'=e^{0.7}$ satisfies these conditions:
    $1.245\le 1.270\le 1.535\quad \text{and}\quad 1.528\le 1.535$
    so NaiveGreedy clearly cannot converge to the optimal solution when $C=2$.
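The inequalities above are easy to verify numerically with the MNL revenue formula $\sum_i p_i v_i / (v_0 + \sum_i v_i)$; the snippet below is a quick sanity check of the construction, not code from the paper:

```python
import math

def mnl_revenue(prices, valuations, v0=1.0):
    """Expected revenue of an offer set under MNL:
    sum(p_i * v_i) / (v0 + sum v_i)."""
    return sum(p * v for p, v in zip(prices, valuations)) / (v0 + sum(valuations))

p, p2 = 2.0, 1.9
v, v2 = math.exp(0.5), math.exp(0.7)

r1  = mnl_revenue([p], [v])          # offer {1}:   ~1.245
r3  = mnl_revenue([p2], [v2])        # offer {3}:   ~1.270
r12 = mnl_revenue([p, p], [v, v])    # offer {1,2}: ~1.535
r13 = mnl_revenue([p, p2], [v, v2])  # offer {1,3}: ~1.528

# NaiveGreedy first adds product 3 (largest single-item revenue),
# then ends at {1,3} or {2,3}, missing the true optimum {1,2}.
assert r1 < r3 < r12 and r13 < r12
```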

To improve the convergence of the greedy algorithm, Jagabathula proposes refining the greedy adjustment performed in each iteration of NaiveGreedy. Specifically, the algorithm still initializes the candidate solution as the empty set, but each iteration applies to the current candidate the single local adjustment that increases revenue the most: adding a new product, deleting a product, or deleting one product while adding another (an exchange). To guarantee termination, each product may be moved out of the candidate set (counting both deletions and the removal half of exchanges) at most $b$ times. Table 3 compares the greedy policies of the three versions against NaiveGreedy. GreedyOpt does not consider deletions and prioritizes additions (meaning that if some legal addition improves revenue in an iteration, no other operation is considered in that iteration). ADXOpt2014 and ADXOpt2016 both consider additions, deletions, and exchanges; they differ in whether additions are prioritized. According to the theoretical results in the presented paper, for nested logit and mixed logit models with the special parameter structures, ADXOpt2014 converges to a local optimum even without exchanges (provided additions are prioritized).
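A minimal sketch of the neighborhood that these local adjustments define, ignoring the per-product removal bound $b$ and the other bookkeeping; `revenue` is again a hypothetical callback:

```python
def best_local_move(offerset, products, revenue, capacity=None):
    """Enumerate the three local moves (add one product, delete one,
    exchange one for one) and return the best neighbouring offer set.
    A sketch of the move enumeration only; the full algorithms also
    track how often each product has been removed (the bound b)."""
    outside = [p for p in products if p not in offerset]
    neighbours = []
    if capacity is None or len(offerset) < capacity:
        neighbours += [offerset | {p} for p in outside]      # add
    neighbours += [offerset - {q} for q in offerset]         # delete
    neighbours += [(offerset - {q}) | {p}                    # exchange
                   for q in offerset for p in outside]
    return max(neighbours, key=revenue, default=offerset)
```

Starting from the stalled set {1, 3} of the earlier MNL counterexample, the best move is the exchange yielding {1, 2}, the true optimum for $C=2$.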

[Image: Table 3]

Jagabathula proves that ADXOpt2014 has good convergence properties and verifies in simulation that, for the nested logit and mixed logit models where convergence to the global optimum cannot be proved, it still reaches the global optimum in 98.5% and 98.7% of cases respectively; moreover, in the cases where it fails, the realized-revenue gap between the local optimum found and the global optimum is very small. On this basis he argues for the practicality of ADXOpt2014. Nevertheless, I see the following problems in this research on ADXOpt2014:

  1. The theory covers only very simply configured nested logit and mixed logit models; once the parameter settings of these two models are generalized even slightly, ADXOpt2014 may fail to converge even to a local optimum.
  2. The simulations consider only small product totals, $n=10$ and $n=15$. Note that the solution space the algorithm searches has size $O(n^2Cb)$ while the full enumeration space has size $O(2^n)$; for small $n$ the gap between the two is modest. The randomization ranges of the model parameters are also conservative (product valuations drawn from $[0,10]$, product prices from $[100,150]$), which makes it hard for the revenue gap between local and global optima to open up. The simulation-based conclusions that the algorithm almost always converges to the global optimum and otherwise yields high-quality local optima are therefore not fully convincing.
  3. The 2011 GreedyOpt paper mentions that the initial candidate set can be chosen by enumerating the family of product subsets of size $S$, yet in the original paper and subsequent work Jagabathula only ever considers $S=0$, i.e., starting from the empty set. This is essentially trading time for accuracy; perhaps Jagabathula judged the choice of $S$ not worth studying. Below I show how this observation can be turned into a reversed form of the algorithm.
  4. The 2014 paper mentions the block-level property of optimal solutions in the nested logit model, which suggests extending the algorithm with block-level adjustments — again essentially trading time for accuracy. Jagabathula does not analyze this block-adjustment strategy in depth in the original paper or later work. My simulations below show that introducing block-level adjustments significantly improves convergence on general two-level nested logit models.

2.2 Improvements and Contributions of This Work

In summary, I believe Jagabathula's algorithms still have room for improvement. Concretely, this post proposes two improvements: a reversed variant of the algorithm and a block-level adjustment strategy. This section explains the motivation and significance of each.

Regarding the choice of the initial candidate-set size $S$ from point 3 of Section 2.1: if $S=0$, the search starts from the empty set, matching the logic of all three of Jagabathula's versions. If $S$ is a moderate number, there are $C_n^S$ initial candidates, which can be a very large number and makes the algorithm very expensive; that case is not considered here either. But what if $S=C$, the largest possible value (the offer-set capacity)? This is where it gets interesting: the search starts from the full set, and if the roles of additions and deletions are swapped, we obtain reversed forms of all three algorithms. The exact algorithm configurations are listed in Table 4.

[Image: Table 4]

To illustrate the point of this reversed form, here is a simple example:

  • Consider a special case of the MNL model in which all products share the same valuation and price, with no-purchase valuation $v_0>0$:
    $v_1=v_2=\dots=v_n=v\quad\text{and}\quad p_1=p_2=\dots=p_n=p$

  • The optimal offer set is then clearly the full product set, since
    $\frac{npv}{v_0+nv}>\frac{kpv}{v_0+kv}\quad\forall\, 1\le k<n$
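That the revenue $\frac{kpv}{v_0+kv}$ is increasing in $k$ is easy to check numerically; the snippet below is an illustration, not a proof:

```python
def identical_mnl_revenue(k, p=1.0, v=1.0, v0=1.0):
    """Expected revenue when k identical products (price p, valuation v)
    are offered under MNL with no-purchase valuation v0."""
    return k * p * v / (v0 + k * v)

# Revenue is strictly increasing in k, so the full assortment is optimal.
revenues = [identical_mnl_revenue(k, p=2.0, v=0.5) for k in range(1, 11)]
assert all(a < b for a, b in zip(revenues, revenues[1:]))
```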

Although I have not proved this rigorously, my conjecture is that when the model parameters are relatively "flat", the optimal offer set tends to contain "many" products, and in that case the reversed greedy search has a relative advantage. Indeed, while reproducing the original paper's simulations, I found that in most cases the reversed forms of the three algorithms run faster than the forward forms, and the reversed algorithms do not affect the paper's two conclusions that the algorithm almost always converges to the global optimum and otherwise yields high-quality local optima.

Besides the reversed form, I introduce a block-level adjustment strategy into ADXOpt2014 and simulate it on more general two-level nested logit models. Block-level adjustment means each iteration may add (delete, exchange) a block of up to $s_{\rm block}$ products; it is inspired by the block-level property of optimal nested logit solutions that Jagabathula points out in the original paper, and it is again a method that trades time for accuracy. In the simulations I generalize the two-level nested logit model in the following ways:

  1. more nests;

  2. nest dissimilarity parameters allowed to exceed 1;

  3. the same product allowed to appear in multiple nests;

  4. a no-purchase option with nonzero valuation allowed in every nest.

For this more general two-level nested logit setting, the fraction of instances in which ADXOpt2014 converges to the global optimum can fall below 80%; the simulation results in this post show that the block-level adjustment strategy effectively raises that fraction.

One final remark: as Table 4 shows, all of these algorithms can be implemented within a single code framework — changing the configuration options (whether exchanges are allowed, whether additions are prioritized, the block-adjustment size, and so on) yields new algorithms. This post, however, focuses its experimental analysis on the three configurations GreedyOpt, ADXOpt2014, and ADXOpt2016.
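The single-framework idea can be sketched as plain configuration dictionaries. The flag names mirror those in the repository's config.py (do_add, do_add_first, do_delete, do_exchange), but the per-algorithm values below are illustrative rather than an exact copy of Table 4:

```python
# Each algorithm = one set of switches on the same search loop.
# "forward" starts from the empty set; "backward" from the full set.
ALGORITHM_CONFIGS = {
    'greedyopt_forward': {
        'start_from_full_set': False,
        'do_add': True, 'do_add_first': True,   # additions take priority
        'do_delete': False, 'do_exchange': True,
    },
    'adxopt2014_forward': {
        'start_from_full_set': False,
        'do_add': True, 'do_add_first': False,  # all moves compete equally
        'do_delete': True, 'do_exchange': True,
    },
    'adxopt2016_forward': {
        'start_from_full_set': False,
        'do_add': True, 'do_add_first': True,   # additions prioritized again
        'do_delete': True, 'do_exchange': True,
    },
}

def reverse_variant(config):
    """Derive the backward variant: start from the full product set and
    swap the roles of addition and deletion."""
    backward = dict(config)
    backward['start_from_full_set'] = not config['start_from_full_set']
    backward['do_add'], backward['do_delete'] = config['do_delete'], config['do_add']
    return backward
```

This is why a single implementation can cover all six algorithms of Table 4: the backward variants are derived rather than written by hand.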


3 Simulations and Results

3.1 Experimental Environment and Simulation Configuration

For ease of code management, readability, and extensibility, the simulations are written in Python 3.7; the full code is in the submitted attachment or my GitHub repository. The choice models are defined in src/choice_model.py and the unified algorithm framework in src/algorithm.py; Appendix 1 describes the code files in detail.

Given compute and time constraints, the product total is fixed at $n=10$, product prices are drawn uniformly from $[100,150]$, and product valuations (including the no-purchase valuation) uniformly from $[0,10]$. For each model configuration, 1000 model instances are generated to test the different algorithm configurations; global optima are obtained by brute-force enumeration.
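A sketch of this setup for the MNL case — random instance generation plus brute-force enumeration of the global optimum. Function and field names here are my own, not the repository's:

```python
import itertools
import random

def generate_mnl_instance(n=10, seed=None):
    """Draw one random MNL instance with the Section 3.1 settings:
    prices ~ U[100, 150], valuations (incl. no-purchase) ~ U[0, 10]."""
    rng = random.Random(seed)
    return {
        'prices': [rng.uniform(100.0, 150.0) for _ in range(n)],
        'valuations': [rng.uniform(0.0, 10.0) for _ in range(n)],
        'v0': rng.uniform(0.0, 10.0),
    }

def bruteforce_optimum(inst):
    """Enumerate all 2^n - 1 non-empty offer sets and return the best
    one with its expected revenue (feasible for n = 10)."""
    n = len(inst['prices'])
    def revenue(S):
        tv = sum(inst['valuations'][i] for i in S)
        return sum(inst['prices'][i] * inst['valuations'][i] for i in S) / (inst['v0'] + tv)
    best = max((frozenset(S)
                for k in range(1, n + 1)
                for S in itertools.combinations(range(n), k)),
               key=revenue)
    return best, revenue(best)
```

With $n=10$ there are only 1023 non-empty subsets, so brute force over 1000 instances is cheap; it becomes infeasible quickly as $n$ grows, which is exactly why the local-search algorithms matter.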

I run two main experiments:

  1. Experiment 1: use the six improved algorithms in Table 4 to reproduce the original paper's simulations on the MNL, nested logit, and mixed logit models. The nested logit model is the specially structured two-level model from the original paper (dissimilarity parameter less than 1, only two nests, the first containing only the no-purchase option and the second containing all products); the mixed logit model takes its general form, i.e., customer classes differ in their valuations of all products (the original paper only considers differences in the no-purchase valuation). This experiment examines how different model and algorithm parameter settings (i.e., tuning) affect the simulation results, as detailed in Table 5. It has two goals: to check the reproducibility of the original paper's simulation results, and to compare the performance of the "forward" and "backward" forms of the algorithms across choice-model settings.

    [Image: Table 5]

  2. Experiment 2: study ADXOpt2014Forward and ADXOpt2014Backward under $s_{\rm block}=1$ and $s_{\rm block}=2$ (i.e., without and with block-level adjustment) on the more general form of the two-level nested logit model. Relative to the special two-level nested logit model of the original paper, the generalizations are:

    ① more nests (from $m=1$ to $m\in\{2,3,4\}$);

    ② nest dissimilarity allowed to exceed 1 (drawn uniformly from $[1,10]$ instead of $[0,1]$);

    ③ the same product allowed to appear in multiple nests;

    ④ a no-purchase option with nonzero valuation allowed in every nest.

    The aim is to complicate the model enough that plain ADXOpt2014 can no longer converge to the global optimum so reliably (the convergence rates in the original paper's simulations are simply too high), thereby showing that the two improvements proposed here are meaningful.

3.2 Results and Analysis

3.2.1 Experiment 1

The complete data are in result1.1.xlsx and result1.2.xlsx in the submitted attachment (also available in the GitHub repository). result1.1.xlsx holds the simulation results for 216 model-algorithm configurations (1000 simulation runs each); result1.2.xlsx compares 36 groups (36 because each group contains the 6 algorithms of Table 4, and $36\times 6=216$) of matched forward and backward algorithms on running time and the fraction of runs converging to the global optimum. The main findings:

  1. The original paper's simulation conclusions reproduce successfully: in over 99% of cases the algorithms converge to the global optimum of the simplest nested logit and mixed logit models, and all algorithms converge to the global optimum 100% of the time on the capacity-unconstrained MNL model.
  2. Across the 36 forward-versus-backward comparisons in result1.2.xlsx, the backward algorithm is faster than its forward counterpart in 27 groups, and the average global-optimum convergence rates are 99.25% (forward) versus 99.68% (backward), with no significant difference. I take this as evidence that the "reversed form" is at least a direction worth considering, even though it lacks theoretical support.
  3. Because the convergence rate is extremely high in every setting of result1.1.xlsx, block-level adjustment improves convergence somewhat but not significantly there.

3.2.2 Experiment 2

The complete data are in result2.xlsx in the submitted attachment (also available in the GitHub repository), covering 96 model-algorithm configurations (1000 simulation runs each). The count 96 comes from 2×2×2×2×2×3, where the factors are:

  1. ADXOpt2014Forward versus ADXOpt2014Backward;

  2. $s_{\rm block}=1$ versus $s_{\rm block}=2$;

  3. nest dissimilarity parameter greater than 1 versus less than 1;

  4. whether the same product may appear in multiple nests;

  5. whether every nest may contain a no-purchase option with nonzero valuation;

  6. number of nests drawn from $\{2,3,4\}$.

The main findings:

  1. On the general form of the two-level nested logit model, the algorithms perform markedly worse than in the original paper: across all settings the average global-optimum convergence rate is only 96.41%, and in the worst setting only 77.3% of the 1000 runs reach the optimum. In roughly half of the settings the rate still exceeds 98%, but relative to Experiment 1 the convergence rates are substantially lower.
  2. Consider one typical case from result2.xlsx (rows 50-57; filter: block sizes 1 and 2, 3 nests, dissimilarity bounds 1 and 10, per-nest no-purchase option TRUE, same product in multiple nests FALSE), shown in Table 6. Introducing block-level adjustment significantly improves convergence, though at a much higher time cost. Note also that each backward algorithm is somewhat faster than its forward counterpart and actually converges to the optimum more often; I suspect this is because the optimal solutions tend to contain many products, which favors the backward search. The columns **mean gap (all)** and **mean gap (non-optimal)** correspond to the original paper's OptGapAll and GapNonOpt metrics, i.e., the relative gap in realized revenue from the true optimum (over all cases, and over only the non-converged cases).
    [Image: Table 6]

4 Conclusions and Outlook

This work improves on Jagabathula's assortment-optimization algorithms. Concretely, it proposes a unified configuration and testing framework that extends the algorithm architecture cleanly; it introduces a reversed form of the algorithm and verifies in simulation that it can improve performance; and it shows that a block-level adjustment strategy improves convergence on complex models. There remain shortcomings. First, limited by my expertise, the proposed reversed algorithm and block-level adjustment lack theoretical support; their effectiveness is shown only in simulation. Second, limited by compute and time, the simulations leave room for improvement, e.g., a larger product universe and more block-size comparisons. Finally, the algorithm itself can be improved further, e.g., a deeper study of the initial_size parameter from the 2011 version to enlarge the search space. I hope these results help advance related research.


Appendix 1 Simulation Code Description

  1. config.py: default configurations for algorithms and models;
  2. manage.py: main program; Experiment 1 corresponds to the functions main, main_offerset_capacity, main_max_addition_or_removal, and main_block_size, Experiment 2 to main_adxopt2014_for_nl2_further_analysis_new;
  3. setting.py: global variable settings;
  4. test_script.py: test functions for the algorithm and model logic, verifying that the implementations are correct;
  5. src/algorithm.py: the unified algorithm framework;
  6. src/choice_model.py: choice model definitions;
  7. src/evaluation_tools.py: simulation-based evaluation given an algorithm and a model;
  8. src/plot_tools.py: visualization of experimental results, with functions corresponding to those in manage.py;
  9. src/simulation_tools.py: random generation of model parameters and algorithm configurations;
  10. src/utils.py: utility functions.

Appendix 2 Pseudocode of Jagabathula's Three Algorithm Versions

  • GreedyOpt (2011):
    [Image: pseudocode]
    [Image: pseudocode]
  • ADXOpt (2014):
    [Image: pseudocode]
  • ADXOpt (2016):
    [Image: pseudocode]

Appendix 3 Project Code

  1. config.py: default configurations for algorithms and models;

    # -*- coding: utf-8 -*-
    # @author : caoyang
    # @email: caoyang@163.sufe.edu.cn
    # Configuration file storing adjustable parameters
    
    import argparse
    
    class BaseModelConfig:
    	"""Basic model configuration"""
    	parser = argparse.ArgumentParser('--')
    	parser.add_argument('--num_product', default=10, type=int, help='number of products')
    	parser.add_argument('--offerset_capacity', default=None, type=int, help='offer-set capacity limit')
    	parser.add_argument('--max_product_price', default=150., type=float, help='maximum product price')
    	parser.add_argument('--min_product_price', default=100., type=float, help='minimum product price')
    	parser.add_argument('--max_product_valuation', default=10., type=float, help='maximum product valuation')
    	parser.add_argument('--min_product_valuation', default=0., type=float, help='minimum product valuation')
    	
    
    class MNLConfig(BaseModelConfig):
    	"""Parameters of the multinomial logit model"""
    	pass
    	
    
    class NL2Config(BaseModelConfig):
    	"""Parameters of the nested logit model"""
    	BaseModelConfig.parser.add_argument('--num_nest', default=1, type=int, help='number of nests')
    	BaseModelConfig.parser.add_argument('--max_dis_similarity', default=1., type=float, help='maximum nest dissimilarity')
    	BaseModelConfig.parser.add_argument('--min_dis_similarity', default=0., type=float, help='minimum nest dissimilarity')
    	BaseModelConfig.parser.add_argument('--exist_no_purchase_per_nest', default=False, type=bool, help='whether every nest contains a no-purchase option (default: no)')
    	BaseModelConfig.parser.add_argument('--allow_nest_repetition', default=False, type=bool, help='whether the same product may appear in multiple nests (default: no)')
    
    
    class MLConfig(BaseModelConfig):
    	"""Parameters of the general mixed logit model"""
    	BaseModelConfig.parser.add_argument('--num_class', default=5, type=int, help='number of customer classes')
    
    # --------------------------------------------------------------------
    # -------- separator: model configs above, algorithm configs below ---
    # --------------------------------------------------------------------
    
    class BaseAlgorithmConfig:
    	"""Basic algorithm configuration"""
    	parser = argparse.ArgumentParser('--')
    	parser.add_argument('--do_add', default=True, type=bool, help='whether the algorithm performs additions; later I realized the algorithm can also be run in reverse')
    	parser.add_argument('--do_add_first', default=False, type=bool, help='whether additions take priority: GreedyOpt (2011) prioritizes additions, ADXOpt (2014) does so in some cases, and ADXOpt (2016) always prioritizes them')
    	parser.add_argument('--do_delete', default=True, type=bool, help='whether the algorithm performs deletions; every algorithm except the trivial greedy one considers deletions')
    	parser.add_argument('--do_exchange', default=True, type=bool, help='whether the algorithm performs exchanges; skipping them lowers complexity, an idea introduced with ADXOpt (2014)')
    	parser.add_argument('--max_removal', default=float('inf'), type=float, help='maximum number of times each product may be removed; mentioned in all three versions')
    	parser.add_argument('--max_addition', default=float('inf'), type=float, help='maximum number of times each product may be added; needed when the algorithm starts from the full product set')
    	
    	# New parameters for the improved algorithms
    	parser.add_argument('--initial_size', default=0, type=int, help='mentioned in GreedyOpt (2011) and dropped afterwards: enumerate all subsets of size initial_size, run from each as a starting point, and return the best of the resulting optima')
    	parser.add_argument('--addable_block_size', default=1, type=int, help='block size of products that may be added per iteration')
    	parser.add_argument('--deleteable_block_size', default=1, type=int, help='block size of products that may be deleted per iteration')
    	parser.add_argument('--exchangeable_block_size', default=1, type=int, help='block size of products that may be exchanged per iteration')
    
    class GreedyOptConfig(BaseAlgorithmConfig):
    	"""Basic configuration of the GreedyOpt algorithm"""
    	pass
    
    class ADXOpt2014Config(BaseAlgorithmConfig):
    	"""Basic configuration of the 2014 ADXOpt algorithm"""
    	pass
    
    class ADXOpt2016Config(BaseAlgorithmConfig):
    	"""Basic configuration of the 2016 ADXOpt algorithm"""
    	pass
    
    if __name__ == "__main__":
    	import json
    	from src.utils import load_args, save_args
    	
    	config = BaseAlgorithmConfig()
    	parser = config.parser
    	args = parser.parse_args()
    	
    	save_args(args, '1.json')
    	
    	# print(args.__getattribute__('num_product'))
    	# args.__setattr__('num_product', 100)
    	# print(args.num_product)
    
    
    
  2. manage.py: main program; Experiment 1 corresponds to the functions main, main_offerset_capacity, main_max_addition_or_removal, and main_block_size, Experiment 2 to main_adxopt2014_for_nl2_further_analysis_new;

    # -*- coding: utf-8 -*-
    # @author: caoyang
    # @email: caoyang@163.sufe.edu.cn
    # Main program
    
    import os
    import json
    
    from copy import deepcopy
    
    from config import *
    from setting import *
    
    from src.simulation_tools import generate_algorithm_args
    from src.evaluation_tools import evaluate, evaluate_new, analysis
    from src.utils import load_args
    
    # Default model configurations under default algorithm configurations:
    # 1. no offer-set capacity limit
    # 2. upper bounds on removals and additions both equal to 1
    # 3. no block-level adjustment
    def main(n_sample=1000):
    	summary = {}
    
    	for model_name in MODEL_MAPPING.keys():
    		summary[model_name] = {}
    		model_args = load_args(eval(MODEL_MAPPING[model_name]['config']))
    		for algorithm_name in ALGORITHM_NAMES:
    			print(model_name, algorithm_name)
    			algorithm_args = generate_algorithm_args(algorithm_name=algorithm_name)
    			_, result = evaluate(model_name=model_name, 
    								 model_args=model_args, 
    								 algorithm_name=algorithm_name, 
    								 algorithm_args=algorithm_args, 
    								 n_sample=n_sample, 
    								 do_export=True)
    			summary[model_name][algorithm_name] = [result]
    
    	with open(os.path.join(TEMP_FOLDER, 'summary.json'), 'w') as f:
    		json.dump(summary, f, indent=4)
    
    # Case 1: offer-set capacity limits
    def main_offerset_capacity(n_sample=1000):
    	capacity_ratios = [0.25, 0.5, 0.75]
    	summary = {}
    	for model_name in MODEL_MAPPING.keys():
    		summary[model_name] = {}
    		model_args = load_args(eval(MODEL_MAPPING[model_name]['config']))
    		for algorithm_name in ALGORITHM_NAMES:
    			summary[model_name][algorithm_name] = []
    			for capacity_ratio in capacity_ratios:
    				offerset_capacity = int(model_args.num_product * capacity_ratio)
    				model_args.offerset_capacity = offerset_capacity	# set the offer-set capacity limit
    				print(model_name, algorithm_name, offerset_capacity)
    				algorithm_args = generate_algorithm_args(algorithm_name=algorithm_name)
    				_, result = evaluate(model_name=model_name, 
    									 model_args=model_args, 
    									 algorithm_name=algorithm_name, 
    									 algorithm_args=algorithm_args, 
    									 n_sample=n_sample, 
    									 do_export=True)
    				result['offerset_capacity'] = offerset_capacity
    				summary[model_name][algorithm_name].append(result)
    	with open(os.path.join(TEMP_FOLDER, 'summary_offerset_capacity.json'), 'w') as f:
    		json.dump(summary, f, indent=4)
    
    # Case 2: raise the upper bound on removals/additions
    def main_max_addition_or_removal(n_sample=1000):
    	max_addition_or_removals = [2, 4, 8, 16]
    	summary = {}
    	for model_name in MODEL_MAPPING.keys():
    		summary[model_name] = {}
    		model_args = load_args(eval(MODEL_MAPPING[model_name]['config']))
    		for algorithm_name in ALGORITHM_NAMES:
    			summary[model_name][algorithm_name] = []
    			for max_addition_or_removal in max_addition_or_removals:
    				print(model_name, algorithm_name, max_addition_or_removal)
    				if algorithm_name.endswith('_forward'):
    					kwargs = {'max_removal': max_addition_or_removal}
    				elif algorithm_name.endswith('_backward'):
    					kwargs = {'max_addition': max_addition_or_removal}
    				else:
    					raise NotImplementedError
    				algorithm_args = generate_algorithm_args(algorithm_name=algorithm_name, **kwargs)
    				_, result = evaluate(model_name=model_name, 
    									 model_args=model_args, 
    									 algorithm_name=algorithm_name, 
    									 algorithm_args=algorithm_args, 
    									 n_sample=n_sample, 
    									 do_export=True)
    				result['max_addition_or_removal'] = max_addition_or_removal
    				summary[model_name][algorithm_name].append(result)
    	with open(os.path.join(TEMP_FOLDER, 'summary_max_addition_or_removal.json'), 'w') as f:
    		json.dump(summary, f, indent=4)
    
    # Case 3: block-level adjustment strategy
    def main_block_size(n_sample=1000):
    	block_sizes = [2, 3, 4, 5]
    	summary = {}
    	for model_name in MODEL_MAPPING.keys():
    		summary[model_name] = {}
    		model_args = load_args(eval(MODEL_MAPPING[model_name]['config']))
    		for algorithm_name in ALGORITHM_NAMES:
    			summary[model_name][algorithm_name] = []
    			for block_size in block_sizes:
    				print(model_name, algorithm_name, block_size)
    				kwargs = {
    					'addable_block_size'		: block_size,
    					'deleteable_block_size'		: block_size,
    					'exchangeable_block_size'	: block_size,
    				}
    				algorithm_args = generate_algorithm_args(algorithm_name=algorithm_name, **kwargs)
    				_, result = evaluate(model_name=model_name, 
    									 model_args=model_args, 
    									 algorithm_name=algorithm_name, 
    									 algorithm_args=algorithm_args, 
    									 n_sample=n_sample, 
    									 do_export=True)
    				result['block_size'] = block_size
    				summary[model_name][algorithm_name].append(result)
    	with open(os.path.join(TEMP_FOLDER, 'summary_block_size.json'), 'w') as f:
    		json.dump(summary, f, indent=4)
    		
    # In-depth analysis of ADXOpt2014 on nested logit models with multiple nests: the effects of forward/backward search and block-level adjustment
    def main_adxopt2014_for_nl2_further_analysis(n_sample=1000):
    	model_name = 'nl2'
    	model_args = load_args(eval(MODEL_MAPPING[model_name]['config']))
    	for algorithm_name in ['adxopt2014_forward', 'adxopt2014_backward']:
    		for block_size in [1, 2]:
    			kwargs = {
    				'addable_block_size'		: block_size,
    				'deleteable_block_size'		: block_size,
    				'exchangeable_block_size'	: block_size,
    			}
    			algorithm_args = generate_algorithm_args(algorithm_name=algorithm_name, **kwargs)
    			summary = []
    			for num_nest in [2, 3, 4]:
    				for min_dis_similarity, max_dis_similarity in zip([0., 1.], [1., 10.]):
    					for exist_no_purchase_per_nest in [True, False]:
    						for allow_nest_repetition in [True, False]:
    							print(algorithm_name, block_size, num_nest, min_dis_similarity, max_dis_similarity, exist_no_purchase_per_nest, allow_nest_repetition)
    							_model_args = deepcopy(model_args)
    							_model_args.num_nest = num_nest
    							_model_args.min_dis_similarity = min_dis_similarity
    							_model_args.max_dis_similarity = max_dis_similarity
    							_model_args.exist_no_purchase_per_nest = exist_no_purchase_per_nest
    							_model_args.allow_nest_repetition = allow_nest_repetition
    							
    							_algorithm_args = deepcopy(algorithm_args)
    							_, result = evaluate(model_name=model_name, 
    												 model_args=_model_args, 
    												 algorithm_name=algorithm_name, 
    												 algorithm_args=_algorithm_args, 
    												 n_sample=n_sample, 
    												 do_export=True)
    							result['num_nest'] = num_nest
    							result['min_dis_similarity'] = min_dis_similarity
    							result['max_dis_similarity'] = max_dis_similarity
    							result['exist_no_purchase_per_nest'] = exist_no_purchase_per_nest
    							result['allow_nest_repetition'] = allow_nest_repetition
    							summary.append(result)
    			
    			with open(os.path.join(TEMP_FOLDER, f'summary_{algorithm_name}_for_nl2_further_analysis_{block_size}.json'), 'w') as f:
    				json.dump(summary, f, indent=4)	
    	
    # Improved 2021-12-06 to save time
    def main_adxopt2014_for_nl2_further_analysis_new(n_sample=1000):
    	model_name = 'nl2'
    	model_args = load_args(eval(MODEL_MAPPING[model_name]['config']))
    	
    	summary = []
    	for num_nest in [2, 3, 4]:
    		for min_dis_similarity, max_dis_similarity in zip([0., 1.], [1., 10.]):
    			for exist_no_purchase_per_nest in [True, False]:
    				for allow_nest_repetition in [True, False]:
    					print(num_nest, min_dis_similarity, max_dis_similarity, exist_no_purchase_per_nest, allow_nest_repetition)
    					_model_args = deepcopy(model_args)
    					_model_args.num_nest = num_nest
    					_model_args.min_dis_similarity = min_dis_similarity
    					_model_args.max_dis_similarity = max_dis_similarity
    					_model_args.exist_no_purchase_per_nest = exist_no_purchase_per_nest
    					_model_args.allow_nest_repetition = allow_nest_repetition
    					
    					algorithm_name_list = []
    					algorithm_args_list = []
    					
    					for algorithm_name in ['adxopt2014_forward', 'adxopt2014_backward']:
    						for block_size in [1, 2]:
    							kwargs = {
    								'addable_block_size'		: block_size,
    								'deleteable_block_size'		: block_size,
    								'exchangeable_block_size'	: block_size,
    							}
    							algorithm_args = generate_algorithm_args(algorithm_name=algorithm_name, **kwargs)
    							
    							algorithm_name_list.append(f'{algorithm_name}_{block_size}')	# redefine the algorithm name
    							algorithm_args_list.append(algorithm_args)					
    							
    
    
    					_, results = evaluate_new(model_name=model_name, 
    											  model_args=_model_args, 
    											  algorithm_name_list=algorithm_name_list, 
    											  algorithm_args_list=algorithm_args_list, 
    											  n_sample=n_sample, 
    											  do_export=True)
    					
    					_summary = {
    						'results': results,
    						'num_nest': num_nest,
    						'min_dis_similarity': min_dis_similarity,
    						'max_dis_similarity': max_dis_similarity,
    						'exist_no_purchase_per_nest': exist_no_purchase_per_nest,
    						'allow_nest_repetition': allow_nest_repetition,
    					}
    					summary.append(_summary)
    			
    	with open(os.path.join(TEMP_FOLDER, f'summary_adxopt2014_for_nl2_further_analysis_all.json'), 'w') as f:
    		json.dump(summary, f, indent=4)
    
    
    if __name__ == '__main__':
    	# evaluate(model_name='mnl', 
    			 # model_args=load_args(MNLConfig), 
    			 # algorithm_name='naivegreedy_backward', 
    			 # algorithm_args=generate_algorithm_args(algorithm_name='naivegreedy_forward'), 
    			 # n_sample=100, 
    			 # do_export=True)
    			 
    	# main(1000)
    	# main_offerset_capacity(1000)
    	# main_max_addition_or_removal(1000)
    	# main_block_size(1000)
    
    	main_adxopt2014_for_nl2_further_analysis_new(1000)
    
    
  3. setting.py: global variable settings;

    # -*- coding: utf-8 -*-
    # @author: caoyang
    # @email: caoyang@163.sufe.edu.cn
    # Settings file storing global variables
    
    # Folder paths
    IMAGE_FOLDER = 'image'
    LOGGING_FOLDER = 'logging'
    TEMP_FOLDER = 'temp'
    
    
    # Algorithm class mapping: apparently no longer used
    ALGORITHM_CLASS_MAPPING = {
    	'naivegreedy'	: {'class': 'NaiveGreedy'},
    	'greedyopt'		: {'class': 'GreedyOpt'},
    	'adxopt2014'	: {'class': 'ADXOpt2014'},
    	'adxopt2016'	: {'class': 'ADXOpt2016'},
    }
    
    # Mapping from model classes to their configs and related methods
    MODEL_MAPPING = {
    	'mnl'	: {'class': 'MultiNomialLogit',	'config': 'MNLConfig',	'param': 'generate_params_for_MNL'},
    	'nl2'	: {'class': 'NestedLogit2',		'config': 'NL2Config',	'param': 'generate_params_for_NL2'},
    	'ml'	: {'class': 'MixedLogit',		'config': 'MLConfig',	'param': 'generate_params_for_ML'},
    }
    
    # Algorithms implemented so far
    ALGORITHM_NAMES = [
    	# 'naivegreedy_forward',
    	# 'naivegreedy_backward',
    	'greedyopt_forward',
    	'greedyopt_backward',
    	'adxopt2014_forward',
    	'adxopt2014_backward',
    	'adxopt2016_forward',
    	'adxopt2016_backward',
    ]
    
    # A tiny number
    EPSILON = 1e-6
    
    
  4. test_script.py: test functions for the algorithm and model logic, verifying that the implementations are correct;

    # -*- coding: utf-8 -*-
    # @author: caoyang
    # @email: caoyang@163.sufe.edu.cn
    # Test script for the algorithm and model logic
    
    import os
    import numpy
    from pprint import pprint
    
    from config import *
    from setting import *
    
    from src.algorithm import BaseAlgorithm, NaiveGreedy, GreedyOpt, ADXOpt2014, ADXOpt2016
    from src.choice_model import MultiNomialLogit, NestedLogit2, MixedLogit
    from src.plot_tools import plot_main_1_to_4, plot_main_adxopt2014_for_nl2_further_analysis, plot_main_adxopt2014_for_nl2_further_analysis_new
    from src.simulation_tools import generate_params_for_ML, generate_params_for_MNL, generate_params_for_NL2
    from src.utils import load_args
    # ----------------------------------------------------------------------
    # MNL test
    def test_MNL(fixed=True):
    	args = load_args(MNLConfig)
    	params = {
    		'product_prices': numpy.array([100, 100]),
    		'product_valuations': numpy.array([1, 2]),
    		'no_purchase_valuation': 1.,
    		'offerset_capacity': None,
    	} if fixed else generate_params_for_MNL(args)
    	pprint(params)
    	model = MultiNomialLogit(**params)
    	print(model.calc_product_choice_probabilitys([0, 1]))
    	print(model.calc_revenue_expectation([0]))
    	print(model.calc_revenue_expectation([1]))
    	print(model.calc_revenue_expectation([0, 1]))
    # ----------------------------------------------------------------------
    # NL2 test
    def test_NL2(fixed=True):
    	args = load_args(NL2Config)
    	params = {
    		'product_prices': numpy.array([100, 100]),
    		'product_valuations': numpy.array([1, 2]),
    		'no_purchase_valuation': 1.,
    		'offerset_capacity': None,
    		'nests': [[0], [1]],
    		'nest_dis_similaritys': [0.5, 1.],
    		'nest_no_purchase_valuations': [0., 0.],	
    	} if fixed else generate_params_for_NL2(args)
    	pprint(params)
    	model = NestedLogit2(**params)
    	print(model.calc_product_choice_probabilitys([0]))
    	print(model.calc_product_choice_probabilitys([1]))
    	print(model.calc_product_choice_probabilitys([0, 1]))
    	print(model.calc_revenue_expectation([0]))
    	print(model.calc_revenue_expectation([1]))
    	print(model.calc_revenue_expectation([0, 1]))
    # ----------------------------------------------------------------------
    # ML test
    def test_ML(fixed=True):
    	args = load_args(MLConfig)
    	params = generate_params_for_ML(args)
    	params = {
    		'product_prices'		: numpy.array([100, 100]),
    		'product_valuations'	: numpy.array([[1, 2], [2, 1]]),	
    		'no_purchase_valuation'	: numpy.array([0.5, 1]),
    		'offerset_capacity'		: None,
    		'class_weight'			: numpy.array([0.3, 0.7]),
    	} if fixed else generate_params_for_ML(args)
    	pprint(params)
    	print(sum(params['class_weight']))
    	model = MixedLogit(**params)
    	print(model.calc_product_choice_probabilitys([0]))
    	print(model.calc_product_choice_probabilitys([1]))
    	print(model.calc_product_choice_probabilitys([0, 1]))
    	print(model.calc_revenue_expectation([0]))
    	print(model.calc_revenue_expectation([1]))
    	print(model.calc_revenue_expectation([0, 1]))
    # ----------------------------------------------------------------------
    # naivegreedy test
    def test_naivegreedy(fixed_model=True, fixed_algorithm=True):
    	model_args = load_args(MNLConfig)
    	params = {
    		'product_prices': numpy.array([2, 2, 2]),
    		'product_valuations': numpy.array([numpy.exp(.5), numpy.exp(.5), numpy.exp(.7)]),
    		'no_purchase_valuation': 1.,
    		'offerset_capacity': 2,
    	} if fixed_model else generate_params_for_MNL(model_args)
    	model = MultiNomialLogit(**params)
    
    	algorithm_args = {
    		'do_add'					: True,
    		'do_add_first'				: True,
    		'do_delete'					: False,
    		'do_delete_first'			: False,
    		'do_exchange'				: False,
    		'max_removal'				: 0.,
    		'max_addition'				: float('inf'),
    		'initial_size'				: 0,
    		'addable_block_size'		: 1,
    		'deleteable_block_size'		: 1,
    		'exchangeable_block_size'	: 1,
    	}
    
    	algorithm_args = load_args(BaseAlgorithmConfig)
    	# algorithm_args.initial_size = 1
    	# algorithm_args.addable_block_size = 2
    	naivegreedy = NaiveGreedy(algorithm_args)
    	print(BaseAlgorithm.bruteforce(model, max_size=model.offerset_capacity))
    	print(naivegreedy.run(model))
    	
    # ----------------------------------------------------------------------
    # greedyopt test
    def test_greedyopt(fixed=True):	# bug fix: 'fixed' was referenced below but never declared
    	model_args = load_args(MNLConfig)
    	params = {
    		'product_prices': numpy.array([2, 2, 1.9]),
    		'product_valuations': numpy.array([numpy.exp(.5), numpy.exp(.5), numpy.exp(.7)]),
    		'no_purchase_valuation': 1.,
    		'offerset_capacity': 2,
    	} if fixed else generate_params_for_MNL(model_args)
    	model = MultiNomialLogit(**params)
    	
    	algorithm_args = load_args(BaseAlgorithmConfig)
    	# algorithm_args.initial_size = 1
    	# algorithm_args.addable_block_size = 2
    
    	greedyopt = GreedyOpt(algorithm_args)	# bug fix: was instantiating NaiveGreedy in the greedyopt test
    	print(BaseAlgorithm.bruteforce(model, max_size=model.offerset_capacity))
    	print(greedyopt.run(model))
    
    # ----------------------------------------------------------------------
    # adxopt2014 test
    def test_adxopt2014():
    	raise NotImplementedError
    # ----------------------------------------------------------------------
    # adxopt2016 test
    def test_adxopt2016():
    	raise NotImplementedError
    
    if __name__ == '__main__':
    	# test_MNL(False)
    	# test_NL2(False)
    	# test_ML(False)
    	# test_naivegreedy(True, True)
    	
    	plot_main_1_to_4(do_export=False)
    	plot_main_adxopt2014_for_nl2_further_analysis(do_export=False)
    	plot_main_adxopt2014_for_nl2_further_analysis_new(do_export=True)
    	
    
    
  5. src/algorithm.py: the unified algorithm framework;

    # -*- coding: utf-8 -*-
    # @author: caoyang
    # @email: caoyang@163.sufe.edu.cn
    # Algorithm design
    
    if __name__ == '__main__':
    	import sys
    	sys.path.append('../')
    
    import numpy
    
    from copy import deepcopy
    
    from setting import *
    from src.utils import generate_subset
    from src.choice_model import BaseChoiceModel
    
    class BaseAlgorithm:
    	"""Base algorithm class"""
    	def __init__(self, algorithm_args, *args, **kwargs):
    		self.algorithm_args = deepcopy(algorithm_args)
    		# Update the algorithm configuration from keyword arguments passed to the constructor
    		for key, value in kwargs.items():
    			self.algorithm_args.__setattr__(key, value)
    
    	@classmethod
    	def find_initial_offerset(cls, model, initial_size):
    		"""
    		我原本以为2011版本中的GreedyOpt算法是穷举子集尺寸不超过initial_size的最优报价集作为算法起点;
    		事实上这是对initial_size的错误理解,应该是穷举所有大小为initial_size的报价子集作为算法起点;
    		前者牺牲精度以降低算法复杂度,这可能也是一个对节约时间有意义的方向;
    		后者牺牲时间以提升算法精度,不过将initial_size设为零就没有任何影响;
    		这里我还是对这种错误的想法进行了实现。
    		:param model				: 选择模型,类型为src.choice_model.BaseModel的子类;
    		:param initial_size			: 初始报价集的大小;
    		:return initial_offerset	: 初始报价集;
    		"""
    		max_revenue, optimal_solutions = BaseAlgorithm.bruteforce(model=model, min_size=1, max_size=initial_size)
    		# 注意bruteforce可能返回多个最优解,这里取第一个作为算法起点
    		initial_offerset = optimal_solutions[0]
    		return initial_offerset
    
    	@classmethod
    	def bruteforce(cls, model, min_size=1, max_size=None):
    		"""
    		暴力穷举法
    		:param model	: 选择模型,类型为src.choice_model.BaseModel的子类;
    		:param min_size	: 穷举子集的最小尺寸,默认忽略空集;
    		:param max_size	: 穷举子集的最大尺寸,默认值None表示穷举所有子集,也可以限制子集大小以少枚举一些情况;
    		
    		:return max_revenue			: 最大收益;
    		:return optimal_solutions	: 所有最优解;
    		"""
    		
    		if max_size is None:
    			max_size = model.offerset_capacity
    		else:
    			# 理论上max_size不能超过报价集容量,但是可能会有一些特殊的用法,总之断言一般都能成立
    			assert max_size <= model.offerset_capacity
    		max_revenue = 0.
    		optimal_solutions = []
    		for offerset in generate_subset(universal_set=deepcopy(model.product_ids),
    										min_size=min_size,
    										max_size=max_size):
    			# 遍历所有产品子集并更新最大收益与最优解集
    			offerset = list(offerset)
    			revenue = model.calc_revenue_expectation(offerset=offerset)
    			if revenue > max_revenue:
    				optimal_solutions = [offerset]
    				max_revenue = revenue
    			elif revenue == max_revenue:
    				optimal_solutions.append(offerset)
    			else:
    				continue
    		return max_revenue, optimal_solutions
    	
    	@classmethod
    	def greedy_add(cls, 
    				   model: BaseChoiceModel, 
    				   current_offerset: list, 
    				   current_revenue: float, 
    				   addable_product_ids: list, 
    				   max_addable_block_size: int):
    		"""
    		穷举搜索最优的贪心增加产品块
    		:param model					: 选择模型,类型为src.choice_model.BaseChoiceModel的子类;
    		:param current_offerset			: 当前报价集;
    		:param current_revenue			: 当前收益;
    		:param addable_product_ids		: 可增加的产品子集;
    		:param max_addable_block_size	: 最多可增加产品块的大小;
    		
    		:return added_product_block		: 搜索得到的最优的贪心增加产品块,可能为None则表示没有能够使得收益提升的增加操作;
    		:return max_improvement			: 贪心增加能够达到的最高收益提升,若added_product_block为None,则对应的max_improvement为0;
    		"""
    		# 初始化最优的贪心增加产品块
    		added_product_block = None
    		max_improvement = 0.
    		
    		# 遍历产品找到能使得收益提升最多的一个产品块
    		for addable_product_block in generate_subset(universal_set=addable_product_ids, 
    													 min_size=1,
    													 max_size=max_addable_block_size):
    			addable_product_block = list(addable_product_block)
    			updated_offerset = current_offerset + addable_product_block
    			updated_revenue = model.calc_revenue_expectation(offerset=updated_offerset)
    			revenue_improvement = updated_revenue - current_revenue
    			if revenue_improvement > max_improvement:
    				added_product_block = addable_product_block
    				max_improvement = revenue_improvement
    				
    		return added_product_block, max_improvement
    	
    	@classmethod
    	def greedy_delete(cls, 
    					  model: BaseChoiceModel,  
    					  current_offerset: list, 
    					  current_revenue: float, 
    					  deleteable_product_ids: list, 
    					  max_deleteable_block_size: int):
    		"""
    		穷举搜索最优的贪心删除产品块
    		:param model					: 选择模型,类型为src.choice_model.BaseChoiceModel的子类;
    		:param current_offerset			: 当前报价集;
    		:param current_revenue			: 当前收益;
    		:param deleteable_product_ids	: 可删除的产品子集;
    		:param max_deleteable_block_size: 最多可删除产品块的大小;
    		
    		:return deleted_product_block	: 搜索得到的最优的贪心删除产品块,可能为None则表示没有能够使得收益提升的删除操作;
    		:return max_improvement			: 贪心删除能够达到的最高收益提升,若deleted_product_block为None,则对应的max_improvement为0;
    		"""		
    		# 初始化最优的贪心删除产品块
    		deleted_product_block = None
    		max_improvement = 0.
    		
    		# 遍历产品找到能使得收益提升最多的一个产品块
    		for deleteable_product_block in generate_subset(universal_set=deleteable_product_ids, 
    														min_size=1,
    														max_size=max_deleteable_block_size):
    			deleteable_product_block = list(deleteable_product_block)
    			updated_offerset = list(set(current_offerset) - set(deleteable_product_block))
    			updated_revenue = model.calc_revenue_expectation(offerset=updated_offerset)
    			revenue_improvement = updated_revenue - current_revenue
    			if revenue_improvement > max_improvement:
    				deleted_product_block = deleteable_product_block
    				max_improvement = revenue_improvement
    				
    		return deleted_product_block, max_improvement
    		
    	@classmethod
    	def greedy_exchange(cls, 
    						model: BaseChoiceModel, 
    						current_offerset: list, 
    						current_revenue: float, 
    						addable_product_ids: list, 
    						deleteable_product_ids: list, 
    						max_exchangeable_block_size: int):
    		"""
    		穷举搜索最优的贪心交换产品块
    		:param model						: 选择模型,类型为src.choice_model.BaseChoiceModel的子类;
    		:param current_offerset				: 当前报价集;
    		:param current_revenue				: 当前收益;
    		:param addable_product_ids			: 可增加的产品子集;
    		:param deleteable_product_ids		: 可删除的产品子集;
    		:param max_exchangeable_block_size	: 最多可交换产品块的大小;
    		
    		:return added_product_block			: [×]搜索得到的最优的用于交换进去的贪心交换产品块,可能为None则表示没有能够使得收益提升的交换操作;
    		:return deleted_product_block		: [×]搜索得到的最优的用于交换出来的贪心交换产品块,可能为None则表示没有能够使得收益提升的交换操作;
    		:return exchanged_product_block		: (added_product_block, deleted_product_block);
    		:return max_improvement				: 贪心交换能够达到的最高收益提升,若added_product_block为None,则对应的max_improvement为0;
    		"""		
    		# 初始化最优的贪心交换产品块
    		added_product_block = None
    		deleted_product_block = None
    		max_improvement = 0.
    		
    		# 遍历产品找到能使得收益提升最多的一个产品块
    		for addable_product_block in generate_subset(universal_set=addable_product_ids, 
    													 min_size=1,
    													 max_size=max_exchangeable_block_size):
    			addable_product_block = list(addable_product_block)
    			for deleteable_product_block in generate_subset(universal_set=deleteable_product_ids, 
    															min_size=1,
    															max_size=max_exchangeable_block_size):
    				deleteable_product_block = list(deleteable_product_block)
    				updated_offerset = list(set(current_offerset) - set(deleteable_product_block)) + addable_product_block
    				updated_revenue = model.calc_revenue_expectation(offerset=updated_offerset)
    				revenue_improvement = updated_revenue - current_revenue
    				if revenue_improvement > max_improvement:
    					added_product_block = addable_product_block
    					deleted_product_block = deleteable_product_block
    					max_improvement = revenue_improvement
    		exchanged_product_block = (added_product_block, deleted_product_block)
    		return exchanged_product_block, max_improvement		
    
    	def run(self, model):
    		"""算法逻辑编写:后来我发现所有版本的贪心算法都可以在同一框架下实现"""
    		# 提取参数
    		initial_size = self.algorithm_args.initial_size
    		addable_block_size = self.algorithm_args.addable_block_size
    		do_add = self.algorithm_args.do_add
    		do_add_first = self.algorithm_args.do_add_first
    		do_delete = self.algorithm_args.do_delete
    		do_delete_first = self.algorithm_args.do_delete_first
    		do_exchange = self.algorithm_args.do_exchange
    		max_addition = self.algorithm_args.max_addition
    		max_removal = self.algorithm_args.max_removal
    		deleteable_block_size = self.algorithm_args.deleteable_block_size
    		exchangeable_block_size = self.algorithm_args.exchangeable_block_size
    		
    		assert do_add >= do_add_first, 'If do add first, you must allow add operation.'
    		assert do_delete >= do_delete_first, 'If do delete first, you must allow delete operation.'
    		assert do_add_first * do_delete_first == 0, 'You cannot do add first and do delete first at the same time.'
    		
    		product_ids = model.product_ids[:]
    		num_product = model.num_product
    		offerset_capacity = model.offerset_capacity
    		
    		# 若initial_size为负,则认为是从反向搜索,如initial_size为-1表示从产品全集开始搜索
    		if initial_size < 0:
    			initial_size = offerset_capacity + initial_size + 1
    		
    		# 一些节约时间的标记
    		limit_addition = max_addition != float('inf')
    		limit_removal = max_removal != float('inf')
    		
    		# 用于统计算法运行状况的全局变量
    		global_optimal_offerset = None	# 全局的最优报价集
    		global_max_revenue = 0.			# 全局的最优报价集对应的收益
    		global_initial_offerset = None	# 全局的最优报价集从哪一个初始报价集迭代得到的
    
    		# 算法逻辑开始
    		for initial_offerset in generate_subset(universal_set=product_ids,
    												min_size=initial_size,
    												max_size=initial_size):
    			# 每个initial_offerset对应的局部最优报价集与局部最大收益
    			local_optimal_offerset = list(initial_offerset)
    			local_max_revenue = model.calc_revenue_expectation(offerset=local_optimal_offerset)
    			
    			# 统计局部的增加与删除次数,这里使用numpy数组存储是为了简化索引代码逻辑
    			addition_count = numpy.zeros((num_product, ))
    			removal_count = numpy.zeros((num_product, ))
    			
    			# 算法迭代逻辑
    			while True:
    				# 下面这些变量如果最终在一次迭代结束仍为None,就说明对应的操作是不可行的;
    				# 可能的原因包括:无法带来收益提升、对应操作不被允许、对应的操作受报价集容量限制而无法执行等;
    				# 注意exchanged_product_block是由增加和删除两个产品块构成的,其他两个都只有一个产品块;
    				added_product_block = None	
    				deleted_product_block = None
    				exchanged_product_block = (None, None)
    				max_revenue_add = max_revenue_delete = max_revenue_exchange = local_max_revenue
    				
    				# 增加操作逻辑
    				if do_add:
    					# 初始化用于记录的变量
    					optimal_offerset_add = local_optimal_offerset[:]
    					
    					# 检验报价集可用的容量是否为正
    					optimal_offerset_length = len(optimal_offerset_add)
    					available_capacity = offerset_capacity - optimal_offerset_length
    					if available_capacity > 0:	
    						max_addable_block_size = addable_block_size if addable_block_size <= available_capacity else available_capacity	# 确定可用于增加的报价集容量
    						
    						# 确定可用于增加的产品子集
    						addable_product_ids = set(product_ids) - set(optimal_offerset_add)													
    						if limit_addition:
    							# 将不在候选报价集中的产品构成的产品子集与增加次数还没有超过限制的产品构成的产品子集取交集
    							addable_product_ids = addable_product_ids.intersection(set(numpy.where(addition_count < max_addition)[0].tolist()))
    						addable_product_ids = list(addable_product_ids)
    						
    						# 遍历产品找到能使得收益提升最多的一个产品块
    						added_product_block, max_improvement = BaseAlgorithm.greedy_add(model=model,
    																						current_offerset=optimal_offerset_add[:],
    																						current_revenue=max_revenue_add,
    																						addable_product_ids=addable_product_ids,
    																						max_addable_block_size=max_addable_block_size)	
    						# 更新最优报价集与最大收益														
    						if added_product_block is not None:
    							optimal_offerset_add.extend(added_product_block)
    							max_revenue_add += max_improvement
    
    				# 删除操作逻辑
    				if do_delete:
    					# 初始化删除操作的最优解与最大收益
    					optimal_offerset_delete = local_optimal_offerset[:]
    					
    					# 检验候选报价集是否为空
    					optimal_offerset_length = len(optimal_offerset_delete)
    					if optimal_offerset_length > 0:	
    						max_deleteable_block_size = deleteable_block_size if deleteable_block_size <= optimal_offerset_length else optimal_offerset_length	# 确定可用于删除的产品子集大小
    						
    						# 确定可用于删除的产品子集													
    						if limit_removal:
    							# 将在候选报价集中的产品构成的产品子集与删除次数还没有超过限制的产品构成的产品子集取交集
    							deleteable_product_ids = list(set(optimal_offerset_delete).intersection(set(numpy.where(removal_count < max_removal)[0].tolist())))
    						else:
    							# 否则所有在候选报价集中的产品都可以用于删除
    							deleteable_product_ids = optimal_offerset_delete[:]
    						
    						# 遍历产品找到能使得收益提升最多的一个产品块
    						deleted_product_block, max_improvement = BaseAlgorithm.greedy_delete(model=model,
    																							 current_offerset=optimal_offerset_delete[:],
    																							 current_revenue=max_revenue_delete,
    																							 deleteable_product_ids=deleteable_product_ids,
    																							 max_deleteable_block_size=max_deleteable_block_size)	
    						# 更新最优报价集与最大收益														
    						if deleted_product_block is not None:
    							optimal_offerset_delete = list(set(optimal_offerset_delete) - set(deleted_product_block))
    							max_revenue_delete += max_improvement
    				
    				# 交换操作逻辑
    				if do_exchange:
    					# 初始化交换操作的最优解与最大收益
    					optimal_offerset_exchange = local_optimal_offerset[:]
    					
    					# 检验报价集可用的容量是否为正且候选集是否为空
    					optimal_offerset_length = len(optimal_offerset_exchange)
    					available_capacity = offerset_capacity - optimal_offerset_length
    					if available_capacity > 0 and optimal_offerset_length > 0:	
    						max_addable_block_size = exchangeable_block_size if exchangeable_block_size <= available_capacity else available_capacity				# 确定可用于增加的报价集容量
    						max_deleteable_block_size = exchangeable_block_size if exchangeable_block_size <= optimal_offerset_length else optimal_offerset_length	# 确定可用于删除的报价集容量
    						# 最多可以交换的块大小取增加块与删除块中的较小值
    						max_exchangeable_block_size = min(max_addable_block_size, max_deleteable_block_size)
    						
    						# 确定可用于交换进来的产品子集
    						addable_product_ids = set(product_ids) - set(optimal_offerset_exchange)													
    						if limit_addition:
    							# 将不在候选报价集中的产品构成的产品子集与增加次数还没有超过限制的产品构成的产品子集取交集
    							addable_product_ids = addable_product_ids.intersection(set(numpy.where(addition_count < max_addition)[0].tolist()))
    						addable_product_ids = list(addable_product_ids)
    
    						# 确定可用于交换出去的产品子集													
    						if limit_removal:
    							# 将在候选报价集中的产品构成的产品子集与删除次数还没有超过限制的产品构成的产品子集取交集
    							deleteable_product_ids = list(set(optimal_offerset_exchange).intersection(set(numpy.where(removal_count < max_removal)[0].tolist())))
    						else:
    							# 否则所有在候选报价集中的产品都可以用于删除
    							deleteable_product_ids = optimal_offerset_exchange[:]
    						
    						# 遍历产品找到能使得收益提升最多的一个产品块
    						exchanged_product_block, max_improvement = BaseAlgorithm.greedy_exchange(model=model,
    																								 current_offerset=optimal_offerset_exchange[:],
    																								 current_revenue=max_revenue_exchange,
    																								 addable_product_ids=addable_product_ids,
    																								 deleteable_product_ids=deleteable_product_ids,
    																								 max_exchangeable_block_size=max_exchangeable_block_size)	
    						# 更新最优报价集与最大收益														
    						if exchanged_product_block[0] is not None:
    							optimal_offerset_exchange = list(set(optimal_offerset_exchange) - set(exchanged_product_block[1]))
    							optimal_offerset_exchange.extend(exchanged_product_block[0])
    							max_revenue_exchange += max_improvement
    					
    				
    				# 三种操作都不可行,算法终止
    				if added_product_block is None and deleted_product_block is None and exchanged_product_block[0] is None:
    					assert exchanged_product_block[1] is None
    					break
    				
    				# 若优先执行增加操作(从产品空集正向迭代可能会出现这种情况),并且增加操作能够带来更大的收益
    				if do_add_first and added_product_block is not None:
    					assert max_revenue_add > local_max_revenue				
    					# 更新局部变量
    					local_optimal_offerset = optimal_offerset_add[:]
    					local_max_revenue = max_revenue_add
    					for product_id in added_product_block:
    						addition_count[product_id] += 1
    						
    				
    				# 若优先执行删除操作(从产品全集反向迭代可能会出现这种情况),并且删除操作能够带来更大的收益
    				elif do_delete_first and deleted_product_block is not None:
    					assert max_revenue_delete > local_max_revenue
    					# 更新局部变量
    					local_optimal_offerset = optimal_offerset_delete[:]
    					local_max_revenue = max_revenue_delete
    					for product_id in deleted_product_block:
    						removal_count[product_id] += 1
    				
    				# 否则就比较三种操作带来的收益
    				else:
    					max_revenue_of_adx = max([max_revenue_add, max_revenue_delete, max_revenue_exchange])
    					if max_revenue_of_adx == max_revenue_add and added_product_block is not None:
    						# 执行增加操作的局部变量更新
    						local_optimal_offerset = optimal_offerset_add[:]
    						local_max_revenue = max_revenue_add
    						for product_id in added_product_block:
    							addition_count[product_id] += 1	
    					elif max_revenue_of_adx == max_revenue_delete and deleted_product_block is not None:
    						# 执行删除操作的局部变量更新
    						local_optimal_offerset = optimal_offerset_delete[:]
    						local_max_revenue = max_revenue_delete
    						for product_id in deleted_product_block:
    							removal_count[product_id] += 1		
    					else:		
    						# 执行交换操作的局部变量更新	
    						local_optimal_offerset = optimal_offerset_exchange[:]
    						local_max_revenue = max_revenue_exchange
    						for product_id in exchanged_product_block[0]:
    							addition_count[product_id] += 1	
    						for product_id in exchanged_product_block[1]:
    							removal_count[product_id] += 1	
    				
    			# 更新全局变量
    			if local_max_revenue > global_max_revenue:
    				global_max_revenue = local_max_revenue
    				global_optimal_offerset = local_optimal_offerset[:]
    				global_initial_offerset = list(initial_offerset)		
    			
    		return global_max_revenue, global_optimal_offerset
    	
    # --------------------------------------------------------------------
    # -*-*-*-*-*-*- 这-*-里-*-是-*-华-*-丽-*-的-*-分-*-割-*-线 -*-*-*-*-*-*-
    # --------------------------------------------------------------------
    # !!!!!!!!!!!!!!! 重要提示 !!!!!!!!!!!!!!!
    # 以下几个子类都可以忽略了,因为BaseAlgorithm集成了统一的框架;
    # 只需要修改算法配置即可实现不同的算法逻辑,这里保留代码仅供参考一些默认配置参数;
    # 具体可见src.simulation_tools中的generate_params_for_algorithm函数;
    # --------------------------------------------------------------------
    # -*-*-*-*-*-*- 这-*-里-*-是-*-华-*-丽-*-的-*-分-*-割-*-线 -*-*-*-*-*-*-
    # --------------------------------------------------------------------
    
    class NaiveGreedy(BaseAlgorithm):
    	"""平凡的贪心算法:只是增加操作"""
    	def __init__(self, algorithm_args, *args, **kwargs):
    		# 默认算法配置
    		self.default_kwargs = {
    			'do_add'					: True,
    			'do_add_first'				: True,
    			'do_delete'					: False,
    			'do_delete_first'			: False,
    			'do_exchange'				: False,
    			'max_removal'				: 0.,
    			'max_addition'				: float('inf'),
    			'initial_size'				: 0,
    			'addable_block_size'		: 1,
    			'deleteable_block_size'		: 1,
    			'exchangeable_block_size'	: 1,
    		} if len(kwargs) == 0 else kwargs.copy()
    		super(NaiveGreedy, self).__init__(algorithm_args=algorithm_args, *args, **self.default_kwargs)	
    
    
    class GreedyOpt(BaseAlgorithm):
    	"""2011年的GreedyOpt算法:考虑增加与交换两种操作,且优先考虑增加操作"""
    	def __init__(self, algorithm_args, *args, **kwargs):
    		# 默认算法配置
    		self.default_kwargs = {
    			'do_add'					: True,
    			'do_add_first'				: True,
    			'do_delete'					: False,
    			'do_delete_first'			: False,
    			'do_exchange'				: True,
    			'max_removal'				: 1.,
    			'max_addition'				: float('inf'),
    			'initial_size'				: 0,
    			'addable_block_size'		: 1,
    			'deleteable_block_size'		: 1,
    			'exchangeable_block_size'	: 1,
    		} if len(kwargs) == 0 else kwargs.copy()
    		super(GreedyOpt, self).__init__(algorithm_args=algorithm_args, *args, **self.default_kwargs)
    
    
    class ADXOpt2014(BaseAlgorithm):
    	"""2014年的ADXOpt算法:考虑增删换三种操作,且各种操作的优先级相同"""
    	def __init__(self, algorithm_args, *args, **kwargs):
    		# 默认算法配置
    		self.default_kwargs = {
    			'do_add'					: True,
    			'do_add_first'				: False,
    			'do_delete'					: True,
    			'do_delete_first'			: False,
    			'do_exchange'				: True,
    			'max_removal'				: 1.,
    			'max_addition'				: float('inf'),
    			'initial_size'				: 0,
    			'addable_block_size'		: 1,
    			'deleteable_block_size'		: 1,
    			'exchangeable_block_size'	: 1,
    		} if len(kwargs) == 0 else kwargs.copy()
    		super(ADXOpt2014, self).__init__(algorithm_args=algorithm_args, *args, **self.default_kwargs)
    
    
    class ADXOpt2016(BaseAlgorithm):
    	"""2014年的ADXOpt算法:考虑增删换三种操作,且优先考虑增加操作"""
    	def __init__(self, algorithm_args, *args, **kwargs):
    		# 默认算法配置
    		self.default_kwargs = {
    			'do_add'					: True,
    			'do_add_first'				: True,
    			'do_delete'					: True,
    			'do_delete_first'			: False,
    			'do_exchange'				: True,
    			'max_removal'				: 1.,
    			'max_addition'				: float('inf'),
    			'initial_size'				: 0,
    			'addable_block_size'		: 1,
    			'deleteable_block_size'		: 1,
    			'exchangeable_block_size'	: 1,
    		} if len(kwargs) == 0 else kwargs.copy()
    		super(ADXOpt2016, self).__init__(algorithm_args=algorithm_args, *args, **self.default_kwargs)
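上面run方法中增、删、换三种操作的统一迭代逻辑,可以浓缩为一个很短的示意脚本:每轮在所有候选动作(块大小固定为1,且不限制增删次数)中选取改进最大者,直到没有严格改进为止。下面是一个脱离项目依赖的最小草图(纯Python实现,MNL参数沿用test_greedyopt中的固定实例,仅作示意):

```python
import math

prices = [2.0, 2.0, 1.9]
vals = [math.exp(0.5), math.exp(0.5), math.exp(0.7)]
v0, capacity = 1.0, 2

def revenue(S):
    # MNL期望收益
    return sum(prices[i] * vals[i] for i in S) / (v0 + sum(vals[i] for i in S))

def local_search(S):
    """从报价集S出发,反复执行增/删/换中改进最大的动作,直到局部最优"""
    S = set(S)
    while True:
        outside = [j for j in range(len(prices)) if j not in S]
        candidates = []
        if len(S) < capacity:                               # 增加操作
            candidates += [S | {j} for j in outside]
        candidates += [S - {i} for i in S]                  # 删除操作
        candidates += [S - {i} | {j} for i in S for j in outside]  # 交换操作
        best_gain, best_set = 0.0, None
        for T in candidates:
            gain = revenue(T) - revenue(S)
            if gain > best_gain + 1e-12:                    # 只接受严格改进
                best_gain, best_set = gain, T
        if best_set is None:
            return S
        S = best_set

# 纯增加贪心会停在下标{1, 2}这一局部最优,交换操作可将其修复到全局最优
print(sorted(local_search({1, 2})))   # [0, 1]
print(sorted(local_search(set())))    # 从空集出发同样收敛到[0, 1]
```

这个草图只保留了BaseAlgorithm.run的骨架(无do_add_first / max_addition等配置项),用于直观展示三类动作如何在同一循环内竞争。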
    
    
  6. src/choice_model.py:选择模型定义;

    # -*- coding: utf-8 -*-
    # @author: caoyang
    # @email: caoyang@163.sufe.edu.cn
    # 选择模型
    
    if __name__ == '__main__':
    	import sys
    	sys.path.append('../')
    
    import numpy
    from copy import deepcopy
    from abc import abstractmethod
    
    from setting import *
    
    class BaseChoiceModel:
    	"""选择模型基类"""
    	def __init__(self, 
    				 product_prices: numpy.ndarray, 
    				 product_valuations: numpy.ndarray, 
    				 no_purchase_valuation,
    				 offerset_capacity: int=None, 
    				 *args, **kwargs):
    		"""
    		:param product_prices		: 产品价格数组,形状为(n, ),目前不考虑价格歧视模型,因此只可能是一阶数组;
    		:param product_valuations	: 产品估值数组,形状为(-1, n),通常为一阶数组,在混合逻辑模型中每个客户类都会有一套产品估值数组,因此可能是二阶数组;
    		:param no_purchase_valuation: 不购买的估值,形状为(-1, 1),通常为标量,在混合逻辑模型中每个用户类都会有一个不购买的估值,因此可能是一阶数组;
    		:param offerset_capacity	: 报价集容量,默认值None表示无容量限制,即等于产品总数;
    		"""
    		
    		# 初始化构造参数
    		self.product_prices = deepcopy(product_prices)
    		self.product_valuations = deepcopy(product_valuations)
    		self.no_purchase_valuation = deepcopy(no_purchase_valuation)
    		
    		assert product_prices.shape[0] == product_valuations.shape[-1]							
    		self.num_product = product_prices.shape[0]													
    		self.product_ids = list(range(self.num_product))											
    		self.offerset_capacity = self.num_product if offerset_capacity is None else offerset_capacity
    	
    	def validate_offerset(self, offerset):
    		"""验证报价集的合法性"""
    		assert len(offerset) <= self.offerset_capacity
    		assert set(offerset).issubset(set(self.product_ids))
    	
    	def calc_revenue_expectation(self, offerset):
    		"""计算收益期望"""
    		if len(offerset) == 0:
    			return 0.
    		self.validate_offerset(offerset)
    		product_choice_probabilitys = self.calc_product_choice_probabilitys(offerset)
    		revenue_expectation = numpy.sum(product_choice_probabilitys * self.product_prices[offerset])
    		return revenue_expectation
    
    	@abstractmethod
    	def calc_product_choice_probabilitys(self, *args, **kwargs):
    		"""计算所有产品选择概率"""
    		raise NotImplementedError
    
    
    class MultiNomialLogit(BaseChoiceModel):
    	"""多项逻辑模型"""
    	def __init__(self, 
    				 product_prices: numpy.ndarray, 
    				 product_valuations: numpy.ndarray, 
    				 no_purchase_valuation: float,
    				 offerset_capacity: int=None):
    		"""
    		:param product_prices		: 产品价格数组,形状为(n, );
    		:param product_valuations	: 产品估值数组,形状为(n, );
    		:param no_purchase_valuation: 不购买的估值,标量;
    		:param offerset_capacity	: 报价集容量,默认值None表示无容量限制,即等于产品总数;
    		"""
    		super(MultiNomialLogit, self).__init__(product_prices=product_prices, 
    											   product_valuations=product_valuations,
    											   no_purchase_valuation=no_purchase_valuation,
    											   offerset_capacity=offerset_capacity)
    		
    	def calc_product_choice_probabilitys(self, offerset):
    		# 计算分母总估值
    		total_valuation = self.no_purchase_valuation + numpy.sum(self.product_valuations[offerset])
    		
    		# 计算每个产品的选择概率
    		product_choice_probabilitys = self.product_valuations[offerset] / total_valuation
    		return product_choice_probabilitys
    
    
    class NestedLogit2(BaseChoiceModel):
    	"""二级嵌套逻辑模型:必然存在一个只包含不购买选项的空嵌套"""
    	def __init__(self, 
    				 product_prices: numpy.ndarray, 
    				 product_valuations: numpy.ndarray, 
    				 no_purchase_valuation: float,
    				 offerset_capacity: int,
    				 nests: list,
    				 nest_dis_similaritys: numpy.ndarray,
    				 nest_no_purchase_valuations: numpy.ndarray):
    		"""
    		:param product_prices				: 产品价格数组,形状为(n, );
    		:param product_valuations			: 产品估值数组,形状为(n, );
    		:param no_purchase_valuation		: 不购买的估值,标量;
    		:param offerset_capacity			: 报价集容量,传入None表示无容量限制,即等于产品总数;
    		:param nests						: 产品嵌套,长度为m,每个元素为一个随机产品子集列表;
    		:param nest_dis_similaritys			: 嵌套相异度参数,形状为(m, );
    		:param nest_no_purchase_valuations	: 每个嵌套内的不购买选项估值,形状为(m, );
    		"""
    		super(NestedLogit2, self).__init__(product_prices=product_prices, 
    										   product_valuations=product_valuations,
    										   no_purchase_valuation=no_purchase_valuation,
    										   offerset_capacity=offerset_capacity)
    		self.nests = deepcopy(nests)
    		self.nest_dis_similaritys = deepcopy(nest_dis_similaritys)
    		self.nest_no_purchase_valuations = deepcopy(nest_no_purchase_valuations)
    
    	def classify_offerset_by_nests(self, offerset):
    		"""根据嵌套情况对给定的报价集进行划分,即将S划分为{S1, S2, S3,..., Sm}"""
    		offerset_nests = []
    		for nest in self.nests:
    			offerset_nest = []
    			for product_id in offerset:
    				if product_id in nest:
    					offerset_nest.append(product_id)
    			offerset_nests.append(offerset_nest)
    		return offerset_nests
    
    	def calc_product_choice_probabilitys(self, offerset):
    		# 生成报价集在给定嵌套下划分得到的报价子集
    		offerset_nests = self.classify_offerset_by_nests(offerset)
    		
    		# 计算每个嵌套的效用值V_i(S)
    		nest_valuations = []
    		for nest_id, offerset_nest in enumerate(offerset_nests):
    			nest_valuation = self.nest_no_purchase_valuations[nest_id] + numpy.sum(self.product_valuations[offerset_nest])
    			nest_valuations.append(numpy.power(nest_valuation, self.nest_dis_similaritys[nest_id]))
    		
    		# 计算每个嵌套的选择概率Q_i(S)
    		nest_choice_probabilitys = []
    		total_valuation = self.no_purchase_valuation + sum(nest_valuations)
    		for nest_id, offerset_nest in enumerate(offerset_nests):
    			nest_choice_probability = nest_valuations[nest_id] / total_valuation
    			nest_choice_probabilitys.append(nest_choice_probability)
    		
    		# 计算每个报价子集的总效用值V(S_i)
    		offerset_nest_total_valuations = []
    		for offerset_nest, nest_no_purchase_valuation in zip(offerset_nests, self.nest_no_purchase_valuations):
    			offerset_nest_total_valuation = nest_no_purchase_valuation + numpy.sum(self.product_valuations[offerset_nest])
    			offerset_nest_total_valuations.append(offerset_nest_total_valuation)
    
    		# 计算每个产品的选择概率Pr(j|S)
    		product_choice_probabilitys = []
    		for target_product_id in offerset:
    			product_choice_probability = 0.
    			for offerset_nest, offerset_nest_total_valuation, nest_choice_probability in zip(offerset_nests, offerset_nest_total_valuations, nest_choice_probabilitys):
    				if target_product_id in offerset_nest:
    					product_choice_probability += (nest_choice_probability * self.product_valuations[target_product_id] / offerset_nest_total_valuation)
    			product_choice_probabilitys.append(product_choice_probability)
    		return numpy.array(product_choice_probabilitys, dtype=numpy.float64)
    
    
    class MixedLogit(BaseChoiceModel):
    	"""混合逻辑模型"""
    	def __init__(self, 
    				 product_prices: numpy.ndarray, 
    				 product_valuations: numpy.ndarray, 
    				 no_purchase_valuation: numpy.ndarray,
    				 offerset_capacity: int,
    				 class_weight: numpy.ndarray):
    		"""
    		:param product_prices			: 产品价格数组,形状为(n, );
    		:param product_valuations		: 产品估值数组,形状为(k, n);
    		:param no_purchase_valuation	: 不购买的估值,形状为(k, );
    		:param offerset_capacity		: 报价集容量,传入None表示无容量限制,即等于产品总数;
    		:param class_weight				: 客户类别的权重,形状为(k, );
    		"""
    		super(MixedLogit, self).__init__(product_prices=product_prices, 
    										 product_valuations=product_valuations,
    										 no_purchase_valuation=no_purchase_valuation,
    										 offerset_capacity=offerset_capacity)
    		self.class_weight = deepcopy(class_weight)
    
    	def calc_product_choice_probabilitys(self, offerset):
    		# 计算每个客户类的分母总估值,形状为(k, )
    		total_valuation = self.no_purchase_valuation + numpy.sum(self.product_valuations[:, offerset], axis=-1)
    		
    		# 计算每个产品的选择概率
    		product_choice_probabilitys = numpy.dot(self.class_weight, self.product_valuations[:, offerset] / total_valuation[:, None])
    		return product_choice_probabilitys
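对NestedLogit2的实现可以做一个简单的合法性检查:在二级嵌套Logit模型中,所有产品的选择概率、顶层不购买概率、以及分摊到各嵌套内部的不购买概率,三者之和应恰好为1。下面是一个脱离项目依赖的最小脚本(纯Python实现,参数为随手构造的假设数值,仅作示意):

```python
import math

# 假设的二级嵌套Logit小实例(数值为随手构造)
vals = [1.5, 1.2, 0.8, 2.0]     # 产品估值
nests = [[0, 1], [2, 3]]        # 两个嵌套
gammas = [0.7, 0.9]             # 相异度参数(均小于1)
nest_v0 = [0.5, 0.3]            # 每个嵌套内的不购买估值
v0 = 1.0                        # 顶层不购买估值

def choice_probs(offerset):
    # 按嵌套划分报价集S -> {S_1, S_2}
    subs = [[j for j in offerset if j in nest] for nest in nests]
    # 嵌套效用V_i(S)及其吸引力V_i(S)^gamma_i
    V = [nv + sum(vals[j] for j in sub) for sub, nv in zip(subs, nest_v0)]
    W = [v ** g for v, g in zip(V, gammas)]
    denom = v0 + sum(W)
    Q = [w / denom for w in W]  # 嵌套选择概率Q_i(S)
    # Pr(j|S) = Q_i(S) * v_j / V_i(S)
    probs = {j: Qi * vals[j] / Vi for sub, Vi, Qi in zip(subs, V, Q) for j in sub}
    # 未购买的总概率 = 顶层不购买 + 各嵌套内部的不购买
    no_purchase = v0 / denom + sum(Qi * nv / Vi for Qi, nv, Vi in zip(Q, nest_v0, V))
    return probs, no_purchase

probs, no_purchase = choice_probs([0, 2, 3])
```

这里的V_i(S)、V_i(S)^γ_i、Q_i(S)、Pr(j|S)与src/choice_model.py中calc_product_choice_probabilitys的计算步骤一一对应,可以用概率和恒等于1这一性质来交叉验证实现的正确性。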
    
    
    
  7. src/evaluation_tools.py:给定算法与模型进行仿真模拟评估;

    # -*- coding: utf-8 -*-
    # @author: caoyang
    # @email: caoyang@163.sufe.edu.cn
    # 用于算法评估的工具函数
    if __name__ == '__main__':
    	import sys
    	sys.path.append('../')
    
    import os
    import time
    import json
    import pandas
    
    from config import *
    from setting import *
    
    from src.algorithm import BaseAlgorithm, NaiveGreedy, GreedyOpt, ADXOpt2014, ADXOpt2016
    from src.choice_model import MultiNomialLogit, NestedLogit2, MixedLogit
    from src.simulation_tools import generate_params_for_MNL, generate_params_for_NL2, generate_params_for_ML, generate_model_instance_and_solve
    from src.utils import load_args, save_args
    	
    # 仿真模拟
    # :param model_name		: 模型名称,目前只考虑{'mnl', 'nl', 'ml'};
    # :param model_args		: 模型配置,可用于生成不同的模型参数;
    # :param algorithm_name	: 算法名称,这个参数其实并不重要,只用于文件命名,因为目前所有算法已经可以在统一的BaseAlgorithm框架下实现,只需要修改算法配置即可实现不同的算法;
    # :param algorithm_args	: 算法配置,关键参数,设置不同的配置可以实现很多不同的算法;
    # :param n_sample		: 模型实例仿真次数;
    # :param do_export		: 是否导出详细评估结果;
    def evaluate(model_name, model_args, algorithm_name, algorithm_args, n_sample=1000, do_export=True):
    	# 模型设定
    	model_name = model_name.replace(' ', '').lower()
    	assert model_name in MODEL_MAPPING
    	Model = eval(MODEL_MAPPING[model_name]['class'])
    	generate_params_function = eval(MODEL_MAPPING[model_name]['param'])
    	
    	# 算法设定
    	algorithm_name = algorithm_name.replace(' ', '').lower()
    	algorithm = BaseAlgorithm(load_args(BaseAlgorithmConfig), **algorithm_args)
    	
    	# 用于评估记录的字典
    	evaluation_dict = {
    		'optimal': [],	# 记录每一次仿真是否收敛到全局最优
    		'max_revenue': [],	# 最大收益
    		'gap': [],			# 记录每一次仿真算法输出与全局最优之间在收益上的差距
    	}
    	
    	start_time = time.time()
    	for _ in range(n_sample):
    		# 随机生成模型参数与模型实例
    		model_params = generate_params_function(model_args)
    		model = Model(**model_params)
    		
    		# 穷举精确求解所有的最优解
    		max_revenue, optimal_solutions = BaseAlgorithm.bruteforce(model=model, 
    																  min_size=1, 
    																  max_size=model.offerset_capacity)
    		
    		# 算法求解
    		output_max_revenue, output_optimal_solution = algorithm.run(model)
    		
    		if set(output_optimal_solution) in list(map(set, optimal_solutions)):
    			# 算法收敛到全局最优解
    			assert abs(max_revenue - output_max_revenue) < EPSILON, f'Untolerable error between {max_revenue} and {output_max_revenue}'
    			optimal = 1
    			gap = 0
    		else:
    			# 算法未收敛到全局最优解
    			assert max_revenue > output_max_revenue - EPSILON
    			optimal = 0
    			gap = max_revenue - output_max_revenue
    		evaluation_dict['optimal'].append(optimal)
    		evaluation_dict['max_revenue'].append(max_revenue)
    		evaluation_dict['gap'].append(gap)	
    	end_time = time.time()
    	
    	evaluation_dataframe = pandas.DataFrame(evaluation_dict, columns=list(evaluation_dict.keys()))
    	result = analysis(evaluation_dataframe)
    	result['time'] = end_time - start_time
    	result['n_sample'] = n_sample
    	if do_export:
    		dirname = f'{model_name}-{algorithm_name}-{n_sample}-{int(time.time()) % 100000}'
    		root = os.path.join(LOGGING_FOLDER, dirname)
    		os.makedirs(root, exist_ok=True)
    		
    		# Export the model configuration
    		save_args(args=model_args, save_path=os.path.join(root, 'model_args.json'))
    		
    		# Export the algorithm configuration
    		# save_args(args=algorithm.algorithm_args, save_path=os.path.join(root, 'algorithm_args.json'))
    		with open(os.path.join(root, 'algorithm_args.json'), 'w') as f:
    			json.dump(algorithm_args, f, indent=4)
    		
    		# Export the simulation results
    		evaluation_dataframe.to_csv(os.path.join(root, 'evaluation.csv'), header=True, index=False, sep='\t')
    		
    		# Export the analysis results
    		result['model_name'] = model_name
    		result['algorithm_name'] = algorithm_name
    		result['dirname'] = dirname	# for easy lookup
    		with open(os.path.join(root, 'result.json'), 'w') as f:
    			json.dump(result, f, indent=4)
    		
    	
    	return evaluation_dataframe, result
    
    # Simulation function improved on 2021-12-06: evaluates multiple algorithms under the same model configuration. Brute-forcing the optimum is the expensive step, so sharing one model instance across all algorithms saves time and is also fairer.
    # :param model_name				: model name; currently only {'mnl', 'nl', 'ml'} are supported;
    # :param model_args				: model configuration, used to generate different model parameters;
    # :param algorithm_name_list	: list of algorithm names; only used for file naming, since all algorithms are implemented under the unified BaseAlgorithm framework and differ only in their configuration;
    # :param algorithm_args_list	: list of algorithm configurations, the key parameter; different settings yield different algorithms;
    # :param n_sample				: number of simulated model instances;
    # :param do_export				: whether to export detailed evaluation results;
    def evaluate_new(model_name, model_args, algorithm_name_list, algorithm_args_list, n_sample=1000, do_export=True):
    	# Model setup
    	model_name = model_name.replace(' ', '').lower()
    	assert model_name in MODEL_MAPPING
    	Model = eval(MODEL_MAPPING[model_name]['class'])
    	generate_params_function = eval(MODEL_MAPPING[model_name]['param'])
    	
    	# Algorithm setup
    	algorithm_dict = {}
    	for algorithm_name, algorithm_args in zip(algorithm_name_list, algorithm_args_list):
    		algorithm_name = algorithm_name.replace(' ', '').lower()
    		algorithm = BaseAlgorithm(load_args(BaseAlgorithmConfig), **algorithm_args)
    		algorithm_dict[algorithm_name] = algorithm
    	
    	# Dictionary for recording evaluation results
    	evaluation_dict = {}
    	for algorithm_name in algorithm_name_list:
    		evaluation_dict[algorithm_name] = {
    			'optimal': [],		# whether each simulation converged to a global optimum
    			'max_revenue': [],	# maximum revenue
    			'gap': [],			# revenue gap between the algorithm's output and the global optimum in each simulation
    			'time': [],
    		}
    	
    	for model, max_revenue, optimal_solutions in generate_model_instance_and_solve(model_name=model_name, 
    																				   model_args=model_args, 
    																				   n_sample=n_sample):
    		# Solve one model instance with every algorithm
    		for algorithm_name, algorithm in algorithm_dict.items():
    			# Solve with the algorithm
    			start_time = time.time()
    			output_max_revenue, output_optimal_solution = algorithm.run(model)
    			end_time = time.time()
    			if set(output_optimal_solution) in list(map(set, optimal_solutions)):
    				# The algorithm converged to a global optimum
    				assert abs(max_revenue - output_max_revenue) < EPSILON, f'Untolerable error between {max_revenue} and {output_max_revenue}'
    				optimal = 1
    				gap = 0
    			else:
    				# The algorithm did not converge to a global optimum
    				assert max_revenue > output_max_revenue - EPSILON
    				optimal = 0
    				gap = max_revenue - output_max_revenue
    			_time = end_time - start_time
    			evaluation_dict[algorithm_name]['optimal'].append(optimal)
    			evaluation_dict[algorithm_name]['max_revenue'].append(max_revenue)
    			evaluation_dict[algorithm_name]['gap'].append(gap)	
    			evaluation_dict[algorithm_name]['time'].append(_time)	
    	
    	results = {}
    	for algorithm_name, algorithm_args in zip(algorithm_name_list, algorithm_args_list):
    		evaluation_dataframe = pandas.DataFrame(evaluation_dict[algorithm_name], columns=list(evaluation_dict[algorithm_name].keys()))
    		result = analysis(evaluation_dataframe)
    		result['time'] = evaluation_dataframe['time'].sum()
    		result['n_sample'] = n_sample
    		if do_export:
    			dirname = f'{model_name}-{algorithm_name}-{n_sample}-{int(time.time()) % 100000}'
    			root = os.path.join(LOGGING_FOLDER, dirname)
    			os.makedirs(root, exist_ok=True)
    			
    			# Export the model configuration
    			save_args(args=model_args, save_path=os.path.join(root, 'model_args.json'))
    			
    			# Export the algorithm configuration
    			# save_args(args=algorithm.algorithm_args, save_path=os.path.join(root, 'algorithm_args.json'))
    			with open(os.path.join(root, 'algorithm_args.json'), 'w') as f:
    				json.dump(algorithm_args, f, indent=4)
    			
    			# Export the simulation results
    			evaluation_dataframe.to_csv(os.path.join(root, 'evaluation.csv'), header=True, index=False, sep='\t')
    			
    			# Export the analysis results
    			result['model_name'] = model_name
    			result['algorithm_name'] = algorithm_name
    			result['dirname'] = dirname	# for easy lookup
    			with open(os.path.join(root, 'result.json'), 'w') as f:
    				json.dump(result, f, indent=4)
    		results[algorithm_name] = result
    
    	return evaluation_dict, results
    
    
    # Compute summary statistics from the evaluation_dataframe produced by evaluate
    def analysis(evaluation_dataframe):
    	evaluation_dataframe_optimal = evaluation_dataframe[evaluation_dataframe['optimal']==1]
    	evaluation_dataframe_unoptimal = evaluation_dataframe[evaluation_dataframe['optimal']==0]
    	gap_ratio_all = evaluation_dataframe['gap'] / evaluation_dataframe['max_revenue']
    	gap_ratio_nonopt = evaluation_dataframe_unoptimal['gap'] / evaluation_dataframe_unoptimal['max_revenue']
    	
    	total_case = evaluation_dataframe.shape[0]
    	num_optimal_case = evaluation_dataframe_optimal.shape[0]
    	num_unoptimal_case = evaluation_dataframe_unoptimal.shape[0]
    	percentage_of_optimal_instances = num_optimal_case / total_case
    	average_gap_ratio_all = gap_ratio_all.mean()
    	average_gap_ratio_nonopt = 0. if evaluation_dataframe_unoptimal.shape[0] == 0 else gap_ratio_nonopt.mean()
    
    	return {
    		'num_optimal_case': num_optimal_case,
    		'num_unoptimal_case': num_unoptimal_case,
    		'percentage_of_optimal_instances': percentage_of_optimal_instances,
    		'average_gap_ratio_all': average_gap_ratio_all,
    		'average_gap_ratio_nonopt': average_gap_ratio_nonopt,
    	}
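The gap statistics computed by `analysis` can be reproduced on a toy set of records; the following stdlib-only sketch mirrors the pandas logic above (the record values are invented for illustration):

```python
# Toy evaluation records: (optimal, max_revenue, gap), values invented for illustration
records = [(1, 10.0, 0.0), (1, 20.0, 0.0), (0, 40.0, 4.0)]

total_case = len(records)
nonopt = [rec for rec in records if rec[0] == 0]

# Same three statistics as analysis() above, computed without pandas
percentage_of_optimal_instances = sum(opt for opt, _, _ in records) / total_case
average_gap_ratio_all = sum(gap / rev for _, rev, gap in records) / total_case
average_gap_ratio_nonopt = 0. if not nonopt else sum(gap / rev for _, rev, gap in nonopt) / len(nonopt)

print(percentage_of_optimal_instances, average_gap_ratio_all, average_gap_ratio_nonopt)
```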
    
    
  8. src/plot_tools.py: generates visualizations of the experimental results; the functions here correspond one-to-one to those in manage.py;

    # -*- coding: utf-8 -*-
    # @author: caoyang
    # @email: caoyang@163.sufe.edu.cn
    # Utility functions for plotting
    if __name__ == '__main__':
    	import sys
    	sys.path.append('../')
    
    import os
    import json
    import pandas
    
    from matplotlib import pyplot
    
    from setting import *
    
    # Analyze the results of the four functions main, main_offerset_capacity, main_max_addition_or_removal and main_block_size in manage.py
    def plot_main_1_to_4(do_export=True, root=TEMP_FOLDER):
    	paths = [
    		os.path.join(root, 'summary.json'),
    		os.path.join(root, 'summary_offerset_capacity.json'),
    		os.path.join(root, 'summary_max_addition_or_removal.json'),
    		os.path.join(root, 'summary_block_size.json'),
    	]	
    	summary_dict = {
    		'model_name': [],
    		'algorithm_name': [],
    		'block_size': [],
    		'max_addition_or_removal': [],
    		'offerset_capacity': [],
    		'percentage_of_optimal_instances': [],
    		'average_gap_ratio_all': [],
    		'average_gap_ratio_nonopt': [],
    		'time': [],
    	}	
    		
    	for path in paths:
    		summary = json.load(open(path, 'r'))
    		for model_name in MODEL_MAPPING.keys():
    			for algorithm_name in ALGORITHM_NAMES:
    				results = summary[model_name][algorithm_name]
    				for result in results:
    					percentage_of_optimal_instances = round(result['percentage_of_optimal_instances'], 3)
    					average_gap_ratio_nonopt = round(result['average_gap_ratio_nonopt'], 5)
    					average_gap_ratio_all = round(result['average_gap_ratio_all'], 8)
    					_time = round(result['time'], 1)
    					
    					offerset_capacity = result.get('offerset_capacity', 10)
    					block_size = result.get('block_size', 1)
    					max_addition_or_removal = result.get('max_addition_or_removal', 1.)
    					
    					summary_dict['model_name'].append(model_name)
    					summary_dict['algorithm_name'].append(algorithm_name)
    					summary_dict['block_size'].append(block_size)
    					summary_dict['max_addition_or_removal'].append(max_addition_or_removal)
    					summary_dict['offerset_capacity'].append(offerset_capacity)
    					summary_dict['percentage_of_optimal_instances'].append(percentage_of_optimal_instances)
    					summary_dict['average_gap_ratio_all'].append(average_gap_ratio_all)
    					summary_dict['average_gap_ratio_nonopt'].append(average_gap_ratio_nonopt)
    					summary_dict['time'].append(_time)
    	
    	summary_dataframe = pandas.DataFrame(summary_dict, columns=list(summary_dict.keys()))
    	if do_export:
    		summary_dataframe.to_csv('summary1.csv', header=True, index=False, sep=',')
    	
    	
    	# Analyze running time
    	
    	new_summary = {
    		'model_name': [],
    		'block_size': [],
    		'max_addition_or_removal': [],
    		'offerset_capacity': [],
    		'total_time_backward': [],
    		'total_time_forward': [],
    		'average_percentage_of_optimal_instances_backward': [],
    		'average_percentage_of_optimal_instances_forward': [],
    	}
    	
    	for (model_name, 
    		 block_size, 
    		 max_addition_or_removal, 
    		 offerset_capacity), _summary_dataframe in summary_dataframe.groupby(['model_name', 
    																			  'block_size', 
    																			  'max_addition_or_removal', 
    																			  'offerset_capacity']):
    		print(model_name, block_size, max_addition_or_removal, offerset_capacity)		
    		_summary_dataframe = _summary_dataframe.reset_index(drop=True)
    		_summary_dataframe['suffix'] = _summary_dataframe['algorithm_name'].map(lambda x: x.split('_')[-1])
    		
    		total_time_backward = _summary_dataframe[_summary_dataframe['suffix'] == 'backward']['time'].sum()
    		total_time_forward = _summary_dataframe[_summary_dataframe['suffix'] == 'forward']['time'].sum()
    		average_percentage_of_optimal_instances_backward = _summary_dataframe[_summary_dataframe['suffix'] == 'backward']['percentage_of_optimal_instances'].mean()
    		average_percentage_of_optimal_instances_forward = _summary_dataframe[_summary_dataframe['suffix'] == 'forward']['percentage_of_optimal_instances'].mean()
    		
    		new_summary['model_name'].append(model_name)														  
    		new_summary['block_size'].append(block_size)														  
    		new_summary['max_addition_or_removal'].append(max_addition_or_removal)														  
    		new_summary['offerset_capacity'].append(offerset_capacity)	
    		new_summary['total_time_backward'].append(total_time_backward)	
    		new_summary['total_time_forward'].append(total_time_forward)	
    		new_summary['average_percentage_of_optimal_instances_backward'].append(average_percentage_of_optimal_instances_backward)	
    		new_summary['average_percentage_of_optimal_instances_forward'].append(average_percentage_of_optimal_instances_forward)	
    															  
    		# print(_summary_dataframe[['algorithm_name', 'percentage_of_optimal_instances', 'average_gap_ratio_all', 'average_gap_ratio_nonopt', 'time']])
    		# print('#' * 64)
    	
    	# to_csv returns None when given a path, so wrapping it in print would just print None
    	pandas.DataFrame(new_summary, columns=list(new_summary.keys())).to_csv('summary1.1.csv', sep=',', header=True, index=False)
    	
    	return summary_dataframe
    
    # Analyze the results of main_adxopt2014_for_nl2_further_analysis in manage.py
    def plot_main_adxopt2014_for_nl2_further_analysis(do_export=True, root=TEMP_FOLDER):
    	paths = [
    		os.path.join(root, 'summary_adxopt2014_forward_for_nl2_further_analysis_1.json'),
    		os.path.join(root, 'summary_adxopt2014_forward_for_nl2_further_analysis_2.json'),
    		os.path.join(root, 'summary_adxopt2014_backward_for_nl2_further_analysis_1.json'),
    		os.path.join(root, 'summary_adxopt2014_backward_for_nl2_further_analysis_2.json'),
    	]
    	summary_dict = {
    		'algorithm_name': [],
    		'block_size': [],
    		'num_nest': [],
    		'min_dis_similarity': [],
    		'max_dis_similarity': [],
    		'exist_no_purchase_per_nest': [],
    		'allow_nest_repetition': [],
    
    		'percentage_of_optimal_instances': [],
    		'average_gap_ratio_all': [],
    		'average_gap_ratio_nonopt': [],
    		'time': [],
    	}		
    
    	
    	for path in paths:
    		summary = json.load(open(path, 'r'))
    		block_size = int(path[-6])
    		algorithm_name = '_'.join(path.split('_')[1: 3])
    		
    		for result in summary:
    			num_nest = result['num_nest']
    			min_dis_similarity = result['min_dis_similarity']
    			max_dis_similarity = result['max_dis_similarity']
    			exist_no_purchase_per_nest = int(result['exist_no_purchase_per_nest'])
    			allow_nest_repetition = int(result['allow_nest_repetition'])
    			percentage_of_optimal_instances = round(result['percentage_of_optimal_instances'], 3)
    			average_gap_ratio_nonopt = round(result['average_gap_ratio_nonopt'], 5)
    			average_gap_ratio_all = round(result['average_gap_ratio_all'], 8)
    			_time = round(result['time'], 1)
    			
    			summary_dict['algorithm_name'].append(algorithm_name)
    			summary_dict['block_size'].append(block_size)
    			summary_dict['num_nest'].append(num_nest)
    			summary_dict['min_dis_similarity'].append(min_dis_similarity)
    			summary_dict['max_dis_similarity'].append(max_dis_similarity)
    			summary_dict['exist_no_purchase_per_nest'].append(exist_no_purchase_per_nest)
    			summary_dict['allow_nest_repetition'].append(allow_nest_repetition)
    			summary_dict['percentage_of_optimal_instances'].append(percentage_of_optimal_instances)
    			summary_dict['average_gap_ratio_all'].append(average_gap_ratio_all)
    			summary_dict['average_gap_ratio_nonopt'].append(average_gap_ratio_nonopt)
    			summary_dict['time'].append(_time)
    		
    	summary_dataframe = pandas.DataFrame(summary_dict, columns=list(summary_dict.keys()))
    	if do_export:
    		summary_dataframe.to_csv('summary2.csv', header=True, index=False, sep=',')
    	
    	
    	for (num_nest, 
    		 min_dis_similarity, 
    		 max_dis_similarity, 
    		 exist_no_purchase_per_nest,
    		 allow_nest_repetition), _summary_dataframe in summary_dataframe.groupby(['num_nest', 
    																				  'min_dis_similarity', 
    																				  'max_dis_similarity', 
    																				  'exist_no_purchase_per_nest',
    																				  'allow_nest_repetition']):
    		print(num_nest, min_dis_similarity, max_dis_similarity, exist_no_purchase_per_nest, allow_nest_repetition)																  
    		print(_summary_dataframe[['algorithm_name', 'block_size', 'percentage_of_optimal_instances', 'average_gap_ratio_all', 'average_gap_ratio_nonopt', 'time']])
    		print('#' * 64)
    	
    	return summary_dataframe
    	
    def plot_main_adxopt2014_for_nl2_further_analysis_new(do_export=True, root=TEMP_FOLDER):
    	
    	path = os.path.join(root, 'summary_adxopt2014_for_nl2_further_analysis_all.json')
    	summary = json.load(open(path, 'r'))
    
    	summary_dict = {
    		'algorithm_name': [],
    		'block_size': [],
    		'num_nest': [],
    		'min_dis_similarity': [],
    		'max_dis_similarity': [],
    		'exist_no_purchase_per_nest': [],
    		'allow_nest_repetition': [],
    		'percentage_of_optimal_instances': [],
    		'average_gap_ratio_all': [],
    		'average_gap_ratio_nonopt': [],
    		'time': [],
    	}	
    	
    	for result in summary:
    		num_nest = result['num_nest']
    		min_dis_similarity = result['min_dis_similarity']
    		max_dis_similarity = result['max_dis_similarity']
    		exist_no_purchase_per_nest = result['exist_no_purchase_per_nest']
    		allow_nest_repetition = result['allow_nest_repetition']
    
    		
    		for full_algorithm_name in result['results']:
    			algorithm_name = '_'.join(full_algorithm_name.split('_')[: -1])
    			block_size = int(full_algorithm_name.split('_')[-1])
    			percentage_of_optimal_instances = round(result['results'][full_algorithm_name]['percentage_of_optimal_instances'], 3)
    			average_gap_ratio_all = round(result['results'][full_algorithm_name]['average_gap_ratio_all'], 5)
    			average_gap_ratio_nonopt = round(result['results'][full_algorithm_name]['average_gap_ratio_nonopt'], 8)
    			_time = round(result['results'][full_algorithm_name]['time'], 1)
    			
    			summary_dict['algorithm_name'].append(algorithm_name)
    			summary_dict['block_size'].append(block_size)
    			summary_dict['num_nest'].append(num_nest)
    			summary_dict['min_dis_similarity'].append(min_dis_similarity)
    			summary_dict['max_dis_similarity'].append(max_dis_similarity)
    			summary_dict['exist_no_purchase_per_nest'].append(exist_no_purchase_per_nest)
    			summary_dict['allow_nest_repetition'].append(allow_nest_repetition)
    			summary_dict['percentage_of_optimal_instances'].append(percentage_of_optimal_instances)
    			summary_dict['average_gap_ratio_all'].append(average_gap_ratio_all)
    			summary_dict['average_gap_ratio_nonopt'].append(average_gap_ratio_nonopt)
    			summary_dict['time'].append(_time)
    
    	summary_dataframe = pandas.DataFrame(summary_dict, columns=list(summary_dict.keys()))
    	if do_export:
    		summary_dataframe.to_csv('summary3.csv', header=True, index=False, sep=',')
    
    	for (num_nest, 
    		 min_dis_similarity, 
    		 max_dis_similarity, 
    		 exist_no_purchase_per_nest,
    		 allow_nest_repetition), _summary_dataframe in summary_dataframe.groupby(['num_nest', 
    																				  'min_dis_similarity', 
    																				  'max_dis_similarity', 
    																				  'exist_no_purchase_per_nest',
    																				  'allow_nest_repetition']):
    		print(num_nest, min_dis_similarity, max_dis_similarity, exist_no_purchase_per_nest, allow_nest_repetition)																  
    		print(_summary_dataframe[['algorithm_name', 'block_size', 'percentage_of_optimal_instances', 'average_gap_ratio_all', 'average_gap_ratio_nonopt', 'time']])
    		print('#' * 64)
    
    	return summary_dataframe
    	
    
    
  9. src/simulation_tools.py: randomly generates model parameters and algorithm configurations;

    # -*- coding: utf-8 -*-
    # @author: caoyang
    # @email: caoyang@163.sufe.edu.cn
    # Utility functions for simulation
    if __name__ == '__main__':
        import sys
        sys.path.append('../')
    
    import numpy
    from copy import deepcopy
    
    from setting import *
    from src.algorithm import BaseAlgorithm, NaiveGreedy, GreedyOpt, ADXOpt2014, ADXOpt2016
    from src.choice_model import MultiNomialLogit, NestedLogit2, MixedLogit
    from src.utils import random_split
    
    # Parameter generation for the multinomial logit (MNL) model
    def generate_params_for_MNL(args):
    	# Extract configuration parameters
    	num_product = args.num_product
    	offerset_capacity = args.offerset_capacity
    	max_product_price = args.max_product_price
    	min_product_price = args.min_product_price
    	max_product_valuation = args.max_product_valuation
    	min_product_valuation = args.min_product_valuation
    	
    	# Randomly generate model parameters
    	product_prices = (max_product_price - min_product_price) * numpy.random.rand(num_product) + min_product_price					# product prices
    	product_valuations = (max_product_valuation - min_product_valuation) * numpy.random.rand(num_product) + min_product_valuation	# product valuations
    	no_purchase_valuation = (max_product_valuation - min_product_valuation) * numpy.random.random() + min_product_valuation			# no-purchase valuation
    	
    	# Pack parameters
    	params = {
    		'product_prices'		: product_prices,
    		'product_valuations'	: product_valuations,
    		'no_purchase_valuation'	: no_purchase_valuation,
    		'offerset_capacity'		: offerset_capacity,
    	}
    	return params
    
    # Parameter generation for the two-level nested logit (NL2) model
    def generate_params_for_NL2(args):
    	# Extract configuration parameters
    	num_product = args.num_product
    	offerset_capacity = args.offerset_capacity
    	max_product_price = args.max_product_price
    	min_product_price = args.min_product_price
    	max_product_valuation = args.max_product_valuation
    	min_product_valuation = args.min_product_valuation
    	
    	num_nest = args.num_nest
    	max_dis_similarity = args.max_dis_similarity
    	min_dis_similarity = args.min_dis_similarity
    	exist_no_purchase_per_nest = args.exist_no_purchase_per_nest
    	allow_nest_repetition = args.allow_nest_repetition
    	
    	# Randomly generate model parameters
    	product_prices = (max_product_price - min_product_price) * numpy.random.rand(num_product) + min_product_price					# product prices
    	product_valuations = (max_product_valuation - min_product_valuation) * numpy.random.rand(num_product) + min_product_valuation	# product valuations
    	no_purchase_valuation = (max_product_valuation - min_product_valuation) * numpy.random.random() + min_product_valuation			# no-purchase valuation
    	nest_dis_similaritys = (max_dis_similarity - min_dis_similarity) * numpy.random.random(num_nest) + min_dis_similarity			# nest dissimilarity parameters
    	if exist_no_purchase_per_nest:																									# per-nest no-purchase valuations
    		nest_no_purchase_valuations = (max_product_valuation - min_product_valuation) * numpy.random.rand(num_nest) + min_product_valuation  
    	else: 
    		nest_no_purchase_valuations = numpy.zeros((num_nest, ))
    	
    	if allow_nest_repetition:											# random grouping allowing a product to appear in multiple nests
    		nests = [numpy.random.choice(a=list(range(num_product)), 
    								     size=numpy.random.randint(1, num_product + 1), 
    								     replace=False) for _ in range(num_nest)]
    	else:																# random grouping where each product belongs to exactly one nest
    		nests = random_split(array=list(range(num_product)), 
    							 n_splits=num_nest,
    							 do_shuffle=True,
    							 do_balance=False)	
    
    	# Pack parameters
    	params = {
    		'product_prices'				: product_prices,
    		'product_valuations'			: product_valuations,	
    		'no_purchase_valuation'			: no_purchase_valuation,
    		'offerset_capacity'				: offerset_capacity,
    		'nests'							: nests,
    		'nest_dis_similaritys'			: nest_dis_similaritys,
    		'nest_no_purchase_valuations'	: nest_no_purchase_valuations,
    	}
    	return params
    
    # Parameter generation for the mixed logit (ML) model
    def generate_params_for_ML(args):
    	# Extract configuration parameters
    	num_product = args.num_product
    	offerset_capacity = args.offerset_capacity
    	max_product_price = args.max_product_price
    	min_product_price = args.min_product_price
    	max_product_valuation = args.max_product_valuation
    	min_product_valuation = args.min_product_valuation
    	
    	num_class = args.num_class
    	
    	# Randomly generate model parameters
    	product_prices = (max_product_price - min_product_price) * numpy.random.rand(num_product) + min_product_price								# product prices
    	product_valuations = (max_product_valuation - min_product_valuation) * numpy.random.rand(num_class, num_product) + min_product_valuation	# product valuations per class
    	no_purchase_valuation = (max_product_valuation - min_product_valuation) * numpy.random.rand(num_class) + min_product_valuation				# no-purchase valuation per class
    	sorted_random_array = numpy.sort(numpy.random.rand(num_class - 1))
    	_class_weight = sorted_random_array[1: ] - sorted_random_array[: -1]
    	_class_weight = numpy.append(_class_weight, sorted_random_array[0])
    	class_weight = numpy.append(_class_weight, 1 - sorted_random_array[-1])
    	
    	# Pack parameters
    	params = {
    		'product_prices'		: product_prices,
    		'product_valuations'	: product_valuations,	
    		'no_purchase_valuation'	: no_purchase_valuation,
    		'offerset_capacity'		: offerset_capacity,
    		'class_weight'			: class_weight,
    	}
    	return params
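The `class_weight` construction above is the standard sorted-uniform-cut-points method for drawing a random point from the probability simplex; below is a stdlib-only sketch of the same idea (the numpy version above collects the segments in a different order, but the resulting distribution is identical):

```python
import random

random.seed(0)
num_class = 5

# num_class - 1 sorted cut points in (0, 1) split the unit interval into
# num_class segments whose lengths are nonnegative and sum to 1
cuts = sorted(random.random() for _ in range(num_class - 1))
points = [0.0] + cuts + [1.0]
class_weight = [right - left for left, right in zip(points, points[1:])]

print(class_weight)
```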
    
    # Generator that randomly creates model instances under the given configuration and solves them to optimality
    def generate_model_instance_and_solve(model_name, model_args, n_sample=1000):
    	model_name = model_name.replace(' ', '').lower()
    	assert model_name in MODEL_MAPPING
    	Model = eval(MODEL_MAPPING[model_name]['class'])
    	generate_params_function = eval(MODEL_MAPPING[model_name]['param'])
    		
    	for _ in range(n_sample):
    		# Randomly generate model parameters and instantiate the model
    		model_params = generate_params_function(model_args)
    		model = Model(**model_params)
    		
    		# Enumerate all offer sets to find every exact optimal solution
    		max_revenue, optimal_solutions = BaseAlgorithm.bruteforce(model=model, 
    																  min_size=1, 
    																  max_size=model.offerset_capacity)
    		
    		yield model, max_revenue, optimal_solutions
    		
    		
    # Algorithm instantiation: build the configuration dict for a named algorithm
    def generate_algorithm_args(algorithm_name, **kwargs):
    	if algorithm_name == 'naivegreedy_forward':
    		# Naive forward greedy algorithm
    		params = {
    			'do_add'					: True,
    			'do_add_first'				: True,
    			'do_delete'					: False,
    			'do_delete_first'			: False,
    			'do_exchange'				: False,
    			'max_removal'				: 0.,
    			'max_addition'				: float('inf'),
    			'initial_size'				: kwargs.get('initial_size', 0),
    			'addable_block_size'		: kwargs.get('addable_block_size', 1),
    			'deleteable_block_size'		: 1,
    			'exchangeable_block_size'	: 1,
    		}
    
    	elif algorithm_name == 'naivegreedy_backward':
    		# Naive backward greedy algorithm
    		params = {
    			'do_add'					: False,
    			'do_add_first'				: False,
    			'do_delete'					: True,
    			'do_delete_first'			: True,
    			'do_exchange'				: False,
    			'max_removal'				: float('inf'),
    			'max_addition'				: 0.,
    			'initial_size'				: kwargs.get('initial_size', -1),	# -1 starts the search from the full set; -2 starts from a subset one element smaller, and so on
    			'addable_block_size'		: 1,
    			'deleteable_block_size'		: kwargs.get('deleteable_block_size', 1),
    			'exchangeable_block_size'	: 1,
    		}
    
    	elif algorithm_name == 'greedyopt_forward':
    		# Forward GreedyOpt algorithm (2011)
    		params = {
    			'do_add'					: True,
    			'do_add_first'				: True,
    			'do_delete'					: False,
    			'do_delete_first'			: False,
    			'do_exchange'				: True,
    			'max_removal'				: kwargs.get('max_removal', 1),
    			'max_addition'				: float('inf'),
    			'initial_size'				: kwargs.get('initial_size', 0),
    			'addable_block_size'		: kwargs.get('addable_block_size', 1),
    			'deleteable_block_size'		: 1,
    			'exchangeable_block_size'	: kwargs.get('exchangeable_block_size', 1),
    		}
    
    	elif algorithm_name == 'greedyopt_backward':
    		# Backward GreedyOpt algorithm (2011)
    		params = {
    			'do_add'					: False,
    			'do_add_first'				: False,
    			'do_delete'					: True,
    			'do_delete_first'			: True,
    			'do_exchange'				: True,
    			'max_removal'				: float('inf'),
    			'max_addition'				: kwargs.get('max_addition', 1),
    			'initial_size'				: kwargs.get('initial_size', -1),
    			'addable_block_size'		: 1,
    			'deleteable_block_size'		: kwargs.get('deleteable_block_size', 1),
    			'exchangeable_block_size'	: kwargs.get('exchangeable_block_size', 1),
    		}
    	
    	elif algorithm_name == 'adxopt2014_forward':
    		# Forward ADXOpt algorithm (2014)
    		params = {
    			'do_add'					: True,
    			'do_add_first'				: False,
    			'do_delete'					: True,
    			'do_delete_first'			: False,
    			'do_exchange'				: True,
    			'max_removal'				: kwargs.get('max_removal', 1),
    			'max_addition'				: float('inf'),
    			'initial_size'				: kwargs.get('initial_size', 0),
    			'addable_block_size'		: kwargs.get('addable_block_size', 1),
    			'deleteable_block_size'		: kwargs.get('deleteable_block_size', 1),
    			'exchangeable_block_size'	: kwargs.get('exchangeable_block_size', 1),
    		}
    		
    	elif algorithm_name == 'adxopt2014_backward':
    		# Backward ADXOpt algorithm (2014)
    		params = {
    			'do_add'					: True,
    			'do_add_first'				: False,
    			'do_delete'					: True,
    			'do_delete_first'			: False,
    			'do_exchange'				: True,
    			'max_removal'				: float('inf'),
    			'max_addition'				: kwargs.get('max_addition', 1),
    			'initial_size'				: kwargs.get('initial_size', -1),
    			'addable_block_size'		: kwargs.get('addable_block_size', 1),
    			'deleteable_block_size'		: kwargs.get('deleteable_block_size', 1),
    			'exchangeable_block_size'	: kwargs.get('exchangeable_block_size', 1),
    		}
    
    	elif algorithm_name == 'adxopt2016_forward':
    		# Forward ADXOpt algorithm (2016)
    		params = {
    			'do_add'					: True,
    			'do_add_first'				: True,
    			'do_delete'					: True,
    			'do_delete_first'			: False,
    			'do_exchange'				: True,
    			'max_removal'				: kwargs.get('max_removal', 1),
    			'max_addition'				: float('inf'),
    			'initial_size'				: kwargs.get('initial_size', 0),
    			'addable_block_size'		: kwargs.get('addable_block_size', 1),
    			'deleteable_block_size'		: kwargs.get('deleteable_block_size', 1),
    			'exchangeable_block_size'	: kwargs.get('exchangeable_block_size', 1),
    		}
    		
    	elif algorithm_name == 'adxopt2016_backward':
    		# Backward ADXOpt algorithm (2016)
    		params = {
    			'do_add'					: True,
    			'do_add_first'				: False,
    			'do_delete'					: True,
    			'do_delete_first'			: True,
    			'do_exchange'				: True,
    			'max_removal'				: float('inf'),
    			'max_addition'				: kwargs.get('max_addition', 1),
    			'initial_size'				: kwargs.get('initial_size', -1),
    			'addable_block_size'		: kwargs.get('addable_block_size', 1),
    			'deleteable_block_size'		: kwargs.get('deleteable_block_size', 1),
    			'exchangeable_block_size'	: kwargs.get('exchangeable_block_size', 1),
    		}
    	
    	else:
    		# New algorithm configurations can be added here to obtain new algorithms
    		raise NotImplementedError
    		
    	return params
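The flag combinations above all drive one local-search loop. The following standalone toy, with a hypothetical `revenue` function rather than the repository's choice models, illustrates how enabling different move types turns the same loop into different algorithms:

```python
# Hypothetical toy revenue over offer sets: each offered product contributes its
# price, discounted by 1 + |S| to mimic cannibalization among offered products
def revenue(offer_set, prices):
	if not offer_set:
		return 0.0
	return sum(prices[i] for i in offer_set) / (1 + len(offer_set))

# One local-search loop; the move types enabled by the flags decide the algorithm
def local_search(prices, do_add=True, do_delete=False):
	current = frozenset()
	best = revenue(current, prices)
	while True:
		candidates = []
		if do_add:
			candidates += [current | {i} for i in range(len(prices)) if i not in current]
		if do_delete:
			candidates += [current - {i} for i in current]
		if not candidates:
			break
		move = max(candidates, key=lambda s: revenue(s, prices))
		if revenue(move, prices) <= best + 1e-12:
			break
		current, best = move, revenue(move, prices)
	return best, set(current)

best, offer_set = local_search([10.0, 1.0, 1.0])  # add-only greedy
```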
    
    
    
    
  10. src/utils.py: utility functions;

    # -*- coding: utf-8 -*-
    # @author: caoyang
    # @email: caoyang@163.sufe.edu.cn
    # Utility functions
    if __name__ == '__main__':
    	import sys
    	sys.path.append('../')
    
    import json
    import types	# needed by _MyEncoder in save_args below
    import numpy
    import logging
    import argparse
    
    from copy import deepcopy
    from itertools import combinations
    
    from setting import *
    
    # Initialize logging configuration
    def initialize_logging(filename, filemode='w'):
    	logging.basicConfig(
    		level=logging.DEBUG,
    		format='%(asctime)s | %(filename)s | %(levelname)s | %(message)s',
    		datefmt='%Y-%m-%d %H:%M:%S',
    		filename=filename,
    		filemode=filemode,
    	)
    	console = logging.StreamHandler()
    	console.setLevel(logging.INFO)
    	formatter = logging.Formatter('%(asctime)s | %(filename)s | %(levelname)s | %(message)s')
    	console.setFormatter(formatter)
    	logging.getLogger().addHandler(console)
    
    # Load configuration arguments
    def load_args(Config):
    	config = Config()
    	parser = config.parser
    	try:
    		return parser.parse_args()
    	except:
    		return parser.parse_known_args()[0]
    
    # Save configuration arguments
    def save_args(args, save_path):
    	
    	class _MyEncoder(json.JSONEncoder):
    		# Custom serialization for special types
    		def default(self, obj):
    			if isinstance(obj, type) or isinstance(obj, types.FunctionType):
    				return str(obj)
    			return json.JSONEncoder.default(self, obj)
    
    	with open(save_path, 'w') as f:
    		f.write(json.dumps(vars(args), cls=_MyEncoder, indent=4))
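The `_MyEncoder` pattern above, a `json.JSONEncoder` subclass with an overridden `default`, can be exercised standalone; non-serializable objects such as classes fall back to their `str()` form:

```python
import json
import types

class MyEncoder(json.JSONEncoder):
	# Fall back to str() for classes and functions, which json cannot serialize natively
	def default(self, obj):
		if isinstance(obj, type) or isinstance(obj, types.FunctionType):
			return str(obj)
		return json.JSONEncoder.default(self, obj)

args = {'num_product': 10, 'model_class': int}
restored = json.loads(json.dumps(args, cls=MyEncoder))
print(restored)
```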
    
    
    # Random partition
    # :param array			: list or array; only the first dimension is partitioned;
    # :param n_splits		: number of groups;
    # :param do_shuffle		: whether to shuffle the order;
    # :param do_balance		: whether to keep group sizes as balanced as possible;
    # :return split_arrays	: list of lists, each sublist being one group
    def random_split(array, n_splits, do_shuffle=True, do_balance=False):
    	array_length = len(array)
    	assert array_length > n_splits
    	_array = deepcopy(array)
    	if do_shuffle:
    		numpy.random.shuffle(_array)
    	index = list(range(array_length - 1))
    	if do_balance:
    		num_per_split = int(array_length / n_splits)
    		split_points = [i * num_per_split - 1 for i in range(1, n_splits)]
    	else:
    		split_points = sorted(numpy.random.choice(a=index, size=n_splits - 1, replace=False))
    	split_arrays = []
    	current_point = 0
    	for split_point in split_points:
    		split_arrays.append(_array[current_point: split_point + 1])
    		current_point = split_point + 1
    	split_arrays.append(_array[current_point: ])
    	return split_arrays
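The cut-point logic of `random_split` can be checked with a compact standalone re-implementation (a sketch, not the repository's function); the key invariants are that every group is non-empty and the groups together partition the input:

```python
import random

# Standalone sketch of the cut-point idea behind random_split:
# choose n_splits - 1 distinct cut positions, then slice the shuffled array
def random_split_sketch(array, n_splits, seed=0):
	rng = random.Random(seed)
	items = list(array)
	rng.shuffle(items)
	# cut after these indices; drawn from 0 .. len-2 so no group is empty
	cut_points = sorted(rng.sample(range(len(items) - 1), n_splits - 1))
	groups, start = [], 0
	for cut in cut_points:
		groups.append(items[start: cut + 1])
		start = cut + 1
	groups.append(items[start:])
	return groups

groups = random_split_sketch(range(10), 3)
print(groups)
```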
    
    # Subset enumeration generator
    # :param universal_set	: the universal set;
    # :param min_size		: minimum subset size; the empty set is skipped by default;
    # :param max_size		: maximum subset size; None (default) enumerates all subsets, while a smaller value limits the enumeration;
    # :yield subset			: a subset as a tuple
    def generate_subset(universal_set, min_size=1, max_size=None):	
    	if max_size is None:
    		max_size = len(universal_set)
    	for size in range(min_size, max_size + 1):
    		for subset in combinations(universal_set, size):
    			yield subset
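For example, on a 3-element universal set the generator yields all 2^3 - 1 = 7 non-empty subsets, and `max_size` caps the enumeration (the function is copied verbatim above so the snippet runs standalone):

```python
from itertools import combinations

def generate_subset(universal_set, min_size=1, max_size=None):
	# Yield subsets of increasing size, skipping the empty set by default
	if max_size is None:
		max_size = len(universal_set)
	for size in range(min_size, max_size + 1):
		for subset in combinations(universal_set, size):
			yield subset

subsets = list(generate_subset([1, 2, 3]))             # all 7 non-empty subsets
capped = list(generate_subset([1, 2, 3], max_size=2))  # only sizes 1 and 2
print(subsets, capped)
```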
    
    