The “Prisoner's Dilemma”, Pareto Optimality, and OpenAI-Assisted Coding -- Test (3)

1. Definition and Theoretical Origin of the “Prisoner's Dilemma”

1.1 Basic definition and origin

The Prisoner's Dilemma is one of the most representative examples of a non-zero-sum game in game theory. It shows that the best choice for each individual is not the best choice for the group: in a group, individually rational choices often lead to a collectively irrational outcome. Although the dilemma itself is only a model, similar situations frequently arise in reality, for example in price competition and environmental protection.

  • The prisoner scenario
    The police arrest two suspects, A and B, but lack sufficient evidence to convict either. They therefore hold the suspects in separate cells, meet each one individually, and offer both the same deal:
    If one confesses and testifies against the other (in the standard terminology, “defects”) while the other stays silent, the confessor is released immediately and the silent one is sentenced to 10 years.
    If both stay silent (both “cooperate”), each is sentenced to 1 year.
    If both inform on each other (both “defect”), each is sentenced to 8 years.

The “Prisoner's Dilemma” was devised in 1950 by Merrill Flood and Melvin Dresher at the RAND Corporation; the consultant Albert Tucker later recast it as the story of the two prisoners and gave it its name. Two accomplices are arrested and interrogated in separate rooms, unable to communicate. The police know they are guilty but lack sufficient evidence, so each prisoner faces the choice described above: confess (defect) or stay silent (cooperate). The key observation is that confessing is the best reply no matter what the accomplice does: if the accomplice stays silent, confessing means walking free instead of serving one year; if the accomplice confesses, confessing means eight years instead of ten. Confessing is therefore a dominant strategy, so both prisoners confess and each serves eight years, even though mutual silence would have cost them only one year apiece. Because the prisoners cannot trust each other, the Nash equilibrium of the game lands on the non-cooperative outcome. The deeper problem the dilemma exposes is that individual rationality can produce collective irrationality: clever people can tie their own hands, or harm the group, precisely by being clever.
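
The dominance argument is easy to verify by enumerating the payoffs directly. Below is a minimal sketch in Python (the dictionary layout and the encoding of sentences as negative numbers are my own choices for illustration):

# Sentences encoded as negative payoffs: (my move, opponent's move) -> my outcome
payoffs = {
    ("silent", "silent"): -1,     # both cooperate: 1 year each
    ("silent", "confess"): -10,   # I stay silent, opponent defects: 10 years for me
    ("confess", "silent"): 0,     # I defect, opponent stays silent: released
    ("confess", "confess"): -8,   # both defect: 8 years each
}

# "confess" strictly dominates "silent": it is the better reply to either move
for opponent_move in ("silent", "confess"):
    assert payoffs[("confess", opponent_move)] > payoffs[("silent", opponent_move)]
print("confess strictly dominates silent")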

1.2 The iterated “Prisoner's Dilemma”

In the iterated prisoner's dilemma, the game is played repeatedly, so each participant has an opportunity to “punish” the other for non-cooperative behavior in earlier rounds. Cooperation can then emerge as an equilibrium outcome: the incentive to cheat may be overcome by the threat of punishment, leading to a better, cooperative result. As the number of repetitions approaches infinity, the Nash equilibrium moves toward the Pareto optimum, shifting from mutual defection toward mutual cooperation.
Although the prisoners would collectively be best off by cooperating and refusing to talk (the lightest combined sentence), each one, uncertain of what the other will do, gains by betraying the other (a shorter sentence) and also knows the accomplice gains by betraying them. Mutual betrayal therefore serves each prisoner's individual interest even though it violates their best collective interest. In practice, however, law enforcement cannot set up such a situation to induce every prisoner to confess, because prisoners must weigh factors beyond the sentence (for example, retaliation for betraying an accomplice) and cannot treat the incentive the authorities control (the sentence) as the only consideration.
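
The claim about near-infinite repetition can be made concrete with the standard trigger-strategy condition: with discount factor d, a grim-trigger player keeps cooperating as long as the one-round gain from defecting is smaller than the discounted value of the cooperation it destroys. A minimal sketch using the sentence payoffs from section 1.1 (the T/R/P naming is the usual game-theory convention, not something from this post):

# Grim trigger sustains cooperation iff R/(1-d) >= T + d*P/(1-d),
# which rearranges to d >= (T - R) / (T - P).
T = 0    # temptation: defect while the opponent cooperates (released)
R = -1   # reward: mutual cooperation (1 year each)
P = -8   # punishment: mutual defection (8 years each)
threshold = (T - R) / (T - P)
print(f"cooperation is sustainable for discount factors d >= {threshold}")  # 0.125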

2. A Python Implementation of the “Prisoner's Dilemma”

For this post I used an OpenAI model to generate the code automatically. After several days of use, though, the model's accuracy seems to have dropped quite a bit, and it resists correction. Below is a record of the process.

2.1 Defining the game environment

# Payoff tuples are (player 1, player 2); more negative means a longer sentence
cooperate_cooperate = (-1, -1)
defect_defect = (-2, -2)
cooperate_defect = (-3, 0)   # player 1 cooperates, player 2 defects
defect_cooperate = (0, -3)   # player 1 defects, player 2 cooperates

2.2 Defining the two players' strategies

def player1_strategy(history1, history2):
    # The first time, cooperate
    if not history1:
        return "cooperate"
    # If the opponent defected last time, defect
    elif history2[-1] == "defect":
        return "defect"
    # Otherwise, cooperate
    else:
        return "cooperate"

def player2_strategy(history1, history2):
    # The first time, cooperate
    if not history2:
        return "cooperate"
    # If the opponent defected last time, defect
    elif history1[-1] == "defect":
        return "defect"
    # Otherwise, cooperate
    else:
        return "cooperate"
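
A quick sanity check of the tit-for-tat behaviour (this snippet is my own addition, not part of the generated code):

print(player1_strategy([], []))                     # cooperate (first round)
print(player1_strategy(["cooperate"], ["defect"]))  # defect (opponent defected last round)
print(player1_strategy(["defect"], ["cooperate"]))  # cooperate (opponent cooperated last round)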

2.3 Playing the game

# Play the game
num_rounds = 5
history1 = []
history2 = []
for round in range(num_rounds):
    p1_action = player1_strategy(history1, history2)
    p2_action = player2_strategy(history1, history2)
    if p1_action == "cooperate" and p2_action == "cooperate":
        payout1, payout2 = cooperate_cooperate
    elif p1_action == "defect" and p2_action == "defect":
        payout1, payout2 = defect_defect
    elif p1_action == "cooperate" and p2_action == "defect":
        payout1, payout2 = cooperate_defect
    elif p1_action == "defect" and p2_action == "cooperate":
        payout1, payout2 = defect_cooperate
    history1.append(p1_action)
    history2.append(p2_action)
    print("Round", round+1, "p1:", p1_action, "p2:", p2_action, "payouts:", payout1, payout2)

2.4 Results

In this code we define the payoff for each scenario, which determines each player's penalty in every round, and a strategy for each player. Here both players use the simple “tit-for-tat” strategy: cooperate first, then mimic the opponent's previous move.
The game is then played for the specified number of rounds using the strategies and payoffs defined above, printing each player's action and the corresponding payoffs in every round.
Running the code produces the following output:

Round 1 p1: cooperate p2: cooperate payouts: -1 -1
Round 2 p1: cooperate p2: cooperate payouts: -1 -1
Round 3 p1: cooperate p2: cooperate payouts: -1 -1
Round 4 p1: cooperate p2: cooperate payouts: -1 -1
Round 5 p1: cooperate p2: cooperate payouts: -1 -1

2.4.1 The problem with OpenAI's code

As you can see, the framework is correct, but the players cooperate in every round: the initial move is “cooperate”, and since the last entry of history2 is then always “cooperate”, the game is locked into a “cooperate” loop forever. The algorithm never uses a random function to update the players' strategies during the iterations. I pointed this out to OpenAI, and it would not admit it…
In the example code I provided, the players are not always cooperating. Instead, they are using a “tit-for-tat” strategy, in which they cooperate in the first round and then mimic the opponent’s previous move in subsequent rounds.
This strategy often leads to mutual cooperation, because if both players cooperate in the first round, they will continue to cooperate in all subsequent rounds. However, if one player defects in a round, the other player will defect in the next round, which can create a cycle of mutual defection.
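
What the fix actually needs is some randomness in the strategies, so the game can escape the all-cooperate fixed point. A minimal sketch of such a “noisy” tit-for-tat (my own illustration, not OpenAI's output; the 0.1 noise probability is an arbitrary choice):

import random

def noisy_tit_for_tat(opponent_history, noise=0.1):
    # With probability `noise`, deviate to a random move
    if random.random() < noise:
        return random.choice(["cooperate", "defect"])
    # Otherwise play plain tit-for-tat: cooperate first, then mirror
    if not opponent_history:
        return "cooperate"
    return opponent_history[-1]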

2.4.2 Resolving the problem

Afterwards, I told it that it needed to use random, and it regenerated the code, =。 =

import random

# Define the payoff matrix
payoff_matrix = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1)
}

# Define the initial moves of the players
player1_move = random.choice(["C", "D"])
player2_move = random.choice(["C", "D"])

# Play the game for 10 rounds
for round in range(1, 11):
    # Determine the payoffs for each player
    player1_payoff, player2_payoff = payoff_matrix[(player1_move, player2_move)]

    # Print the results of the round
    print(f"Round {round}: Player 1 chose {player1_move}, Player 2 chose {player2_move}")
    print(f"Player 1's payoff: {player1_payoff}, Player 2's payoff: {player2_payoff}\n")

    # Player 1 follows "tit-for-tat" and copies Player 2's last move;
    # Player 2 simply moves at random
    player1_move = player2_move
    player2_move = random.choice(["C", "D"])

Run output:

Round 1: Player 1 chose D, Player 2 chose D
Player 1's payoff: 1, Player 2's payoff: 1

Round 2: Player 1 chose D, Player 2 chose C
Player 1's payoff: 5, Player 2's payoff: 0

Round 3: Player 1 chose C, Player 2 chose D
Player 1's payoff: 0, Player 2's payoff: 5

Round 4: Player 1 chose D, Player 2 chose C
Player 1's payoff: 5, Player 2's payoff: 0

Round 5: Player 1 chose C, Player 2 chose D
Player 1's payoff: 0, Player 2's payoff: 5


Process finished with exit code 0
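
Note that in this regenerated version only Player 1 actually plays tit-for-tat; Player 2 moves purely at random. For completeness, here is the game from section 2.3 rewired so that both players use the noisy tit-for-tat strategy sketched in 2.4.1 (again my own illustration, not OpenAI's output):

import random

payoffs = {
    ("cooperate", "cooperate"): (-1, -1),
    ("defect", "defect"): (-2, -2),
    ("cooperate", "defect"): (-3, 0),
    ("defect", "cooperate"): (0, -3),
}

def noisy_tit_for_tat(opponent_history, noise=0.1):
    # Occasionally deviate at random; otherwise mirror the opponent's last move
    if random.random() < noise:
        return random.choice(["cooperate", "defect"])
    if not opponent_history:
        return "cooperate"
    return opponent_history[-1]

history1, history2 = [], []
for round_num in range(5):
    p1_action = noisy_tit_for_tat(history2)
    p2_action = noisy_tit_for_tat(history1)
    payout1, payout2 = payoffs[(p1_action, p2_action)]
    history1.append(p1_action)
    history2.append(p2_action)
    print("Round", round_num + 1, "p1:", p1_action, "p2:", p2_action, "payouts:", payout1, payout2)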
