A PyTorch implementation of MADDPG (multi-agent deep deterministic policy gradient)

This article presents a PyTorch implementation of the MADDPG algorithm. The experimental environment is a modified version of Waterworld from MADRL. The modifications include bouncing off the walls according to physical rules, identically sized agents, and a mechanism that requires cooperation to capture food. Dependencies are Python 3.6.1 and, optionally, OpenCV. After installing MADRL, the experiment is started by running main.py. The results show that when two agents are present, they must cooperate to obtain the reward. Reproducing the paper's competitive-environment experiments is planned as future work.

An implementation of MADDPG

1. Introduction

The experimental environment is a modified version of Waterworld based on MADRL.
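
MADDPG trains one decentralized actor per agent on that agent's own observation and one centralized critic per agent on the joint observations and actions of all agents. The sketch below is a minimal illustrative outline of that structure in PyTorch, not the repository's actual code; the class names, network sizes, dimensions, and hyper-parameters are assumptions chosen for clarity.

    # Minimal MADDPG structure sketch (illustrative only; all sizes are assumptions).
    import torch
    import torch.nn as nn

    n_agents, obs_dim, act_dim, gamma = 2, 16, 2, 0.95

    class Actor(nn.Module):                     # decentralized: own observation only
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                     nn.Linear(64, act_dim), nn.Tanh())
        def forward(self, obs):
            return self.net(obs)

    class Critic(nn.Module):                    # centralized: all observations + all actions
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(n_agents * (obs_dim + act_dim), 64),
                                     nn.ReLU(), nn.Linear(64, 1))
        def forward(self, all_obs, all_act):
            return self.net(torch.cat([all_obs, all_act], dim=1))

    actors  = [Actor()  for _ in range(n_agents)]
    critics = [Critic() for _ in range(n_agents)]

    # One illustrative update for agent 0 on a dummy batch (target networks and
    # the replay buffer are omitted to keep the sketch short).
    B = 32
    obs, acts = torch.randn(B, n_agents, obs_dim), torch.randn(B, n_agents, act_dim)
    rew, next_obs = torch.randn(B, 1), torch.randn(B, n_agents, obs_dim)

    with torch.no_grad():                       # TD target uses every agent's next action
        next_acts = torch.stack([actors[i](next_obs[:, i]) for i in range(n_agents)], dim=1)
        target_q = rew + gamma * critics[0](next_obs.flatten(1), next_acts.flatten(1))
    critic_loss = nn.functional.mse_loss(critics[0](obs.flatten(1), acts.flatten(1)), target_q)

    # Actor loss: maximize the centralized Q with agent 0's action taken from its own policy.
    acts_pi = torch.cat([actors[0](obs[:, 0]).unsqueeze(1), acts[:, 1:]], dim=1)
    actor_loss = -critics[0](obs.flatten(1), acts_pi.flatten(1)).mean()

Because each critic conditions on every agent's action during training, the environment looks stationary from the critic's point of view, while at execution time each actor only needs its own observation.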

2. Environment

The main features of the modified Waterworld environment (compared with the original MADRL version) are:

Evaders and poisons now bounce off the walls according to physical rules.

The evaders, pursuers, and poisons now have the same size, so that random actions lead to average rewards around 0.

Exactly n_coop agents are needed to catch a food target (see the toy sketch after this list).
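
As a concrete illustration of the cooperative-capture rule, the toy function below pays the food reward only when at least n_coop pursuers touch the same evader in the same time step. It is a self-contained sketch, not the environment's actual code; the positions, radius, and the food_reward default of 10 (which matches the reward mentioned in the Results section) are assumptions.

    # Toy illustration of the n_coop capture rule (not the environment's code).
    import numpy as np

    def capture_rewards(pursuer_pos, evader_pos, n_coop=2, radius=0.02, food_reward=10.0):
        """Return one reward per pursuer for a single time step."""
        rewards = np.zeros(len(pursuer_pos))
        for ev in evader_pos:
            dists = np.linalg.norm(pursuer_pos - ev, axis=1)
            touching = dists < 2 * radius          # pursuers in contact with this evader
            if touching.sum() >= n_coop:           # enough cooperating agents -> capture
                rewards[touching] += food_reward
        return rewards

    # Two pursuers on the same evader: both get the reward.
    pursuers = np.array([[0.50, 0.50], [0.51, 0.50]])
    evaders  = np.array([[0.505, 0.50]])
    print(capture_rewards(pursuers, evaders, n_coop=2))      # [10. 10.]
    # A single pursuer alone is not enough when n_coop = 2.
    print(capture_rewards(pursuers[:1], evaders, n_coop=2))  # [0.]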

3. Dependency

python==3.6.1 (Anaconda/Miniconda is recommended)

If you need to render the environments, OpenCV is required.

4. Install

Install MADRL.

Replace the madrl_environments/pursuit directory with the one in this repo.

Run python main.py.

If scene rendering is enabled, installing OpenCV through conda-forge is recommended.

5. Results

Two agents, n_coop = 2

The two agents must cooperate to catch the food in order to receive a reward of 10.

(Figures: demo.gif, 3.png, and 4.png, the average.)

One agent, n_coop = 1

(Figure: newplot.png.)

6. TODO

Reproduce the experiments from the paper in competitive environments.
