Proximal Policy Optimization Algorithms

The paper proposes a new family of policy gradient methods, proximal policy optimization (PPO), which alternates between sampling data through interaction with the environment and optimizing a surrogate objective function with stochastic gradient ascent. PPO allows multiple minibatch updates per batch of sampled data, retains the benefits of TRPO while being much simpler to implement, and is shown empirically to have better sample complexity.

Proximal Policy Optimization Algorithms

We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.
Subjects: Learning (cs.LG)
Cite as: arXiv:1707.06347 [cs.LG]
  (or arXiv:1707.06347v1 [cs.LG] for this version)

Submission history

From: John Schulman
[v1] Thu, 20 Jul 2017 02:32:33 GMT (2178kb,D)
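
The abstract describes the core loop: collect a batch of data with the current policy, then run several epochs of minibatch stochastic gradient steps on a surrogate objective. Below is a minimal PyTorch-style sketch of that loop using a clipped surrogate objective; the toy network, the randomly generated stand-in rollout tensors, and the hyperparameters (clip_eps=0.2, 10 epochs, minibatches of 64) are placeholders for illustration, not values or code from the paper.

```python
import torch

def ppo_clipped_loss(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
    """Clipped surrogate objective, negated so it can be minimized with SGD."""
    # Probability ratio r_t(theta) = pi_theta(a_t|s_t) / pi_theta_old(a_t|s_t)
    ratio = torch.exp(new_log_probs - old_log_probs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Pessimistic (elementwise minimum) bound, averaged over the minibatch
    return -torch.min(unclipped, clipped).mean()

# Toy discrete-action policy and a fake pre-collected rollout (placeholders only).
policy = torch.nn.Sequential(torch.nn.Linear(4, 32), torch.nn.Tanh(), torch.nn.Linear(32, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)

states = torch.randn(256, 4)            # stand-in for observations from environment interaction
actions = torch.randint(0, 2, (256,))   # stand-in for actions sampled by the old policy
advantages = torch.randn(256)           # stand-in for advantage estimates
with torch.no_grad():                   # log-probs under the policy that collected the data
    old_log_probs = torch.distributions.Categorical(logits=policy(states)).log_prob(actions)

# Multiple epochs of minibatch updates on the same batch of collected data:
# the key difference from standard policy gradient methods noted in the abstract.
for epoch in range(10):
    for idx in torch.randperm(256).split(64):
        new_log_probs = torch.distributions.Categorical(
            logits=policy(states[idx])).log_prob(actions[idx])
        loss = ppo_clipped_loss(new_log_probs, old_log_probs[idx], advantages[idx])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

In a full implementation the rollout would come from actual environment interaction, and the paper's complete objective also adds a value-function loss term and an entropy bonus on top of the clipped surrogate.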