Trust Region Policy Optimization

https://arxiv.org/abs/1502.05477


We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
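As a rough sketch of the constrained update the abstract alludes to (this is the standard TRPO formulation, not quoted from the abstract itself): at each iteration, with current parameters \theta_{\mathrm{old}} and advantage estimates A, the new parameters \theta are chosen by

    \max_{\theta} \; \mathbb{E}_{s,a \sim \pi_{\theta_{\mathrm{old}}}} \left[ \frac{\pi_{\theta}(a \mid s)}{\pi_{\theta_{\mathrm{old}}}(a \mid s)} \, A(s,a) \right]
    \quad \text{s.t.} \quad \mathbb{E}_{s \sim \pi_{\theta_{\mathrm{old}}}} \left[ D_{\mathrm{KL}}\!\left( \pi_{\theta_{\mathrm{old}}}(\cdot \mid s) \,\|\, \pi_{\theta}(\cdot \mid s) \right) \right] \le \delta,

where \delta is the trust-region size. In practice the paper solves this approximately, linearizing the objective, taking a quadratic approximation to the KL constraint, and computing the step with the conjugate gradient method, which is what makes the procedure resemble natural policy gradient methods.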
Comments: 16 pages, ICML 2015
Subjects: Learning (cs.LG)
Cite as: arXiv:1502.05477 [cs.LG]
  (or arXiv:1502.05477v5 [cs.LG] for this version)

Submission history

From: John Schulman
[v1] Thu, 19 Feb 2015 06:44:25 GMT (547kb,D)
[v2] Mon, 18 May 2015 14:56:50 GMT (540kb,D)
[v3] Mon, 8 Jun 2015 10:47:03 GMT (540kb,D)
[v4] Mon, 6 Jun 2016 01:00:57 GMT (541kb,D)
[v5] Thu, 20 Apr 2017 18:04:12 GMT (541kb,D)
