[Original] Reinforcement Learning Notes (1): Introduction
Introduction. 1. Overview. The nature of learning: humans learn by interacting with their environment. Sensorimotor experience connects us directly to the external world and tells us the consequences of our actions. (Look before you leap?) 2. Reinforcement learning. Definition: given the current environment, the agent selects the action that maximizes a numerical reward signal, where the reward means not only the immediate payoff but also the long-run return. An agent need not be a complete organism or robot; it can also be a component of a larger behaving system. Basic features: trial and error, and delayed reward. Reinforcement learning denotes both a class of problems and the...
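The notes above describe an agent that maximizes cumulative reward through trial and error. A minimal sketch of that idea is an epsilon-greedy multi-armed bandit (a toy setup chosen for illustration, not something from the post; all names and parameters are hypothetical):

```python
import random

def epsilon_greedy_bandit(true_means, steps=2000, epsilon=0.1, seed=0):
    """Trial-and-error learning: estimate each arm's reward, mostly exploit
    the current best estimate, occasionally explore at random."""
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n          # how often each arm was pulled
    estimates = [0.0] * n     # running mean reward per arm

    for _ in range(steps):
        if rng.random() < epsilon:
            a = rng.randrange(n)                               # explore
        else:
            a = max(range(n), key=lambda i: estimates[i])      # exploit
        reward = true_means[a] + rng.gauss(0.0, 1.0)           # noisy reward
        counts[a] += 1
        estimates[a] += (reward - estimates[a]) / counts[a]    # incremental mean
    return estimates, counts

estimates, counts = epsilon_greedy_bandit([0.1, 0.5, 0.9])
```

With enough pulls, the estimates converge toward the true means and the agent concentrates its actions on the highest-reward arm, which is the trial-and-error loop the notes refer to in its simplest form.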
2019-12-13 22:25:55
Curiosity-driven Exploration by Self-supervised Prediction.pdf
In many real-world scenarios, rewards extrinsic to the agent are extremely sparse, or absent altogether. In such cases, curiosity can serve as an intrinsic reward signal to enable the agent to explore its environment and learn skills that might be useful later in its life. We formulate curiosity as the error in an agent's ability to predict the consequence of its own actions in a visual feature space learned by a self-supervised inverse dynamics model. Our formulation scales to high-dimensional continuous state spaces like images, bypasses the difficulties of directly predicting pixels, and, critically, ignores the aspects of the environment that cannot affect the agent. The proposed approach is evaluated in two environments: VizDoom and Super Mario Bros. Three broad settings are investigated: 1) sparse extrinsic reward, where curiosity allows for far fewer interactions with the environment to reach the goal; 2) exploration with no extrinsic reward, where curiosity pushes the agent to explore more efficiently; and 3) generalization to unseen scenarios (e.g. new levels of the same game), where the knowledge gained from earlier experience helps the agent explore new places much faster than starting from scratch.
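The abstract defines curiosity as the agent's error in predicting the consequence of its own action in a learned feature space. A toy sketch of that intrinsic-reward computation follows; a fixed random linear map stands in for the learned feature encoder and forward model, and every name here is illustrative rather than the paper's actual architecture:

```python
import numpy as np

def intrinsic_reward(phi_s, phi_s_next, action_onehot, W, eta=0.5):
    """Curiosity bonus: scaled squared error of the forward model's
    prediction of next-state features, given current features and action."""
    x = np.concatenate([phi_s, action_onehot])
    phi_pred = W @ x  # forward model: predicted next-state features
    return eta * 0.5 * float(np.sum((phi_pred - phi_s_next) ** 2))

rng = np.random.default_rng(0)
feat_dim, n_actions = 4, 3
W = rng.normal(size=(feat_dim, feat_dim + n_actions))  # stand-in forward model
phi_s = rng.normal(size=feat_dim)        # features of current state
phi_s_next = rng.normal(size=feat_dim)   # features of observed next state
a = np.eye(n_actions)[1]                 # one-hot action

r_int = intrinsic_reward(phi_s, phi_s_next, a, W)  # larger when the
# transition is harder to predict; zero when prediction is exact
```

Because the error is measured in feature space rather than pixel space, transitions the agent cannot influence contribute nothing once the encoder learns to ignore them, which is the property the abstract highlights.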
2019-12-13