RL: Part 1: Key Concepts in RL (Reinforcement Learning)

Key Concepts in Reinforcement Learning

The Agent-Environment Interaction

States and Observations

  • A state is a complete description of the state of the world
  • An observation is a partial description of a state

Action Spaces

  • Discrete, e.g. many games, where only a finite set of moves is available
  • Continuous, e.g. a robot's movement velocities and joint angles, which are real-valued (both kinds are sketched below)
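
A minimal sketch of the two kinds of action space, assuming the Gymnasium library's `spaces` module is available; the specific sizes and bounds are made up for illustration:

```python
from gymnasium import spaces  # assumed dependency, used only to illustrate the two space types

# Discrete: the agent picks one of n actions, e.g. 4 possible moves in a simple game.
discrete_space = spaces.Discrete(4)

# Continuous: the agent outputs a real-valued vector, e.g. two joint velocities in [-1, 1].
continuous_space = spaces.Box(low=-1.0, high=1.0, shape=(2,))

print(discrete_space.sample())    # e.g. 2
print(continuous_space.sample())  # e.g. array([ 0.37, -0.85], dtype=float32)
```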

Policies

  • A policy is the rule the agent uses to decide which action to take
  • It can be deterministic: a_t = \mu(s_t)
  • It can be stochastic, with the action sampled from a probability distribution \pi: a_{t} \sim \pi(\cdot |s_t) (both variants are sketched after this list)
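
A minimal sketch of both policy types for a small discrete action space; the scoring rules inside each function are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def deterministic_policy(s):
    """a_t = mu(s_t): always returns the same action for a given state."""
    # toy rule: pick the index of the largest state component
    return int(np.argmax(s))

def stochastic_policy(s):
    """a_t ~ pi(.|s_t): samples an action from a categorical distribution."""
    logits = np.array([s.sum(), -s.sum(), 0.0])  # toy state-dependent scores
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                         # softmax -> action probabilities
    return int(rng.choice(len(probs), p=probs))

s_t = np.array([0.2, -0.1, 0.5])
print(deterministic_policy(s_t), stochastic_policy(s_t))
```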

Trajectories

  • A trajectory is a sequence of states and actions: \tau = (s_0, a_0, s_1, a_1, \ldots)
  • The initial state s_0 is sampled from the start-state distribution: s_0 \sim \rho_0(\cdot)
  • State transitions can be deterministic: s_{t+1} = f(s_t, a_t)
  • Or stochastic: s_{t+1} \sim P(\cdot|s_t, a_t) (a rollout sketch follows this list)
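
A minimal sketch of collecting a trajectory \tau = (s_0, a_0, s_1, a_1, ...), assuming a Gymnasium-style environment interface; `rollout` and the random `policy` used below are illustrative names, not part of any library:

```python
import gymnasium as gym  # assumed dependency

def rollout(env, policy, max_steps=1000):
    """Collect one trajectory by repeatedly sampling actions from the policy."""
    trajectory = []
    s, _ = env.reset()  # s_0 ~ rho_0(.)
    for _ in range(max_steps):
        a = policy(s)
        s_next, r, terminated, truncated, _ = env.step(a)  # s_{t+1} ~ P(.|s_t, a_t)
        trajectory.append((s, a, r))
        s = s_next
        if terminated or truncated:
            break
    return trajectory

env = gym.make("CartPole-v1")
traj = rollout(env, policy=lambda s: env.action_space.sample())  # uniformly random policy
print(len(traj), "transitions collected")
```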

Reward and Return

  • Per-step reward: r_t = R(s_t, a_t, s_{t+1})
  • Cumulative reward (return): the finite-horizon undiscounted return R(\tau) = \sum_{t=0}^{T} r_t, or the infinite-horizon discounted return R(\tau) = \sum_{t=0}^{\infty} \gamma^t r_t (both are computed in the sketch below)
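
A minimal sketch of both return definitions for a recorded reward sequence r_0, ..., r_T; the numbers at the bottom are made up:

```python
def undiscounted_return(rewards):
    """Finite-horizon undiscounted return: R(tau) = sum_t r_t."""
    return sum(rewards)

def discounted_return(rewards, gamma=0.99):
    """Infinite-horizon discounted return: R(tau) = sum_t gamma^t * r_t."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

rewards = [1.0, 0.0, 2.0, 1.0]
print(undiscounted_return(rewards))     # 4.0
print(discounted_return(rewards, 0.9))  # 1.0 + 0.0 + 2*0.81 + 1*0.729 = 3.349
```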

The RL Problem

  • The goal is to select a policy that maximizes expected return when the agent acts according to it.
  • First, the probability of a particular trajectory \tau under a given policy \pi: P(\tau | \pi) = \rho_0(s_0) \prod_{t=0}^{T-1} P(s_{t+1} | s_{t}, a_{t}) \, \pi(a_t | s_t)
  • The expected return of policy \pi: J(\pi) = \int\limits_{\tau} P(\tau|\pi) R(\tau) \, d\tau = \mathop{E}\limits_{\tau\sim\pi}[R(\tau)]
  • The RL optimization problem can then be written as \pi^* = \arg\mathop{max}\limits_{\pi} J(\pi), where \pi^* is the optimal policy (a Monte Carlo estimate of J(\pi) is sketched below)
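
A minimal sketch of estimating J(\pi) = E_{\tau \sim \pi}[R(\tau)] by Monte Carlo: sample trajectories under \pi and average their returns. It reuses the hypothetical `rollout` and `discounted_return` helpers sketched above; none of these names come from a library:

```python
import numpy as np

def estimate_expected_return(env, policy, n_episodes=100, gamma=0.99):
    """Monte Carlo estimate of J(pi): average return over sampled trajectories."""
    returns = []
    for _ in range(n_episodes):
        traj = rollout(env, policy)                        # tau ~ P(tau | pi)
        rewards = [r for (_, _, r) in traj]
        returns.append(discounted_return(rewards, gamma))  # R(tau)
    return np.mean(returns)                                # estimate of J(pi)
```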

Value Functions

  • The value of a state (or of a state-action pair) is the expected return when starting from that state (or state-action pair) and acting according to a given policy thereafter
  • The On-Policy Value Function: V^{\pi}(s) = \mathop{E}\limits_{\tau \sim \pi}[R(\tau) \mid s_0 = s]
  • The On-Policy Action-Value Function: Q^{\pi}(s,a) = \mathop{E}\limits_{\tau \sim \pi}[R(\tau) \mid s_0 = s, a_0 = a]
  • The Optimal Value Function: V^*(s) = \max\limits_{\pi} \mathop{E}\limits_{\tau \sim \pi}[R(\tau) \mid s_0 = s]
  • The Optimal Action-Value Function: Q^*(s,a) = \max\limits_{\pi} \mathop{E}\limits_{\tau \sim \pi}[R(\tau) \mid s_0 = s, a_0 = a]
  • Relationship: V^{\pi}(s) = \mathop{E}\limits_{a\sim\pi}[Q^{\pi}(s,a)] = \int_{a} \pi(a_0 = a \mid s_0 = s) \, Q^{\pi}(s,a) \, da (a numerical check follows this list)
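
A quick numerical check of the relationship V^{\pi}(s) = E_{a \sim \pi(\cdot|s)}[Q^{\pi}(s,a)] for a discrete action space; the values of Q^{\pi} and \pi below are made up purely for illustration:

```python
import numpy as np

Q_pi = np.array([1.0, 3.0, 2.0])      # hypothetical Q^pi(s, a) for actions a = 0, 1, 2
pi_probs = np.array([0.2, 0.5, 0.3])  # hypothetical pi(a | s)

V_pi = np.dot(pi_probs, Q_pi)         # expectation of Q^pi over actions drawn from pi
print(V_pi)                           # 0.2*1 + 0.5*3 + 0.3*2 = 2.3
```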

Bellman Equations

  • V^{\pi}(s) = \mathop{E}\limits_{a \sim \pi, \, s' \sim P}\big[r(s,a) + \gamma V^{\pi}(s')\big], \quad Q^{\pi}(s,a) = \mathop{E}\limits_{s' \sim P}\big[r(s,a) + \gamma \mathop{E}\limits_{a' \sim \pi}[Q^{\pi}(s',a')]\big]
  • In dynamic-programming notation, the value of an initial state x_0 is V(x_{0}) = \max\limits_{\{a_{t}\}_{t=0}^{\infty}} \sum\limits_{t=0}^{\infty} \beta^{t} F(x_{t},a_{t}), subject to a_{t} \in \Gamma(x_{t}) and x_{t+1} = T(x_{t},a_{t})
  • Splitting off the first decision gives the recursive (Bellman) form: V(x_{0}) = \max\limits_{a_{0}} \left\{ F(x_{0},a_{0}) + \beta \left[ \max\limits_{\{a_{t}\}_{t=1}^{\infty}} \sum\limits_{t=1}^{\infty} \beta^{t-1} F(x_{t},a_{t}) : a_{t} \in \Gamma(x_{t}), \; x_{t+1} = T(x_{t},a_{t}), \; \forall t \geq 1 \right] \right\}, where the bracketed term is simply V(x_{1}) = V(T(x_{0},a_{0})) (a tabular policy-evaluation sketch follows)
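
A minimal policy-evaluation sketch that iterates the Bellman backup for V^{\pi} on a toy tabular MDP; the transition probabilities, rewards, and policy below are made up purely for illustration:

```python
import numpy as np

n_states, n_actions, gamma = 2, 2, 0.9

# P[s][a] is a list of (probability, next_state) pairs; R[s][a] is r(s, a).
P = {0: {0: [(1.0, 0)], 1: [(0.5, 0), (0.5, 1)]},
     1: {0: [(1.0, 1)], 1: [(1.0, 0)]}}
R = {0: {0: 0.0, 1: 1.0},
     1: {0: 0.5, 1: 0.0}}
pi = np.array([[0.5, 0.5],   # pi(a | s = 0)
               [1.0, 0.0]])  # pi(a | s = 1)

V = np.zeros(n_states)
for _ in range(100):  # iterate the Bellman backup toward its fixed point V^pi
    V_new = np.zeros(n_states)
    for s in range(n_states):
        for a in range(n_actions):
            expected_next = sum(p * V[s2] for p, s2 in P[s][a])       # E_{s'~P}[V(s')]
            V_new[s] += pi[s, a] * (R[s][a] + gamma * expected_next)  # E_{a~pi}[r + gamma*V]
    V = V_new

print(V)  # approximate V^pi for the two states
```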