Asynchronous Methods for Deep Reinforcement Learning
Research background
- For an online RL agent, the sequence of observed data is non-stationary and strongly correlated. A common solution is an experience replay memory, which reduces non-stationarity and decorrelates updates. However, this pipeline has several drawbacks: it requires a large memory and restricts the method to off-policy algorithms.
Proposed method
- This paper asynchronously executes multiple agents in parallel, each on its own instance of the environment.
- The proposed asynchronous RL framework:
(1) Run asynchronous actor-learners on multiple CPU threads of a single machine.
(2) Multiple actor-learners running in parallel are likely to explore different parts of the environment, so the overall changes to the model parameters are less correlated in time. As a result, no replay memory is needed, which makes on-policy reinforcement learning algorithms applicable (a minimal threading sketch follows).
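Below is a minimal sketch of this layout, assuming a toy RandomWalkEnv and a shared numpy value table as hypothetical stand-ins for the paper's environments and neural network: each CPU thread owns its own environment instance and writes updates into the same shared parameters.

```python
import threading
import numpy as np

N_THREADS = 4
N_STATES = 10

shared_values = np.zeros(N_STATES)          # parameters shared by all threads
lock = threading.Lock()

class RandomWalkEnv:
    """Toy environment; each thread constructs its own instance."""
    def reset(self):
        self.s = N_STATES // 2
        return self.s
    def step(self, action):                 # action is -1 or +1
        self.s = int(np.clip(self.s + action, 0, N_STATES - 1))
        reward = 1.0 if self.s == N_STATES - 1 else 0.0
        done = self.s in (0, N_STATES - 1)
        return self.s, reward, done

def actor_learner(thread_id, n_steps=5000, lr=0.1, gamma=0.99):
    env = RandomWalkEnv()                   # per-thread environment copy
    rng = np.random.default_rng(thread_id)  # different seeds -> decorrelated experience
    s = env.reset()
    for _ in range(n_steps):
        a = rng.choice([-1, 1])
        s2, r, done = env.step(a)
        target = r if done else r + gamma * shared_values[s2]
        with lock:                          # write the TD update into the shared parameters
            shared_values[s] += lr * (target - shared_values[s])
        s = env.reset() if done else s2

threads = [threading.Thread(target=actor_learner, args=(i,)) for i in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_values)                        # value estimates learned by all threads jointly
```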
One-step Q-Learning
Key points of the approach:
(a): The target network is shared across threads and changed slowly.
(b): Gradients are accumulated over multiple timesteps before being applied, which is similar to using a minibatch and also improves computational efficiency.
(c): Each actor-learner uses ε-greedy exploration (the paper varies ε across threads to diversify exploration); see the sketch after this list.
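A sketch of points (a)-(c) inside one actor-learner, assuming a linear Q-function Q(s, a) = phi(s) · theta[:, a] over hypothetical feature vectors phi(s); the real agent uses a convolutional network, but the update schedule is the same idea.

```python
import numpy as np

N_FEATURES, N_ACTIONS = 8, 4
theta = np.zeros((N_FEATURES, N_ACTIONS))   # shared "online" parameters
theta_target = theta.copy()                 # (a) shared, slowly changed target network
rng = np.random.default_rng(0)

def epsilon_greedy(phi_s, epsilon=0.1):
    """(c) pick a random action with prob. epsilon, else the greedy one."""
    if rng.random() < epsilon:
        return int(rng.integers(N_ACTIONS))
    return int((phi_s @ theta).argmax())

def run_updates(transitions, lr=1e-3, gamma=0.99,
                update_every=5, sync_target_every=100):
    """transitions: iterable of (phi_s, a, r, phi_s_next, done) tuples,
    e.g. generated by acting with epsilon_greedy in the environment."""
    grad_acc = np.zeros_like(theta)
    for t, (phi_s, a, r, phi_s2, done) in enumerate(transitions, start=1):
        target = r if done else r + gamma * (phi_s2 @ theta_target).max()  # (a)
        td_err = target - phi_s @ theta[:, a]
        grad_acc[:, a] += td_err * phi_s    # (b) accumulate gradients like a minibatch
        if t % update_every == 0:
            theta[:] += lr * grad_acc       # apply the accumulated update...
            grad_acc[:] = 0.0               # ...then reset the accumulator
        if t % sync_target_every == 0:
            theta_target[:] = theta         # (a) slow, periodic target sync
```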
n-step Q-Learning
Key points of the approach:
- For each state in the rollout, the target is computed by accumulating the rewards of the following steps (plus a bootstrapped value at the end of the rollout), so each update uses the longest possible n-step return; see the sketch below.
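A small sketch of how the longest-possible n-step return is formed, assuming a rollout whose rewards are stored oldest-first; the loop walks backward so the earliest state receives the full n-step return and later states receive progressively shorter ones.

```python
def n_step_returns(rewards, bootstrap_value, gamma=0.99):
    """rewards: rewards collected during the rollout (oldest first);
    bootstrap_value: 0 if the rollout ended in a terminal state,
    otherwise the current value estimate of the last state reached."""
    R = bootstrap_value
    returns = [0.0] * len(rewards)
    for t in reversed(range(len(rewards))):
        R = rewards[t] + gamma * R          # earliest state gets the full n-step return,
        returns[t] = R                      # later states get shorter returns
    return returns

# Example: a 3-step rollout that did not terminate.
print(n_step_returns([1.0, 0.0, 1.0], bootstrap_value=0.5))
```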
Asynchronous Advantage Actor-Critic (A3C)
Key points of the approach:
- Compute the accumulated (n-step) return; together with the critic's value estimate it forms the advantage used in the policy update.
- The network architecture typically shares a base network, with one head (through a softmax) outputting the action probabilities and another head outputting the critic's value estimate, as in the sketch below.
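A minimal sketch of that architecture, written with PyTorch as an assumption (the paper itself does not prescribe a framework): a shared base, a softmax policy head, and a scalar value head.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActorCriticNet(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        self.base = nn.Sequential(                       # shared base network
            nn.Linear(obs_dim, hidden), nn.ReLU(),
        )
        self.policy_head = nn.Linear(hidden, n_actions)  # actor head
        self.value_head = nn.Linear(hidden, 1)           # critic head

    def forward(self, obs):
        h = self.base(obs)
        action_probs = F.softmax(self.policy_head(h), dim=-1)  # policy via softmax
        value = self.value_head(h).squeeze(-1)                 # state-value estimate
        return action_probs, value

# Usage: sample an action and get the value estimate for one observation.
net = ActorCriticNet(obs_dim=4, n_actions=2)
probs, v = net(torch.randn(1, 4))
action = torch.multinomial(probs, 1).item()
```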