Reinforcement learning algorithms can be divided into two families: off-policy and on-policy. In my understanding, the test for whether an algorithm is off-policy or on-policy is whether the policy (value function) used to generate the samples is the same as the policy (value function) being updated when the network parameters change.
The classic off-policy algorithm is Q-learning, while the classic on-policy algorithm is SARSA; the two algorithms proceed as follows.
Q-learning algorithm:
initialize Q(s,a) randomly
for each episode:
    initialize state s
    while s is not terminal:
        choose action a from s using the ε-greedy strategy derived from Q
        take action a, observe reward r and next state s'
        Q(s,a) <- Q(s,a) + α[r + γ·max_a' Q(s',a') - Q(s,a)]
        s <- s'
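For concreteness, here is a minimal runnable sketch of tabular Q-learning in Python. The ChainEnv toy environment, its reset/step interface, and the hyperparameter values (alpha, gamma, eps) are all hypothetical, chosen only to make the example self-contained; the update line itself mirrors the pseudocode above.

import random
from collections import defaultdict

class ChainEnv:
    # Hypothetical toy environment: states 0..4 on a line; action 0 moves
    # left, action 1 moves right. Reaching state 4 gives reward 1 and ends
    # the episode. Its only purpose is to make the sketch self-contained.
    n_states, n_actions = 5, 2

    def reset(self):
        self.s = 0
        return self.s

    def step(self, a):
        self.s = max(0, self.s - 1) if a == 0 else min(4, self.s + 1)
        done = (self.s == 4)
        reward = 1.0 if done else 0.0
        return self.s, reward, done

def epsilon_greedy(Q, s, n_actions, eps):
    # Behavior policy: explore with probability eps, otherwise pick a
    # greedy action (ties broken at random).
    if random.random() < eps:
        return random.randrange(n_actions)
    best = max(Q[(s, a)] for a in range(n_actions))
    return random.choice([a for a in range(n_actions) if Q[(s, a)] == best])

def q_learning(env, episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    Q = defaultdict(float)  # Q(s,a), defaults to 0 for unseen pairs
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = epsilon_greedy(Q, s, env.n_actions, eps)
            s_next, r, done = env.step(a)
            # Off-policy target: greedy max over next actions, independent
            # of what the ε-greedy behavior policy will actually do next.
            # Terminal states are never updated, so their Q stays 0 and
            # the target correctly reduces to r at episode end.
            best_next = max(Q[(s_next, a2)] for a2 in range(env.n_actions))
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s_next
    return Q

Q = q_learning(ChainEnv())
print({sa: round(v, 3) for sa, v in sorted(Q.items())})

Note where the off-policy character shows up: the action actually taken comes from the ε-greedy behavior policy, but the update target bootstraps from the greedy max over Q(s', ·). An on-policy SARSA variant would instead bootstrap from Q(s', a'), where a' is the action the same ε-greedy policy actually selects at the next step.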