Deep Reinforcement Learning with A3C: Theory and Code (Mountain Car)

A3C

  DQN requires substantial computing power and a long training process. To address this, the DeepMind team proposed a new algorithm called Asynchronous Advantage Actor-Critic (A3C), which outperforms many other deep reinforcement learning algorithms while needing less compute and less training time. The main idea of A3C is that multiple agents learn in parallel and their experience is aggregated.

  A3C also achieves better accuracy than many other algorithms and works well in both continuous and discrete action spaces. It uses multiple agents, each learning in parallel in its own copy of the environment with a different exploration policy. The experience gathered by these agents is then aggregated into a global agent. The global agent is also called the master network or global network, while the other agents are called workers.


Asynchronous Advantage Actor-Critic

  Before going further, what exactly is A3C, and what do its three A's stand for?

   In A3C, the first A stands for asynchronous, which describes how it works. Instead of a single agent learning the optimal policy, as in DQN, there are multiple agents interacting with the environment. Since several agents interact with the environment at the same time, each agent is given its own copy of the environment so that it can interact with it independently. These agents are called worker agents, and there is a separate agent called the global network to which all the workers report. The global network aggregates their experience.

   The second A stands for advantage. We already saw what an advantage function is when discussing the dueling network architecture of DQN. The advantage function can be defined as the difference between the Q function and the value function. The Q function tells us how good an action is in a given state, and the value function tells us how good the state itself is. What, then, does the difference between the two mean intuitively? It tells us how good it is for the agent to perform action a in state s compared with all other actions.

   The third A stands for actor-critic: the architecture has two kinds of networks, the actor and the critic. The role of the actor is to learn a policy, and the role of the critic is to evaluate how good the policy learned by the actor is.


A3C Architecture

[Figure 10.1: The A3C architecture]
  As described above, Figure 10.1 shows multiple worker agents, each interacting with its own copy of the environment. A worker learns the policy, computes the gradient of the policy loss, and updates the gradients in the global network. The global network is updated simultaneously by every worker. One of the biggest advantages of A3C is that it does not use an experience replay memory. Because multiple agents interact with the environment and aggregate their information into the global network, there is little to no correlation between experiences. Experience replay needs a large amount of memory to hold all the experience; since A3C does not need it, both storage space and computation time are greatly reduced.
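The flow of data between a worker and the global network can be summarized in a few lines. The sketch below is illustrative pseudocode only; global_net, local_net, and env are placeholder objects, not classes defined anywhere in this article:

def worker_loop(global_net, local_net, env, n_steps=10):
    # sync the local (worker) network with the global network
    local_net.copy_weights_from(global_net)
    state = env.reset()
    while True:
        # collect a short rollout with the worker's own exploration policy
        batch = []
        for _ in range(n_steps):
            action = local_net.sample_action(state)
            next_state, reward, done, _ = env.step(action)
            batch.append((state, action, reward))
            state = env.reset() if done else next_state
        # compute value- and policy-loss gradients locally ...
        grads = local_net.compute_gradients(batch)
        # ... apply them asynchronously to the global network ...
        global_net.apply_gradients(grads)
        # ... and pull the fresh global weights back
        local_net.copy_weights_from(global_net)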


How A3C Works

  First, a worker agent resets to the global network (pulls its weights) and then starts interacting with the environment. Each worker follows a different exploration policy while learning an optimal policy. It then computes the value loss and the policy loss, computes the gradients of these losses, and updates the gradients in the global network. The worker resets to the global network again and repeats the same process, over and over. Before looking at the value and policy loss functions, let's see how the advantage function is computed. As we know, the advantage is the difference between the Q function and the value function:

$$A(s, a) = Q(s, a) - V(s)$$

  Since A3C does not actually compute the Q value directly, it uses the discounted return as an estimate of the Q value. The discounted return R can be written as:

$$R = r_{n} + \gamma r_{n-1} + \gamma^{2} r_{n-2}$$

  Replacing the Q function with the discounted return R:

$$A(s, a) = R - V(s)$$

  The value loss can then be written as the squared difference between the discounted return and the value of the state:

$$\text{Value loss } (L_{v}) = \sum \left(R - V(s)\right)^{2}$$

  The policy loss is then defined as follows (strictly speaking, this is the objective the actor maximizes; the implementation below minimizes its negative):

$$\text{Policy loss } (L_{p}) = \log\left(\pi(s)\right) A(s) + \beta H(\pi)$$

  So what is the new term $H(\pi)$ in the equation above? It is the entropy, which is used to make sure the policy is explored sufficiently. The entropy measures the spread of the action probabilities. When the entropy is high, every action has roughly the same probability, so the agent is unsure which action to perform; when the entropy is low, one action has a much higher probability than the others and the agent simply picks that action. Adding the entropy term to the loss therefore encourages the agent to keep exploring and avoid getting stuck in a local optimum.
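To tie these formulas together, here is a small, self-contained NumPy sketch. It assumes, purely for illustration, a discrete action distribution and made-up rewards and value estimates; the actual implementation later in this article uses a Gaussian policy for the continuous action space:

import numpy as np

gamma, beta = 0.9, 0.01
rewards = np.array([1.0, 0.0, 2.0])        # made-up rewards from a short rollout
values  = np.array([1.5, 1.2, 0.8])        # V(s) predicted by the critic

# discounted return R, computed backwards exactly as in the training loop below
R, returns = 0.0, []
for r in rewards[::-1]:
    R = r + gamma * R
    returns.append(R)
returns = np.array(returns[::-1])

advantage  = returns - values              # A(s, a) = R - V(s)
value_loss = np.sum((returns - values) ** 2)

# assume a discrete policy pi(a|s) and the probabilities of the actions actually taken
probs     = np.array([[0.7, 0.3], [0.5, 0.5], [0.2, 0.8]])
log_probs = np.log(np.array([0.7, 0.5, 0.8]))
entropy   = -np.sum(probs * np.log(probs), axis=1)   # high for uniform, low for peaked

policy_objective = np.mean(log_probs * advantage + beta * entropy)
policy_loss = -policy_objective            # the actor minimizes the negative objective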

Climbing the Mountain with A3C

Here, the mountain car task is used to understand A3C. The agent is a car placed between two mountains, and its goal is to drive up the mountain on the right. However, the car cannot climb the mountain in one go; it has to drive back and forth to build up momentum. The agent receives a higher reward if it spends less energy climbing the mountain.
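Before wiring up A3C, it can be instructive to drive the environment with random actions and see how poorly that works; a quick sketch (the exact reward values depend on the gym version):

import gym

env = gym.make('MountainCarContinuous-v0')
state = env.reset()                      # state = [car position, car velocity]
total_reward = 0.0
for _ in range(200):
    action = env.action_space.sample()   # random push force
    state, reward, done, info = env.step(action)
    total_reward += reward
    if done:
        break
print(total_reward)  # usually negative: random pushing wastes energy and rarely reaches the flag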
Code:

import gym
import multiprocessing
import threading
import numpy as np
import os
import shutil
import matplotlib.pyplot as plt
import tensorflow as tf

Initialize all the parameters:

# number of worker agents
no_of_workers = multiprocessing.cpu_count() 

# maximum number of steps per episode
no_of_ep_steps = 200 

# total number of episodes
no_of_episodes = 2000 

global_net_scope = 'Global_Net'

# sets how often the global network should be updated
update_global = 10

# discount factor
gamma = 0.90 

# entropy factor
entropy_beta = 0.01 

# learning rate for actor
lr_a = 0.0001 

# learning rate for critic
lr_c = 0.001 

# boolean for rendering the environment
render=False 

# directory for storing logs
log_dir = 'logs'

Initialize the MountainCar environment:

env = gym.make('MountainCarContinuous-v0')
env.reset()

Get the number of states and actions, as well as the action_bound:

# we get the number of states, actions and also the action bound
no_of_states = env.observation_space.shape[0]
no_of_actions = env.action_space.shape[0]
action_bound = [env.action_space.low, env.action_space.high]
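For MountainCarContinuous-v0 these are expected to be a two-dimensional state (car position and velocity), a one-dimensional continuous action (the push force), and an action bound of roughly [-1, 1]; the exact objects may vary slightly between gym versions. A quick check:

print(no_of_states)   # 2
print(no_of_actions)  # 1
print(action_bound)   # e.g. [array([-1.], dtype=float32), array([1.], dtype=float32)]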

Define the actor-critic network in the ActorCritic class:

class ActorCritic(object):
     def __init__(self, scope, sess, globalAC=None):
         
        # first we initialize the session and the RMSProp optimizer for both
        # our actor and critic networks
        
        self.sess=sess
        
        self.actor_optimizer = tf.train.RMSPropOptimizer(lr_a, name='RMSPropA')
        self.critic_optimizer = tf.train.RMSPropOptimizer(lr_c, name='RMSPropC')
 
        # now, if our network is global then,
    
        if scope == global_net_scope:
            with tf.variable_scope(scope):
                
                # initialize states and build actor and critic network
                self.s = tf.placeholder(tf.float32, [None, no_of_states], 'S')
                
                # get the parameters of actor and critic networks
                self.a_params, self.c_params = self._build_net(scope)[-2:]
                
        # if our network is local then,
        else:
            with tf.variable_scope(scope):
                
                # initialize state, action and also the target value, v_target
                
                self.s = tf.placeholder(tf.float32, [None, no_of_states], 'S')
                self.a_his = tf.placeholder(tf.float32, [None, no_of_actions], 'A')
                self.v_target = tf.placeholder(tf.float32, [None, 1], 'Vtarget')
                
                # since we are in a continuous action space, we will calculate
                # the mean and variance of a Gaussian used for choosing the action
                
                mean, var, self.v, self.a_params, self.c_params = self._build_net(scope)

                # then we calculate the TD error as the difference between v_target and v
                td = tf.subtract(self.v_target, self.v, name='TD_error')

                # minimize the TD error
                with tf.name_scope('critic_loss'):
                    self.critic_loss = tf.reduce_mean(tf.square(td))

                # scale the mean by the action bound and add a small constant (1e-4) to the variance

                with tf.name_scope('wrap_action'):
                    mean, var = mean * action_bound[1], var + 1e-4
                                            
                # we can generate the Normal distribution using this updated mean and variance
                normal_dist = tf.contrib.distributions.Normal(mean, var)
    
                # now we shall calculate the actor loss. Recall the loss function.
                with tf.name_scope('actor_loss'):
                    
                    # calculate the first term of the loss, log(pi(s))
                    log_prob = normal_dist.log_prob(self.a_his)
                    exp_v = log_prob * td
                    
                    # calculate the entropy of the action distribution to encourage exploration
                    entropy = normal_dist.entropy()
                    
                    # we can define our final objective as
                    self.exp_v = exp_v + entropy_beta * entropy
                    
                    # the actor loss is the negative of this objective, which we minimize
                    self.actor_loss = tf.reduce_mean(-self.exp_v)
                    
                # now, we choose an action by sampling from the distribution and clipping it to the action bounds
                with tf.name_scope('choose_action'):
                    self.A = tf.clip_by_value(tf.squeeze(normal_dist.sample(1), axis=0), action_bound[0], action_bound[1])
     
                # calculate gradients for both of our actor and critic networks
        
                with tf.name_scope('local_grad'):

                    self.a_grads = tf.gradients(self.actor_loss, self.a_params)
                    self.c_grads = tf.gradients(self.critic_loss, self.c_params)
 
            # now, we update our global network weights,
            with tf.name_scope('sync'):
                
                # pull the global network weights to the local networks
                with tf.name_scope('pull'):
                    self.pull_a_params_op = [l_p.assign(g_p) for l_p, g_p in zip(self.a_params, globalAC.a_params)]
                    self.pull_c_params_op = [l_p.assign(g_p) for l_p, g_p in zip(self.c_params, globalAC.c_params)]
                
                # push the local gradients to the global network
                with tf.name_scope('push'):
                    self.update_a_op = self.actor_optimizer.apply_gradients(zip(self.a_grads, globalAC.a_params))
                    self.update_c_op = self.critic_optimizer.apply_gradients(zip(self.c_grads, globalAC.c_params))
                    
        


     # next, we define a function called _build_net for building our actor and critic network
    
     def _build_net(self, scope):
     # initialize weights
        w_init = tf.random_normal_initializer(0., .1)
        
        with tf.variable_scope('actor'):
            l_a = tf.layers.dense(self.s, 200, tf.nn.relu6, kernel_initializer=w_init, name='la')
            mean = tf.layers.dense(l_a, no_of_actions, tf.nn.tanh,kernel_initializer=w_init, name='mean')
            var = tf.layers.dense(l_a, no_of_actions, tf.nn.softplus, kernel_initializer=w_init, name='var')
            
        with tf.variable_scope('critic'):
            l_c = tf.layers.dense(self.s, 100, tf.nn.relu6, kernel_initializer=w_init, name='lc')
            v = tf.layers.dense(l_c, 1, kernel_initializer=w_init, name='v')
        
        a_params = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope=scope + '/actor')
        c_params = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope=scope + '/critic')
        
        return mean, var, v, a_params, c_params
    
         
     # update the local gradients to the global network
     def update_global(self, feed_dict):
        self.sess.run([self.update_a_op, self.update_c_op], feed_dict)
     
     # get the global parameters to the local networks
     def pull_global(self):
        self.sess.run([self.pull_a_params_op, self.pull_c_params_op])
     
     # select action
     def choose_action(self, s):
        s = s[np.newaxis, :]
        return self.sess.run(self.A, {self.s: s})[0]

Now, define the Worker class:

class Worker(object):
    def __init__(self, name, globalAC, sess):
        # initialize a separate environment for each worker
        self.env = gym.make('MountainCarContinuous-v0').unwrapped
        self.name = name
        
        # create ActorCritic agent for each worker
        self.AC = ActorCritic(name, sess, globalAC)
        self.sess=sess
        
    def work(self):
        global global_rewards, global_episodes
        total_step = 1
 
        # store state, action, reward
        buffer_s, buffer_a, buffer_r = [], [], []
        
        # loop while the coordinator is active and the global episode count is below the maximum
        while not coord.should_stop() and global_episodes < no_of_episodes:
            
            # initialize the environment by resetting it
            s = self.env.reset()
            
            # store the episodic reward
            ep_r = 0
            for ep_t in range(no_of_ep_steps):
    
                # render the environment only for worker W_0
                if self.name == 'W_0' and render:
                    self.env.render()
                    
                # choose the action based on the policy
                a = self.AC.choose_action(s)

                # perform the action (a), receive reward (r) and move to the next state (s_)
                s_, r, done, info = self.env.step(a)
             
                # set done to True if we reached the maximum steps per episode
                done = True if ep_t == no_of_ep_steps - 1 else False
                
                ep_r += r
                
                # store the state, action and reward in the buffers
                buffer_s.append(s)
                buffer_a.append(a)
                
                # normalize the reward
                buffer_r.append((r+8)/8)
    
    
                # we update the global network after a particular number of time steps
                if total_step % update_global == 0 or done:
                    if done:
                        v_s_ = 0
                    else:
                        v_s_ = self.sess.run(self.AC.v, {self.AC.s: s_[np.newaxis, :]})[0, 0]
                    
                    # buffer for the target v values
                    buffer_v_target = []
                    
                    for r in buffer_r[::-1]:
                        v_s_ = r + gamma * v_s_
                        buffer_v_target.append(v_s_)
                        
                    buffer_v_target.reverse()
                    
                    buffer_s, buffer_a, buffer_v_target = np.vstack(buffer_s), np.vstack(buffer_a), np.vstack(buffer_v_target)
                    feed_dict = {
                                 self.AC.s: buffer_s,
                                 self.AC.a_his: buffer_a,
                                 self.AC.v_target: buffer_v_target,
                                 }
                    
                    # update global network
                    self.AC.update_global(feed_dict)
                    buffer_s, buffer_a, buffer_r = [], [], []
                    
                    # get global parameters to local ActorCritic
                    self.AC.pull_global()
                    
                s = s_
                total_step += 1
                if done:
                    if len(global_rewards) < 5:
                        global_rewards.append(ep_r)
                    else:
                        global_rewards.append(ep_r)
                        global_rewards[-1] =(np.mean(global_rewards[-5:]))
                    
                    global_episodes += 1
                    break

Finally, start the TensorFlow session and run the model:

# create a list for storing the global rewards and a counter for episodes
global_rewards = []
global_episodes = 0

# start tensorflow session
sess = tf.Session()

with tf.device("/cpu:0"):
    
    # create an instance of our ActorCritic class (the global network)
    global_ac = ActorCritic(global_net_scope,sess)
    
    workers = []
    
    # create each worker agent
    for i in range(no_of_workers):
        i_name = 'W_%i' % i
        workers.append(Worker(i_name, global_ac,sess))

coord = tf.train.Coordinator()
sess.run(tf.global_variables_initializer())

# log everything so that we can visualize the graph in TensorBoard

if os.path.exists(log_dir):
    shutil.rmtree(log_dir)

tf.summary.FileWriter(log_dir, sess.graph)

worker_threads = []

# start workers

for worker in workers:

    job = lambda: worker.work()
    t = threading.Thread(target=job)
    t.start()
    worker_threads.append(t)
coord.join(worker_threads)
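Although matplotlib is imported at the top, the listing above never plots anything. As a minimal sketch, once the workers have finished, the smoothed episode rewards accumulated in global_rewards can be visualized like this (assuming training has populated the list):

plt.plot(global_rewards)
plt.xlabel('episode')
plt.ylabel('moving-average reward')
plt.title('A3C on MountainCarContinuous-v0')
plt.show()

The computation graph written by tf.summary.FileWriter can also be inspected by running tensorboard --logdir logs.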