Unity ML-Agents Source Code Analysis: Basic Algorithms (1)

### Prerequisites
This article is based on ml-agents v0.7. Since these are all preview releases, other versions will certainly differ in significant ways.
Before reading on, it helps to already have a basic understanding of ml-agents and to have run the project yourself.
1. You can refer to the articles by 浪尖儿.
2. Besides the videos 浪尖儿 recommends, I also recommend Hung-yi Lee's (李宏毅) course at https://www.bilibili.com/video/av24724071. It covers most of the algorithms ml-agents uses, runs just over 6 hours in total, and, most importantly, is in Chinese.
3. You also need to know TensorFlow, at least well enough to read the code.

When I opened v0.7 I noticed a new addition, Barracuda, which replaces the old TensorFlowSharp. It stores models in its own .nn files, allowing ml-agents to run on other platforms. However, Barracuda has to be decompiled to see its source code, so I'll share that another time.
### PPO
The reference paper is OpenAI's Proximal Policy Optimization Algorithms.
Navigate to ml-agents\mlagents\trainers\ppo\models.py in the project directory; it contains the main code for both the PPO and Curiosity models. The create_ppo_optimizer function contains the PPO algorithm and the final total loss.
There is also a models.py under the bc directory, which is the model used for imitation learning; I'll cover it another time.

The code is as follows:

    def create_ppo_optimizer(self, probs, old_probs, value, entropy, beta, epsilon, lr, max_step):
        """
        Creates training-specific Tensorflow ops for PPO models.
        :param probs: Current policy probabilities
        :param old_probs: Past policy probabilities
        :param value: Current value estimate
        :param beta: Entropy regularization strength
        :param entropy: Current policy entropy
        :param epsilon: Value for policy-divergence threshold
        :param lr: Learning rate
        :param max_step: Total number of training steps.
        """
        self.returns_holder = tf.placeholder(shape=[None], dtype=tf.float32, name='discounted_rewards')
        self.advantage = tf.placeholder(shape=[None, 1], dtype=tf.float32, name='advantages')
        self.learning_rate = tf.train.polynomial_decay(lr, self.global_step, max_step, 1e-10, power=1.0)

        self.old_value = tf.placeholder(shape=[None], dtype=tf.float32, name='old_value_estimates')

        decay_epsilon = tf.train.polynomial_decay(epsilon, self.global_step, max_step, 0.1, power=1.0)
        decay_beta = tf.train.polynomial_decay(beta, self.global_step, max_step, 1e-5, power=1.0)
        optimizer = tf.train.AdamOptimizer(learning_rate=self.learning_rate)

        clipped_value_estimate = self.old_value + tf.clip_by_value(tf.reduce_sum(value, axis=1) - self.old_value,
                                                                   - decay_epsilon, decay_epsilon)

        v_opt_a = tf.squared_difference(self.returns_holder, tf.reduce_sum(value, axis=1))
        v_opt_b = tf.squared_difference(self.returns_holder, clipped_value_estimate)
        self.value_loss = tf.reduce_mean(tf.dynamic_partition(tf.maximum(v_opt_a, v_opt_b), self.mask, 2)[1])

        # Here we calculate PPO policy loss. In continuous control this is done independently for each action gaussian
        # and then averaged together. This provides significantly better performance than treating the probability
        # as an average of probabilities, or as a joint probability.

        #region ppo2
        r_theta = tf.exp(probs - old_probs)
        p_opt_a = r_theta * self.advantage
        p_opt_b = tf.clip_by_value(r_theta, 1.0 - decay_epsilon, 1.0 + decay_epsilon) * self.advantage
        self.policy_loss = -tf.reduce_mean(tf.dynamic_partition(tf.minimum(p_opt_a, p_opt_b), self.mask, 2)[1])
        #endregion ppo2
        self.loss = self.policy_loss + 0.5 * self.value_loss - decay_beta * tf.reduce_mean(
            tf.dynamic_partition(entropy, self.mask, 2)[1])

        #curiosity
        if self.use_curiosity:
            self.loss += 10 * (0.2 * self.forward_loss + 0.8 * self.inverse_loss)
        self.update_batch = optimizer.minimize(self.loss)
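
One pattern worth pausing on before moving ahead: every loss term above is wrapped in tf.dynamic_partition(..., self.mask, 2)[1]. This splits a tensor into two groups according to the 0/1 values in self.mask and keeps only group 1, i.e. the steps that come from real experience rather than padding. Below is a minimal sketch of just that pattern, assuming a 0/1 integer mask as ml-agents uses:

    import tensorflow as tf  # TF 1.x, as used by ml-agents v0.7

    per_step_loss = tf.constant([10.0, 20.0, 30.0, 40.0])    # e.g. per-step loss values
    mask = tf.constant([1, 0, 1, 1])                          # 1 = real step, 0 = padded step
    kept = tf.dynamic_partition(per_step_loss, mask, 2)[1]    # partition 1 = entries where mask == 1
    mean_loss = tf.reduce_mean(kept)

    with tf.Session() as sess:
        print(sess.run(kept))       # [10. 30. 40.]
        print(sess.run(mean_loss))  # ~26.67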

Within this function, the lines

        r_theta = tf.exp(probs - old_probs)
        p_opt_a = r_theta * self.advantage
        p_opt_b = tf.clip_by_value(r_theta, 1.0 - decay_epsilon, 1.0 + decay_epsilon) * self.advantage
        self.policy_loss = -tf.reduce_mean(tf.dynamic_partition(tf.minimum(p_opt_a, p_opt_b), self.mask, 2)[1])

are the PPO algorithm itself. They correspond to the Clipped Surrogate Objective from Section 3 of the PPO paper, whose formula is:

$$L^{CLIP}(\theta) = \hat{\mathbb{E}}_t\left[\min\left(r_t(\theta)\hat{A}_t,\ \mathrm{clip}(r_t(\theta),\,1-\epsilon,\,1+\epsilon)\,\hat{A}_t\right)\right]$$

So clearly r_theta is $r_t(\theta)$, p_opt_a is $r_t(\theta)\hat{A}_t$, p_opt_b is $\mathrm{clip}(r_t(\theta),1-\epsilon,1+\epsilon)\hat{A}_t$, and policy_loss is $-L^{CLIP}(\theta)$, negated because the optimizer minimizes a loss while the paper maximizes the objective.
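
Note that the probs and old_probs passed in are log-probabilities, which is why the ratio is computed as tf.exp(probs - old_probs). To make the formula concrete outside the TensorFlow graph, here is a minimal NumPy sketch of the same objective; the function name and the toy numbers are mine, not ml-agents code:

    import numpy as np

    def clipped_surrogate_loss(log_probs, old_log_probs, advantages, epsilon=0.2):
        """Hypothetical standalone version of the policy_loss above (no masking)."""
        r_theta = np.exp(log_probs - old_log_probs)            # probability ratio r_t(theta)
        p_opt_a = r_theta * advantages                         # unclipped objective
        p_opt_b = np.clip(r_theta, 1.0 - epsilon, 1.0 + epsilon) * advantages  # clipped objective
        return -np.mean(np.minimum(p_opt_a, p_opt_b))          # negated because we minimize a loss

    # Toy numbers for illustration only.
    log_p_new = np.log([0.30, 0.10, 0.60])
    log_p_old = np.log([0.25, 0.20, 0.55])
    advantages = np.array([1.0, -0.5, 2.0])
    print(clipped_surrogate_loss(log_p_new, log_p_old, advantages))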

The detailed derivation of this formula can be found in the material mentioned earlier, such as Hung-yi Lee's PPO lecture, as well as the PPO derivation I wrote.

### Curiosity
Now let's look at the curiosity model mentioned above. Its main purpose is to push the agent to try more ways of obtaining rewards.
The reference paper is Curiosity-driven Exploration by Self-supervised Prediction.
(Figure: Curiosity)
The method is quite intuitive: an ICM module is added on top of the original model. It predicts $\hat{s}_2$ from $a_1$ and $s_1$; the larger the difference between the actual next state $s_2$ and the prediction $\hat{s}_2$, the larger the intrinsic reward $r_1^i$. That is the idea behind the curiosity model. The inside of the ICM looks like this:
(Figure: ICM)
The figures and content come from Hung-yi Lee's Sparse Reward lecture; interested readers can refer to it.
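
In the ICM paper, this intrinsic reward is defined as the forward model's prediction error in feature space:

$$r_t^i = \frac{\eta}{2}\left\lVert \hat{\phi}(s_{t+1}) - \phi(s_{t+1}) \right\rVert_2^2$$

where $\phi$ is the learned feature encoder and $\eta$ is a scaling factor. The ml-agents code below follows the same shape, with curiosity_strength playing the role of $\eta$.
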
Now let's look at the concrete implementation in ml-agents. First, the create_ppo_optimizer function mentioned above contains the following code:

        #curiosity
        if self.use_curiosity:
            self.loss += 10 * (0.2 * self.forward_loss + 0.8 * self.inverse_loss)

This means that when the Curiosity feature is enabled, forward_loss and inverse_loss are added to the total loss.
forward_loss comes from the create_forward_model function. The code is as follows:

    def create_forward_model(self, encoded_state, encoded_next_state):
        """
        Creates forward model TensorFlow ops for Curiosity module.
        Predicts encoded future state based on encoded current state and given action.
        :param encoded_state: Tensor corresponding to encoded current state.
        :param encoded_next_state: Tensor corresponding to encoded next state.
        """
        combined_input = tf.concat([encoded_state, self.selected_actions], axis=1)
        hidden = tf.layers.dense(combined_input, 256, activation=self.swish)
        # We compare against the concatenation of all observation streams, hence `self.vis_obs_size + int(self.vec_obs_size > 0)`.
        pred_next_state = tf.layers.dense(hidden, self.curiosity_enc_size * (self.vis_obs_size + int(self.vec_obs_size > 0)),
                                          activation=None)

        squared_difference = 0.5 * tf.reduce_sum(tf.squared_difference(pred_next_state, encoded_next_state), axis=1)
        self.intrinsic_reward = tf.clip_by_value(self.curiosity_strength * squared_difference, 0, 1)
        self.forward_loss = tf.reduce_mean(tf.dynamic_partition(squared_difference, self.mask, 2)[1])

create_forward_model plays the role of Network 1 in the ICM diagram.
(Figure: Network 1)
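
As a quick recap of what create_forward_model computes, here is a hypothetical NumPy version of the intrinsic reward and forward loss; the function and argument names are mine, not ml-agents identifiers:

    import numpy as np

    def forward_model_terms(pred_next_feat, next_feat, strength=0.01, mask=None):
        """Hypothetical recap of intrinsic_reward / forward_loss above."""
        # 0.5 * ||phi_hat(s_{t+1}) - phi(s_{t+1})||^2 per sample in the batch
        err = 0.5 * np.sum((pred_next_feat - next_feat) ** 2, axis=1)
        intrinsic_reward = np.clip(strength * err, 0.0, 1.0)  # scaled by curiosity strength, clipped to [0, 1]
        if mask is not None:                                  # keep only real (non-padded) steps for the loss
            err = err[mask.astype(bool)]
        forward_loss = err.mean()
        return intrinsic_reward, forward_loss

    # Toy example: batch of 2, feature size 3.
    pred = np.array([[0.1, 0.2, 0.3], [1.0, 1.0, 1.0]])
    actual = np.array([[0.1, 0.0, 0.3], [0.0, 1.0, 1.0]])
    print(forward_model_terms(pred, actual, strength=0.5))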

inverse_loss comes from the create_inverse_model function. The code is as follows:

    def create_inverse_model(self, encoded_state, encoded_next_state):
        """
        Creates inverse model TensorFlow ops for Curiosity module.
        Predicts action taken given current and future encoded states.
        :param encoded_state: Tensor corresponding to encoded current state.
        :param encoded_next_state: Tensor corresponding to encoded next state.
        """
        combined_input = tf.concat([encoded_state, encoded_next_state], axis=1)
        hidden = tf.layers.dense(combined_input, 256, activation=self.swish)
        if self.brain.vector_action_space_type == "continuous":
            pred_action = tf.layers.dense(hidden, self.act_size[0], activation=None)
            squared_difference = tf.reduce_sum(tf.squared_difference(pred_action, self.selected_actions), axis=1)
            self.inverse_loss = tf.reduce_mean(tf.dynamic_partition(squared_difference, self.mask, 2)[1])
        else:
            pred_action = tf.concat(
                [tf.layers.dense(hidden, self.act_size[i], activation=tf.nn.softmax)
                 for i in range(len(self.act_size))], axis=1)
            cross_entropy = tf.reduce_sum(-tf.log(pred_action + 1e-10) * self.selected_actions, axis=1)
            self.inverse_loss = tf.reduce_mean(tf.dynamic_partition(cross_entropy, self.mask, 2)[1])

create_inverse_model plays the role of Network 2 in the ICM diagram.
(Figure: Network 2)
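
For the discrete branch, pred_action is a concatenation of per-branch softmax outputs, and selected_actions is assumed to be the matching one-hot encoding of the actions actually taken, so the sum above is a standard cross-entropy. A toy single-branch illustration in NumPy:

    import numpy as np

    pred_action = np.array([[0.7, 0.2, 0.1],        # softmax over 3 discrete actions
                            [0.1, 0.8, 0.1]])
    selected_actions = np.array([[1.0, 0.0, 0.0],   # one-hot of the chosen action (assumed)
                                 [0.0, 0.0, 1.0]])
    cross_entropy = np.sum(-np.log(pred_action + 1e-10) * selected_actions, axis=1)
    print(cross_entropy)         # approx. [0.357, 2.303], i.e. [-log 0.7, -log 0.1]
    print(cross_entropy.mean())  # what inverse_loss averages, before masking
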
There is also the create_curiosity_encoders function, which prepares encoded_state and encoded_next_state.
encoded_state is the $\phi(s_t)$ in the figure, and encoded_next_state is $\phi(s_{t+1})$.
The code is as follows:

    def create_curiosity_encoders(self):
        """
        Creates state encoders for current and future observations.
        Used for implementation of Curiosity-driven Exploration by Self-supervised Prediction
        See https://arxiv.org/abs/1705.05363 for more details.
        :return: current and future state encoder tensors.
        """
        encoded_state_list = []
        encoded_next_state_list = []

        if self.vis_obs_size > 0:
            self.next_visual_in = []
            visual_encoders = []
            next_visual_encoders = []
            for i in range(self.vis_obs_size):
                # Create input ops for next (t+1) visual observations.
                next_visual_input = self.create_visual_input(self.brain.camera_resolutions[i],
                                                             name="next_visual_observation_" + str(i))
                self.next_visual_in.append(next_visual_input)

                # Create the encoder ops for current and next visual input. Note that these encoders are siamese.
                encoded_visual = self.create_visual_observation_encoder(self.visual_in[i], self.curiosity_enc_size,
                                                                        self.swish, 1, "stream_{}_visual_obs_encoder"
                                                                        .format(i), False)

                encoded_next_visual = self.create_visual_observation_encoder(self.next_visual_in[i],
                                                                             self.curiosity_enc_size,
                                                                             self.swish, 1,
                                                                             "stream_{}_visual_obs_encoder".format(i),
                                                                             True)
                visual_encoders.append(encoded_visual)
                next_visual_encoders.append(encoded_next_visual)

            hidden_visual = tf.concat(visual_encoders, axis=1)
            hidden_next_visual = tf.concat(next_visual_encoders, axis=1)
            encoded_state_list.append(hidden_visual)
            encoded_next_state_list.append(hidden_next_visual)

        if self.vec_obs_size > 0:
            # Create the encoder ops for current and next vector input. Note that these encoders are siamese.
            # Create input op for next (t+1) vector observation.
            self.next_vector_in = tf.placeholder(shape=[None, self.vec_obs_size], dtype=tf.float32,
                                                 name='next_vector_observation')

            encoded_vector_obs = self.create_vector_observation_encoder(self.vector_in,
                                                                        self.curiosity_enc_size,
                                                                        self.swish, 2, "vector_obs_encoder",
                                                                        False)
            encoded_next_vector_obs = self.create_vector_observation_encoder(self.next_vector_in,
                                                                             self.curiosity_enc_size,
                                                                             self.swish, 2,
                                                                             "vector_obs_encoder",
                                                                             True)
            encoded_state_list.append(encoded_vector_obs)
            encoded_next_state_list.append(encoded_next_vector_obs)

        encoded_state = tf.concat(encoded_state_list, axis=1)
        encoded_next_state = tf.concat(encoded_next_state_list, axis=1)
        return encoded_state, encoded_next_state

Here create_visual_observation_encoder and create_vector_observation_encoder encode the observation $s_t$ into the feature $\phi(s_t)$ (and likewise $s_{t+1}$ into $\phi(s_{t+1})$), as shown in the figure:
(Figure: Feature Extractor)
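
To give a feel for what those encoders look like, here is a simplified TF 1.x sketch of a siamese vector-observation encoder: a couple of dense layers with the swish activation whose variables are shared between $s_t$ and $s_{t+1}$. It is only an approximation of create_vector_observation_encoder, not the exact ml-agents code:

    import tensorflow as tf  # TF 1.x

    def swish(x):
        return x * tf.nn.sigmoid(x)

    def vector_obs_encoder(obs, enc_size, num_layers=2, scope="vector_obs_encoder", reuse=False):
        """Simplified sketch of a siamese vector-observation encoder (not the exact ml-agents code)."""
        hidden = obs
        with tf.variable_scope(scope, reuse=reuse):
            for i in range(num_layers):
                hidden = tf.layers.dense(hidden, enc_size, activation=swish, name="hidden_{}".format(i))
        return hidden

    # Reusing the variable scope means the same weights produce phi(s_t) and phi(s_{t+1}).
    vector_in = tf.placeholder(tf.float32, [None, 8], name="vector_observation")
    next_vector_in = tf.placeholder(tf.float32, [None, 8], name="next_vector_observation")
    phi_s = vector_obs_encoder(vector_in, 128, reuse=False)
    phi_s_next = vector_obs_encoder(next_vector_in, 128, reuse=True)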

### Closing Remarks
That covers ppo\models.py in ml-agents. policy.py also contains the rewards-related code, which I'll leave for next time; I hope readers have gained something from this. This article does not go into a great deal of detailed explanation, because reinforcement learning is far too rich a topic to cover in a single article. I have only pulled out what I consider the important parts to share, along with how ml-agents relates to the papers. I hope you enjoy it.

Unity Machine Learning 332928260
