2020-08-10

Every time I read about cross-entropy I forget it right afterwards, again and again. Starting today, let's get into the details and derive cross-entropy once, properly.

My first CSDN post, dedicated to you all (corrections welcome if I got anything wrong).

1. What is cross-entropy

Cross-entropy is a concept from information theory; it was originally used to estimate average coding length. Given two probability distributions p and q, the cross-entropy of p as represented by q is:

$$H(p, q) = -\sum_{x} p(x)\,\log q(x)$$

Note: cross-entropy characterizes the distance between two probability distributions. Equivalently, it characterizes how difficult it is to express distribution p using distribution q. Here p is the correct answer and q is the prediction: the smaller the cross-entropy, the closer the two distributions are.

So how do we turn the results of a neural network's forward pass into a probability distribution? Softmax regression is a very useful way to do this. (Which is why interviewers so often ask: why is cross-entropy usually used together with softmax?)

Suppose the raw outputs of the neural network are $y_1, y_2, \dots, y_n$. After softmax regression, the outputs become:

$$\operatorname{softmax}(y)_i = y_i' = \frac{e^{y_i}}{\sum_{j=1}^{n} e^{y_j}}$$

This turns the network's output into a probability distribution, so cross-entropy can then measure the distance between the predicted distribution and the true one.
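To make this concrete, here is a minimal NumPy sketch of softmax (the logits are made up, and subtracting the max is a standard stability trick I've added; it does not change the result):

```python
import numpy as np

def softmax(y):
    """Turn raw network outputs (logits) into a probability distribution."""
    e = np.exp(y - np.max(y))  # subtract the max for numerical stability
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)
print(probs)        # ~[0.659 0.242 0.099]
print(probs.sum())  # 1.0
```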

For example, take a 3-class problem where the correct answer for some sample is (1, 0, 0) and the model's post-softmax prediction is (0.5, 0.4, 0.1). The cross-entropy between the prediction and the correct answer is (base-10 logarithms here):

$$H\bigl((1,0,0),(0.5,0.4,0.1)\bigr) = -(1\cdot\log 0.5 + 0\cdot\log 0.4 + 0\cdot\log 0.1) = -\log 0.5 \approx 0.30$$

If another model predicts (0.8, 0.1, 0.1), then the cross-entropy between that prediction and the true value is:

$$H\bigl((1,0,0),(0.8,0.1,0.1)\bigr) = -(1\cdot\log 0.8 + 0\cdot\log 0.1 + 0\cdot\log 0.1) = -\log 0.8 \approx 0.10$$

Clearly the second prediction is better than the first. Here (1, 0, 0) is the correct answer p, while (0.5, 0.4, 0.1) and (0.8, 0.1, 0.1) are the predictions q; expressing (1, 0, 0) with (0.8, 0.1, 0.1) is evidently less difficult.
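As a quick sanity check, a few lines of NumPy reproduce the two numbers above (base-10 logs to match the worked example; natural logs are more common in practice and only rescale the values):

```python
import numpy as np

def cross_entropy(p, q):
    """H(p, q) = -sum_x p(x) log q(x), with base-10 logs to match the example."""
    return -np.sum(p * np.log10(q))

p = np.array([1.0, 0.0, 0.0])    # correct answer
q1 = np.array([0.5, 0.4, 0.1])   # first model's prediction
q2 = np.array([0.8, 0.1, 0.1])   # second model's prediction

print(cross_entropy(p, q1))  # ~0.30
print(cross_entropy(p, q2))  # ~0.10 -> smaller, so the second prediction is better
```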

2. Why use cross-entropy as the loss function

In regression problems, MSE (Mean Squared Error) is often used as the loss function, in which case:

$$\mathrm{loss} = \frac{1}{m}\sum_{i=1}^{m}\bigl(\hat y^{(i)} - y^{(i)}\bigr)^2$$

Here $\hat y$ denotes the expected output and $y$ the raw actual output (that is, before softmax). There are $m$ samples, and the loss is the mean of the per-sample losses. MSE works well in regression, but does the same hold in classification? Let's look at the loss curve.
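For instance, a quick check of the formula with m = 4 made-up scalar samples:

```python
import numpy as np

y_hat = np.array([1.0, 0.0, 1.0, 0.0])  # expected outputs
y = np.array([0.9, 0.2, 0.6, 0.1])      # raw actual outputs
loss = np.mean((y_hat - y) ** 2)        # mean of the per-sample squared errors
print(loss)  # (0.01 + 0.04 + 0.16 + 0.01) / 4 = 0.055
```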

First, pass the raw output nodes through softmax:

$$y_i' = \frac{e^{y_i}}{\sum_{j} e^{y_j}}$$

Now take a single sample; its MSE loss is:

$$\mathrm{loss} = \sum_{i}\bigl(\hat y_i - y_i'\bigr)^2$$

With respect to a single logit $y$, the label $\hat y$ and the other exponentials $e^{y_j}$ are constants; writing $c_1 = \hat y$ and $c_2 = \sum_{j \neq i} e^{y_j}$, the loss simplifies to:

$$\mathrm{loss} = \Bigl(c_1 - \frac{e^{y}}{e^{y} + c_2}\Bigr)^2$$

Taking $c_1 = 1$ and $c_2 = 2$ and plotting:


[Figure: the simplified MSE loss as a function of the logit $y$; the curve is non-convex and nearly flat in the left half-plane]

As you can see, this is a non-convex function. (I'll write a follow-up post on why gradient descent is only guaranteed to reach the global optimum when the loss function is convex.) If the logit lands in the left half-plane, the gradient is so small that training becomes difficult. So in classification, MSE is not a good loss function. (Now you know how to answer the interviewer who asks why classification uses cross-entropy rather than MSE as the loss.) Let's look at the loss curve with cross-entropy as the loss function, and then it will be clear.
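If you want to reproduce the curve yourself, here is a small plotting sketch of the simplified loss above (the range and styling are my own choices):

```python
import numpy as np
import matplotlib.pyplot as plt

# Simplified single-logit MSE loss with c1 = 1, c2 = 2.
y = np.linspace(-10, 10, 500)
loss = (1.0 - np.exp(y) / (np.exp(y) + 2.0)) ** 2

plt.plot(y, loss)
plt.xlabel("logit y")
plt.ylabel("MSE loss")
plt.title("Non-convex, and nearly flat (tiny gradient) for y << 0")
plt.show()
```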

If we use cross-entropy as the loss function instead, then:

$$\mathrm{loss} = -\frac{1}{m}\sum_{i=1}^{m}\sum_{j}\hat y_j^{(i)}\,\log y_j'^{(i)}$$

As before, $\hat y$ denotes the expected output and $y$ the raw actual output (before softmax). Because of the special form of a one-hot label (a single 1 with all other entries 0), the loss simplifies to:

$$\mathrm{loss} = -\hat y_k\,\log y_k'$$

where $k$ is the index of the hot (1-valued) entry.

Substituting in the softmax gives:

$$\mathrm{loss} = -c_1\,\log\frac{e^{y}}{e^{y} + c_2}$$

(Where does the $c_1$ come from? It is just the label value at the hot index, $c_1 = \hat y_k = 1$, and $c_2$ again stands for the constant sum of the other exponentials.)

Taking $c_1 = 1$ and $c_2 = 2$, the curve is as follows:

[Figure: the simplified cross-entropy loss as a function of the logit $y$; the curve decreases monotonically]

Compared with MSE, the curve is monotonic overall: the larger the loss, the larger the gradient. That makes gradient-descent backpropagation easier and helps optimization, which is why cross-entropy is generally chosen as the loss function for classification problems.
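A matching sketch plots the simplified cross-entropy loss, overlaid with the MSE curve from before (again with $c_1 = 1$, $c_2 = 2$):

```python
import numpy as np
import matplotlib.pyplot as plt

y = np.linspace(-10, 10, 500)
p = np.exp(y) / (np.exp(y) + 2.0)  # softmax output for the correct class

plt.plot(y, -np.log(p), label="cross-entropy: monotonic, steep when wrong")
plt.plot(y, (1.0 - p) ** 2, label="MSE: non-convex, flat on the left")
plt.xlabel("logit y of the correct class")
plt.ylabel("loss")
plt.legend()
plt.show()
```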

3. Implementing cross-entropy in TensorFlow

```python
cross_entropy = -tf.reduce_mean(y_ * tf.log(tf.clip_by_value(y, 1e-10, 1.0)))
```

Here y_ stands for p, the correct result (in other words, the label), and y stands for q, the predicted result (the actual output).
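A minimal eager-mode check of this line, assuming TF 2.x (where tf.log is spelled tf.math.log), with a made-up label and prediction:

```python
import tensorflow as tf

y_ = tf.constant([[1.0, 0.0, 0.0]])  # p: the label
y = tf.constant([[0.5, 0.4, 0.1]])   # q: the prediction (already softmax-ed)
ce = -tf.reduce_mean(y_ * tf.math.log(tf.clip_by_value(y, 1e-10, 1.0)))
print(ce)  # ~0.231 = -ln(0.5) / 3
```

Note that tf.reduce_mean here averages over every element, not just over samples, which is why the value is -ln(0.5)/3 rather than -ln(0.5).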

tf.reduce_mean() here computes a mean. Concretely (see the sketch after this list):

- tf.reduce_mean(x): the mean of all elements;
- tf.reduce_mean(x, 0): the mean along dimension 0, i.e., column means;
- tf.reduce_mean(x, 1): the mean along dimension 1, i.e., row means.
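A small sketch of the three variants (written for TF 2.x eager mode; under TF 1.x you would evaluate these inside a session):

```python
import tensorflow as tf

x = tf.constant([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0]])

print(tf.reduce_mean(x))     # 3.5           -> mean of all elements
print(tf.reduce_mean(x, 0))  # [2.5 3.5 4.5] -> column means (dimension 0)
print(tf.reduce_mean(x, 1))  # [2. 5.]       -> row means (dimension 1)
```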

tf.clip_by_value(v, a, b) clamps v to the range [a, b]: values smaller than a become a, and values larger than b become b. (Plain and simple.)

Here is an example of tf.clip_by_value():

```python
v = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
print(tf.clip_by_value(v, 2.5, 4.5).eval())
# Output: [[2.5 2.5 3.] [4.  4.5 4.5]]
```

You can see that every element of the 2×3 tensor v has been clamped into [2.5, 4.5]; this avoids arithmetic errors (for example, log 0 is invalid). One more thing to note: the * in the code is not matrix multiplication but element-wise multiplication; matrix multiplication is done with tf.matmul.
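A tiny illustration of the difference (the shapes and values are made up):

```python
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])

print(a * b)            # element-wise: [[ 5. 12.] [21. 32.]]
print(tf.matmul(a, b))  # matrix product: [[19. 22.] [43. 50.]]
```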

Because cross-entropy is generally used together with softmax regression, TensorFlow wraps the two into a single op and provides the tf.nn.softmax_cross_entropy_with_logits function. For example, the cross-entropy loss after softmax regression can be implemented directly as:

```python
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y)
```

Here y is the raw output of the neural network (that is, before softmax), and y_ is the given standard answer.
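Putting it together, a minimal sketch of how the fused op might be used with one-hot labels (TF 2.x keyword names; the values are made up):

```python
import tensorflow as tf

y_ = tf.constant([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])  # standard answers (one-hot labels)
y = tf.constant([[2.0, 1.0, 0.1],
                 [0.5, 1.5, 0.2]])   # raw network outputs (no softmax applied)

per_example = tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y)
loss = tf.reduce_mean(per_example)   # average the per-example losses over the batch
print(loss)
```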