<div id="article_details" class="details">
<div class="article_title">
<span class="ico ico_type_Original"></span>
<h1>
<span class="link_title"><a href="/u012436149/article/details/55683401">
TensorFlow study notes (32): conv2d_transpose ("deconvolution")
</a>
</span>
</h1>
</div>
<div class="article_manage clearfix">
<div class="article_r">
<span class="link_postdate">2017-02-18 22:43</span>
<span class="link_view" title="views">4856 views</span>
</div>
</div>
<div class="bog_copyright">
<p class="copyright_p">Reposted from: http://blog.csdn.net/u012436149/article/details/55683401</p>
</div>
<div style="clear:both"></div><div id="article_content" class="article_content tracking-ad" data-mod="popu_307" data-dsm="post">
<div class="markdown_views"><h1 id="convtranspose"><a name="t0" target="_blank"></a>conv_transpose</h1>
<p><code>deconv</code>, "deconvolution", is properly called <code>conv_transpose</code>. <code>conv_transpose</code> is essentially the reverse of a convolution, and in <code>tf</code> it helps a lot to keep a forward convolution in mind while writing <code>conv_transpose</code> code.</p>
<p>Imagine we have a forward convolution: <br>
<code>input_shape = [1,5,5,3]</code> <br>
<code>kernel_shape=[2,2,3,1]</code> <br>
<code>strides=[1,2,2,1]</code> <br>
<code>padding = "SAME"</code></p>
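<p>With SAME padding, the forward output's spatial size depends only on the stride: out = ceil(in / stride). The shapes above can be checked with a few lines of plain Python (a shape-arithmetic sketch, independent of TensorFlow):</p>

```python
import math

def conv_output_size(input_size, stride):
    # Output spatial size of a strided convolution with SAME padding:
    # out = ceil(in / stride), independent of the kernel size.
    return math.ceil(input_size / stride)

# The forward convolution described above: input 5x5, stride 2, SAME.
print(conv_output_size(5, 2))  # -> 3, so x below has shape [1, 3, 3, 1]
```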
<p>After that convolution (and activation) we get x (the x in the code below). Now, given x, how should we use <code>conv2d_transpose</code> to get back a tensor with the shape of input_shape? <br>
Just use the following code:</p>
<pre class="prettyprint" name="code"><code class="language-python">import tensorflow as tf

tf.set_random_seed(1)
x = tf.random_normal(shape=[1, 3, 3, 1])
# the kernel as it would look in the imagined forward convolution
kernel = tf.random_normal(shape=[2, 2, 3, 1])
# strides and padding also describe the imagined forward convolution;
# x, of course, is what that forward convolution would produce
y = tf.nn.conv2d_transpose(x, kernel, output_shape=[1, 5, 5, 3],
                           strides=[1, 2, 2, 1], padding="SAME")
# output_shape=[1,6,6,3] would also work here: in the forward direction,
# [1,6,6,3] with kernel_shape [2,2,3,1] and strides [1,2,2,1]
# also produces x_shape [1,3,3,1]
# output_shape may also be a tensor
sess = tf.Session()
tf.global_variables_initializer().run(session=sess)
print(y.eval(session=sess))</code></pre>
<p><strong>conv2d_transpose checks whether the inputs' dimensions can be recovered from output_shape with the given parameters; if not, it raises an error.</strong></p>
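<p>Concretely, this check is just the SAME-padding size formula run in the forward direction: a candidate output_shape is accepted only if ceil(output / stride) equals the input's spatial size. A minimal sketch of that rule, which also shows why [1,6,6,3] was a valid alternative above:</p>

```python
import math

def is_compatible(output_size, stride, input_size):
    # conv2d_transpose accepts output_shape iff the imagined forward
    # convolution (SAME padding) would map it back to the input size.
    return math.ceil(output_size / stride) == input_size

# x is [1,3,3,1]; with stride 2, both 5x5 and 6x6 targets are valid:
print(is_compatible(5, 2, 3))  # True
print(is_compatible(6, 2, 3))  # True
print(is_compatible(7, 2, 3))  # False: ceil(7/2) = 4 != 3
```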
<pre class="prettyprint" name="code"><code class="language-python">import tensorflow as tf
from tensorflow.contrib import slim

inputs = tf.random_normal(shape=[3, 97, 97, 10])
conv1 = slim.conv2d(inputs, num_outputs=20, kernel_size=3, stride=4)
de_weight = tf.get_variable('de_weight', shape=[3, 3, 10, 20])
deconv1 = tf.nn.conv2d_transpose(conv1, filter=de_weight, output_shape=tf.shape(inputs),
                                 strides=[1, 3, 3, 1], padding='SAME')
# ValueError: Shapes (3, 33, 33, 20) and (3, 25, 25, 20) are not compatible</code></pre>
<p>The error above means:</p>
<ul>
<li>conv1's shape is (3, 25, 25, 20)</li>
<li>but when deconv1 is differentiated with respect to conv1, the resulting gradient has shape [3, 33, 33, 20], which doesn't match <code>conv1</code>'s shape, so of course it raises an error.</li>
</ul>
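<p>The numbers in this error come straight from the same size formula; assuming the intent is to invert conv1, one fix is to make the transpose stride match the forward stride, i.e. strides=[1, 4, 4, 1] in the conv2d_transpose call:</p>

```python
import math

# conv1 was produced with stride 4: ceil(97/4) = 25.
print(math.ceil(97 / 4))  # 25

# But conv2d_transpose was asked to reach 97 with stride 3, which implies
# a forward input of ceil(97/3) = 33 - hence "(3, 33, 33, 20) and
# (3, 25, 25, 20) are not compatible".
print(math.ceil(97 / 3))  # 33
```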
<pre class="prettyprint" name="code"><code class="language-python">import tensorflow as tf
from tensorflow.contrib import slim
import numpy as np

inputs = tf.placeholder(tf.float32, shape=[None, None, None, 3])
conv1 = slim.conv2d(inputs, num_outputs=20, kernel_size=3, stride=4)
de_weight = tf.get_variable('de_weight', shape=[3, 3, 3, 20])
deconv1 = tf.nn.conv2d_transpose(conv1, filter=de_weight, output_shape=tf.shape(inputs),
                                 strides=[1, 3, 3, 1], padding='SAME')
loss = deconv1 - inputs
train_op = tf.train.GradientDescentOptimizer(0.001).minimize(loss)
with tf.Session() as sess:
    tf.global_variables_initializer().run()
    for i in range(10):
        data_in = np.random.normal(size=[3, 97, 97, 3])
        _, los_ = sess.run([train_op, loss], feed_dict={inputs: data_in})
        print(los_)
# InvalidArgumentError (see above for traceback): Conv2DSlowBackpropInput: Size of out_backprop doesn't match computed: actual = 25, computed = 33</code></pre>
<p>If the input shape contains several <code>None</code>s, you get a different error instead, as shown above. <br>
This error means:</p>
<ul>
<li>the second or third dimension of <code>conv1</code>'s shape is 25</li>
<li>but when deconv1 is differentiated with respect to conv1, the second or third dimension of the gradient's shape is 33</li>
</ul>
<p>The reason is that <code>deconv</code> is computed exactly the way the gradient of <code>conv</code> is computed, and <code>conv</code> is computed exactly the way the gradient of <code>deconv</code> is computed.</p>
<p>Differentiating a <code>deconv</code> amounts to taking the parameters of the <code>conv_transpose</code> and convolving them with the gradient of the <code>deconv</code>'s output.</p>
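<p>A minimal 1-D sketch of this duality (pure Python, VALID-style padding and stride 1 for simplicity): forward convolution is multiplication by a sparse matrix A, and transposed convolution is multiplication by A<sup>T</sup>, exactly the operation the gradient of the convolution applies:</p>

```python
def conv_matrix(input_size, kernel, stride):
    # Build the (out x in) matrix A of a 1-D strided convolution
    # (VALID padding), so that y = A @ x. Transposed convolution is
    # then multiplication by A^T - the same matrix the gradient uses.
    out_size = (input_size - len(kernel)) // stride + 1
    A = [[0.0] * input_size for _ in range(out_size)]
    for o in range(out_size):
        for k, w in enumerate(kernel):
            A[o][o * stride + k] = w
    return A

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def matvec_T(A, v):
    # Multiply by A^T: scatter each output value back over the input.
    n = len(A[0])
    out = [0.0] * n
    for row, val in zip(A, v):
        for j, a in enumerate(row):
            out[j] += a * val
    return out

A = conv_matrix(5, kernel=[1.0, 2.0], stride=1)
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = matvec(A, x)         # forward conv: length 4
x_back = matvec_T(A, y)  # transposed conv: back to length 5
print(len(y), len(x_back))  # 4 5
```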
<h2 id="如何灵活的控制-deconv-的output-shape"><a name="t1" target="_blank"></a>How to flexibly control the output shape of a deconv</h2>
<p><code>conv2d_transpose()</code> takes a parameter called <code>output_shape</code>. If you pass it a plain int list, <code>output_shape</code> is fixed for the whole run (an <code>int list</code> is enough for most applications). But how can <code>output_shape</code> be controlled more flexibly?</p>
<ul>
<li>Pass in a <code>tensor</code></li>
</ul>
<pre class="prettyprint" name="code"><code class="language-python"># option 1: a placeholder
outputs_shape = tf.placeholder(dtype=tf.int32, shape=[4])
deconv1 = tf.nn.conv2d_transpose(conv1, filter=de_weight, output_shape=outputs_shape,
                                 strides=[1, 3, 3, 1], padding='SAME')
# option 2: derive it from inputs' shape, slightly modified
inputs_shape = tf.shape(inputs)
outputs_shape = [inputs_shape[0], inputs_shape[1], inputs_shape[2], some_value]
deconv1 = tf.nn.conv2d_transpose(conv1, filter=de_weight, output_shape=outputs_shape,
                                 strides=[1, 3, 3, 1], padding='SAME')</code></pre></div>
</div>
</div>
<div class="article_title">
<span class="ico ico_type_Original"></span>
<h1>
<span class="link_title"><a href="/u012436149/article/details/55683401">
tensorflow学习笔记(三十二):conv2d_transpose ("解卷积")
</a>
</span>
</h1>
</div>
<div class="article_manage clearfix">
<div class="article_r">
<span class="link_postdate">2017-02-18 22:43</span>
<span class="link_view" title="阅读次数">4856人阅读</span>
<span class="link_comments" title="评论次数"> <a href="#comments" οnclick="_gaq.push(['_trackEvent','function', 'onclick', 'blog_articles_pinglun'])">评论</a>(0)</span>
<span class="link_collect tracking-ad" data-mod="popu_171"> <a href="javascript:void(0);" οnclick="javascript:collectArticle('tensorflow%e5%ad%a6%e4%b9%a0%e7%ac%94%e8%ae%b0(%e4%b8%89%e5%8d%81%e4%ba%8c)%3aconv2d_transpose+(%22%e8%a7%a3%e5%8d%b7%e7%a7%af%22)','55683401');return false;" title="收藏" target="_blank">收藏</a></span>
<span class="link_report"> <a href="#report" οnclick="javascript:report(55683401,2);return false;" title="举报">举报</a></span>
</div>
</div> <style type="text/css">
.embody{
padding:10px 10px 10px;
margin:0 -20px;
border-bottom:solid 1px #ededed;
}
.embody_b{
margin:0 ;
padding:10px 0;
}
.embody .embody_t,.embody .embody_c{
display: inline-block;
margin-right:10px;
}
.embody_t{
font-size: 12px;
color:#999;
}
.embody_c{
font-size: 12px;
}
.embody_c img,.embody_c em{
display: inline-block;
vertical-align: middle;
}
.embody_c img{
width:30px;
height:30px;
}
.embody_c em{
margin: 0 20px 0 10px;
color:#333;
font-style: normal;
}
</style>
<script type="text/javascript">
$(function () {
try
{
var lib = eval("("+$("#lib").attr("value")+")");
var html = "";
if (lib.err == 0) {
$.each(lib.data, function (i) {
var obj = lib.data[i];
//html += '<img src="' + obj.logo + '"/>' + obj.name + " ";
html += ' <a href="' + obj.url + '" target="_blank">';
html += ' <img src="' + obj.logo + '">';
html += ' <em><b>' + obj.name + '</b></em>';
html += ' </a>';
});
if (html != "") {
setTimeout(function () {
$("#lib").html(html);
$("#embody").show();
}, 100);
}
}
} catch (err)
{ }
});
</script>
<div class="category clearfix">
<div class="category_l">
<img src="http://static.blog.csdn.net/images/category_icon.jpg">
<span>分类:</span>
</div>
<div class="category_r">
<label οnclick="GetCategoryArticles('6461700','u012436149','top','55683401');">
<span οnclick="_gaq.push(['_trackEvent','function', 'onclick', 'blog_articles_fenlei']);">tensorflow<em>(66)</em></span>
<img class="arrow-down" src="http://static.blog.csdn.net/images/arrow_triangle _down.jpg" style="display:inline;">
<img class="arrow-up" src="http://static.blog.csdn.net/images/arrow_triangle_up.jpg" style="display:none;">
<div class="subItem">
<div class="subItem_t"><a href="http://blog.csdn.net/u012436149/article/category/6461700" target="_blank">作者同类文章</a><i class="J_close">X</i></div>
<ul class="subItem_l" id="top_6461700">
</ul>
</div>
</label>
</div>
</div>
<div class="bog_copyright">
<p class="copyright_p">转载于:http://blog.csdn.net/u012436149/article/details/55683401</p>
</div>
<div style="clear:both"></div><div style="border:solid 1px #ccc; background:#eee; float:left; min-width:200px;padding:4px 10px;"><p style="text-align:right;margin:0;"><span style="float:left;">目录<a href="#" title="系统根据文章中H1到H6标签自动生成文章目录">(?)</a></span><a href="#" οnclick="javascript:return openct(this);" title="展开">[+]</a></p><ol style="display:none;margin-left:14px;padding-left:14px;line-height:160%;"><li><a href="#t0">conv_transpose</a></li><ol><li><a href="#t1">如何灵活的控制 deconv 的output shape</a></li></ol></ol></div><div style="clear:both"></div><div id="article_content" class="article_content tracking-ad" data-mod="popu_307" data-dsm="post">
<div class="markdown_views"><h1 id="convtranspose"><a name="t0" target="_blank"></a>conv_transpose</h1>
<p><code>deconv</code>解卷积,实际是叫做<code>conv_transpose</code>, <code>conv_transpose</code>实际是卷积的一个逆向过程,<code>tf</code> 中, 编写<code>conv_transpose</code>代码的时候,心中想着一个正向的卷积过程会很有帮助。</p>
<p>想象一下我们有一个正向卷积: <br>
<code>input_shape = [1,5,5,3]</code> <br>
<code>kernel_shape=[2,2,3,1]</code> <br>
<code>strides=[1,2,2,1]</code> <br>
<code>padding = "SAME"</code></p>
<p>那么,卷积激活后,我们会得到 x(就是上面代码的x)。那么,我们已知x,要想得到input_shape 形状的 tensor,我们应该如何使用<code>conv2d_transpose</code>函数呢? <br>
就用下面的代码</p>
<pre class="prettyprint" name="code"><code class="language-python hljs has-numbering"><span class="hljs-keyword">import</span> tensorflow <span class="hljs-keyword">as</span> tf
tf.set_random_seed(<span class="hljs-number">1</span>)
x = tf.random_normal(shape=[<span class="hljs-number">1</span>,<span class="hljs-number">3</span>,<span class="hljs-number">3</span>,<span class="hljs-number">1</span>])
<span class="hljs-comment">#正向卷积的kernel的模样</span>
kernel = tf.random_normal(shape=[<span class="hljs-number">2</span>,<span class="hljs-number">2</span>,<span class="hljs-number">3</span>,<span class="hljs-number">1</span>])
<span class="hljs-comment"># strides 和padding也是假想中 正向卷积的模样。当然,x是正向卷积后的模样</span>
y = tf.nn.conv2d_transpose(x,kernel,output_shape=[<span class="hljs-number">1</span>,<span class="hljs-number">5</span>,<span class="hljs-number">5</span>,<span class="hljs-number">3</span>],
strides=[<span class="hljs-number">1</span>,<span class="hljs-number">2</span>,<span class="hljs-number">2</span>,<span class="hljs-number">1</span>],padding=<span class="hljs-string">"SAME"</span>)
<span class="hljs-comment"># 在这里,output_shape=[1,6,6,3]也可以,考虑正向过程,[1,6,6,3]</span>
<span class="hljs-comment"># 通过kernel_shape:[2,2,3,1],strides:[1,2,2,1]也可以</span>
<span class="hljs-comment"># 获得x_shape:[1,3,3,1]</span>
<span class="hljs-comment"># output_shape 也可以是一个 tensor</span>
sess = tf.Session()
tf.global_variables_initializer().run(session=sess)
print(y.eval(session=sess))</code><ul class="pre-numbering"><li>1</li><li>2</li><li>3</li><li>4</li><li>5</li><li>6</li><li>7</li><li>8</li><li>9</li><li>10</li><li>11</li><li>12</li><li>13</li><li>14</li><li>15</li><li>16</li><li>17</li></ul><div class="save_code tracking-ad" data-mod="popu_249"><a href="javascript:;" target="_blank"><img src="http://static.blog.csdn.net/images/save_snippets.png"></a></div></pre>
<p><strong>conv2d_transpose 中会计算 output_shape 能否通过给定的参数计算出 inputs的维度,如果不能,则报错</strong> </p>
<pre class="prettyprint" name="code"><code class="language-python hljs has-numbering"><span class="hljs-keyword">import</span> tensorflow <span class="hljs-keyword">as</span> tf
<span class="hljs-keyword">from</span> tensorflow.contrib <span class="hljs-keyword">import</span> slim
inputs = tf.random_normal(shape=[<span class="hljs-number">3</span>, <span class="hljs-number">97</span>, <span class="hljs-number">97</span>, <span class="hljs-number">10</span>])
conv1 = slim.conv2d(inputs, num_outputs=<span class="hljs-number">20</span>, kernel_size=<span class="hljs-number">3</span>, stride=<span class="hljs-number">4</span>)
de_weight = tf.get_variable(<span class="hljs-string">'de_weight'</span>, shape=[<span class="hljs-number">3</span>, <span class="hljs-number">3</span>, <span class="hljs-number">10</span>, <span class="hljs-number">20</span>])
deconv1 = tf.nn.conv2d_transpose(conv1, filter=de_weight, output_shape=tf.shape(inputs),
strides=[<span class="hljs-number">1</span>, <span class="hljs-number">3</span>, <span class="hljs-number">3</span>, <span class="hljs-number">1</span>], padding=<span class="hljs-string">'SAME'</span>)
<span class="hljs-comment"># ValueError: Shapes (3, 33, 33, 20) and (3, 25, 25, 20) are not compatible</span></code><ul class="pre-numbering"><li>1</li><li>2</li><li>3</li><li>4</li><li>5</li><li>6</li><li>7</li><li>8</li><li>9</li><li>10</li><li>11</li><li>12</li><li>13</li></ul><div class="save_code tracking-ad" data-mod="popu_249"><a href="javascript:;" target="_blank"><img src="http://static.blog.csdn.net/images/save_snippets.png"></a></div></pre>
<p>上面错误的意思是:</p>
<ul>
<li>conv1 的 shape 是 (3, 25, 25, 20)</li>
<li>但是 deconv1 对 conv1 求导的时候,得到的导数 shape 却是 [3, 33, 33, 20],这个和 <code>conv1</code> 的shape 不匹配,当然要报错咯。</li>
</ul>
<pre class="prettyprint" name="code"><code class="language-python hljs has-numbering"><span class="hljs-keyword">import</span> tensorflow <span class="hljs-keyword">as</span> tf
<span class="hljs-keyword">from</span> tensorflow.contrib <span class="hljs-keyword">import</span> slim
<span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np
inputs = tf.placeholder(tf.float32, shape=[<span class="hljs-keyword">None</span>, <span class="hljs-keyword">None</span>, <span class="hljs-keyword">None</span>, <span class="hljs-number">3</span>])
conv1 = slim.conv2d(inputs, num_outputs=<span class="hljs-number">20</span>, kernel_size=<span class="hljs-number">3</span>, stride=<span class="hljs-number">4</span>)
de_weight = tf.get_variable(<span class="hljs-string">'de_weight'</span>, shape=[<span class="hljs-number">3</span>, <span class="hljs-number">3</span>, <span class="hljs-number">3</span>, <span class="hljs-number">20</span>])
deconv1 = tf.nn.conv2d_transpose(conv1, filter=de_weight, output_shape=tf.shape(inputs),
strides=[<span class="hljs-number">1</span>, <span class="hljs-number">3</span>, <span class="hljs-number">3</span>, <span class="hljs-number">1</span>], padding=<span class="hljs-string">'SAME'</span>)
loss = deconv1 - inputs
train_op = tf.train.GradientDescentOptimizer(<span class="hljs-number">0.001</span>).minimize(loss)
<span class="hljs-keyword">with</span> tf.Session() <span class="hljs-keyword">as</span> sess:
tf.global_variables_initializer().run()
<span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> range(<span class="hljs-number">10</span>):
data_in = np.random.normal(size=[<span class="hljs-number">3</span>, <span class="hljs-number">97</span>, <span class="hljs-number">97</span>, <span class="hljs-number">3</span>])
_, los_ = sess.run([train_op, loss], feed_dict={inputs: data_in})
print(los_)
<span class="hljs-comment"># InvalidArgumentError (see above for traceback): Conv2DSlowBackpropInput: Size of out_backprop doesn't match computed: actual = 25, computed = 33</span></code><ul class="pre-numbering"><li>1</li><li>2</li><li>3</li><li>4</li><li>5</li><li>6</li><li>7</li><li>8</li><li>9</li><li>10</li><li>11</li><li>12</li><li>13</li><li>14</li><li>15</li><li>16</li><li>17</li><li>18</li><li>19</li><li>20</li><li>21</li><li>22</li><li>23</li><li>24</li></ul><div class="save_code tracking-ad" data-mod="popu_249"><a href="javascript:;" target="_blank"><img src="http://static.blog.csdn.net/images/save_snippets.png"></a></div></pre>
<p>如果 输入的 shape 有好多 <code>None</code> 的话,那就是另外一种 报错方式了,如上所示: <br>
这个错误的意思是:</p>
<ul>
<li><code>conv1</code> 的 shape 第二维或第三维的 shape 是 25</li>
<li>但是 deconv1 对 conv1 求导的时候,得到的 倒数 shape 的第二位或第三维却是 33</li>
</ul>
<p>至于为什么会这样,因为 <code>deconv</code> 的计算方式就是 <code>conv</code> 求导的计算方式,<code>conv</code> 的计算方式,就是 <code>decov</code> 求导的方式。</p>
<p>对<code>deconv</code> 求导就相当于 拿着 <code>conv_transpose</code> 中的参数对 <code>deconv</code> 输出的值的导数做卷积。 </p>
<h2 id="如何灵活的控制-deconv-的output-shape"><a name="t1" target="_blank"></a>如何灵活的控制 deconv 的output shape</h2>
<p>在 <code>conv2d_transpose()</code> 中,有一个参数,叫 <code>output_shape</code>, 如果对它传入一个 int list 的话,那么在运行的过程中,<code>output_shape</code> 将无法改变(传入<code>int list</code>已经可以满足大部分应用的需要),但是如何更灵活的控制 <code>output_shape</code> 呢?</p>
<ul>
<li>传入 <code>tensor</code></li>
</ul>
<pre class="prettyprint" name="code"><code class="language-python hljs has-numbering"><span class="hljs-comment"># Option 1: use a placeholder and feed the shape at run time</span>
outputs_shape = tf.placeholder(dtype=tf.int32, shape=[<span class="hljs-number">4</span>])
deconv1 = tf.nn.conv2d_transpose(conv1, filter=de_weight, output_shape=outputs_shape,
                strides=[<span class="hljs-number">1</span>, <span class="hljs-number">3</span>, <span class="hljs-number">3</span>, <span class="hljs-number">1</span>], padding=<span class="hljs-string">'SAME'</span>)
<span class="hljs-comment"># Option 2: derive it from the shape of inputs, adjusting some dimensions</span>
inputs_shape = tf.shape(inputs)
outputs_shape = [inputs_shape[<span class="hljs-number">0</span>], inputs_shape[<span class="hljs-number">1</span>], inputs_shape[<span class="hljs-number">2</span>], some_value]
deconv1 = tf.nn.conv2d_transpose(conv1, filter=de_weight, output_shape=outputs_shape,
                strides=[<span class="hljs-number">1</span>, <span class="hljs-number">3</span>, <span class="hljs-number">3</span>, <span class="hljs-number">1</span>], padding=<span class="hljs-string">'SAME'</span>)</code></pre></div>
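<p>Whichever option is used, the values fed to <code>output_shape</code> at run time must obey the stride rule described above. The natural choice for <code>padding='SAME'</code> can be sketched in plain Python (illustrative name, not a TensorFlow API):</p>

```python
def deconv_output_shape(input_shape, stride, out_channels):
    # For padding='SAME' and stride s, the canonical output_shape of
    # conv2d_transpose on an (n, h, w, c) input is [n, h*s, w*s, out_channels];
    # any spatial size d with ceil(d/s) == h (resp. w) is also accepted.
    n, h, w, _ = input_shape
    return [n, h * stride, w * stride, out_channels]

# e.g. an (8, 25, 25, 32) feature map upsampled with stride 3 and
# 16 output channels:
print(deconv_output_shape((8, 25, 25, 32), 3, 16))  # [8, 75, 75, 16]
```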
</div>
</div>