TensorFlow Implementations of Convolution, Transposed Convolution, and Dilated Convolution

Original post: https://blog.csdn.net/guyuealian/article/details/86239099


    TensorFlow already implements convolution (the tf.nn.conv2d function), transposed convolution (tf.nn.conv2d_transpose), and dilated convolution (tf.nn.atrous_conv2d). Explanations of their parameters are easy to find online; the harder part is computing the output dimensions. This post provides three wrappers, for convolution, transposed convolution, and dilated convolution, to make them convenient to call:


1. Convolution

  • Input image size: W×W
  • Filter size: F×F
  • Stride: S
  • Padding (in pixels): P

   From these we obtain:

N = ⌊(W − F + 2P) / S⌋ + 1, where ⌊·⌋ rounds down.

    The output image size is N×N. For more on computing convolution output dimensions, see: https://blog.csdn.net/qq_21997625/article/details/87252780
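    As a quick sanity check of the formula (the helper below is illustrative, not from the original post): with W=7, F=3, P=1, S=2 we get N = ⌊(7 − 3 + 2·1)/2⌋ + 1 = 4.

    def conv_output_size(W, F, S, P):
        # N = floor((W - F + 2P) / S) + 1
        return (W - F + 2 * P) // S + 1

    print(conv_output_size(W=7, F=3, S=2, P=1))    # 4
    print(conv_output_size(W=224, F=7, S=2, P=3))  # 112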

    You can use TensorFlow's high-level API slim.conv2d:

    net = slim.conv2d(inputs=inputs,
                      num_outputs=num_outputs,
                      weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                      weights_regularizer=reg,
                      kernel_size=[kernel, kernel],
                      activation_fn=activation_fn,
                      stride=stride,
                      padding=padding,
                      trainable=True,
                      scope=scope)
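    A minimal sketch of calling it (the input shape, regularizer, and scope name here are placeholders); with SAME padding and stride 2, the output spatial size is ceil(100 / 2) = 50:

    import tensorflow as tf
    import tensorflow.contrib.slim as slim

    inputs = tf.ones(shape=[4, 100, 100, 3])
    reg = slim.l2_regularizer(scale=0.01)
    net = slim.conv2d(inputs=inputs,
                      num_outputs=32,
                      weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                      weights_regularizer=reg,
                      kernel_size=[3, 3],
                      activation_fn=tf.nn.relu,
                      stride=2,
                      padding="SAME",
                      scope="conv1")
    print(net.get_shape())  # (4, 50, 50, 32)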

     In some special cases you may want to pad the feature map yourself:

    def slim_conv2d(inputs, num_outputs, stride, padding, kernel, activation_fn, reg, scope):
        if padding == "VALID":
            # Reflect-pad by kernel // 2 so that the VALID convolution below
            # does not shrink the spatial size (for odd kernels, stride 1)
            padding_size = int(kernel / 2)
            inputs = tf.pad(inputs,
                            paddings=[[0, 0], [padding_size, padding_size],
                                      [padding_size, padding_size], [0, 0]],
                            mode='REFLECT')
            print("pad.inputs.shape:{}".format(inputs.get_shape()))
        net = slim.conv2d(inputs=inputs,
                          num_outputs=num_outputs,
                          weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                          weights_regularizer=reg,
                          kernel_size=[kernel, kernel],
                          activation_fn=activation_fn,
                          stride=stride,
                          padding=padding,
                          trainable=True,
                          scope=scope)
        print("net.{}.shape:{}".format(scope, net.get_shape()))
        return net
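    For example (an illustrative call, assuming an odd kernel so the reflect padding exactly offsets the VALID cropping at stride 1):

    inputs = tf.ones(shape=[1, 8, 8, 3])
    net = slim_conv2d(inputs, num_outputs=16, stride=1, padding="VALID",
                      kernel=3, activation_fn=tf.nn.relu, reg=None, scope="reflect_conv")
    # pad.inputs.shape: (1, 10, 10, 3)       -- 8 + 2 * int(3 / 2)
    # net.reflect_conv.shape: (1, 8, 8, 16)  -- 10 - 3 + 1 = 8, size preserved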

    Below is a convolution layer wrapped by hand with low-level TensorFlow ops, providing essentially the same functionality as the high-level slim.conv2d API:

    def conv2D_layer(inputs, num_outputs, kernel_size, activation_fn, stride, padding, scope, weights_regularizer):
        '''
        A convolution-layer API modeled on tensorflow slim: convolution plus activation, no pooling
        :param inputs:
        :param num_outputs:
        :param kernel_size: kernel size, typically [1,1], [3,3] or [5,5]
        :param activation_fn: activation function
        :param stride: e.g. [2,2]
        :param padding: SAME or VALID
        :param scope: scope name
        :param weights_regularizer: regularizer, e.g. weights_regularizer = slim.l2_regularizer(scale=0.01)
        :return:
        '''
        with tf.variable_scope(name_or_scope=scope):
            in_channels = inputs.get_shape().as_list()[3]
            # kernel=[height, width, in_channels, output_channels]
            kernel = [kernel_size[0], kernel_size[1], in_channels, num_outputs]
            strides = [1, stride[0], stride[1], 1]
            # filter_weight=tf.Variable(initial_value=tf.truncated_normal(shape,stddev=0.1))
            filter_weight = slim.variable(name='weights',
                                          shape=kernel,
                                          initializer=tf.truncated_normal_initializer(stddev=0.1),
                                          regularizer=weights_regularizer)
            bias = tf.Variable(tf.constant(0.01, shape=[num_outputs]))
            inputs = tf.nn.conv2d(inputs, filter_weight, strides, padding=padding) + bias
            if activation_fn is not None:
                inputs = activation_fn(inputs)
            return inputs
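    A quick check of this wrapper against the high-level API (an illustrative call; the shape matches slim.conv2d with the same settings):

    inputs = tf.ones(shape=[4, 100, 100, 3])
    net = conv2D_layer(inputs=inputs,
                       num_outputs=32,
                       kernel_size=[3, 3],
                       activation_fn=tf.nn.relu,
                       stride=[2, 2],
                       padding="SAME",
                       scope="my_conv",
                       weights_regularizer=None)
    print(net.get_shape())  # (4, 50, 50, 32)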

2. Transposed Convolution

     TensorFlow's high-level APIs already provide transposed convolution: tf.layers.conv2d_transpose and slim.conv2d_transpose, which are used in essentially the same way. If you want to implement transposed convolution with tf.nn.conv2d_transpose instead, you must compute the output dimensions yourself for padding='VALID' and 'SAME'. The deconv_output_length function below computes the output dimension automatically from the input dimension, filter_size, padding and stride.

    # -*-coding: utf-8 -*-
    """
    @Project: YNet-python
    @File   : myTest.py
    @Author : panjq
    @E-mail : pan_jinquan@163.com
    @Date   : 2019-01-10 15:51:23
    """
    import tensorflow as tf
    import tensorflow.contrib.slim as slim

    def deconv_output_length(input_length, filter_size, padding, stride):
        """Determines output length of a transposed convolution given input length.
        Arguments:
            input_length: integer.
            filter_size: integer.
            padding: one of SAME, VALID or FULL.
            stride: integer.
        Returns:
            The output length (integer).
        """
        if input_length is None:
            return None
        # Default case: SAME padding
        input_length *= stride
        if padding == 'VALID':
            input_length += max(filter_size - stride, 0)
        elif padding == 'FULL':
            input_length -= (stride + filter_size - 2)
        return input_length

    def conv2D_transpose_layer(inputs, num_outputs, kernel_size, activation_fn, stride, padding, scope, weights_regularizer):
        '''
        A transposed-convolution API: transposed convolution plus activation, no pooling
        :param inputs: input Tensor=[batch, in_height, in_width, in_channels]
        :param num_outputs:
        :param kernel_size: kernel size, typically [1,1], [3,3] or [5,5]
        :param activation_fn: activation function
        :param stride: e.g. [2,2]
        :param padding: SAME or VALID
        :param scope: scope name
        :param weights_regularizer: regularizer, e.g. weights_regularizer = slim.l2_regularizer(scale=0.01)
        :return:
        '''
        with tf.variable_scope(name_or_scope=scope):
            # shape = [batch_size, height, width, channel]
            in_shape = inputs.get_shape().as_list()
            # Compute the output dimensions of the transposed convolution
            output_height = deconv_output_length(in_shape[1], kernel_size[0], padding=padding, stride=stride[0])
            output_width = deconv_output_length(in_shape[2], kernel_size[1], padding=padding, stride=stride[1])
            output_shape = [in_shape[0], output_height, output_width, num_outputs]
            strides = [1, stride[0], stride[1], 1]
            # kernel=[kernel_height, kernel_width, output_channels, input_channels]
            kernel = [kernel_size[0], kernel_size[1], num_outputs, in_shape[3]]
            filter_weight = slim.variable(name='weights',
                                          shape=kernel,
                                          initializer=tf.truncated_normal_initializer(stddev=0.1),
                                          regularizer=weights_regularizer)
            bias = tf.Variable(tf.constant(0.01, shape=[num_outputs]))
            inputs = tf.nn.conv2d_transpose(value=inputs, filter=filter_weight, output_shape=output_shape,
                                            strides=strides, padding=padding) + bias
            if activation_fn is not None:
                inputs = activation_fn(inputs)
            return inputs

    if __name__ == "__main__":
        inputs = tf.ones(shape=[4, 100, 100, 3])
        stride = 2
        kernel_size = 10
        padding = "SAME"
        net1 = tf.layers.conv2d_transpose(inputs=inputs,
                                          filters=32,
                                          kernel_size=kernel_size,
                                          strides=stride,
                                          padding=padding)
        net2 = slim.conv2d_transpose(inputs=inputs,
                                     num_outputs=32,
                                     kernel_size=[kernel_size, kernel_size],
                                     stride=[stride, stride],
                                     padding=padding)
        net3 = conv2D_transpose_layer(inputs=inputs,
                                      num_outputs=32,
                                      kernel_size=[kernel_size, kernel_size],
                                      activation_fn=tf.nn.relu,
                                      stride=[stride, stride],
                                      padding=padding,
                                      scope="conv2D_transpose_layer",
                                      weights_regularizer=None)
        print("net1.shape:{}".format(net1.get_shape()))
        print("net2.shape:{}".format(net2.get_shape()))
        print("net3.shape:{}".format(net3.get_shape()))
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
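    With these settings (SAME padding, stride 2, a 100×100 input), all three layers should report the shape (4, 200, 200, 32): for SAME padding, a transposed convolution simply multiplies the spatial size by the stride, which is exactly the first branch of deconv_output_length above.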

3. Dilated Convolution: Enlarging the Receptive Field

    Dilated/atrous convolution, also called "convolution with holes" (dilated convolutions), introduces a new parameter to the convolution layer called the "dilation rate", which defines the spacing between the kernel's taps as they are applied to the data. The goal of this structure is to provide a larger receptive field without pooling (pooling layers lose information) and at comparable computational cost.

    An important parameter in dilated convolution is the rate, which controls the size of the holes. The concept, and how the operation works, can be understood from two angles.

1) From the viewpoint of the input image, the "holes" amount to sampling the input. The sampling frequency is set by the rate parameter: when rate is 1, the input is sampled without losing any information and the operation reduces to standard convolution; when rate > 1, say 2, the input is sampled at every rate-th pixel, skipping rate−1 pixels in between (in the original post's figure (b), the red dots mark the sampling points on the input). The sampled image is then convolved with the kernel, which effectively enlarges the receptive field.

2) From the kernel's viewpoint, the holes enlarge the kernel: rate−1 zeros are inserted between adjacent kernel taps, and the enlarged kernel is convolved with the original image. This likewise enlarges the receptive field (a numerical check of this equivalence is sketched after this list).

3) To enlarge the receptive field, standard convolution can downsample with pooling, increasing the receptive field while shrinking the image scale, but pooling itself is not learnable and discards much detail. Dilated convolution achieves a large receptive field, seeing more of the input, without any pooling.

4) Enlarging the kernel itself also increases the receptive field, but at a higher computational cost.
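The kernel view in point 2) can be checked numerically: inserting rate−1 zeros between the taps of a 3×3 kernel yields a 5×5 kernel at rate 2, and a standard VALID convolution with the dilated kernel should match tf.nn.atrous_conv2d (a minimal sketch; the random shapes are illustrative):

    import numpy as np
    import tensorflow as tf

    rate = 2
    x = tf.constant(np.random.rand(1, 10, 10, 1), dtype=tf.float32)
    k = np.random.rand(3, 3, 1, 1).astype(np.float32)

    # Dilate the 3x3 kernel to 5x5 by inserting rate-1 zeros between taps
    k_dilated = np.zeros((5, 5, 1, 1), dtype=np.float32)
    k_dilated[::rate, ::rate] = k

    y1 = tf.nn.atrous_conv2d(x, tf.constant(k), rate=rate, padding="VALID")
    y2 = tf.nn.conv2d(x, tf.constant(k_dilated), strides=[1, 1, 1, 1], padding="VALID")

    with tf.Session() as sess:
        a, b = sess.run([y1, y2])
        print(np.allclose(a, b))  # True: both outputs are (1, 6, 6, 1)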

[Figure: standard convolution]

[Figure: dilated convolution]

    The VGG network showed that stacking small kernels in place of a large kernel reduces parameters while achieving the same receptive field as the large kernel. But the receptive field of stacked small kernels only grows linearly, as (kernelSize − 1) × layers + 1, whereas dilated convolution can grow the receptive field exponentially.
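    A quick illustration of that growth (an illustrative calculation, assuming stride 1 throughout): three stacked 3×3 convolutions give a receptive field of (3 − 1) × 3 + 1 = 7, while three 3×3 dilated convolutions with rates 1, 2, 4 reach 15:

    def stacked_receptive_field(kernel, rates):
        # Each layer adds (effective_kernel - 1), where
        # effective_kernel = kernel + (kernel - 1) * (rate - 1)
        rf = 1
        for rate in rates:
            rf += kernel + (kernel - 1) * (rate - 1) - 1
        return rf

    print(stacked_receptive_field(3, [1, 1, 1]))  # 7  : plain stack, linear growth
    print(stacked_receptive_field(3, [1, 2, 4]))  # 15 : dilated stack, exponential growth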

    Reference: https://blog.csdn.net/silence2015/article/details/79748729

    def dilated_conv2D_layer(inputs, num_outputs, kernel_size, activation_fn, rate, padding, scope, weights_regularizer):
        '''
        A dilated-convolution API wrapped with TensorFlow: dilated convolution plus activation, no pooling
        :param inputs:
        :param num_outputs:
        :param kernel_size: kernel size, typically [1,1], [3,3] or [5,5]
        :param activation_fn: activation function
        :param rate: dilation rate
        :param padding: SAME or VALID
        :param scope: scope name
        :param weights_regularizer: regularizer, e.g. weights_regularizer = slim.l2_regularizer(scale=0.01)
        :return:
        '''
        with tf.variable_scope(name_or_scope=scope):
            in_channels = inputs.get_shape().as_list()[3]
            kernel = [kernel_size[0], kernel_size[1], in_channels, num_outputs]
            # filter_weight=tf.Variable(initial_value=tf.truncated_normal(shape,stddev=0.1))
            filter_weight = slim.variable(name='weights',
                                          shape=kernel,
                                          initializer=tf.truncated_normal_initializer(stddev=0.1),
                                          regularizer=weights_regularizer)
            bias = tf.Variable(tf.constant(0.01, shape=[num_outputs]))
            # inputs = tf.nn.conv2d(inputs, filter_weight, strides, padding=padding) + bias
            inputs = tf.nn.atrous_conv2d(inputs, filter_weight, rate=rate, padding=padding) + bias
            if activation_fn is not None:
                inputs = activation_fn(inputs)
            return inputs
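    An illustrative call (SAME padding keeps the spatial size regardless of the rate):

    inputs = tf.ones(shape=[4, 100, 100, 3])
    net = dilated_conv2D_layer(inputs=inputs,
                               num_outputs=32,
                               kernel_size=[3, 3],
                               activation_fn=tf.nn.relu,
                               rate=2,
                               padding="SAME",
                               scope="dilated_conv",
                               weights_regularizer=None)
    print(net.get_shape())  # (4, 100, 100, 32)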

 
