Convolutional Neural Networks --- padding, pooling, and activation layers

#coding:utf-8
import tensorflow as tf

tf.reset_default_graph()
# A dummy NHWC input batch: 1 image, 112x96 pixels, 3 channels.
image = tf.random_normal([1, 112, 96, 3])
in_channels = 3
out_channels = 32
kernel_size = 5
# Convolution kernel in HWIO layout: [height, width, in_channels, out_channels].
conv_weight = tf.Variable(tf.truncated_normal([kernel_size, kernel_size, in_channels, out_channels],
                                              stddev=0.1, dtype=tf.float32))

print('image shape', image.get_shape())
print('conv weight shape', conv_weight.get_shape())
bias = tf.Variable(tf.zeros([out_channels], dtype=tf.float32))
conv = tf.nn.conv2d(image, conv_weight, strides=[1, 3, 3, 1], padding='SAME')
conv = tf.nn.bias_add(conv, bias)
print('conv output shape with SAME padding', conv.get_shape())

conv = tf.nn.conv2d(image, conv_weight, strides=[1, 3, 3, 1], padding='VALID')
conv = tf.nn.bias_add(conv, bias)
print('conv output shape with VALID padding', conv.get_shape())


'''
The difference between the two padding modes:

SAME pads the input with zeros wherever the filter would run past the border, so every
input position is covered. The output height and width are (rounded up):
HEIGHT = ceil(float(in_height) / float(strides[1]))
WIDTH  = ceil(float(in_width) / float(strides[2]))

VALID uses no padding and simply drops the border positions where the filter does not
fit entirely inside the input. The output height and width are:
HEIGHT = ceil(float(in_height - filter_height + 1) / float(strides[1]))
WIDTH  = ceil(float(in_width - filter_width + 1) / float(strides[2]))
'''

Printed output:

image shape (1, 112, 96, 3)
conv weight shape (5, 5, 3, 32)
conv output shape with SAME padding (1, 38, 32, 32)
conv output shape with VALID padding (1, 36, 31, 32)
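
The formulas above can be checked directly in plain Python. The helper below is a minimal sketch (the name conv_out_size is just for illustration, not a TensorFlow API) that reproduces the printed shapes for the 112x96 input, 5x5 kernel, and stride 3:

import math

def conv_out_size(in_size, filter_size, stride, padding):
    # Output size along one spatial dimension for tf.nn.conv2d-style padding.
    if padding == 'SAME':
        return int(math.ceil(float(in_size) / float(stride)))
    else:  # 'VALID'
        return int(math.ceil(float(in_size - filter_size + 1) / float(stride)))

print(conv_out_size(112, 5, 3, 'SAME'), conv_out_size(96, 5, 3, 'SAME'))    # 38 32
print(conv_out_size(112, 5, 3, 'VALID'), conv_out_size(96, 5, 3, 'VALID'))  # 36 31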

pool_size = 3
# Note: conv at this point holds the VALID-padded result with shape (1, 36, 31, 32).
pool = tf.nn.max_pool(conv, ksize=[1, pool_size, pool_size, 1], strides=[1, 2, 2, 1], padding='SAME')
print(pool.get_shape())
pool = tf.nn.max_pool(conv, ksize=[1, pool_size, pool_size, 1], strides=[1, 2, 2, 1], padding='VALID')
print(pool.get_shape())

Output:

(1, 18, 16, 32)
(1, 17, 15, 32)
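
Pooling follows the same SAME/VALID size rules, with the 3x3 pooling window playing the role of the filter. Reusing the conv_out_size sketch from above on the 36x31 VALID conv output reproduces these numbers:

# Input is the VALID conv output (36x31), window 3x3, stride 2.
print(conv_out_size(36, 3, 2, 'SAME'), conv_out_size(31, 3, 2, 'SAME'))    # 18 16
print(conv_out_size(36, 3, 2, 'VALID'), conv_out_size(31, 3, 2, 'VALID'))  # 17 15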

# Activation layers
relu = tf.nn.relu(pool)
print(relu.get_shape())

l2_regularizer = tf.contrib.layers.l2_regularizer(1.0)

def prelu(x, name='prelu'):
    # PReLU: y = max(x, 0) + alpha * min(x, 0), with one learnable alpha per channel.
    with tf.variable_scope(name):
        alphas = tf.get_variable('alpha', x.get_shape()[-1],
                                 initializer=tf.constant_initializer(0.25),
                                 regularizer=l2_regularizer, dtype=tf.float32)
    pos = tf.nn.relu(x)
    neg = tf.multiply(alphas, (x - abs(x)) * 0.5)  # (x - |x|) / 2 == min(x, 0)
    return pos + neg

prelu_out = prelu(pool)
print(prelu_out.get_shape())
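
As a quick sanity check, here is a minimal TF1 session sketch (the input values and the scope name prelu_check are just for illustration). With alpha initialized to 0.25, negative inputs are scaled by 0.25 while positive inputs pass through unchanged:

x = tf.constant([[-4.0, -1.0, 0.0, 2.0]])
y = prelu(x, name='prelu_check')
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y))  # expected: [[-1.   -0.25  0.    2.  ]]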

 

Reposted from: https://www.cnblogs.com/cnugis/p/9309113.html
