When configuring a convolutional layer in Keras or TensorFlow, the padding argument takes one of two values: 'valid' or 'same'. 'valid' performs only valid convolutions and drops border positions where the kernel would overhang the input. 'same' pads the borders so those positions are kept, which at stride 1 makes the output shape equal the input shape.
In the formulas below, W is the input size, S the stride, and F the kernel size.
I. In TensorFlow
1. Convolution
In TensorFlow, the output shape of a convolution can be computed with the formulas below:
padding = 'VALID': n = ceil((W - F + 1) / S)
padding = 'SAME': n = ceil(W / S)
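As a quick sanity check, here is a minimal sketch (W, F, S are arbitrary example values) that evaluates both formulas:

import math

W, F, S = 5, 3, 2  # input size, kernel size, stride (arbitrary example values)
print(math.ceil((W - F + 1) / S))  # VALID: ceil(3 / 2) = 2
print(math.ceil(W / S))            # SAME:  ceil(5 / 2) = 3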
In the VALID formula, dilation_rate (the dilation factor) also enters; it defaults to 1 and is usually left unset. Many bloggers quote the following official documentation (I could not find its exact source; it appears to be the tf.nn.convolution docstring):
If padding == "SAME":
output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])
If padding == "VALID":
output_spatial_shape[i] =
ceil((input_spatial_shape[i] -
(spatial_filter_shape[i]-1) * dilation_rate[i])
/ strides[i]).
Raises:
ValueError: If input/output depth does not match filter shape, if padding
is other than "VALID" or "SAME", or if data_format is invalid.
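The dilation term can be checked directly. A minimal TF1-style sketch (sizes are arbitrary; note that TensorFlow requires strides of 1 when dilation_rate > 1):

import tensorflow as tf

# VALID with dilation: ceil((W - (F - 1) * d) / S) = ceil((7 - 2 * 2) / 1) = 3
x = tf.Variable(tf.random_normal([1, 7, 7, 3]))
k = tf.Variable(tf.random_normal([3, 3, 3, 7]))
y = tf.nn.convolution(x, k, padding='VALID', dilation_rate=[2, 2])

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    print(sess.run(y).shape)  # (1, 3, 3, 7)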
2. Pooling
padding = 'VALID': n = ceil((W - F + 1) / S)
padding = 'SAME': n = ceil(W / S)
You can verify this yourself with the following code:
import tensorflow as tf

# Change the input dimensions and the conv2d padding mode to experiment
input = tf.Variable(tf.random_normal([1, 5, 5, 3]))
filter = tf.Variable(tf.random_normal([3, 3, 3, 7]))
result = tf.nn.conv2d(input, filter, strides=[1, 2, 2, 1], padding='SAME')
pool_result = tf.nn.max_pool(input, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    s = sess.run(result)
    p = sess.run(pool_result)
    print(s.shape)
    print(p.shape)
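Running this, the SAME convolution gives ceil(5 / 2) = 3 per spatial dimension, so s.shape is (1, 3, 3, 7); the VALID pooling gives ceil((5 - 2 + 1) / 2) = 2, so p.shape is (1, 2, 2, 3).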
II. In Keras
1. Convolution
Conv2D defaults to padding='valid' with strides=1, and the shape computation matches TensorFlow's. If you set these arguments explicitly, the computation still matches TensorFlow.
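A minimal sketch (printed shapes assume the TensorFlow backend) showing both paddings at stride 2:

from keras.layers import Input, Conv2D

x = Input((5, 5, 3))
print(Conv2D(7, 3, strides=2, padding='valid')(x).shape)  # ceil((5 - 3 + 1) / 2) = 2
print(Conv2D(7, 3, strides=2, padding='same')(x).shape)   # ceil(5 / 2) = 3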
2. Pooling
MaxPooling2D defaults to padding='valid' with strides=None, in which case strides falls back to pool_size.
If you set only pool_size, then n = W / F: rounded up for 'same', rounded down for 'valid'.
If you also set strides, the computation is the same as in TensorFlow.
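A minimal sketch of the pool_size-only case (shapes assume the TensorFlow backend):

from keras.layers import Input, MaxPooling2D

x = Input((5, 5, 3))
# strides=None falls back to pool_size, so n = W / F here
print(MaxPooling2D(pool_size=2, padding='valid')(x).shape)  # floor(5 / 2) = 2
print(MaxPooling2D(pool_size=2, padding='same')(x).shape)   # ceil(5 / 2) = 3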
from keras.models import *
from keras.layers import *
from keras.optimizers import *

input = Input((572, 572, 1))
conv1 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(input)
print(conv1.shape)  # (?, 572, 572, 64): 'same' at stride 1 keeps the size
conv1 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='glorot_normal')(conv1)
print(conv1.shape)  # (?, 572, 572, 64)
pool1 = MaxPooling2D(pool_size=(3, 3))(conv1)
print(pool1.shape)  # (?, 190, 190, 64): 'valid' pool, floor(572 / 3) = 190
The first two convolutions come out exactly as the formulas predict in my tests, but it still pays to check each case individually.
Take U-Net, for example: I could not see how, with padding='same' and stride 1, a single conv shrinks the size by 2 as the paper's figure shows. That only works out if the padding is actually 'valid': 572 - 3 + 1 = 570, then 570 - 3 + 1 = 568, which matches the figure; and indeed the original U-Net paper uses unpadded (i.e. valid) convolutions. Many blog posts and GitHub implementations nevertheless use 'same', which keeps every feature map at its input size and avoids cropping before the skip connections.
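A minimal sketch of the 'valid' arithmetic that reproduces the paper's figure:

from keras.layers import Input, Conv2D

x = Input((572, 572, 1))
c = Conv2D(64, 3, padding='valid')(x)  # 572 - 3 + 1 = 570
c = Conv2D(64, 3, padding='valid')(c)  # 570 - 3 + 1 = 568
print(c.shape)  # (?, 568, 568, 64)

For reference, the widely circulated Keras implementation (padding='same' throughout) looks like this: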
def unet(pretrained_weights=None, input_size=(256, 256, 1)):
    inputs = Input(input_size)
    # Contracting path: every conv uses padding='same', so only the poolings change the size
    conv1 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(inputs)
    conv1 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
    conv2 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool1)
    conv2 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
    conv3 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool2)
    conv3 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv3)
    pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
    conv4 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool3)
    conv4 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv4)
    drop4 = Dropout(0.5)(conv4)
    pool4 = MaxPooling2D(pool_size=(2, 2))(drop4)
    # Bottleneck
    conv5 = Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool4)
    conv5 = Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv5)
    drop5 = Dropout(0.5)(conv5)
    # Expanding path: upsample, then concatenate with the matching encoder feature map
    up6 = Conv2D(512, 2, activation='relu', padding='same', kernel_initializer='he_normal')(UpSampling2D(size=(2, 2))(drop5))
    merge6 = concatenate([drop4, up6], axis=3)
    conv6 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge6)
    conv6 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv6)
    up7 = Conv2D(256, 2, activation='relu', padding='same', kernel_initializer='he_normal')(UpSampling2D(size=(2, 2))(conv6))
    merge7 = concatenate([conv3, up7], axis=3)
    conv7 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge7)
    conv7 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv7)
    up8 = Conv2D(128, 2, activation='relu', padding='same', kernel_initializer='he_normal')(UpSampling2D(size=(2, 2))(conv7))
    merge8 = concatenate([conv2, up8], axis=3)
    conv8 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge8)
    conv8 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv8)
    up9 = Conv2D(64, 2, activation='relu', padding='same', kernel_initializer='he_normal')(UpSampling2D(size=(2, 2))(conv8))
    merge9 = concatenate([conv1, up9], axis=3)
    conv9 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge9)
    conv9 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv9)
    conv9 = Conv2D(2, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv9)
    conv10 = Conv2D(1, 1, activation='sigmoid')(conv9)
    model = Model(inputs=inputs, outputs=conv10)
    model.compile(optimizer=Adam(lr=1e-4), loss='binary_crossentropy', metrics=['accuracy'])
    # model.summary()
    if pretrained_weights:
        model.load_weights(pretrained_weights)
    return model
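A usage sketch of the function above:

model = unet(input_size=(256, 256, 1))
model.summary()
# Every 'same' conv keeps the spatial size, so only the four poolings
# shrink it (256 -> 128 -> 64 -> 32 -> 16) and the four UpSampling2D
# steps bring it back to 256.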