The importance of the receptive field in deep learning needs no elaboration: computing it plays an important role when designing a network.
Below is a brief introduction to the rule for computing it.
Receptive field (RF) computation:
RF = 1
for layer in reversed(range(layernum)):
    fsize, stride, pad = net[layer]
    RF = (RF - 1) * stride + fsize
A brief explanation:
RF above is the receptive field size, initialized to 1 (a single output unit sees exactly itself).
Walking the layers from back to front, each layer's receptive field is a linear function of the layer after it: RF = (RF - 1) * stride + fsize,
where fsize is the kernel size and stride is that layer's stride.
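The recurrence is easy to check by hand on a small example. Consider a hypothetical two-layer net (not one from the text): a 3x3 conv with stride 1 followed by a 2x2 pooling with stride 2. Going back to front: RF starts at 1; through the pooling layer RF = (1 - 1) * 2 + 2 = 2; through the conv layer RF = (2 - 1) * 1 + 3 = 4. A minimal sketch:

```python
def receptive_field(layers):
    """Walk the layers back to front, applying RF = (RF - 1) * stride + fsize.

    layers: list of (fsize, stride) tuples, ordered front to back.
    """
    rf = 1
    for fsize, stride in reversed(layers):
        rf = (rf - 1) * stride + fsize
    return rf

# Hypothetical two-layer net: 3x3 conv stride 1, then 2x2 pool stride 2.
print(receptive_field([(3, 1), (2, 2)]))  # -> 4
```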
Here is a complete version of the code:
# Compute the input size that leads to a 1x1 output size, among other things
# Each entry is [filter size, stride, padding]
convnet = [[11, 4, 0], [3, 2, 0], [5, 1, 2], [3, 2, 0], [3, 1, 1],
           [3, 1, 1], [3, 1, 1], [3, 2, 0], [6, 1, 0]]
layer_name = ['conv1', 'pool1', 'conv2', 'pool2', 'conv3', 'conv4', 'conv5', 'pool5', 'fc6-conv']
imsize = 227

def outFromIn(isz, layernum=9, net=convnet):
    # Forward pass: output size and total stride after the first `layernum` layers
    if layernum > len(net):
        layernum = len(net)
    totstride = 1
    insize = isz
    for layer in range(layernum):
        fsize, stride, pad = net[layer]
        outsize = (insize - fsize + 2 * pad) // stride + 1  # floor division for Python 3
        insize = outsize
        totstride = totstride * stride
    return outsize, totstride

def inFromOut(layernum=9, net=convnet):
    # Backward pass: size of the input patch (receptive field) behind a 1x1 output
    if layernum > len(net):
        layernum = len(net)
    outsize = 1
    for layer in reversed(range(layernum)):
        fsize, stride, pad = net[layer]
        outsize = (outsize - 1) * stride + fsize
    RFsize = outsize
    return RFsize

if __name__ == '__main__':
    print("layer output sizes given image = %dx%d" % (imsize, imsize))
    for i in range(len(convnet)):
        p = outFromIn(imsize, i + 1)
        rf = inFromOut(i + 1)
        print("Layer Name = %s, Output size = %3d, Stride = %3d, RF size = %3d"
              % (layer_name[i], p[0], p[1], rf))
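As a sanity check, a few receptive-field values can be recomputed by hand from the same recurrence for this AlexNet-style configuration: conv1 alone sees 11 pixels, conv1+pool1 see 19, and the full stack up to fc6-conv sees 355. A self-contained verification sketch (the `rf_up_to` helper is introduced here for illustration):

```python
# AlexNet-style (fsize, stride) pairs, matching convnet above.
alexnet = [(11, 4), (3, 2), (5, 1), (3, 2), (3, 1), (3, 1), (3, 1), (3, 2), (6, 1)]

def rf_up_to(layers, n):
    """Receptive field of one output unit after the first n layers."""
    rf = 1
    for fsize, stride in reversed(layers[:n]):
        rf = (rf - 1) * stride + fsize
    return rf

print(rf_up_to(alexnet, 1))            # conv1    -> 11
print(rf_up_to(alexnet, 2))            # pool1    -> 19
print(rf_up_to(alexnet, len(alexnet))) # fc6-conv -> 355
```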