Deep Neural Network Deeplabv3_resnet50 Explained

I recently worked on a project built on the deeplabv3_resnet50 network, and this post organizes it in a simple, easy-to-follow way so that beginners in my former position can understand it too and save some time. Following the data-flow process, I also wrote a replay script using only the most basic statements, to help readers see how each step is implemented.

Paper and Public Code

Paper (ResNet, the backbone): https://arxiv.org/pdf/1512.03385.pdf
Public code (torchvision ResNet): https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py
Reference blogs:
1. https://blog.csdn.net/qq_35435964/article/details/101470991
2. https://www.jiqizhixin.com/articles/042201
3. https://zhuanlan.zhihu.com/p/72226476?from_voters_page=true

My Own Understanding of the Network Architecture

Key property: the output's (H, W) is the same as the input's (H, W). As the data flows through the network it is first downsampled and then upsampled, which is exactly what produces this effect.
Deeplabv3 here consists of resnet50 as the backbone, plus an ASPP module and an aux_classifier. The detailed architecture diagram is below:
(Figure: detailed architecture diagram of Deeplabv3_resnet50; image omitted.)

As the figure shows, the input first passes through the initial block, then layer 1, layer 2, and layer 3 in sequence. The output of layer 3 is saved twice, as out (a tensor) and as aux (a tensor). out then continues through layer 4 and the ASPP pipeline, with the result assigned back to out, while aux goes directly through conv, BN, ReLU, dropout, conv to produce the new aux. At that point out and aux have the same shape, which here also matches the input shape.
Note: some write-ups call these layers "stages".
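
A quick way to verify this input/output behavior is to run the ready-made model from torchvision directly. A minimal sketch (assuming torchvision 0.7, the release paired with torch 1.6, with its default 21 classes):

import torch
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(pretrained=False, num_classes=21, aux_loss=True)
model.eval()                     # eval mode; also avoids the batch=1 BatchNorm issue noted later
x = torch.randn(2, 3, 384, 384)  # (batch, channels, H, W)
with torch.no_grad():
    result = model(x)            # an OrderedDict with keys "out" and "aux"
print(result["out"].shape)       # torch.Size([2, 21, 384, 384]) -- H, W match the input
print(result["aux"].shape)       # torch.Size([2, 21, 384, 384])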

Downsampling

Note: any operation that shrinks the image's H and W is treated here as downsampling; see the black arrows in the figure below.
Two forms of downsampling appear in the network:
1. Classic pooling
2. Strided convolution, which exploits the size-reducing property of stride > 1
Note that the pooling inside ASPP is adaptive average pooling, AdaptiveAvgPool2d(output_size=1), whose output has shape (B, N, 1, 1). Both forms are sketched in the code after the figure.
(Figure: downsampling operations marked with black arrows; image omitted.)
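
A minimal sketch of both downsampling forms plus ASPP's adaptive average pooling (the channel counts here are illustrative):

import torch
x = torch.randn(2, 64, 96, 96)
# 1) pooling: max pooling with stride 2 halves H and W
maxpool = torch.nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
print(maxpool(x).shape)   # torch.Size([2, 64, 48, 48])
# 2) strided convolution: stride 2 also halves H and W
conv = torch.nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1, bias=False)
print(conv(x).shape)      # torch.Size([2, 128, 48, 48])
# ASPP's pooling: adaptive average pooling straight down to 1x1
gap = torch.nn.AdaptiveAvgPool2d(output_size=1)
print(gap(x).shape)       # torch.Size([2, 64, 1, 1])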

Upsampling

Note: any operation that enlarges the image's H and W is treated here as upsampling; see the black arrows in the figure below.
The model upsamples in three places, which ultimately makes the output size equal to the input size.
Note that the upsampling inside ASPP enlarges the (B, N, 1, 1) tensor back to the shape of the other four ASPP branches, (B, N, 48, 48); the reason is explained in the ASPP section below. A minimal sketch follows the figure.
(Figure: upsampling operations marked with black arrows; image omitted.)
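
A minimal sketch of this upsampling, using the same bilinear interpolation call as the walkthrough code below:

import torch
import torch.nn.functional as F
x = torch.randn(2, 256, 1, 1)   # the pooled ASPP branch, (B, N, 1, 1)
up = F.interpolate(x, size=(48, 48), mode="bilinear", align_corners=False)
print(up.shape)                 # torch.Size([2, 256, 48, 48])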

Atrous Convolution

Atrous convolution appears in three modules of the network:
Layer 3: dilation=(2,2) in bottlenecks 1-5 (bottleneck 0 uses a plain 3x3 convolution)
Layer 4: varying dilation across its three bottlenecks: (2,2); (4,4); (4,4)
ASPP: varying dilation: (12,12); (24,24); (36,36)
Correspondingly, each atrous convolution sets its padding equal to its dilation, which keeps H and W unchanged.
(Figure: atrous convolutions are drawn with black rings; image omitted.)
Formula: the output size of a convolution with dilation (see the PyTorch Conv2d documentation):
H_out = floor((H_in + 2*padding - dilation*(kernel_size - 1) - 1) / stride + 1)
With kernel_size=3, stride=1, and padding = dilation, this reduces to H_out = H_in, which is why all the atrous convolutions above preserve the spatial size.
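
A minimal check of that claim, using the dilation rates that appear in the network:

import torch
x = torch.randn(2, 256, 48, 48)
for d in (2, 4, 12, 24, 36):
    conv = torch.nn.Conv2d(256, 256, kernel_size=3, stride=1,
                           padding=d, dilation=d, bias=False)
    print(d, conv(x).shape)     # always torch.Size([2, 256, 48, 48])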

ASPP: Atrous Spatial Pyramid Pooling

ASPP can be understood as a group of convolution branches arranged with a fixed pattern of dilation rates.
The outputs of its 5 branches are concatenated along dim=1, the channel axis, via concat(dim=1). With C channels per branch the result is (N, C*5, H, W); in this walkthrough that is (2, 1280, 48, 48). A sketch follows the figure.
(Figure: ASPP structure; image omitted.)
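
A minimal sketch of the concatenation (in the real network each branch outputs 256 channels, so 5 branches give 1280):

import torch
branches = [torch.randn(2, 256, 48, 48) for _ in range(5)]   # the 5 ASPP branch outputs
res = torch.cat(branches, dim=1)                             # concatenate along the channel axis
print(res.shape)                                             # torch.Size([2, 1280, 48, 48])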

Walkthrough Code

Note: this code is only for studying the network; if you want to train a model, please use the public project on GitHub.
torch version: "1.6.0"; the code runs end to end and can be used directly.
P.S.: In the third line of the code, x=torch.randn(2,1,384,384), the batch size must not be 1, because after the AdaptiveAvgPool inside ASPP the tensor becomes (1, C, 1, 1) and the following BatchNorm raises an error (a minimal reproduction is shown below; torch's own deeplabv3_resnet50 handles this case internally). The input size is set to 384*384 because that is the size used in my project, and it keeps the shapes consistent with all the figures above; readers can change it as needed, as long as it is a multiple of 8. Also note that, to keep the shape printout compact, this script applies ReLU only once at the end of each bottleneck; in the real network a ReLU also follows BN1 and BN2 inside every bottleneck.
If any explanation here is wrong, please leave a comment and I will correct it.
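
A minimal reproduction of the BatchNorm issue mentioned above (this is standard BatchNorm2d behavior, not specific to this network):

import torch
bn = torch.nn.BatchNorm2d(256)
t = torch.randn(1, 256, 1, 1)   # shape of ASPP's pooled branch when batch=1
try:
    bn(t)                       # bn.training is True by default
except ValueError as e:
    print(e)                    # "Expected more than 1 value per channel when training ..."
bn.eval()
print(bn(t).shape)              # torch.Size([1, 256, 1, 1]) -- works in eval mode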

import torch
# from torch import nn
x=torch.randn(2,1,384,384)  #   (batch, channels, H, W)
input_shape=x.shape
print("input_shape", input_shape)
conv2d=torch.nn.Conv2d(1, 64, kernel_size=(7,7), stride=(2,2), padding=(3,3), bias=False)
x=conv2d(x)
print(x.shape)
BN1=torch.nn.BatchNorm2d(64)
x=BN1(x)
print(x.shape)
relu=torch.nn.ReLU()
x=relu(x)
print(x.shape)
maxpool = torch.nn.MaxPool2d(kernel_size=3, stride=2,padding=1,dilation=1,ceil_mode=False)
x=maxpool(x)
print(x.shape)
print("\nlayer 1 - bottleneck 0-------------------")
# downsample
down_conv=torch.nn.Conv2d(64, 256, kernel_size=(1,1), stride=(1,1), bias=False)
identity=down_conv(x)
print("downsample",identity.shape)
down_BN1=torch.nn.BatchNorm2d(256)
identity=down_BN1(identity)
print("downsample",identity.shape)

conv1=torch.nn.Conv2d(64, 64, kernel_size=(1,1), stride=(1,1), bias=False)
x=conv1(x)
print(x.shape)
BN1=torch.nn.BatchNorm2d(64)
x=BN1(x)
print(x.shape)
conv2=torch.nn.Conv2d(64, 64, kernel_size=(3,3), stride=(1,1), padding=(1,1), bias=False)
x=conv2(x)
print(x.shape)
BN2=torch.nn.BatchNorm2d(64)
x=BN2(x)
print(x.shape)
conv3=torch.nn.Conv2d(64, 256, kernel_size=(1,1), stride=(1,1), bias=False)
x=conv3(x)
print(x.shape)
BN3=torch.nn.BatchNorm2d(256)
x=BN3(x)
print(x.shape)
x += identity
relu=torch.nn.ReLU()
x=relu(x)
print(x.shape)

print("\nlayer 1 - bottleneck 1-------------------")
identity=x   # save the block input for the residual connection
conv1=torch.nn.Conv2d(256, 64, kernel_size=(1,1), stride=(1,1), bias=False)
x=conv1(x)
print(x.shape)
BN1=torch.nn.BatchNorm2d(64)
x=BN1(x)
print(x.shape)
conv2=torch.nn.Conv2d(64, 64, kernel_size=(3,3), stride=(1,1), padding=(1,1), bias=False)
x=conv2(x)
print(x.shape)
BN2=torch.nn.BatchNorm2d(64)
x=BN2(x)
print(x.shape)
conv3=torch.nn.Conv2d(64, 256, kernel_size=(1,1), stride=(1,1), bias=False)
x=conv3(x)
print(x.shape)
BN3=torch.nn.BatchNorm2d(256)
x=BN3(x)
print(x.shape)
x+=identity   # residual connection
relu=torch.nn.ReLU()
x=relu(x)
print(x.shape)
print("\nlayer 1 - bottleneck 2-------------------")
identity=x
conv1=torch.nn.Conv2d(256, 64, kernel_size=(1,1), stride=(1,1), bias=False)
x=conv1(x)
print(x.shape)
BN1=torch.nn.BatchNorm2d(64)
x=BN1(x)
print(x.shape)
conv2=torch.nn.Conv2d(64, 64, kernel_size=(3,3), stride=(1,1), padding=(1,1), bias=False)
x=conv2(x)
print(x.shape)
BN2=torch.nn.BatchNorm2d(64)
x=BN2(x)
print(x.shape)
conv3=torch.nn.Conv2d(64, 256, kernel_size=(1,1), stride=(1,1), bias=False)
x=conv3(x)
print(x.shape)
BN3=torch.nn.BatchNorm2d(256)
x=BN3(x)
print(x.shape)
x+=identity
relu=torch.nn.ReLU()
x=relu(x)
print(x.shape)

print("\nlayer 2 - bottleneck 0-------------------")
# downsample
down_conv=torch.nn.Conv2d(256, 512, kernel_size=(1,1), stride=(2,2), bias=False)
identity=down_conv(x)
print("downsample",identity.shape)
down_BN1=torch.nn.BatchNorm2d(512)
identity=down_BN1(identity)
print("downsample",identity.shape)

conv1=torch.nn.Conv2d(256, 128, kernel_size=(1,1), stride=(1,1), bias=False)
x=conv1(x)
print(x.shape)
BN1=torch.nn.BatchNorm2d(128)
x=BN1(x)
print(x.shape)
conv2=torch.nn.Conv2d(128, 128, kernel_size=(3,3), stride=(2,2), padding=(1,1), bias=False)
x=conv2(x)
print(x.shape)
BN2=torch.nn.BatchNorm2d(128)
x=BN2(x)
print(x.shape)
conv3=torch.nn.Conv2d(128, 512, kernel_size=(1,1), stride=(1,1), bias=False)
x=conv3(x)
print(x.shape)
BN3=torch.nn.BatchNorm2d(512)
x=BN3(x)
print(x.shape)
x+=identity
relu=torch.nn.ReLU()
x=relu(x)
print(x.shape)

print("\nlayer 2 - bottleneck 1-------------------")
identity=x
conv1=torch.nn.Conv2d(512, 128, kernel_size=(1,1), stride=(1,1), bias=False)
x=conv1(x)
print(x.shape)
BN1=torch.nn.BatchNorm2d(128)
x=BN1(x)
print(x.shape)
conv2=torch.nn.Conv2d(128, 128, kernel_size=(3,3), stride=(1,1), padding=(1,1), bias=False)
x=conv2(x)
print(x.shape)
BN2=torch.nn.BatchNorm2d(128)
x=BN2(x)
print(x.shape)
conv3=torch.nn.Conv2d(128, 512, kernel_size=(1,1), stride=(1,1), bias=False)
x=conv3(x)
print(x.shape)
BN3=torch.nn.BatchNorm2d(512)
x=BN3(x)
print(x.shape)
x+=identity
relu=torch.nn.ReLU()
x=relu(x)
print(x.shape)

print("\nlayer 2 - bottleneck 2-------------------")
identity=x
conv1=torch.nn.Conv2d(512, 128, kernel_size=(1,1), stride=(1,1), bias=False)
x=conv1(x)
print(x.shape)
BN1=torch.nn.BatchNorm2d(128)
x=BN1(x)
print(x.shape)
conv2=torch.nn.Conv2d(128, 128, kernel_size=(3,3), stride=(1,1), padding=(1,1), bias=False)
x=conv2(x)
print(x.shape)
BN2=torch.nn.BatchNorm2d(128)
x=BN2(x)
print(x.shape)
conv3=torch.nn.Conv2d(128, 512, kernel_size=(1,1), stride=(1,1), bias=False)
x=conv3(x)
print(x.shape)
BN3=torch.nn.BatchNorm2d(512)
x=BN3(x)
print(x.shape)
x+=identity
relu=torch.nn.ReLU()
x=relu(x)
print(x.shape)

print("\nlayer 2 - bottleneck 3-------------------")
identity=x
conv1=torch.nn.Conv2d(512, 128, kernel_size=(1,1), stride=(1,1), bias=False)
x=conv1(x)
print(x.shape)
BN1=torch.nn.BatchNorm2d(128)
x=BN1(x)
print(x.shape)
conv2=torch.nn.Conv2d(128, 128, kernel_size=(3,3), stride=(1,1), padding=(1,1), bias=False)
x=conv2(x)
print(x.shape)
BN2=torch.nn.BatchNorm2d(128)
x=BN2(x)
print(x.shape)
conv3=torch.nn.Conv2d(128, 512, kernel_size=(1,1), stride=(1,1), bias=False)
x=conv3(x)
print(x.shape)
BN3=torch.nn.BatchNorm2d(512)
x=BN3(x)
print(x.shape)
x+=identity
relu=torch.nn.ReLU()
x=relu(x)
print(x.shape)

print("\nlayer 3 - bottleneck 0-------------------")
# downsample
down_conv=torch.nn.Conv2d(512, 1024, kernel_size=(1,1), stride=(1,1), bias=False)
identity=down_conv(x)
print("downsample",identity.shape)
down_BN1=torch.nn.BatchNorm2d(1024)
identity=down_BN1(identity)
print("downsample",identity.shape)

conv1=torch.nn.Conv2d(512, 256, kernel_size=(1,1), stride=(1,1), bias=False)
x=conv1(x)
print(x.shape)
BN1=torch.nn.BatchNorm2d(256)
x=BN1(x)
print(x.shape)
conv2=torch.nn.Conv2d(256, 256, kernel_size=(3,3), stride=(1,1), padding=(1,1), bias=False)
x=conv2(x)
print(x.shape)
BN2=torch.nn.BatchNorm2d(256)
x=BN2(x)
print(x.shape)
conv3=torch.nn.Conv2d(256, 1024, kernel_size=(1,1), stride=(1,1), bias=False)
x=conv3(x)
print(x.shape)
BN3=torch.nn.BatchNorm2d(1024)
x=BN3(x)
print(x.shape)
x+=identity
relu=torch.nn.ReLU()
x=relu(x)
print(x.shape)

print("\nlayer 3 - bottleneck 1-------------------")
identity=x
conv1=torch.nn.Conv2d(1024, 256, kernel_size=(1,1), stride=(1,1), bias=False)
x=conv1(x)
print(x.shape)
BN1=torch.nn.BatchNorm2d(256)
x=BN1(x)
print(x.shape)
conv2=torch.nn.Conv2d(256, 256, kernel_size=(3,3), stride=(1,1), padding=(2,2), dilation=(2,2), bias=False)
x=conv2(x)
print(x.shape)
BN2=torch.nn.BatchNorm2d(256)
x=BN2(x)
print(x.shape)
conv3=torch.nn.Conv2d(256, 1024, kernel_size=(1,1), stride=(1,1), bias=False)
x=conv3(x)
print(x.shape)
BN3=torch.nn.BatchNorm2d(1024)
x=BN3(x)
print(x.shape)
x+=identity
relu=torch.nn.ReLU()
x=relu(x)
print(x.shape)

print("\nlayer 3 - bottleneck 2-------------------")
identity=x
conv1=torch.nn.Conv2d(1024, 256, kernel_size=(1,1), stride=(1,1), bias=False)
x=conv1(x)
print(x.shape)
BN1=torch.nn.BatchNorm2d(256)
x=BN1(x)
print(x.shape)
conv2=torch.nn.Conv2d(256, 256, kernel_size=(3,3), stride=(1,1), padding=(2,2), dilation=(2,2), bias=False)
x=conv2(x)
print(x.shape)
BN2=torch.nn.BatchNorm2d(256)
x=BN2(x)
print(x.shape)
conv3=torch.nn.Conv2d(256, 1024, kernel_size=(1,1), stride=(1,1), bias=False)
x=conv3(x)
print(x.shape)
BN3=torch.nn.BatchNorm2d(1024)
x=BN3(x)
print(x.shape)
x+=identity
relu=torch.nn.ReLU()
x=relu(x)
print(x.shape)

print("\nlayer 3 - bottleneck 3-------------------")
identity=x
conv1=torch.nn.Conv2d(1024, 256, kernel_size=(1,1), stride=(1,1), bias=False)
x=conv1(x)
print(x.shape)
BN1=torch.nn.BatchNorm2d(256)
x=BN1(x)
print(x.shape)
conv2=torch.nn.Conv2d(256, 256, kernel_size=(3,3), stride=(1,1), padding=(2,2), dilation=(2,2), bias=False)
x=conv2(x)
print(x.shape)
BN2=torch.nn.BatchNorm2d(256)
x=BN2(x)
print(x.shape)
conv3=torch.nn.Conv2d(256, 1024, kernel_size=(1,1), stride=(1,1), bias=False)
x=conv3(x)
print(x.shape)
BN3=torch.nn.BatchNorm2d(1024)
x=BN3(x)
print(x.shape)
x+=identity
relu=torch.nn.ReLU()
x=relu(x)
print(x.shape)

print("\nlayer 3 - bottleneck 4-------------------")
identity=x
conv1=torch.nn.Conv2d(1024, 256, kernel_size=(1,1), stride=(1,1), bias=False)
x=conv1(x)
print(x.shape)
BN1=torch.nn.BatchNorm2d(256)
x=BN1(x)
print(x.shape)
conv2=torch.nn.Conv2d(256, 256, kernel_size=(3,3), stride=(1,1), padding=(2,2), dilation=(2,2), bias=False)
x=conv2(x)
print(x.shape)
BN2=torch.nn.BatchNorm2d(256)
x=BN2(x)
print(x.shape)
conv3=torch.nn.Conv2d(256, 1024, kernel_size=(1,1), stride=(1,1), bias=False)
x=conv3(x)
print(x.shape)
BN3=torch.nn.BatchNorm2d(1024)
x=BN3(x)
print(x.shape)
x+=identity
relu=torch.nn.ReLU()
x=relu(x)
print(x.shape)

print("\nlayer 3 - bottleneck 5-------------------")
identity=x
conv1=torch.nn.Conv2d(1024, 256, kernel_size=(1,1), stride=(1,1), bias=False)
x=conv1(x)
print(x.shape)
BN1=torch.nn.BatchNorm2d(256)
x=BN1(x)
print(x.shape)
conv2=torch.nn.Conv2d(256, 256, kernel_size=(3,3), stride=(1,1), padding=(2,2), dilation=(2,2), bias=False)
x=conv2(x)
print(x.shape)
BN2=torch.nn.BatchNorm2d(256)
x=BN2(x)
print(x.shape)
conv3=torch.nn.Conv2d(256, 1024, kernel_size=(1,1), stride=(1,1), bias=False)
x=conv3(x)
print(x.shape)
BN3=torch.nn.BatchNorm2d(1024)
x=BN3(x)
print(x.shape)
x+=identity
relu=torch.nn.ReLU()
x=relu(x)
aux=x            # saved as the input to the aux_classifier branch
print("aux.shape: ", aux.shape)
print(x.shape)

print("\nlayer 4 - bottleneck 0-------------------")
# downsample
down_conv=torch.nn.Conv2d(1024, 2048, kernel_size=(1,1), stride=(1,1), bias=False)
identity=down_conv(x)
print("downsample",identity.shape)
down_BN1=torch.nn.BatchNorm2d(2048)
identity=down_BN1(identity)
print("downsample",identity.shape)

conv1=torch.nn.Conv2d(1024, 512, kernel_size=(1,1), stride=(1,1), bias=False)
x=conv1(x)
print(x.shape)
BN1=torch.nn.BatchNorm2d(512)
x=BN1(x)
print(x.shape)
conv2=torch.nn.Conv2d(512, 512, kernel_size=(3,3), stride=(1,1), padding=(2,2), dilation=(2,2), bias=False)
x=conv2(x)
print(x.shape)
BN2=torch.nn.BatchNorm2d(512)
x=BN2(x)
print(x.shape)
conv3=torch.nn.Conv2d(512, 2048, kernel_size=(1,1), stride=(1,1), bias=False)
x=conv3(x)
print(x.shape)
BN3=torch.nn.BatchNorm2d(2048)
x=BN3(x)
print(x.shape)
x+=identity
relu=torch.nn.ReLU()
x=relu(x)
print(x.shape)


print("\nlayer 4 - bottleneck 1-------------------")
identity=x
conv1=torch.nn.Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
x=conv1(x)
print(x.shape)
BN1=torch.nn.BatchNorm2d(512)
x=BN1(x)
print(x.shape)
conv2=torch.nn.Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(4, 4), dilation=(4, 4), bias=False)
x=conv2(x)
print(x.shape)
BN2=torch.nn.BatchNorm2d(512)
x=BN2(x)
print(x.shape)
conv3=torch.nn.Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
x=conv3(x)
print(x.shape)
BN3=torch.nn.BatchNorm2d(2048)
x=BN3(x)
print(x.shape)
x+=identity
relu=torch.nn.ReLU()
x=relu(x)
print(x.shape)

print("\nlayer 4 - bottleneck 2-------------------")
identity=x
conv1=torch.nn.Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
x=conv1(x)
print(x.shape)
BN1=torch.nn.BatchNorm2d(512)
x=BN1(x)
print(x.shape)
conv2=torch.nn.Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(4, 4), dilation=(4, 4), bias=False)
x=conv2(x)
print(x.shape)
BN2=torch.nn.BatchNorm2d(512)
x=BN2(x)
print(x.shape)
conv3=torch.nn.Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
x=conv3(x)
print(x.shape)
BN3=torch.nn.BatchNorm2d(2048)
x=BN3(x)
print(x.shape)
x+=identity
relu=torch.nn.ReLU()
x=relu(x)
print(x.shape)

print("\nclassifier - ASPP sequential 0-------------------")
res=[]
conv2d=torch.nn.Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
ASPP0=conv2d(x)
BN=torch.nn.BatchNorm2d(256)
ASPP0=BN(ASPP0)
ASPP0=relu(ASPP0)
res.append(ASPP0)
print(ASPP0.shape)

print("\nclassifier - ASPP ASPPConv1-------------------")
conv2d=torch.nn.Conv2d(2048, 256, kernel_size=(3, 3), stride=(1, 1), padding=(12,12), dilation=(12,12), bias=False)
ASPP1=conv2d(x)
BN=torch.nn.BatchNorm2d(256)
ASPP1=BN(ASPP1)
ASPP1=relu(ASPP1)
res.append(ASPP1)
print(ASPP1.shape)

print("\nclassifier - ASPP ASPPConv2-------------------")
conv2d=torch.nn.Conv2d(2048, 256, kernel_size=(3, 3), stride=(1, 1), padding=(24,24), dilation=(24,24), bias=False)
ASPP2=conv2d(x)
BN=torch.nn.BatchNorm2d(256)
ASPP2=BN(ASPP2)
ASPP2=relu(ASPP2)
res.append(ASPP2)
print(ASPP2.shape)

print("\nclassifier - ASPP ASPPConv2-------------------")
conv2d=torch.nn.Conv2d(2048, 256, kernel_size=(3, 3), stride=(1, 1), padding=(36,36), dilation=(36,36), bias=False)
ASPP3=conv2d(x)
BN=torch.nn.BatchNorm2d(256)
ASPP3=BN(ASPP3)
ASPP3=relu(ASPP3)
res.append(ASPP3)
print(ASPP3.shape)

print("\nclassifier - ASPP ASPPpooling-------------------")
adaptiveavgpooling=torch.nn.AdaptiveAvgPool2d(1)                                                           # downsampling step: output is (B, N, 1, 1)
ASPP4=adaptiveavgpooling(x)
conv2d=torch.nn.Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
ASPP4=conv2d(ASPP4)
BN=torch.nn.BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
ASPP4=BN(ASPP4)           # note: if the batch size N is 1, ASPP4 is (1, C, 1, 1) and this line raises an error, which is why this script uses B=2; torchvision's deeplabv3 handles that case internally
ASPP4=relu(ASPP4)
ASPP4=torch.nn.functional.interpolate(ASPP4, size=ASPP3.shape[-2:], mode="bilinear", align_corners=False)   # upsampling step: back to ASPP3's shape, (1,1) --> (48,48)
res.append(ASPP4)
print(ASPP4.shape)

print("\nclassifier - ASPP result after torch.cat()-------------------")
res=torch.cat(res, dim=1)
print("res: ", res.shape)

print("\nclassifier -project-------------------")
conv2d=torch.nn.Conv2d(1280, 256, kernel_size=(1,1), stride=(1,1), bias=False)
out=conv2d(res)
BN=torch.nn.BatchNorm2d(256)
out=BN(out)
relu=torch.nn.ReLU()
out=relu(out)
dropout=torch.nn.Dropout(p=0.5, inplace=False)
out=dropout(out)

conv2d=torch.nn.Conv2d(256, 256, kernel_size=(3,3), stride=(1,1), padding=(1,1), bias=False)
out=conv2d(out)
BN=torch.nn.BatchNorm2d(256)
out=BN(out)
relu=torch.nn.ReLU()
out=relu(out)
conv2d=torch.nn.Conv2d(256, 1, kernel_size=(1,1), stride=(1,1))   # 1 output channel: this project does binary segmentation
out=conv2d(out)
out=torch.nn.functional.interpolate(out, size=input_shape[-2:], mode="bilinear", align_corners=False)   # upsample, (48,48) --> (384,384)
print("out.shape: ", out.shape)


print("\naux_classifier --------------------")
conv2d=torch.nn.Conv2d(1024, 256, kernel_size=(3,3), stride=(1,1), padding=(1,1), bias=False)
aux=conv2d(aux)
BN=torch.nn.BatchNorm2d(256)
aux=BN(aux)
relu=torch.nn.ReLU()
aux=relu(aux)
print(aux.shape)
dropout=torch.nn.Dropout(p=0.1, inplace=False)
aux=dropout(aux)
print(aux.shape)
conv2d=torch.nn.Conv2d(256, 1, kernel_size=(1,1), stride=(1,1))
aux=conv2d(aux)
aux=torch.nn.functional.interpolate(aux, size=input_shape[-2:], mode="bilinear", align_corners=False)   # upsample, (48,48) --> (384,384)
print("aux.shape: ", aux.shape)

print("\nsigmoid --------------------")
logit=(aux+out)/2.0      # this project averages the main output and the aux output
print("logit.shape", logit.shape)
sigmoid=torch.nn.Sigmoid()
prob=sigmoid(logit)
print("prob.shape", prob.shape)

Output:

input_shape torch.Size([2, 1, 384, 384])
torch.Size([2, 64, 192, 192])
torch.Size([2, 64, 192, 192])
torch.Size([2, 64, 192, 192])
torch.Size([2, 64, 96, 96])

layer 1 - bottleneck 0-------------------
downsample torch.Size([2, 256, 96, 96])
downsample torch.Size([2, 256, 96, 96])
torch.Size([2, 64, 96, 96])
torch.Size([2, 64, 96, 96])
torch.Size([2, 64, 96, 96])
torch.Size([2, 64, 96, 96])
torch.Size([2, 256, 96, 96])
torch.Size([2, 256, 96, 96])
torch.Size([2, 256, 96, 96])

layer 1 - bottleneck 1-------------------
torch.Size([2, 64, 96, 96])
torch.Size([2, 64, 96, 96])
torch.Size([2, 64, 96, 96])
torch.Size([2, 64, 96, 96])
torch.Size([2, 256, 96, 96])
torch.Size([2, 256, 96, 96])
torch.Size([2, 256, 96, 96])

layer 1 - bottleneck 2-------------------
torch.Size([2, 64, 96, 96])
torch.Size([2, 64, 96, 96])
torch.Size([2, 64, 96, 96])
torch.Size([2, 64, 96, 96])
torch.Size([2, 256, 96, 96])
torch.Size([2, 256, 96, 96])
torch.Size([2, 256, 96, 96])

layer 2 - bottleneck 0-------------------
downsample torch.Size([2, 512, 48, 48])
downsample torch.Size([2, 512, 48, 48])
torch.Size([2, 128, 96, 96])
torch.Size([2, 128, 96, 96])
torch.Size([2, 128, 48, 48])
torch.Size([2, 128, 48, 48])
torch.Size([2, 512, 48, 48])
torch.Size([2, 512, 48, 48])
torch.Size([2, 512, 48, 48])

layer 2 - bottleneck 1-------------------
torch.Size([2, 128, 48, 48])
torch.Size([2, 128, 48, 48])
torch.Size([2, 128, 48, 48])
torch.Size([2, 128, 48, 48])
torch.Size([2, 512, 48, 48])
torch.Size([2, 512, 48, 48])
torch.Size([2, 512, 48, 48])

layer 2 - bottleneck 2-------------------
torch.Size([2, 128, 48, 48])
torch.Size([2, 128, 48, 48])
torch.Size([2, 128, 48, 48])
torch.Size([2, 128, 48, 48])
torch.Size([2, 512, 48, 48])
torch.Size([2, 512, 48, 48])
torch.Size([2, 512, 48, 48])

layer 2 - bottleneck 3-------------------
torch.Size([2, 128, 48, 48])
torch.Size([2, 128, 48, 48])
torch.Size([2, 128, 48, 48])
torch.Size([2, 128, 48, 48])
torch.Size([2, 512, 48, 48])
torch.Size([2, 512, 48, 48])
torch.Size([2, 512, 48, 48])

layer 3 - bottleneck 0-------------------
downsample torch.Size([2, 1024, 48, 48])
downsample torch.Size([2, 1024, 48, 48])
torch.Size([2, 256, 48, 48])
torch.Size([2, 256, 48, 48])
torch.Size([2, 256, 48, 48])
torch.Size([2, 256, 48, 48])
torch.Size([2, 1024, 48, 48])
torch.Size([2, 1024, 48, 48])
torch.Size([2, 1024, 48, 48])

layer 3 - bottleneck 1-------------------
torch.Size([2, 256, 48, 48])
torch.Size([2, 256, 48, 48])
torch.Size([2, 256, 48, 48])
torch.Size([2, 256, 48, 48])
torch.Size([2, 1024, 48, 48])
torch.Size([2, 1024, 48, 48])
torch.Size([2, 1024, 48, 48])

layer 3 - bottleneck 2-------------------
torch.Size([2, 256, 48, 48])
torch.Size([2, 256, 48, 48])
torch.Size([2, 256, 48, 48])
torch.Size([2, 256, 48, 48])
torch.Size([2, 1024, 48, 48])
torch.Size([2, 1024, 48, 48])
torch.Size([2, 1024, 48, 48])

layer 3 - bottleneck 3-------------------
torch.Size([2, 256, 48, 48])
torch.Size([2, 256, 48, 48])
torch.Size([2, 256, 48, 48])
torch.Size([2, 256, 48, 48])
torch.Size([2, 1024, 48, 48])
torch.Size([2, 1024, 48, 48])
torch.Size([2, 1024, 48, 48])

layer 3 - bottleneck 4-------------------
torch.Size([2, 256, 48, 48])
torch.Size([2, 256, 48, 48])
torch.Size([2, 256, 48, 48])
torch.Size([2, 256, 48, 48])
torch.Size([2, 1024, 48, 48])
torch.Size([2, 1024, 48, 48])
torch.Size([2, 1024, 48, 48])

layer 3 - bottleneck 5-------------------
torch.Size([2, 256, 48, 48])
torch.Size([2, 256, 48, 48])
torch.Size([2, 256, 48, 48])
torch.Size([2, 256, 48, 48])
torch.Size([2, 1024, 48, 48])
torch.Size([2, 1024, 48, 48])
aux.shape:  torch.Size([2, 1024, 48, 48])
torch.Size([2, 1024, 48, 48])

layer 4 - bottleneck 0-------------------
downsample torch.Size([2, 2048, 48, 48])
downsample torch.Size([2, 2048, 48, 48])
torch.Size([2, 512, 48, 48])
torch.Size([2, 512, 48, 48])
torch.Size([2, 512, 48, 48])
torch.Size([2, 512, 48, 48])
torch.Size([2, 2048, 48, 48])
torch.Size([2, 2048, 48, 48])
torch.Size([2, 2048, 48, 48])

layer 4 - bottleneck 1-------------------
torch.Size([2, 512, 48, 48])
torch.Size([2, 512, 48, 48])
torch.Size([2, 512, 48, 48])
torch.Size([2, 512, 48, 48])
torch.Size([2, 2048, 48, 48])
torch.Size([2, 2048, 48, 48])
torch.Size([2, 2048, 48, 48])

layer 4 - bottleneck 2-------------------
torch.Size([2, 512, 48, 48])
torch.Size([2, 512, 48, 48])
torch.Size([2, 512, 48, 48])
torch.Size([2, 512, 48, 48])
torch.Size([2, 2048, 48, 48])
torch.Size([2, 2048, 48, 48])
torch.Size([2, 2048, 48, 48])

classifier - ASPP sequential 0-------------------
torch.Size([2, 256, 48, 48])

classifier - ASPP ASPPConv1-------------------
torch.Size([2, 256, 48, 48])

classifier - ASPP ASPPConv2-------------------
torch.Size([2, 256, 48, 48])

classifier - ASPP ASPPConv3-------------------
torch.Size([2, 256, 48, 48])

classifier - ASPP ASPPpooling-------------------
torch.Size([2, 256, 48, 48])

classifier - ASPP result after torch.cat()-------------------
res:  torch.Size([2, 1280, 48, 48])

classifier -project-------------------
out.shape:  torch.Size([2, 1, 384, 384])

aux_classifier --------------------
torch.Size([2, 256, 48, 48])
torch.Size([2, 256, 48, 48])
aux.shape:  torch.Size([2, 1, 384, 384])

sigmoid --------------------
logit.shape torch.Size([2, 1, 384, 384])
prob.shape torch.Size([2, 1, 384, 384])
