PaddlePaddle Paper Reproduction: ResNet18 with the High-Level API

I am currently working through "Learning Vision Transformer from Scratch" (《从零开始学视觉Transformer》), and the first lesson's assignment is a warm-up: reproduce ResNet18. In the course, the instructor did not build the network exactly in the form given in the paper, so here I will reproduce the same network following the paper's form and explain its structure along the way. This also exercises my own network-reproduction skills and lays a foundation for research work.

ResNet Paper Walkthrough

For this, Mu Li's video is the obvious choice; you can watch it yourself:

ResNet论文逐段精读【论文精读】 (ResNet paper read section by section, from the Paper Reading series)

ResNet Architecture Diagram

ResNet Architecture Analysis
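The diagram itself is not reproduced here, but the stage-by-stage resolutions it shows can be sanity-checked with the standard conv/pool output-size formula, floor((h + 2p - k)/s) + 1. Below is a minimal sketch (the function name `conv_out` is mine, not from the paper) for the paper's ImageNet ResNet-18: 224x224 input, 7x7 stride-2 stem, 3x3 stride-2 max pool, and stride-2 first blocks in conv3_x through conv5_x:

```python
def conv_out(h, k, s, p):
    """Output spatial size of a conv/pool layer: floor((h + 2p - k) / s) + 1."""
    return (h + 2 * p - k) // s + 1

h = 224  # ImageNet input size used in the paper
h = conv_out(h, k=7, s=2, p=3)   # conv1: 7x7, stride 2 -> 112
h = conv_out(h, k=3, s=2, p=1)   # 3x3 max pool, stride 2 -> 56
for stride in (1, 2, 2, 2):      # conv2_x .. conv5_x: stride of each stage's first block
    h = conv_out(h, k=3, s=stride, p=1)
print(h)  # 7
```

The final 7x7 map matches the paper's Table 1, and the global average pool then collapses it to 1x1 before the fully connected layer.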

PaddlePaddle Implementation

'''
Description: 
version: 
Author: Irving.Gao
Date: 2021-11-24 14:32:07
LastEditors: Irving.Gao
LastEditTime: 2021-11-24 19:37:58
'''
import paddle
import paddle.nn as nn

# paddle.set_device('cpu')

class Block(nn.Layer):
    """Basic residual block of ResNet-18/34: two 3x3 convs plus a shortcut."""
    def __init__(self, in_dim, out_dim, stride, downsample=None):
        super(Block, self).__init__()
        self.in_dim = in_dim
        self.out_dim = out_dim
        self.stride = stride
        self.downsample = downsample

        # Only the first conv carries the stride; the second always uses stride 1,
        # otherwise the main path and the shortcut would have mismatched sizes.
        self.conv1 = nn.Conv2D(in_channels=in_dim, out_channels=out_dim, kernel_size=3, stride=stride, padding=1)
        self.bn1 = nn.BatchNorm2D(num_features=out_dim)
        self.relu = nn.ReLU()
        self.conv2 = nn.Conv2D(in_channels=out_dim, out_channels=out_dim, kernel_size=3, stride=1, padding=1)
        self.bn2 = nn.BatchNorm2D(num_features=out_dim)

    def forward(self, x):
        h = x  # keep the input for the shortcut branch
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.conv2(x)
        x = self.bn2(x)
        # Two cases for the shortcut: when the input and output shapes already
        # match, add the input directly; when they differ, project the input
        # with a 1x1 conv so the dimensions line up before the addition.
        if self.downsample is not None:
            identity = self.downsample(h)
        else:
            identity = h
        x = x + identity
        x = self.relu(x)
        return x

class ResNet18(nn.Layer):
    def __init__(self, in_dim=64, num_classes=10):
        super(ResNet18, self).__init__()
        self.in_dim = in_dim
        self.num_classes = num_classes
        # The paper's stem is a 7x7 conv with stride 2 (for 224x224 ImageNet
        # inputs); a 3x3 conv is kept here because we feed small 32x32 images.
        self.conv1 = nn.Conv2D(in_channels=3,
                               out_channels=in_dim,
                               kernel_size=3,
                               stride=2,
                               padding=1)
        self.bn1 = nn.BatchNorm2D(num_features=in_dim)
        self.relu = nn.ReLU()

        self.maxpool = nn.MaxPool2D(kernel_size=3, stride=2, padding=1)

        # Following the paper, conv2_x keeps the resolution and each of
        # conv3_x..conv5_x halves it with a stride-2 first block.
        self.layer2 = self._make_layer(out_dim=64, n_blocks=2, stride=1)
        self.layer3 = self._make_layer(out_dim=128, n_blocks=2, stride=2)
        self.layer4 = self._make_layer(out_dim=256, n_blocks=2, stride=2)
        self.layer5 = self._make_layer(out_dim=512, n_blocks=2, stride=2)
        self.avgpool = nn.AdaptiveAvgPool2D(output_size=1)  # global average pool to 1x1

        self.classifier = nn.Linear(in_features=512, out_features=num_classes)

    def _make_layer(self, out_dim, n_blocks, stride):
        layers = []
        # The shortcut needs a 1x1 projection whenever the first block changes
        # the spatial size (stride != 1) or the channel count (out_dim != self.in_dim).
        if stride != 1 or out_dim != self.in_dim:
            downsample = nn.Sequential(
                nn.Conv2D(in_channels=self.in_dim, out_channels=out_dim, kernel_size=1, stride=stride),
                nn.BatchNorm2D(num_features=out_dim)
            )
        else:
            downsample = None

        # Only the first block of a stage downsamples; the remaining blocks
        # map out_dim -> out_dim at stride 1.
        layers.append(Block(self.in_dim, out_dim, stride, downsample))
        for _ in range(n_blocks - 1):
            layers.append(Block(in_dim=out_dim, out_dim=out_dim, stride=1, downsample=None))

        self.in_dim = out_dim
        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.layer5(x)
        x = self.avgpool(x)
        x = x.flatten(1)
        x = self.classifier(x)
        return x

def main():
    model = ResNet18()
    print(model)
    x = paddle.randn([2, 3, 32, 32])
    out = model(x)
    print(out)
    print(out.shape)

if __name__ == "__main__":
    main()
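As an extra check on the structure, the parameter count of the paper's ImageNet variant of ResNet-18 (7x7 stem, 1000 classes; not the 32x32-adapted network above, which uses a 3x3 stem and 10 classes) can be tallied by hand from the layer shapes. The helper names below are my own; the counting convention (bias-free convs plus the BatchNorm scale/shift pairs) follows the common reference implementations:

```python
def basic_block(cin, cout, downsample):
    """Parameters of one basic block: two 3x3 convs (no bias) + two BN layers."""
    n = 9 * cin * cout + 2 * cout      # conv1 weights + bn1 scale/shift
    n += 9 * cout * cout + 2 * cout    # conv2 weights + bn2 scale/shift
    if downsample:                     # 1x1 projection conv + BN on the shortcut
        n += cin * cout + 2 * cout
    return n

def resnet18_params(num_classes=1000):
    total = 7 * 7 * 3 * 64 + 2 * 64                # 7x7 stem conv + bn1
    for cin, cout in [(64, 64), (64, 128), (128, 256), (256, 512)]:
        total += basic_block(cin, cout, downsample=(cin != cout))  # first block of the stage
        total += basic_block(cout, cout, downsample=False)         # second block
    total += 512 * num_classes + num_classes       # final fully connected layer
    return total

print(resnet18_params())  # 11689512
```

The total of 11,689,512 trainable parameters matches the widely quoted figure for ResNet-18, which is a useful sanity check that no stage was wired up with the wrong channel counts.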