Adding and Removing Modules Inside a PyTorch nn.Module

1. nn.Conv2d

A personal understanding: in torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding, bias), the module first initializes its weight and bias tensors according to the configured in_channels and out_channels. But if you look closely at the function that is ultimately called to perform the convolution, it simply applies F.conv2d to the input and the weight. So the dimension alignment that actually matters is carried by the weight and bias tensors themselves; in_channels and out_channels only determine how those tensors are allocated.

    def _conv_forward(self, input: Tensor, weight: Tensor, bias: Optional[Tensor]):
        if self.padding_mode != 'zeros':
            # Non-zero padding modes are applied explicitly via F.pad, after
            # which F.conv2d itself runs with zero padding (_pair(0)).
            return F.conv2d(F.pad(input, self._reversed_padding_repeated_twice, mode=self.padding_mode),
                            weight, bias, self.stride,
                            _pair(0), self.dilation, self.groups)
        return F.conv2d(input, weight, bias, self.stride,
                        self.padding, self.dilation, self.groups)

    def forward(self, input: Tensor) -> Tensor:
        # The computation depends only on the tensors passed along here.
        return self._conv_forward(input, self.weight, self.bias)
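
Since this is standard PyTorch behavior, it is easy to verify with a small experiment (the shapes below are arbitrary): the module's in_channels/out_channels only size the weight it allocates, while F.conv2d derives its output shape entirely from the weight tensor actually passed in.

import torch
import torch.nn.functional as F
from torch import nn

conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1, bias=True)
x = torch.randn(1, 3, 32, 32)
print(conv.weight.shape)  # torch.Size([8, 3, 3, 3]) -> (out_channels, in_channels, kH, kW)
print(conv(x).shape)      # torch.Size([1, 8, 32, 32])

# F.conv2d only sees the tensors it is handed: give it a different weight and
# the output channel count follows that weight, not the module's out_channels.
w = torch.randn(16, 3, 3, 3)
b = torch.randn(16)
print(F.conv2d(x, w, b, stride=1, padding=1).shape)  # torch.Size([1, 16, 32, 32])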

2. Adding and removing submodules inside an nn.Module

This part comes up after the collapse (re-parameterization) step and is used to prune and replace structures inside a model. self.__delattr__ is simply the standard Python mechanism for deleting an attribute from an instance; on an nn.Module it removes the corresponding registered submodule.

import torch
from torch import nn

class RBRepSR_three(nn.Module):
    """Residual block without BN, without relu
    It has a style of:
        -----Conv3----Conv3----Conv3-----+-
         |_______________________________|
    Args:
        num_feat (int): Channel number of intermediate features.
            Default: 64.
        res_scale (float): Residual scale. Default: 1.
        pytorch_init (bool): If set to True, use pytorch default init,
            otherwise, use default_init_weights. Default: False.
    """

    def __init__(self, num_feat=64, res_scale=1, pytorch_init=False, deploy_flag=False):
        super(RBRepSR_three, self).__init__()
        self.res_scale = res_scale
        self.deploy = deploy_flag
        self.num_feat = num_feat

        if self.deploy:
            # Deploy mode: the whole block is a single fused 7x7 conv.
            self.rbr_reparam = nn.Conv2d(num_feat, num_feat, 7, 1, 7//2, bias=True)
        else:
            # Pad once up front; the three 3x3 convs below use padding=0, so the
            # stack is spatially equivalent to a single 7x7 conv.
            self.padding = nn.ZeroPad2d(3)
            self.conv3x3_1 = nn.Conv2d(num_feat, num_feat, 3, 1, 0, bias=True)
            self.conv3x3_2 = nn.Conv2d(num_feat, num_feat, 3, 1, 0, bias=True)
            self.conv3x3_3 = nn.Conv2d(num_feat, num_feat, 3, 1, 0, bias=True)
            #self.relu = nn.ReLU(inplace=True)

            if not pytorch_init:
                # default_init_weights is an external initializer (e.g. BasicSR's
                # basicsr.archs.arch_util.default_init_weights).
                default_init_weights([self.conv3x3_1, self.conv3x3_2, self.conv3x3_3], 0.1)

    def forward(self, x):
        if hasattr(self, 'rbr_reparam'):
            return self.rbr_reparam(x)
        else:
            identity = x
            out = self.conv3x3_3(self.conv3x3_2(self.conv3x3_1(self.padding(x))))
            return identity + out * self.res_scale

    def get_equivalent_kernel_bias(self):
        # Fuse the three stacked 3x3 convs into one 7x7 conv (3+3-1=5, then 5+3-1=7).
        # reparameter_33 is not defined in this post; a sketch is given below.
        temp = reparameter_33(self.conv3x3_1, self.conv3x3_2)
        fused = reparameter_33(temp, self.conv3x3_3)
        # 7x7 identity kernel (center tap = 1 per channel), used to fold the
        # residual connection into the fused conv.
        kernel_identity = torch.zeros((self.num_feat, self.num_feat, 7, 7))
        for i in range(self.num_feat):
            kernel_identity[i, i, 3, 3] = 1
        self.rbr_reparam = nn.Conv2d(self.num_feat, self.num_feat, 7, 1, 7//2, bias=True)
        self.rbr_reparam.weight.data = fused.weight.data * self.res_scale + kernel_identity
        self.rbr_reparam.bias.data = fused.bias.data * self.res_scale

    def switch_to_deploy(self):
        if hasattr(self, 'rbr_reparam'):
            return
        self.get_equivalent_kernel_bias()
        # Freeze parameters and drop the training-time branches; afterwards only
        # rbr_reparam remains registered, so forward() takes the deploy path.
        for para in self.parameters():
            para.detach_()
        self.__delattr__('padding')
        self.__delattr__('conv3x3_1')
        self.__delattr__('conv3x3_2')
        self.__delattr__('conv3x3_3')
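
Note that reparameter_33 is an external helper that the post does not define. For completeness, here is a minimal, hypothetical sketch of what such a fusion could look like (my own reconstruction, not the author's code): two stacked convs with padding=0 are exactly equivalent to one conv of kernel size k_a + k_b - 1, whose weight is the (true) convolution of the two weight tensors.

import torch
import torch.nn.functional as F
from torch import nn

def reparameter_33(conv_a, conv_b):
    """Fuse conv_b(conv_a(x)) into a single conv; assumes both use padding=0."""
    wa, ba = conv_a.weight.data, conv_a.bias.data  # (mid, in, ka, ka)
    wb, bb = conv_b.weight.data, conv_b.bias.data  # (out, mid, kb, kb)
    ka = wa.shape[-1]
    # True convolution of the kernels: flip conv_a's kernel spatially, swap its
    # in/out channel dims, then cross-correlate with conv_b's kernel, padded so
    # the fused kernel grows to ka + kb - 1.
    fused_w = F.conv2d(wb, torch.flip(wa.permute(1, 0, 2, 3), dims=[2, 3]),
                       padding=ka - 1)
    # conv_a's bias reaches the output as a constant map through conv_b.
    fused_b = wb.sum(dim=(2, 3)) @ ba + bb
    fused = nn.Conv2d(fused_w.shape[1], fused_w.shape[0], fused_w.shape[-1],
                      1, 0, bias=True)
    fused.weight.data = fused_w
    fused.bias.data = fused_b
    return fused

With such a helper in place, the deploy switch can be sanity-checked end to end; the training path and the fused path should agree up to floating-point error:

blk = RBRepSR_three(num_feat=4, pytorch_init=True)  # pytorch_init=True skips default_init_weights
x = torch.randn(1, 4, 16, 16)
with torch.no_grad():
    y_train = blk(x)
    blk.switch_to_deploy()
    y_deploy = blk(x)
print(torch.allclose(y_train, y_deploy, atol=1e-5))  # True
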
In PyTorch, Module is the base class of all neural network components; models, layers, activation functions, and loss functions can all be seen as extensions of Module. Consequently, modules() and named_modules() traverse the model recursively, from shallow to deep, iterating over every custom block and every layer inside each block, all treated uniformly as modules. children() is more direct: it yields only the model's immediate child modules, without recursing deeper.

Note that model.modules() and model.named_modules() both return iterators over the submodules; the only difference is that named_modules() additionally yields each submodule's qualified name. In short, Module is the base class of PyTorch's neural network modules; modules() and named_modules() make it easy to walk a model level by level, while children() returns just the direct children. [1][2][3]
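
A small, standard example makes the difference concrete (the model here is arbitrary):

import torch
from torch import nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.Sequential(nn.Conv2d(8, 8, 3, padding=1), nn.ReLU()),
)

# children(): direct submodules only.
print([type(m).__name__ for m in model.children()])
# ['Conv2d', 'Sequential']

# modules(): recursive traversal, starting with the model itself.
print([type(m).__name__ for m in model.modules()])
# ['Sequential', 'Conv2d', 'Sequential', 'Conv2d', 'ReLU']

# named_modules(): same traversal, paired with qualified names ('', '0', '1', '1.0', '1.1').
for name, m in model.named_modules():
    print(name or '(root)', type(m).__name__)

This recursive traversal is also exactly how a re-parameterized model like the one above gets switched block by block for deployment:

for m in model.modules():
    if hasattr(m, 'switch_to_deploy'):
        m.switch_to_deploy()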

References:
[1][2][3] pytorch教程之nn.Module类详解——使用Module类来自定义模型, https://blog.csdn.net/qq_27825451/article/details/90550890