Computation of x_rgb = self.rgb_path.layer1(x_rgb)

Input size:

torch.Size([4, 64, 8, 56, 56])

Computation:

x_rgb = self.rgb_path.layer1(x_rgb)

 

Print the network structure (a dummy-tensor shape check follows the printout):

print(a.rgb_path.layer1)

Sequential(
  (0): Bottleneck3d(
    (conv1): ConvModule(
      (conv): Conv3d(64, 64, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
      (bn): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (activate): ReLU(inplace=True)
    )
    (conv2): ConvModule(
      (conv): Conv3d(64, 64, kernel_size=(1, 3, 3), stride=(1, 1, 1), padding=(0, 1, 1), bias=False)
      (bn): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (activate): ReLU(inplace=True)
    )
    (conv3): ConvModule(
      (conv): Conv3d(64, 256, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
      (bn): BatchNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (downsample): ConvModule(
      (conv): Conv3d(64, 256, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
      (bn): BatchNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (relu): ReLU(inplace=True)
  )
  (1): Bottleneck3d(
    (conv1): ConvModule(
      (conv): Conv3d(256, 64, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
      (bn): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (activate): ReLU(inplace=True)
    )
    (conv2): ConvModule(
      (conv): Conv3d(64, 64, kernel_size=(1, 3, 3), stride=(1, 1, 1), padding=(0, 1, 1), bias=False)
      (bn): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (activate): ReLU(inplace=True)
    )
    (conv3): ConvModule(
      (conv): Conv3d(64, 256, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
      (bn): BatchNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (relu): ReLU(inplace=True)
  )
  (2): Bottleneck3d(
    (conv1): ConvModule(
      (conv): Conv3d(256, 64, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
      (bn): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (activate): ReLU(inplace=True)
    )
    (conv2): ConvModule(
      (conv): Conv3d(64, 64, kernel_size=(1, 3, 3), stride=(1, 1, 1), padding=(0, 1, 1), bias=False)
      (bn): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (activate): ReLU(inplace=True)
    )
    (conv3): ConvModule(
      (conv): Conv3d(64, 256, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
      (bn): BatchNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (relu): ReLU(inplace=True)
  )
)
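
To double-check the shape without any hand calculation, a random tensor of the stated input size can be pushed through layer1. A minimal sketch, assuming a is the model object already used in the print call above (so this snippet is not self-contained by itself):

import torch

# Dummy input with the stated size (N, C, T, H, W) = (4, 64, 8, 56, 56)
x_rgb = torch.randn(4, 64, 8, 56, 56)

with torch.no_grad():
    out = a.rgb_path.layer1(x_rgb)

print(out.shape)  # expected: torch.Size([4, 256, 8, 56, 56])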

  1. conv1 layer:

    • Input channels: 64
    • Output channels: 64
    • Kernel size: (1, 1, 1)
    • Stride: (1, 1, 1)
    • Padding: 0
    • Output size calculation (re-checked with a short script after this list):

      D_out = (D_in - 1) // 1 + 1 = 8
      H_out = (H_in - 1) // 1 + 1 = 56
      W_out = (W_in - 1) // 1 + 1 = 56
      Output size: (4, 64, 8, 56, 56)

  2. conv2 layer:

    • Input channels: 64
    • Output channels: 64
    • Kernel size: (1, 3, 3)
    • Stride: (1, 1, 1)
    • Padding: (0, 1, 1)
    • Output size calculation:

      D_out = (D_in - 1) // 1 + 1 = 8
      H_out = (H_in + 2 * 1 - 3) // 1 + 1 = 56
      W_out = (W_in + 2 * 1 - 3) // 1 + 1 = 56
      Output size: (4, 64, 8, 56, 56)

  3. conv3 layer:

    • Input channels: 64
    • Output channels: 256
    • Kernel size: (1, 1, 1)
    • Stride: (1, 1, 1)
    • Padding: 0
    • Output size calculation:

      D_out = (D_in - 1) // 1 + 1 = 8
      H_out = (H_in - 1) // 1 + 1 = 56
      W_out = (W_in - 1) // 1 + 1 = 56
      Output size: (4, 256, 8, 56, 56)

  4. downsample layer (applied to the block input in parallel with the conv1 -> conv2 -> conv3 branch, so the residual branch also reaches 256 channels):

    • Input channels: 64
    • Output channels: 256
    • Kernel size: (1, 1, 1)
    • Stride: (1, 1, 1)
    • Padding: 0
    • Output size calculation:

      D_out = (D_in - 1) // 1 + 1 = 8
      H_out = (H_in - 1) // 1 + 1 = 56
      W_out = (W_in - 1) // 1 + 1 = 56
      Output size: (4, 256, 8, 56, 56)
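
The per-dimension calculations above all use the standard Conv3d size formula, out = (in + 2*pad - kernel) // stride + 1. A minimal sketch that re-checks the four layers (parameters taken from the printout; the helper name conv3d_out_size is invented here):

def conv3d_out_size(in_size, kernel, stride, padding):
    # Per-dimension Conv3d output size: (in + 2*pad - k) // s + 1
    return tuple((i + 2 * p - k) // s + 1
                 for i, k, s, p in zip(in_size, kernel, stride, padding))

dthw = (8, 56, 56)
print(conv3d_out_size(dthw, (1, 1, 1), (1, 1, 1), (0, 0, 0)))  # conv1:      (8, 56, 56)
print(conv3d_out_size(dthw, (1, 3, 3), (1, 1, 1), (0, 1, 1)))  # conv2:      (8, 56, 56)
print(conv3d_out_size(dthw, (1, 1, 1), (1, 1, 1), (0, 0, 0)))  # conv3:      (8, 56, 56)
print(conv3d_out_size(dthw, (1, 1, 1), (1, 1, 1), (0, 0, 0)))  # downsample: (8, 56, 56)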
      

Inside block (0), the conv3 output and the downsample output are added (the residual connection) and passed through the final ReLU, giving (4, 256, 8, 56, 56). Blocks (1) and (2) map 256 -> 64 -> 64 -> 256 channels with stride 1 everywhere, so the shape is unchanged. The output size after layer1 is therefore (4, 256, 8, 56, 56).

The hand calculation agrees with the result computed by the program. For reference, block (0) can also be rebuilt from plain PyTorch modules, as sketched below.
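
A minimal re-creation of Bottleneck3d block (0) from plain torch.nn modules, just to illustrate the residual addition (a sketch following the printed structure, not the original ConvModule implementation):

import torch
import torch.nn as nn

conv1 = nn.Sequential(nn.Conv3d(64, 64, (1, 1, 1), bias=False),
                      nn.BatchNorm3d(64), nn.ReLU(inplace=True))
conv2 = nn.Sequential(nn.Conv3d(64, 64, (1, 3, 3), padding=(0, 1, 1), bias=False),
                      nn.BatchNorm3d(64), nn.ReLU(inplace=True))
conv3 = nn.Sequential(nn.Conv3d(64, 256, (1, 1, 1), bias=False),
                      nn.BatchNorm3d(256))
downsample = nn.Sequential(nn.Conv3d(64, 256, (1, 1, 1), bias=False),
                           nn.BatchNorm3d(256))
relu = nn.ReLU(inplace=True)

x = torch.randn(4, 64, 8, 56, 56)
out = relu(conv3(conv2(conv1(x))) + downsample(x))  # main branch + residual branch
print(out.shape)  # torch.Size([4, 256, 8, 56, 56])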
