[TorchSummary + TorchSnooper] Debugging a network visualized with TorchSummary, plus a first real-world use of TorchSnooper

Problem description:

The errors encountered in this debugging session were caused mainly by mismatched data placement/types: concretely, mismatches between cpu and cuda, and between torch.cuda.FloatTensor and torch.FloatTensor.

The session uses a third-party library called torchsnooper, which monitors the type, shape, and device of every variable in a deep-network program; this makes such mismatches easy to locate.

GitHub link: TorchSnooper
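For reference, a minimal usage sketch (the function and tensor below are made up for illustration): decorating a function with @torchsnooper.snoop() logs, for every executed line, each tensor variable's shape, dtype, and device.

import torch
import torchsnooper

@torchsnooper.snoop()   # traces the function line by line, like pysnooper but tensor-aware
def double_sum(x):
    y = x * 2           # the log reports y's shape, dtype, and device here
    return y.sum()

double_sum(torch.ones(3))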

The error messages hit during this session include:

RuntimeError: Expected object of backend CPU but got backend CUDA for argument #2 'weight'

RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
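Both messages describe the same situation: a layer whose weights live on one device receiving an input from the other. A minimal repro sketch (the layer sizes are arbitrary, it assumes a CUDA-capable machine, and the exact wording of the error varies across PyTorch versions):

import torch
import torch.nn as nn

conv = nn.Conv3d(1, 32, kernel_size=3)           # weights are created on the CPU
x = torch.randn(2, 1, 8, 8, 8, device="cuda")    # input is created on the GPU
conv(x)                                          # raises: input/weight device mismatch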
p.s.

Because I initially misunderstood the paper's network architecture, my network structure diverged considerably from the one described in the paper.

The correct network architecture is as follows:
[Figure: the correct network architecture]
The corresponding final source code:

import torch.nn as nn
from torch.nn import init
import torch.nn.functional as F
import torch
import os
from torchsummary import summary
import torchsnooper

patch_size = 17
batch_size = 20
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(batch_size,1,103,patch_size,patch_size,device=device)
# ----------- Added by me: probe layer dimensions with the network built -----------
# @torchsnooper.snoop()

class Net(nn.Module):
    @staticmethod
    def weight_init(m):
        if isinstance(m, nn.Linear) or isinstance(m, nn.Conv3d):
            init.xavier_uniform_(m.weight.data)
            init.constant_(m.bias.data, 0)

    def _get_final_flattened_size(self):
        with torch.no_grad():
            x = torch.zeros((1, 1, 103,
                             patch_size, patch_size),device=device)
            x = self.pool1(self.conv1(x))
            x = self.pool2(self.conv2(x))
            x = self.conv3(x)
            _, t, c, w, h = x.size()
        return t * c * w * h

    def __init__(self):
        super(Net,self).__init__()
        self.conv1 = nn.Conv3d(1, 32, (32, 4, 4), padding=(1, 1, 1)).cuda()
        self.conv2 = nn.Conv3d(32, 2*32, (32, 5, 5), padding=(1, 1, 1)).cuda()
        self.conv3 = nn.Conv3d(2*32, 4*32, (32, 3, 3), padding=(1, 0, 0)).cuda()
        self.pool1 = nn.MaxPool3d((1,2,2), stride = (1,2,2)).cuda()
        self.pool2 = nn.MaxPool3d((1,2,2), stride = (1,2,2)).cuda()

        self.features_size = self._get_final_flattened_size()

        self.fc = nn.Linear(self.features_size, 10).cuda()

        self.apply(self.weight_init)

    def forward(self,x):
        x = F.relu(self.conv1(x))
        x = self.pool1(x)
        x = F.relu(self.conv2(x))
        x = self.pool2(x)
        x = F.relu(self.conv3(x))
        x = x.view(-1, self.features_size)
        x = self.fc(x)
        return x

net = Net()
net.to(device)

print(net.to(device))

# os.system('pause')

summary(net.to(device),(1, 103, patch_size, patch_size),device=device)
# ----------- Added by me: probe layer dimensions with the network built -----------

Here nn.MaxPool3d((1,2,2), stride=(1,2,2)) sets both the kernel size and the stride of the 3-D max pooling to 1 along the Depth axis (of Depth × Width × Height) and to 2 along Width and Height, so only the W and H dimensions are halved while D is left unchanged.
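A quick check of that claim (a minimal sketch; the input shape matches conv1's output in the summary below):

import torch
import torch.nn as nn

pool = nn.MaxPool3d((1, 2, 2), stride=(1, 2, 2))
x = torch.randn(1, 32, 74, 16, 16)   # (N, C, D, H, W), matching conv1's output
print(pool(x).shape)                 # torch.Size([1, 32, 74, 8, 8]): D kept, H and W halved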

The corresponding run output:

E:\Anaconda\python.exe C:/Users/73416/PycharmProjects/HSIproject/test.py
Net(
  (conv1): Conv3d(1, 32, kernel_size=(32, 4, 4), stride=(1, 1, 1), padding=(1, 1, 1))
  (conv2): Conv3d(32, 64, kernel_size=(32, 5, 5), stride=(1, 1, 1), padding=(1, 1, 1))
  (conv3): Conv3d(64, 128, kernel_size=(32, 3, 3), stride=(1, 1, 1), padding=(1, 0, 0))
  (pool1): MaxPool3d(kernel_size=(1, 2, 2), stride=(1, 2, 2), padding=0, dilation=1, ceil_mode=False)
  (pool2): MaxPool3d(kernel_size=(1, 2, 2), stride=(1, 2, 2), padding=0, dilation=1, ceil_mode=False)
  (fc): Linear(in_features=2048, out_features=10, bias=True)
)
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv3d-1       [-1, 32, 74, 16, 16]          16,416
         MaxPool3d-2         [-1, 32, 74, 8, 8]               0
            Conv3d-3         [-1, 64, 45, 6, 6]       1,638,464
         MaxPool3d-4         [-1, 64, 45, 3, 3]               0
            Conv3d-5        [-1, 128, 16, 1, 1]       2,359,424
            Linear-6                   [-1, 10]          20,490
================================================================
Total params: 4,034,794
Trainable params: 4,034,794
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.11
Forward/backward pass size (MB): 6.79
Params size (MB): 15.39
Estimated Total Size (MB): 22.29
----------------------------------------------------------------

Process finished with exit code 0

Debugging log:

Debug 1:
Run output:
E:\Anaconda\python.exe C:/Users/73416/PycharmProjects/HSIproject/test.py
torch.Size([10, 1, 103, 17, 17])
Source path:... C:/Users/73416/PycharmProjects/HSIproject/test.py
Starting var:.. self = REPR FAILED
Starting var:.. __class__ = <class '__main__.Net'>
22:19:30.891404 call        63     def __init__(self):
22:19:30.892400 line        64         super(Net, self).__init__()
Modified var:.. self = Net()
22:19:30.892400 line        65         self.conv1 = nn.Conv3d(1, 32, (32, 4, 4), padding=(1, 1, 1))
Modified var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4, 4), stride=(1, 1, 1), padding=(1, 1, 1)))
22:19:30.893395 line        66         self.conv2 = nn.Conv3d(32, 2*32, (32, 5, 5), padding=(1, 1, 1))
Modified var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...(32, 5, 5), stride=(1, 1, 1), padding=(1, 1, 1)))
22:19:30.904366 line        67         self.conv3 = nn.Conv3d(2*32, 4*32, (4, 3, 3), padding=(1, 0, 0))
Modified var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...=(4, 3, 3), stride=(1, 1, 1), padding=(1, 0, 0)))
22:19:30.906358 line        68         self.pool1 = nn.MaxPool3d(2, stride = 2)
Modified var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...tride=2, padding=0, dilation=1, ceil_mode=False))
22:19:30.907359 line        69         self.pool2 = nn.MaxPool3d(2, stride = 2)
22:19:30.907359 return      69         self.pool2 = nn.MaxPool3d(2, stride = 2)
Return value:.. None
Starting var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...tride=2, padding=0, dilation=1, ceil_mode=False))
Starting var:.. x = tensor<(2, 1, 103, 17, 17), float32, cuda:0>
22:19:33.521693 call        71     def forward(self,x):
22:19:33.523689 line        72         x = F.relu(self.conv1(x))
conv1: torch.Size([2, 32, 74, 16, 16])
Modified var:.. x = tensor<(2, 32, 74, 16, 16), float32, cuda:0, grad>
22:19:34.297627 line        73         print('conv1:', x.size())
22:19:34.312655 line        74         x = self.pool1(x)
pool1: torch.Size([2, 32, 37, 8, 8])
Modified var:.. x = tensor<(2, 32, 37, 8, 8), float32, cuda:0, grad>
22:19:34.315645 line        75         print('pool1:', x.size())
22:19:34.317638 line        76         x = F.relu(self.conv2(x))
Modified var:.. x = tensor<(2, 64, 8, 6, 6), float32, cuda:0, grad>
22:19:34.318636 line        77         print('conv2:', x.size())
conv2: torch.Size([2, 64, 8, 6, 6])
pool2: torch.Size([2, 64, 4, 3, 3])
22:19:34.322625 line        78         x = self.pool2(x)
Modified var:.. x = tensor<(2, 64, 4, 3, 3), float32, cuda:0, grad>
22:19:34.323624 line        79         print('pool2:', x.size())
22:19:34.324619 line        80         x = F.relu(self.conv3(x))
Modified var:.. x = tensor<(2, 128, 3, 1, 1), float32, cuda:0, grad>
22:19:34.325617 line        81         print('conv3:', x.size())
conv3: torch.Size([2, 128, 3, 1, 1])
22:19:34.326614 line        82         features_size = self._get_final_flattened_size()
    Starting var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...tride=2, padding=0, dilation=1, ceil_mode=False))
    22:19:34.327611 call        53     def _get_final_flattened_size(self):
    22:19:34.327611 line        54         with torch.no_grad():
    22:19:34.327611 line        55             x = torch.zeros((batch_size, 1, 103,
    22:19:34.327611 line        56                              patch_size, patch_size))
    New var:....... x = tensor<(10, 1, 103, 17, 17), float32, cpu>
    22:19:34.328577 line        57             x = self.pool1(self.conv1(x))
    22:19:34.332566 exception   57             x = self.pool1(self.conv1(x))
    RuntimeError: Expected object of backend CPU but got backend CUDA for argument #2 'weight'
    Call ended by exception
22:19:34.337552 exception   82         features_size = self._get_final_flattened_size()
RuntimeError: Expected object of backend CPU but got backend CUDA for argument #2 'weight'
Call ended by exception
Traceback (most recent call last):
  File "C:/Users/73416/PycharmProjects/HSIproject/test.py", line 92, in <module>
    summary(net,(1, 103, patch_size, patch_size),batch_size,device='cuda')
  File "E:\Anaconda\lib\site-packages\torchsummary\torchsummary.py", line 72, in summary
    model(*x)
  File "E:\Anaconda\lib\site-packages\torch\nn\modules\module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "E:\Anaconda\lib\site-packages\pysnooper\tracer.py", line 256, in simple_wrapper
    return function(*args, **kwargs)
  File "C:/Users/73416/PycharmProjects/HSIproject/test.py", line 82, in forward
    features_size = self._get_final_flattened_size()
  File "E:\Anaconda\lib\site-packages\pysnooper\tracer.py", line 256, in simple_wrapper
    return function(*args, **kwargs)
  File "C:/Users/73416/PycharmProjects/HSIproject/test.py", line 57, in _get_final_flattened_size
    x = self.pool1(self.conv1(x))
  File "E:\Anaconda\lib\site-packages\torch\nn\modules\module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "E:\Anaconda\lib\site-packages\torch\nn\modules\conv.py", line 448, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: Expected object of backend CPU but got backend CUDA for argument #2 'weight'

Process finished with exit code 1
Analysis:

The reported error is:

RuntimeError: Expected object of backend CPU but got backend CUDA for argument #2 'weight'

This shows the crash comes from a CPU/CUDA device mismatch.

Monitoring the program's variables through @torchsnooper.snoop() reveals that exactly one variable lives on the cpu, while every other variable lives on cuda. Searching the trace above, the offending spot is line 56 of the program:

22:19:34.327611 line        55             x = torch.zeros((batch_size, 1, 103,
22:19:34.327611 line        56                              patch_size, patch_size))
New var:....... x = tensor<(10, 1, 103, 17, 17), float32, cpu>

So the fix is simply to create that tensor on cuda at that spot:

x = torch.zeros((batch_size, 1, 103, patch_size, patch_size), device = 'cuda')
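Hard-coding device='cuda' fixes this run, but it would break again on a CPU-only machine. A more portable variant (my own sketch, not part of the original fix) reads the device off the already-constructed conv weights, so the dummy input always lands wherever the layers live:

def _get_final_flattened_size(self):
    # Probe input follows the layers' device instead of hard-coding it.
    dev = self.conv1.weight.device
    with torch.no_grad():
        x = torch.zeros((1, 1, 103, patch_size, patch_size), device=dev)
        x = self.pool1(self.conv1(x))
        x = self.pool2(self.conv2(x))
        x = self.conv3(x)
        _, t, c, w, h = x.size()
    return t * c * w * h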
Debug 2:
Run output:
E:\Anaconda\python.exe C:/Users/73416/PycharmProjects/HSIproject/test.py
Source path:... C:/Users/73416/PycharmProjects/HSIproject/test.py
torch.Size([10, 1, 103, 17, 17])
Starting var:.. self = REPR FAILED
Starting var:.. __class__ = <class '__main__.Net'>
22:28:30.503370 call        63     def __init__(self):
22:28:30.503370 line        64         super(Net, self).__init__()
Modified var:.. self = Net()
22:28:30.503370 line        65         self.conv1 = nn.Conv3d(1, 32, (32, 4, 4), padding=(1, 1, 1))
Modified var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4, 4), stride=(1, 1, 1), padding=(1, 1, 1)))
22:28:30.504392 line        66         self.conv2 = nn.Conv3d(32, 2*32, (32, 5, 5), padding=(1, 1, 1))
Modified var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...(32, 5, 5), stride=(1, 1, 1), padding=(1, 1, 1)))
22:28:30.516367 line        67         self.conv3 = nn.Conv3d(2*32, 4*32, (4, 3, 3), padding=(1, 0, 0))
Modified var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...=(4, 3, 3), stride=(1, 1, 1), padding=(1, 0, 0)))
22:28:30.518328 line        68         self.pool1 = nn.MaxPool3d(2, stride = 2)
Modified var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...tride=2, padding=0, dilation=1, ceil_mode=False))
22:28:30.518328 line        69         self.pool2 = nn.MaxPool3d(2, stride = 2)
22:28:30.518328 line        70         self.features_size = self._get_final_flattened_size()
    Starting var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...tride=2, padding=0, dilation=1, ceil_mode=False))
    22:28:30.519359 call        53     def _get_final_flattened_size(self):
    22:28:30.519359 line        54         with torch.no_grad():
    22:28:30.519359 line        55             x = torch.zeros((batch_size, 1, 103,
    22:28:30.519359 line        56                              patch_size, patch_size),device='cuda')
    New var:....... x = tensor<(10, 1, 103, 17, 17), float32, cuda:0>
    22:28:30.520324 line        57             x = self.pool1(self.conv1(x))
    22:28:30.523339 exception   57             x = self.pool1(self.conv1(x))
    RuntimeError: Input type (torch.cuda.FloatTensor...eight type (torch.FloatTensor) should be the same
    Call ended by exception
22:28:30.526306 exception   70         self.features_size = self._get_final_flattened_size()
RuntimeError: Input type (torch.cuda.FloatTensor...eight type (torch.FloatTensor) should be the same
Call ended by exception
Traceback (most recent call last):
  File "C:/Users/73416/PycharmProjects/HSIproject/test.py", line 92, in <module>
    net = Net().to("cuda")
  File "E:\Anaconda\lib\site-packages\pysnooper\tracer.py", line 256, in simple_wrapper
    return function(*args, **kwargs)
  File "C:/Users/73416/PycharmProjects/HSIproject/test.py", line 70, in __init__
    self.features_size = self._get_final_flattened_size()
  File "E:\Anaconda\lib\site-packages\pysnooper\tracer.py", line 256, in simple_wrapper
    return function(*args, **kwargs)
  File "C:/Users/73416/PycharmProjects/HSIproject/test.py", line 57, in _get_final_flattened_size
    x = self.pool1(self.conv1(x))
  File "E:\Anaconda\lib\site-packages\torch\nn\modules\module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "E:\Anaconda\lib\site-packages\torch\nn\modules\conv.py", line 448, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same

Process finished with exit code 1
Analysis:

The reported error is:

RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same

The root cause is that the input x and the model net() live in different places: one on the CPU, the other on the GPU.

There are two ways to fix it:

  • Put both the input x and the model net() on the CPU.
  • Put both the input x and the model net() on the GPU.

This time I set everything to the CPU (because after setting it to the GPU it still errored, with the same message).

The modified source code is in the "Final code" sections below; a minimal sketch of the device-alignment principle follows.
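Either way, the principle is the same: the model's parameters and its inputs must sit on one device before any forward pass runs. A sketch of the GPU option, reusing the Net class from the final code (it assumes Net builds cleanly on the chosen device):

device = "cuda" if torch.cuda.is_available() else "cpu"
net = Net().to(device)   # .to() moves every registered layer's weights
x = x.to(device)         # move the input alongside the weights
out = net(x)             # devices now agree, so no backend mismatch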

Final code 1:

Setting device = 'cpu':

This version sets the device of both the model and the input to 'cpu'.

import torch.nn as nn
from torch.nn import init
import torch.nn.functional as F
import torch
import os
from torchsummary import summary
import torchsnooper

patch_size = 17
batch_size = 20
x = torch.randn(batch_size,1,103,patch_size,patch_size,device='cpu')
# ----------- Added by me: probe layer dimensions with the network built -----------
@torchsnooper.snoop()

class Net(nn.Module):

    def _get_final_flattened_size(self):
        with torch.no_grad():
            x = torch.zeros((batch_size, 1, 103,
                             patch_size, patch_size),device='cpu')
            x = self.pool1(self.conv1(x))
            x = self.pool2(self.conv2(x))
            x = self.conv3(x)
            _, t, c, w, h = x.size()
        return t * c * w * h

    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv3d(1, 32, (32, 4, 4), padding=(1, 1, 1))
        self.conv2 = nn.Conv3d(32, 2*32, (32, 5, 5), padding=(1, 1, 1))
        self.conv3 = nn.Conv3d(2*32, 4*32, (4, 3, 3), padding=(1, 0, 0))
        self.pool1 = nn.MaxPool3d(2, stride = 2)
        self.pool2 = nn.MaxPool3d(2, stride = 2)
        self.features_size = self._get_final_flattened_size()
        self.fc = nn.Linear(self.features_size, 10)


    def forward(self,x):
        x = F.relu(self.conv1(x))
        x = self.pool1(x)
        x = F.relu(self.conv2(x))
        x = self.pool2(x)
        x = F.relu(self.conv3(x))
        x = x.view(-1, self.features_size)
        x = self.fc(x)
        return x

    def print(self):
        print(self.features_size)

net = Net()
summary(net,(1, 103, patch_size, patch_size),device='cpu')
net.print()
# ----------- Added by me: probe layer dimensions with the network built -----------
# # torch.Size([10, 1, 103, 17, 17])
# # conv1: torch.Size([10, 32, 74, 16, 16])
# # pool1: torch.Size([10, 32, 37, 8, 8])
# # conv2: torch.Size([10, 64, 8, 6, 6])
# # pool2: torch.Size([10, 64, 4, 3, 3])
# # conv3: torch.Size([10, 128, 3, 1, 1])
# # features_size: 384
# # final_size: torch.Size([10, 10])
Setting device = 'cuda':

This version appends .cuda() to each layer inside the Net class's __init__, placing the network's layers on the GPU.

The earlier error happened because the network (or rather, its layers) sat on the CPU instead of the GPU.

Admittedly, this is the first time I have ever seen this style (2019-09-20).
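Why not just call net.to(device) after construction, as usual? Because _get_final_flattened_size() runs a dummy CUDA forward pass inside __init__ itself, before any external .to() call could move the weights, so the layers must already be GPU-resident at that moment. Note that on a single module, .cuda() and .to("cuda") are interchangeable; a small sketch (assuming CUDA is available):

import torch
import torch.nn as nn

layer_a = nn.Conv3d(1, 32, (32, 4, 4)).cuda()        # move this layer's weights now
layer_b = nn.Conv3d(1, 32, (32, 4, 4)).to("cuda")    # same effect via .to()
print(layer_a.weight.device, layer_b.weight.device)  # cuda:0 cuda:0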

import torch.nn as nn
from torch.nn import init
import torch.nn.functional as F
import torch
import os
from torchsummary import summary
import torchsnooper

patch_size = 17
batch_size = 20
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(batch_size,1,103,patch_size,patch_size,device=device)
# ----------- Added by me: probe layer dimensions with the network built -----------
# @torchsnooper.snoop()

class Net(nn.Module):
    @staticmethod
    def weight_init(m):
        if isinstance(m, nn.Linear) or isinstance(m, nn.Conv3d):
            init.xavier_uniform_(m.weight.data)
            init.constant_(m.bias.data, 0)

    def _get_final_flattened_size(self):
        with torch.no_grad():
            x = torch.zeros((1, 1, 103,
                             patch_size, patch_size),device=device)
            x = self.pool1(self.conv1(x))
            x = self.pool2(self.conv2(x))
            x = self.conv3(x)
            _, t, c, w, h = x.size()
        return t * c * w * h

    def __init__(self):
        super(Net,self).__init__()
        self.conv1 = nn.Conv3d(1, 32, (32, 4, 4), padding=(1, 1, 1)).cuda()
        self.conv2 = nn.Conv3d(32, 2*32, (32, 5, 5), padding=(1, 1, 1)).cuda()
        self.conv3 = nn.Conv3d(2*32, 4*32, (4, 3, 3), padding=(1, 0, 0)).cuda()
        self.pool1 = nn.MaxPool3d(2, stride = 2).cuda()
        self.pool2 = nn.MaxPool3d(2, stride = 2).cuda()

        self.features_size = self._get_final_flattened_size()

        self.fc = nn.Linear(self.features_size, 10).cuda()

        self.apply(self.weight_init)

    def forward(self,x):
        x = F.relu(self.conv1(x))
        x = self.pool1(x)
        x = F.relu(self.conv2(x))
        x = self.pool2(x)
        x = F.relu(self.conv3(x))
        x = x.view(-1, self.features_size)
        x = self.fc(x)
        return x

net = Net()
net.to(device)

print(net.to(device))

# os.system('pause')

summary(net.to(device),(1, 103, patch_size, patch_size),device=device)
# ----------- Added by me: probe layer dimensions with the network built -----------
# # torch.Size([10, 1, 103, 17, 17])
# # conv1: torch.Size([10, 32, 74, 16, 16])
# # pool1: torch.Size([10, 32, 37, 8, 8])
# # conv2: torch.Size([10, 64, 8, 6, 6])
# # pool2: torch.Size([10, 64, 4, 3, 3])
# # conv3: torch.Size([10, 128, 3, 1, 1])
# # features_size: 384
# # final_size: torch.Size([10, 10])

One point worth stressing: this is the first time in all my experience (as of 2019-09-20) that I have seen .cuda() appended to the layers right inside the network class's initializer.

Run output:

Setting device = 'cpu':
E:\Anaconda\python.exe C:/Users/73416/PycharmProjects/HSIproject/test.py
Done!
Source path:... C:/Users/73416/PycharmProjects/HSIproject/test.py
Starting var:.. self = REPR FAILED
Starting var:.. __class__ = <class '__main__.Net'>
14:08:20.721270 call        62     def __init__(self):
14:08:20.721270 line        63         super(Net, self).__init__()
Modified var:.. self = Net()
14:08:20.721270 line        64         self.conv1 = nn.Conv3d(1, 32, (32, 4, 4), padding=(1, 1, 1))
Modified var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4, 4), stride=(1, 1, 1), padding=(1, 1, 1)))
14:08:20.767246 line        65         self.conv2 = nn.Conv3d(32, 2*32, (32, 5, 5), padding=(1, 1, 1))
Modified var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...(32, 5, 5), stride=(1, 1, 1), padding=(1, 1, 1)))
14:08:20.797150 line        66         self.conv3 = nn.Conv3d(2*32, 4*32, (4, 3, 3), padding=(1, 0, 0))
Modified var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...=(4, 3, 3), stride=(1, 1, 1), padding=(1, 0, 0)))
14:08:20.799145 line        67         self.pool1 = nn.MaxPool3d(2, stride = 2)
Modified var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...tride=2, padding=0, dilation=1, ceil_mode=False))
14:08:20.799145 line        68         self.pool2 = nn.MaxPool3d(2, stride = 2)
14:08:20.799145 line        69         self.features_size = self._get_final_flattened_size()
    Starting var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...tride=2, padding=0, dilation=1, ceil_mode=False))
    14:08:20.799145 call        52     def _get_final_flattened_size(self):
    14:08:20.799145 line        53         with torch.no_grad():
    14:08:20.800142 line        54             x = torch.zeros((batch_size, 1, 103,
    14:08:20.800142 line        55                              patch_size, patch_size),device='cpu')
    New var:....... x = tensor<(20, 1, 103, 17, 17), float32, cpu>
    14:08:20.811084 line        56             x = self.pool1(self.conv1(x))
    Modified var:.. x = tensor<(20, 32, 37, 8, 8), float32, cpu>
    14:08:21.515198 line        57             x = self.pool2(self.conv2(x))
    Modified var:.. x = tensor<(20, 64, 4, 3, 3), float32, cpu>
    14:08:21.967987 line        58             x = self.conv3(x)
    Modified var:.. x = tensor<(20, 128, 3, 1, 1), float32, cpu>
    14:08:21.985939 line        59             _, t, c, w, h = x.size()
    New var:....... _ = 20
    New var:....... t = 128
    New var:....... c = 3
    New var:....... w = 1
    New var:....... h = 1
    14:08:21.986969 line        60         return t * c * w * h
    14:08:21.986969 return      60         return t * c * w * h
    Return value:.. 384
14:08:21.986969 line        70         self.fc = nn.Linear(self.features_size, 10)
Modified var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...ear(in_features=384, out_features=10, bias=True))
14:08:21.987934 return      70         self.fc = nn.Linear(self.features_size, 10)
Return value:.. None
Starting var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...ear(in_features=384, out_features=10, bias=True))
Starting var:.. x = tensor<(2, 1, 103, 17, 17), float32, cpu>
14:08:22.012867 call        73     def forward(self,x):
14:08:22.013864 line        74         x = F.relu(self.conv1(x))
Modified var:.. x = tensor<(2, 32, 74, 16, 16), float32, cpu, grad>
14:08:22.148504 line        75         x = self.pool1(x)
Modified var:.. x = tensor<(2, 32, 37, 8, 8), float32, cpu, grad>
14:08:22.165459 line        76         x = F.relu(self.conv2(x))
Modified var:.. x = tensor<(2, 64, 8, 6, 6), float32, cpu, grad>
14:08:22.208344 line        77         x = self.pool2(x)
Modified var:.. x = tensor<(2, 64, 4, 3, 3), float32, cpu, grad>
14:08:22.209340 line        78         x = F.relu(self.conv3(x))
Modified var:.. x = tensor<(2, 128, 3, 1, 1), float32, cpu, grad>
14:08:22.210337 line        79         x = x.view(-1, self.features_size)
Modified var:.. x = tensor<(2, 384), float32, cpu, grad>
14:08:22.221308 line        80         x = self.fc(x)
Modified var:.. x = tensor<(2, 10), float32, cpu, grad>
14:08:22.238264 line        81         return x
14:08:22.239262 return      81         return x
Return value:.. tensor<(2, 10), float32, cpu, grad>
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv3d-1       [-1, 32, 74, 16, 16]          16,416
         MaxPool3d-2         [-1, 32, 37, 8, 8]               0
            Conv3d-3          [-1, 64, 8, 6, 6]       1,638,464
         MaxPool3d-4          [-1, 64, 4, 3, 3]               0
            Conv3d-5         [-1, 128, 3, 1, 1]         295,040
            Linear-6                   [-1, 10]           3,850
================================================================
Total params: 1,953,770
Trainable params: 1,953,770
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.11
Forward/backward pass size (MB): 5.36
Params size (MB): 7.45
Estimated Total Size (MB): 12.93
----------------------------------------------------------------
384
Starting var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...ear(in_features=384, out_features=10, bias=True))
14:08:22.299100 call        83     def print(self):
14:08:22.300098 line        84         print(self.features_size)
14:08:22.300098 return      84         print(self.features_size)
Return value:.. None

Process finished with exit code 0

Setting device = 'cuda':

E:\Anaconda\python.exe C:/Users/73416/PycharmProjects/HSIproject/test.py
Net(
  (conv1): Conv3d(1, 32, kernel_size=(32, 4, 4), stride=(1, 1, 1), padding=(1, 1, 1))
  (conv2): Conv3d(32, 64, kernel_size=(32, 5, 5), stride=(1, 1, 1), padding=(1, 1, 1))
  (conv3): Conv3d(64, 128, kernel_size=(4, 3, 3), stride=(1, 1, 1), padding=(1, 0, 0))
  (pool1): MaxPool3d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (pool2): MaxPool3d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (fc): Linear(in_features=384, out_features=10, bias=True)
)
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv3d-1       [-1, 32, 74, 16, 16]          16,416
         MaxPool3d-2         [-1, 32, 37, 8, 8]               0
            Conv3d-3          [-1, 64, 8, 6, 6]       1,638,464
         MaxPool3d-4          [-1, 64, 4, 3, 3]               0
            Conv3d-5         [-1, 128, 3, 1, 1]         295,040
            Linear-6                   [-1, 10]           3,850
================================================================
Total params: 1,953,770
Trainable params: 1,953,770
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.11
Forward/backward pass size (MB): 5.36
Params size (MB): 7.45
Estimated Total Size (MB): 12.93
----------------------------------------------------------------

Process finished with exit code 0

Code backup:

import torch.nn as nn
from torch.nn import init
import torch.nn.functional as F
import torch
import os
from torchsummary import summary
import torchsnooper

patch_size = 17
batch_size = 20
x = torch.randn(batch_size,1,103,patch_size,patch_size,device='cpu')
# ----------- Added by me: probe layer dimensions without building a network -----------
# conv1 = nn.Conv3d(1, 32, (32, 4, 4), padding=(1, 1, 1))
# conv2 = nn.Conv3d(32, 2*32, (32, 5, 5), padding=(1, 1, 1))
# conv3 = nn.Conv3d(2*32, 4*32, (4, 3, 3), padding=(1, 0, 0))
# pool1 = nn.MaxPool3d(2, stride = 2)
# pool2 = nn.MaxPool3d(2, stride = 2)
#
# def _get_final_flattened_size():
#         with torch.no_grad():
#             x = torch.zeros((batch_size, 1, 103,
#                              patch_size, patch_size))
#             x = pool1(conv1(x))
#             x = pool2(conv2(x))
#             x = conv3(x)
#             _, t, c, w, h = x.size()
#         return t * c * w * h
#
# x = F.relu(conv1(x))
# print('conv1:', x.size())
# x = pool1(x)
# print('pool1:', x.size())
# x = F.relu(conv2(x))
# print('conv2:', x.size())
# x = pool2(x)
# print('pool2:', x.size())
# x = F.relu(conv3(x))
# print('conv3:', x.size())
# features_size = _get_final_flattened_size()
# print('features_size:', features_size)
# fc = nn.Linear(features_size, 10)
# x = x.view(-1, features_size)
# x = fc(x)
# print('final_size:', x.size())
print('Done!')
# ----------- Added by me: probe layer dimensions without building a network -----------
# # torch.Size([10, 1, 103, 17, 17])
# # conv1: torch.Size([10, 32, 74, 16, 16])
# # pool1: torch.Size([10, 32, 37, 8, 8])
# # conv2: torch.Size([10, 64, 8, 6, 6])
# # pool2: torch.Size([10, 64, 4, 3, 3])
# # conv3: torch.Size([10, 128, 3, 1, 1])
# # features_size: 384
# # final_size: torch.Size([10, 10])