torch.nn.Module.modules()

Reference: torch.nn.Module.modules()

Notes:

Note the difference between torch.nn.Module.modules() and torch.nn.Module.children():

The former, modules(), traverses all submodules recursively:
it visits the submodules, the submodules of those submodules, and so on.

The latter, children(), traverses only the direct submodules and does not recurse,
so submodules of submodules are never visited.
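
The difference is easy to verify on a small nested model. Below is a minimal sketch (the layer sizes are arbitrary, chosen only for illustration):

import torch
import torch.nn as nn

# One level of nesting: the inner Sequential is a direct child of net,
# while the Linear and ReLU inside it are grandchildren.
net = nn.Sequential(
    nn.Sequential(nn.Linear(4, 8), nn.ReLU()),
    nn.Linear(8, 2),
)

# modules() recurses: net itself, the inner Sequential, Linear(4, 8),
# ReLU, and Linear(8, 2) -- 5 modules in total.
print(len(list(net.modules())))   # 5

# children() stops at the first level: only the inner Sequential and
# Linear(8, 2) are visited -- the grandchildren never appear.
print(len(list(net.children())))  # 2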


Original documentation:


modules()
    Returns an iterator over all modules in the network.
    Yields
        Module – a module in the network


    Note:
        Duplicate modules are returned only once.
        In the following example, l will be returned only once.


    Example:

    >>> l = nn.Linear(2, 2)
    >>> net = nn.Sequential(l, l)
    >>> for idx, m in enumerate(net.modules()):
    ...     print(idx, '->', m)

    0 -> Sequential(
      (0): Linear(in_features=2, out_features=2, bias=True)
      (1): Linear(in_features=2, out_features=2, bias=True)
    )
    1 -> Linear(in_features=2, out_features=2, bias=True)
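
In practice, the recursive traversal is what makes modules() the standard way to locate every layer of a given type, no matter how deeply it is nested. A small sketch (the network here is made up for illustration):

import torch.nn as nn

net = nn.Sequential(
    nn.Linear(2, 4),
    nn.Sequential(nn.ReLU(), nn.Linear(4, 2)),
)

# isinstance filtering over modules() reaches the Linear inside the
# nested Sequential as well as the top-level one.
linears = [m for m in net.modules() if isinstance(m, nn.Linear)]
print(len(linears))  # 2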

Code experiment 1:

Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.

Try the new cross-platform PowerShell https://aka.ms/pscore6

Loading personal and system profiles took 1144 ms.
(base) PS C:\Users\chenxuqi\Desktop\News4cxq\test4cxq> conda activate ssd4pytorch1_2_0
(ssd4pytorch1_2_0) PS C:\Users\chenxuqi\Desktop\News4cxq\test4cxq> python
Python 3.7.7 (default, May  6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> import torch.nn as nn
>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.modules()):
...     print(idx, '->', m)
... 
0 -> Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
1 -> Linear(in_features=2, out_features=2, bias=True)  
>>> 
>>> l1 = nn.Linear(2, 2) 
>>> l2 = nn.Linear(4, 4) 
>>> net = nn.Sequential(l1, l2) 
>>> for idx, m in enumerate(net.modules()):
...     print(idx, '->', m)
... 
0 -> Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=4, out_features=4, bias=True)
)
1 -> Linear(in_features=2, out_features=2, bias=True)
2 -> Linear(in_features=4, out_features=4, bias=True)
>>>
>>> 
>>> l1 = nn.Linear(2, 2) 
>>> l2 = nn.Linear(2, 2)  
>>> net = nn.Sequential(l1, l2) 
>>> for idx, m in enumerate(net.modules()):
...     print(idx, '->', m)
... 
0 -> Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
1 -> Linear(in_features=2, out_features=2, bias=True)
2 -> Linear(in_features=2, out_features=2, bias=True)
>>>
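
The three runs above show that deduplication is by object identity, not by layer shape: passing the same Linear object twice yields it once, while two separately constructed layers are both returned even when their shapes match. A minimal sketch that makes the identity check explicit:

>>> l1 = nn.Linear(2, 2)
>>> l2 = nn.Linear(2, 2)
>>> l1 is l2          # identical shapes, but two distinct objects
False
>>> net = nn.Sequential(l1, l1)   # reuse the *same* object twice
>>> len(list(net.modules()))      # the Sequential plus l1, yielded once
2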

Code experiment 2:

import torch

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = torch.nn.Sequential(  # input: torch.Size([64, 1, 28, 28])
                torch.nn.Conv2d(1, 64, kernel_size=3, stride=1, padding=1),
                torch.nn.ReLU(),  # output: torch.Size([64, 64, 28, 28])
                torch.nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),  # output: torch.Size([64, 128, 28, 28])
                torch.nn.ReLU(),
                torch.nn.MaxPool2d(stride=2, kernel_size=2)  # output: torch.Size([64, 128, 14, 14])
        )

        self.dense = torch.nn.Sequential(  # input: torch.Size([64, 14*14*128])
                    torch.nn.Linear(14*14*128, 1024),  # output: torch.Size([64, 1024])
                    torch.nn.ReLU(),
                    torch.nn.Dropout(p=0.5),
                    torch.nn.Linear(1024, 10)  # output: torch.Size([64, 10])
        )
        # Standalone layers registered directly on the Model; they are not
        # used in forward(), but they still show up in modules().
        self.layer4cxq1 = torch.nn.Conv2d(2, 33, 4, 4)
        self.layer4cxq2 = torch.nn.ReLU()
        self.layer4cxq3 = torch.nn.MaxPool2d(stride=2, kernel_size=2)
        self.layer4cxq4 = torch.nn.Linear(14*14*128, 1024)
        self.layer4cxq5 = torch.nn.Dropout(p=0.8)

    def forward(self, x):  # input: torch.Size([64, 1, 28, 28])
        x = self.conv1(x)  # output: torch.Size([64, 128, 14, 14])
        x = x.view(-1, 14*14*128)  # flatten to torch.Size([64, 14*14*128])
        x = self.dense(x)  # output: torch.Size([64, 10])
        return x

print('Is CUDA (GPU) available:', torch.cuda.is_available())
print('torch version:', torch.__version__)

model = Model()  # .cuda()

print("Test the model (CPU)".center(100, "-"))
print(type(model))

print("Test the modules() method".center(100, "-"))

for m in model.modules():
    print(m)

Console output:

Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.

Try the new cross-platform PowerShell https://aka.ms/pscore6

Loading personal and system profiles took 1035 ms.
(base) PS C:\Users\chenxuqi\Desktop\News4cxq\test4cxq> conda activate ssd4pytorch1_2_0
(ssd4pytorch1_2_0) PS C:\Users\chenxuqi\Desktop\News4cxq\test4cxq>  & 'D:\Anaconda3\envs\ssd4pytorch1_2_0\python.exe' 'c:\Users\chenxuqi\.vscode\extensions\ms-python.python-2020.12.424452561\pythonFiles\lib\python\debugpy\launcher' '51853' '--' 'c:\Users\chenxuqi\Desktop\News4cxq\test4cxq\test31.py'
Is CUDA (GPU) available: True
torch version: 1.2.0+cu92
----------------------------------------Test the model (CPU)----------------------------------------
<class '__main__.Model'>
-------------------------------------Test the modules() method--------------------------------------
Model(
  (conv1): Sequential(
    (0): Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU()
    (2): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU()
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (dense): Sequential(
    (0): Linear(in_features=25088, out_features=1024, bias=True)
    (1): ReLU()
    (2): Dropout(p=0.5, inplace=False)
    (3): Linear(in_features=1024, out_features=10, bias=True)
  )
  (layer4cxq1): Conv2d(2, 33, kernel_size=(4, 4), stride=(4, 4))
  (layer4cxq2): ReLU()
  (layer4cxq3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (layer4cxq4): Linear(in_features=25088, out_features=1024, bias=True)
  (layer4cxq5): Dropout(p=0.8, inplace=False)
)
Sequential(
  (0): Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (1): ReLU()
  (2): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (3): ReLU()
  (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
ReLU()
Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
ReLU()
MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
Sequential(
  (0): Linear(in_features=25088, out_features=1024, bias=True)
  (1): ReLU()
  (2): Dropout(p=0.5, inplace=False)
  (3): Linear(in_features=1024, out_features=10, bias=True)
)
Linear(in_features=25088, out_features=1024, bias=True)
ReLU()
Dropout(p=0.5, inplace=False)
Linear(in_features=1024, out_features=10, bias=True)
Conv2d(2, 33, kernel_size=(4, 4), stride=(4, 4))
ReLU()
MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
Linear(in_features=25088, out_features=1024, bias=True)
Dropout(p=0.8, inplace=False)
(ssd4pytorch1_2_0) PS C:\Users\chenxuqi\Desktop\News4cxq\test4cxq> 
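
As the output shows, modules() first yields the Model instance itself, then each Sequential container, and then every layer inside them, down to the standalone layers. This recursive traversal makes modules() a convenient hook for tasks such as weight initialization. A hedged sketch reusing the Model class above (the Kaiming initialization scheme is my own illustration, not part of the original experiment):

import torch

model = Model()

# Initialize every Conv2d and Linear layer in one pass, however deeply
# nested; isinstance filtering skips the containers and activations.
for m in model.modules():
    if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear)):
        torch.nn.init.kaiming_normal_(m.weight)
        if m.bias is not None:
            torch.nn.init.zeros_(m.bias)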