PyTorch: printing the network structure, per-layer feature-map shapes, Total params, Input size (MB), Params size (MB), and FLOPs/params

1. DA-TransUNet/train.py

net = ViT_seg(config_vit, img_size=args.img_size, num_classes=config_vit.n_classes).cuda()

# Print the network structure
print(net)

# Print per-layer output shapes, Total params, Trainable params, Input size (MB), Params size (MB), etc.
from torchsummary import summary
summary(net, input_size=(3, 224, 224), batch_size=1, device="cuda")
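Note that torchsummary is no longer actively maintained and can mis-report memory on large models (see the Forward/backward pass size in the sample output below). A minimal alternative sketch using torchinfo, its maintained successor (assumes it is installed via pip install torchinfo):

from torchinfo import summary
# torchinfo takes the full input size, including the batch dimension
summary(net, input_size=(1, 3, 224, 224), device="cuda")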

# Print FLOPs and params
from thop import profile
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tensor = torch.rand(1, 3, 224, 224).to(device)  # keep the input on the same device as the model
flops, params = profile(net, inputs=(tensor,))  # thop expects the inputs as a tuple
print('flops: ', flops, 'params: ', params)
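The raw counts are hard to read at a glance; thop ships a clever_format helper that renders them with G/M/K suffixes:

from thop import clever_format
flops_str, params_str = clever_format([flops, params], "%.3f")
print('flops: ', flops_str, 'params: ', params_str)  # e.g. flops: 25.491G params: 94.510M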

2. CMT-pytorch-master\model\Transformers\CMT\cmt.py

if __name__ == "__main__":
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = CmtTi().to(device)  # move the model first so it matches the device used below

    # Print the network structure
    print(model)

    # Print per-layer output shapes, Total params, Trainable params, Input size (MB), Params size (MB), etc.
    from torchsummary import summary
    summary(model, input_size=(3, 160, 160), batch_size=1, device=str(device))

    # Print FLOPs and params
    from thop import profile
    x = torch.randn(1, 3, 160, 160).to(device)
    flops, params = profile(model, inputs=(x,))
    print('flops: ', flops, 'params: ', params)
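A quick sanity check that can be appended to the same __main__ block: run a plain forward pass and confirm the output shape (a minimal sketch; the exact shape depends on how CmtTi is configured):

    with torch.no_grad():
        out = model(x)
    print(out.shape)  # expected: (1, num_classes)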
    
    
    
3. Sample output: DA-TransUNet

net = ViT_seg(config_vit, img_size=args.img_size, num_classes=config_vit.n_classes).cuda()
# Print the network structure
print(net)
# Part of the printed structure looks like this:
(base) ➜  work conda run -n base --no-capture-output --live-stream python /home/featurize/work/5DA/DA-TransUnet/DA-TransUNet/DA-TransUNet/train.py
/environment/miniconda3/lib/python3.10/site-packages/torch/nn/init.py:405: UserWarning: Initializing zero-element tensors is a no-op
  warnings.warn("Initializing zero-element tensors is a no-op")
DA_Transformer(
  (transformer): Transformer(
    (embeddings): Embeddings(
      (DAblock1): DANetHead(
        (conv5a): Sequential(
          (0): Conv2d(768, 48, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): BatchNorm2d(48, eps=0.001, momentum=0.95, affine=True, track_running_stats=True)
          (2): ReLU()
        )
        (conv5c): Sequential(
          (0): Conv2d(768, 48, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): BatchNorm2d(48, eps=0.001, momentum=0.95, affine=True, track_running_stats=True)
          (2): ReLU()
        )
        (sa): PAM_Module(
          (query_conv): Conv2d(48, 6, kernel_size=(1, 1), stride=(1, 1))
          (key_conv): Conv2d(48, 6, kernel_size=(1, 1), stride=(1, 1))
          (value_conv): Conv2d(48, 48, kernel_size=(1, 1), stride=(1, 1))
          (softmax): Softmax(dim=-1)
        )
        (sc): CAM_Module(
          (softmax): Softmax(dim=-1)
        )
        ... (output truncated)

net = ViT_seg(config_vit, img_size=args.img_size, num_classes=config_vit.n_classes).cuda()
# Print per-layer output shapes, Total params, Trainable params, Input size (MB), Params size (MB), etc.
from torchsummary import summary
summary(net, input_size=(3, 224, 224), batch_size=1, device="cuda")
/environment/miniconda3/lib/python3.10/site-packages/torch/nn/init.py:405: UserWarning: Initializing zero-element tensors is a no-op
  warnings.warn("Initializing zero-element tensors is a no-op")
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
         StdConv2d-1          [1, 64, 112, 112]           9,408
         GroupNorm-2          [1, 64, 112, 112]             128
              ReLU-3          [1, 64, 112, 112]               0
         StdConv2d-4           [1, 256, 55, 55]          16,384
         GroupNorm-5           [1, 256, 55, 55]             512
         StdConv2d-6            [1, 64, 55, 55]           4,096
         GroupNorm-7            [1, 64, 55, 55]             128
              ReLU-8            [1, 64, 55, 55]               0
                 ...                        ...                 ...
          Conv2d-492           [1, 9, 224, 224]           1,305
        Identity-493           [1, 9, 224, 224]               0
================================================================
Total params: 106,405,265
Trainable params: 106,405,265
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.57
Forward/backward pass size (MB): 5921468054582.70
Params size (MB): 405.90
Estimated Total Size (MB): 5921468054989.18
----------------------------------------------------------------          
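Two of these numbers deserve scrutiny. The Forward/backward pass size is clearly bogus: torchsummary accumulates statistics per layer hook, which breaks on models that reuse modules, and the same bookkeeping can double-count parameters (compare Total params above, 106,405,265, with thop's 94,510,417 below). A direct count over net.parameters() is the number to trust; Params size is then just 4 bytes per float32 weight:

n_params = sum(p.numel() for p in net.parameters())
print('params:', n_params)
print('params size (MB):', n_params * 4 / 1024 ** 2)  # float32: 4 bytes per weight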

net = ViT_seg(config_vit, img_size=args.img_size, num_classes=config_vit.n_classes).cuda()
# Print FLOPs and params
from thop import profile
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tensor = torch.rand(1, 3, 224, 224).to(device)
flops, params = profile(net, inputs=(tensor,))
print('flops: ', flops, 'params: ', params)
[INFO] Register count_convNd() for <class 'torch.nn.modules.conv.Conv2d'>.
[INFO] Register count_normalization() for <class 'torch.nn.modules.batchnorm.BatchNorm2d'>.
[INFO] Register zero_ops() for <class 'torch.nn.modules.activation.ReLU'>.
[INFO] Register zero_ops() for <class 'torch.nn.modules.container.Sequential'>.
[INFO] Register count_softmax() for <class 'torch.nn.modules.activation.Softmax'>.
[INFO] Register zero_ops() for <class 'torch.nn.modules.dropout.Dropout'>.
[INFO] Register count_normalization() for <class 'torch.nn.modules.normalization.LayerNorm'>.
[INFO] Register count_linear() for <class 'torch.nn.modules.linear.Linear'>.
[INFO] Register count_upsample() for <class 'torch.nn.modules.upsampling.UpsamplingBilinear2d'>.
flops:  25491496060.0 params:  94510417.0
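To read the result at a glance: 25491496060 ≈ 25.49 G ops and 94510417 ≈ 94.51 M parameters. Note that thop's op count is conventionally interpreted as multiply-accumulates (MACs), so some papers report roughly twice this value as FLOPs:

print(f'{flops / 1e9:.2f} GMacs, {params / 1e6:.2f} M params')  # -> 25.49 GMacs, 94.51 M params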