Week J5: Hands-On with DenseNet + SE-Net

Tasks:
●1. Insert the SE-Net channel attention mechanism into the DenseNet family of models and complete the monkeypox recognition task
●2. Can this improvement idea be transferred to other settings?
●3. Reach 89% accuracy on the test set (stretch goal, optional)

I. Introduction

Reference: the paper "Squeeze-and-Excitation Networks".

SE-Net, published by the WMW team, was the champion model of ImageNet 2017 (the final ImageNet competition). It has the advantages of low complexity, few parameters, and a small computational cost. The idea behind SENet is also very simple, which makes it easy to extend to existing network structures such as Inception and ResNet.

Much prior work improves network performance along the spatial dimension (Inception, for example), whereas SENet focuses on the relationships between feature channels. Its strategy is to learn the importance of each feature channel automatically, and then use these learned importances to strengthen useful features and suppress features that contribute little to the current task; this is also called "feature recalibration". The SE module is shown in the figure below:

[Figure: structure of the SE module]
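
To make the squeeze-excitation-recalibration pipeline concrete, here is a minimal functional sketch in PyTorch. The tensor sizes and the reduction ratio r = 4 are illustrative choices of mine, not values from the paper:

import torch
import torch.nn.functional as F

# Dummy feature map and FC weights; reduction ratio r = 4 for illustration
x = torch.randn(1, 8, 16, 16)              # (N, C, H, W) feature map
w1 = torch.randn(8 // 4, 8)                # bottleneck FC weight:  C -> C/r
w2 = torch.randn(8, 8 // 4)                # restoring FC weight:   C/r -> C

z = x.mean(dim=(2, 3))                                     # Squeeze: (N, C) channel descriptor
s = torch.sigmoid(F.linear(F.relu(F.linear(z, w1)), w2))   # Excitation: channel weights in (0, 1)
y = x * s.view(1, -1, 1, 1)                                # Recalibration: rescale each channel

print(y.shape)  # torch.Size([1, 8, 16, 16])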

II. Applying the SE Module

The flexibility of the SE module lies in the fact that it can be plugged directly into existing network structures. Taking Inception and ResNet as examples, we only need to add an SE module after the Inception module or the Residual module, as shown below:

[Figure: the SE module embedded into the Inception structure and into ResNet]
The figure shows the SE module embedded into the Inception structure and into ResNet; the dimensions next to each box denote that layer's output, and r denotes the reduction ratio used in the Excitation operation.
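
To make the ResNet case concrete, here is a minimal sketch (my own illustration, not from the original post) of an SE-augmented residual block, where the SE output rescales the residual branch before the identity shortcut is added:

import torch
import torch.nn as nn

class SEResidualBlock(nn.Module):
    """Residual block with channel attention on the residual branch (SE-ResNet pattern)."""
    def __init__(self, channels, se_block):
        super().__init__()
        self.body = nn.Sequential(                       # stand-in residual branch
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.se = se_block                               # e.g. the SE_Block defined later in this post
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Recalibrate the residual output, then add the identity shortcut
        return self.relu(x + self.se(self.body(x)))

# Smoke test with a no-op "attention" module standing in for a real SE block
block = SEResidualBlock(64, nn.Identity())
print(block(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])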

III. Comparing the Effect of the SE Module

The SE module is easy to embed in other networks. To verify its effect, SE modules were inserted into popular networks such as ResNet and Inception and evaluated on ImageNet, as shown in the table below:

[Table: ImageNet error rates of ResNet-50/101/152 with and without SE modules]

First, consider the effect of network depth on SE. The table shows results for ResNet-50, ResNet-101, and ResNet-152 alongside their SE-augmented versions. The first column, original, reports the results of the original authors' implementations; for a fair comparison, the experiments were re-run to produce the our re-implementation column. The last column, SE-module, gives the results with SE modules embedded, trained with the same settings as our re-implementation. The red numbers in parentheses indicate the accuracy improvement relative to our re-implementation.

The table shows that SE-ResNets clearly beat the accuracy of their SE-free counterparts at every depth, which indicates that the SE module brings a performance gain regardless of network depth. Notably, SE-ResNet-50 reaches the same accuracy as ResNet-101, and SE-ResNet-101 even surpasses the deeper ResNet-152.

[Figure: ImageNet training curves of ResNet-50 and ResNet-152 vs. their SE-augmented counterparts]

The figure shows the ImageNet training curves of ResNet-50 and ResNet-152 and of the corresponding SE-augmented networks; the networks with SE modules clearly converge to lower error rates.

IV. SE Module Code Implementation

import numpy as np
import tensorflow as tf

class Squeeze_excitation_layer(tf.keras.Model):
    def __init__(self, filter_sq):
        # filter_sq is the number of units in the first Dense layer of the Excitation step
        super().__init__()
        self.filter_sq = filter_sq
        self.avepool = tf.keras.layers.GlobalAveragePooling2D()
        self.dense = tf.keras.layers.Dense(filter_sq)
        self.relu = tf.keras.layers.Activation('relu')
        self.sigmoid = tf.keras.layers.Activation('sigmoid')

    def build(self, input_shape):
        # Create the channel-restoring Dense and the Reshape once here rather than
        # inside call(): layers created in call() would be re-built with fresh,
        # untrained weights on every forward pass.
        channels = input_shape[-1]
        self.dense_out = tf.keras.layers.Dense(channels)
        self.reshape = tf.keras.layers.Reshape((1, 1, channels))
        super().build(input_shape)

    def call(self, inputs):
        squeeze = self.avepool(inputs)           # Squeeze: global average pooling -> (N, C)

        excitation = self.dense(squeeze)         # bottleneck FC
        excitation = self.relu(excitation)
        excitation = self.dense_out(excitation)  # restore the channel dimension
        excitation = self.sigmoid(excitation)    # per-channel weights in (0, 1)
        excitation = self.reshape(excitation)    # (N, 1, 1, C), ready for broadcasting

        scale = inputs * excitation              # recalibrate each channel

        return scale

SE = Squeeze_excitation_layer(16)
inputs = np.zeros((1, 32, 32, 32), dtype=np.float32)
SE(inputs).shape

Output:

TensorShape([1, 32, 32, 32])

V. Inserting the SE Module into DenseNet

from tensorflow.keras import layers
from tensorflow.keras.models import Model

# dense_block() and transition_block() are the standard helpers from the Keras
# DenseNet implementation (keras.applications.densenet); their definitions are
# omitted here.
def DenseNet(blocks, input_shape=[224, 224, 3], classes=3, **kwargs):
    img_input = layers.Input(shape=input_shape)
    bn_axis = 3

    # 224,224,3 -> 112,112,64
    x = layers.ZeroPadding2D(padding=((3, 3), (3, 3)))(img_input)
    x = layers.Conv2D(64, 7, strides=2, use_bias=False, name='conv1/conv')(x)
    x = layers.BatchNormalization(axis=bn_axis, epsilon=1.001e-5, name='conv1/bn')(x)
    x = layers.Activation('relu', name='conv1/relu')(x)

    # 112,112,64 -> 56,56,64
    x = layers.ZeroPadding2D(padding=((1, 1), (1, 1)))(x)
    x = layers.MaxPooling2D(3, strides=2, name='pool1')(x)

    # 56,56,64 -> 56,56,64+32*block[0]
    # Densenet121 56,56,64 -> 56,56,64+32*6 == 56,56,256
    x = dense_block(x, blocks[0], name='conv2')

    # 56,56,64+32*block[0] -> 28,28,32+16*block[0]
    # Densenet121 56,56,256 -> 28,28,32+16*6 == 28,28,128
    x = transition_block(x, 0.5, name='pool2')

    # 28,28,32+16*block[0] -> 28,28,32+16*block[0]+32*block[1]
    # Densenet121 28,28,128 -> 28,28,128+32*12 == 28,28,512
    x = dense_block(x, blocks[1], name='conv3')
    
    # Densenet121 28,28,512 -> 14,14,256
    x = transition_block(x, 0.5, name='pool3')

    # Densenet121 14,14,256 -> 14,14,256+32*block[2] == 14,14,1024
    x = dense_block(x, blocks[2], name='conv4')

    # Densenet121 14,14,1024 -> 7,7,512
    x = transition_block(x, 0.5, name='pool4')

    # Densenet121 7,7,512 -> 7,7,512+32*block[3] == 7,7,1024
    x = dense_block(x, blocks[3], name='conv5')
    
    # insert the SE channel attention module
    x = Squeeze_excitation_layer(16)(x)

    x = layers.BatchNormalization(axis=bn_axis, epsilon=1.001e-5, name='bn')(x)
    x = layers.Activation('relu', name='relu')(x)

    x = layers.GlobalAveragePooling2D(name='avg_pool')(x)
    x = layers.Dense(classes, activation='softmax', name='fc1000')(x)

    inputs = img_input

    if blocks == [6, 12, 24, 16]:
        model = Model(inputs, x, name='densenet121')
    elif blocks == [6, 12, 32, 32]:
        model = Model(inputs, x, name='densenet169')
    elif blocks == [6, 12, 48, 32]:
        model = Model(inputs, x, name='densenet201')
    else:
        model = Model(inputs, x, name='densenet')
    return model

def DenseNet121(input_shape=[224,224,3], classes=3, **kwargs):
    return DenseNet([6, 12, 24, 16], input_shape, classes, **kwargs)

def DenseNet169(input_shape=[224,224,3], classes=3, **kwargs):
    return DenseNet([6, 12, 32, 32], input_shape, classes, **kwargs)

def DenseNet201(input_shape=[224,224,3], classes=3, **kwargs):
    return DenseNet([6, 12, 48, 32], input_shape, classes, **kwargs)
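
As a quick smoke test (my own addition; the layer and builder names come from the code above), the SE-DenseNet121 can be built for the two-class monkeypox task and run on a dummy input:

import numpy as np

# Two classes for the Monkeypox/Others task
model = DenseNet121(input_shape=[224, 224, 3], classes=2)
dummy = np.zeros((1, 224, 224, 3), dtype=np.float32)
print(model(dummy).shape)  # (1, 2)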

VI. Full PyTorch Implementation

1. Environment

  • OS: Ubuntu 22.04
  • GPU: 1 × RTX 3090 (24 GB)
  • Language: Python 3.12.3
  • IDE: Jupyter Notebook
  • Deep learning stack: torch 2.3.0+cu121, torchvision 0.18.0+cu121

2. Preparation

2.1. Set Up GPU/CPU

import os, PIL, random, pathlib
import torch
import torch.nn as nn
import torchvision
from torchvision import transforms, datasets

# Use the GPU if one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

print(device)

Output:

cuda

2.2. Load the Data

data_dir = './J5'
data_dir = pathlib.Path(data_dir)

# Class names come from the sub-directory names under ./J5
data_paths = list(data_dir.glob('*'))
classeNames = [str(path).split("/")[1] for path in data_paths]
print(classeNames)

image_count = len(list(data_dir.glob('*/*')))
print("Total number of images:", image_count)

Output:

['Monkeypox', 'Others']
Total number of images: 2142

2.3. Preprocessing

train_transforms = transforms.Compose([
    transforms.Resize([224, 224]),  # resize every input image to a uniform size
    # transforms.RandomHorizontalFlip(),  # random horizontal flip (optional augmentation)
    transforms.ToTensor(),  # convert a PIL Image or numpy.ndarray to a tensor scaled to [0, 1]
    transforms.Normalize(   # normalize each channel toward a standard normal distribution, which helps convergence
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225])  # the usual ImageNet statistics, estimated from randomly sampled images
])
 
test_transform = transforms.Compose([
    transforms.Resize([224, 224]),  # resize every input image to a uniform size
    transforms.ToTensor(),  # convert a PIL Image or numpy.ndarray to a tensor scaled to [0, 1]
    transforms.Normalize(   # same normalization as the training set
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225])
])
 
total_data = datasets.ImageFolder("./J5/", transform=train_transforms)
print(total_data.class_to_idx)

Output:

{'Monkeypox': 0, 'Others': 1}
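
As a worked illustration of what Normalize computes (values of my own choosing, not from the original post): each channel is transformed as (x - mean) / std, so a mid-gray value of 0.5 maps to (0.5 - 0.485) / 0.229 ≈ 0.066 on the red channel:

import torch
from torchvision import transforms

# Normalize applies (x - mean) / std independently per channel
norm = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
px = torch.full((3, 1, 1), 0.5)   # one mid-gray pixel after ToTensor
print(norm(px).flatten())         # tensor([0.0655, 0.1964, 0.4178])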

2.4. Split the Dataset

train_size = int(0.8 * len(total_data))
test_size = len(total_data) - train_size
train_dataset, test_dataset = torch.utils.data.random_split(total_data, [train_size, test_size])
 
batch_size = 8        # kept small so the model fits in GPU memory
train_dl = torch.utils.data.DataLoader(train_dataset,
                                       batch_size=batch_size,
                                       shuffle=True,
                                       num_workers=0)
test_dl = torch.utils.data.DataLoader(test_dataset,
                                      batch_size=batch_size,
                                      shuffle=True,
                                      num_workers=0)
for X, y in test_dl:
    print("Shape of X [N, C, H, W]: ", X.shape)
    print("Shape of y: ", y.shape, y.dtype)
    break

Output:

Shape of X [N, C, H, W]:  torch.Size([8, 3, 224, 224])
Shape of y:  torch.Size([8]) torch.int64
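
One optional refinement (my own suggestion, not in the original code): random_split accepts a seeded generator, which makes the 80/20 split reproducible across runs:

# Reproducible 80/20 split via a fixed seed
g = torch.Generator().manual_seed(42)
train_dataset, test_dataset = torch.utils.data.random_split(
    total_data, [train_size, test_size], generator=g)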

3. Build the Model

3.1. Model Definition

from collections import OrderedDict
import torch.utils.checkpoint as cp
import torch
import torch.nn as nn
import torch.nn.functional as F

def _bn_function_factory(norm, relu, conv):
    def bn_function(*inputs):
        concated_features = torch.cat(inputs, 1)
        bottleneck_output = conv(relu(norm(concated_features)))
        return bottleneck_output

    return bn_function
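
# Note: bn_function concatenates all earlier feature maps and then applies
# BN-ReLU-Conv1x1. When efficient=True, _DenseLayer wraps this call in
# torch.utils.checkpoint, which recomputes the concatenated tensor during the
# backward pass instead of storing it, trading extra compute for memory.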


class _DenseLayer(nn.Module):
    def __init__(self, num_input_features, growth_rate, bn_size, drop_rate, efficient=False):
        super(_DenseLayer, self).__init__()
        self.add_module('norm1', nn.BatchNorm2d(num_input_features))
        self.add_module('relu1', nn.ReLU(inplace=True))
        self.add_module('conv1', nn.Conv2d(num_input_features, bn_size * growth_rate,
                                           kernel_size=1, stride=1, bias=False))
        self.add_module('norm2', nn.BatchNorm2d(bn_size * growth_rate))
        self.add_module('relu2', nn.ReLU(inplace=True))
        self.add_module('conv2', nn.Conv2d(bn_size * growth_rate, growth_rate,
                                           kernel_size=3, stride=1, padding=1, bias=False))

        self.add_module('SE_Block', SE_Block(growth_rate, reduction=16))
        self.drop_rate = drop_rate
        self.efficient = efficient

    def forward(self, *prev_features):
        bn_function = _bn_function_factory(self.norm1, self.relu1, self.conv1)
        if self.efficient and any(prev_feature.requires_grad for prev_feature in prev_features):
            bottleneck_output = cp.checkpoint(bn_function, *prev_features)
        else:
            bottleneck_output = bn_function(*prev_features)
        new_features = self.SE_Block(self.conv2(self.relu2(self.norm2(bottleneck_output))))
        if self.drop_rate > 0:
            new_features = F.dropout(new_features, p=self.drop_rate, training=self.training)
        return new_features


class _Transition(nn.Sequential):
    def __init__(self, num_input_features, num_output_features):
        super(_Transition, self).__init__()
        self.add_module('norm', nn.BatchNorm2d(num_input_features))
        self.add_module('relu', nn.ReLU(inplace=True))
        self.add_module('conv', nn.Conv2d(num_input_features, num_output_features,
                                          kernel_size=1, stride=1, bias=False))
        self.add_module('pool', nn.AvgPool2d(kernel_size=2, stride=2))


class _DenseBlock(nn.Module):
    def __init__(self, num_layers, num_input_features, bn_size, growth_rate, drop_rate, efficient=False):
        super(_DenseBlock, self).__init__()
        for i in range(num_layers):
            layer = _DenseLayer(
                num_input_features + i * growth_rate,
                growth_rate=growth_rate,
                bn_size=bn_size,
                drop_rate=drop_rate,
                efficient=efficient,
            )
            self.add_module('denselayer%d' % (i + 1), layer)

    def forward(self, init_features):
        features = [init_features]
        for name, layer in self.named_children():
            new_features = layer(*features)
            features.append(new_features)
        return torch.cat(features, 1)


class SE_Block(nn.Module):
    def __init__(self, ch_in, reduction=16):
        super(SE_Block, self).__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)  # global adaptive average pooling
        self.fc = nn.Sequential(
            nn.Linear(ch_in, ch_in // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(ch_in // reduction, ch_in, bias=False),
            nn.Sigmoid()
        )

    def forward(self, x):
        b, c, _, _ = x.size()
        y = self.avg_pool(x).view(b, c)  # Squeeze: global average over the spatial dimensions
        y = self.fc(y).view(b, c, 1, 1)  # Excitation: FC layers produce channel attention weights with global context
        return x * y.expand_as(x)  # apply the attention weights to every channel


class DenseNet(nn.Module):
    def __init__(self, growth_rate, block_config, num_init_features=24, compression=0.5, bn_size=4, drop_rate=0,
                 num_classes=10, small_inputs=True, efficient=False):

        super(DenseNet, self).__init__()
        assert 0 < compression <= 1, 'compression of densenet should be between 0 and 1'

        # First convolution
        if small_inputs:
            self.features = nn.Sequential(OrderedDict([
                ('conv0', nn.Conv2d(3, num_init_features, kernel_size=3, stride=1, padding=1, bias=False)),
            ]))
        else:
            self.features = nn.Sequential(OrderedDict([
                ('conv0', nn.Conv2d(3, num_init_features, kernel_size=7, stride=2, padding=3, bias=False)),
            ]))
            self.features.add_module('norm0', nn.BatchNorm2d(num_init_features))
            self.features.add_module('relu0', nn.ReLU(inplace=True))
            self.features.add_module('pool0', nn.MaxPool2d(kernel_size=3, stride=2, padding=1,
                                                           ceil_mode=False))

        # Each denseblock
        num_features = num_init_features
        for i, num_layers in enumerate(block_config):
            block = _DenseBlock(
                num_layers=num_layers,
                num_input_features=num_features,
                bn_size=bn_size,
                growth_rate=growth_rate,
                drop_rate=drop_rate,
                efficient=efficient,
            )
            self.features.add_module('denseblock%d' % (i + 1), block)
            num_features = num_features + num_layers * growth_rate
            if i != len(block_config) - 1:
                trans = _Transition(num_input_features=num_features,
                                    num_output_features=int(num_features * compression))
                self.features.add_module('transition%d' % (i + 1), trans)
                num_features = int(num_features * compression)
            # self.features.add_module('SE_Block%d' % (i + 1),SE_Block(num_features, reduction=16))

        # Final batch norm
        self.features.add_module('norm_final', nn.BatchNorm2d(num_features))

        # Linear layer
        self.classifier = nn.Linear(num_features, num_classes)

    def forward(self, x):
        features = self.features(x)
        out = F.relu(features, inplace=True)
        out = F.adaptive_avg_pool2d(out, (1, 1))
        out = torch.flatten(out, 1)
        out = self.classifier(out)
        return out
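
A quick shape check of SE_Block on a dummy tensor (my own addition) confirms that it only rescales channels and leaves the feature-map shape untouched:

# Sanity check: SE_Block preserves the input shape
se = SE_Block(ch_in=32, reduction=16)
t = torch.randn(2, 32, 56, 56)
print(se(t).shape)  # torch.Size([2, 32, 56, 56])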

3.2. Instantiate the Model

def DenseNet121_4class():
    # Note: the task here is binary (Monkeypox vs. Others), so num_classes=2
    # despite the "_4class" suffix in the function name
    return DenseNet(growth_rate=32, block_config=(6,12,24,16), compression=0.5,
                    num_init_features=64, bn_size=4, drop_rate=0.2, num_classes=2, efficient=True)


# Instantiate the modified DenseNet
model = DenseNet121_4class()

3.3. Inspect the Model

# Run a dummy forward pass to verify the output shape
x = torch.randn(2, 3, 224, 224)
out = model(x)
print('out.shape:', out.shape)
print(out)

model.to(device)
# Report the parameter count and per-layer output shapes
import torchsummary as summary
 
summary.summary(model, (3, 224, 224))

Output:

out.shape: torch.Size([2, 2])
tensor([[-0.1052, -0.1803],
        [-0.1191, -0.1518]], grad_fn=<AddmmBackward0>)
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1         [-1, 64, 224, 224]           1,728
       BatchNorm2d-2         [-1, 64, 224, 224]             128
              ReLU-3         [-1, 64, 224, 224]               0
            Conv2d-4        [-1, 128, 224, 224]           8,192
       BatchNorm2d-5        [-1, 128, 224, 224]             256
              ReLU-6        [-1, 128, 224, 224]               0
            Conv2d-7         [-1, 32, 224, 224]          36,864
 AdaptiveAvgPool2d-8             [-1, 32, 1, 1]               0
            Linear-9                    [-1, 2]              64
             ReLU-10                    [-1, 2]               0
           Linear-11                   [-1, 32]              64
          Sigmoid-12                   [-1, 32]               0
         SE_Block-13         [-1, 32, 224, 224]               0
      _DenseLayer-14         [-1, 32, 224, 224]               0
      BatchNorm2d-15         [-1, 96, 224, 224]             192
             ReLU-16         [-1, 96, 224, 224]               0
           Conv2d-17        [-1, 128, 224, 224]          12,288
      BatchNorm2d-18        [-1, 128, 224, 224]             256
             ReLU-19        [-1, 128, 224, 224]               0
           Conv2d-20         [-1, 32, 224, 224]          36,864
AdaptiveAvgPool2d-21             [-1, 32, 1, 1]               0
           Linear-22                    [-1, 2]              64
             ReLU-23                    [-1, 2]               0
           Linear-24                   [-1, 32]              64
          Sigmoid-25                   [-1, 32]               0
         SE_Block-26         [-1, 32, 224, 224]               0
      _DenseLayer-27         [-1, 32, 224, 224]               0
      BatchNorm2d-28        [-1, 128, 224, 224]             256
             ReLU-29        [-1, 128, 224, 224]               0
           Conv2d-30        [-1, 128, 224, 224]          16,384
      BatchNorm2d-31        [-1, 128, 224, 224]             256
             ReLU-32        [-1, 128, 224, 224]               0
           Conv2d-33         [-1, 32, 224, 224]          36,864
AdaptiveAvgPool2d-34             [-1, 32, 1, 1]               0
           Linear-35                    [-1, 2]              64
             ReLU-36                    [-1, 2]               0
           Linear-37                   [-1, 32]              64
          Sigmoid-38                   [-1, 32]               0
         SE_Block-39         [-1, 32, 224, 224]               0
      _DenseLayer-40         [-1, 32, 224, 224]               0
      BatchNorm2d-41        [-1, 160, 224, 224]             320
             ReLU-42        [-1, 160, 224, 224]               0
           Conv2d-43        [-1, 128, 224, 224]          20,480
      BatchNorm2d-44        [-1, 128, 224, 224]             256
             ReLU-45        [-1, 128, 224, 224]               0
           Conv2d-46         [-1, 32, 224, 224]          36,864
AdaptiveAvgPool2d-47             [-1, 32, 1, 1]               0
           Linear-48                    [-1, 2]              64
             ReLU-49                    [-1, 2]               0
           Linear-50                   [-1, 32]              64
          Sigmoid-51                   [-1, 32]               0
         SE_Block-52         [-1, 32, 224, 224]               0
      _DenseLayer-53         [-1, 32, 224, 224]               0
      BatchNorm2d-54        [-1, 192, 224, 224]             384
             ReLU-55        [-1, 192, 224, 224]               0
           Conv2d-56        [-1, 128, 224, 224]          24,576
      BatchNorm2d-57        [-1, 128, 224, 224]             256
             ReLU-58        [-1, 128, 224, 224]               0
           Conv2d-59         [-1, 32, 224, 224]          36,864
AdaptiveAvgPool2d-60             [-1, 32, 1, 1]               0
           Linear-61                    [-1, 2]              64
             ReLU-62                    [-1, 2]               0
           Linear-63                   [-1, 32]              64
          Sigmoid-64                   [-1, 32]               0
         SE_Block-65         [-1, 32, 224, 224]               0
      _DenseLayer-66         [-1, 32, 224, 224]               0
      BatchNorm2d-67        [-1, 224, 224, 224]             448
             ReLU-68        [-1, 224, 224, 224]               0
           Conv2d-69        [-1, 128, 224, 224]          28,672
      BatchNorm2d-70        [-1, 128, 224, 224]             256
             ReLU-71        [-1, 128, 224, 224]               0
           Conv2d-72         [-1, 32, 224, 224]          36,864
AdaptiveAvgPool2d-73             [-1, 32, 1, 1]               0
           Linear-74                    [-1, 2]              64
             ReLU-75                    [-1, 2]               0
           Linear-76                   [-1, 32]              64
          Sigmoid-77                   [-1, 32]               0
         SE_Block-78         [-1, 32, 224, 224]               0
      _DenseLayer-79         [-1, 32, 224, 224]               0
      _DenseBlock-80        [-1, 256, 224, 224]               0
      BatchNorm2d-81        [-1, 256, 224, 224]             512
             ReLU-82        [-1, 256, 224, 224]               0
           Conv2d-83        [-1, 128, 224, 224]          32,768
        AvgPool2d-84        [-1, 128, 112, 112]               0
      BatchNorm2d-85        [-1, 128, 112, 112]             256
             ReLU-86        [-1, 128, 112, 112]               0
           Conv2d-87        [-1, 128, 112, 112]          16,384
      BatchNorm2d-88        [-1, 128, 112, 112]             256
             ReLU-89        [-1, 128, 112, 112]               0
           Conv2d-90         [-1, 32, 112, 112]          36,864
AdaptiveAvgPool2d-91             [-1, 32, 1, 1]               0
           Linear-92                    [-1, 2]              64
             ReLU-93                    [-1, 2]               0
           Linear-94                   [-1, 32]              64
          Sigmoid-95                   [-1, 32]               0
         SE_Block-96         [-1, 32, 112, 112]               0
      _DenseLayer-97         [-1, 32, 112, 112]               0
      BatchNorm2d-98        [-1, 160, 112, 112]             320
             ReLU-99        [-1, 160, 112, 112]               0
          Conv2d-100        [-1, 128, 112, 112]          20,480
     BatchNorm2d-101        [-1, 128, 112, 112]             256
            ReLU-102        [-1, 128, 112, 112]               0
          Conv2d-103         [-1, 32, 112, 112]          36,864
AdaptiveAvgPool2d-104             [-1, 32, 1, 1]               0
          Linear-105                    [-1, 2]              64
            ReLU-106                    [-1, 2]               0
          Linear-107                   [-1, 32]              64
         Sigmoid-108                   [-1, 32]               0
        SE_Block-109         [-1, 32, 112, 112]               0
     _DenseLayer-110         [-1, 32, 112, 112]               0
     BatchNorm2d-111        [-1, 192, 112, 112]             384
            ReLU-112        [-1, 192, 112, 112]               0
          Conv2d-113        [-1, 128, 112, 112]          24,576
     BatchNorm2d-114        [-1, 128, 112, 112]             256
            ReLU-115        [-1, 128, 112, 112]               0
          Conv2d-116         [-1, 32, 112, 112]          36,864
AdaptiveAvgPool2d-117             [-1, 32, 1, 1]               0
          Linear-118                    [-1, 2]              64
            ReLU-119                    [-1, 2]               0
          Linear-120                   [-1, 32]              64
         Sigmoid-121                   [-1, 32]               0
        SE_Block-122         [-1, 32, 112, 112]               0
     _DenseLayer-123         [-1, 32, 112, 112]               0
     BatchNorm2d-124        [-1, 224, 112, 112]             448
            ReLU-125        [-1, 224, 112, 112]               0
          Conv2d-126        [-1, 128, 112, 112]          28,672
     BatchNorm2d-127        [-1, 128, 112, 112]             256
            ReLU-128        [-1, 128, 112, 112]               0
          Conv2d-129         [-1, 32, 112, 112]          36,864
AdaptiveAvgPool2d-130             [-1, 32, 1, 1]               0
          Linear-131                    [-1, 2]              64
            ReLU-132                    [-1, 2]               0
          Linear-133                   [-1, 32]              64
         Sigmoid-134                   [-1, 32]               0
        SE_Block-135         [-1, 32, 112, 112]               0
     _DenseLayer-136         [-1, 32, 112, 112]               0
     BatchNorm2d-137        [-1, 256, 112, 112]             512
            ReLU-138        [-1, 256, 112, 112]               0
          Conv2d-139        [-1, 128, 112, 112]          32,768
     BatchNorm2d-140        [-1, 128, 112, 112]             256
            ReLU-141        [-1, 128, 112, 112]               0
          Conv2d-142         [-1, 32, 112, 112]          36,864
AdaptiveAvgPool2d-143             [-1, 32, 1, 1]               0
          Linear-144                    [-1, 2]              64
            ReLU-145                    [-1, 2]               0
          Linear-146                   [-1, 32]              64
         Sigmoid-147                   [-1, 32]               0
        SE_Block-148         [-1, 32, 112, 112]               0
     _DenseLayer-149         [-1, 32, 112, 112]               0
     BatchNorm2d-150        [-1, 288, 112, 112]             576
            ReLU-151        [-1, 288, 112, 112]               0
          Conv2d-152        [-1, 128, 112, 112]          36,864
     BatchNorm2d-153        [-1, 128, 112, 112]             256
            ReLU-154        [-1, 128, 112, 112]               0
          Conv2d-155         [-1, 32, 112, 112]          36,864
AdaptiveAvgPool2d-156             [-1, 32, 1, 1]               0
          Linear-157                    [-1, 2]              64
            ReLU-158                    [-1, 2]               0
          Linear-159                   [-1, 32]              64
         Sigmoid-160                   [-1, 32]               0
        SE_Block-161         [-1, 32, 112, 112]               0
     _DenseLayer-162         [-1, 32, 112, 112]               0
     BatchNorm2d-163        [-1, 320, 112, 112]             640
            ReLU-164        [-1, 320, 112, 112]               0
          Conv2d-165        [-1, 128, 112, 112]          40,960
     BatchNorm2d-166        [-1, 128, 112, 112]             256
            ReLU-167        [-1, 128, 112, 112]               0
          Conv2d-168         [-1, 32, 112, 112]          36,864
AdaptiveAvgPool2d-169             [-1, 32, 1, 1]               0
          Linear-170                    [-1, 2]              64
            ReLU-171                    [-1, 2]               0
          Linear-172                   [-1, 32]              64
         Sigmoid-173                   [-1, 32]               0
        SE_Block-174         [-1, 32, 112, 112]               0
     _DenseLayer-175         [-1, 32, 112, 112]               0
     BatchNorm2d-176        [-1, 352, 112, 112]             704
            ReLU-177        [-1, 352, 112, 112]               0
          Conv2d-178        [-1, 128, 112, 112]          45,056
     BatchNorm2d-179        [-1, 128, 112, 112]             256
            ReLU-180        [-1, 128, 112, 112]               0
          Conv2d-181         [-1, 32, 112, 112]          36,864
AdaptiveAvgPool2d-182             [-1, 32, 1, 1]               0
          Linear-183                    [-1, 2]              64
            ReLU-184                    [-1, 2]               0
          Linear-185                   [-1, 32]              64
         Sigmoid-186                   [-1, 32]               0
        SE_Block-187         [-1, 32, 112, 112]               0
     _DenseLayer-188         [-1, 32, 112, 112]               0
     BatchNorm2d-189        [-1, 384, 112, 112]             768
            ReLU-190        [-1, 384, 112, 112]               0
          Conv2d-191        [-1, 128, 112, 112]          49,152
     BatchNorm2d-192        [-1, 128, 112, 112]             256
            ReLU-193        [-1, 128, 112, 112]               0
          Conv2d-194         [-1, 32, 112, 112]          36,864
AdaptiveAvgPool2d-195             [-1, 32, 1, 1]               0
          Linear-196                    [-1, 2]              64
            ReLU-197                    [-1, 2]               0
          Linear-198                   [-1, 32]              64
         Sigmoid-199                   [-1, 32]               0
        SE_Block-200         [-1, 32, 112, 112]               0
     _DenseLayer-201         [-1, 32, 112, 112]               0
     BatchNorm2d-202        [-1, 416, 112, 112]             832
            ReLU-203        [-1, 416, 112, 112]               0
          Conv2d-204        [-1, 128, 112, 112]          53,248
     BatchNorm2d-205        [-1, 128, 112, 112]             256
            ReLU-206        [-1, 128, 112, 112]               0
          Conv2d-207         [-1, 32, 112, 112]          36,864
AdaptiveAvgPool2d-208             [-1, 32, 1, 1]               0
          Linear-209                    [-1, 2]              64
            ReLU-210                    [-1, 2]               0
          Linear-211                   [-1, 32]              64
         Sigmoid-212                   [-1, 32]               0
        SE_Block-213         [-1, 32, 112, 112]               0
     _DenseLayer-214         [-1, 32, 112, 112]               0
     BatchNorm2d-215        [-1, 448, 112, 112]             896
            ReLU-216        [-1, 448, 112, 112]               0
          Conv2d-217        [-1, 128, 112, 112]          57,344
     BatchNorm2d-218        [-1, 128, 112, 112]             256
            ReLU-219        [-1, 128, 112, 112]               0
          Conv2d-220         [-1, 32, 112, 112]          36,864
AdaptiveAvgPool2d-221             [-1, 32, 1, 1]               0
          Linear-222                    [-1, 2]              64
            ReLU-223                    [-1, 2]               0
          Linear-224                   [-1, 32]              64
         Sigmoid-225                   [-1, 32]               0
        SE_Block-226         [-1, 32, 112, 112]               0
     _DenseLayer-227         [-1, 32, 112, 112]               0
     BatchNorm2d-228        [-1, 480, 112, 112]             960
            ReLU-229        [-1, 480, 112, 112]               0
          Conv2d-230        [-1, 128, 112, 112]          61,440
     BatchNorm2d-231        [-1, 128, 112, 112]             256
            ReLU-232        [-1, 128, 112, 112]               0
          Conv2d-233         [-1, 32, 112, 112]          36,864
AdaptiveAvgPool2d-234             [-1, 32, 1, 1]               0
          Linear-235                    [-1, 2]              64
            ReLU-236                    [-1, 2]               0
          Linear-237                   [-1, 32]              64
         Sigmoid-238                   [-1, 32]               0
        SE_Block-239         [-1, 32, 112, 112]               0
     _DenseLayer-240         [-1, 32, 112, 112]               0
     _DenseBlock-241        [-1, 512, 112, 112]               0
     BatchNorm2d-242        [-1, 512, 112, 112]           1,024
            ReLU-243        [-1, 512, 112, 112]               0
          Conv2d-244        [-1, 256, 112, 112]         131,072
       AvgPool2d-245          [-1, 256, 56, 56]               0
     BatchNorm2d-246          [-1, 256, 56, 56]             512
            ReLU-247          [-1, 256, 56, 56]               0
          Conv2d-248          [-1, 128, 56, 56]          32,768
     BatchNorm2d-249          [-1, 128, 56, 56]             256
            ReLU-250          [-1, 128, 56, 56]               0
          Conv2d-251           [-1, 32, 56, 56]          36,864
AdaptiveAvgPool2d-252             [-1, 32, 1, 1]               0
          Linear-253                    [-1, 2]              64
            ReLU-254                    [-1, 2]               0
          Linear-255                   [-1, 32]              64
         Sigmoid-256                   [-1, 32]               0
        SE_Block-257           [-1, 32, 56, 56]               0
     _DenseLayer-258           [-1, 32, 56, 56]               0
     BatchNorm2d-259          [-1, 288, 56, 56]             576
            ReLU-260          [-1, 288, 56, 56]               0
          Conv2d-261          [-1, 128, 56, 56]          36,864
     BatchNorm2d-262          [-1, 128, 56, 56]             256
            ReLU-263          [-1, 128, 56, 56]               0
          Conv2d-264           [-1, 32, 56, 56]          36,864
AdaptiveAvgPool2d-265             [-1, 32, 1, 1]               0
          Linear-266                    [-1, 2]              64
            ReLU-267                    [-1, 2]               0
          Linear-268                   [-1, 32]              64
         Sigmoid-269                   [-1, 32]               0
        SE_Block-270           [-1, 32, 56, 56]               0
     _DenseLayer-271           [-1, 32, 56, 56]               0
     BatchNorm2d-272          [-1, 320, 56, 56]             640
            ReLU-273          [-1, 320, 56, 56]               0
          Conv2d-274          [-1, 128, 56, 56]          40,960
     BatchNorm2d-275          [-1, 128, 56, 56]             256
            ReLU-276          [-1, 128, 56, 56]               0
          Conv2d-277           [-1, 32, 56, 56]          36,864
AdaptiveAvgPool2d-278             [-1, 32, 1, 1]               0
          Linear-279                    [-1, 2]              64
            ReLU-280                    [-1, 2]               0
          Linear-281                   [-1, 32]              64
         Sigmoid-282                   [-1, 32]               0
        SE_Block-283           [-1, 32, 56, 56]               0
     _DenseLayer-284           [-1, 32, 56, 56]               0
     BatchNorm2d-285          [-1, 352, 56, 56]             704
            ReLU-286          [-1, 352, 56, 56]               0
          Conv2d-287          [-1, 128, 56, 56]          45,056
     BatchNorm2d-288          [-1, 128, 56, 56]             256
            ReLU-289          [-1, 128, 56, 56]               0
          Conv2d-290           [-1, 32, 56, 56]          36,864
AdaptiveAvgPool2d-291             [-1, 32, 1, 1]               0
          Linear-292                    [-1, 2]              64
            ReLU-293                    [-1, 2]               0
          Linear-294                   [-1, 32]              64
         Sigmoid-295                   [-1, 32]               0
        SE_Block-296           [-1, 32, 56, 56]               0
     _DenseLayer-297           [-1, 32, 56, 56]               0
     BatchNorm2d-298          [-1, 384, 56, 56]             768
            ReLU-299          [-1, 384, 56, 56]               0
          Conv2d-300          [-1, 128, 56, 56]          49,152
     BatchNorm2d-301          [-1, 128, 56, 56]             256
            ReLU-302          [-1, 128, 56, 56]               0
          Conv2d-303           [-1, 32, 56, 56]          36,864
AdaptiveAvgPool2d-304             [-1, 32, 1, 1]               0
          Linear-305                    [-1, 2]              64
            ReLU-306                    [-1, 2]               0
          Linear-307                   [-1, 32]              64
         Sigmoid-308                   [-1, 32]               0
        SE_Block-309           [-1, 32, 56, 56]               0
     _DenseLayer-310           [-1, 32, 56, 56]               0
     BatchNorm2d-311          [-1, 416, 56, 56]             832
            ReLU-312          [-1, 416, 56, 56]               0
          Conv2d-313          [-1, 128, 56, 56]          53,248
     BatchNorm2d-314          [-1, 128, 56, 56]             256
            ReLU-315          [-1, 128, 56, 56]               0
          Conv2d-316           [-1, 32, 56, 56]          36,864
AdaptiveAvgPool2d-317             [-1, 32, 1, 1]               0
          Linear-318                    [-1, 2]              64
            ReLU-319                    [-1, 2]               0
          Linear-320                   [-1, 32]              64
         Sigmoid-321                   [-1, 32]               0
        SE_Block-322           [-1, 32, 56, 56]               0
     _DenseLayer-323           [-1, 32, 56, 56]               0
     BatchNorm2d-324          [-1, 448, 56, 56]             896
            ReLU-325          [-1, 448, 56, 56]               0
          Conv2d-326          [-1, 128, 56, 56]          57,344
     BatchNorm2d-327          [-1, 128, 56, 56]             256
            ReLU-328          [-1, 128, 56, 56]               0
          Conv2d-329           [-1, 32, 56, 56]          36,864
AdaptiveAvgPool2d-330             [-1, 32, 1, 1]               0
          Linear-331                    [-1, 2]              64
            ReLU-332                    [-1, 2]               0
          Linear-333                   [-1, 32]              64
         Sigmoid-334                   [-1, 32]               0
        SE_Block-335           [-1, 32, 56, 56]               0
     _DenseLayer-336           [-1, 32, 56, 56]               0
     BatchNorm2d-337          [-1, 480, 56, 56]             960
            ReLU-338          [-1, 480, 56, 56]               0
          Conv2d-339          [-1, 128, 56, 56]          61,440
     BatchNorm2d-340          [-1, 128, 56, 56]             256
            ReLU-341          [-1, 128, 56, 56]               0
          Conv2d-342           [-1, 32, 56, 56]          36,864
AdaptiveAvgPool2d-343             [-1, 32, 1, 1]               0
          Linear-344                    [-1, 2]              64
            ReLU-345                    [-1, 2]               0
          Linear-346                   [-1, 32]              64
         Sigmoid-347                   [-1, 32]               0
        SE_Block-348           [-1, 32, 56, 56]               0
     _DenseLayer-349           [-1, 32, 56, 56]               0
     BatchNorm2d-350          [-1, 512, 56, 56]           1,024
            ReLU-351          [-1, 512, 56, 56]               0
          Conv2d-352          [-1, 128, 56, 56]          65,536
     BatchNorm2d-353          [-1, 128, 56, 56]             256
            ReLU-354          [-1, 128, 56, 56]               0
          Conv2d-355           [-1, 32, 56, 56]          36,864
AdaptiveAvgPool2d-356             [-1, 32, 1, 1]               0
          Linear-357                    [-1, 2]              64
            ReLU-358                    [-1, 2]               0
          Linear-359                   [-1, 32]              64
         Sigmoid-360                   [-1, 32]               0
        SE_Block-361           [-1, 32, 56, 56]               0
     _DenseLayer-362           [-1, 32, 56, 56]               0
     BatchNorm2d-363          [-1, 544, 56, 56]           1,088
            ReLU-364          [-1, 544, 56, 56]               0
          Conv2d-365          [-1, 128, 56, 56]          69,632
     BatchNorm2d-366          [-1, 128, 56, 56]             256
            ReLU-367          [-1, 128, 56, 56]               0
          Conv2d-368           [-1, 32, 56, 56]          36,864
AdaptiveAvgPool2d-369             [-1, 32, 1, 1]               0
          Linear-370                    [-1, 2]              64
            ReLU-371                    [-1, 2]               0
          Linear-372                   [-1, 32]              64
         Sigmoid-373                   [-1, 32]               0
        SE_Block-374           [-1, 32, 56, 56]               0
     _DenseLayer-375           [-1, 32, 56, 56]               0
     BatchNorm2d-376          [-1, 576, 56, 56]           1,152
            ReLU-377          [-1, 576, 56, 56]               0
          Conv2d-378          [-1, 128, 56, 56]          73,728
     BatchNorm2d-379          [-1, 128, 56, 56]             256
            ReLU-380          [-1, 128, 56, 56]               0
          Conv2d-381           [-1, 32, 56, 56]          36,864
AdaptiveAvgPool2d-382             [-1, 32, 1, 1]               0
          Linear-383                    [-1, 2]              64
            ReLU-384                    [-1, 2]               0
          Linear-385                   [-1, 32]              64
         Sigmoid-386                   [-1, 32]               0
        SE_Block-387           [-1, 32, 56, 56]               0
     _DenseLayer-388           [-1, 32, 56, 56]               0
     BatchNorm2d-389          [-1, 608, 56, 56]           1,216
            ReLU-390          [-1, 608, 56, 56]               0
          Conv2d-391          [-1, 128, 56, 56]          77,824
     BatchNorm2d-392          [-1, 128, 56, 56]             256
            ReLU-393          [-1, 128, 56, 56]               0
          Conv2d-394           [-1, 32, 56, 56]          36,864
AdaptiveAvgPool2d-395             [-1, 32, 1, 1]               0
          Linear-396                    [-1, 2]              64
            ReLU-397                    [-1, 2]               0
          Linear-398                   [-1, 32]              64
         Sigmoid-399                   [-1, 32]               0
        SE_Block-400           [-1, 32, 56, 56]               0
     _DenseLayer-401           [-1, 32, 56, 56]               0
     BatchNorm2d-402          [-1, 640, 56, 56]           1,280
            ReLU-403          [-1, 640, 56, 56]               0
          Conv2d-404          [-1, 128, 56, 56]          81,920
     BatchNorm2d-405          [-1, 128, 56, 56]             256
            ReLU-406          [-1, 128, 56, 56]               0
          Conv2d-407           [-1, 32, 56, 56]          36,864
AdaptiveAvgPool2d-408             [-1, 32, 1, 1]               0
          Linear-409                    [-1, 2]              64
            ReLU-410                    [-1, 2]               0
          Linear-411                   [-1, 32]              64
         Sigmoid-412                   [-1, 32]               0
        SE_Block-413           [-1, 32, 56, 56]               0
     _DenseLayer-414           [-1, 32, 56, 56]               0
     BatchNorm2d-415          [-1, 672, 56, 56]           1,344
            ReLU-416          [-1, 672, 56, 56]               0
          Conv2d-417          [-1, 128, 56, 56]          86,016
     BatchNorm2d-418          [-1, 128, 56, 56]             256
            ReLU-419          [-1, 128, 56, 56]               0
          Conv2d-420           [-1, 32, 56, 56]          36,864
AdaptiveAvgPool2d-421             [-1, 32, 1, 1]               0
          Linear-422                    [-1, 2]              64
            ReLU-423                    [-1, 2]               0
          Linear-424                   [-1, 32]              64
         Sigmoid-425                   [-1, 32]               0
        SE_Block-426           [-1, 32, 56, 56]               0
     _DenseLayer-427           [-1, 32, 56, 56]               0
     BatchNorm2d-428          [-1, 704, 56, 56]           1,408
            ReLU-429          [-1, 704, 56, 56]               0
          Conv2d-430          [-1, 128, 56, 56]          90,112
     BatchNorm2d-431          [-1, 128, 56, 56]             256
            ReLU-432          [-1, 128, 56, 56]               0
          Conv2d-433           [-1, 32, 56, 56]          36,864
AdaptiveAvgPool2d-434             [-1, 32, 1, 1]               0
          Linear-435                    [-1, 2]              64
            ReLU-436                    [-1, 2]               0
          Linear-437                   [-1, 32]              64
         Sigmoid-438                   [-1, 32]               0
        SE_Block-439           [-1, 32, 56, 56]               0
     _DenseLayer-440           [-1, 32, 56, 56]               0
     BatchNorm2d-441          [-1, 736, 56, 56]           1,472
            ReLU-442          [-1, 736, 56, 56]               0
          Conv2d-443          [-1, 128, 56, 56]          94,208
     BatchNorm2d-444          [-1, 128, 56, 56]             256
            ReLU-445          [-1, 128, 56, 56]               0
          Conv2d-446           [-1, 32, 56, 56]          36,864
AdaptiveAvgPool2d-447             [-1, 32, 1, 1]               0
          Linear-448                    [-1, 2]              64
            ReLU-449                    [-1, 2]               0
          Linear-450                   [-1, 32]              64
         Sigmoid-451                   [-1, 32]               0
        SE_Block-452           [-1, 32, 56, 56]               0
     _DenseLayer-453           [-1, 32, 56, 56]               0
     BatchNorm2d-454          [-1, 768, 56, 56]           1,536
            ReLU-455          [-1, 768, 56, 56]               0
          Conv2d-456          [-1, 128, 56, 56]          98,304
     BatchNorm2d-457          [-1, 128, 56, 56]             256
            ReLU-458          [-1, 128, 56, 56]               0
          Conv2d-459           [-1, 32, 56, 56]          36,864
AdaptiveAvgPool2d-460             [-1, 32, 1, 1]               0
          Linear-461                    [-1, 2]              64
            ReLU-462                    [-1, 2]               0
          Linear-463                   [-1, 32]              64
         Sigmoid-464                   [-1, 32]               0
        SE_Block-465           [-1, 32, 56, 56]               0
     _DenseLayer-466           [-1, 32, 56, 56]               0
     BatchNorm2d-467          [-1, 800, 56, 56]           1,600
            ReLU-468          [-1, 800, 56, 56]               0
          Conv2d-469          [-1, 128, 56, 56]         102,400
     BatchNorm2d-470          [-1, 128, 56, 56]             256
            ReLU-471          [-1, 128, 56, 56]               0
          Conv2d-472           [-1, 32, 56, 56]          36,864
AdaptiveAvgPool2d-473             [-1, 32, 1, 1]               0
          Linear-474                    [-1, 2]              64
            ReLU-475                    [-1, 2]               0
          Linear-476                   [-1, 32]              64
         Sigmoid-477                   [-1, 32]               0
        SE_Block-478           [-1, 32, 56, 56]               0
     _DenseLayer-479           [-1, 32, 56, 56]               0
     BatchNorm2d-480          [-1, 832, 56, 56]           1,664
            ReLU-481          [-1, 832, 56, 56]               0
          Conv2d-482          [-1, 128, 56, 56]         106,496
     BatchNorm2d-483          [-1, 128, 56, 56]             256
            ReLU-484          [-1, 128, 56, 56]               0
          Conv2d-485           [-1, 32, 56, 56]          36,864
AdaptiveAvgPool2d-486             [-1, 32, 1, 1]               0
          Linear-487                    [-1, 2]              64
            ReLU-488                    [-1, 2]               0
          Linear-489                   [-1, 32]              64
         Sigmoid-490                   [-1, 32]               0
        SE_Block-491           [-1, 32, 56, 56]               0
     _DenseLayer-492           [-1, 32, 56, 56]               0
     BatchNorm2d-493          [-1, 864, 56, 56]           1,728
            ReLU-494          [-1, 864, 56, 56]               0
          Conv2d-495          [-1, 128, 56, 56]         110,592
     BatchNorm2d-496          [-1, 128, 56, 56]             256
            ReLU-497          [-1, 128, 56, 56]               0
          Conv2d-498           [-1, 32, 56, 56]          36,864
AdaptiveAvgPool2d-499             [-1, 32, 1, 1]               0
          Linear-500                    [-1, 2]              64
            ReLU-501                    [-1, 2]               0
          Linear-502                   [-1, 32]              64
         Sigmoid-503                   [-1, 32]               0
        SE_Block-504           [-1, 32, 56, 56]               0
     _DenseLayer-505           [-1, 32, 56, 56]               0
     BatchNorm2d-506          [-1, 896, 56, 56]           1,792
            ReLU-507          [-1, 896, 56, 56]               0
          Conv2d-508          [-1, 128, 56, 56]         114,688
     BatchNorm2d-509          [-1, 128, 56, 56]             256
            ReLU-510          [-1, 128, 56, 56]               0
          Conv2d-511           [-1, 32, 56, 56]          36,864
AdaptiveAvgPool2d-512             [-1, 32, 1, 1]               0
          Linear-513                    [-1, 2]              64
            ReLU-514                    [-1, 2]               0
          Linear-515                   [-1, 32]              64
         Sigmoid-516                   [-1, 32]               0
        SE_Block-517           [-1, 32, 56, 56]               0
     _DenseLayer-518           [-1, 32, 56, 56]               0
     BatchNorm2d-519          [-1, 928, 56, 56]           1,856
            ReLU-520          [-1, 928, 56, 56]               0
          Conv2d-521          [-1, 128, 56, 56]         118,784
     BatchNorm2d-522          [-1, 128, 56, 56]             256
            ReLU-523          [-1, 128, 56, 56]               0
          Conv2d-524           [-1, 32, 56, 56]          36,864
AdaptiveAvgPool2d-525             [-1, 32, 1, 1]               0
          Linear-526                    [-1, 2]              64
            ReLU-527                    [-1, 2]               0
          Linear-528                   [-1, 32]              64
         Sigmoid-529                   [-1, 32]               0
        SE_Block-530           [-1, 32, 56, 56]               0
     _DenseLayer-531           [-1, 32, 56, 56]               0
     BatchNorm2d-532          [-1, 960, 56, 56]           1,920
            ReLU-533          [-1, 960, 56, 56]               0
          Conv2d-534          [-1, 128, 56, 56]         122,880
     BatchNorm2d-535          [-1, 128, 56, 56]             256
            ReLU-536          [-1, 128, 56, 56]               0
          Conv2d-537           [-1, 32, 56, 56]          36,864
AdaptiveAvgPool2d-538             [-1, 32, 1, 1]               0
          Linear-539                    [-1, 2]              64
            ReLU-540                    [-1, 2]               0
          Linear-541                   [-1, 32]              64
         Sigmoid-542                   [-1, 32]               0
        SE_Block-543           [-1, 32, 56, 56]               0
     _DenseLayer-544           [-1, 32, 56, 56]               0
     BatchNorm2d-545          [-1, 992, 56, 56]           1,984
            ReLU-546          [-1, 992, 56, 56]               0
          Conv2d-547          [-1, 128, 56, 56]         126,976
     BatchNorm2d-548          [-1, 128, 56, 56]             256
            ReLU-549          [-1, 128, 56, 56]               0
          Conv2d-550           [-1, 32, 56, 56]          36,864
AdaptiveAvgPool2d-551             [-1, 32, 1, 1]               0
          Linear-552                    [-1, 2]              64
            ReLU-553                    [-1, 2]               0
          Linear-554                   [-1, 32]              64
         Sigmoid-555                   [-1, 32]               0
        SE_Block-556           [-1, 32, 56, 56]               0
     _DenseLayer-557           [-1, 32, 56, 56]               0
     _DenseBlock-558         [-1, 1024, 56, 56]               0
     BatchNorm2d-559         [-1, 1024, 56, 56]           2,048
            ReLU-560         [-1, 1024, 56, 56]               0
          Conv2d-561          [-1, 512, 56, 56]         524,288
       AvgPool2d-562          [-1, 512, 28, 28]               0
     BatchNorm2d-563          [-1, 512, 28, 28]           1,024
            ReLU-564          [-1, 512, 28, 28]               0
          Conv2d-565          [-1, 128, 28, 28]          65,536
     BatchNorm2d-566          [-1, 128, 28, 28]             256
            ReLU-567          [-1, 128, 28, 28]               0
          Conv2d-568           [-1, 32, 28, 28]          36,864
AdaptiveAvgPool2d-569             [-1, 32, 1, 1]               0
          Linear-570                    [-1, 2]              64
            ReLU-571                    [-1, 2]               0
          Linear-572                   [-1, 32]              64
         Sigmoid-573                   [-1, 32]               0
        SE_Block-574           [-1, 32, 28, 28]               0
     _DenseLayer-575           [-1, 32, 28, 28]               0
     BatchNorm2d-576          [-1, 544, 28, 28]           1,088
            ReLU-577          [-1, 544, 28, 28]               0
          Conv2d-578          [-1, 128, 28, 28]          69,632
     BatchNorm2d-579          [-1, 128, 28, 28]             256
            ReLU-580          [-1, 128, 28, 28]               0
          Conv2d-581           [-1, 32, 28, 28]          36,864
AdaptiveAvgPool2d-582             [-1, 32, 1, 1]               0
          Linear-583                    [-1, 2]              64
            ReLU-584                    [-1, 2]               0
          Linear-585                   [-1, 32]              64
         Sigmoid-586                   [-1, 32]               0
        SE_Block-587           [-1, 32, 28, 28]               0
     _DenseLayer-588           [-1, 32, 28, 28]               0
     BatchNorm2d-589          [-1, 576, 28, 28]           1,152
            ReLU-590          [-1, 576, 28, 28]               0
          Conv2d-591          [-1, 128, 28, 28]          73,728
     BatchNorm2d-592          [-1, 128, 28, 28]             256
            ReLU-593          [-1, 128, 28, 28]               0
          Conv2d-594           [-1, 32, 28, 28]          36,864
AdaptiveAvgPool2d-595             [-1, 32, 1, 1]               0
          Linear-596                    [-1, 2]              64
            ReLU-597                    [-1, 2]               0
          Linear-598                   [-1, 32]              64
         Sigmoid-599                   [-1, 32]               0
        SE_Block-600           [-1, 32, 28, 28]               0
     _DenseLayer-601           [-1, 32, 28, 28]               0
     BatchNorm2d-602          [-1, 608, 28, 28]           1,216
            ReLU-603          [-1, 608, 28, 28]               0
          Conv2d-604          [-1, 128, 28, 28]          77,824
     BatchNorm2d-605          [-1, 128, 28, 28]             256
            ReLU-606          [-1, 128, 28, 28]               0
          Conv2d-607           [-1, 32, 28, 28]          36,864
AdaptiveAvgPool2d-608             [-1, 32, 1, 1]               0
          Linear-609                    [-1, 2]              64
            ReLU-610                    [-1, 2]               0
          Linear-611                   [-1, 32]              64
         Sigmoid-612                   [-1, 32]               0
        SE_Block-613           [-1, 32, 28, 28]               0
     _DenseLayer-614           [-1, 32, 28, 28]               0
     BatchNorm2d-615          [-1, 640, 28, 28]           1,280
            ReLU-616          [-1, 640, 28, 28]               0
          Conv2d-617          [-1, 128, 28, 28]          81,920
     BatchNorm2d-618          [-1, 128, 28, 28]             256
            ReLU-619          [-1, 128, 28, 28]               0
          Conv2d-620           [-1, 32, 28, 28]          36,864
AdaptiveAvgPool2d-621             [-1, 32, 1, 1]               0
          Linear-622                    [-1, 2]              64
            ReLU-623                    [-1, 2]               0
          Linear-624                   [-1, 32]              64
         Sigmoid-625                   [-1, 32]               0
        SE_Block-626           [-1, 32, 28, 28]               0
     _DenseLayer-627           [-1, 32, 28, 28]               0
     BatchNorm2d-628          [-1, 672, 28, 28]           1,344
            ReLU-629          [-1, 672, 28, 28]               0
          Conv2d-630          [-1, 128, 28, 28]          86,016
     BatchNorm2d-631          [-1, 128, 28, 28]             256
            ReLU-632          [-1, 128, 28, 28]               0
          Conv2d-633           [-1, 32, 28, 28]          36,864
AdaptiveAvgPool2d-634             [-1, 32, 1, 1]               0
          Linear-635                    [-1, 2]              64
            ReLU-636                    [-1, 2]               0
          Linear-637                   [-1, 32]              64
         Sigmoid-638                   [-1, 32]               0
        SE_Block-639           [-1, 32, 28, 28]               0
     _DenseLayer-640           [-1, 32, 28, 28]               0
     BatchNorm2d-641          [-1, 704, 28, 28]           1,408
            ReLU-642          [-1, 704, 28, 28]               0
          Conv2d-643          [-1, 128, 28, 28]          90,112
     BatchNorm2d-644          [-1, 128, 28, 28]             256
            ReLU-645          [-1, 128, 28, 28]               0
          Conv2d-646           [-1, 32, 28, 28]          36,864
AdaptiveAvgPool2d-647             [-1, 32, 1, 1]               0
          Linear-648                    [-1, 2]              64
            ReLU-649                    [-1, 2]               0
          Linear-650                   [-1, 32]              64
         Sigmoid-651                   [-1, 32]               0
        SE_Block-652           [-1, 32, 28, 28]               0
     _DenseLayer-653           [-1, 32, 28, 28]               0
     BatchNorm2d-654          [-1, 736, 28, 28]           1,472
            ReLU-655          [-1, 736, 28, 28]               0
          Conv2d-656          [-1, 128, 28, 28]          94,208
     BatchNorm2d-657          [-1, 128, 28, 28]             256
            ReLU-658          [-1, 128, 28, 28]               0
          Conv2d-659           [-1, 32, 28, 28]          36,864
AdaptiveAvgPool2d-660             [-1, 32, 1, 1]               0
          Linear-661                    [-1, 2]              64
            ReLU-662                    [-1, 2]               0
          Linear-663                   [-1, 32]              64
         Sigmoid-664                   [-1, 32]               0
        SE_Block-665           [-1, 32, 28, 28]               0
     _DenseLayer-666           [-1, 32, 28, 28]               0
     BatchNorm2d-667          [-1, 768, 28, 28]           1,536
            ReLU-668          [-1, 768, 28, 28]               0
          Conv2d-669          [-1, 128, 28, 28]          98,304
     BatchNorm2d-670          [-1, 128, 28, 28]             256
            ReLU-671          [-1, 128, 28, 28]               0
          Conv2d-672           [-1, 32, 28, 28]          36,864
AdaptiveAvgPool2d-673             [-1, 32, 1, 1]               0
          Linear-674                    [-1, 2]              64
            ReLU-675                    [-1, 2]               0
          Linear-676                   [-1, 32]              64
         Sigmoid-677                   [-1, 32]               0
        SE_Block-678           [-1, 32, 28, 28]               0
     _DenseLayer-679           [-1, 32, 28, 28]               0
     BatchNorm2d-680          [-1, 800, 28, 28]           1,600
            ReLU-681          [-1, 800, 28, 28]               0
          Conv2d-682          [-1, 128, 28, 28]         102,400
     BatchNorm2d-683          [-1, 128, 28, 28]             256
            ReLU-684          [-1, 128, 28, 28]               0
          Conv2d-685           [-1, 32, 28, 28]          36,864
AdaptiveAvgPool2d-686             [-1, 32, 1, 1]               0
          Linear-687                    [-1, 2]              64
            ReLU-688                    [-1, 2]               0
          Linear-689                   [-1, 32]              64
         Sigmoid-690                   [-1, 32]               0
        SE_Block-691           [-1, 32, 28, 28]               0
     _DenseLayer-692           [-1, 32, 28, 28]               0
     BatchNorm2d-693          [-1, 832, 28, 28]           1,664
            ReLU-694          [-1, 832, 28, 28]               0
          Conv2d-695          [-1, 128, 28, 28]         106,496
     BatchNorm2d-696          [-1, 128, 28, 28]             256
            ReLU-697          [-1, 128, 28, 28]               0
          Conv2d-698           [-1, 32, 28, 28]          36,864
AdaptiveAvgPool2d-699             [-1, 32, 1, 1]               0
          Linear-700                    [-1, 2]              64
            ReLU-701                    [-1, 2]               0
          Linear-702                   [-1, 32]              64
         Sigmoid-703                   [-1, 32]               0
        SE_Block-704           [-1, 32, 28, 28]               0
     _DenseLayer-705           [-1, 32, 28, 28]               0
     BatchNorm2d-706          [-1, 864, 28, 28]           1,728
            ReLU-707          [-1, 864, 28, 28]               0
          Conv2d-708          [-1, 128, 28, 28]         110,592
     BatchNorm2d-709          [-1, 128, 28, 28]             256
            ReLU-710          [-1, 128, 28, 28]               0
          Conv2d-711           [-1, 32, 28, 28]          36,864
AdaptiveAvgPool2d-712             [-1, 32, 1, 1]               0
          Linear-713                    [-1, 2]              64
            ReLU-714                    [-1, 2]               0
          Linear-715                   [-1, 32]              64
         Sigmoid-716                   [-1, 32]               0
        SE_Block-717           [-1, 32, 28, 28]               0
     _DenseLayer-718           [-1, 32, 28, 28]               0
     BatchNorm2d-719          [-1, 896, 28, 28]           1,792
            ReLU-720          [-1, 896, 28, 28]               0
          Conv2d-721          [-1, 128, 28, 28]         114,688
     BatchNorm2d-722          [-1, 128, 28, 28]             256
            ReLU-723          [-1, 128, 28, 28]               0
          Conv2d-724           [-1, 32, 28, 28]          36,864
AdaptiveAvgPool2d-725             [-1, 32, 1, 1]               0
          Linear-726                    [-1, 2]              64
            ReLU-727                    [-1, 2]               0
          Linear-728                   [-1, 32]              64
         Sigmoid-729                   [-1, 32]               0
        SE_Block-730           [-1, 32, 28, 28]               0
     _DenseLayer-731           [-1, 32, 28, 28]               0
     BatchNorm2d-732          [-1, 928, 28, 28]           1,856
            ReLU-733          [-1, 928, 28, 28]               0
          Conv2d-734          [-1, 128, 28, 28]         118,784
     BatchNorm2d-735          [-1, 128, 28, 28]             256
            ReLU-736          [-1, 128, 28, 28]               0
          Conv2d-737           [-1, 32, 28, 28]          36,864
AdaptiveAvgPool2d-738             [-1, 32, 1, 1]               0
          Linear-739                    [-1, 2]              64
            ReLU-740                    [-1, 2]               0
          Linear-741                   [-1, 32]              64
         Sigmoid-742                   [-1, 32]               0
        SE_Block-743           [-1, 32, 28, 28]               0
     _DenseLayer-744           [-1, 32, 28, 28]               0
     BatchNorm2d-745          [-1, 960, 28, 28]           1,920
            ReLU-746          [-1, 960, 28, 28]               0
          Conv2d-747          [-1, 128, 28, 28]         122,880
     BatchNorm2d-748          [-1, 128, 28, 28]             256
            ReLU-749          [-1, 128, 28, 28]               0
          Conv2d-750           [-1, 32, 28, 28]          36,864
AdaptiveAvgPool2d-751             [-1, 32, 1, 1]               0
          Linear-752                    [-1, 2]              64
            ReLU-753                    [-1, 2]               0
          Linear-754                   [-1, 32]              64
         Sigmoid-755                   [-1, 32]               0
        SE_Block-756           [-1, 32, 28, 28]               0
     _DenseLayer-757           [-1, 32, 28, 28]               0
     BatchNorm2d-758          [-1, 992, 28, 28]           1,984
            ReLU-759          [-1, 992, 28, 28]               0
          Conv2d-760          [-1, 128, 28, 28]         126,976
     BatchNorm2d-761          [-1, 128, 28, 28]             256
            ReLU-762          [-1, 128, 28, 28]               0
          Conv2d-763           [-1, 32, 28, 28]          36,864
AdaptiveAvgPool2d-764             [-1, 32, 1, 1]               0
          Linear-765                    [-1, 2]              64
            ReLU-766                    [-1, 2]               0
          Linear-767                   [-1, 32]              64
         Sigmoid-768                   [-1, 32]               0
        SE_Block-769           [-1, 32, 28, 28]               0
     _DenseLayer-770           [-1, 32, 28, 28]               0
     _DenseBlock-771         [-1, 1024, 28, 28]               0
     BatchNorm2d-772         [-1, 1024, 28, 28]           2,048
          Linear-773                    [-1, 2]           2,050
================================================================
Total params: 6,955,522
Trainable params: 6,955,522
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.57
Forward/backward pass size (MB): 4854.11
Params size (MB): 26.53
Estimated Total Size (MB): 4881.21
----------------------------------------------------------------
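The repeated AdaptiveAvgPool2d → Linear → ReLU → Linear → Sigmoid groups in the summary above are the SE_Block attached inside every _DenseLayer of the PyTorch model. The exact implementation is not shown in this section, but a minimal PyTorch SE block consistent with the printed shapes and parameter counts (32 channels squeezed to 2, i.e. a reduction ratio of 16, with bias-free Linear layers, hence 64 parameters each) could look like this sketch; the class name SE_Block matches the summary, while the body is an assumption:

import torch
import torch.nn as nn

class SE_Block(nn.Module):
    """Squeeze-and-Excitation block: learns per-channel attention weights."""
    def __init__(self, ch_in, reduction=16):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global spatial average per channel
        self.fc = nn.Sequential(
            nn.Linear(ch_in, ch_in // reduction, bias=False),  # excitation: reduce dimensionality
            nn.ReLU(inplace=True),
            nn.Linear(ch_in // reduction, ch_in, bias=False),  # excitation: restore dimensionality
            nn.Sigmoid()                                       # channel weights in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.size()
        y = self.avg_pool(x).view(b, c)    # (b, c, 1, 1) -> (b, c)
        y = self.fc(y).view(b, c, 1, 1)    # per-channel weights
        return x * y.expand_as(x)          # scale: reweight the input channels

With ch_in=32 and reduction=16 this reproduces the 32 → 2 → 32 Linear shapes and the 64-parameter counts seen in the summary.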

4. Training the Model

4.1. Writing the Training Function

# Training loop
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)  # size of the training set
    num_batches = len(dataloader)   # number of batches, i.e. ceil(size / batch_size)

    train_loss, train_acc = 0, 0  # initialize training loss and accuracy

    for X, y in dataloader:  # fetch the images and their labels
        X, y = X.to(device), y.to(device)

        # compute the prediction error
        pred = model(X)          # network output
        loss = loss_fn(pred, y)  # loss between the network output and the ground-truth targets

        # backpropagation
        optimizer.zero_grad()  # reset the gradients
        loss.backward()        # backpropagate
        optimizer.step()       # update the parameters

        # accumulate accuracy and loss
        train_acc += (pred.argmax(1) == y).type(torch.float).sum().item()
        train_loss += loss.item()

    train_acc /= size
    train_loss /= num_batches

    return train_acc, train_loss

4.2. Writing the Test Function

def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)  # size of the test set
    num_batches = len(dataloader)   # number of batches
    test_loss, test_acc = 0, 0

    # disable gradient tracking during evaluation to save memory
    with torch.no_grad():
        for imgs, target in dataloader:
            imgs, target = imgs.to(device), target.to(device)

            # compute the loss
            target_pred = model(imgs)
            loss = loss_fn(target_pred, target)

            test_loss += loss.item()
            test_acc += (target_pred.argmax(1) == target).type(torch.float).sum().item()

    test_acc /= size
    test_loss /= num_batches

    return test_acc, test_loss

4.3. Running the Training

import copy

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()  # create the loss function

epochs = 100

train_loss = []
train_acc = []
test_loss = []
test_acc = []

best_acc = 0  # track the best test accuracy as the criterion for the best model

for epoch in range(epochs):
    # update the learning rate (when using a custom schedule)
    # adjust_learning_rate(optimizer, epoch, learn_rate)

    model.train()
    epoch_train_acc, epoch_train_loss = train(train_dl, model, loss_fn, optimizer)
    # scheduler.step()  # update the learning rate (when using an official scheduler)

    model.eval()
    epoch_test_acc, epoch_test_loss = test(test_dl, model, loss_fn)

    # keep a copy of the best model so far in best_model
    if epoch_test_acc > best_acc:
        best_acc = epoch_test_acc
        best_model = copy.deepcopy(model)

    train_acc.append(epoch_train_acc)
    train_loss.append(epoch_train_loss)
    test_acc.append(epoch_test_acc)
    test_loss.append(epoch_test_loss)

    # read the current learning rate
    lr = optimizer.state_dict()['param_groups'][0]['lr']

    template = ('Epoch:{:2d}, Train_acc:{:.1f}%, Train_loss:{:.3f}, Test_acc:{:.1f}%, Test_loss:{:.3f}, Lr:{:.2E}')
    print(template.format(epoch + 1, epoch_train_acc * 100, epoch_train_loss,
                          epoch_test_acc * 100, epoch_test_loss, lr))

# save the best model's parameters to a file
PATH = './J5_best_model.pth_2'  # filename of the saved weights
torch.save(best_model.state_dict(), PATH)

print('Done')
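The commented-out adjust_learning_rate(optimizer, epoch, learn_rate) call above assumes a user-defined decay schedule that is not shown in this section. A minimal sketch of such a function could look like the following; the decay rule itself (halving the rate every 10 epochs) is an assumption, and the training log below is unaffected since the call stays commented out:

def adjust_learning_rate(optimizer, epoch, start_lr):
    # hypothetical schedule: halve the learning rate every 10 epochs
    lr = start_lr * (0.5 ** (epoch // 10))
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr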

Code output:

Epoch: 1, Train_acc:62.9%, Train_loss:0.661, Test_acc:60.6%, Test_loss:0.733, Lr:1.00E-04
Epoch: 2, Train_acc:66.2%, Train_loss:0.628, Test_acc:58.7%, Test_loss:0.707, Lr:1.00E-04
Epoch: 3, Train_acc:66.9%, Train_loss:0.615, Test_acc:62.2%, Test_loss:0.768, Lr:1.00E-04
Epoch: 4, Train_acc:68.1%, Train_loss:0.617, Test_acc:66.7%, Test_loss:0.624, Lr:1.00E-04
Epoch: 5, Train_acc:68.2%, Train_loss:0.594, Test_acc:67.6%, Test_loss:0.628, Lr:1.00E-04
Epoch: 6, Train_acc:69.9%, Train_loss:0.582, Test_acc:67.4%, Test_loss:0.626, Lr:1.00E-04
Epoch: 7, Train_acc:73.7%, Train_loss:0.547, Test_acc:67.1%, Test_loss:0.608, Lr:1.00E-04
Epoch: 8, Train_acc:74.3%, Train_loss:0.531, Test_acc:66.4%, Test_loss:0.714, Lr:1.00E-04
Epoch: 9, Train_acc:75.0%, Train_loss:0.498, Test_acc:75.1%, Test_loss:0.539, Lr:1.00E-04
Epoch:10, Train_acc:76.1%, Train_loss:0.489, Test_acc:76.7%, Test_loss:0.507, Lr:1.00E-04
Epoch:11, Train_acc:77.7%, Train_loss:0.466, Test_acc:74.8%, Test_loss:0.479, Lr:1.00E-04
Epoch:12, Train_acc:79.7%, Train_loss:0.439, Test_acc:77.6%, Test_loss:0.521, Lr:1.00E-04
Epoch:13, Train_acc:83.5%, Train_loss:0.379, Test_acc:81.1%, Test_loss:0.414, Lr:1.00E-04
Epoch:14, Train_acc:82.5%, Train_loss:0.407, Test_acc:88.3%, Test_loss:0.333, Lr:1.00E-04
Epoch:15, Train_acc:83.5%, Train_loss:0.357, Test_acc:90.0%, Test_loss:0.287, Lr:1.00E-04
Epoch:16, Train_acc:85.9%, Train_loss:0.341, Test_acc:78.8%, Test_loss:0.467, Lr:1.00E-04
Epoch:17, Train_acc:86.3%, Train_loss:0.317, Test_acc:89.0%, Test_loss:0.306, Lr:1.00E-04
Epoch:18, Train_acc:87.6%, Train_loss:0.297, Test_acc:87.2%, Test_loss:0.299, Lr:1.00E-04
Epoch:19, Train_acc:88.4%, Train_loss:0.282, Test_acc:90.4%, Test_loss:0.253, Lr:1.00E-04
Epoch:20, Train_acc:88.1%, Train_loss:0.284, Test_acc:92.3%, Test_loss:0.208, Lr:1.00E-04
Epoch:21, Train_acc:89.4%, Train_loss:0.240, Test_acc:88.1%, Test_loss:0.295, Lr:1.00E-04
Epoch:22, Train_acc:91.1%, Train_loss:0.219, Test_acc:85.1%, Test_loss:0.296, Lr:1.00E-04
Epoch:23, Train_acc:90.5%, Train_loss:0.234, Test_acc:93.9%, Test_loss:0.190, Lr:1.00E-04
Epoch:24, Train_acc:91.4%, Train_loss:0.208, Test_acc:90.7%, Test_loss:0.285, Lr:1.00E-04
Epoch:25, Train_acc:92.3%, Train_loss:0.201, Test_acc:92.1%, Test_loss:0.193, Lr:1.00E-04
Epoch:26, Train_acc:92.2%, Train_loss:0.189, Test_acc:92.3%, Test_loss:0.171, Lr:1.00E-04
Epoch:27, Train_acc:92.5%, Train_loss:0.194, Test_acc:91.8%, Test_loss:0.185, Lr:1.00E-04
Epoch:28, Train_acc:93.0%, Train_loss:0.179, Test_acc:93.0%, Test_loss:0.214, Lr:1.00E-04
Epoch:29, Train_acc:93.2%, Train_loss:0.183, Test_acc:92.1%, Test_loss:0.203, Lr:1.00E-04
Epoch:30, Train_acc:93.5%, Train_loss:0.169, Test_acc:90.9%, Test_loss:0.205, Lr:1.00E-04
Epoch:31, Train_acc:94.0%, Train_loss:0.165, Test_acc:93.7%, Test_loss:0.197, Lr:1.00E-04
Epoch:32, Train_acc:93.9%, Train_loss:0.165, Test_acc:94.4%, Test_loss:0.152, Lr:1.00E-04
Epoch:33, Train_acc:93.9%, Train_loss:0.163, Test_acc:91.8%, Test_loss:0.229, Lr:1.00E-04
Epoch:34, Train_acc:95.6%, Train_loss:0.133, Test_acc:91.1%, Test_loss:0.266, Lr:1.00E-04
Epoch:35, Train_acc:95.2%, Train_loss:0.137, Test_acc:93.7%, Test_loss:0.130, Lr:1.00E-04
Epoch:36, Train_acc:95.0%, Train_loss:0.142, Test_acc:95.8%, Test_loss:0.123, Lr:1.00E-04
Epoch:37, Train_acc:93.5%, Train_loss:0.161, Test_acc:94.6%, Test_loss:0.133, Lr:1.00E-04
Epoch:38, Train_acc:95.7%, Train_loss:0.120, Test_acc:96.0%, Test_loss:0.109, Lr:1.00E-04
Epoch:39, Train_acc:95.6%, Train_loss:0.121, Test_acc:94.2%, Test_loss:0.161, Lr:1.00E-04
Epoch:40, Train_acc:95.9%, Train_loss:0.110, Test_acc:89.7%, Test_loss:0.287, Lr:1.00E-04
Epoch:41, Train_acc:95.0%, Train_loss:0.135, Test_acc:96.3%, Test_loss:0.109, Lr:1.00E-04
Epoch:42, Train_acc:97.0%, Train_loss:0.094, Test_acc:93.7%, Test_loss:0.185, Lr:1.00E-04
Epoch:43, Train_acc:97.0%, Train_loss:0.089, Test_acc:90.9%, Test_loss:0.284, Lr:1.00E-04
Epoch:44, Train_acc:96.4%, Train_loss:0.101, Test_acc:91.6%, Test_loss:0.263, Lr:1.00E-04
Epoch:45, Train_acc:97.0%, Train_loss:0.083, Test_acc:91.4%, Test_loss:0.260, Lr:1.00E-04
Epoch:46, Train_acc:96.1%, Train_loss:0.105, Test_acc:95.6%, Test_loss:0.122, Lr:1.00E-04
Epoch:47, Train_acc:96.7%, Train_loss:0.101, Test_acc:92.8%, Test_loss:0.214, Lr:1.00E-04
Epoch:48, Train_acc:96.0%, Train_loss:0.112, Test_acc:93.7%, Test_loss:0.248, Lr:1.00E-04
Epoch:49, Train_acc:97.0%, Train_loss:0.086, Test_acc:94.6%, Test_loss:0.197, Lr:1.00E-04
Epoch:50, Train_acc:97.4%, Train_loss:0.071, Test_acc:96.7%, Test_loss:0.091, Lr:1.00E-04
Epoch:51, Train_acc:96.4%, Train_loss:0.095, Test_acc:94.9%, Test_loss:0.149, Lr:1.00E-04
Epoch:52, Train_acc:97.4%, Train_loss:0.081, Test_acc:95.3%, Test_loss:0.095, Lr:1.00E-04
Epoch:53, Train_acc:97.5%, Train_loss:0.073, Test_acc:95.6%, Test_loss:0.141, Lr:1.00E-04
Epoch:54, Train_acc:98.2%, Train_loss:0.058, Test_acc:96.5%, Test_loss:0.105, Lr:1.00E-04
Epoch:55, Train_acc:97.4%, Train_loss:0.074, Test_acc:92.1%, Test_loss:0.290, Lr:1.00E-04
Epoch:56, Train_acc:97.5%, Train_loss:0.069, Test_acc:94.9%, Test_loss:0.146, Lr:1.00E-04
Epoch:57, Train_acc:98.4%, Train_loss:0.050, Test_acc:95.3%, Test_loss:0.120, Lr:1.00E-04
Epoch:58, Train_acc:97.7%, Train_loss:0.066, Test_acc:95.3%, Test_loss:0.202, Lr:1.00E-04
Epoch:59, Train_acc:97.7%, Train_loss:0.065, Test_acc:95.3%, Test_loss:0.135, Lr:1.00E-04
Epoch:60, Train_acc:98.0%, Train_loss:0.057, Test_acc:96.0%, Test_loss:0.128, Lr:1.00E-04
Epoch:61, Train_acc:97.9%, Train_loss:0.068, Test_acc:94.6%, Test_loss:0.137, Lr:1.00E-04
Epoch:62, Train_acc:98.0%, Train_loss:0.067, Test_acc:95.6%, Test_loss:0.129, Lr:1.00E-04
Epoch:63, Train_acc:98.5%, Train_loss:0.042, Test_acc:97.4%, Test_loss:0.099, Lr:1.00E-04
Epoch:64, Train_acc:98.8%, Train_loss:0.048, Test_acc:96.0%, Test_loss:0.163, Lr:1.00E-04
Epoch:65, Train_acc:97.7%, Train_loss:0.062, Test_acc:93.0%, Test_loss:0.177, Lr:1.00E-04
Epoch:66, Train_acc:98.3%, Train_loss:0.063, Test_acc:96.7%, Test_loss:0.110, Lr:1.00E-04
Epoch:67, Train_acc:98.7%, Train_loss:0.040, Test_acc:95.3%, Test_loss:0.207, Lr:1.00E-04
Epoch:68, Train_acc:99.2%, Train_loss:0.025, Test_acc:97.0%, Test_loss:0.099, Lr:1.00E-04
Epoch:69, Train_acc:97.5%, Train_loss:0.066, Test_acc:95.8%, Test_loss:0.118, Lr:1.00E-04
Epoch:70, Train_acc:98.0%, Train_loss:0.056, Test_acc:97.2%, Test_loss:0.136, Lr:1.00E-04
Epoch:71, Train_acc:98.3%, Train_loss:0.051, Test_acc:96.7%, Test_loss:0.069, Lr:1.00E-04
Epoch:72, Train_acc:99.1%, Train_loss:0.039, Test_acc:95.3%, Test_loss:0.143, Lr:1.00E-04
Epoch:73, Train_acc:98.5%, Train_loss:0.049, Test_acc:95.6%, Test_loss:0.155, Lr:1.00E-04
Epoch:74, Train_acc:98.1%, Train_loss:0.052, Test_acc:96.5%, Test_loss:0.079, Lr:1.00E-04
Epoch:75, Train_acc:97.5%, Train_loss:0.066, Test_acc:94.6%, Test_loss:0.155, Lr:1.00E-04
Epoch:76, Train_acc:98.8%, Train_loss:0.035, Test_acc:96.0%, Test_loss:0.117, Lr:1.00E-04
Epoch:77, Train_acc:98.4%, Train_loss:0.045, Test_acc:93.0%, Test_loss:0.180, Lr:1.00E-04
Epoch:78, Train_acc:98.6%, Train_loss:0.049, Test_acc:97.2%, Test_loss:0.128, Lr:1.00E-04
Epoch:79, Train_acc:98.0%, Train_loss:0.052, Test_acc:98.1%, Test_loss:0.063, Lr:1.00E-04
Epoch:80, Train_acc:98.5%, Train_loss:0.048, Test_acc:96.7%, Test_loss:0.123, Lr:1.00E-04
Epoch:81, Train_acc:99.2%, Train_loss:0.035, Test_acc:97.4%, Test_loss:0.100, Lr:1.00E-04
Epoch:82, Train_acc:98.4%, Train_loss:0.045, Test_acc:95.3%, Test_loss:0.154, Lr:1.00E-04
Epoch:83, Train_acc:97.4%, Train_loss:0.057, Test_acc:96.5%, Test_loss:0.151, Lr:1.00E-04
Epoch:84, Train_acc:98.4%, Train_loss:0.047, Test_acc:95.8%, Test_loss:0.112, Lr:1.00E-04
Epoch:85, Train_acc:98.8%, Train_loss:0.040, Test_acc:96.0%, Test_loss:0.152, Lr:1.00E-04
Epoch:86, Train_acc:99.5%, Train_loss:0.015, Test_acc:96.0%, Test_loss:0.157, Lr:1.00E-04
Epoch:87, Train_acc:99.0%, Train_loss:0.036, Test_acc:93.0%, Test_loss:0.183, Lr:1.00E-04
Epoch:88, Train_acc:98.6%, Train_loss:0.042, Test_acc:95.3%, Test_loss:0.122, Lr:1.00E-04
Epoch:89, Train_acc:99.2%, Train_loss:0.024, Test_acc:95.1%, Test_loss:0.157, Lr:1.00E-04
Epoch:90, Train_acc:98.5%, Train_loss:0.049, Test_acc:93.9%, Test_loss:0.162, Lr:1.00E-04
Epoch:91, Train_acc:98.5%, Train_loss:0.040, Test_acc:97.2%, Test_loss:0.085, Lr:1.00E-04
Epoch:92, Train_acc:98.5%, Train_loss:0.051, Test_acc:97.2%, Test_loss:0.061, Lr:1.00E-04
Epoch:93, Train_acc:99.0%, Train_loss:0.030, Test_acc:97.7%, Test_loss:0.074, Lr:1.00E-04
Epoch:94, Train_acc:99.4%, Train_loss:0.017, Test_acc:96.0%, Test_loss:0.121, Lr:1.00E-04
Epoch:95, Train_acc:98.9%, Train_loss:0.038, Test_acc:97.9%, Test_loss:0.077, Lr:1.00E-04
Epoch:96, Train_acc:98.7%, Train_loss:0.037, Test_acc:96.5%, Test_loss:0.111, Lr:1.00E-04
Epoch:97, Train_acc:99.3%, Train_loss:0.024, Test_acc:96.7%, Test_loss:0.151, Lr:1.00E-04
Epoch:98, Train_acc:98.8%, Train_loss:0.030, Test_acc:96.0%, Test_loss:0.182, Lr:1.00E-04
Epoch:99, Train_acc:97.8%, Train_loss:0.062, Test_acc:93.7%, Test_loss:0.190, Lr:1.00E-04
Epoch:100, Train_acc:98.4%, Train_loss:0.061, Test_acc:96.7%, Test_loss:0.077, Lr:1.00E-04
Done
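From the log above, the highest test accuracy is 98.1% (epoch 79), which is the checkpoint captured in best_model and written to ./J5_best_model.pth_2.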

5. Visualizing the Results

5.1. Loss & Accuracy

import matplotlib.pyplot as plt
import warnings

warnings.filterwarnings("ignore")  # suppress warning messages
plt.rcParams['font.sans-serif'] = ['SimHei']  # render Chinese labels correctly
plt.rcParams['axes.unicode_minus'] = False    # render minus signs correctly
plt.rcParams['figure.dpi'] = 100              # figure resolution

epochs_range = range(epochs)

plt.figure(figsize=(12, 3))
plt.subplot(1, 2, 1)

plt.plot(epochs_range, train_acc, label='Training Accuracy')
plt.plot(epochs_range, test_acc, label='Test Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_loss, label='Training Loss')
plt.plot(epochs_range, test_loss, label='Test Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

Code output:
[Figure: training/validation accuracy and loss curves]

5.2. Predicting a Specified Image

from PIL import Image

classes = list(total_data.class_to_idx)


def predict_one_image(image_path, model, transform, classes):
    test_img = Image.open(image_path).convert('RGB')
    plt.imshow(test_img)  # display the image being predicted

    test_img = transform(test_img)
    img = test_img.to(device).unsqueeze(0)

    model.eval()
    with torch.no_grad():
        output = model(img)

    _, pred = torch.max(output, 1)
    pred_class = classes[pred.item()]
    print(f'Predicted class: {pred_class}')


# predict one image from the training set
predict_one_image(image_path='./J5/Monkeypox/M01_01_00.jpg',
                  model=model,
                  transform=train_transforms,
                  classes=classes)

Code output:

Predicted class: Monkeypox

[Figure: the image being predicted]
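The prediction above uses the final-epoch model object. Since the training loop saved the best-performing weights to ./J5_best_model.pth_2, the same prediction can be run with the best checkpoint instead; a brief sketch, assuming the same model architecture object is still in memory:

# reload the best checkpoint saved during training and predict with it
best_weights = torch.load('./J5_best_model.pth_2', map_location=device)
model.load_state_dict(best_weights)
model.eval()

predict_one_image(image_path='./J5/Monkeypox/M01_01_00.jpg',
                  model=model,
                  transform=train_transforms,
                  classes=classes)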
