Week J5: DenseNet + SE-Net in Practice

  • This article is a learning log from the 🔗365天深度学习训练营 (365-Day Deep Learning Training Camp)
  • Original author: K同学啊

Tasks:
●1. Insert the SE-Net channel attention mechanism into a DenseNet-family network and complete the monkeypox recognition task
●2. Can this improvement idea be transferred elsewhere?
●3. Reach 89% accuracy on the test set (stretch goal, optional)

I. Introduction

Reference paper: "Squeeze-and-Excitation Networks"

SE-Net, published by the WMW team, was the champion model of ImageNet 2017 (the final edition of the ImageNet competition). It has the advantages of low complexity, few extra parameters, and low computational cost. Moreover, the idea behind SENet is simple and extends easily to existing network structures such as Inception and ResNet.

Much prior work improves network performance along the spatial dimension (Inception being one example), whereas SENet focuses on the relationships between feature channels. Its strategy: learn the importance of each feature channel automatically, then use these learned importance scores to boost the useful features and suppress the features that contribute little to the task at hand. This is called the "feature recalibration" strategy. The SE module is illustrated below:

[Figure: the SE module (Squeeze, Excitation, and Scale operations)]
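In formulas (restating the paper's notation, where $\delta$ is the ReLU, $\sigma$ the sigmoid, and $r$ the reduction ratio), the three steps for an input feature map $X$ with $C$ channels of spatial size $H \times W$ are:

$$z_c = F_{sq}(x_c) = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} x_c(i, j) \qquad \text{(Squeeze)}$$

$$s = F_{ex}(z, W) = \sigma\big(W_2\,\delta(W_1 z)\big), \qquad W_1 \in \mathbb{R}^{\frac{C}{r} \times C},\; W_2 \in \mathbb{R}^{C \times \frac{C}{r}} \qquad \text{(Excitation)}$$

$$\tilde{x}_c = F_{scale}(x_c, s_c) = s_c \, x_c \qquad \text{(Scale)}$$

The reduction ratio $r$ controls the size of the bottleneck, and hence the extra parameter cost of the module.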

II. Applying the SE Module

The flexibility of the SE module lies in the fact that it can be plugged directly into existing network structures. Taking Inception and ResNet as examples, we only need to append an SE module after each Inception module or Residual module, as shown below:

[Figure: SE module embedded into the Inception module and the Residual module]

The figure shows the SE module embedded into the Inception structure and into ResNet. The dimension annotations next to each box give that layer's output, and r denotes the reduction ratio used in the Excitation operation.
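As a concrete illustration of the ResNet case, here is a minimal PyTorch sketch (the class name SEBasicBlock and its layout are my own, not code from the paper) in which the SE reweighting is applied to the residual branch before the identity shortcut is added:

import torch
import torch.nn as nn

class SEBasicBlock(nn.Module):
    """Hypothetical SE-ResNet basic block (illustrative sketch, not the paper's exact code)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)
        # SE path: squeeze (global average pool), bottleneck FC pair, sigmoid gate
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        w = self.se(out).view(out.size(0), -1, 1, 1)  # per-channel weights in (0, 1)
        out = out * w                                 # recalibrate the residual branch
        return self.relu(out + x)                     # then add the identity shortcut

block = SEBasicBlock(64)
print(block(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])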

III. SE Module: Performance Comparison

The SE module is easy to embed in other networks. To verify its effect, SE modules were introduced into popular networks such as ResNet and Inception and evaluated on ImageNet, as shown in the table below:

[Table: ImageNet results of ResNet-50/101/152 and their SE variants (Original / Our re-implementation / SE-module)]

First, consider the effect of network depth on SE. The table shows results for ResNet-50, ResNet-101, and ResNet-152, with and without embedded SE modules. The first column, Original, gives the results reported by the original authors; for a fair comparison, the experiments were re-run to obtain the Our re-implementation column. The last column, SE-module, gives the results with SE modules embedded, trained with the same settings as Our re-implementation. The red values in parentheses indicate the accuracy gain over Our re-implementation.

The table shows that SE-ResNets clearly outperform their plain counterparts at every depth, meaning the SE module brings a performance gain regardless of network depth. Notably, SE-ResNet-50 matches the accuracy of ResNet-101, and SE-ResNet-101 comfortably surpasses the even deeper ResNet-152.

[Figure: ImageNet training curves of ResNet-50, ResNet-152, SE-ResNet-50, and SE-ResNet-152]

The figure shows the ImageNet training curves of ResNet-50 and ResNet-152 alongside their SE-augmented counterparts; the networks with SE modules clearly converge to lower error rates.

IV. SE Module: Code Implementation

import numpy as np
import tensorflow as tf

class Squeeze_excitation_layer(tf.keras.Model):
    def __init__(self, filter_sq):
        # filter_sq is the number of units in the first Dense layer of the Excitation step
        # (here an absolute bottleneck width rather than a ratio)
        super().__init__()
        self.filter_sq = filter_sq
        self.avepool = tf.keras.layers.GlobalAveragePooling2D()
        self.dense = tf.keras.layers.Dense(filter_sq)
        self.relu = tf.keras.layers.Activation('relu')
        self.sigmoid = tf.keras.layers.Activation('sigmoid')

    def build(self, input_shape):
        # Create the second Dense layer once the input channel count is known,
        # so its weights are tracked and reused across calls (creating it inside
        # call() would re-initialize it on every forward pass)
        channels = input_shape[-1]
        self.dense2 = tf.keras.layers.Dense(channels)
        self.reshape = tf.keras.layers.Reshape((1, 1, channels))
        super().build(input_shape)

    def call(self, inputs):
        squeeze = self.avepool(inputs)        # Squeeze: global spatial average per channel
        excitation = self.dense(squeeze)      # Excitation: bottleneck FC
        excitation = self.relu(excitation)
        excitation = self.dense2(excitation)  # restore the channel dimension
        excitation = self.sigmoid(excitation) # per-channel weights in (0, 1)
        excitation = self.reshape(excitation)
        scale = inputs * excitation           # reweight each channel
        return scale

SE = Squeeze_excitation_layer(16)
inputs = np.zeros((1, 32, 32, 32), dtype=np.float32)
SE(inputs).shape

Code output:

TensorShape([1, 32, 32, 32])

V. Inserting the SE Module into DenseNet

from tensorflow.keras.models import Model

def DenseNet(blocks, input_shape=[224, 224, 3], classes=3, **kwargs):
    # ... stem omitted: img_input, bn_axis, dense_block and transition_block
    #     are defined earlier; x enters here with shape 56,56,64 ...

    # 56,56,64 -> 56,56,64+32*blocks[0]
    # DenseNet121: 56,56,64 -> 56,56,64+32*6 == 56,56,256
    x = dense_block(x, blocks[0], name='conv2')

    # 56,56,64+32*blocks[0] -> 28,28,32+16*blocks[0]
    # DenseNet121: 56,56,256 -> 28,28,32+16*6 == 28,28,128
    x = transition_block(x, 0.5, name='pool2')

    # 28,28,32+16*blocks[0] -> 28,28,32+16*blocks[0]+32*blocks[1]
    # DenseNet121: 28,28,128 -> 28,28,128+32*12 == 28,28,512
    x = dense_block(x, blocks[1], name='conv3')

    # DenseNet121: 28,28,512 -> 14,14,256
    x = transition_block(x, 0.5, name='pool3')

    # DenseNet121: 14,14,256 -> 14,14,256+32*blocks[2] == 14,14,1024
    x = dense_block(x, blocks[2], name='conv4')

    # DenseNet121: 14,14,1024 -> 7,7,512
    x = transition_block(x, 0.5, name='pool4')

    # DenseNet121: 7,7,512 -> 7,7,512+32*blocks[3] == 7,7,1024
    x = dense_block(x, blocks[3], name='conv5')

    # Insert the SE channel attention module
    x = Squeeze_excitation_layer(16)(x)

    x = layers.BatchNormalization(axis=bn_axis, epsilon=1.001e-5, name='bn')(x)
    x = layers.Activation('relu', name='relu')(x)
    x = layers.GlobalAveragePooling2D(name='avg_pool')(x)
    x = layers.Dense(classes, activation='softmax', name='fc1000')(x)

    inputs = img_input
    if blocks == [6, 12, 24, 16]:
        model = Model(inputs, x, name='densenet121')
    elif blocks == [6, 12, 32, 32]:
        model = Model(inputs, x, name='densenet169')
    elif blocks == [6, 12, 48, 32]:
        model = Model(inputs, x, name='densenet201')
    else:
        model = Model(inputs, x, name='densenet')
    return model

def DenseNet121(input_shape=[224,224,3], classes=3, **kwargs):
    return DenseNet([6, 12, 24, 16], input_shape, classes, **kwargs)

def DenseNet169(input_shape=[224,224,3], classes=3, **kwargs):
    return DenseNet([6, 12, 32, 32], input_shape, classes, **kwargs)

def DenseNet201(input_shape=[224,224,3], classes=3, **kwargs):
    return DenseNet([6, 12, 48, 32], input_shape, classes, **kwargs)
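Assuming the omitted dense_block / transition_block helpers (e.g. from the Keras DenseNet source) are in scope, the SE-augmented model can then be built and inspected as usual:

# Illustrative usage, under the assumption that the helpers above are defined
model = DenseNet121(input_shape=[224, 224, 3], classes=2)  # e.g. 2 classes for Monkeypox vs. Others
model.summary()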

VI. Full Implementation in PyTorch

1. Basic Configuration

  • OS: Ubuntu 22.04
  • GPU: RTX 3090 (24GB) × 1
  • Language environment: Python 3.12.3
  • Editor: Jupyter Notebook
  • Deep learning environment: torch 2.3.0+cu121, torchvision 0.18.0+cu121

2. Preparation

2.1 Set up GPU/CPU

import os, PIL, random, pathlib
import torch
import torch.nn as nn
import torchvision
from torchvision import transforms, datasets

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)

Code output:

cuda

2.2 Import the data

data_dir = './J5'
data_dir = pathlib.Path(data_dir)

data_paths = list(data_dir.glob('*'))
classeNames = [str(path).split("/")[1] for path in data_paths]
print(classeNames)

image_count = len(list(data_dir.glob('*/*')))
print("Total number of images:", image_count)

Code output:

['Monkeypox', 'Others']
Total number of images: 2142

2.3 Data preprocessing

train_transforms = transforms.Compose([
    transforms.Resize([224, 224]),        # resize input images to a uniform size
    # transforms.RandomHorizontalFlip(),  # random horizontal flip (disabled)
    transforms.ToTensor(),                # convert a PIL Image or numpy.ndarray to a tensor scaled to [0, 1]
    transforms.Normalize(                 # standardize towards a normal (Gaussian) distribution so the model converges more easily
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225])        # mean/std computed from random samples of the dataset (the standard ImageNet statistics)
])

test_transform = transforms.Compose([
    transforms.Resize([224, 224]),
    transforms.ToTensor(),
    transforms.Normalize(
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225])
])

# Note: with augmentation disabled, train_transforms and test_transform are
# identical, so applying train_transforms to the full dataset (split later)
# is harmless here.
total_data = datasets.ImageFolder("./J5/", transform=train_transforms)
print(total_data.class_to_idx)

Code output:

{'Monkeypox': 0, 'Others': 1}

2.4 Split the dataset

train_size = int(0.8 * len(total_data))
test_size = len(total_data) - train_size
train_dataset, test_dataset = torch.utils.data.random_split(total_data, [train_size, test_size])

batch_size = 8  # kept small here because of GPU memory constraints

train_dl = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True, num_workers=0)
test_dl = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=True, num_workers=0)

for X, y in test_dl:
    print("Shape of X [N, C, H, W]: ", X.shape)
    print("Shape of y: ", y.shape, y.dtype)
    break

Code output:

Shape of X [N, C, H, W]:  torch.Size([8, 3, 224, 224])
Shape of y:  torch.Size([8]) torch.int64
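Note that random_split draws a fresh random permutation on every run. If a reproducible train/test split is wanted (an optional tweak, not part of the original code), a seeded generator can be passed in:

# Optional: fix the RNG used by random_split for a reproducible split
g = torch.Generator().manual_seed(42)  # 42 is an arbitrary example seed
train_dataset, test_dataset = torch.utils.data.random_split(
    total_data, [train_size, test_size], generator=g)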

3. Build the Model

3.1 Model definition

from collections import OrderedDict
import torch.utils.checkpoint as cp
import torch
import torch.nn as nn
import torch.nn.functional as F


def _bn_function_factory(norm, relu, conv):
    def bn_function(*inputs):
        concated_features = torch.cat(inputs, 1)
        bottleneck_output = conv(relu(norm(concated_features)))
        return bottleneck_output

    return bn_function


class _DenseLayer(nn.Module):
    def __init__(self, num_input_features, growth_rate, bn_size, drop_rate, efficient=False):
        super(_DenseLayer, self).__init__()
        self.add_module('norm1', nn.BatchNorm2d(num_input_features))
        self.add_module('relu1', nn.ReLU(inplace=True))
        self.add_module('conv1', nn.Conv2d(num_input_features, bn_size * growth_rate,
                                           kernel_size=1, stride=1, bias=False))
        self.add_module('norm2', nn.BatchNorm2d(bn_size * growth_rate))
        self.add_module('relu2', nn.ReLU(inplace=True))
        self.add_module('conv2', nn.Conv2d(bn_size * growth_rate, growth_rate,
                                           kernel_size=3, stride=1, padding=1, bias=False))
        # SE channel attention applied to each layer's newly produced features
        self.add_module('SE_Block', SE_Block(growth_rate, reduction=16))
        self.drop_rate = drop_rate
        self.efficient = efficient

    def forward(self, *prev_features):
        bn_function = _bn_function_factory(self.norm1, self.relu1, self.conv1)
        if self.efficient and any(prev_feature.requires_grad for prev_feature in prev_features):
            # Gradient checkpointing trades compute for memory
            bottleneck_output = cp.checkpoint(bn_function, *prev_features)
        else:
            bottleneck_output = bn_function(*prev_features)
        new_features = self.SE_Block(self.conv2(self.relu2(self.norm2(bottleneck_output))))
        if self.drop_rate > 0:
            new_features = F.dropout(new_features, p=self.drop_rate, training=self.training)
        return new_features


class _Transition(nn.Sequential):
    def __init__(self, num_input_features, num_output_features):
        super(_Transition, self).__init__()
        self.add_module('norm', nn.BatchNorm2d(num_input_features))
        self.add_module('relu', nn.ReLU(inplace=True))
        self.add_module('conv', nn.Conv2d(num_input_features, num_output_features,
                                          kernel_size=1, stride=1, bias=False))
        self.add_module('pool', nn.AvgPool2d(kernel_size=2, stride=2))


class _DenseBlock(nn.Module):
    def __init__(self, num_layers, num_input_features, bn_size, growth_rate, drop_rate, efficient=False):
        super(_DenseBlock, self).__init__()
        for i in range(num_layers):
            layer = _DenseLayer(
                num_input_features + i * growth_rate,
                growth_rate=growth_rate,
                bn_size=bn_size,
                drop_rate=drop_rate,
                efficient=efficient,
            )
            self.add_module('denselayer%d' % (i + 1), layer)

    def forward(self, init_features):
        features = [init_features]
        for name, layer in self.named_children():
            new_features = layer(*features)
            features.append(new_features)
        return torch.cat(features, 1)


class SE_Block(nn.Module):
    def __init__(self, ch_in, reduction=16):
        super(SE_Block, self).__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)  # global adaptive average pooling
        self.fc = nn.Sequential(
            nn.Linear(ch_in, ch_in // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(ch_in // reduction, ch_in, bias=False),
            nn.Sigmoid()
        )

    def forward(self, x):
        b, c, _, _ = x.size()
        y = self.avg_pool(x).view(b, c)  # squeeze
        y = self.fc(y).view(b, c, 1, 1)  # excitation: FC layers yield channel weights carrying global information
        return x * y.expand_as(x)        # apply the attention weight to every channel


class DenseNet(nn.Module):
    def __init__(self, growth_rate, block_config, num_init_features=24, compression=0.5, bn_size=4, drop_rate=0,
                 num_classes=10, small_inputs=True, efficient=False):
        super(DenseNet, self).__init__()
        assert 0 < compression <= 1, 'compression of densenet should be between 0 and 1'

        # First convolution
        if small_inputs:
            self.features = nn.Sequential(OrderedDict([
                ('conv0', nn.Conv2d(3, num_init_features, kernel_size=3, stride=1, padding=1, bias=False)),
            ]))
        else:
            self.features = nn.Sequential(OrderedDict([
                ('conv0', nn.Conv2d(3, num_init_features, kernel_size=7, stride=2, padding=3, bias=False)),
            ]))
            self.features.add_module('norm0', nn.BatchNorm2d(num_init_features))
            self.features.add_module('relu0', nn.ReLU(inplace=True))
            self.features.add_module('pool0', nn.MaxPool2d(kernel_size=3, stride=2, padding=1,
                                                           ceil_mode=False))

        # Each dense block
        num_features = num_init_features
        for i, num_layers in enumerate(block_config):
            block = _DenseBlock(
                num_layers=num_layers,
                num_input_features=num_features,
                bn_size=bn_size,
                growth_rate=growth_rate,
                drop_rate=drop_rate,
                efficient=efficient,
            )
            self.features.add_module('denseblock%d' % (i + 1), block)
            num_features = num_features + num_layers * growth_rate
            if i != len(block_config) - 1:
                trans = _Transition(num_input_features=num_features,
                                    num_output_features=int(num_features * compression))
                self.features.add_module('transition%d' % (i + 1), trans)
                num_features = int(num_features * compression)
                # self.features.add_module('SE_Block%d' % (i + 1), SE_Block(num_features, reduction=16))

        # Final batch norm
        self.features.add_module('norm_final', nn.BatchNorm2d(num_features))

        # Linear layer
        self.classifier = nn.Linear(num_features, num_classes)

    def forward(self, x):
        features = self.features(x)
        out = F.relu(features, inplace=True)
        out = F.adaptive_avg_pool2d(out, (1, 1))
        out = torch.flatten(out, 1)
        out = self.classifier(out)
        return out
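A minimal sanity check of SE_Block on its own (my own test snippet, assuming the classes above have been defined) confirms it preserves the tensor shape while reweighting channels:

se = SE_Block(ch_in=32, reduction=16)
feat = torch.randn(2, 32, 56, 56)
print(se(feat).shape)  # torch.Size([2, 32, 56, 56]): same shape, channels scaled by weights in (0, 1)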

3.2 Model instantiation

def DenseNet121_4class():
    return DenseNet(growth_rate=32, block_config=(6, 12, 24, 16), compression=0.5,
                    num_init_features=64, bn_size=4, drop_rate=0.2,
                    num_classes=2,  # two classes here: Monkeypox / Others
                    efficient=True)

# Instantiate the modified DenseNet and run a forward pass
model = DenseNet121_4class()

3.3 Inspect the model

x = torch.randn(2, 3, 224, 224)
out = model(x)
print('out.shape:',out.shape)
print(out)

model.to(device)

# Count the model's parameters and other statistics
import torchsummary as summary
summary.summary(model, (3, 224, 224))

Code output:

out.shape: torch.Size([2, 2])
tensor([[-0.1052, -0.1803],
        [-0.1191, -0.1518]], grad_fn=<AddmmBackward0>)
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1         [-1, 64, 224, 224]           1,728
       BatchNorm2d-2         [-1, 64, 224, 224]             128
              ReLU-3         [-1, 64, 224, 224]               0
            Conv2d-4        [-1, 128, 224, 224]           8,192
       BatchNorm2d-5        [-1, 128, 224, 224]             256
              ReLU-6        [-1, 128, 224, 224]               0
            Conv2d-7         [-1, 32, 224, 224]          36,864
 AdaptiveAvgPool2d-8             [-1, 32, 1, 1]               0
            Linear-9                    [-1, 2]              64
             ReLU-10                    [-1, 2]               0
           Linear-11                   [-1, 32]              64
          Sigmoid-12                   [-1, 32]               0
         SE_Block-13         [-1, 32, 224, 224]               0
      _DenseLayer-14         [-1, 32, 224, 224]               0
               ...                        ...                ...
   (the remaining dense layers, SE blocks and transition layers
    repeat the same pattern and are omitted here for brevity)
               ...                        ...                ...
     _DenseBlock-771         [-1, 1024, 28, 28]               0
     BatchNorm2d-772         [-1, 1024, 28, 28]           2,048
          Linear-773                    [-1, 2]           2,050
================================================================
Total params: 6,955,522
Trainable params: 6,955,522
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.57
Forward/backward pass size (MB): 4854.11
Params size (MB): 26.53
Estimated Total Size (MB): 4881.21
----------------------------------------------------------------
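A note on this summary: the forward/backward pass needs close to 4.8 GB because the model was built with the default small_inputs=True, so the 3×3 stem keeps the full 224×224 resolution through the first dense block. If memory is tight, one option (my suggestion, not part of the original code) is to build the model with small_inputs=False, which enables the 7×7 stride-2 convolution plus max-pooling stem defined in the else branch above:

# Hypothetical variant: use the downsampling stem for 224x224 inputs
model_big_stem = DenseNet(growth_rate=32, block_config=(6, 12, 24, 16), compression=0.5,
                          num_init_features=64, bn_size=4, drop_rate=0.2,
                          num_classes=2, small_inputs=False, efficient=True)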

4. Train the Model

4.1 Training function

# Training loop
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)  # size of the training set
    num_batches = len(dataloader)   # number of batches (size / batch_size, rounded up)

    train_loss, train_acc = 0, 0    # initialize training loss and accuracy

    for X, y in dataloader:         # fetch images and their labels
        X, y = X.to(device), y.to(device)

        # Compute the prediction error
        pred = model(X)             # network output
        loss = loss_fn(pred, y)     # loss between the network output and the ground-truth labels

        # Backpropagation
        optimizer.zero_grad()       # reset the gradients
        loss.backward()             # backpropagate
        optimizer.step()            # update the parameters

        # Accumulate accuracy and loss
        train_acc += (pred.argmax(1) == y).type(torch.float).sum().item()
        train_loss += loss.item()

    train_acc /= size
    train_loss /= num_batches

    return train_acc, train_loss

4.2 Test function

def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)  # size of the test set
    num_batches = len(dataloader)   # number of batches
    test_loss, test_acc = 0, 0

    # When not training, stop gradient tracking to save memory and compute
    with torch.no_grad():
        for imgs, target in dataloader:
            imgs, target = imgs.to(device), target.to(device)

            # Compute the loss
            target_pred = model(imgs)
            loss = loss_fn(target_pred, target)

            test_loss += loss.item()
            test_acc += (target_pred.argmax(1) == target).type(torch.float).sum().item()

    test_acc /= size
    test_loss /= num_batches

    return test_acc, test_loss

4.3 Training run

import copy

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()  # loss function

epochs = 100

train_loss = []
train_acc = []
test_loss = []
test_acc = []

best_acc = 0  # best accuracy seen so far, used to pick the best model

for epoch in range(epochs):
    # Update the learning rate (only when using a hand-written schedule)
    # adjust_learning_rate(optimizer, epoch, learn_rate)

    model.train()
    epoch_train_acc, epoch_train_loss = train(train_dl, model, loss_fn, optimizer)
    # scheduler.step()  # update the learning rate (when using an official scheduler)

    model.eval()
    epoch_test_acc, epoch_test_loss = test(test_dl, model, loss_fn)

    # Keep a copy of the best model so far
    if epoch_test_acc > best_acc:
        best_acc = epoch_test_acc
        best_model = copy.deepcopy(model)

    train_acc.append(epoch_train_acc)
    train_loss.append(epoch_train_loss)
    test_acc.append(epoch_test_acc)
    test_loss.append(epoch_test_loss)

    # Read the current learning rate
    lr = optimizer.state_dict()['param_groups'][0]['lr']

    template = ('Epoch:{:2d}, Train_acc:{:.1f}%, Train_loss:{:.3f}, Test_acc:{:.1f}%, Test_loss:{:.3f}, Lr:{:.2E}')
    print(template.format(epoch + 1, epoch_train_acc * 100, epoch_train_loss,
                          epoch_test_acc * 100, epoch_test_loss, lr))

# Save the best model's weights to a file
PATH = './J5_best_model.pth_2'             # filename for the saved weights
torch.save(best_model.state_dict(), PATH)  # save the tracked best model, not the last epoch's weights

print('Done')

Code output:

Epoch: 1, Train_acc:62.9%, Train_loss:0.661, Test_acc:60.6%, Test_loss:0.733, Lr:1.00E-04
Epoch: 2, Train_acc:66.2%, Train_loss:0.628, Test_acc:58.7%, Test_loss:0.707, Lr:1.00E-04
Epoch: 3, Train_acc:66.9%, Train_loss:0.615, Test_acc:62.2%, Test_loss:0.768, Lr:1.00E-04
Epoch: 4, Train_acc:68.1%, Train_loss:0.617, Test_acc:66.7%, Test_loss:0.624, Lr:1.00E-04
Epoch: 5, Train_acc:68.2%, Train_loss:0.594, Test_acc:67.6%, Test_loss:0.628, Lr:1.00E-04
Epoch: 6, Train_acc:69.9%, Train_loss:0.582, Test_acc:67.4%, Test_loss:0.626, Lr:1.00E-04
Epoch: 7, Train_acc:73.7%, Train_loss:0.547, Test_acc:67.1%, Test_loss:0.608, Lr:1.00E-04
Epoch: 8, Train_acc:74.3%, Train_loss:0.531, Test_acc:66.4%, Test_loss:0.714, Lr:1.00E-04
Epoch: 9, Train_acc:75.0%, Train_loss:0.498, Test_acc:75.1%, Test_loss:0.539, Lr:1.00E-04
Epoch:10, Train_acc:76.1%, Train_loss:0.489, Test_acc:76.7%, Test_loss:0.507, Lr:1.00E-04
Epoch:11, Train_acc:77.7%, Train_loss:0.466, Test_acc:74.8%, Test_loss:0.479, Lr:1.00E-04
Epoch:12, Train_acc:79.7%, Train_loss:0.439, Test_acc:77.6%, Test_loss:0.521, Lr:1.00E-04
Epoch:13, Train_acc:83.5%, Train_loss:0.379, Test_acc:81.1%, Test_loss:0.414, Lr:1.00E-04
Epoch:14, Train_acc:82.5%, Train_loss:0.407, Test_acc:88.3%, Test_loss:0.333, Lr:1.00E-04
Epoch:15, Train_acc:83.5%, Train_loss:0.357, Test_acc:90.0%, Test_loss:0.287, Lr:1.00E-04
Epoch:16, Train_acc:85.9%, Train_loss:0.341, Test_acc:78.8%, Test_loss:0.467, Lr:1.00E-04
Epoch:17, Train_acc:86.3%, Train_loss:0.317, Test_acc:89.0%, Test_loss:0.306, Lr:1.00E-04
Epoch:18, Train_acc:87.6%, Train_loss:0.297, Test_acc:87.2%, Test_loss:0.299, Lr:1.00E-04
Epoch:19, Train_acc:88.4%, Train_loss:0.282, Test_acc:90.4%, Test_loss:0.253, Lr:1.00E-04
Epoch:20, Train_acc:88.1%, Train_loss:0.284, Test_acc:92.3%, Test_loss:0.208, Lr:1.00E-04
Epoch:21, Train_acc:89.4%, Train_loss:0.240, Test_acc:88.1%, Test_loss:0.295, Lr:1.00E-04
Epoch:22, Train_acc:91.1%, Train_loss:0.219, Test_acc:85.1%, Test_loss:0.296, Lr:1.00E-04
Epoch:23, Train_acc:90.5%, Train_loss:0.234, Test_acc:93.9%, Test_loss:0.190, Lr:1.00E-04
Epoch:24, Train_acc:91.4%, Train_loss:0.208, Test_acc:90.7%, Test_loss:0.285, Lr:1.00E-04
Epoch:25, Train_acc:92.3%, Train_loss:0.201, Test_acc:92.1%, Test_loss:0.193, Lr:1.00E-04
Epoch:26, Train_acc:92.2%, Train_loss:0.189, Test_acc:92.3%, Test_loss:0.171, Lr:1.00E-04
Epoch:27, Train_acc:92.5%, Train_loss:0.194, Test_acc:91.8%, Test_loss:0.185, Lr:1.00E-04
Epoch:28, Train_acc:93.0%, Train_loss:0.179, Test_acc:93.0%, Test_loss:0.214, Lr:1.00E-04
Epoch:29, Train_acc:93.2%, Train_loss:0.183, Test_acc:92.1%, Test_loss:0.203, Lr:1.00E-04
Epoch:30, Train_acc:93.5%, Train_loss:0.169, Test_acc:90.9%, Test_loss:0.205, Lr:1.00E-04
Epoch:31, Train_acc:94.0%, Train_loss:0.165, Test_acc:93.7%, Test_loss:0.197, Lr:1.00E-04
Epoch:32, Train_acc:93.9%, Train_loss:0.165, Test_acc:94.4%, Test_loss:0.152, Lr:1.00E-04
Epoch:33, Train_acc:93.9%, Train_loss:0.163, Test_acc:91.8%, Test_loss:0.229, Lr:1.00E-04
Epoch:34, Train_acc:95.6%, Train_loss:0.133, Test_acc:91.1%, Test_loss:0.266, Lr:1.00E-04
Epoch:35, Train_acc:95.2%, Train_loss:0.137, Test_acc:93.7%, Test_loss:0.130, Lr:1.00E-04
Epoch:36, Train_acc:95.0%, Train_loss:0.142, Test_acc:95.8%, Test_loss:0.123, Lr:1.00E-04
Epoch:37, Train_acc:93.5%, Train_loss:0.161, Test_acc:94.6%, Test_loss:0.133, Lr:1.00E-04
Epoch:38, Train_acc:95.7%, Train_loss:0.120, Test_acc:96.0%, Test_loss:0.109, Lr:1.00E-04
Epoch:39, Train_acc:95.6%, Train_loss:0.121, Test_acc:94.2%, Test_loss:0.161, Lr:1.00E-04
Epoch:40, Train_acc:95.9%, Train_loss:0.110, Test_acc:89.7%, Test_loss:0.287, Lr:1.00E-04
Epoch:41, Train_acc:95.0%, Train_loss:0.135, Test_acc:96.3%, Test_loss:0.109, Lr:1.00E-04
Epoch:42, Train_acc:97.0%, Train_loss:0.094, Test_acc:93.7%, Test_loss:0.185, Lr:1.00E-04
Epoch:43, Train_acc:97.0%, Train_loss:0.089, Test_acc:90.9%, Test_loss:0.284, Lr:1.00E-04
Epoch:44, Train_acc:96.4%, Train_loss:0.101, Test_acc:91.6%, Test_loss:0.263, Lr:1.00E-04
Epoch:45, Train_acc:97.0%, Train_loss:0.083, Test_acc:91.4%, Test_loss:0.260, Lr:1.00E-04
Epoch:46, Train_acc:96.1%, Train_loss:0.105, Test_acc:95.6%, Test_loss:0.122, Lr:1.00E-04
Epoch:47, Train_acc:96.7%, Train_loss:0.101, Test_acc:92.8%, Test_loss:0.214, Lr:1.00E-04
Epoch:48, Train_acc:96.0%, Train_loss:0.112, Test_acc:93.7%, Test_loss:0.248, Lr:1.00E-04
Epoch:49, Train_acc:97.0%, Train_loss:0.086, Test_acc:94.6%, Test_loss:0.197, Lr:1.00E-04
Epoch:50, Train_acc:97.4%, Train_loss:0.071, Test_acc:96.7%, Test_loss:0.091, Lr:1.00E-04
Epoch:51, Train_acc:96.4%, Train_loss:0.095, Test_acc:94.9%, Test_loss:0.149, Lr:1.00E-04
Epoch:52, Train_acc:97.4%, Train_loss:0.081, Test_acc:95.3%, Test_loss:0.095, Lr:1.00E-04
Epoch:53, Train_acc:97.5%, Train_loss:0.073, Test_acc:95.6%, Test_loss:0.141, Lr:1.00E-04
Epoch:54, Train_acc:98.2%, Train_loss:0.058, Test_acc:96.5%, Test_loss:0.105, Lr:1.00E-04
Epoch:55, Train_acc:97.4%, Train_loss:0.074, Test_acc:92.1%, Test_loss:0.290, Lr:1.00E-04
Epoch:56, Train_acc:97.5%, Train_loss:0.069, Test_acc:94.9%, Test_loss:0.146, Lr:1.00E-04
Epoch:57, Train_acc:98.4%, Train_loss:0.050, Test_acc:95.3%, Test_loss:0.120, Lr:1.00E-04
Epoch:58, Train_acc:97.7%, Train_loss:0.066, Test_acc:95.3%, Test_loss:0.202, Lr:1.00E-04
Epoch:59, Train_acc:97.7%, Train_loss:0.065, Test_acc:95.3%, Test_loss:0.135, Lr:1.00E-04
Epoch:60, Train_acc:98.0%, Train_loss:0.057, Test_acc:96.0%, Test_loss:0.128, Lr:1.00E-04
Epoch:61, Train_acc:97.9%, Train_loss:0.068, Test_acc:94.6%, Test_loss:0.137, Lr:1.00E-04
Epoch:62, Train_acc:98.0%, Train_loss:0.067, Test_acc:95.6%, Test_loss:0.129, Lr:1.00E-04
Epoch:63, Train_acc:98.5%, Train_loss:0.042, Test_acc:97.4%, Test_loss:0.099, Lr:1.00E-04
Epoch:64, Train_acc:98.8%, Train_loss:0.048, Test_acc:96.0%, Test_loss:0.163, Lr:1.00E-04
Epoch:65, Train_acc:97.7%, Train_loss:0.062, Test_acc:93.0%, Test_loss:0.177, Lr:1.00E-04
Epoch:66, Train_acc:98.3%, Train_loss:0.063, Test_acc:96.7%, Test_loss:0.110, Lr:1.00E-04
Epoch:67, Train_acc:98.7%, Train_loss:0.040, Test_acc:95.3%, Test_loss:0.207, Lr:1.00E-04
Epoch:68, Train_acc:99.2%, Train_loss:0.025, Test_acc:97.0%, Test_loss:0.099, Lr:1.00E-04
Epoch:69, Train_acc:97.5%, Train_loss:0.066, Test_acc:95.8%, Test_loss:0.118, Lr:1.00E-04
Epoch:70, Train_acc:98.0%, Train_loss:0.056, Test_acc:97.2%, Test_loss:0.136, Lr:1.00E-04
Epoch:71, Train_acc:98.3%, Train_loss:0.051, Test_acc:96.7%, Test_loss:0.069, Lr:1.00E-04
Epoch:72, Train_acc:99.1%, Train_loss:0.039, Test_acc:95.3%, Test_loss:0.143, Lr:1.00E-04
Epoch:73, Train_acc:98.5%, Train_loss:0.049, Test_acc:95.6%, Test_loss:0.155, Lr:1.00E-04
Epoch:74, Train_acc:98.1%, Train_loss:0.052, Test_acc:96.5%, Test_loss:0.079, Lr:1.00E-04
Epoch:75, Train_acc:97.5%, Train_loss:0.066, Test_acc:94.6%, Test_loss:0.155, Lr:1.00E-04
Epoch:76, Train_acc:98.8%, Train_loss:0.035, Test_acc:96.0%, Test_loss:0.117, Lr:1.00E-04
Epoch:77, Train_acc:98.4%, Train_loss:0.045, Test_acc:93.0%, Test_loss:0.180, Lr:1.00E-04
Epoch:78, Train_acc:98.6%, Train_loss:0.049, Test_acc:97.2%, Test_loss:0.128, Lr:1.00E-04
Epoch:79, Train_acc:98.0%, Train_loss:0.052, Test_acc:98.1%, Test_loss:0.063, Lr:1.00E-04
Epoch:80, Train_acc:98.5%, Train_loss:0.048, Test_acc:96.7%, Test_loss:0.123, Lr:1.00E-04
Epoch:81, Train_acc:99.2%, Train_loss:0.035, Test_acc:97.4%, Test_loss:0.100, Lr:1.00E-04
Epoch:82, Train_acc:98.4%, Train_loss:0.045, Test_acc:95.3%, Test_loss:0.154, Lr:1.00E-04
Epoch:83, Train_acc:97.4%, Train_loss:0.057, Test_acc:96.5%, Test_loss:0.151, Lr:1.00E-04
Epoch:84, Train_acc:98.4%, Train_loss:0.047, Test_acc:95.8%, Test_loss:0.112, Lr:1.00E-04
Epoch:85, Train_acc:98.8%, Train_loss:0.040, Test_acc:96.0%, Test_loss:0.152, Lr:1.00E-04
Epoch:86, Train_acc:99.5%, Train_loss:0.015, Test_acc:96.0%, Test_loss:0.157, Lr:1.00E-04
Epoch:87, Train_acc:99.0%, Train_loss:0.036, Test_acc:93.0%, Test_loss:0.183, Lr:1.00E-04
Epoch:88, Train_acc:98.6%, Train_loss:0.042, Test_acc:95.3%, Test_loss:0.122, Lr:1.00E-04
Epoch:89, Train_acc:99.2%, Train_loss:0.024, Test_acc:95.1%, Test_loss:0.157, Lr:1.00E-04
Epoch:90, Train_acc:98.5%, Train_loss:0.049, Test_acc:93.9%, Test_loss:0.162, Lr:1.00E-04
Epoch:91, Train_acc:98.5%, Train_loss:0.040, Test_acc:97.2%, Test_loss:0.085, Lr:1.00E-04
Epoch:92, Train_acc:98.5%, Train_loss:0.051, Test_acc:97.2%, Test_loss:0.061, Lr:1.00E-04
Epoch:93, Train_acc:99.0%, Train_loss:0.030, Test_acc:97.7%, Test_loss:0.074, Lr:1.00E-04
Epoch:94, Train_acc:99.4%, Train_loss:0.017, Test_acc:96.0%, Test_loss:0.121, Lr:1.00E-04
Epoch:95, Train_acc:98.9%, Train_loss:0.038, Test_acc:97.9%, Test_loss:0.077, Lr:1.00E-04
Epoch:96, Train_acc:98.7%, Train_loss:0.037, Test_acc:96.5%, Test_loss:0.111, Lr:1.00E-04
Epoch:97, Train_acc:99.3%, Train_loss:0.024, Test_acc:96.7%, Test_loss:0.151, Lr:1.00E-04
Epoch:98, Train_acc:98.8%, Train_loss:0.030, Test_acc:96.0%, Test_loss:0.182, Lr:1.00E-04
Epoch:99, Train_acc:97.8%, Train_loss:0.062, Test_acc:93.7%, Test_loss:0.190, Lr:1.00E-04
Epoch:100, Train_acc:98.4%, Train_loss:0.061, Test_acc:96.7%, Test_loss:0.077, Lr:1.00E-04
Done
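Since the script saves the best checkpoint to ./J5_best_model.pth_2, a minimal sketch for reloading those weights before any later evaluation or inference (assuming model has the same architecture that produced the file) is:

# Reload the best weights saved during training
model.load_state_dict(torch.load('./J5_best_model.pth_2', map_location=device))
model.eval()  # switch to evaluation mode for inference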

5. Visualizing the Results

5.1. Loss & Accuracy

import matplotlib.pyplot as plt
import warnings

warnings.filterwarnings("ignore")             # suppress warnings
plt.rcParams['font.sans-serif'] = ['SimHei']  # render Chinese labels correctly
plt.rcParams['axes.unicode_minus'] = False    # render minus signs correctly
plt.rcParams['figure.dpi'] = 100              # figure resolution

epochs_range = range(epochs)

plt.figure(figsize=(12, 3))

plt.subplot(1, 2, 1)
plt.plot(epochs_range, train_acc, label='Training Accuracy')
plt.plot(epochs_range, test_acc, label='Test Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_loss, label='Training Loss')
plt.plot(epochs_range, test_loss, label='Test Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')

plt.show()

Code output:
(figure: training and validation accuracy/loss curves)

5.2. Predicting a Specified Image

from PIL import Image

classes = list(total_data.class_to_idx)

def predict_one_image(image_path, model, transform, classes):
    test_img = Image.open(image_path).convert('RGB')
    plt.imshow(test_img)  # display the image being predicted

    test_img = transform(test_img)
    img = test_img.to(device).unsqueeze(0)

    model.eval()
    output = model(img)

    _, pred = torch.max(output, 1)
    pred_class = classes[pred]
    print(f'Prediction: {pred_class}')

# Predict one image from the training set
predict_one_image(image_path='./J5/Monkeypox/M01_01_00.jpg',
                  model=model,
                  transform=train_transforms,
                  classes=classes)

Code output:

Prediction: Monkeypox

(figure: the image being predicted)
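Finally, to confirm the accuracy of the best checkpoint rather than the last epoch, here is a short sketch reusing the test() function from section 4.2 (it assumes best_model is still in memory from the training loop):

best_model.to(device)
best_model.eval()
final_acc, final_loss = test(test_dl, best_model, loss_fn)
print(f'Best model test accuracy: {final_acc * 100:.1f}%')  # matches best_acc from training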

