Deep Learning Basics: An Explanation of ResNet50V2, a PyTorch Reproduction of ResNet50V2, and Bird Image Classification with the Reproduced Model

  • 🍨 This article is a study-log post from the 🔗 365-Day Deep Learning Training Camp
  • 🍖 Original author: K同学啊

Preface

  • If any network counts as the most classic, ResNet is surely one of them. After ResNet was published, its authors revised the design and named the result ResNetV2. This article is my study notes on ResNet50V2: I reproduce ResNet50V2 in PyTorch and then use it for a bird image classification demo. Compared with the ResNet50 in the previous post, the results are noticeably better.
  • ResNet explained: https://blog.csdn.net/weixin_74085818/article/details/145786990?spm=1001.2014.3001.5501
  • Feel free to bookmark and follow; this blog will be updated continuously.

Table of Contents

  • 1. Introduction
    • Comparison with ResNet
    • Different residual structures
    • Experiments with activation functions
    • Summary
  • 2. Building ResNet50V2
    • 1. Importing the data
      • 1. Import libraries
      • 2. Inspect and load the data
      • 3. Display sample data
      • 4. Load the dataset
      • 5. Split the dataset
      • 6. Batch the data
    • 2. Building the ResNet-50V2 network
    • 3. Model training
      • 1. Training function
      • 2. Test function
      • 3. Hyperparameters
    • 4. Model training
    • 5. Visualizing the results

1. Introduction

Comparison with ResNet

[Figure: the original ResNet residual unit versus the ResNetV2 pre-activation unit]

👀 Improvements:

  • Original ResNet structure: convolution first, then BN and the activation function, and finally the addition followed by a ReLU.
  • Revised version: BN and the activation function come first (pre-activation), and the ReLU that used to follow the addition is moved inside the residual branch, so the revised branch contains two ReLUs. A minimal sketch of the two orderings follows this list.
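To make the difference concrete, here is a minimal sketch of the two orderings in PyTorch. The class names and the fixed channel width are illustrative only; these are not the blocks used later in this post:

import torch
import torch.nn as nn

class OriginalResidual(nn.Module):
    """ResNet ordering: conv -> BN -> ReLU, with a ReLU after the addition."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)  # ReLU applied after the addition

class PreActResidual(nn.Module):
    """ResNetV2 ordering: BN -> ReLU -> conv, plain addition at the end."""
    def __init__(self, channels):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)

    def forward(self, x):
        out = self.conv1(torch.relu(self.bn1(x)))    # first ReLU, inside the branch
        out = self.conv2(torch.relu(self.bn2(out)))  # second ReLU, inside the branch
        return out + x  # no ReLU after the addition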

Different residual structures

Kaiming He and his co-authors tried several different residual structures, shown below:

[Figure: the different residual (shortcut) structures that were tried]

The final results:

[Figure: error rates for the different residual structures]

The original residual structure still works best.

Experiments with activation functions

This part is mainly about where the activation function and the BN layer are placed.

[Figure: the different placements of BN and ReLU, variants (a) through (e)]

The results:

[Figure: error rates for the different BN/activation placements]

The best variant turns out to be **(e)**, full pre-activation, where BN and ReLU both come before each convolution.

Summary

From this study, a model can be modified from two angles:

  • The placement of the activation function and the BN layer: the same layers in different positions can produce different results.
  • The residual structure: the original version uses an identity mapping, but different shortcut designs may also behave differently.

2. Building ResNet50V2

1. Importing the data

1. Import libraries

import torch
import torch.nn as nn
import torchvision
import numpy as np
import os, PIL, pathlib

# Set the device
device = "cuda" if torch.cuda.is_available() else "cpu"
device

'cuda'

2. Inspect and load the data

The data directory contains two entries: the image data folder and a pretrained-weights file.

data_dir = "./data/"
data_dir = pathlib.Path(data_dir)

# Class names
classnames = [str(path).split("\\")[0] for path in os.listdir(data_dir)]
classnames

['bird_photos', 'resnet50_weights_tf_dim_ordering_tf_kernels.h5']
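Note that the directory listing also picks up the weights file, which is why classnames has two entries even though there is only one image folder. If you prefer classnames to contain only class directories, a small filter works; this is a sketch assuming the same ./data/ layout as above:

# Keep only subdirectories, dropping loose files such as the .h5 weights
classnames = [p.name for p in data_dir.iterdir() if p.is_dir()]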

3. Display sample data

import matplotlib.pyplot as plt
from PIL import Image

# Collect the file names
data_path_name = "./data/bird_photos/Bananaquit/"
data_path_list = [f for f in os.listdir(data_path_name) if f.endswith(('jpg', 'png'))]

# Create the canvas
fig, axes = plt.subplots(2, 8, figsize=(16, 6))

for ax, img_file in zip(axes.flat, data_path_list):
    path_name = os.path.join(data_path_name, img_file)
    img = Image.open(path_name)  # open the image
    ax.imshow(img)               # display it
    ax.axis('off')

plt.show()


[Figure: a 2 x 8 grid of sample Bananaquit images]

4. Load the dataset

from torchvision import transforms, datasets

# Unify the image size
img_height = 224
img_width = 224

data_transforms = transforms.Compose([
    transforms.Resize([img_height, img_width]),
    transforms.ToTensor(),
    transforms.Normalize(   # normalize with ImageNet statistics
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225]
    )
])

# Load all the data
total_data = datasets.ImageFolder(root="./data/", transform=data_transforms)
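Since root points at ./data/ rather than the bird_photos folder itself, it is worth checking how ImageFolder assigned the labels. Its class_to_idx attribute shows the folder-to-label mapping:

# Mapping from folder name to label index, and the number of images found
print(total_data.class_to_idx)
print(len(total_data))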

5. Split the dataset

# 80 : 20 split
train_size = int(len(total_data) * 0.8)
test_size = len(total_data) - train_size

train_data, test_data = torch.utils.data.random_split(total_data, [train_size, test_size])
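random_split draws a fresh permutation on every run. If you want a reproducible split, you can pass a seeded generator; a sketch (the seed 42 is arbitrary):

generator = torch.Generator().manual_seed(42)
train_data, test_data = torch.utils.data.random_split(
    total_data, [train_size, test_size], generator=generator
)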

6. Batch the data

batch_size = 32

train_dl = torch.utils.data.DataLoader(train_data, batch_size=batch_size, shuffle=True)
test_dl = torch.utils.data.DataLoader(test_data, batch_size=batch_size, shuffle=False)

# Check the tensor dimensions
for data, labels in train_dl:
    print("data shape[N, C, H, W]: ", data.shape)
    print("labels: ", labels)
    break

data shape[N, C, H, W]:  torch.Size([32, 3, 224, 224])
labels:  tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])

2. Building the ResNet-50V2 network

[Figure: the ResNet50V2 architecture — the overall network, a residual stack, and the three Block2 variants]

In the previous post, the network I built was rather verbose, with everything written out step by step. The construction in this post is more elegant: it exploits the many similarities among the three block variants and assembles the network from that shared structure.

ResNet50V2 is built the same way as ResNet50; only the residual blocks and how they are stacked differ.

Note: this network has a lot of channel and stride parameters, so be careful not to mix them up. A quick shape check after the model is built, shown below, helps catch mistakes.

import torch.nn.functional as F

'''
conv_shortcut: which shortcut to use; True corresponds to block variants 1 and 3 in the figure above
filters: bottleneck width (the block outputs 4 * filters channels)
kernel_size: defaults to 3
'''
class Block2(nn.Module):
    def __init__(self, in_channel, filters, kernel_size=3, stride=1, conv_shortcut=False):
        super().__init__()

        # Pre-activation: the BN + ReLU at the top of the block in the figure above
        self.preact = nn.Sequential(
            nn.BatchNorm2d(in_channel),
            nn.ReLU(True)
        )

        # Decide which shortcut this block uses. Of the three variants in the figure,
        # two have an explicit shortcut; the remaining one uses a 1x1 max-pool or identity.
        self.shortcut = conv_shortcut
        if self.shortcut:  # variant 1: a 1x1 convolutional shortcut
            # padding defaults to 0; the 4 * filters output width follows the reference implementation
            self.short = nn.Conv2d(in_channel, 4 * filters, kernel_size=1, stride=stride, padding=0, bias=False)
        else:
            # nn.Identity() passes the input through unchanged
            self.short = nn.MaxPool2d(kernel_size=1, stride=stride, padding=0) if stride > 1 else nn.Identity()

        # The residual branch is the same in all three variants; it is split into three stages
        # Stage one (see the reference implementation)
        self.conv1 = nn.Sequential(
            nn.Conv2d(in_channel, filters, kernel_size=1, stride=1, bias=False),
            nn.BatchNorm2d(filters),
            nn.ReLU(True)
        )
        # Stage two
        self.conv2 = nn.Sequential(
            nn.Conv2d(filters, filters, kernel_size=kernel_size, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(filters),
            nn.ReLU(True)
        )
        # Stage three
        self.conv3 = nn.Conv2d(filters, 4 * filters, kernel_size=1, stride=1)

    def forward(self, x):
        x1 = self.preact(x)
        if self.shortcut:      # variant 1: the shortcut takes the pre-activated x1
            x2 = self.short(x1)
        else:                  # variant 3 in the figure: the shortcut takes the raw input x
            x2 = self.short(x)
        x1 = self.conv1(x1)
        x1 = self.conv2(x1)
        x1 = self.conv3(x1)
        x = x1 + x2  # merge the two branches
        return x

# Residual stack
class Stack2(nn.Module):
    def __init__(self, in_channel, filters, blocks, stride=2):  # blocks = number of Block2 layers in this stack (leftmost diagram)
        super().__init__()
        self.conv = nn.Sequential()

        # The blocks within a stack are very similar (leftmost part of the figure)
        self.conv.add_module(str(0), Block2(in_channel, filters, conv_shortcut=True))  # arguments: name + module

        # Middle blocks: with the first and last removed, blocks - 2 remain
        for i in range(1, blocks - 1):
            # The previous block outputs 4 * filters channels; this one narrows back to filters
            self.conv.add_module(str(i), Block2(4 * filters, filters))

        self.conv.add_module(str(blocks - 1), Block2(4 * filters, filters, stride=stride))  # only this block strides

    def forward(self, x):
        x = self.conv(x)
        return x

class ResNet50V2(nn.Module):
    def __init__(self,
                 include_top=True,  # whether to include the top (classification) layers
                 preact=True,       # whether to use pre-activation
                 use_bias=True,     # whether the stem convolution uses a bias
                 input_shape=[224, 224, 3],
                 classes=1000,      # number of classes
                 pooling=None):
        super().__init__()

        # The stem (top of the leftmost diagram); the ZeroPadding of the Keras version is folded into the conv padding
        self.conv1 = nn.Sequential()
        self.conv1.add_module('conv', nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=use_bias))
        # BN and activation here are optional: with pre-activation they move into the blocks
        if not preact:
            self.conv1.add_module('bn', nn.BatchNorm2d(64))
            self.conv1.add_module('relu', nn.ReLU())
        self.conv1.add_module('max_pool', nn.MaxPool2d(kernel_size=3, stride=2, padding=1))

        # The four residual stacks (middle of the leftmost diagram); note how the channel counts grow
        self.conv2 = Stack2(64, 64, 3)
        self.conv3 = Stack2(256, 128, 4)
        self.conv4 = Stack2(512, 256, 6)
        self.conv5 = Stack2(1024, 512, 3, stride=1)

        self.last = nn.Sequential()
        if preact:
            self.last.add_module('bn', nn.BatchNorm2d(2048))
            self.last.add_module('relu', nn.ReLU(True))
        if include_top:
            self.last.add_module('avg_pool', nn.AdaptiveAvgPool2d((1, 1)))
            self.last.add_module('flatten', nn.Flatten())
            self.last.add_module('fc', nn.Linear(2048, classes))
        else:
            if pooling == 'avg':
                self.last.add_module('avg_pool', nn.AdaptiveAvgPool2d((1, 1)))
            elif pooling == 'max':
                self.last.add_module('max_pool', nn.AdaptiveMaxPool2d((1, 1)))

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.conv3(x)
        x = self.conv4(x)
        x = self.conv5(x)
        x = self.last(x)
        return x

model = ResNet50V2(classes=len(classnames)).to(device)
model
ResNet50V2(
  (conv1): Sequential(
    (conv): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3))
    (max_pool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  )
  (conv2): Stack2(...)
  (conv3): Stack2(...)
  (conv4): Stack2(...)
  (conv5): Stack2(...)
  (last): Sequential(
    (bn): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (avg_pool): AdaptiveAvgPool2d(output_size=(1, 1))
    (flatten): Flatten(start_dim=1, end_dim=-1)
    (fc): Linear(in_features=2048, out_features=2, bias=True)
  )
)
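As a quick sanity check on all those channel parameters, push a dummy batch through the network and confirm the output shape; this is a sketch, and the batch size of 1 is arbitrary:

# Expect torch.Size([1, len(classnames)])
with torch.no_grad():
    dummy = torch.randn(1, 3, 224, 224).to(device)
    print(model(dummy).shape)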

3. Model training

1. Training function

def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    train_acc, train_loss = 0, 0

    for X, y in dataloader:
        X, y = X.to(device), y.to(device)

        # Forward pass
        pred = model(X)
        loss = loss_fn(pred, y)

        # Backpropagation and gradient descent
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # Record the metrics
        train_loss += loss.item()
        train_acc += (pred.argmax(1) == y).type(torch.float).sum().item()

    train_acc /= size
    train_loss /= num_batches
    return train_acc, train_loss

2. Test function

def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    test_acc, test_loss = 0, 0

    with torch.no_grad():
        for X, y in dataloader:
            X, y = X.to(device), y.to(device)

            pred = model(X)
            loss = loss_fn(pred, y)

            test_loss += loss.item()
            test_acc += (pred.argmax(1) == y).type(torch.float).sum().item()

    test_acc /= size
    test_loss /= num_batches
    return test_acc, test_loss

3. Hyperparameters

loss_fn = nn.CrossEntropyLoss()   # loss function
learn_lr = 1e-4                   # learning rate
optimizer = torch.optim.Adam(model.parameters(), lr=learn_lr)   # optimizer

4. Model training

train_acc = []
train_loss = []
test_acc = []
test_loss = []

epochs = 10

for i in range(epochs):
    model.train()
    epoch_train_acc, epoch_train_loss = train(train_dl, model, loss_fn, optimizer)

    model.eval()
    epoch_test_acc, epoch_test_loss = test(test_dl, model, loss_fn)

    train_acc.append(epoch_train_acc)
    train_loss.append(epoch_train_loss)
    test_acc.append(epoch_test_acc)
    test_loss.append(epoch_test_loss)

    # Print progress
    template = ('Epoch:{:2d}, Train_acc:{:.1f}%, Train_loss:{:.3f}, Test_acc:{:.1f}%, Test_loss:{:.3f}')
    print(template.format(i + 1, epoch_train_acc*100, epoch_train_loss, epoch_test_acc*100, epoch_test_loss))

print("Done")
Epoch: 1, Train_acc:96.9%, Train_loss:0.289, Test_acc:100.0%, Test_loss:0.117
Epoch: 2, Train_acc:100.0%, Train_loss:0.025, Test_acc:100.0%, Test_loss:0.011
Epoch: 3, Train_acc:100.0%, Train_loss:0.007, Test_acc:100.0%, Test_loss:0.006
Epoch: 4, Train_acc:100.0%, Train_loss:0.004, Test_acc:100.0%, Test_loss:0.003
Epoch: 5, Train_acc:100.0%, Train_loss:0.003, Test_acc:100.0%, Test_loss:0.003
Epoch: 6, Train_acc:100.0%, Train_loss:0.002, Test_acc:100.0%, Test_loss:0.002
Epoch: 7, Train_acc:100.0%, Train_loss:0.002, Test_acc:100.0%, Test_loss:0.002
Epoch: 8, Train_acc:100.0%, Train_loss:0.002, Test_acc:100.0%, Test_loss:0.001
Epoch: 9, Train_acc:100.0%, Train_loss:0.001, Test_acc:100.0%, Test_loss:0.001
Epoch:10, Train_acc:100.0%, Train_loss:0.001, Test_acc:100.0%, Test_loss:0.001
Done

5. Visualizing the results

import matplotlib.pyplot as plt
import warnings

warnings.filterwarnings("ignore")   # suppress warning messages

epochs_range = range(epochs)

plt.figure(figsize=(12, 3))

plt.subplot(1, 2, 1)
plt.plot(epochs_range, train_acc, label='Training Accuracy')
plt.plot(epochs_range, test_acc, label='Test Accuracy')
plt.legend(loc='lower right')
plt.title('Training Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_loss, label='Training Loss')
plt.plot(epochs_range, test_loss, label='Test Loss')
plt.legend(loc='upper right')
plt.title('Training Loss')

plt.show()


[Figure: training/test accuracy (left) and loss (right) curves over the 10 epochs]


The results are noticeably better than those of the ResNet50 from the previous post.


