Week P7: Potato Disease Recognition with PyTorch (a VGG16 Reproduction)

  • 🍨 This post is a learning-record entry for the 🔗 365天深度学习训练营 (365-day deep learning training camp)
  • 🍖 Original author: K同学啊

Goal

The potato disease dataset contains high-resolution images of potato plants showing various conditions: early blight, late blight, and healthy leaves. It is intended for developing and testing image-recognition models for accurate disease detection and classification, supporting advances in agricultural diagnostics.
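The images are expected to sit under one sub-folder per class (Early_blight, Late_blight, healthy), which is the layout `torchvision.datasets.ImageFolder` relies on later in main.py. A minimal sketch for verifying the layout, assuming the dataset is unpacked to `./data/PotatoPlants`:

```python
import pathlib

data_dir = pathlib.Path("./data/PotatoPlants")   # assumed dataset location

# one sub-folder per class: Early_blight, Late_blight, healthy
for class_dir in sorted(p for p in data_dir.iterdir() if p.is_dir()):
    n_images = len(list(class_dir.glob("*")))
    print(f"{class_dir.name}: {n_images} images")
```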

Implementation

(1) Environment

Language: Python 3.10
IDE: PyCharm
Framework: PyTorch
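A quick check that the local interpreter and framework match the setup above (a small sketch; the exact versions on your machine may differ):

```python
import sys
import torch

print("Python :", sys.version.split()[0])      # e.g. 3.10.x
print("PyTorch:", torch.__version__)           # installed PyTorch version
print("CUDA   :", torch.cuda.is_available())   # True if a GPU build can be used
```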

(2) Steps
1. Utils.py
```python
import torch
import pathlib
import matplotlib.pyplot as plt
from torchvision.transforms import transforms
from PIL import Image


# Step 1: pick the device
def USE_GPU():
    if torch.cuda.is_available():
        print('CUDA is available, will use GPU')
        device = torch.device("cuda")
    else:
        print('CUDA is not available. Will use CPU')
        device = torch.device("cpu")
    return device


temp_dict = dict()


def recursive_iterate(path):
    """
    Recursively walk every sub-directory under the given path and count the files it contains
    :param path: directory path
    :return: dict mapping each last-level directory name to its file count
    """
    path = pathlib.Path(path)
    for file in path.iterdir():
        if file.is_file():
            temp_key = str(file).split('\\')[-2]   # parent folder name (Windows-style separator)
            if temp_key in temp_dict:
                temp_dict.update({temp_key: temp_dict[temp_key] + 1})
            else:
                temp_dict.update({temp_key: 1})
        elif file.is_dir():
            recursive_iterate(file)
    return temp_dict


def data_from_directory(directory, train_dir=None, test_dir=None, show=False):
    """
    For datasets stored as folders on disk: load by directory, run a quick analysis and return the class names
    :param directory: root directory of the dataset
    :param train_dir: training-set sub-directory, if the data is already split
    :param test_dir: test-set sub-directory, if the data is already split
    :param show: whether to display the class distribution as a bar chart
    :return: list of class names
    """
    global total_image
    print("数据目录:{}".format(directory))
    data_dir = pathlib.Path(directory)

    class_name = []
    total_image = 0
    if train_dir is None or test_dir is None:
        data_path = list(data_dir.glob('*'))
        class_name = [str(path).split('\\')[-1] for path in data_path]
        print("数据分类: {}, 类别数量:{}".format(class_name, len(list(data_dir.glob('*')))))
        total_image = len(list(data_dir.glob('*/*')))
        print("图片数据总数: {}".format(total_image))
    else:
        temp_dict.clear()
        train_data_path = directory + '/' + train_dir
        train_data_info = recursive_iterate(train_data_path)
        print("{}目录:{},{}".format(train_dir, train_data_path, train_data_info))
        temp_dict.clear()
        test_data_path = directory + '/' + test_dir
        print("{}目录:{},{}".format(test_dir, test_data_path, recursive_iterate(test_data_path)))
        class_name = temp_dict.keys()

    if show:
        import warnings
        warnings.filterwarnings("ignore")             # suppress warnings
        plt.rcParams['font.sans-serif'] = ['SimHei']  # render Chinese labels correctly
        plt.rcParams['axes.unicode_minus'] = False    # render the minus sign correctly
        plt.rcParams['figure.dpi'] = 100              # figure resolution
        for i in class_name:
            data = len(list(pathlib.Path((directory + '\\' + i + '\\')).glob('*')))
            plt.title('数据分类情况')
            plt.grid(ls='--', alpha=0.5)
            plt.bar(i, data)
            plt.text(i, data, str(data), ha='center', va='bottom')
            print("类别-{}:{}".format(i, data))
        plt.show()

    return class_name


def get_transforms_setting(size):
    """
    Build the default transforms
    :param size: target image size
    :return: dict with 'train' and 'test' transforms.Compose pipelines
    """
    transform_setting = {
        'train': transforms.Compose([
            transforms.Resize(size),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ]),
        'test': transforms.Compose([
            transforms.Resize(size),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ])
    }
    return transform_setting


# training loop
def train(dataloader, device, model, loss_fn, optimizer):
    size = len(dataloader.dataset)   # size of the training set
    num_batches = len(dataloader)    # number of batches (size / batch_size, rounded up)
    train_loss, train_acc = 0, 0     # accumulated training loss and accuracy

    for X, y in dataloader:          # fetch images and their labels
        X, y = X.to(device), y.to(device)

        # compute the prediction error
        pred = model(X)              # network output
        loss = loss_fn(pred, y)      # gap between the output and the targets, i.e. the loss

        # back-propagation
        optimizer.zero_grad()        # zero the gradients
        loss.backward()              # back-propagate
        optimizer.step()             # update the parameters

        # record acc and loss
        train_acc += (pred.argmax(1) == y).type(torch.float).sum().item()
        train_loss += loss.item()

    train_acc /= size
    train_loss /= num_batches
    return train_acc, train_loss


def test(dataloader, device, model, loss_fn):
    size = len(dataloader.dataset)   # size of the test set
    num_batches = len(dataloader)    # number of batches (size / batch_size, rounded up)
    test_loss, test_acc = 0, 0

    # no training here, so disable gradient tracking to save memory
    with torch.no_grad():
        for imgs, target in dataloader:
            imgs, target = imgs.to(device), target.to(device)

            # compute the loss
            target_pred = model(imgs)
            loss = loss_fn(target_pred, target)

            test_loss += loss.item()
            test_acc += (target_pred.argmax(1) == target).type(torch.float).sum().item()

    test_acc /= size
    test_loss /= num_batches
    return test_acc, test_loss


def predict_one_image(image_path, device, model, transform, classes):
    """
    Predict a single image
    :param image_path: path to the image
    :param device: CPU or GPU
    :param model: the CNN model
    :param transform: preprocessing transforms
    :param classes: list of class names
    :return:
    """
    test_img = Image.open(image_path).convert('RGB')
    plt.imshow(test_img)             # show the image being predicted

    test_img = transform(test_img)
    img = test_img.to(device).unsqueeze(0)

    model.eval()
    output = model(img)

    _, pred = torch.max(output, 1)
    pred_class = classes[pred]
    print(f'预测结果是:{pred_class}')
```
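Note that `recursive_iterate` and `data_from_directory` derive class names by splitting paths on `'\\'`, which assumes Windows-style separators. If the class counting ever needs to run on Linux/macOS, a small portable sketch (not part of the original Utils.py) could use `pathlib` attributes instead:

```python
import pathlib

def class_counts(root):
    """Count images per class folder without relying on the path separator.
    A portable sketch, not part of the original Utils.py."""
    counts = {}
    for file in pathlib.Path(root).rglob("*"):
        if file.is_file():
            counts[file.parent.name] = counts.get(file.parent.name, 0) + 1
    return counts

# e.g. class_counts("./data/PotatoPlants")
# -> {'Early_blight': ..., 'Late_blight': ..., 'healthy': ...}
```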
2. model.py
```python
import torch.nn as nn
import torch
import torch.nn.functional as F


class VGG16(nn.Module):
    def __init__(self, num_classes):
        super(VGG16, self).__init__()
        # convolution block 1
        self.block1 = nn.Sequential(
            nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2)
        )
        # convolution block 2
        self.block2 = nn.Sequential(
            nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2)
        )
        # convolution block 3
        self.block3 = nn.Sequential(
            nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2)
        )
        # convolution block 4
        self.block4 = nn.Sequential(
            nn.Conv2d(in_channels=256, out_channels=512, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2)
        )
        # convolution block 5
        self.block5 = nn.Sequential(
            nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2)
        )
        # fully connected layers for classification
        self.classifier = nn.Sequential(
            nn.Linear(in_features=512 * 7 * 7, out_features=4096),
            nn.ReLU(),
            nn.Linear(in_features=4096, out_features=4096),
            nn.ReLU(),
            nn.Linear(in_features=4096, out_features=num_classes)
        )

    def forward(self, x):
        x = self.block1(x)
        x = self.block2(x)
        x = self.block3(x)
        x = self.block4(x)
        x = self.block5(x)
        x = torch.flatten(x, 1)
        x = self.classifier(x)
        return x
```
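A quick sanity check makes the classifier's `512 * 7 * 7` input size concrete: a 224×224 input is halved by each of the five max-pool layers (224 → 112 → 56 → 28 → 14 → 7). A minimal sketch, run as a standalone script:

```python
import torch
from model import VGG16

model = VGG16(num_classes=3)           # 3 classes: Early_blight, Late_blight, healthy
dummy = torch.randn(1, 3, 224, 224)    # one fake RGB image at the expected input size
with torch.no_grad():
    out = model(dummy)
print(out.shape)                       # expected: torch.Size([1, 3])
```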
3. main.py
```python
import torch.utils.data
from torchvision import datasets

from Utils import USE_GPU, data_from_directory, get_transforms_setting, train, test
from config import get_options
from model import VGG16

# load the configuration options
opt = get_options()

# pick the device
device = USE_GPU()

DATA_DIR = "./data/PotatoPlants"

# load the data
classes_name = data_from_directory(DATA_DIR, show=True)

transforms_setting = get_transforms_setting((224, 224))
total_data = datasets.ImageFolder(DATA_DIR, transform=transforms_setting['train'])
print(total_data)
print(total_data.class_to_idx)

# split the dataset
train_size = int(0.8 * len(total_data))
test_size = len(total_data) - train_size
train_dataset, test_dataset = torch.utils.data.random_split(total_data, [train_size, test_size])
print(train_dataset, test_dataset)

train_dl = torch.utils.data.DataLoader(train_dataset, batch_size=opt.batch_size, shuffle=True)
test_dl = torch.utils.data.DataLoader(test_dataset, batch_size=opt.batch_size, shuffle=True)

for X, y in train_dl:
    print("Shape of X[N, C, H, W]:", X.shape)
    print("Shape of y: ", y.shape, y.dtype)
    break
```
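main.py imports `get_options` from a `config.py` that is not listed in this post. A minimal sketch of what it might contain, assuming argparse-style options; the option names `batch_size`, `lr` and `epochs` come from how `opt` is used in main.py, while the default values here are assumptions:

```python
# config.py -- hypothetical sketch; the original file is not shown in the post
import argparse


def get_options():
    parser = argparse.ArgumentParser(description="VGG16 potato disease recognition")
    parser.add_argument("--batch_size", type=int, default=32, help="mini-batch size")
    parser.add_argument("--lr", type=float, default=1e-4, help="learning rate for Adam")
    parser.add_argument("--epochs", type=int, default=20, help="number of training epochs")
    return parser.parse_args()
```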


```python
# build the VGG16 model
model = VGG16(len(classes_name)).to(device)
print(model)

# report the parameter count and other per-layer statistics
import torchsummary as summary
summary.summary(model, (3, 224, 224))
```
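`torchsummary` prints a per-layer breakdown; if only the raw totals are needed, they can also be read straight off `model.parameters()` (a small optional addition, not in the original listing):

```python
total_params = sum(p.numel() for p in model.parameters())
trainable_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"total params: {total_params:,}, trainable: {trainable_params:,}")
```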


```python
# training
import copy

optimizer = torch.optim.Adam(model.parameters(), lr=opt.lr)
loss_fn = torch.nn.CrossEntropyLoss()   # loss function

train_loss, train_acc, test_loss, test_acc = [], [], [], []
best_acc = 0    # best accuracy seen so far, used to decide which model to keep

for epoch in range(opt.epochs):
    model.train()
    epoch_train_acc, epoch_train_loss = train(train_dl, device, model, loss_fn, optimizer)

    model.eval()
    epoch_test_acc, epoch_test_loss = test(test_dl, device, model, loss_fn)

    # keep the best model
    if epoch_test_acc > best_acc:
        best_acc = epoch_test_acc
        best_model = copy.deepcopy(model)

    train_acc.append(epoch_train_acc)
    test_acc.append(epoch_test_acc)
    train_loss.append(epoch_train_loss)
    test_loss.append(epoch_test_loss)

    # current learning rate
    lr = optimizer.state_dict()['param_groups'][0]['lr']

    template = 'Epoch:{:2d}, Train_acc:{:.1f}%, Train_loss:{:.3f}, Test_acc:{:.1f}%, Test_loss:{:.3f}, Lr:{:.2E}'
    print(template.format(epoch + 1, epoch_train_acc * 100, epoch_train_loss,
                          epoch_test_acc * 100, epoch_test_loss, lr))

# save the best model to disk
PATH = './models/potato-model.pth'
torch.save(best_model.state_dict(), PATH)
print("完成")
```
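The per-epoch values collected in `train_acc`, `test_acc`, `train_loss` and `test_loss` are not visualised in the listing above; an optional sketch for plotting the curves with matplotlib:

```python
import matplotlib.pyplot as plt

epochs_range = range(len(train_acc))

plt.figure(figsize=(12, 4))

plt.subplot(1, 2, 1)
plt.plot(epochs_range, train_acc, label='Training Accuracy')
plt.plot(epochs_range, test_acc, label='Test Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Test Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_loss, label='Training Loss')
plt.plot(epochs_range, test_loss, label='Test Loss')
plt.legend(loc='upper right')
plt.title('Training and Test Loss')

plt.show()
```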


4. predict.py: predicting a single image
```python
import torch
from PIL import Image

from Utils import predict_one_image, USE_GPU, get_transforms_setting
from model import VGG16

classes = ['Early_blight', 'Late_blight', 'healthy']
transforms = get_transforms_setting([224, 224])
device = USE_GPU()

# load the trained VGG16 model
model = VGG16(3)
model.load_state_dict(torch.load('./models/potato-model.pth', map_location=device))
model.to(device)

img_path = "./data/PotatoPlants/Early_blight/0c4f6f72-c7a2-42e1-9671-41ab3bf37fe7___RS_Early.B 6752.JPG"
# the 'train' and 'test' pipelines are identical here, so either works for inference
predict_one_image(img_path, device, model, transforms['train'], classes)
```



