PyTorch Official Tutorial Translation 7 - OPTIMIZING MODEL PARAMETERS


Official link

Optimizing Model Parameters — PyTorch Tutorials 2.0.1+cu117 documentation

Optimizing Model Parameters

Now that we have a model and data, it's time to train, validate, and test our model by optimizing its parameters on our data. Training a model is an iterative process; in each iteration the model makes a guess about the output, calculates the error in its guess (the loss), collects the derivatives of the error with respect to its parameters (as we saw in the previous section), and optimizes these parameters using gradient descent. For a more detailed walkthrough of this process, check out the video backpropagation from 3Blue1Brown.

Prerequisite Code

We load the code from the previous sections on Datasets & DataLoaders and Build Model.

import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor

training_data = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor()
)

test_data = datasets.FashionMNIST(
    root="data",
    train=False,
    download=True,
    transform=ToTensor()
)

train_dataloader = DataLoader(training_data, batch_size=64)
test_dataloader = DataLoader(test_data, batch_size=64)

class NeuralNetwork(nn.Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28*28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10),
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

model = NeuralNetwork()

Output

Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz to data/FashionMNIST/raw/train-images-idx3-ubyte.gz
100%|##########| 26421880/26421880 [00:02<00:00, 13156067.43it/s]
Extracting data/FashionMNIST/raw/train-images-idx3-ubyte.gz to data/FashionMNIST/raw

Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz to data/FashionMNIST/raw/train-labels-idx1-ubyte.gz
100%|##########| 29515/29515 [00:00<00:00, 330432.50it/s]
Extracting data/FashionMNIST/raw/train-labels-idx1-ubyte.gz to data/FashionMNIST/raw

Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz to data/FashionMNIST/raw/t10k-images-idx3-ubyte.gz
100%|##########| 4422102/4422102 [00:00<00:00, 6041840.30it/s]
Extracting data/FashionMNIST/raw/t10k-images-idx3-ubyte.gz to data/FashionMNIST/raw

Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz to data/FashionMNIST/raw/t10k-labels-idx1-ubyte.gz
100%|##########| 5148/5148 [00:00<00:00, 33685299.52it/s]
Extracting data/FashionMNIST/raw/t10k-labels-idx1-ubyte.gz to data/FashionMNIST/raw

Hyperparameters

Hyperparameters are adjustable parameters that let you control the model optimization process. Different hyperparameter values can impact model training and convergence rates (read more about hyperparameter tuning).

We define the following hyperparameters for training:

  • Number of Epochs - the number of times to iterate over the dataset
  • Batch Size - the number of data samples propagated through the network before the parameters are updated
  • Learning Rate - how much to update model parameters at each batch/epoch. Smaller values yield slower learning speed, while larger values may result in unpredictable behavior during training.
learning_rate = 1e-3
batch_size = 64
epochs = 5
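
To make the learning rate's role concrete, here is a minimal sketch of the plain SGD update that optimizer.step() performs for each parameter: the parameter moves against its gradient, scaled by learning_rate. The tensors below are made-up illustrations, not part of the tutorial's model.

# Minimal sketch of the plain SGD update rule; param and grad are
# hypothetical stand-ins for a model parameter and its gradient.
param = torch.tensor([1.0, -2.0])
grad = torch.tensor([0.5, -0.5])
param = param - learning_rate * grad  # the core update optimizer.step() applies
print(param)  # tensor([ 0.9995, -1.9995])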

Optimization Loop

Once we set our hyperparameters, we can then train and optimize our model with an optimization loop. Each iteration of the optimization loop is called an epoch.

Each epoch consists of two main parts:

  • The Train Loop - iterate over the training dataset and try to converge to optimal parameters.
  • The Validation/Test Loop - iterate over the test dataset to check if model performance is improving.

Let's briefly familiarize ourselves with some of the concepts used in the training loop. Jump ahead to see the full implementation of the optimization loop (Full Implementation).

Loss Function

When presented with some training data, our untrained network is likely not to give the correct answer. The loss function measures the degree of dissimilarity between the obtained result and the target value, and it is the loss function that we want to minimize during training. To calculate the loss, we make a prediction using the inputs of our given data sample and compare it against the true data label value.

Common loss functions include nn.MSELoss (Mean Square Error) for regression tasks, and nn.NLLLoss (Negative Log Likelihood) for classification. nn.CrossEntropyLoss combines nn.LogSoftmax and nn.NLLLoss.

We pass our model's output logits to nn.CrossEntropyLoss, which will normalize the logits and compute the prediction error.

# Initialize the loss function
loss_fn = nn.CrossEntropyLoss()
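
As a quick sanity check, you can call the loss function directly on a batch of raw logits and integer class labels. The tensors below are made up purely for illustration; note that nn.CrossEntropyLoss expects unnormalized logits, since it applies the softmax normalization internally.

# Illustrative tensors only: random logits for 3 samples over 10 classes,
# and an arbitrary integer class label per sample.
dummy_logits = torch.randn(3, 10)        # shape: (batch_size, num_classes)
dummy_labels = torch.tensor([1, 0, 4])   # one class index per sample
loss = loss_fn(dummy_logits, dummy_labels)
print(loss.item())  # a single scalar; roughly ln(10) ≈ 2.3 for random logits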

Optimizer

Optimization is the process of adjusting model parameters to reduce model error in each training step. Optimization algorithms define how this process is performed (in this example we use Stochastic Gradient Descent). All optimization logic is encapsulated in the optimizer object. Here, we use the SGD optimizer; in addition, PyTorch has many different optimizers, such as ADAM and RMSProp, that work better for different kinds of models and data.

We initialize the optimizer by registering the model's parameters that need to be trained, and passing in the learning rate hyperparameter.

optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
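
If you want to experiment with one of the alternative optimizers mentioned above, the swap is a one-liner. The learning rate below is a common default for Adam, not a value tuned for this model:

# Drop-in alternative: Adam adapts the step size per parameter.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)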

Inside the training loop, optimization happens in three steps:

  • Call optimizer.zero_grad() to reset the gradients of model parameters. Gradients by default add up; to prevent double-counting, we explicitly zero them at each iteration (a small demonstration of this accumulation follows this list).
  • Backpropagate the prediction loss with a call to loss.backward(). PyTorch deposits the gradients of the loss with respect to each parameter.
  • Once we have our gradients, we call optimizer.step() to adjust the parameters by the gradients collected in the backward pass.
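
The following standalone snippet uses a toy tensor (not the tutorial's model) to show why the reset matters: calling backward() twice without zeroing in between accumulates the gradients.

# Toy demonstration: gradients accumulate across backward() calls
# unless they are explicitly reset.
w = torch.tensor([2.0], requires_grad=True)
(w * 3).sum().backward()
print(w.grad)   # tensor([3.])
(w * 3).sum().backward()
print(w.grad)   # tensor([6.]) - the second backward() added to the first
w.grad.zero_()  # what optimizer.zero_grad() does for every model parameter
print(w.grad)   # tensor([0.])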

Full Implementation

We define train_loop that loops over our optimization code, and test_loop that evaluates the model's performance against our test data.

def train_loop(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    # Set the model to training mode - important for batch normalization and dropout layers
    # Unnecessary in this situation but added for best practices
    model.train()
    for batch, (X, y) in enumerate(dataloader):
        # Compute prediction and loss
        pred = model(X)
        loss = loss_fn(pred, y)

        # Backpropagation
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

        if batch % 100 == 0:
            loss, current = loss.item(), (batch + 1) * len(X)
            print(f"loss: {loss:>7f}  [{current:>5d}/{size:>5d}]")


def test_loop(dataloader, model, loss_fn):
    # Set the model to evaluation mode - important for batch normalization and dropout layers
    # Unnecessary in this situation but added for best practices
    model.eval()
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    test_loss, correct = 0, 0

    # Evaluating the model with torch.no_grad() ensures that no gradients are computed during test mode
    # also serves to reduce unnecessary gradient computations and memory usage for tensors with requires_grad=True
    with torch.no_grad():
        for X, y in dataloader:
            pred = model(X)
            test_loss += loss_fn(pred, y).item()
            correct += (pred.argmax(1) == y).type(torch.float).sum().item()

    test_loss /= num_batches
    correct /= size
    print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")

We initialize the loss function and optimizer, and pass them to train_loop and test_loop. Feel free to increase the number of epochs to track the model's improving performance.

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

epochs = 10
for t in range(epochs):
    print(f"Epoch {t+1}\n-------------------------------")
    train_loop(train_dataloader, model, loss_fn, optimizer)
    test_loop(test_dataloader, model, loss_fn)
print("Done!")

Output

Epoch 1
-------------------------------
loss: 2.298730  [   64/60000]
loss: 2.289123  [ 6464/60000]
loss: 2.273286  [12864/60000]
loss: 2.269406  [19264/60000]
loss: 2.249603  [25664/60000]
loss: 2.229407  [32064/60000]
loss: 2.227368  [38464/60000]
loss: 2.204261  [44864/60000]
loss: 2.206193  [51264/60000]
loss: 2.166651  [57664/60000]
Test Error:
 Accuracy: 50.9%, Avg loss: 2.166725

Epoch 2
-------------------------------
loss: 2.176750  [   64/60000]
loss: 2.169595  [ 6464/60000]
loss: 2.117500  [12864/60000]
loss: 2.129272  [19264/60000]
loss: 2.079674  [25664/60000]
loss: 2.032928  [32064/60000]
loss: 2.050115  [38464/60000]
loss: 1.985236  [44864/60000]
loss: 1.987887  [51264/60000]
loss: 1.907162  [57664/60000]
Test Error:
 Accuracy: 55.9%, Avg loss: 1.915486

Epoch 3
-------------------------------
loss: 1.951612  [   64/60000]
loss: 1.928685  [ 6464/60000]
loss: 1.815709  [12864/60000]
loss: 1.841552  [19264/60000]
loss: 1.732467  [25664/60000]
loss: 1.692914  [32064/60000]
loss: 1.701714  [38464/60000]
loss: 1.610632  [44864/60000]
loss: 1.632870  [51264/60000]
loss: 1.514263  [57664/60000]
Test Error:
 Accuracy: 58.8%, Avg loss: 1.541525

Epoch 4
-------------------------------
loss: 1.616448  [   64/60000]
loss: 1.582892  [ 6464/60000]
loss: 1.427595  [12864/60000]
loss: 1.487950  [19264/60000]
loss: 1.359332  [25664/60000]
loss: 1.364817  [32064/60000]
loss: 1.371491  [38464/60000]
loss: 1.298706  [44864/60000]
loss: 1.336201  [51264/60000]
loss: 1.232145  [57664/60000]
Test Error:
 Accuracy: 62.2%, Avg loss: 1.260237

Epoch 5
-------------------------------
loss: 1.345538  [   64/60000]
loss: 1.327798  [ 6464/60000]
loss: 1.153802  [12864/60000]
loss: 1.254829  [19264/60000]
loss: 1.117322  [25664/60000]
loss: 1.153248  [32064/60000]
loss: 1.171765  [38464/60000]
loss: 1.110263  [44864/60000]
loss: 1.154467  [51264/60000]
loss: 1.070921  [57664/60000]
Test Error:
 Accuracy: 64.1%, Avg loss: 1.089831

Epoch 6
-------------------------------
loss: 1.166889  [   64/60000]
loss: 1.170514  [ 6464/60000]
loss: 0.979435  [12864/60000]
loss: 1.113774  [19264/60000]
loss: 0.973411  [25664/60000]
loss: 1.015192  [32064/60000]
loss: 1.051113  [38464/60000]
loss: 0.993591  [44864/60000]
loss: 1.039709  [51264/60000]
loss: 0.971077  [57664/60000]
Test Error:
 Accuracy: 65.8%, Avg loss: 0.982440

Epoch 7
-------------------------------
loss: 1.045165  [   64/60000]
loss: 1.070583  [ 6464/60000]
loss: 0.862304  [12864/60000]
loss: 1.022265  [19264/60000]
loss: 0.885213  [25664/60000]
loss: 0.919528  [32064/60000]
loss: 0.972762  [38464/60000]
loss: 0.918728  [44864/60000]
loss: 0.961629  [51264/60000]
loss: 0.904379  [57664/60000]
Test Error:
 Accuracy: 66.9%, Avg loss: 0.910167

Epoch 8
-------------------------------
loss: 0.956964  [   64/60000]
loss: 1.002171  [ 6464/60000]
loss: 0.779057  [12864/60000]
loss: 0.958409  [19264/60000]
loss: 0.827240  [25664/60000]
loss: 0.850262  [32064/60000]
loss: 0.917320  [38464/60000]
loss: 0.868384  [44864/60000]
loss: 0.905506  [51264/60000]
loss: 0.856353  [57664/60000]
Test Error:
 Accuracy: 68.3%, Avg loss: 0.858248

Epoch 9
-------------------------------
loss: 0.889765  [   64/60000]
loss: 0.951220  [ 6464/60000]
loss: 0.717035  [12864/60000]
loss: 0.911042  [19264/60000]
loss: 0.786085  [25664/60000]
loss: 0.798370  [32064/60000]
loss: 0.874939  [38464/60000]
loss: 0.832796  [44864/60000]
loss: 0.863254  [51264/60000]
loss: 0.819742  [57664/60000]
Test Error:
 Accuracy: 69.5%, Avg loss: 0.818780

Epoch 10
-------------------------------
loss: 0.836395  [   64/60000]
loss: 0.910220  [ 6464/60000]
loss: 0.668506  [12864/60000]
loss: 0.874338  [19264/60000]
loss: 0.754805  [25664/60000]
loss: 0.758453  [32064/60000]
loss: 0.840451  [38464/60000]
loss: 0.806153  [44864/60000]
loss: 0.830360  [51264/60000]
loss: 0.790281  [57664/60000]
Test Error:
 Accuracy: 71.0%, Avg loss: 0.787271

Done!

Further Reading

  • Loss Functions
  • torch.optim
  • Warmstart Training a Model
