How the acceleration works
Apple has its own GPU programming API, Metal, and this round of PyTorch acceleration is built on top of it: Apple's Metal Performance Shaders (MPS) framework serves as a PyTorch backend, enabling GPU-accelerated training on the Mac. The MPS backend extends the PyTorch framework with scripts and capabilities to set up and run operations on the Mac. MPS optimizes compute performance with kernels fine-tuned for the unique characteristics of each Metal GPU family. The new device maps machine-learning computational graphs and primitives onto the MPS Graph framework and the tuned kernels provided by MPS.
The newly added device is therefore named mps, and it is used the same way as cuda, for example:
import torch
foo = torch.rand(1, 3, 224, 224).to('mps')

# or, equivalently, via a torch.device object
device = torch.device('mps')
foo = foo.to(device)
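In real code you will usually want to fall back to the CPU when MPS is not available. Here is a minimal sketch of that pattern; the availability check and the small Conv2d module are illustrative additions, not part of the snippet above:
import torch
import torch.nn as nn

# pick mps when the backend is usable on this machine, otherwise stay on cpu
device = torch.device('mps' if torch.backends.mps.is_available() else 'cpu')

foo = torch.rand(1, 3, 224, 224, device=device)      # create the tensor directly on the device
model = nn.Conv2d(3, 16, kernel_size=3).to(device)   # modules move the same way as with CUDA
out = model(foo)
print(out.device)                                    # mps:0 on Apple Silicon, otherwise cpu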
As a side note, it turns out PyTorch already supports all of the following devices, which was a bit of a surprise:
cpu, cuda, ipu, xpu, mkldnn, opengl, opencl, ideep, hip, ve, ort, mps, xla, lazy, vulkan, meta, hpu
Environment setup
To use this experimental feature, you need to meet three requirements:
- A Mac with an Apple Silicon chip (M1, M1 Pro, M1 Max, or M1 Ultra)
- An arm64 build of Python
- The latest nightly build of PyTorch
Assuming the machine is ready, we can download the arm64 build of Miniconda (the installer is named "Miniconda3 macOS Apple M1 64-bit bash"); a Python environment installed from it will be arm64. Download and install Miniconda with:
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh
chmod +x Miniconda3-latest-MacOSX-arm64.sh
./Miniconda3-latest-MacOSX-arm64.sh
Follow the prompts to complete the installation. Then create a virtual environment and confirm that the Python build is arm64 by checking that platform.uname()[4] returns arm64:
conda config --env --set always_yes true
conda create -n try-mps python=3.8
conda activate try-mps
python -c "import platform; print(platform.uname()[4])"
If that last command prints arm64, the Python build is correct and you can continue.
The third step is to install the nightly build of PyTorch inside the activated virtual environment:
python -m pip install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu
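To confirm that the nightly build was actually picked up, you can print the version string; a nightly wheel typically carries a .dev suffix (the exact string will differ from machine to machine):
python -c "import torch; print(torch.__version__)"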
Then check whether the MPS backend is available:
python -c "import torch;print(torch.backends.mps.is_built())"
If it prints True, the MPS backend is available and you can continue.
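One caveat: is_built() only tells you that this PyTorch binary was compiled with MPS support. The related check torch.backends.mps.is_available() also verifies that the current machine and OS can actually use MPS, so checking both is a reasonable habit. A small sketch:
import torch

# is_built(): this PyTorch binary was compiled with MPS support
# is_available(): MPS can actually be used on this machine right now
print(torch.backends.mps.is_built())
print(torch.backends.mps.is_available())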
Running MNIST
Starting from the MNIST example in the official PyTorch examples repository, I modified it to run in either CPU or MPS mode. The code is as follows:
from __future__ import print_function
import argparse
import time
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.optim.lr_scheduler import StepLR


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, 3, 1)
        self.conv2 = nn.Conv2d(32, 64, 3, 1)
        self.dropout1 = nn.Dropout(0.25)
        self.dropout2 = nn.Dropout(0.5)
        self.fc1 = nn.Linear(9216, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(x)
        x = self.conv2(x)
        x = F.relu(x)
        x = F.max_pool2d(x, 2)
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = F.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        output = F.log_softmax(x, dim=1)
        return output


def train(args, model, device, train_loader, optimizer, epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % args.log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))
            if args.dry_run:
                break


def main():
    # Training settings
    parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
    parser.add_argument('--batch-size', type=int, default=64, metavar='N',
                        help='input batch size for training (default: 64)')
    parser.add_argument('--epochs', type=int, default=1, metavar='N',
                        help='number of epochs to train (default: 1)')
    parser.add_argument('--lr', type=float, default=1.0, metavar='LR',
                        help='learning rate (default: 1.0)')
    parser.add_argument('--gamma', type=float, default=0.7, metavar='M',
                        help='Learning rate step gamma (default: 0.7)')
    parser.add_argument('--no-cuda', action='store_true', default=False,
                        help='disables CUDA training')
    parser.add_argument('--use_gpu', action='store_true', default=False,
                        help='enable MPS')
    parser.add_argument('--dry-run', action='store_true', default=False,
                        help='quickly check a single pass')
    parser.add_argument('--seed', type=int, default=1, metavar='S',
                        help='random seed (default: 1)')
    parser.add_argument('--log-interval', type=int, default=10, metavar='N',
                        help='how many batches to wait before logging training status')
    parser.add_argument('--save-model', action='store_true', default=False,
                        help='For Saving the current Model')
    args = parser.parse_args()
    use_gpu = args.use_gpu

    torch.manual_seed(args.seed)

    # Use the MPS device when requested, otherwise fall back to the CPU.
    device = torch.device("mps" if args.use_gpu else "cpu")

    train_kwargs = {'batch_size': args.batch_size}
    if use_gpu:
        gpu_kwargs = {'num_workers': 1,
                      'pin_memory': True,
                      'shuffle': True}
        train_kwargs.update(gpu_kwargs)

    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])
    dataset1 = datasets.MNIST('../data', train=True, download=True,
                              transform=transform)
    dataset2 = datasets.MNIST('../data', train=False,
                              transform=transform)
    train_loader = torch.utils.data.DataLoader(dataset1, **train_kwargs)

    model = Net().to(device)
    optimizer = optim.Adadelta(model.parameters(), lr=args.lr)

    scheduler = StepLR(optimizer, step_size=1, gamma=args.gamma)
    for epoch in range(1, args.epochs + 1):
        train(args, model, device, train_loader, optimizer, epoch)
        scheduler.step()


if __name__ == '__main__':
    t0 = time.time()
    main()
    t1 = time.time()
    print('time_cost:', t1 - t0)
To test on the CPU:
python main.py
To test with MPS:
python main.py --use_gpu
On my M1 machine, one epoch of MNIST training takes 149.6 s on the CPU and 18.4 s with MPS. That is a significant speedup, although part of the gap may simply be the CPU run being slow here. Either way, the takeaway is clear: if you can train with MPS, use it; the CPU is just too slow.
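For a quicker sanity check that does not require downloading MNIST, a rough micro-benchmark along these lines shows the same pattern. This is a sketch, not part of the original test: the matrix size and iteration count are arbitrary, and absolute numbers will vary from machine to machine.
import time
import torch

def bench(device, size=2048, iters=20):
    x = torch.rand(size, size, device=device)
    y = torch.rand(size, size, device=device)
    z = (x @ y).cpu()   # warm-up; copying back to the CPU forces any GPU work to finish
    t0 = time.time()
    for _ in range(iters):
        z = x @ y
    z = z.cpu()         # sync again before stopping the clock
    return time.time() - t0

print('cpu:', bench('cpu'))
if torch.backends.mps.is_available():
    print('mps:', bench('mps'))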