Preface
Once a deep learning model finishes training and moves into deployment and inference, its inference speed and performance become the main concern. The mainstream DL frameworks each ship their own performance analysis tools; this article focuses on PyTorch's profiler, torch.autograd.profiler.
Test Environment
- Ubuntu 18.04
- Anaconda3 + Python 3.7
- NVIDIA GPU / CUDA 10.2 (optional)
- PyTorch 1.6
Introduction to Profilers
A profiler is a performance analysis tool used to analyze the execution time, execution flow, and memory consumption of an application or model. Besides deep learning frameworks such as PyTorch and TensorFlow, platforms like NVIDIA CUDA and AMD ROCm also provide their own profilers, e.g. nvprof and rocprofiler.
The PyTorch Profiler
The PyTorch profiler lives in torch.autograd.profiler and currently supports:
- execution-time statistics for CPU/GPU ops
- input tensor shape recording for CPU/GPU ops
- per-op memory consumption statistics
The official PyTorch documentation on the profiler:
https://pytorch.org/docs/master/autograd.html
Using the Profiler to Measure CPU/GPU Op Execution Time
torch.autograd.profiler.profile(use_cuda=False...)
- CPU only: set use_cuda=False
- GPU mode: set use_cuda=True. Note: the model and its input tensors must already have been moved to GPU memory.
CPU-Only Mode
import time

import torch
from torchvision.models import resnet18

if __name__ == '__main__':
    model = resnet18(pretrained=False)
    device = torch.device('cpu')
    model.eval()
    model.to(device)
    dump_input = torch.ones(1, 3, 224, 224).to(device)

    # Warm-up runs so one-time setup costs do not skew the timings below.
    # No torch.cuda.synchronize() here: this is a pure CPU run.
    for _ in range(5):
        start = time.time()
        outputs = model(dump_input)
        end = time.time()
        print('Time: {:.2f}ms'.format((end - start) * 1000))

    with torch.autograd.profiler.profile(enabled=True, use_cuda=False, record_shapes=False, profile_memory=False) as prof:
        outputs = model(dump_input)
    print(prof.table())
Profiler output (CPU only):
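The raw table lists every call individually. For a quicker overview you can aggregate identical ops; a minimal sketch, assuming the prof object from the example above:

# Merge stats for identical ops and show the 10 most expensive by CPU time
print(prof.key_averages().table(sort_by='cpu_time_total', row_limit=10))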
GPU Mode
import time

import torch
from torchvision.models import resnet18

if __name__ == '__main__':
    model = resnet18(pretrained=False)
    device = torch.device('cuda')
    model.eval()
    model.to(device)
    dump_input = torch.ones(1, 3, 224, 224).to(device)

    # Warm-up runs; synchronize so the timer covers actual kernel execution
    for _ in range(5):
        start = time.time()
        outputs = model(dump_input)
        torch.cuda.synchronize()
        end = time.time()
        print('Time: {:.2f}ms'.format((end - start) * 1000))

    with torch.autograd.profiler.profile(enabled=True, use_cuda=True, record_shapes=False, profile_memory=False) as prof:
        outputs = model(dump_input)
    print(prof.table())
Profiler output (GPU):
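In GPU mode the same aggregation can be sorted by CUDA time instead, which surfaces the most expensive kernels; a sketch assuming prof from the GPU example above:

# Show the 10 ops with the largest total CUDA time
print(prof.key_averages().table(sort_by='cuda_time_total', row_limit=10))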
Visualizing Profiler Results with Chrome Trace
In the examples above, the profiler's results were printed directly to the terminal. To analyze the execution relationships between ops in more depth, the PyTorch profiler can also emit output in the Chrome trace JSON format, which the Chrome browser can then visualize:
Simply append prof.export_chrome_trace('./resnet_profile.json') to the end of the code above.
import time

import torch
from torchvision.models import resnet18

if __name__ == '__main__':
    model = resnet18(pretrained=False)
    device = torch.device('cuda')
    model.eval()
    model.to(device)
    dump_input = torch.ones(1, 3, 224, 224).to(device)

    # Warm-up
    for _ in range(5):
        start = time.time()
        outputs = model(dump_input)
        torch.cuda.synchronize()
        end = time.time()
        print('Time: {:.2f}ms'.format((end - start) * 1000))

    with torch.autograd.profiler.profile(enabled=True, use_cuda=True, record_shapes=False, profile_memory=False) as prof:
        outputs = model(dump_input)
    print(prof.table())

    # Export the trace for visualization in chrome://tracing
    prof.export_chrome_trace('./resnet_profile.json')
The generated JSON file:
Open the Chrome browser and type chrome://tracing into the address bar.
Load the JSON file generated by the profiler:
Controls: press the w, a, s, d keys to zoom and pan around the profiler results.
Analyzing Profiler Results
The material above covers how to use the PyTorch profiler. What we care about more is how to analyze the profiler's data: how to use the profiler to find a model's performance bottlenecks and draw conclusions.
Whole-Model Analysis
- CPU-side op list
- GPU-side op list
The Relationship Between CPU and GPU Ops
CNN/RNN/GAN/Transformer models are all ultimately composed of many ops. When a GPU device is used, the CPU side is responsible for scheduling ops and dispatching their computation to the GPU, while the GPU performs the actual computation. From a CUDA programming perspective, the host (CPU) launches GPU kernel functions; once a kernel is launched, the CPU and GPU execute asynchronously.
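This asynchrony is exactly why the earlier examples call torch.cuda.synchronize() before stopping the timer: without it, a naive timer only measures the CPU-side launch cost. A minimal sketch illustrating the difference, reusing resnet18 as in the examples above:

import time

import torch
from torchvision.models import resnet18

model = resnet18(pretrained=False).cuda().eval()
x = torch.ones(1, 3, 224, 224, device='cuda')

torch.cuda.synchronize()              # make sure setup work has finished
start = time.time()
y = model(x)                          # kernels are only queued here
t_launch = time.time() - start        # CPU-side launch/dispatch cost only
torch.cuda.synchronize()              # block until all queued kernels finish
t_total = time.time() - start         # true end-to-end execution time
print('launch: {:.2f}ms, total: {:.2f}ms'.format(t_launch * 1e3, t_total * 1e3))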
Difference Between an Op's wall_duration_time and self_time
- wall_duration_time: the op's total execution time, including time spent in any child ops it calls
- self_time: the op's own execution time, excluding time spent in the child ops it calls
Take the relu_ op as an example (relu_ is the in-place ReLU):
Call chain: relu_ ----> threshold_
- threshold_: wall_dur = 154.624us
- relu_: wall_dur = 179us; since relu_ in turn calls the threshold_ op, relu_'s self_time = 179 - 154 ≈ 25us
The relu_ op:
The threshold_ op:
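Both quantities can also be read programmatically from the aggregated events; a minimal sketch, assuming prof is the profiler object from the earlier ResNet examples:

# Print total vs. self CPU time (microseconds) for each aggregated op,
# assuming `prof` comes from one of the profiling runs above
for evt in prof.key_averages():
    print('{:<40} total: {:>10.1f}us   self: {:>10.1f}us'.format(
        evt.key, evt.cpu_time_total, evt.self_cpu_time_total))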
Analyzing Op Input Tensor Shapes
The PyTorch profiler can also record the input shapes of each op; pass record_shapes=True when creating the profiler to enable this.
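A sketch reusing the model and dump_input from the GPU example above: enable record_shapes=True and group the averaged stats by input shape, so the same op invoked with different input sizes shows up as separate rows:

with torch.autograd.profiler.profile(use_cuda=True, record_shapes=True) as prof:
    outputs = model(dump_input)

# One row per (op, input shape) combination, sorted by total CPU time
print(prof.key_averages(group_by_input_shape=True).table(
    sort_by='cpu_time_total', row_limit=10))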
DNN Model Training Profile
Code example: MNIST training. (Note: the torch.profiler API with schedule used below was introduced in PyTorch 1.8+, newer than the PyTorch 1.6 listed in the test environment.)
from __future__ import print_function
import argparse

import torch
import torch.cuda.nvtx as nvtx
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.optim.lr_scheduler import StepLR
from torchvision import datasets, transforms


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, 3, 1)
        self.conv2 = nn.Conv2d(32, 64, 3, 1)
        self.dropout1 = nn.Dropout(0.25)
        self.dropout2 = nn.Dropout(0.5)
        self.fc1 = nn.Linear(9216, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(x)
        x = self.conv2(x)
        x = F.relu(x)
        x = F.max_pool2d(x, 2)
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = F.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        output = F.log_softmax(x, dim=1)
        return output


# Profiler schedule: skip 1 step (wait), warm up for 1 step, then record
# 10 consecutive steps, once (repeat=1). Each completed cycle is written
# to ./profile in a format TensorBoard can display.
prof = torch.profiler.profile(
    schedule=torch.profiler.schedule(wait=1, warmup=1, active=10, repeat=1),
    on_trace_ready=torch.profiler.tensorboard_trace_handler('./profile'),
    record_shapes=True,
    with_stack=False)


def train(args, model, device, train_loader, optimizer, epoch):
    model.train()
    # Start the profiler once per epoch; prof.step() advances its schedule
    # once per iteration (start/stop inside the loop would break the schedule).
    prof.start()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        # NVTX ranges mark the forward/backward phases for tools like Nsight
        nvtx.range_push('Forward')
        output = model(data)
        loss = F.nll_loss(output, target)
        nvtx.range_pop()
        nvtx.range_push('Backward')
        loss.backward()
        optimizer.step()
        nvtx.range_pop()
        prof.step()
        if batch_idx % args.log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))
            if args.dry_run:
                break
    prof.stop()


def test(model, device, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            test_loss += F.nll_loss(output, target, reduction='sum').item()  # sum up batch loss
            pred = output.argmax(dim=1, keepdim=True)  # index of the max log-probability
            correct += pred.eq(target.view_as(pred)).sum().item()
    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))


def inference(model):
    print('Inference')
    model.eval()
    data = torch.rand(64, 1, 28, 28).cuda()
    output = model(data)
    print(output.size())


def main():
    # Training settings
    parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
    parser.add_argument('--batch-size', type=int, default=64, metavar='N',
                        help='input batch size for training (default: 64)')
    parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
                        help='input batch size for testing (default: 1000)')
    parser.add_argument('--epochs', type=int, default=14, metavar='N',
                        help='number of epochs to train (default: 14)')
    parser.add_argument('--lr', type=float, default=1.0, metavar='LR',
                        help='learning rate (default: 1.0)')
    parser.add_argument('--gamma', type=float, default=0.7, metavar='M',
                        help='Learning rate step gamma (default: 0.7)')
    parser.add_argument('--no-cuda', action='store_true', default=False,
                        help='disables CUDA training')
    parser.add_argument('--no-mps', action='store_true', default=False,
                        help='disables macOS GPU training')
    parser.add_argument('--dry-run', action='store_true', default=False,
                        help='quickly check a single pass')
    parser.add_argument('--seed', type=int, default=1, metavar='S',
                        help='random seed (default: 1)')
    parser.add_argument('--log-interval', type=int, default=10, metavar='N',
                        help='how many batches to wait before logging training status')
    parser.add_argument('--save-model', action='store_true', default=False,
                        help='For Saving the current Model')
    args = parser.parse_args()

    use_cuda = not args.no_cuda and torch.cuda.is_available()
    use_mps = not args.no_mps and torch.backends.mps.is_available()
    torch.manual_seed(args.seed)

    if use_cuda:
        device = torch.device("cuda")
    elif use_mps:
        device = torch.device("mps")
    else:
        device = torch.device("cpu")

    train_kwargs = {'batch_size': args.batch_size}
    test_kwargs = {'batch_size': args.test_batch_size}
    if use_cuda:
        cuda_kwargs = {'num_workers': 1,
                       'pin_memory': True,
                       'shuffle': True}
        train_kwargs.update(cuda_kwargs)
        test_kwargs.update(cuda_kwargs)

    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])
    dataset1 = datasets.MNIST('../data', train=True, download=True,
                              transform=transform)
    dataset2 = datasets.MNIST('../data', train=False,
                              transform=transform)
    train_loader = torch.utils.data.DataLoader(dataset1, **train_kwargs)
    test_loader = torch.utils.data.DataLoader(dataset2, **test_kwargs)

    model = Net().to(device)
    optimizer = optim.Adadelta(model.parameters(), lr=args.lr)
    scheduler = StepLR(optimizer, step_size=1, gamma=args.gamma)
    for epoch in range(1, args.epochs + 1):
        train(args, model, device, train_loader, optimizer, epoch)
        test(model, device, test_loader)
        scheduler.step()

    if args.save_model:
        torch.save(model.state_dict(), "mnist_cnn.pt")


if __name__ == '__main__':
    main()
After running, you will find that many JSON files have been generated, storing the profiling information:
Open them with TensorBoard:
tensorboard --logdir=./profile
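If TensorBoard does not show a profiler view, the PyTorch Profiler TensorBoard plugin is likely missing; it can be installed with pip install torch_tb_profiler, after which a PYTORCH_PROFILER tab should appear.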
Viewing the run for training iteration = 2:
Three phases can be observed: the forward pass, the backward pass, and optimizer.step (the weight update).