As networks grow deeper, a degradation problem appears: adding more layers often makes the training error rise rather than fall, and performance drops sharply. ResNet, introduced by Kaiming He et al. in 2015, addresses this by adding an identity mapping (shortcut connection): instead of learning the desired mapping H(x) directly, each block learns the residual F(x) = H(x) - x and outputs F(x) + x. This made it possible to train a 152-layer network that won the ILSVRC 2015 classification task with a top-5 error rate of 3.57%, while using far fewer parameters than VGGNet.
Reference: Deep Residual Learning for Image Recognition
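To make the residual formulation concrete, here is a minimal sketch (my own illustration, not code from the implementation below); residual_branch is a hypothetical stand-in for a block's stacked convolutions:

import paddle
import paddle.nn.functional as F

def residual_forward(x, residual_branch):
    # residual_branch plays the role of F(x); x rides over the identity shortcut
    return F.relu(residual_branch(x) + x)

If the optimal mapping is close to the identity, the block only has to drive F(x) toward zero, which is easier than learning H(x) from scratch; this is why the shortcut eases the degradation problem.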
Residual block structure:
Code implementation:
import paddle
import paddle.nn.functional as F  # functional ops used in forward passes, e.g. conv2d, relu ...
import numpy as np
from paddle.vision.transforms import Compose, Resize, Transpose, Normalize, ToTensor
from paddle.vision.datasets import Cifar10
# Build the ResNet network.
# Sequential: an ordered container; sub-Layers are added in the order they are
# passed to the constructor, either as Layers or as iterable (name, Layer) tuples.
from paddle.nn import Sequential, Conv2D, ReLU, MaxPool2D, Linear, Flatten, BatchNorm2D, AvgPool2D
# Build the model
class Residual(paddle.nn.Layer):
    def __init__(self, in_channel, out_channel, use_conv1x1=False, stride=1):
        super().__init__()
        self.conv1 = Conv2D(in_channel, out_channel, kernel_size=3, padding=1, stride=stride)
        self.conv2 = Conv2D(out_channel, out_channel, kernel_size=3, padding=1)
        if use_conv1x1:
            # 1x1 convolution on the shortcut so its shape matches the main branch
            self.conv3 = Conv2D(in_channel, out_channel, kernel_size=1, stride=stride)
        else:
            self.conv3 = None
        self.batchNorm1 = BatchNorm2D(out_channel)
        self.batchNorm2 = BatchNorm2D(out_channel)

    def forward(self, x):
        y = F.relu(self.batchNorm1(self.conv1(x)))
        y = self.batchNorm2(self.conv2(y))
        if self.conv3:
            x = self.conv3(x)
        out = F.relu(y + x)  # the key line: add the shortcut to the residual branch
        return out
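As a quick sanity check (my own snippet, not from the original post), you can push a random tensor through a block and confirm the shapes; with stride=2 and the 1x1 shortcut, the spatial size halves while the channel count changes:

x = paddle.randn([1, 64, 24, 24])
print(Residual(64, 64)(x).shape)                               # [1, 64, 24, 24]
print(Residual(64, 128, use_conv1x1=True, stride=2)(x).shape)  # [1, 128, 12, 12]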
Stacking residual blocks into the full network (note: although the class below is named ResNet50, it stacks two-convolution basic blocks in a [3, 4, 6, 3] layout, which actually matches the ResNet-34 architecture; the true ResNet-50 uses three-layer bottleneck blocks):
def ResNetBlock(in_channel, out_channel, num_layers, is_first=False):
    if is_first:
        assert in_channel == out_channel
    block_list = []
    for i in range(num_layers):
        if i == 0 and not is_first:
            # the first block of each later stage halves the feature map and changes channels
            block_list.append(Residual(in_channel, out_channel, use_conv1x1=True, stride=2))
        else:
            block_list.append(Residual(out_channel, out_channel))
    resNetBlock = Sequential(*block_list)  # the * operator unpacks the list into positional arguments
    return resNetBlock
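# A quick, illustrative check (not in the original post): the first stage keeps
# the resolution, while each later stage halves it and changes the channels.
_x = paddle.randn([1, 64, 24, 24])
print(ResNetBlock(64, 64, 3, is_first=True)(_x).shape)  # [1, 64, 24, 24]
print(ResNetBlock(64, 128, 4)(_x).shape)                # [1, 128, 12, 12]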
class ResNet50(paddle.nn.Layer):
    def __init__(self, num_classes=10):
        super().__init__()
        # stem: 7x7 conv + BN + ReLU + 3x3 max pool, shrinking a 96x96 input to 24x24
        self.b1 = Sequential(
            Conv2D(3, 64, kernel_size=7, stride=2, padding=3),
            BatchNorm2D(64),
            ReLU(),
            MaxPool2D(kernel_size=3, stride=2, padding=1))
        self.b2 = ResNetBlock(64, 64, 3, is_first=True)
        self.b3 = ResNetBlock(64, 128, 4)
        self.b4 = ResNetBlock(128, 256, 6)
        self.b5 = ResNetBlock(256, 512, 3)
        # with 96x96 inputs the feature map is 3x3 here, so 2x2 average pooling leaves 1x1x512
        self.AvgPool = AvgPool2D(2)
        self.flatten = Flatten()
        self.Linear = Linear(512, num_classes)

    def forward(self, x):
        x = self.b1(x)
        x = self.b2(x)
        x = self.b3(x)
        x = self.b4(x)
        x = self.b5(x)
        x = self.AvgPool(x)
        x = self.flatten(x)
        x = self.Linear(x)
        return x
resnet = ResNet50(num_classes=10)
from paddle.static import InputSpec
input = InputSpec([None, 3, 96, 96], 'float32', 'image')
label = InputSpec([None, 1], 'int64', 'label')
model = paddle.Model(resnet, input, label)
model.summary()
The training code is shown below:
# Compose: chain the preprocessing steps below as a list
# Resize: resize the image
# Transpose: reorder channels, e.g. HWC (image) -> CHW (network input)
# Normalize: normalize the image data
# ToTensor: convert a PIL.Image or numpy.ndarray to a paddle.Tensor
# CIFAR-10 per-channel statistics, computed by hand: mean = [125.31, 122.95, 113.86],
# std = [62.99, 62.08, 66.7]; see http://www.reibang.com/p/a3f3ffc3cac1
# Note: Transpose already moves the data to CHW, so ToTensor is given
# data_format='HWC' to keep it from transposing a second time.
t = Compose([Resize(size=96),
             Normalize(mean=[125.31, 122.95, 113.86], std=[62.99, 62.08, 66.7], data_format='HWC'),
             Transpose(order=(2, 0, 1)),
             ToTensor(data_format='HWC')])
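# What does the pipeline produce? A small illustrative check (not in the
# original post) on a fake uint8 image; the result should be a float CHW tensor.
_fake_img = np.random.randint(0, 256, size=(32, 32, 3), dtype='uint8')
print(t(_fake_img).shape)  # expect [3, 96, 96]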
train_dataset = Cifar10(mode='train', transform=t, backend='cv2')
test_dataset = Cifar10(mode='test', transform=t, backend='cv2')
BATCH_SIZE = 256
train_loader = paddle.io.DataLoader(train_dataset, shuffle=True, batch_size=BATCH_SIZE)
test_loader = paddle.io.DataLoader(test_dataset, batch_size=BATCH_SIZE)
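# Peek at one batch (illustrative snippet): images come out as NCHW float tensors.
_images, _labels = next(iter(train_loader))
print(_images.shape)  # expect [256, 3, 96, 96]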
# Prepare for training: set the optimizer, loss function, and accuracy metric
learning_rate = 0.001
loss_fn = paddle.nn.CrossEntropyLoss()
opt = paddle.optimizer.Adam(learning_rate=learning_rate, parameters=model.parameters())
model.prepare(optimizer=opt, loss=loss_fn, metrics=paddle.metric.Accuracy())
# Launch training: pass the train and eval loaders, set the number of epochs,
# the evaluation frequency, and the logging verbosity. Since DataLoaders are
# passed in, their batch size (256) is used and fit's batch_size argument is ignored.
model.fit(train_loader, test_loader, epochs=20, eval_freq=5, verbose=1)
model.evaluate(test_loader, verbose=1)
Training result: accuracy on the test set is roughly 78%, as the log below shows:
Epoch 20/20
step 196/196 [==============================] - loss: 0.0529 - acc: 0.9840 - 318ms/step
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 40/40 [==============================] - loss: 0.1676 - acc: 0.7816 - 198ms/step
PaddlePaddle also offers another way to build ResNet: using the framework's built-in ResNet class directly. Sample code is shown below:
from paddle.vision.models import ResNet
from paddle.vision.models.resnet import BottleneckBlock
# replaces the hand-written model: resnet = ResNet50(num_classes=10)
resnet = ResNet(BottleneckBlock, 50, num_classes=10)
from paddle.static import InputSpec
input = InputSpec([None, 3, 96, 96], 'float32', 'image')
label = InputSpec([None, 1], 'int64', 'label')
model = paddle.Model(resnet, input, label)
model.summary()
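Paddle also ships a one-line convenience constructor for the same depth-50 bottleneck network; to the best of my knowledge the call below is equivalent (a sketch, assuming paddle.vision.models.resnet50 accepts num_classes):

from paddle.vision.models import resnet50
resnet = resnet50(num_classes=10)  # builds the BottleneckBlock, depth-50 variant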
Run results:
Epoch 20/20
step 196/196 [==============================] - loss: 0.0661 - acc: 0.9743 - 467ms/step
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 40/40 [==============================] - loss: 0.7514 - acc: 0.7846 - 235ms/step