公益AI - TASK06 - Batch Normalization and Residual Networks; Convex Optimization; Gradient Descent

Learning objectives
1. Batch normalization and residual networks
2. Convex optimization
3. Gradient descent

一介劫、批量歸一化和殘差網(wǎng)絡(luò)
我的理解:把一堆凌亂的數(shù)據(jù)集成比例的縮放、改造案淋,生成一個(gè)均值mean為0方差std為1的操作座韵,批量batch是對(duì)應(yīng)喂數(shù)據(jù)是一批一批的。殘差網(wǎng)絡(luò)將神經(jīng)網(wǎng)絡(luò)鏈上的傳參跳躍傳導(dǎo),貌似這樣可以讓神經(jīng)網(wǎng)絡(luò)可深可淺誉碴。

Standardizing the input (shallow models)
After processing, every feature has mean 0 and standard deviation 1 across all samples in the dataset.
Standardizing the input makes the distributions of the different features similar.
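A minimal sketch of this standardization (my own example, not from the course code; the matrix X is hypothetical):

import numpy as np

X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 400.0]])  # 3 samples, 2 features on very different scales
X_std = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize each feature independently
print(X_std.mean(axis=0))  # ~[0, 0]
print(X_std.std(axis=0))   # [1, 1]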

Batch normalization (deep models)
Using the mean and standard deviation computed on each mini-batch, batch normalization continually adjusts the intermediate outputs of the network, making the values of the intermediate outputs at every layer more stable.

1. Batch normalization for fully connected layers
Position: between the affine transform and the activation function in a fully connected layer.
Fully connected layer: $\boldsymbol{x} = W\boldsymbol{u} + \boldsymbol{b}$, output $= \phi(\text{BN}(\boldsymbol{x}))$.

For a mini-batch $\mathcal{B} = \{\boldsymbol{x}^{(1)}, \ldots, \boldsymbol{x}^{(m)}\}$:

$$\boldsymbol{\mu}_\mathcal{B} = \frac{1}{m}\sum_{i=1}^{m} \boldsymbol{x}^{(i)}, \qquad \boldsymbol{\sigma}_\mathcal{B}^2 = \frac{1}{m}\sum_{i=1}^{m} \big(\boldsymbol{x}^{(i)} - \boldsymbol{\mu}_\mathcal{B}\big)^2$$

$$\hat{\boldsymbol{x}}^{(i)} = \frac{\boldsymbol{x}^{(i)} - \boldsymbol{\mu}_\mathcal{B}}{\sqrt{\boldsymbol{\sigma}_\mathcal{B}^2 + \epsilon}}, \qquad \boldsymbol{y}^{(i)} = \boldsymbol{\gamma} \odot \hat{\boldsymbol{x}}^{(i)} + \boldsymbol{\beta}$$

where $\epsilon > 0$ is a small constant, and the scale $\boldsymbol{\gamma}$ and shift $\boldsymbol{\beta}$ are learned parameters.

2. Batch normalization for convolutional layers
Position: after the convolution, before applying the activation function.
If the convolution outputs multiple channels, we batch-normalize each channel separately, and each channel has its own scale and shift parameters. Computation: for a single channel with batch size m and a p×q convolution output, batch normalization is applied jointly to all m×p×q elements of that channel, using the same mean and variance.
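A quick shape check of the per-channel statistics (my addition; assumes a recent PyTorch where mean/var accept a tuple of dims): for an activation of shape (m, C, p, q), each channel's mean and variance are computed over its m×p×q elements.

import torch

X = torch.randn(4, 3, 5, 5)  # (m, C, p, q): batch of 4, 3 channels, 5x5 feature maps
mean = X.mean(dim=(0, 2, 3), keepdim=True)                # shape (1, 3, 1, 1): one mean per channel
var = X.var(dim=(0, 2, 3), unbiased=False, keepdim=True)  # one variance per channel
X_hat = (X - mean) / torch.sqrt(var + 1e-5)               # broadcasts over batch, height, width
print(X_hat.mean(dim=(0, 2, 3)))  # ~0 for every channel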

3. Batch normalization at prediction time
Training: statistics are computed batch by batch, with a mean and variance per batch.
Prediction: use moving averages to estimate the mean and variance of the whole training set.
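Concretely, the moving averages are exponential moving averages of the per-batch statistics; with momentum $m$ (0.9 in the code below):

$$\text{moving\_mean} \leftarrow m \cdot \text{moving\_mean} + (1 - m) \cdot \boldsymbol{\mu}_\mathcal{B}, \qquad \text{moving\_var} \leftarrow m \cdot \text{moving\_var} + (1 - m) \cdot \boldsymbol{\sigma}_\mathcal{B}^2$$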

[Code]

# Implementation from scratch
import time
import torch
from torch import nn, optim
import torch.nn.functional as F
import torchvision
import sys
sys.path.append("/home/kesci/input/") 
import d2lzh1981 as d2l
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

def batch_norm(is_training, X, gamma, beta, moving_mean, moving_var, eps, momentum):
    # Determine whether we are in training mode or prediction mode
    if not is_training:
        # In prediction mode, use the moving-average mean and variance passed in
        X_hat = (X - moving_mean) / torch.sqrt(moving_var + eps)
    else:
        assert len(X.shape) in (2, 4)
        if len(X.shape) == 2:
            # Fully connected layer: compute mean and variance along the feature dimension
            mean = X.mean(dim=0)
            var = ((X - mean) ** 2).mean(dim=0)
        else:
            # 2-D convolutional layer: compute mean and variance per channel (dim=1).
            # Keep X's shape so broadcasting works below
            mean = X.mean(dim=0, keepdim=True).mean(dim=2, keepdim=True).mean(dim=3, keepdim=True)
            var = ((X - mean) ** 2).mean(dim=0, keepdim=True).mean(dim=2, keepdim=True).mean(dim=3, keepdim=True)
        # In training mode, standardize with the current batch's mean and variance
        X_hat = (X - mean) / torch.sqrt(var + eps)
        # Update the moving averages of the mean and variance
        moving_mean = momentum * moving_mean + (1.0 - momentum) * mean
        moving_var = momentum * moving_var + (1.0 - momentum) * var
    Y = gamma * X_hat + beta  # scale and shift
    return Y, moving_mean, moving_var

class BatchNorm(nn.Module):
    def __init__(self, num_features, num_dims):
        super(BatchNorm, self).__init__()
        if num_dims == 2:
            shape = (1, num_features)  # fully connected layer: one parameter per output neuron
        else:
            shape = (1, num_features, 1, 1)  # convolutional layer: one parameter per channel
        # Scale (gamma) and shift (beta) parameters that take part in gradient updates,
        # initialized to 1 and 0 respectively
        self.gamma = nn.Parameter(torch.ones(shape))
        self.beta = nn.Parameter(torch.zeros(shape))
        # Variables not involved in gradient updates, initialized to 0 in (CPU) memory
        self.moving_mean = torch.zeros(shape)
        self.moving_var = torch.zeros(shape)

    def forward(self, X):
        # If X is not in CPU memory, copy moving_mean and moving_var to the device X is on
        if self.moving_mean.device != X.device:
            self.moving_mean = self.moving_mean.to(X.device)
            self.moving_var = self.moving_var.to(X.device)
        # Save the updated moving_mean and moving_var. A Module's training attribute is
        # True by default; calling .eval() sets it to False
        Y, self.moving_mean, self.moving_var = batch_norm(self.training, 
            X, self.gamma, self.beta, self.moving_mean,
            self.moving_var, eps=1e-5, momentum=0.9)
        return Y
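A quick sanity check I added (not in the original notes; assumes a recent PyTorch where mean/var accept tuple dims): in training mode the layer's output should have roughly zero mean and unit variance per channel, since gamma starts at 1 and beta at 0.

bn = BatchNorm(3, num_dims=4)
bn.train()                    # training mode: normalize with batch statistics
X = torch.randn(8, 3, 6, 6)
Y = bn(X)
print(Y.mean(dim=(0, 2, 3)))                 # ~0 for each channel
print(Y.var(dim=(0, 2, 3), unbiased=False))  # ~1 for each channel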

# Application: LeNet with batch normalization
net = nn.Sequential(
            nn.Conv2d(1, 6, 5), # in_channels, out_channels, kernel_size
            BatchNorm(6, num_dims=4),
            nn.Sigmoid(),
            nn.MaxPool2d(2, 2), # kernel_size, stride
            nn.Conv2d(6, 16, 5),
            BatchNorm(16, num_dims=4),
            nn.Sigmoid(),
            nn.MaxPool2d(2, 2),
            d2l.FlattenLayer(),
            nn.Linear(16*4*4, 120),
            BatchNorm(120, num_dims=2),
            nn.Sigmoid(),
            nn.Linear(120, 84),
            BatchNorm(84, num_dims=2),
            nn.Sigmoid(),
            nn.Linear(84, 10)
        )
print(net)

# batch_size = 256
# use a smaller batch size when training on CPU
batch_size = 16

def load_data_fashion_mnist(batch_size, resize=None, root='/home/kesci/input/FashionMNIST2065'):
    """Download the fashion mnist dataset and then load into memory."""
    trans = []
    if resize:
        trans.append(torchvision.transforms.Resize(size=resize))
    trans.append(torchvision.transforms.ToTensor())
    
    transform = torchvision.transforms.Compose(trans)
    mnist_train = torchvision.datasets.FashionMNIST(root=root, train=True, download=True, transform=transform)
    mnist_test = torchvision.datasets.FashionMNIST(root=root, train=False, download=True, transform=transform)

    train_iter = torch.utils.data.DataLoader(mnist_train, batch_size=batch_size, shuffle=True, num_workers=2)
    test_iter = torch.utils.data.DataLoader(mnist_test, batch_size=batch_size, shuffle=False, num_workers=2)

    return train_iter, test_iter
train_iter, test_iter = load_data_fashion_mnist(batch_size)

lr, num_epochs = 0.001, 5
optimizer = torch.optim.Adam(net.parameters(), lr=lr)
d2l.train_ch5(net, train_iter, test_iter, batch_size, optimizer, device, num_epochs)

# Concise implementation (use nn.BatchNorm2d / nn.BatchNorm1d from torch.nn directly)
net = nn.Sequential(
            nn.Conv2d(1, 6, 5), # in_channels, out_channels, kernel_size
            nn.BatchNorm2d(6),
            nn.Sigmoid(),
            nn.MaxPool2d(2, 2), # kernel_size, stride
            nn.Conv2d(6, 16, 5),
            nn.BatchNorm2d(16),
            nn.Sigmoid(),
            nn.MaxPool2d(2, 2),
            d2l.FlattenLayer(),
            nn.Linear(16*4*4, 120),
            nn.BatchNorm1d(120),
            nn.Sigmoid(),
            nn.Linear(120, 84),
            nn.BatchNorm1d(84),
            nn.Sigmoid(),
            nn.Linear(84, 10)
        )

optimizer = torch.optim.Adam(net.parameters(), lr=lr)
d2l.train_ch5(net, train_iter, test_iter, batch_size, optimizer, device, num_epochs)

Residual Networks (ResNet)
The problem with depth: once a deep CNN reaches a certain depth, simply stacking more layers does not improve classification accuracy further; instead, convergence slows down and accuracy degrades.
Residual block (Residual Block)
Identity mapping:
left: f(x) = x
right: f(x) - x = 0 (fitting the residual makes small deviations from the identity mapping easier to capture)

(figure: structure of a residual block, with a skip connection adding the input to the block's output)

In a residual block, the input can propagate forward faster through the cross-layer data path (the skip connection).
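In formulas (restating the point above): if the desired underlying mapping is $H(\boldsymbol{x})$, the residual block learns $F(\boldsymbol{x}) = H(\boldsymbol{x}) - \boldsymbol{x}$ and outputs $F(\boldsymbol{x}) + \boldsymbol{x}$. When the identity mapping is close to optimal, pushing the weights of $F$ toward zero is easier than fitting an identity mapping through a stack of nonlinear layers.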

class Residual(nn.Module):  # This class is saved in the d2lzh_pytorch package for later use
    # You can set the number of output channels, whether to use an extra 1x1 convolution
    # to change the channel count, and the stride of the convolution.
    def __init__(self, in_channels, out_channels, use_1x1conv=False, stride=1):
        super(Residual, self).__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1, stride=stride)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
        if use_1x1conv:
            self.conv3 = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride)
        else:
            self.conv3 = None
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.bn2 = nn.BatchNorm2d(out_channels)

    def forward(self, X):
        Y = F.relu(self.bn1(self.conv1(X)))
        Y = self.bn2(self.conv2(Y))
        if self.conv3:
            X = self.conv3(X)
        return F.relu(Y + X)

blk = Residual(3, 3)
X = torch.rand((4, 3, 6, 6))
blk(X).shape # torch.Size([4, 3, 6, 6])

blk = Residual(3, 6, use_1x1conv=True, stride=2)
blk(X).shape # torch.Size([4, 6, 3, 3])

# ResNet model
# convolution (64 channels, 7x7 kernel, stride 2, padding 3)
# batch normalization
# max pooling (3x3, stride 2)
# 4 residual modules (a stride-2 residual block at the start of each module halves height and width)
# global average pooling
# fully connected layer

net = nn.Sequential(
        nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3),
        nn.BatchNorm2d(64), 
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2, padding=1))

def resnet_block(in_channels, out_channels, num_residuals, first_block=False):
    if first_block:
        assert in_channels == out_channels # the first module's output channels match its input channels
    blk = []
    for i in range(num_residuals):
        if i == 0 and not first_block:
            blk.append(Residual(in_channels, out_channels, use_1x1conv=True, stride=2))
        else:
            blk.append(Residual(out_channels, out_channels))
    return nn.Sequential(*blk)

net.add_module("resnet_block1", resnet_block(64, 64, 2, first_block=True))
net.add_module("resnet_block2", resnet_block(64, 128, 2))
net.add_module("resnet_block3", resnet_block(128, 256, 2))
net.add_module("resnet_block4", resnet_block(256, 512, 2))

net.add_module("global_avg_pool", d2l.GlobalAvgPool2d()) # GlobalAvgPool2d的輸出: (Batch, 512, 1, 1)
net.add_module("fc", nn.Sequential(d2l.FlattenLayer(), nn.Linear(512, 10))) 

X = torch.rand((1, 1, 224, 224))
for name, layer in net.named_children():
    X = layer(X)
    print(name, ' output shape:\t', X.shape)

lr, num_epochs = 0.001, 5
optimizer = torch.optim.Adam(net.parameters(), lr=lr)
d2l.train_ch5(net, train_iter, test_iter, batch_size, optimizer, device, num_epochs)

Densely Connected Networks (DenseNet)

(figure: cross-layer connections in DenseNet — each layer's output is concatenated with its input in the channel dimension)

Main building blocks
Dense block: defines how inputs and outputs are concatenated.
Transition layer: controls the number of channels so that it does not grow too large.

Dense block code

def conv_block(in_channels, out_channels):
    blk = nn.Sequential(nn.BatchNorm2d(in_channels), 
                        nn.ReLU(),
                        nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1))
    return blk

class DenseBlock(nn.Module):
    def __init__(self, num_convs, in_channels, out_channels):
        super(DenseBlock, self).__init__()
        net = []
        for i in range(num_convs):
            in_c = in_channels + i * out_channels
            net.append(conv_block(in_c, out_channels))
        self.net = nn.ModuleList(net)
        self.out_channels = in_channels + num_convs * out_channels # compute the number of output channels

    def forward(self, X):
        for blk in self.net:
            Y = blk(X)
            X = torch.cat((X, Y), dim=1)  # concatenate input and output along the channel dimension
        return X
blk = DenseBlock(2, 3, 10)
X = torch.rand(4, 3, 8, 8)
Y = blk(X)
Y.shape # torch.Size([4, 23, 8, 8])

# Transition layer
# 1x1 convolution: reduces the number of channels
# average pooling with stride 2: halves the height and width
def transition_block(in_channels, out_channels):
    blk = nn.Sequential(
            nn.BatchNorm2d(in_channels), 
            nn.ReLU(),
            nn.Conv2d(in_channels, out_channels, kernel_size=1),
            nn.AvgPool2d(kernel_size=2, stride=2))
    return blk

blk = transition_block(23, 10)
blk(Y).shape # torch.Size([4, 10, 4, 4])

# DenseNet model

net = nn.Sequential(
        nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3),
        nn.BatchNorm2d(64), 
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2, padding=1))

num_channels, growth_rate = 64, 32  # num_channels is the current number of channels
num_convs_in_dense_blocks = [4, 4, 4, 4]

for i, num_convs in enumerate(num_convs_in_dense_blocks):
    DB = DenseBlock(num_convs, num_channels, growth_rate)
    net.add_module("DenseBlosk_%d" % i, DB)
    # 上一個(gè)稠密塊的輸出通道數(shù)
    num_channels = DB.out_channels
    # 在稠密塊之間加入通道數(shù)減半的過渡層
    if i != len(num_convs_in_dense_blocks) - 1:
        net.add_module("transition_block_%d" % i, transition_block(num_channels, num_channels // 2))
        num_channels = num_channels // 2

net.add_module("BN", nn.BatchNorm2d(num_channels))
net.add_module("relu", nn.ReLU())
net.add_module("global_avg_pool", d2l.GlobalAvgPool2d()) # GlobalAvgPool2d的輸出: (Batch, num_channels, 1, 1)
net.add_module("fc", nn.Sequential(d2l.FlattenLayer(), nn.Linear(num_channels, 10))) 

X = torch.rand((1, 1, 96, 96))
for name, layer in net.named_children():
    X = layer(X)
    print(name, ' output shape:\t', X.shape)

# batch_size = 256
batch_size = 16
# if you hit an "out of memory" error, reduce batch_size or the resize value
train_iter, test_iter =load_data_fashion_mnist(batch_size, resize=96)
lr, num_epochs = 0.001, 5
optimizer = torch.optim.Adam(net.parameters(), lr=lr)
d2l.train_ch5(net, train_iter, test_iter, batch_size, optimizer, device, num_epochs)

2. Convex Optimization

Optimization and deep learning
Optimization vs. estimation
Although optimization methods can minimize the loss function in deep learning, the goal optimization reaches is not, at heart, the same as the goal of deep learning.
Optimization's target: the loss on the training set
Deep learning's target: the loss on the test set (generalization)

%matplotlib inline
import sys
sys.path.append('/home/kesci/input')
import d2lzh1981 as d2l
from mpl_toolkits import mplot3d # 3-D plotting
import numpy as np

def f(x): return x * np.cos(np.pi * x)
def g(x): return f(x) + 0.2 * np.cos(5 * np.pi * x)

d2l.set_figsize((5, 3))
x = np.arange(0.5, 1.5, 0.01)
fig_f, = d2l.plt.plot(x, f(x),label="train error")
fig_g, = d2l.plt.plot(x, g(x),'--', c='purple', label="test error")
fig_f.axes.annotate('empirical risk', (1.0, -1.2), (0.5, -1.1),arrowprops=dict(arrowstyle='->'))
fig_g.axes.annotate('expected risk', (1.1, -1.05), (0.95, -0.5),arrowprops=dict(arrowstyle='->'))
d2l.plt.xlabel('x')
d2l.plt.ylabel('risk')
d2l.plt.legend(loc="upper right")

Challenges for optimization in deep learning
① local minima
② saddle points
③ vanishing gradients
Local minima

$$f(x) = x \cos(\pi x)$$

def f(x):
    return x * np.cos(np.pi * x)

d2l.set_figsize((4.5, 2.5))
x = np.arange(-1.0, 2.0, 0.1)
fig,  = d2l.plt.plot(x, f(x))
fig.axes.annotate('local minimum', xy=(-0.3, -0.25), xytext=(-0.77, -1.0),
                  arrowprops=dict(arrowstyle='->'))
fig.axes.annotate('global minimum', xy=(1.1, -0.95), xytext=(0.6, 0.8),
                  arrowprops=dict(arrowstyle='->'))
d2l.plt.xlabel('x')
d2l.plt.ylabel('f(x)');

Saddle points

x = np.arange(-2.0, 2.0, 0.1)
fig, = d2l.plt.plot(x, x**3)
fig.axes.annotate('saddle point', xy=(0, -0.2), xytext=(-0.52, -5.0),
                  arrowprops=dict(arrowstyle='->'))
d2l.plt.xlabel('x')
d2l.plt.ylabel('f(x)');

A two-dimensional example: $f(x, y) = x^2 - y^2$ has a saddle point at the origin (marked in red in the plot below).
x, y = np.mgrid[-1: 1: 31j, -1: 1: 31j]
z = x**2 - y**2

d2l.set_figsize((6, 4))
ax = d2l.plt.figure().add_subplot(111, projection='3d')
ax.plot_wireframe(x, y, z, **{'rstride': 2, 'cstride': 2})
ax.plot([0], [0], [0], 'ro', markersize=10)
ticks = [-1,  0, 1]
d2l.plt.xticks(ticks)
d2l.plt.yticks(ticks)
ax.set_zticks(ticks)
d2l.plt.xlabel('x')
d2l.plt.ylabel('y');

Vanishing gradients

x = np.arange(-2.0, 5.0, 0.01)
fig, = d2l.plt.plot(x, np.tanh(x))
d2l.plt.xlabel('x')
d2l.plt.ylabel('f(x)')
fig.axes.annotate('vanishing gradient', (4, 1), (2, 0.0) ,arrowprops=dict(arrowstyle='->'))

Convexity

Basics
Sets

(figure: examples of convex and non-convex sets — the segment between any two points of a convex set lies inside the set)

Functions

A function $f$ is convex if, for all $x, x'$ in its domain and all $\lambda \in [0, 1]$:

$$\lambda f(x) + (1 - \lambda) f(x') \geq f(\lambda x + (1 - \lambda) x')$$
def f(x):
    return 0.5 * x**2  # Convex

def g(x):
    return np.cos(np.pi * x)  # Nonconvex

def h(x):
    return np.exp(0.5 * x)  # Convex

x, segment = np.arange(-2, 2, 0.01), np.array([-1.5, 1])
d2l.use_svg_display()
_, axes = d2l.plt.subplots(1, 3, figsize=(9, 3))

for ax, func in zip(axes, [f, g, h]):
    ax.plot(x, func(x))
    ax.plot(segment, func(segment),'--', color="purple")
    # d2l.plt.plot([x, segment], [func(x), func(segment)], axes=ax)

Jensen's inequality

For a convex function $f$, with weights $\alpha_i \geq 0$ summing to 1:

$$\sum_i \alpha_i f(x_i) \geq f\Big(\sum_i \alpha_i x_i\Big), \qquad E_x[f(x)] \geq f(E_x[x])$$
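A tiny numeric check of the expectation form (my own sketch, assuming NumPy >= 1.17 for default_rng), using the convex function $f(x) = x^2$:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100000)  # samples with E[x] ~ 0, Var[x] ~ 1
f = lambda t: t ** 2         # a convex function
print(f(x).mean())           # E[f(x)] ~ 1
print(f(x.mean()))           # f(E[x]) ~ 0, so E[f(x)] >= f(E[x]) as Jensen predicts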

Properties
① no local minima
② relationship with convex sets
③ second-order condition

No local minima

A local minimum of a convex function is also a global minimum.
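Sketch of the standard argument: suppose $x^\ast$ were a local but not global minimum, so some $y$ satisfies $f(y) < f(x^\ast)$. For $\lambda \in (0, 1)$ close to 1, the point $z = \lambda x^\ast + (1 - \lambda) y$ lies arbitrarily close to $x^\ast$, yet convexity gives $f(z) \leq \lambda f(x^\ast) + (1 - \lambda) f(y) < f(x^\ast)$, contradicting local minimality.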

Relationship with convex sets

For a convex function $f$, every sublevel set $S_b = \{x \mid f(x) \leq b\}$ is convex. The example below plots the non-convex function $f(x, y) = x^2 + 0.5\cos(2\pi y)$, whose sublevel sets (see the contour at the bottom of the plot) are not convex.
x, y = np.meshgrid(np.linspace(-1, 1, 101), np.linspace(-1, 1, 101),
                   indexing='ij')

z = x**2 + 0.5 * np.cos(2 * np.pi * y)

# Plot the 3D surface
d2l.set_figsize((6, 4))
ax = d2l.plt.figure().add_subplot(111, projection='3d')
ax.plot_wireframe(x, y, z, **{'rstride': 10, 'cstride': 10})
ax.contour(x, y, z, offset=-1)
ax.set_zlim(-1, 1.5)

# Adjust labels
for func in [d2l.plt.xticks, d2l.plt.yticks, ax.set_zticks]:
    func([-1, 0, 1])

Convex functions and second derivatives

A twice-differentiable function $f$ of one variable is convex if and only if $f''(x) \geq 0$ everywhere (in several dimensions: the Hessian is positive semidefinite).
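One direction of the standard argument, which the plot below illustrates with the points $a$, $x$, $b$: for $a < x < b$, write $x = \lambda a + (1 - \lambda) b$ with $\lambda = \frac{b - x}{b - a}$. The mean value theorem gives $\alpha \in [a, x]$ and $\beta \in [x, b]$ such that

$$f'(\alpha) = \frac{f(x) - f(a)}{x - a}, \qquad f'(\beta) = \frac{f(b) - f(x)}{b - x}.$$

If $f'' \geq 0$ then $f'(\beta) \geq f'(\alpha)$, and rearranging yields $\lambda f(a) + (1 - \lambda) f(b) \geq f(x)$, which is exactly convexity.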
def f(x):
    return 0.5 * x**2

x = np.arange(-2, 2, 0.01)
axb, ab = np.array([-1.5, -0.5, 1]), np.array([-1.5, 1])

d2l.set_figsize((3.5, 2.5))
fig_x, = d2l.plt.plot(x, f(x))
fig_axb, = d2l.plt.plot(axb, f(axb), '-.',color="purple")
fig_ab, = d2l.plt.plot(ab, f(ab),'g-.')

fig_x.axes.annotate('a', (-1.5, f(-1.5)), (-1.5, 1.5),arrowprops=dict(arrowstyle='->'))
fig_x.axes.annotate('b', (1, f(1)), (1, 1.5),arrowprops=dict(arrowstyle='->'))
fig_x.axes.annotate('x', (-0.5, f(-0.5)), (-1.5, f(-0.5)),arrowprops=dict(arrowstyle='->'))

3. Gradient Descent

%matplotlib inline
import numpy as np
import torch
import time
from torch import nn, optim
import math
import sys
sys.path.append('/home/kesci/input')
import d2lzh1981 as d2l

Gradient descent in one dimension
Claim: moving the variable in the direction opposite to the gradient decreases the function value.


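The standard first-order argument: by Taylor expansion, $f(x + \epsilon) \approx f(x) + \epsilon f'(x)$ for small $\epsilon$. Choosing $\epsilon = -\eta f'(x)$ with a small learning rate $\eta > 0$ gives

$$f(x - \eta f'(x)) \approx f(x) - \eta f'(x)^2 \leq f(x),$$

so the update $x \leftarrow x - \eta f'(x)$ does not increase the objective, to first order.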
def f(x):
    return x**2  # Objective function

def gradf(x):
    return 2 * x  # Its derivative

def gd(eta):
    x = 10
    results = [x]
    for i in range(10):
        x -= eta * gradf(x)
        results.append(x)
    print('epoch 10, x:', x)
    return results

res = gd(0.2)

def show_trace(res):
    n = max(abs(min(res)), abs(max(res)))
    f_line = np.arange(-n, n, 0.01)
    d2l.set_figsize((3.5, 2.5))
    d2l.plt.plot(f_line, [f(x) for x in f_line],'-')
    d2l.plt.plot(res, [f(x) for x in res],'-o')
    d2l.plt.xlabel('x')
    d2l.plt.ylabel('f(x)')
    

show_trace(res)

Learning rate

show_trace(gd(0.05))  # learning rate too small: progress toward the minimum is slow
show_trace(gd(1.1))   # learning rate too large: the iterates overshoot and diverge

Local minima

$$f(x) = x \cos(cx)$$

c = 0.15 * np.pi

def f(x):
    return x * np.cos(c * x)

def gradf(x):
    return np.cos(c * x) - c * x * np.sin(c * x)

show_trace(gd(2))

Gradient descent in multiple dimensions

$$\nabla f(\mathbf{x}) = \Big[\tfrac{\partial f}{\partial x_1}, \ldots, \tfrac{\partial f}{\partial x_d}\Big]^\top, \qquad \mathbf{x} \leftarrow \mathbf{x} - \eta \nabla f(\mathbf{x})$$
def train_2d(trainer, steps=20):
    x1, x2 = -5, -2
    results = [(x1, x2)]
    for i in range(steps):
        x1, x2 = trainer(x1, x2)
        results.append((x1, x2))
    print('epoch %d, x1 %f, x2 %f' % (i + 1, x1, x2))
    return results

def show_trace_2d(f, results): 
    d2l.plt.plot(*zip(*results), '-o', color='#ff7f0e')
    x1, x2 = np.meshgrid(np.arange(-5.5, 1.0, 0.1), np.arange(-3.0, 1.0, 0.1))
    d2l.plt.contour(x1, x2, f(x1, x2), colors='#1f77b4')
    d2l.plt.xlabel('x1')
    d2l.plt.ylabel('x2')

<center>f(x) = x^2_1 + 2x^2_2</center>

eta = 0.1

def f_2d(x1, x2):  # objective function
    return x1 ** 2 + 2 * x2 ** 2

def gd_2d(x1, x2):
    return (x1 - eta * 2 * x1, x2 - eta * 4 * x2)

show_trace_2d(f_2d, train_2d(gd_2d))

Adaptive methods
Newton's method
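The standard derivation: expand $f$ to second order around $\mathbf{x}$,

$$f(\mathbf{x} + \boldsymbol{\epsilon}) \approx f(\mathbf{x}) + \boldsymbol{\epsilon}^\top \nabla f(\mathbf{x}) + \tfrac{1}{2} \boldsymbol{\epsilon}^\top H_f \boldsymbol{\epsilon},$$

and minimize the right-hand side over $\boldsymbol{\epsilon}$ by setting its gradient to zero: $\nabla f(\mathbf{x}) + H_f \boldsymbol{\epsilon} = 0$, i.e. $\boldsymbol{\epsilon} = -H_f^{-1} \nabla f(\mathbf{x})$. The code below keeps a learning rate $\eta$ in front of the Newton step as a safeguard.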
c = 0.5

def f(x):
    return np.cosh(c * x)  # Objective

def gradf(x):
    return c * np.sinh(c * x)  # Derivative

def hessf(x):
    return c**2 * np.cosh(c * x)  # Hessian

# Hide learning rate for now
def newton(eta=1):
    x = 10
    results = [x]
    for i in range(10):
        x -= eta * gradf(x) / hessf(x)
        results.append(x)
    print('epoch 10, x:', x)
    return results

show_trace(newton())
c = 0.15 * np.pi

def f(x):
    return x * np.cos(c * x)

def gradf(x):
    return np.cos(c * x) - c * x * np.sin(c * x)

def hessf(x):
    return - 2 * c * np.sin(c * x) - x * c**2 * np.cos(c * x)

show_trace(newton())
show_trace(newton(0.5))

Convergence analysis

(convergence-rate derivation omitted)

Preconditioning (gradient descent aided by the diagonal of the Hessian)

$$\mathbf{x} \leftarrow \mathbf{x} - \eta \, \mathrm{diag}(H_f)^{-1} \nabla f(\mathbf{x})$$

Gradient descent with line search (the conjugate gradient method)

Stochastic gradient descent
SGD parameter update
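The standard setup: with training objective $f(\mathbf{x}) = \frac{1}{n} \sum_{i=1}^{n} f_i(\mathbf{x})$, full-gradient descent costs $\mathcal{O}(n)$ per step, while SGD samples an index $i$ uniformly at random and updates

$$\mathbf{x} \leftarrow \mathbf{x} - \eta \nabla f_i(\mathbf{x})$$

at $\mathcal{O}(1)$ cost per step. The sampled gradient is an unbiased estimate of the full gradient, $\mathbb{E}_i[\nabla f_i(\mathbf{x})] = \nabla f(\mathbf{x})$; the code below simulates this by adding Gaussian noise to the exact gradient.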
def f(x1, x2):
    return x1 ** 2 + 2 * x2 ** 2  # Objective

def gradf(x1, x2):
    return (2 * x1, 4 * x2)  # Gradient

def sgd(x1, x2):  # Simulate noisy gradient
    global lr  # Learning rate scheduler
    (g1, g2) = gradf(x1, x2)  # Compute gradient
    (g1, g2) = (g1 + np.random.normal(0.1), g2 + np.random.normal(0.1))  # add N(0.1, 1) noise
    eta_t = eta * lr()  # Learning rate at time t
    return (x1 - eta_t * g1, x2 - eta_t * g2)  # Update variables

eta = 0.1
lr = (lambda: 1)  # Constant learning rate
show_trace_2d(f, train_2d(sgd, steps=50))

Dynamic learning rate
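Common schedules (the two cells below implement the exponential and polynomial forms):
piecewise constant: $\eta(t) = \eta_i$ for $t_i \leq t < t_{i+1}$
exponential: $\eta(t) = \eta_0 e^{-\lambda t}$ (the code uses $\lambda = 0.1$)
polynomial: $\eta(t) = \eta_0 (\beta t + 1)^{-\alpha}$ (the code uses $\alpha = 0.5$, $\beta = 0.1$)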
def exponential():
    global ctr
    ctr += 1
    return math.exp(-0.1 * ctr)

ctr = 1
lr = exponential  # Set up learning rate
show_trace_2d(f, train_2d(sgd, steps=1000))
def polynomial():
    global ctr
    ctr += 1
    return (1 + 0.1 * ctr)**(-0.5)

ctr = 1
lr = polynomial  # Set up learning rate
show_trace_2d(f, train_2d(sgd, steps=50))

Mini-batch stochastic gradient descent

# Read the data
def get_data_ch7():  # This function is saved in the d2lzh_pytorch package for later use
    data = np.genfromtxt('/home/kesci/input/airfoil4755/airfoil_self_noise.dat', delimiter='\t')
    data = (data - data.mean(axis=0)) / data.std(axis=0)  # standardize
    return torch.tensor(data[:1500, :-1], dtype=torch.float32), \
           torch.tensor(data[:1500, -1], dtype=torch.float32)  # first 1500 samples (5 features each)

features, labels = get_data_ch7()
features.shape

import pandas as pd
df = pd.read_csv('/home/kesci/input/airfoil4755/airfoil_self_noise.dat', delimiter='\t', header=None)
df.head(10)

# Implementation from scratch
def sgd(params, states, hyperparams):
    for p in params:
        p.data -= hyperparams['lr'] * p.grad.data

# This function is saved in the d2lzh_pytorch package for later use
def train_ch7(optimizer_fn, states, hyperparams, features, labels,
              batch_size=10, num_epochs=2):
    # Initialize the model
    net, loss = d2l.linreg, d2l.squared_loss
    
    w = torch.nn.Parameter(torch.tensor(np.random.normal(0, 0.01, size=(features.shape[1], 1)), dtype=torch.float32),
                           requires_grad=True)
    b = torch.nn.Parameter(torch.zeros(1, dtype=torch.float32), requires_grad=True)

    def eval_loss():
        return loss(net(features, w, b), labels).mean().item()

    ls = [eval_loss()]
    data_iter = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(features, labels), batch_size, shuffle=True)
    
    for _ in range(num_epochs):
        start = time.time()
        for batch_i, (X, y) in enumerate(data_iter):
            l = loss(net(X, w, b), y).mean()  # use the average loss

            # zero the gradients
            if w.grad is not None:
                w.grad.data.zero_()
                b.grad.data.zero_()

            l.backward()
            optimizer_fn([w, b], states, hyperparams)  # update the model parameters
            if (batch_i + 1) * batch_size % 100 == 0:
                ls.append(eval_loss())  # record the training error every 100 examples
    # Print the final loss and plot the training curve
    print('loss: %f, %f sec per epoch' % (ls[-1], time.time() - start))
    d2l.set_figsize()
    d2l.plt.plot(np.linspace(0, num_epochs, len(ls)), ls)
    d2l.plt.xlabel('epoch')
    d2l.plt.ylabel('loss')

def train_sgd(lr, batch_size, num_epochs=2):
    train_ch7(sgd, None, {'lr': lr}, features, labels, batch_size, num_epochs)

train_sgd(1, 1500, 6)   # batch size = all 1500 samples: plain (batch) gradient descent
train_sgd(0.005, 1)     # batch size = 1: stochastic gradient descent
train_sgd(0.05, 10)     # batch size = 10: mini-batch SGD
# Concise implementation
# Unlike the original book, the first argument here is an optimizer constructor rather than the optimizer's name
# e.g.: optimizer_fn=torch.optim.SGD, optimizer_hyperparams={"lr": 0.05}
def train_pytorch_ch7(optimizer_fn, optimizer_hyperparams, features, labels,
                    batch_size=10, num_epochs=2):
    # Initialize the model
    net = nn.Sequential(
        nn.Linear(features.shape[-1], 1)
    )
    loss = nn.MSELoss()
    optimizer = optimizer_fn(net.parameters(), **optimizer_hyperparams)

    def eval_loss():
        return loss(net(features).view(-1), labels).item() / 2

    ls = [eval_loss()]
    data_iter = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(features, labels), batch_size, shuffle=True)

    for _ in range(num_epochs):
        start = time.time()
        for batch_i, (X, y) in enumerate(data_iter):
            # divide by 2 to stay consistent with train_ch7, since squared_loss divides by 2
            l = loss(net(X).view(-1), y) / 2 
            
            optimizer.zero_grad()
            l.backward()
            optimizer.step()
            if (batch_i + 1) * batch_size % 100 == 0:
                ls.append(eval_loss())
    # Print the final loss and plot the training curve
    print('loss: %f, %f sec per epoch' % (ls[-1], time.time() - start))
    d2l.set_figsize()
    d2l.plt.plot(np.linspace(0, num_epochs, len(ls)), ls)
    d2l.plt.xlabel('epoch')
    d2l.plt.ylabel('loss')

train_pytorch_ch7(optim.SGD, {"lr": 0.05}, features, labels, 10)