Chapter 6: Techniques Related to Learning

Updating Parameters

SGD

  • W \leftarrow W-\eta \frac{\partial L}{\partial W}
  • Drawback: if the shape of the function is anisotropic, the search path is very inefficient. The gradient does not point toward the minimum, so the update zigzags its way toward the minimum and progress is slow.
class SGD:
    def __init__(self, lr=0.01):
        self.lr = lr

    def update(self, params, grads):
        '''
        params and grads are dictionaries that hold the weights and their
        gradients under keys such as params['W1'] and grads['W1'].
        '''
        for key in params.keys():
            params[key] -= self.lr * grads[key]

Momentum

  • \upsilon \leftarrow \alpha \upsilon - \eta \frac{\partial L}{\partial W}\\ W \leftarrow W + \upsilon
  • \eta is the learning rate; \upsilon corresponds to velocity in physics.
  • The term \alpha \upsilon makes the object gradually decelerate when no force acts on it (\alpha is set to a value such as 0.9).
  • \upsilon stores the velocity of the object.
  • Compared with SGD, the zigzag pattern is reduced: because of \alpha \upsilon, there is a persistent tendency to keep moving in the same direction.
import numpy as np

class Momentum:
    def __init__(self, lr=0.01, momentum=0.9):
        self.lr = lr
        self.momentum = momentum
        self.v = None  # velocities, created lazily to match the parameter shapes

    def update(self, params, grads):
        if self.v is None:
            self.v = {}
            for key, val in params.items():
                self.v[key] = np.zeros_like(val)
        for key in params.keys():
            self.v[key] = self.momentum * self.v[key] - self.lr * grads[key]
            params[key] += self.v[key]

AdaGrad

  • "Learning rate decay": gradually reduce the learning rate as learning progresses.
  • AdaGrad (Adaptive Gradient) adapts the learning rate for each individual parameter element while learning proceeds.
  • h \leftarrow h + \frac{\partial L}{\partial W}\odot \frac{\partial L}{\partial W}\\ W \leftarrow W - \eta \frac{1}{\sqrt{h}}\frac{\partial L}{\partial W}
  • h accumulates the sum of squares of all past gradient values.
  • When updating the parameters, the scale of learning is adjusted by multiplying by \frac{1}{\sqrt{h}}: elements that have moved a lot (been updated by large amounts) get a smaller learning rate.
import numpy as np

class AdaGrad:
    def __init__(self, lr=0.01):
        self.lr = lr
        self.h = None  # per-element sum of squared gradients

    def update(self, params, grads):
        if self.h is None:
            self.h = {}
            for key, val in params.items():
                self.h[key] = np.zeros_like(val)
        for key in params.keys():
            self.h[key] += grads[key] * grads[key]
            params[key] -= self.lr * grads[key] / (np.sqrt(self.h[key]) + 1e-7)  # 1e-7 prevents division by zero

Adam

  • A method that fuses Momentum and AdaGrad.
  • It also performs "bias correction" of its moving-average estimates (a sketch in the style of the classes above follows).
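A minimal sketch of an Adam optimizer in the same style as the classes above. The defaults beta1=0.9 and beta2=0.999 and the exact update form follow common convention and are assumptions here, not something stated in the text.
import numpy as np

class Adam:
    def __init__(self, lr=0.001, beta1=0.9, beta2=0.999):
        self.lr = lr
        self.beta1 = beta1
        self.beta2 = beta2
        self.iter = 0
        self.m = None  # Momentum-like moving average of the gradients
        self.v = None  # AdaGrad-like moving average of the squared gradients

    def update(self, params, grads):
        if self.m is None:
            self.m, self.v = {}, {}
            for key, val in params.items():
                self.m[key] = np.zeros_like(val)
                self.v[key] = np.zeros_like(val)

        self.iter += 1
        for key in params.keys():
            self.m[key] = self.beta1 * self.m[key] + (1 - self.beta1) * grads[key]
            self.v[key] = self.beta2 * self.v[key] + (1 - self.beta2) * (grads[key] ** 2)
            # bias correction of the two moving averages
            m_hat = self.m[key] / (1 - self.beta1 ** self.iter)
            v_hat = self.v[key] / (1 - self.beta2 ** self.iter)
            params[key] -= self.lr * m_hat / (np.sqrt(v_hat) + 1e-7)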

Initial Values of the Weights

Initializing the weights to 0

  • If the initial weights are 0, then under backpropagation all weights receive exactly the same update, so they keep being updated to the same symmetric (duplicated) values. To prevent this "weight uniformity", the initial values must be generated randomly.

Distribution of Activations in the Hidden Layers

import numpy as np
import matplotlib.pyplot as plt

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

x = np.random.randn(1000, 100)  # 1000 input samples
node_num = 100                  # number of neurons in each hidden layer
hidden_layer_size = 5           # five hidden layers
activations = {}                # store the activations of each layer

for i in range(hidden_layer_size):
    if i != 0:
        x = activations[i - 1]
    # weight scale: here the Xavier initial value, standard deviation 1/sqrt(node_num)
    w = np.random.randn(node_num, node_num) / np.sqrt(node_num)
    z = np.dot(x, w)
    a = sigmoid(z)
    activations[i] = a

for i, a in activations.items():
    plt.subplot(1, len(activations), i + 1)
    plt.title(str(i + 1) + "-layer")
    plt.hist(a.flatten(), 30, range=(0, 1))
plt.show()
[Figure output_10_0.png: histograms of the activation distributions of the five hidden layers]

Xavier initial value

  • The goal is for the activations of each layer to have a distribution with a similar spread.
  • <font color="red">If the previous layer has n nodes, use initial values drawn from a distribution with standard deviation \frac{1}{\sqrt{n}}</font>

Weight initial values for ReLU

  • The Xavier initial value is derived on the premise that the activation function is linear. Since sigmoid and tanh are symmetric and roughly linear near the center, the Xavier initial value suits them.
  • The initial value specialized for ReLU is called the "He initial value".
  • <font color='red'>When the previous layer has n nodes, the He initial value uses a Gaussian distribution with standard deviation \sqrt{\frac{2}{n}}</font> (see the sketch below).
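A minimal sketch of drawing both kinds of initial values with NumPy. node_num stands for the fan-in n of the previous layer; the value 100 is illustrative.
import numpy as np

node_num = 100  # n: number of nodes in the previous layer
w_xavier = np.random.randn(node_num, node_num) / np.sqrt(node_num)    # Xavier: std 1/sqrt(n), for sigmoid/tanh
w_he = np.random.randn(node_num, node_num) * np.sqrt(2.0 / node_num)  # He: std sqrt(2/n), for ReLU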

Batch Normalization

  • Advantages
    • Learning can proceed faster (a larger learning rate can be used).
    • Less dependence on the initial weight values (no need to be so nervous about initialization).
    • Suppresses overfitting (reduces the need for Dropout and the like).
  • Idea: adjust the distribution of activations in each layer so that it has an appropriate spread.
  • Approach: insert a layer that normalizes the data distribution, i.e. a Batch Normalization layer, into the network.
  • <font color='red'>Normalization is performed per mini-batch used during learning: the data are normalized so that the distribution has mean 0 and variance 1.</font>
    \begin{aligned} \mu_B &\leftarrow \frac{1}{m}\sum_{i=1}^mx_i\\ \sigma^2_B &\leftarrow \frac{1}{m}\sum_{i=1}^m(x_i-\mu_B)^2\\ \hat{x_i}&\leftarrow \frac{x_i-\mu_B}{\sqrt{\sigma^2_B+\varepsilon}} \end{aligned}
  • For the set of m inputs B={x_1,x_2,...,x_m}, compute the mean \mu_B and variance \sigma_B^2, then normalize the inputs to mean 0 and variance 1. \varepsilon is a tiny value that prevents division by zero.
  • <font color='red'>The Batch Norm layer then applies a scale-and-shift transformation to the normalized data (initially \gamma=1 and \beta=0; they are adjusted to suitable values through learning).</font> A forward-pass sketch of both steps follows the formula below.
    y_i \leftarrow \gamma \hat{x_i}+\beta
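A minimal NumPy sketch of the training-time forward pass described by the formulas above. The function name batch_norm_forward and the example batch shape are illustrative, not from the text.
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-7):
    mu = x.mean(axis=0)                    # mini-batch mean
    var = x.var(axis=0)                    # mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalize to mean 0, variance 1
    return gamma * x_hat + beta            # scale by gamma, shift by beta

# example: a mini-batch of 100 samples with 50 features, gamma=1 and beta=0 as at initialization
x = np.random.randn(100, 50)
y = batch_norm_forward(x, gamma=np.ones(50), beta=np.zeros(50))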
# Batch Norm experiment: compare learning progress with and without Batch Normalization
import sys, os
import numpy as np
import matplotlib.pyplot as plt

path = os.path.join(os.getcwd(), 'sourcecode')
sys.path.append(path)
from sourcecode.dataset.mnist import load_mnist
from sourcecode.common.multi_layer_net_extend import MultiLayerNetExtend
from sourcecode.common.optimizer import SGD, Adam

(x_train, t_train), (x_test, t_test) = load_mnist(normalize=True)
# use only the first 1000 training samples so the experiment runs quickly
x_train = x_train[:1000]
t_train = t_train[:1000]

max_epochs = 20
train_size = x_train.shape[0]
batch_size = 100
learning_rate = 0.01

def __train(weight_init_std):
    # two identical 5-hidden-layer networks, one with Batch Normalization and one without
    bn_network = MultiLayerNetExtend(input_size=784, hidden_size_list=[100, 100, 100, 100, 100], output_size=10,
                                     weight_init_std=weight_init_std, use_batchnorm=True)
    network = MultiLayerNetExtend(input_size=784, hidden_size_list=[100, 100, 100, 100, 100], output_size=10,
                                  weight_init_std=weight_init_std)
    optimizer = SGD(lr=learning_rate)

    train_acc_list = []
    bn_train_acc_list = []

    iter_per_epoch = max(train_size / batch_size, 1)
    epoch_cnt = 0
    for i in range(1000000000):
        batch_mask = np.random.choice(train_size, batch_size)  # randomly pick batch_size indices out of train_size
        x_batch = x_train[batch_mask]
        t_batch = t_train[batch_mask]
        
        for _network in (bn_network,network):
            grads = _network.gradient(x_batch,t_batch)
            optimizer.update(_network.params,grads)
        
        if i % iter_per_epoch==0:
            train_acc = network.accuracy(x_train,t_train)
            bn_train_acc = bn_network.accuracy(x_train,t_train)
            train_acc_list.append(train_acc)
            bn_train_acc_list.append(bn_train_acc)
            
            print("epoch:"+str(epoch_cnt)+"|"+str(train_acc)+"-"+str(bn_train_acc))
            
            epoch_cnt+=1
            if epoch_cnt>=max_epochs:
                break
                
    return train_acc_list,bn_train_acc_list

# plot the results
weight_scale_list = np.logspace(0, -4, num=16)  # 16 weight scales: powers of 10 from 10^0 down to 10^-4
x = np.arange(max_epochs)

for i, w in enumerate(weight_scale_list):
    print("======" + str(i + 1) + "/16" + "========")
    train_acc_list, bn_train_acc_list = __train(w)

    plt.subplot(4, 4, i + 1)
    plt.title("W:" + str(w))
    if i == 15:
        plt.plot(x, bn_train_acc_list, label='Batch Normalization', markevery=2)
        plt.plot(x, train_acc_list, linestyle="--", label='Normal (without BatchNorm)', markevery=2)
    else:
        plt.plot(x, bn_train_acc_list, markevery=2)
        plt.plot(x, train_acc_list, linestyle='--', markevery=2)

    plt.ylim(0, 1.0)
    if i % 4:
        plt.yticks([])  # hide y ticks except on the first column
    else:
        plt.ylabel("accuracy")
    if i < 12:
        plt.xticks([])  # hide x ticks except on the bottom row
    else:
        plt.xlabel("epochs")
    plt.legend(loc='lower right')
plt.show()
======1/16========
epoch:0|0.1-0.094
epoch:1|0.116-0.057
epoch:2|0.116-0.031
epoch:3|0.116-0.037
epoch:4|0.116-0.053
epoch:5|0.116-0.07
epoch:6|0.116-0.087
epoch:7|0.116-0.11
epoch:8|0.116-0.136
epoch:9|0.116-0.148
epoch:10|0.116-0.175
epoch:11|0.116-0.198
epoch:12|0.116-0.222
epoch:13|0.116-0.239
epoch:14|0.116-0.248
epoch:15|0.116-0.279
epoch:16|0.116-0.292
epoch:17|0.116-0.326
epoch:18|0.116-0.337
epoch:19|0.116-0.357
======2/16========
epoch:0|0.094-0.0
epoch:1|0.097-0.005
epoch:2|0.097-0.006
epoch:3|0.097-0.015
epoch:4|0.097-0.023
epoch:5|0.097-0.047
epoch:6|0.097-0.072
epoch:7|0.097-0.104
epoch:8|0.097-0.13
epoch:9|0.097-0.163
epoch:10|0.097-0.191
epoch:11|0.097-0.21
epoch:12|0.097-0.242
epoch:13|0.097-0.271
epoch:14|0.097-0.29
epoch:15|0.097-0.321
epoch:16|0.097-0.347
epoch:17|0.097-0.363
epoch:18|0.097-0.392
epoch:19|0.097-0.418
======3/16========
epoch:0|0.098-0.014
epoch:1|0.401-0.021
epoch:2|0.513-0.052
epoch:3|0.614-0.084
epoch:4|0.7-0.124
epoch:5|0.754-0.156
epoch:6|0.789-0.205
epoch:7|0.84-0.24
epoch:8|0.868-0.278
epoch:9|0.884-0.323
epoch:10|0.901-0.365
epoch:11|0.916-0.404
epoch:12|0.926-0.433
epoch:13|0.938-0.456
epoch:14|0.944-0.494
epoch:15|0.958-0.52
epoch:16|0.965-0.544
epoch:17|0.972-0.56
epoch:18|0.976-0.58
epoch:19|0.982-0.601
======4/16========
epoch:0|0.002-0.011
epoch:1|0.127-0.026
epoch:2|0.263-0.053
epoch:3|0.396-0.092
epoch:4|0.474-0.157
epoch:5|0.56-0.231
epoch:6|0.596-0.315
epoch:7|0.645-0.41
epoch:8|0.681-0.476
epoch:9|0.709-0.523
epoch:10|0.713-0.566
epoch:11|0.75-0.609
epoch:12|0.762-0.65
epoch:13|0.787-0.676
epoch:14|0.794-0.704
epoch:15|0.778-0.722
epoch:16|0.774-0.738
epoch:17|0.822-0.761
epoch:18|0.821-0.771
epoch:19|0.841-0.789
======5/16========
epoch:0|0.0-0.0
epoch:1|0.0-0.003
epoch:2|0.017-0.03
epoch:3|0.041-0.173
epoch:4|0.077-0.332
epoch:5|0.088-0.45
epoch:6|0.101-0.534
epoch:7|0.111-0.59
epoch:8|0.118-0.624
epoch:9|0.124-0.658
epoch:10|0.13-0.694
epoch:11|0.136-0.723
epoch:12|0.133-0.749
epoch:13|0.134-0.766
epoch:14|0.129-0.791
epoch:15|0.128-0.803
epoch:16|0.118-0.818
epoch:17|0.108-0.828
epoch:18|0.102-0.842
epoch:19|0.1-0.853
======6/16========
epoch:0|0.003-0.007
epoch:1|0.129-0.162
epoch:2|0.105-0.426
epoch:3|0.116-0.557
epoch:4|0.123-0.63
epoch:5|0.133-0.673
epoch:6|0.138-0.707
epoch:7|0.129-0.728
epoch:8|0.121-0.756
epoch:9|0.161-0.776
epoch:10|0.143-0.795
epoch:11|0.116-0.818
epoch:12|0.168-0.858
epoch:13|0.126-0.888
epoch:14|0.121-0.902
epoch:15|0.127-0.919
epoch:16|0.132-0.932
epoch:17|0.146-0.946
epoch:18|0.127-0.953
epoch:19|0.125-0.96
======7/16========
epoch:0|0.116-0.0
epoch:1|0.105-0.25
epoch:2|0.117-0.556
epoch:3|0.117-0.664
epoch:4|0.117-0.691
epoch:5|0.117-0.719
epoch:6|0.117-0.743
epoch:7|0.117-0.757
epoch:8|0.117-0.774
epoch:9|0.117-0.808
epoch:10|0.117-0.846
epoch:11|0.117-0.893
epoch:12|0.117-0.929
epoch:13|0.117-0.95
epoch:14|0.117-0.959
epoch:15|0.117-0.967
epoch:16|0.117-0.972
epoch:17|0.117-0.979
epoch:18|0.117-0.982
epoch:19|0.117-0.985
======8/16========
epoch:0|0.105-0.001
epoch:1|0.117-0.454
epoch:2|0.117-0.677
epoch:3|0.117-0.735
epoch:4|0.117-0.783
epoch:5|0.116-0.842
epoch:6|0.116-0.897
epoch:7|0.116-0.92
epoch:8|0.116-0.944
epoch:9|0.116-0.961
epoch:10|0.116-0.974
epoch:11|0.116-0.982
epoch:12|0.116-0.985
epoch:13|0.117-0.988
epoch:14|0.116-0.995
epoch:15|0.116-0.993
epoch:16|0.116-0.994
epoch:17|0.116-0.996
epoch:18|0.116-0.996
epoch:19|0.116-0.998
======9/16========
epoch:0|0.097-0.087
epoch:1|0.116-0.467
epoch:2|0.116-0.725
epoch:3|0.117-0.828
epoch:4|0.117-0.858
epoch:5|0.117-0.93
epoch:6|0.117-0.953
epoch:7|0.117-0.969
epoch:8|0.117-0.978
epoch:9|0.117-0.984
epoch:10|0.117-0.989
epoch:11|0.117-0.992
epoch:12|0.117-0.994
epoch:13|0.117-0.996
epoch:14|0.117-0.997
epoch:15|0.116-0.998
epoch:16|0.117-0.998
epoch:17|0.117-0.998
epoch:18|0.117-0.999
epoch:19|0.116-1.0
======10/16========
epoch:0|0.093-0.043
epoch:1|0.116-0.477
epoch:2|0.116-0.712
epoch:3|0.116-0.787
epoch:4|0.116-0.827
epoch:5|0.116-0.797
epoch:6|0.116-0.891
epoch:7|0.116-0.926
epoch:8|0.116-0.968
epoch:9|0.116-0.975
epoch:10|0.117-0.949
epoch:11|0.116-0.993
epoch:12|0.116-0.988
epoch:13|0.116-0.996
epoch:14|0.116-0.926
epoch:15|0.116-0.997
epoch:16|0.116-0.994
epoch:17|0.116-0.999
epoch:18|0.116-1.0
epoch:19|0.116-0.997
======11/16========
epoch:0|0.094-0.086
epoch:1|0.097-0.668
epoch:2|0.116-0.681
epoch:3|0.116-0.744
epoch:4|0.116-0.728
epoch:5|0.116-0.784
epoch:6|0.117-0.904
epoch:7|0.117-0.916
epoch:8|0.117-0.908
epoch:9|0.116-0.979
epoch:10|0.117-0.978
epoch:11|0.117-0.984
epoch:12|0.117-0.977
epoch:13|0.117-0.984
epoch:14|0.117-0.972
epoch:15|0.117-0.989
epoch:16|0.117-0.99
epoch:17|0.117-0.974
epoch:18|0.117-0.997
epoch:19|0.117-0.998
======12/16========
epoch:0|0.116-0.165
epoch:1|0.116-0.498
epoch:2|0.116-0.653
epoch:3|0.116-0.645
epoch:4|0.117-0.682
epoch:5|0.117-0.582
epoch:6|0.116-0.7
epoch:7|0.116-0.767
epoch:8|0.116-0.745
epoch:9|0.116-0.788
epoch:10|0.116-0.79
epoch:11|0.116-0.797
epoch:12|0.117-0.8
epoch:13|0.117-0.802
epoch:14|0.117-0.8
epoch:15|0.117-0.803
epoch:16|0.117-0.816
epoch:17|0.117-0.776
epoch:18|0.117-0.98
epoch:19|0.117-0.987
======13/16========
epoch:0|0.105-0.097
epoch:1|0.105-0.496
epoch:2|0.117-0.267
epoch:3|0.105-0.573
epoch:4|0.117-0.594
epoch:5|0.117-0.579
epoch:6|0.117-0.563
epoch:7|0.117-0.574
epoch:8|0.117-0.603
epoch:9|0.117-0.611
epoch:10|0.117-0.603
epoch:11|0.117-0.691
epoch:12|0.117-0.68
epoch:13|0.117-0.691
epoch:14|0.117-0.698
epoch:15|0.117-0.722
epoch:16|0.117-0.71
epoch:17|0.117-0.789
epoch:18|0.117-0.694
epoch:19|0.117-0.801
======14/16========
epoch:0|0.117-0.141
epoch:1|0.117-0.33
epoch:2|0.1-0.399
epoch:3|0.117-0.22
epoch:4|0.117-0.401
epoch:5|0.117-0.411
epoch:6|0.116-0.408
epoch:7|0.116-0.405
epoch:8|0.116-0.412
epoch:9|0.116-0.411
epoch:10|0.116-0.426
epoch:11|0.116-0.507
epoch:12|0.116-0.513
epoch:13|0.116-0.521
epoch:14|0.116-0.515
epoch:15|0.116-0.518
epoch:16|0.116-0.606
epoch:17|0.116-0.611
epoch:18|0.116-0.618
epoch:19|0.116-0.619
======15/16========
epoch:0|0.116-0.162
epoch:1|0.117-0.365
epoch:2|0.117-0.381
epoch:3|0.117-0.391
epoch:4|0.117-0.387
epoch:5|0.116-0.386
epoch:6|0.116-0.417
epoch:7|0.116-0.419
epoch:8|0.117-0.461
epoch:9|0.117-0.476
epoch:10|0.117-0.484
epoch:11|0.117-0.52
epoch:12|0.116-0.517
epoch:13|0.117-0.45
epoch:14|0.116-0.482
epoch:15|0.117-0.43
epoch:16|0.117-0.514
epoch:17|0.117-0.434
epoch:18|0.117-0.465
epoch:19|0.117-0.524
======16/16========
epoch:0|0.1-0.184
epoch:1|0.116-0.212
epoch:2|0.117-0.221
epoch:3|0.117-0.305
epoch:4|0.117-0.315
epoch:5|0.117-0.312
epoch:6|0.117-0.317
epoch:7|0.117-0.317
epoch:8|0.117-0.325
epoch:9|0.117-0.334
epoch:10|0.117-0.346
epoch:11|0.117-0.33
epoch:12|0.117-0.417
epoch:13|0.117-0.517
epoch:14|0.117-0.435
epoch:15|0.117-0.509
epoch:16|0.117-0.466
epoch:17|0.117-0.484
epoch:18|0.117-0.511
epoch:19|0.117-0.458
[Figure output_14_35.png: 4x4 grid of training-accuracy curves, one panel per weight scale W, solid = Batch Normalization, dashed = without BatchNorm]

Regularization

Overfitting

  • The state in which a model fits the training data well but cannot fit other data not contained in the training data.
  • Causes
    • The model has a large number of parameters and high expressive power.
    • The training data are few.

Weight decay

  • Suppresses overfitting by penalizing large weights during learning.
  • If the weights are denoted W, L2-norm weight decay is \frac{1}{2}\lambda W^2, and this \frac{1}{2}\lambda W^2 is added to the loss function (\lambda controls the strength of the penalty). A sketch follows below.
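A minimal sketch of how the penalty enters the loss and the gradient for a single weight matrix. The values of W, the backprop gradient, and weight_decay_lambda = 0.1 are illustrative assumptions.
import numpy as np

weight_decay_lambda = 0.1       # lambda: strength of the weight decay
W = np.random.randn(3, 3)       # a weight matrix
grad_W = np.random.randn(3, 3)  # stand-in for the gradient coming from backprop

loss_penalty = 0.5 * weight_decay_lambda * np.sum(W ** 2)  # (1/2)*lambda*W^2, added to the loss
grad_W += weight_decay_lambda * W                          # its derivative lambda*W, added to the gradient
W -= 0.01 * grad_W                                         # ordinary SGD step with the penalized gradient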

Dropout

  • A method that randomly deletes neurons during learning. At training time, hidden-layer neurons are selected at random and removed.
  • Ensemble learning: train several models independently and average their outputs at inference time.
  • Dropout (approximately) achieves the effect of ensemble learning within a single network.
import numpy as np

class Dropout:
    def __init__(self, dropout_ratio=0.5):
        self.dropout_ratio = dropout_ratio
        self.mask = None

    def forward(self, x, train_flg=True):
        if train_flg:
            # True where the neuron is kept; neurons with False are "deleted"
            self.mask = np.random.rand(*x.shape) > self.dropout_ratio
            return x * self.mask
        else:
            # at inference time, scale the output by the fraction of neurons kept during training
            return x * (1.0 - self.dropout_ratio)

    def backward(self, dout):
        # pass the gradient only through the neurons that were kept in forward
        return dout * self.mask

Validating Hyperparameters

  • Hyperparameters include the number of neurons in each layer, the batch size, the learning rate used in parameter updates, weight decay, and so on.

Validation data

  • The test data must not be used to evaluate hyperparameter performance: if hyperparameters are tuned with the test data, their values will overfit the test data.
  • When tuning hyperparameters, a separate confirmation set dedicated to them must be used; it is generally called validation data (see the sketch below).
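A minimal sketch of splitting validation data off from the training data. x_train and t_train are random placeholders here; with MNIST they would come from load_mnist as in the Batch Norm experiment above, and the 20% split ratio is an assumption.
import numpy as np

x_train = np.random.randn(1000, 784)          # placeholder training inputs
t_train = np.random.randint(0, 10, size=1000) # placeholder training labels

# shuffle first so the validation set is not biased by the order of the data
idx = np.random.permutation(x_train.shape[0])
x_train, t_train = x_train[idx], t_train[idx]

validation_num = int(x_train.shape[0] * 0.2)
x_val, t_val = x_train[:validation_num], t_train[:validation_num]
x_train, t_train = x_train[validation_num:], t_train[validation_num:]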

Optimizing Hyperparameters

  • When optimizing hyperparameters, gradually narrow down the range in which "good" values exist.
  • Steps (a random-search sketch follows this list)
    • Step 0: set the range of the hyperparameters.
    • Step 1: sample randomly from the specified range.
    • Step 2: train with the sampled hyperparameter values and evaluate recognition accuracy on the validation data (with the number of epochs set small).
    • Step 3: repeat steps 1 and 2 (e.g., 100 times) and, based on the resulting recognition accuracies, narrow the hyperparameter range.
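A minimal sketch of steps 1 to 3 as a random search on a log scale. The sampling ranges, the 100 trials, and the dummy validation score are illustrative assumptions; in practice the score would come from actually training and evaluating on the validation data.
import numpy as np

results = {}
for _ in range(100):  # step 3: repeat, e.g. 100 times
    # step 1: sample hyperparameters randomly, on a log scale
    weight_decay = 10 ** np.random.uniform(-8, -4)
    lr = 10 ** np.random.uniform(-6, -2)
    # step 2: train for a few epochs with (lr, weight_decay) and evaluate on the
    # validation data; here a dummy score stands in for the real validation accuracy
    val_acc = np.random.rand()
    results[(lr, weight_decay)] = val_acc

# inspect the best trials to decide how to narrow the search range
best = sorted(results.items(), key=lambda kv: kv[1], reverse=True)[:5]
for (lr, wd), acc in best:
    print("val acc:%.3f | lr:%.6f, weight decay:%.8f" % (acc, lr, wd))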