Andrew Ng Deep Learning Course 1 Assignment: Building a Deep Net

Directory link: 吳恩達Deep Learning學習筆記目錄

1. Outline of the Assignment
2. Initialization
3. Forward propagation
4. Backward propagation
5. L-layers Model
6. Training and predicting
Note: this assignment follows Building your Deep Neural Network: Step by Step.

1. Outline of the Assignment

Packages
dnn_utils: the activation functions and their derivatives
testCases: test arrays used to verify that the functions run correctly

import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases import *
from dnn_utils import sigmoid, sigmoid_backward, relu, relu_backward

# %matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# %load_ext autoreload
# %autoreload 2

np.random.seed(1)

Steps to implement the Deep Neural Network:
(1) Initialize the parameters for the 2-layer and the L-layer network architecture
(2) Implement the forward-propagation computation:
  ① Implement the linear part of forward propagation to get Z[L], where L denotes the L-th layer
  ② Combine the Linear step and the Activation function into a single [Linear -> Activation] unit
  ③ Stack that unit according to the network architecture to build the hidden layers, then append a final [Linear -> Sigmoid] that produces the forward-propagation output
(3) Compute the loss
(4) Implement the backward-propagation computation:
  ① Compute the backward pass of the Linear step
  ② Compute the gradient of the Activation function
  ③ Combine ① and ② into a single backward unit function
  ④ Stack ③ across the layers (the ReLU version for the hidden layers, the Sigmoid version once for the output layer) to implement backward propagation for the whole network
(5) Update the parameters

Deep Neural Net compute procedure

The Z and A obtained at each forward-propagation step are cached for the backward pass to reduce recomputation. Here L denotes the L-th layer, m the number of samples, and nh[L] the number of units in layer L.

Dimensions of the matrices involved in the computation:

matrix:  A[L], dA[L]      W[L], dW[L]           Z[L], dZ[L]
dim:     (nh[L], m)       (nh[L], nh[L-1])      (nh[L], m)

Computing the gradients

(Table: per-layer gradient computations for backward propagation, listed from layer L down to layer 1.)
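Written out, the per-layer gradients (a reconstruction consistent with the linear_backword and activation_backword functions implemented below; g denotes the layer's activation function and \odot element-wise multiplication) are:

\begin{aligned}
dZ^{[l]}   &= dA^{[l]} \odot g'\!\left(Z^{[l]}\right) \\
dW^{[l]}   &= \frac{1}{m}\, dZ^{[l]}\, A^{[l-1]\,T} \\
db^{[l]}   &= \frac{1}{m} \sum_{i=1}^{m} dZ^{[l](i)} \\
dA^{[l-1]} &= W^{[l]\,T}\, dZ^{[l]}
\end{aligned}

For the output layer, dA^{[L]} = -\left( \frac{Y}{A^{[L]}} - \frac{1-Y}{1-A^{[L]}} \right).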

2. Initialization

We need two functions, one to initialize the parameters of the 2-layer model and one for the L-layer model.

2.1 Two-layer model

np.random.randn() returns values drawn from a standard normal distribution. Multiplying by a small factor such as 0.01 shrinks the initial weights, which speeds up convergence and avoids numerical problems (without the scaling, the cost came out as nan/inf). On parameter initialization, see 神經(jīng)網(wǎng)絡(luò)參數(shù)初始化的學問 and 神經(jīng)網(wǎng)絡(luò)中參數(shù)的初始化方法.

def initialize_params(n_x,n_h,n_y):
    """
    params:
        n_x: number of units in the input layer
        n_h: number of units in the hidden layer
        n_y: number of units in the output layer
    return: a dict
        W1: weights matrix connecting the input layer to the hidden layer, dim = [n_h,n_x]
        b1: bias vector of the hidden layer, dim = [n_h,1]
        W2: weights matrix connecting the hidden layer to the output layer, dim = [n_y,n_h]
        b2: bias vector of the output layer, dim = [n_y,1]
    """
    W1 = np.random.randn(n_h,n_x) * 0.01
    b1 = np.zeros((n_h,1))
    W2 = np.random.randn(n_y,n_h) * 0.01
    b2 = np.zeros((n_y,1))
    
    assert(W1.shape == ( n_h , n_x ))
    assert(b1.shape == ( n_h , 1 ))
    assert(W2.shape == ( n_y , n_h ))
    assert(b2.shape == ( n_y , 1 ))
    
    params = {
        "W1": W1,
        "b1": b1,
        "W2": W2,
        "b2": b2,
    }
    return params
"""
params = initialize_params(2,2,1)
Output
W1: [ 0.01744812, -0.00761207]
    [ 0.00319039, -0.0024937 ]
b1: [0.]
    [0.]
W2: [ 0.01462108, -0.02060141]
b2: [0.]
"""

2.2 Multi-layer model

Here each layer's weights are scaled by np.sqrt(2 / layer_dims[layer-1]) (He initialization), which suits the ReLU activations used in the hidden layers better than a fixed 0.01 factor.

def initialize_params_deep(layer_dims):
    """
    params:a array containing num of neural units of each layer
    return: a dict of weights and bias
    """
    np.random.seed(1)
    layers = len(layer_dims)
    params = {}
    for layer in range(1,layers):
        params["W" + str(layer)] = np.random.randn(layer_dims[layer],layer_dims[layer - 1]) * np.sqrt(2/layer_dims[layer-1])
        params["b" + str(layer)] = np.zeros((layer_dims[layer],1))
        
        assert(params["W" + str(layer)].shape == (layer_dims[layer],layer_dims[layer - 1]))
        assert(params["b" + str(layer)].shape == (layer_dims[layer],1))
    return params
"""
params = initialize_params_deep([4,3,2,1])
Output
{'W1': array([[ 1.14858562, -0.43257711, -0.37347383, -0.75870339],
        [ 0.6119356 , -1.62743362,  1.23376823, -0.53825456],
        [ 0.22559471, -0.17633148,  1.03386644, -1.45673947]]),
 'b1': array([[0.],
        [0.],
        [0.]]),
 'W2': array([[-0.26325254, -0.31357907,  0.92571887],
        [-0.89805746, -0.14078704, -0.7167684 ]]),
 'b2': array([[0.],
        [0.]]),
 'W3': array([[0.04221375, 0.58281521]]),
 'b3': array([[0.]])}
"""

3. Forward propagation

Since the computation is vectorized, the network structure is: (L-1) layers of [Linear -> ReLU] -> Linear -> Sigmoid (for binary classification). The linear computation is Z[l] = W[l] A[l-1] + b[l].

3.1 Linear forward

Computes Z and caches the previous layer's A together with the current layer's W and b.

def linear_forword(previous_A,W,b):
    """
    params:
        previous_A:values of previous layer(input data is A[0]), dim = [num_units of previous layer,num of samples]
        W:weights matrix of current layer,dim = [num_units of current layer,num_units of previous layer]
        b:bias vector,dim = [num_units of current layer,1]
    return:
        Z: input of the activation function, dim = [num_units of current layer, num of samples]
        cache: a tuple (previous_A, W, b) stored for computing the backward pass
    """
    Z = np.dot(W,previous_A)+b
    assert(Z.shape == (W.shape[0],previous_A.shape[1]))
    cache = (previous_A,W,b)
    return Z,cache
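A quick shape check of linear_forword on hypothetical toy arrays (the values are arbitrary):

np.random.seed(2)
A_prev = np.random.randn(3, 2)           # previous layer: 3 units, 2 samples
W = np.random.randn(1, 3)                # current layer: 1 unit
b = np.zeros((1, 1))
Z, linear_cache = linear_forword(A_prev, W, b)
print(Z.shape)                           # (1, 2) = [units of current layer, samples]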

3.2 Activation forward

Computes the current layer's activation A; caches the current layer's Z, W, b and the previous layer's A.

def activation_forwaor(previous_A,W,b,activation):
    """
    params:
        previous_A:values of previous layer(input data is A[0]), dim = [num_units of previous layer,num of samples]
        W:weights matrix of current layer,dim = [num_units of current layer,num_units of previous layer]
        b:bias vector,dim = [num_units of current layer,1]
        activation:str "sigmoid" or "relu"
    return:
        A:values of current layer , dim = [num_units of current layer,num of samples]
        cache: a tuple (linear_cache, activation_cache) for the current layer
            ->activation_cache stores the current Z
            ->linear_cache stores previous_A, current W, current b
    """
    if activation == "sigmoid":
        Z,linear_cache = linear_forword(previous_A,W,b)
        A,activation_cache = sigmoid(Z)
        
    elif activation == "relu":
        Z,linear_cache = linear_forword(previous_A,W,b)
        A,activation_cache = relu(Z)
        
    assert (A.shape == (W.shape[0],previous_A.shape[1]))
    cache = (linear_cache,activation_cache)
    
    return A,cache

3.3 Forward propagation

def forword_propagate(X,params):
    """
    params:
        X:dim = [num of features,num of samples]
        params:output of initialize_params_deep() containing W,b
    return:
        A_L:the last activation value
        caches:a list of cache:
            ->every cache of activation_forwaor() ,num:L-1,index:0 to L-2
            ->the last cache of activation_forword(),num:1,index:L-1
    """
    caches = []
    A = X
    L = len(params) // 2 # the dict contains w and b,
    
    for layer in range(1,L):
        A_pre = A
        A,cache = activation_forwaor(A_pre,
                                     params["W" + str(layer)],
                                     params["b" + str(layer)],
                                     activation = "relu")

        caches.append(cache)
        
    A_L,cache = activation_forwaor(A,
                                   params["W" + str(L)],
                                   params["b" + str(L)],
                                   activation = "sigmoid")
    caches.append(cache)
    assert(A_L.shape == (1,X.shape[1]))
    
    return A_L,caches

caches:
  [((A0,W1,b1),(Z1)),
   ((A1,W2,b2),(Z2)),
   ···]
where A, W, b and Z are all np.array.

3.4 Cost function

def cost(A_L,Y):
    """
    return: cross entropy
    """
    m = Y.shape[1]
    cost  = (-1 / m) * np.sum(np.multiply(Y, np.log(A_L)) + np.multiply(1 - Y, np.log(1 - A_L)))
    cost = np.squeeze(cost)
    assert(cost.shape == ())
    return cost
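A small worked example of the cost (hypothetical predictions and labels):

Y = np.array([[1, 0, 1]])
A_L = np.array([[0.9, 0.2, 0.6]])
print(cost(A_L, Y))      # -(log(0.9) + log(0.8) + log(0.6)) / 3 ≈ 0.280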

4. Backward propagation

4.1 Linear backward


As shown in the figure above, assuming the current layer's dZ[l] = dL/dZ[l] is known, we can compute dW[l], db[l], and the previous layer's dA[l-1] (see the formulas in Section 1 and the code below):
def linear_backword(dZ,cache):
    """
    params:
        dZ: current layer dL/dZ
        cache: a tuple -> (previous_A,W,b)
    return:
        dA_pre,dW,db
    """
    previous_A,W,b = cache
    m = previous_A.shape[1]
    dW = (1 / m) * np.dot(dZ,previous_A.T)
    db = (1 / m) * np.sum(dZ,axis = 1,keepdims = True)
    dA_pre = np.dot(W.T,dZ)
    
    assert(dA_pre.shape == previous_A.shape)
    assert(dW.shape == W.shape)
    
    return dA_pre,dW,db
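A shape check for linear_backword on hypothetical toy arrays:

np.random.seed(3)
dZ = np.random.randn(1, 2)               # dL/dZ of the current layer: 1 unit, 2 samples
A_prev = np.random.randn(3, 2)
W = np.random.randn(1, 3)
b = np.zeros((1, 1))
dA_pre, dW, db = linear_backword(dZ, (A_prev, W, b))
print(dA_pre.shape, dW.shape, db.shape)  # (3, 2) (1, 3) (1, 1)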

4.2 Activation backward

Computes dZ, then calls linear_backword to obtain dA_pre, dW, db:
def activation_backword(dA,cache,activation):
    """
    params:
        dA: current layer dL/dA
        cache: a tuple -> (linear_cache, activation_cache)
        activation:str -> "relu" or "sigmoid"
    return:
        dA_pre,dW,db
    """
    linear_cache,activation_cache = cache
    if activation == "relu":
        dZ = relu_backward(dA,activation_cache)
        
    elif activation == "sigmoid":
        dZ = sigmoid_backward(dA,activation_cache)
        
    dA_pre,dW,db = linear_backword(dZ,linear_cache)
    return dA_pre,dW,db

4.3 Backward propagation

Because the output layer uses a different activation function than the hidden layers, its gradient is computed separately.

def backword_propagate(A_L,Y,caches):
    """
    params:
        A_L:the last activation value,probability vector,dim = [1,m]
        caches:a list of cache:
            ->every cache of activation_forwaor() ,num:L-1,index:0 to L-2
            ->the last cache of activation_forword(),num:1,index:L-1
    return:
        gradients:a dict contains dA, dW, db
    """
    grads = {}
    L = len(caches)
    m = A_L.shape[1]
    Y = Y.reshape(A_L.shape)
    
    dA_L = -(np.divide(Y,A_L) - np.divide(1 - Y, 1 - A_L))
    
    current_cache = caches[-1]
    grads["dA" + str(L - 1)], grads["dW" + str(L)], grads["db" + str(L)] = activation_backword(dA_L,current_cache,activation = "sigmoid")
    for layer in reversed(range(L - 1)):
        current_cache = caches[layer]
        dA_prev_temp, dW_temp, db_temp = activation_backword(grads["dA" + str(layer + 1)],current_cache,activation = "relu")
        grads["dA" + str(layer)] = dA_prev_temp
        grads["dW" + str(layer + 1)] = dW_temp
        grads["db" + str(layer + 1)] = db_temp

    return grads

4.4 Optimize parameters

def optimize_params(params,grads,learning_rate = 1e-3):
    # gradient descent update: W := W - learning_rate * dW, b := b - learning_rate * db
    L = len(params) // 2
    for layer in range(L):
        params["W" + str(layer + 1)] = params["W" + str(layer + 1)] - learning_rate * grads["dW" + str(layer + 1)]
        params["b" + str(layer + 1)] = params["b" + str(layer + 1)] - learning_rate * grads["db" + str(layer + 1)]
    
    return params

5. L-layers model

5.1 Model

① The input layer_dims: layer_dims[0] is the number of features of X and layer_dims[-1] = 1 is the output layer. initialize_params_deep() generates len(layer_dims)-1 weight matrices, one per hidden layer plus the output layer; e.g. with layer_dims = [4,3,3,1], hidden layers + output layer = 3, giving W1, W2 and W3.
② The caches collected by forword_propagate() are:
    ((A0,W1,b1),(Z1))
    ((A1,W2,b2),(Z2))
      ······
    ((A[L-1],W[L],b[L]),(Z[L])), with L = len(layer_dims) - 1,
and the final output is A[L] (a short sanity check of ① and ② follows after this list);
③ Each step of backword_propagate() produces dW and db for the current layer and dA for the previous layer.
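For example, a quick sanity check of points ① and ② (hypothetical layer_dims and random inputs, no training data needed):

layer_dims = [4, 3, 3, 1]
params = initialize_params_deep(layer_dims)
print(len(params) // 2)            # 3 -> W1, W2, W3 (hidden layers + output layer)

X_demo = np.random.randn(4, 5)     # 5 hypothetical samples with 4 features each
A_L, caches = forword_propagate(X_demo, params)
print(A_L.shape, len(caches))      # (1, 5) 3 -> one cache per layer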

def model(X,Y,layer_dims,learning_rate = 1e-3,epochs = 10):
    np.random.seed(1)
    costs = []
    params = initialize_params_deep(layer_dims)
    
    for epoch in range(epochs):
        A_L, caches = forword_propagate(X,params)
        loss = cost(A_L,Y)
        grads = backword_propagate(A_L,Y,caches)
        params = optimize_params(params,grads,learning_rate=learning_rate)
        costs.append(loss)
        if epoch % 100 == 0:
            print("epoch: %d, cost: %3.3f" % (epoch,loss))
            
    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('epochs')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()
    return params


5.2 Predict

def predict(X,params):
    m = X.shape[1]
    predictions = np.zeros((1,m))
    probs,caches = forword_propagate(X,params)
    for i in range(probs.shape[1]):
        predictions[0,i] = 1 if probs[0,i] >0.5 else 0
    assert(predictions.shape == (1,m))
   
    return predictions

6. Training and predicting

train_dataset = h5py.File("./input/train_catvnoncat.h5","r")
train_dataset_x = np.array(train_dataset['train_set_x'][:])
train_dataset_y = np.array(train_dataset['train_set_y'][:])

test_dataset = h5py.File("./input/test_catvnoncat.h5","r")
test_dataset_x = np.array(test_dataset['test_set_x'][:])
test_dataset_y = np.array(test_dataset['test_set_y'][:])

train_x = train_dataset_x.reshape(train_dataset_x.shape[0],-1).T
train_y = train_dataset_y.reshape(train_dataset_y.shape[0],-1).T
test_x = test_dataset_x.reshape(test_dataset_x.shape[0],-1).T
test_y = test_dataset_y.reshape(test_dataset_y.shape[0],-1).T
print(train_x.shape,train_y.shape,test_x.shape,test_y.shape)

train_x = train_x / 255
test_x = test_x / 255

p = model(train_x,train_y,[12288,20,8,6,1],1e-2,2000)
y_pred_train = predict(train_x,p)
print("train_acc: %3.3f" % (1 - np.mean(np.abs(y_pred_train - train_y))))
y_pred_test = predict(test_x,p)
print("test_acc: %3.3f" % (1 - np.mean(np.abs(y_pred_test - test_y))))