Neural Networks: Learning (I)

Cost Function

For the neural network model, we introduce some new notation:

  • L: the total number of layers in the network;
  • s_l: the number of activation units in layer l (not counting the bias unit);
  • s_L: the number of activation units in the output layer;
  • K: the number of output classes.

For regularized logistic regression, the cost function J(θ) is:
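J(θ) = -(1/m) Σ_{i=1}^{m} [ y^{(i)} log(h_θ(x^{(i)})) + (1 - y^{(i)}) log(1 - h_θ(x^{(i)})) ] + (λ/(2m)) Σ_{j=1}^{n} θ_j^2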

In logistic regression there is only a single output variable y, but in a neural network the output is a vector of dimension K. The cost function J(Θ) is therefore rewritten as:
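J(Θ) = -(1/m) Σ_{i=1}^{m} Σ_{k=1}^{K} [ y_k^{(i)} log( (h_Θ(x^{(i)}))_k ) + (1 - y_k^{(i)}) log( 1 - (h_Θ(x^{(i)}))_k ) ] + (λ/(2m)) Σ_{l=1}^{L-1} Σ_{i=1}^{s_l} Σ_{j=1}^{s_{l+1}} ( Θ_{j,i}^{(l)} )^2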

Supplementary Notes
Cost Function

Let's first define a few variables that we will need to use:

  • L = total number of layers in the network
  • s_l = number of units (not counting bias unit) in layer l
  • K = number of output units/classes

Recall that in neural networks, we may have many output nodes. We denote h_Θ(x)_k as being a hypothesis that results in the kth output. Our cost function for neural networks is going to be a generalization of the one we used for logistic regression. Recall that the cost function for regularized logistic regression was:
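J(θ) = -(1/m) Σ_{i=1}^{m} [ y^{(i)} log(h_θ(x^{(i)})) + (1 - y^{(i)}) log(1 - h_θ(x^{(i)})) ] + (λ/(2m)) Σ_{j=1}^{n} θ_j^2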

For neural networks, it is going to be slightly more complicated:
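J(Θ) = -(1/m) Σ_{i=1}^{m} Σ_{k=1}^{K} [ y_k^{(i)} log( (h_Θ(x^{(i)}))_k ) + (1 - y_k^{(i)}) log( 1 - (h_Θ(x^{(i)}))_k ) ] + (λ/(2m)) Σ_{l=1}^{L-1} Σ_{i=1}^{s_l} Σ_{j=1}^{s_{l+1}} ( Θ_{j,i}^{(l)} )^2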

We have added a few nested summations to account for our multiple output nodes. In the first part of the equation, before the square brackets, we have an additional nested summation that loops through the number of output nodes.

In the regularization part, after the square brackets, we must account for multiple theta matrices. The number of columns in our current theta matrix is equal to the number of nodes in our current layer (including the bias unit). The number of rows in our current theta matrix is equal to the number of nodes in the next layer (excluding the bias unit). As before with logistic regression, we square every term.

Note:

  • the double sum simply adds up the logistic regression costs calculated for each cell in the output layer
  • the triple sum simply adds up the squares of all the individual Θs in the entire network
  • the i in the triple sum does not refer to training example i
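
To make the double and triple sums concrete, here is a minimal NumPy sketch of this cost for a 3-layer network with sigmoid activations. The names nn_cost, Theta1, Theta2 and the one-hot label matrix Y are assumptions made for this example, not notation from the course.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nn_cost(Theta1, Theta2, X, Y, lam):
    """Regularized cost J(Theta) for a 3-layer network.

    X: (m, n) inputs; Y: (m, K) one-hot labels.
    Theta1: (s2, n + 1), Theta2: (K, s2 + 1), bias column first.
    """
    m = X.shape[0]

    # Forward propagation
    a1 = np.hstack([np.ones((m, 1)), X])                       # add bias unit
    a2 = np.hstack([np.ones((m, 1)), sigmoid(a1 @ Theta1.T)])
    h = sigmoid(a2 @ Theta2.T)                                  # h_Theta(x), shape (m, K)

    # Double sum over examples i and output units k
    cost = -np.sum(Y * np.log(h) + (1 - Y) * np.log(1 - h)) / m

    # Triple sum over all Theta entries, excluding the bias columns (j = 0)
    reg = lam / (2 * m) * (np.sum(Theta1[:, 1:] ** 2) + np.sum(Theta2[:, 1:] ** 2))
    return cost + reg
```
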
Backpropagation Algorithm

To compute h_Θ(x), we use forward propagation, working layer by layer from the input layer through to the output layer.

Now, in order to compute the partial derivatives:
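∂J(Θ)/∂Θ_{ij}^{(l)}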

we use the backpropagation algorithm: starting from the output layer, we compute the error of each layer (the "error" being the difference between an activation unit's prediction a_k^{(l)} and the actual value y_k, for k = 1, …, K), working backwards layer by layer until we reach the second layer. The first layer is the input layer; its values are taken directly from the training set, so it has no error term.

Suppose for now that the training set contains only a single example, and the neural network is the four-layer model shown in the figure below:

Following the backpropagation algorithm, we begin by computing the error at the output layer. Using δ to denote the error, the expression is:
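δ^{(4)} = a^{(4)} - y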

Next, we use the error δ^{(4)} above to compute the error of the third layer:
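δ^{(3)} = (Θ^{(3)})^T δ^{(4)} .* g'(z^{(3)})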

Here g'(z^{(l)}) follows from differentiating the sigmoid function, as derived for gradient descent in Logistic Regression (II):
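g'(z^{(l)}) = a^{(l)} .* (1 - a^{(l)})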

Finally, we use δ^{(3)} to compute the error of the second layer:
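δ^{(2)} = (Θ^{(2)})^T δ^{(3)} .* g'(z^{(2)})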

From these errors we can derive the partial derivatives of the cost function J(Θ):
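∂J(Θ)/∂Θ_{ij}^{(l)} = a_j^{(l)} δ_i^{(l+1)}    (ignoring regularization, i.e. λ = 0)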

If we now account for regularization and for the full training set, we use Δ_{ij}^{(l)} to denote the accumulated error matrix, and the procedure is as follows:
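  • Set Δ_{ij}^{(l)} := 0 for all (l, i, j);
  • For each training example t = 1 to m:
    1. Set a^{(1)} := x^{(t)};
    2. Perform forward propagation to compute a^{(l)} for l = 2, 3, …, L;
    3. Compute δ^{(L)} = a^{(L)} - y^{(t)};
    4. Compute δ^{(L-1)}, δ^{(L-2)}, …, δ^{(2)};
    5. Accumulate Δ^{(l)} := Δ^{(l)} + δ^{(l+1)} (a^{(l)})^T.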

Once these steps are complete and we have the accumulated error matrix Δ_{ij}^{(l)}, we can compute the partial derivatives of the cost function as follows:
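D_{ij}^{(l)} := (1/m) ( Δ_{ij}^{(l)} + λ Θ_{ij}^{(l)} ),  if j ≠ 0
D_{ij}^{(l)} := (1/m) Δ_{ij}^{(l)},  if j = 0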

Finally, we obtain:
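∂J(Θ)/∂Θ_{ij}^{(l)} = D_{ij}^{(l)}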

With this expression, we can then run gradient descent or another advanced optimization algorithm.

Supplementary Notes
Backpropagation Algorithm

"Backpropagation" is neural-network terminology for minimizing our cost function, just like what we were doing with gradient descent in logistic and linear regression. Our goal is to compute:

That is, we want to minimize our cost function J using an optimal set of parameters in theta. In this section we'll look at the equations we use to compute the partial derivative of J(Θ):
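∂J(Θ)/∂Θ_{i,j}^{(l)}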

To do so, we use the following algorithm:

Backpropagation Algorithm

Given training set {(x^{(1)}, y^{(1)}), …, (x^{(m)}, y^{(m)})}

  • Set Δ_{i,j}^{(l)} := 0 for all (l, i, j) (hence you end up having a matrix full of zeros)

For training example t = 1 to m:

  1. Set a^{(1)} := x^{(t)}
  2. Perform forward propagation to compute a^{(l)} for l = 2, 3, …, L
  3. Using y^{(t)}, compute δ^{(L)} = a^{(L)} - y^{(t)}

Where L is our total number of layers and a^{(L)} is the vector of outputs of the activation units for the last layer. So our "error values" for the last layer are simply the differences of our actual results in the last layer and the correct outputs in y. To get the delta values of the layers before the last layer, we can use an equation that steps us back from right to left:

  4. Compute δ^{(L-1)}, δ^{(L-2)}, …, δ^{(2)} using δ^{(l)} = ((Θ^{(l)})^T δ^{(l+1)}) .* a^{(l)} .* (1 - a^{(l)})

The delta values of layer l are calculated by multiplying the delta values in the next layer with the theta matrix of layer l. We then element-wise multiply that with a function called g', or g-prime, which is the derivative of the activation function g evaluated with the input values given by z^{(l)}.

The g-prime derivative terms can also be written out as:
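g'(z^{(l)}) = a^{(l)} .* (1 - a^{(l)})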

  5. Δ_{i,j}^{(l)} := Δ_{i,j}^{(l)} + a_j^{(l)} δ_i^{(l+1)}, or with vectorization, Δ^{(l)} := Δ^{(l)} + δ^{(l+1)} (a^{(l)})^T

Hence we update our new Δ matrix.
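D_{i,j}^{(l)} := (1/m) ( Δ_{i,j}^{(l)} + λ Θ_{i,j}^{(l)} ),  if j ≠ 0
D_{i,j}^{(l)} := (1/m) Δ_{i,j}^{(l)},  if j = 0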

The capital-delta matrix D is used as an "accumulator" to add up our values as we go along and eventually compute our partial derivative. Thus we get:
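∂J(Θ)/∂Θ_{i,j}^{(l)} = D_{i,j}^{(l)}

To see how the accumulation of Δ and the final averaging fit together, here is a minimal NumPy sketch of the gradient computation for a 3-layer network with sigmoid activations. The name backprop_gradients, the one-hot label matrix Y, and the bias-first layout of Theta1 and Theta2 are assumptions made for this example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_gradients(Theta1, Theta2, X, Y, lam):
    """Accumulate Delta over all m examples and return D1, D2 = dJ/dTheta."""
    m = X.shape[0]
    Delta1 = np.zeros_like(Theta1)            # same shape as Theta1: (s2, n + 1)
    Delta2 = np.zeros_like(Theta2)            # same shape as Theta2: (K, s2 + 1)

    for t in range(m):
        # 1-2. forward propagation
        a1 = np.concatenate(([1.0], X[t]))    # add bias unit
        z2 = Theta1 @ a1
        a2 = np.concatenate(([1.0], sigmoid(z2)))
        a3 = sigmoid(Theta2 @ a2)             # output layer a^{(L)}

        # 3. output-layer error: delta^{(L)} = a^{(L)} - y^{(t)}
        delta3 = a3 - Y[t]

        # 4. hidden-layer error: (Theta^T delta) .* g'(z), dropping the bias row
        delta2 = (Theta2.T @ delta3)[1:] * sigmoid(z2) * (1 - sigmoid(z2))

        # 5. accumulate Delta^{(l)} := Delta^{(l)} + delta^{(l+1)} (a^{(l)})^T
        Delta1 += np.outer(delta2, a1)
        Delta2 += np.outer(delta3, a2)

    # Average and regularize; the bias column (j = 0) is not regularized
    D1 = Delta1 / m
    D2 = Delta2 / m
    D1[:, 1:] += (lam / m) * Theta1[:, 1:]
    D2[:, 1:] += (lam / m) * Theta2[:, 1:]
    return D1, D2
```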

Backpropagation Intuition

Recall that the cost function for a neural network is:
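J(Θ) = -(1/m) Σ_{i=1}^{m} Σ_{k=1}^{K} [ y_k^{(i)} log( (h_Θ(x^{(i)}))_k ) + (1 - y_k^{(i)}) log( 1 - (h_Θ(x^{(i)}))_k ) ] + (λ/(2m)) Σ_{l=1}^{L-1} Σ_{i=1}^{s_l} Σ_{j=1}^{s_{l+1}} ( Θ_{j,i}^{(l)} )^2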

If we consider simple non-multiclass classification (k = 1) and disregard regularization, the cost is computed with:
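cost(t) = y^{(t)} log(h_Θ(x^{(t)})) + (1 - y^{(t)}) log(1 - h_Θ(x^{(t)}))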

Intuitively, δ_j^{(l)} is the "error" for a_j^{(l)} (unit j in layer l). More formally, the delta values are actually the derivative of the cost function:
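δ_j^{(l)} = ∂cost(t)/∂z_j^{(l)}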

Recall that our derivative is the slope of a line tangent to the cost function, so the steeper the slope the more incorrect we are. Let us consider the following neural network below and see how we could calculate some δ_j^{(l)}:

In the image above, to calculate δ_2^{(2)}, we multiply the weights Θ_{12}^{(2)} and Θ_{22}^{(2)} by their respective δ values found to the right of each edge. So we get δ_2^{(2)} = Θ_{12}^{(2)} δ_1^{(3)} + Θ_{22}^{(2)} δ_2^{(3)}. To calculate every single possible δ_j^{(l)}, we could start from the right of our diagram. We can think of our edges as our Θ_{ij}. Going from right to left, to calculate the value of δ_j^{(l)}, you can just take the overall sum of each weight times the δ it is coming from. Hence, another example would be δ_2^{(3)} = Θ_{12}^{(3)} δ_1^{(4)}.
