Cost Function
In the neural network model, we introduce some new notation:
- L: the total number of layers in the network;
- s_l: the number of activation units in layer l (note: not counting the bias unit);
- s_L: the number of activation units in the output layer;
- K: the number of output classes.
Recall that in regularized logistic regression, the cost function J(θ) is:
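$$
J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\Big[y^{(i)}\log h_\theta(x^{(i)}) + \big(1-y^{(i)}\big)\log\big(1-h_\theta(x^{(i)})\big)\Big] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2
$$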
In logistic regression there is only a single output variable y, but in a neural network the output is a K-dimensional vector. The cost function J(Θ) is therefore generalized to:
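$$
J(\Theta) = -\frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{K}\Big[y_k^{(i)}\log\big(h_\Theta(x^{(i)})\big)_k + \big(1-y_k^{(i)}\big)\log\Big(1-\big(h_\Theta(x^{(i)})\big)_k\Big)\Big] + \frac{\lambda}{2m}\sum_{l=1}^{L-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}}\Big(\Theta_{j,i}^{(l)}\Big)^2
$$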
Supplementary Notes
Cost Function
Let's first define a few variables that we will need to use:
- L = total number of layers in the network
- sl = number of units (not counting bias unit) in layer l
- K = number of output units/classes
Recall that in neural networks, we may have many output nodes. We denote hΘ(x)k as being a hypothesis that results in the kth output. Our cost function for neural networks is going to be a generalization of the one we used for logistic regression. Recall that the cost function for regularized logistic regression was:
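$$
J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\Big[y^{(i)}\log h_\theta(x^{(i)}) + \big(1-y^{(i)}\big)\log\big(1-h_\theta(x^{(i)})\big)\Big] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2
$$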
For neural networks, it is going to be slightly more complicated:
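$$
J(\Theta) = -\frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{K}\Big[y_k^{(i)}\log\big(h_\Theta(x^{(i)})\big)_k + \big(1-y_k^{(i)}\big)\log\Big(1-\big(h_\Theta(x^{(i)})\big)_k\Big)\Big] + \frac{\lambda}{2m}\sum_{l=1}^{L-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}}\Big(\Theta_{j,i}^{(l)}\Big)^2
$$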
We have added a few nested summations to account for our multiple output nodes. In the first part of the equation, before the square brackets, we have an additional nested summation that loops through the number of output nodes.
In the regularization part, after the square brackets, we must account for multiple theta matrices. The number of columns in our current theta matrix is equal to the number of nodes in our current layer (including the bias unit). The number of rows in our current theta matrix is equal to the number of nodes in the next layer (excluding the bias unit). As before with logistic regression, we square every term.
Note:
- the double sum simply adds up the logistic regression costs calculated for each cell in the output layer
- the triple sum simply adds up the squares of all the individual Θs in the entire network
- the i in the triple sum does not refer to training example i
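To make the double and triple sums concrete, here is a minimal NumPy sketch for a 3-layer network (one hidden layer). The names Theta1, Theta2, X, Y, and lam are illustrative assumptions, and Y is assumed to be one-hot encoded; this is a sketch of the idea, not a reference implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nn_cost(Theta1, Theta2, X, Y, lam):
    """Regularized cost J(Theta) for an assumed 3-layer network.

    X: (m, n) inputs; Y: (m, K) one-hot labels;
    Theta1: (s2, n+1); Theta2: (K, s2+1); lam: regularization strength.
    """
    m = X.shape[0]

    # Forward propagation, prepending a bias column of ones at each layer.
    a1 = np.hstack([np.ones((m, 1)), X])        # (m, n+1)
    a2 = sigmoid(a1 @ Theta1.T)                 # (m, s2)
    a2 = np.hstack([np.ones((m, 1)), a2])       # (m, s2+1)
    h = sigmoid(a2 @ Theta2.T)                  # (m, K), i.e. h_Theta(x)

    # Double sum over training examples and output units.
    cost = -np.sum(Y * np.log(h) + (1 - Y) * np.log(1 - h)) / m

    # Triple sum over every Theta entry, excluding the bias columns.
    reg = (lam / (2 * m)) * (np.sum(Theta1[:, 1:] ** 2) + np.sum(Theta2[:, 1:] ** 2))
    return cost + reg
```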
Backpropagation Algorithm
When computing hΘ(x), we use forward propagation, computing layer by layer from the input layer until we reach the output layer.
Now, in order to compute the partial derivatives of the cost function:
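$$
\frac{\partial}{\partial \Theta_{i,j}^{(l)}} J(\Theta)
$$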
we use the backpropagation algorithm: starting from the output layer, we compute the error of each layer (the error being the difference between an activation unit's prediction ak(l) and the actual value yk, for k = 1, …, K), working backwards layer by layer until we reach the second layer. The first layer is the input layer; its values come directly from the training set, so it has no error term.
Suppose for now that the training set contains only one example, and that the neural network is the four-layer model shown in the figure below:
Following the backpropagation algorithm, we start by computing the error at the output layer. Denoting the error by δ, the expression is:
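$$
\delta^{(4)} = a^{(4)} - y
$$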
Next, we use the error δ(4) above to compute the error of the third layer:
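$$
\delta^{(3)} = \big(\Theta^{(3)}\big)^T \delta^{(4)} \; .* \; g'\big(z^{(3)}\big)
$$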
Here g'(z(l)) is the derivative of the sigmoid activation function; following the derivation given for gradient descent in Logistic Regression (II), it works out to:
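$$
g'\big(z^{(l)}\big) = a^{(l)} \; .* \; \big(1 - a^{(l)}\big)
$$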
Finally, we use δ(3) to compute the error of the second layer:
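$$
\delta^{(2)} = \big(\Theta^{(2)}\big)^T \delta^{(3)} \; .* \; g'\big(z^{(2)}\big)
$$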
From these error terms we can derive the partial derivatives of the cost function J(Θ):
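$$
\frac{\partial}{\partial \Theta_{i,j}^{(l)}} J(\Theta) = a_j^{(l)}\,\delta_i^{(l+1)} \qquad (\text{ignoring regularization, i.e. } \lambda = 0)
$$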
Now consider the full training set and include regularization. We use Δi,j(l) to accumulate the errors; the procedure is as follows:
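- Set Δi,j(l) := 0 for all l, i, j;
- for each training example t = 1, …, m: set a(1) := x(t), forward-propagate to compute a(l) for l = 2, …, L, compute δ(L) = a(L) - y(t), back-propagate δ(L-1), …, δ(2), and accumulate Δ(l) := Δ(l) + δ(l+1)(a(l))T.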
Once these steps have been carried out for every training example, we obtain the accumulated error matrix Δi,j(l).
We can then compute the partial derivatives of the cost function as follows:
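$$
D_{i,j}^{(l)} := \frac{1}{m}\Big(\Delta_{i,j}^{(l)} + \lambda\,\Theta_{i,j}^{(l)}\Big) \quad \text{if } j \neq 0, \qquad
D_{i,j}^{(l)} := \frac{1}{m}\,\Delta_{i,j}^{(l)} \quad \text{if } j = 0
$$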
Finally, we obtain:
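$$
\frac{\partial}{\partial \Theta_{i,j}^{(l)}} J(\Theta) = D_{i,j}^{(l)}
$$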
With this expression we can minimize J(Θ) using gradient descent or another advanced optimization algorithm.
Supplementary Notes
Backpropagation Algorithm
"Backpropagation" is neural-network terminology for minimizing our cost function, just like what we were doing with gradient descent in logistic and linear regression. Our goal is to compute:
That is, we want to minimize our cost function J using an optimal set of parameters in theta. In this section we'll look at the equations we use to compute the partial derivative of J(Θ):
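$$
\frac{\partial}{\partial \Theta_{i,j}^{(l)}} J(\Theta)
$$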
To do so, we use the following algorithm:
Backpropagation Algorithm
Given training set {(x(1), y(1)), …, (x(m), y(m))}
- Set Δi,j(l) := 0 for all (l, i, j) (hence you end up having a matrix full of zeros)
For training example t = 1 to m:
- Set a(1):=x(t)
- Perform forward propagation to compute a(l) for l=2,3,…,L
- Using y(t), compute δ(L) = a(L) - y(t)
Where L is our total number of layers and a(L) is the vector of outputs of the activation units for the last layer. So our "error values" for the last layer are simply the differences of our actual results in the last layer and the correct outputs in y. To get the delta values of the layers before the last layer, we can use an equation that steps us back from right to left:
- Compute δ(L-1), δ(L-2), …, δ(2) using δ(l) = ((Θ(l))T δ(l+1)) .* a(l) .* (1 - a(l))
The delta values of layer l are calculated by multiplying the delta values in the next layer with the theta matrix of layer l. We then element-wise multiply that with a function called g', or g-prime, which is the derivative of the activation function g evaluated with the input values given by z(l).
The g-prime derivative terms can also be written out as:
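$$
g'\big(z^{(l)}\big) = a^{(l)} \; .* \; \big(1 - a^{(l)}\big)
$$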
- Δi,j(l):=Δi,j(l)+aj(l)δi(l+1) or with vectorization, Δ(l):=Δ(l)+δ(l+1)(a(l))T
Hence we update our new Δ matrix.
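After the loop we divide by m and add the regularization term, skipping the bias column j = 0:
$$
D_{i,j}^{(l)} := \frac{1}{m}\Big(\Delta_{i,j}^{(l)} + \lambda\,\Theta_{i,j}^{(l)}\Big) \quad \text{if } j \neq 0, \qquad
D_{i,j}^{(l)} := \frac{1}{m}\,\Delta_{i,j}^{(l)} \quad \text{if } j = 0
$$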
The capital-delta matrix D is used as an "accumulator" to add up our values as we go along and eventually compute our partial derivative. Thus we get:
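$$
\frac{\partial}{\partial \Theta_{i,j}^{(l)}} J(\Theta) = D_{i,j}^{(l)}
$$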
Backpropagation Intuition
Recall the regularized cost function for a neural network that we defined above.
If we consider simple non-multiclass classification (k = 1) and disregard regularization, the cost is computed with:
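$$
\mathrm{cost}(t) = -\,y^{(t)}\log\big(h_\Theta(x^{(t)})\big) - \big(1-y^{(t)}\big)\log\big(1-h_\Theta(x^{(t)})\big)
$$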
Intuitively, δj(l) is the "error" for aj(l) (unit j in layer l). More formally, the delta values are actually the derivative of the cost function:
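$$
\delta_j^{(l)} = \frac{\partial}{\partial z_j^{(l)}}\,\mathrm{cost}(t)
$$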
Recall that our derivative is the slope of a line tangent to the cost function, so the steeper the slope the more incorrect we are. Let us consider the following neural network below and see how we could calculate some δj(l):
In the image above, to calculate δ2(2), we multiply the weights Θ12(2) and Θ22(2) by their respective δ values found to the right of each edge. So we get δ2(2) = Θ12(2)δ1(3) + Θ22(2)δ2(3). To calculate every single possible δj(l), we could start from the right of our diagram. We can think of our edges as our Θij. Going from right to left, to calculate the value of δj(l), you can just take the overall sum of each weight times the δ it is coming from. Hence, another example would be δ2(3) = Θ12(3)*δ1(4).
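To tie the algorithm together, here is a minimal NumPy sketch of one backpropagation pass for the same assumed 3-layer setup as the cost sketch above (the names Theta1, Theta2, X, Y, and lam are illustrative, and Y is assumed one-hot). It follows the steps listed earlier: zero the Δ accumulators, loop over the training examples, forward-propagate, compute the δ terms from the output layer backwards, accumulate Δ, and finally average and regularize to obtain D = ∂J/∂Θ.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_gradients(Theta1, Theta2, X, Y, lam):
    """One backpropagation pass for an assumed 3-layer network,
    returning D1, D2 with D_l = dJ/dTheta_l.

    Shapes as in the cost sketch: X (m, n), Y (m, K) one-hot,
    Theta1 (s2, n+1), Theta2 (K, s2+1).
    """
    m = X.shape[0]
    Delta1 = np.zeros_like(Theta1)   # accumulators, initialized to zero
    Delta2 = np.zeros_like(Theta2)

    for t in range(m):
        # Forward propagation for one training example (bias unit prepended).
        a1 = np.concatenate(([1.0], X[t]))           # (n+1,)
        z2 = Theta1 @ a1
        a2 = np.concatenate(([1.0], sigmoid(z2)))    # (s2+1,)
        z3 = Theta2 @ a2
        a3 = sigmoid(z3)                             # (K,) output layer

        # "Error" terms, computed from the output layer backwards.
        delta3 = a3 - Y[t]
        delta2 = (Theta2.T @ delta3)[1:] * sigmoid(z2) * (1 - sigmoid(z2))

        # Accumulate: Delta_l := Delta_l + delta_(l+1) * a_l^T
        Delta1 += np.outer(delta2, a1)
        Delta2 += np.outer(delta3, a2)

    # Average and add the regularization term (bias column excluded).
    D1 = Delta1 / m
    D2 = Delta2 / m
    D1[:, 1:] += (lam / m) * Theta1[:, 1:]
    D2[:, 1:] += (lam / m) * Theta2[:, 1:]
    return D1, D2
```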