Recently, while looking for an internship, I started doing some hands-on programming practice. While reviewing deep learning fundamentals, I came across two great teachers, Andrew Ng and Hung-yi Lee; good teaching really does matter. Without further ado, let's start our first programming exercise.
The first thing we implement is the sigmoid activation function, the function at the heart of logistic regression. I personally recommend starting deep learning from logistic regression; although hardly anyone uses this activation in their own neural networks anymore, it is where deep learning began.
sigmoid
sigmoid(x) = 1 / (1 + e^(-x))
P.S. Its gradient peaks at 0.25 (at x = 0).
Implementation with the math library
import math

def basic_sigmoid(x):
    s = 1/(1+math.exp(-x))
    return s
This works, but in deep learning we want to operate on whole vectors rather than writing a for loop that computes a value for each entry of the vector one at a time; that would be a waste of time!
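To see the problem concretely, here is a minimal sketch (my own illustration, not part of the exercise): math.exp only accepts scalars, so the math version fails on an array and forces an explicit loop.

import math
import numpy as np

x = np.array([1, 2, 3])
# basic_sigmoid(x) raises TypeError: math.exp cannot handle a whole array
s = [1/(1+math.exp(-v)) for v in x]  # element-by-element loop workaround
print(s)  # [0.7310585786300049, 0.8807970779282846, 0.9525741268224334]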
Implementation with the numpy library
In [1]: import numpy as np
In [2]: x = np.array([1,2,3])
In [3]: print(np.exp(x))
[ 2.71828183 7.3890561 20.08553692]
numpy gives us excellent vectorized computation:
In [4]: import numpy as np
   ...:
   ...: def sigmoid(x):
   ...:     s = 1/(1+np.exp(-x))
   ...:     return s
   ...:
   ...: x = np.array([1,2,3])
   ...:
   ...: print(sigmoid(x))
   ...:
[ 0.73105858  0.88079708  0.95257413]
Sigmoid gradient
Formula: sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x))
In [5]: def sigmoid_derivative(x):
   ...:     s = 1/(1+np.exp(-x))
   ...:     ds = s*(1-s)
   ...:     return ds
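A quick sanity check of the P.S. above, reusing sigmoid_derivative from the session: the gradient really does peak at 0.25, at x = 0.

print(sigmoid_derivative(0))                     # 0.25, the peak value
print(sigmoid_derivative(np.array([-2, 0, 2])))  # [ 0.10499359  0.25  0.10499359]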
Reshaping dimensions
reshape changes an array's shape while keeping the same elements; for example, this merges the first two dimensions into one:
v = v.reshape((v.shape[0]*v.shape[1], v.shape[2]))
def image2vector(image):
    """
    Argument:
    image -- a numpy array of shape (length, height, depth)

    Returns:
    v -- a vector of shape (length*height*depth, 1)
    """
    ### START CODE HERE ### (≈ 1 line of code)
    v = image.reshape((image.shape[0]*image.shape[1]*image.shape[2], 1))
    ### END CODE HERE ###
    return v
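A quick shape check (the 3x3x2 array here is made-up data, just to verify the reshape):

import numpy as np

image = np.random.rand(3, 3, 2)  # hypothetical image: length=3, height=3, depth=2
v = image2vector(image)
print(image.shape)  # (3, 3, 2)
print(v.shape)      # (18, 1)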
Normalizing data
For example, dividing each row x by its norm: x / ||x||
In [6]: def normalizeRows(x):
   ...:     x_norm = np.linalg.norm(x, axis=1, keepdims=True)
   ...:     x = x / x_norm
   ...:     return x
   ...:
In [7]: x = np.array([[0,3,4],[1,6,4]])
In [8]: print(normalizeRows(x))
[[ 0. 0.6 0.8 ]
[ 0.13736056 0.82416338 0.54944226]]
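One caveat worth noting: a row of all zeros has norm 0, so the division produces a warning and nans. A small defensive variant (my own addition, not from the assignment) clamps the norm first:

import numpy as np

def normalizeRows_safe(x, eps=1e-12):
    # row-wise L2 norms; keepdims=True so broadcasting divides each row
    x_norm = np.linalg.norm(x, axis=1, keepdims=True)
    # clamp to eps so an all-zero row stays all-zero instead of nan
    return x / np.maximum(x_norm, eps)

print(normalizeRows_safe(np.array([[0., 0., 0.], [3., 4., 0.]])))
# [[ 0.   0.   0. ]
#  [ 0.6  0.8  0. ]]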
The softmax function
def softmax(x):
    x_exp = np.exp(x)                             # exponentiate elementwise
    x_sum = np.sum(x_exp, axis=1, keepdims=True)  # row sums, kept 2-D for broadcasting
    s = x_exp / x_sum                             # each row now sums to 1
    return s
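One thing to be aware of: np.exp overflows for large inputs. A common fix, not part of the original exercise, is to subtract the row-wise maximum before exponentiating; the result is unchanged because the shared factor cancels in the ratio.

import numpy as np

def softmax_stable(x):
    # subtracting the row max leaves softmax unchanged but prevents overflow
    shifted = x - np.max(x, axis=1, keepdims=True)
    x_exp = np.exp(shifted)
    return x_exp / np.sum(x_exp, axis=1, keepdims=True)

print(softmax_stable(np.array([[1000., 1000., 0.]])))  # [[ 0.5  0.5  0. ]]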
The L1 and L2 loss functions
def L1(yhat, y):
    """
    Arguments:
    yhat -- vector of size m (predicted labels)
    y -- vector of size m (true labels)

    Returns:
    loss -- the value of the L1 loss function defined above
    """
    loss = np.sum(np.abs(y - yhat))
    return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat,y)))
def L2(yhat, y):
    """
    Arguments:
    yhat -- vector of size m (predicted labels)
    y -- vector of size m (true labels)

    Returns:
    loss -- the value of the L2 loss function defined above
    """
    loss = np.dot(y - yhat, y - yhat)
    return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat,y)))