Gradient Descent Implementation
# -*- coding: utf-8 -*-
import matplotlib.pyplot as plt
'''Gradient descent implementation'''
'''Prepare the training data'''
x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]
# initial guess for the weight
w = 1.0
'''Define the model: a linear model y = x * w'''
def forward(x):
    return x * w
'''Define the cost function: mean squared error (MSE) over all training data'''
def cost(xs, ys):
    cost = 0
    for x, y in zip(xs, ys):
        y_pred = forward(x)
        cost += (y_pred - y) ** 2
    return cost / len(xs)
'''Compute the gradient: mean of d(loss)/dw = 2 * x * (x * w - y) over all training data'''
def gradients(xs, ys):
    grad = 0
    for x, y in zip(xs, ys):
        grad += 2 * x * (x * w - y)
    return grad / len(xs)
epoch_list = []
cost_list = []
print('predict (before training)', 4, forward(4))
'''Training loop'''
for epoch in range(100):
    cost_val = cost(x_data, y_data)
    grad_val = gradients(x_data, y_data)
    w -= 0.01 * grad_val  # learning rate set to 0.01
    print('epoch:', epoch, 'w=', w, 'loss=', cost_val)
    epoch_list.append(epoch)
    cost_list.append(cost_val)
print('predict (after training)', 4, forward(4))
plt.plot(epoch_list, cost_list)
plt.xlabel('epoch')
plt.ylabel('cost')
plt.show()
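As a quick sanity check (not part of the original script), the analytic gradient used above can be compared against a central finite-difference estimate of the cost. The two values should agree closely if the derivative 2 * x * (x * w - y) is correct:

```python
# Gradient check sketch (assumption: not in the original tutorial).
x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]

def cost(w):
    # Mean squared error of the linear model y = x * w
    return sum((x * w - y) ** 2 for x, y in zip(x_data, y_data)) / len(x_data)

def analytic_grad(w):
    # Mean of the analytic derivative 2 * x * (x * w - y)
    return sum(2 * x * (x * w - y) for x, y in zip(x_data, y_data)) / len(x_data)

w, eps = 1.0, 1e-6
numeric = (cost(w + eps) - cost(w - eps)) / (2 * eps)
print(analytic_grad(w), numeric)  # the two estimates should nearly coincide
```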
Stochastic Gradient Descent Implementation
# -*- coding: utf-8 -*-
import matplotlib.pyplot as plt
'''
Stochastic gradient descent (SGD) implementation.
SGD has proven effective for neural networks. It is less efficient (higher time cost), but its learning performance is better.
The main differences between SGD and batch gradient descent:
1. The loss function changes from cost() to loss(): cost() computes the loss over all training data, while loss() computes the loss of a single training sample. In the code this removes two for loops.
2. The gradient function changes from computing the gradient over all training data to computing the gradient of a single sample.
3. "Stochastic" here means that each update uses one training sample at a time. In this script the weight is updated 100 (epochs) x 3 = 300 times, whereas in batch gradient descent it is updated 100 times.
'''
'''Prepare the training data'''
x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]
# initial guess for the weight
w = 1.0
'''Define the model: a linear model y = x * w'''
def forward(x):
    return x * w
'''Define the loss function: squared error of a single training sample'''
def loss(x, y):  # parameters renamed from (xs, ys): the body uses a single sample
    y_pred = forward(x)
    return (y_pred - y) ** 2
'''Compute the gradient of a single sample: d(loss)/dw = 2 * x * (x * w - y)'''
def gradients(x, y):  # parameters renamed from (xs, ys): the body uses a single sample
    return 2 * x * (x * w - y)
epoch_list = []
lost_list = []
print('predict (before training)', 4, forward(4))
'''Training loop: one weight update per training sample'''
for epoch in range(100):
    for x, y in zip(x_data, y_data):
        grad = gradients(x, y)
        w -= 0.01 * grad  # learning rate set to 0.01
        print("\tgrad:", x, y, grad)
        l = loss(x, y)
    print("progress:", epoch, "w=", w, "loss=", l)
    epoch_list.append(epoch)
    lost_list.append(l)
print('predict (after training)', 4, forward(4))
plt.plot(epoch_list, lost_list)
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()
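A common middle ground between the two scripts above is mini-batch gradient descent: update the weight once per small batch of samples, rather than once per epoch (batch gradient descent) or once per sample (SGD). A minimal sketch, where the batch size of 2 is an illustrative choice and not from the original scripts:

```python
# Mini-batch gradient descent sketch (assumption: batch_size = 2 is illustrative).
x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]
w = 1.0
lr = 0.01        # same learning rate as the scripts above
batch_size = 2

for epoch in range(100):
    for i in range(0, len(x_data), batch_size):
        xb = x_data[i:i + batch_size]
        yb = y_data[i:i + batch_size]
        # Average the analytic gradient 2 * x * (x * w - y) over the mini-batch
        grad = sum(2 * x * (x * w - y) for x, y in zip(xb, yb)) / len(xb)
        w -= lr * grad

print('w after training:', w)  # should approach the true weight 2.0
```

With this data the last batch has only one sample, which the slicing handles naturally; dividing by `len(xb)` keeps the gradient an average in both cases.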