This article explains how to build simple linear regression and logistic regression models with TensorFlow. The workflow is fairly clear and breaks down into the following steps (a minimal code skeleton mapping them onto the TensorFlow 1.x API follows the list):
- Define the input data, typically with placeholder ops;
- Initialize the parameters, usually with random values;
- Define the model (e.g. a linear function or the logistic regression equation); in practice this means the activation function of each layer;
- Define the loss function;
- Define the gradient-descent optimizer;
- Measure the model's prediction accuracy;
- Iterate: combine the forward and backward passes to update the parameters and thereby minimize the loss.
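The following sketch is an illustrative addition, not code from the original post; the placeholder shapes and the 0.01 learning rate are hypothetical choices made only to show how each step maps onto the TF 1.x API:

import tensorflow as tf

# 1. Define the input data with placeholders (shapes are hypothetical)
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])

# 2. Initialize the parameters randomly
W = tf.Variable(tf.random_normal([784, 10]))
b = tf.Variable(tf.zeros([10]))

# 3. Define the model (the activation of each layer)
actv = tf.nn.softmax(tf.matmul(x, W) + b)

# 4. Define the loss function
loss = tf.reduce_mean(-tf.reduce_sum(y * tf.log(actv), axis=1))

# 5. Define the gradient-descent training step
train = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

# 6. Measure prediction accuracy
accr = tf.reduce_mean(tf.cast(
    tf.equal(tf.argmax(actv, 1), tf.argmax(y, 1)), tf.float32))

# 7. Iterate: forward + backward passes update the parameters
#    (run `train` repeatedly inside a tf.Session, feeding batches to x and y)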
Implementing Linear Regression
To implement the linear regression model, we first need training data. Here we take the equation y = 0.1*x + 0.3 as the ground truth and use Gaussian noise to randomly generate some (x, y) training samples:
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

# Randomly generate 1000 points
num_points = 1000
vectors_set = []
for i in range(num_points):
    x1 = np.random.normal(0.0, 1)
    y1 = x1 * 0.1 + 0.3 + np.random.normal(0.0, 0.01)
    vectors_set.append([x1, y1])

# Split into input and target samples
x_data = [v[0] for v in vectors_set]
y_data = [v[1] for v in vectors_set]

# Plot the data
plt.scatter(x_data, y_data, c='r')
plt.show()
Running the code above produces the following plot of the initial data:
[Figure: scatter plot of the linear regression training data]
Now we build the model:
# Initialize the parameters
W = tf.Variable(tf.random_uniform([1], -1.0, 1.0), name='W')
b = tf.Variable(tf.zeros([1]), name='b')

# Linear model
y = W * x_data + b

# Mean-squared-error loss
loss = tf.reduce_mean(tf.square(y - y_data), name='loss')

# Gradient-based optimizer (Adam)
optimizer = tf.train.AdamOptimizer(0.2)
train = optimizer.minimize(loss, name='train')

# Iterate to minimize the loss
with tf.Session() as sess:
    init = tf.global_variables_initializer()
    sess.run(init)
    for step in range(100):
        sess.run(train)
    print('W=', sess.run(W), 'b=', sess.run(b), 'loss=', sess.run(loss))
The code is shown directly; it is short, and the individual steps are explained by the inline comments. The loop runs for 100 iterations, and the fitted result is:
W= [ 0.1014075] b= [ 0.30171829] loss= 0.000109523
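As a quick sanity check (an illustrative addition, not part of the original code), the fitted line can be drawn over the training data; this snippet assumes it is placed inside the `with tf.Session()` block above, after the training loop, so that W and b can still be evaluated:

# Overlay the fitted line y = W*x + b on the training data
plt.scatter(x_data, y_data, c='r')
plt.plot(x_data, sess.run(W) * np.array(x_data) + sess.run(b), c='b')
plt.show()

The recovered W and b sit close to the true values 0.1 and 0.3, so the line should pass through the middle of the point cloud.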
Logistic Regression
To implement the logistic regression model, we use the MNIST dataset, a collection of handwritten-digit images. Load the data as follows:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('data/', one_hot=True)
trainimg = mnist.train.images
trainlabel = mnist.train.labels
testimg = mnist.test.images
testlabel = mnist.test.labels
print('MNIST loaded')
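Each image arrives flattened into a 784-dimensional vector (28x28 pixels) and each label as a 10-dimensional one-hot vector; a quick optional shape check makes this concrete:

# Optional: inspect the shapes of the loaded arrays
print(trainimg.shape)    # (55000, 784): 55000 flattened 28x28 images
print(trainlabel.shape)  # (55000, 10): one-hot digit labels
print(testimg.shape)     # (10000, 784)
print(testlabel.shape)   # (10000, 10)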
Now we build the logistic regression model:
# Placeholders for the input data
x = tf.placeholder('float', [None, 784])
y = tf.placeholder('float', [None, 10])

# Initialize the parameters
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

# Softmax activation
actv = tf.nn.softmax(tf.add(tf.matmul(x, W), b))

# Cross-entropy loss
cost = tf.reduce_mean(-tf.reduce_sum(y * tf.log(actv), axis=1))

# Gradient descent
learning_rate = 0.01
optm = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)

# Prediction accuracy
pred = tf.equal(tf.argmax(actv, 1), tf.argmax(y, 1))
accr = tf.reduce_mean(tf.cast(pred, 'float'))

# Iterate: minimize the loss, update the parameters, and track accuracy
init = tf.global_variables_initializer()
training_epochs = 50
batch_size = 100
display_step = 5

sess = tf.Session()
sess.run(init)
for epoch in range(training_epochs):
    avg_cost = 0
    num_batch = int(mnist.train.num_examples / batch_size)
    for i in range(num_batch):
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)
        sess.run(optm, feed_dict={x: batch_xs, y: batch_ys})
        avg_cost += sess.run(cost, feed_dict={x: batch_xs, y: batch_ys}) / num_batch
    if epoch % display_step == 0:
        train_acc = sess.run(accr, feed_dict={x: batch_xs, y: batch_ys})
        test_acc = sess.run(accr, feed_dict={x: testimg, y: testlabel})
        print('Epoch: %03d/%03d cost: %.9f train_acc: %.3f test_acc: %.3f'
              % (epoch, training_epochs, avg_cost, train_acc, test_acc))
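For reference, the `cost` above is the average cross-entropy between the one-hot labels and the softmax outputs. With N examples, 10 classes, and a_i = softmax(x_i W + b):

\text{cost} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{10} y_{ij}\,\log a_{ij}

Because each y_i is one-hot, the inner sum simply picks out the negative log-probability the model assigns to the true class.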
The steps follow the outline from the first section. The model is trained for 50 epochs; each epoch iterates over random mini-batches of 100 training examples to compute the loss and update the parameters. The training log is shown below:
Epoch: 000/050 cost: 1.176365508 train_acc: 0.860 test_acc: 0.851
Epoch: 005/050 cost: 0.440964549 train_acc: 0.900 test_acc: 0.895
Epoch: 010/050 cost: 0.383310327 train_acc: 0.870 test_acc: 0.904
Epoch: 015/050 cost: 0.357270292 train_acc: 0.890 test_acc: 0.909
Epoch: 020/050 cost: 0.341507422 train_acc: 0.950 test_acc: 0.913
Epoch: 025/050 cost: 0.330557244 train_acc: 0.880 test_acc: 0.914
Epoch: 030/050 cost: 0.322380775 train_acc: 0.900 test_acc: 0.915
Epoch: 035/050 cost: 0.315963900 train_acc: 0.920 test_acc: 0.917
Epoch: 040/050 cost: 0.310716868 train_acc: 0.930 test_acc: 0.918
Epoch: 045/050 cost: 0.306357458 train_acc: 0.870 test_acc: 0.919
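After training, the same graph can be used for inference. Below is a minimal sketch (not in the original post) that classifies a single test image; it assumes the `sess` opened above is still active:

import numpy as np

# Classify the first test image with the trained model
probs = sess.run(actv, feed_dict={x: testimg[:1]})
print('predicted digit:', np.argmax(probs),
      '| true digit:', np.argmax(testlabel[0]))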
Summary
I am still a beginner in deep learning and have only just started using TensorFlow. My takeaway from building these models: TensorFlow only requires you to write the network's forward computation by hand, while the backward pass and the loss-optimization methods are already wrapped up as functions you can call directly, which is convenient without giving up flexibility.