TensorFlow version used in this article: 1.4
TensorFlow installation: pip install tensorflow
1. Introduction
In the previous lesson we built a simple neural network. But how does data actually flow through that network? If we want a visual view of the data flow, and of how the parameters and the loss change during training, this is where TensorBoard comes into its own.
Let's first review our earlier TensorFlow code. We used a neural network to fit y = x^2 - 0.5, with an input layer and an output layer of one neuron each and a hidden layer of 10 neurons. The code is as follows:
import tensorflow as tf
import numpy as np
def add_layer(inputs, in_size, out_size, activation_function=None):
    # Weights: an in_size x out_size matrix, initialized from a normal distribution
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    # biases: initialized to a small positive value (0.1)
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Wx_plus_b = tf.add(tf.matmul(inputs, Weights), biases)
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs
x_data = np.linspace(-1,1,300)[:,np.newaxis]
noise = np.random.normal(0,0.05,x_data.shape).astype(np.float32)
y_data = np.square(x_data) - 0.5 + noise
# None means any number of samples can be fed in
xs = tf.placeholder(tf.float32,[None,1])
ys = tf.placeholder(tf.float32,[None,1])
l1 = add_layer(xs,1,10,activation_function=tf.nn.relu)
prediction = add_layer(l1,10,1,activation_function=None)
loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction),
reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    for i in range(1000):
        sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
        if i % 50 == 0:
            print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))
Next, let's use this example to walk through how TensorBoard is used.
2阐虚、可視化流程
To use TensorBoard, we first need to define name scopes (name_scope) for our variables; only with name scopes defined will the Graph shown in TensorBoard look neat and well organized. So let's modify the layer-building function as an example and see how to use name_scope. Note that name_scope has no effect whatsoever on how the network trains.
def add_layer(inputs, in_size, out_size, n_layer, activation_function=None):
    layer_name = "layer%s" % n_layer
    with tf.name_scope(layer_name):
        with tf.name_scope("Weights"):
            Weights = tf.Variable(tf.random_normal([in_size, out_size]), name='W')
            # logged later as a probability distribution (histogram)
        with tf.name_scope("biases"):
            biases = tf.Variable(tf.zeros([1, out_size]) + 0.1, name='b')
        with tf.name_scope("Wx_plus_b"):
            Wx_plus_b = tf.add(tf.matmul(inputs, Weights), biases)
        if activation_function is None:
            outputs = Wx_plus_b
        else:
            outputs = activation_function(Wx_plus_b)
        return outputs
Similarly, you can add name scopes anywhere you like to make your TensorBoard graph look tidier and better organized.
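As a quick illustration (a minimal sketch, assuming TensorFlow 1.x; the scope name "demo" is made up for this example): name_scope only prefixes the names of the ops created inside it, which is what groups them into collapsible boxes in the Graph view; the computed values are unchanged.
import tensorflow as tf

with tf.name_scope("demo"):
    a = tf.constant(1.0, name="a")   # tensor name becomes "demo/a:0"
    b = tf.constant(2.0, name="b")   # tensor name becomes "demo/b:0"
    c = tf.add(a, b, name="c")       # tensor name becomes "demo/c:0"

print(c.name)  # prints "demo/c:0" -- only the name is prefixed; c still evaluates to 3.0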
Once the name scopes are defined, we need to save the data flow of our network to a file:
merged = tf.summary.merge_all()
writer = tf.summary.FileWriter('logs/',sess.graph)
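One note on placement (a minimal sketch, assuming the TensorFlow 1.x summary API): the FileWriter is usually created inside the session so that sess.graph can be passed to it, and closing the writer at the end makes sure any buffered events are flushed to disk.
with tf.Session() as sess:
    merged = tf.summary.merge_all()                       # combine every summary op into one node
    writer = tf.summary.FileWriter('logs/', sess.graph)   # writes the graph definition into logs/
    # ... training loop with writer.add_summary(...) goes here ...
    writer.close()                                        # flush any remaining events to disk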
Besides the structure of the network, we sometimes also want to watch how the loss and the parameters change, so we need to add more content to our summaries. Since Weights and biases are arrays of numbers rather than single values, we use a histogram summary to record how their distributions change from step to step; the loss, on the other hand, is a single number, i.e. a scalar, so we use a scalar summary for it. The two are defined as follows:
tf.summary.histogram(layer_name + '/weights', Weights)
tf.summary.histogram(layer_name + '/biases', biases)
tf.summary.scalar("loss", loss)
Defining these alone does nothing, though; we still need to run them. Remember the merged node we defined earlier? Running that single node evaluates all the summaries, and we then write the returned result to the file through the writer:
result = sess.run(merged,feed_dict={xs:x_data,ys:y_data})
writer.add_summary(result,i)
So, putting it all together, our complete code now looks like this:
import tensorflow as tf
import numpy as np
def add_layer(inputs, in_size, out_size, n_layer, activation_function=None):
    layer_name = "layer%s" % n_layer
    with tf.name_scope(layer_name):
        with tf.name_scope("Weights"):
            Weights = tf.Variable(tf.random_normal([in_size, out_size]), name='W')
            # record the weights as a probability distribution (histogram)
            tf.summary.histogram(layer_name + '/weights', Weights)
        with tf.name_scope("biases"):
            biases = tf.Variable(tf.zeros([1, out_size]) + 0.1, name='b')
            tf.summary.histogram(layer_name + '/biases', biases)
        with tf.name_scope("Wx_plus_b"):
            Wx_plus_b = tf.add(tf.matmul(inputs, Weights), biases)
        if activation_function is None:
            outputs = Wx_plus_b
        else:
            outputs = activation_function(Wx_plus_b)
        return outputs
x_data = np.linspace(-1,1,300)[:,np.newaxis]
noise = np.random.normal(0,0.05,x_data.shape).astype(np.float32)
y_data = np.square(x_data) - 0.5 + noise
# None means any number of samples can be fed in
with tf.name_scope("input"):
    xs = tf.placeholder(tf.float32, [None, 1], name='x_input')
    ys = tf.placeholder(tf.float32, [None, 1], name='y_input')
l1 = add_layer(xs,1,10,1,activation_function=tf.nn.relu)
prediction = add_layer(l1,10,1,2,activation_function=None)
with tf.name_scope('loss'):
    loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction),
                                        reduction_indices=[1]))
    tf.summary.scalar("loss", loss)
with tf.name_scope('train'):
    train_step = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    # before TF 1.2 this was tf.train.SummaryWriter("logs/", sess.graph)
    merged = tf.summary.merge_all()
    writer = tf.summary.FileWriter('logs/', sess.graph)
    sess.run(init)
    for i in range(1000):
        sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
        if i % 50 == 0:
            result = sess.run(merged, feed_dict={xs: x_data, ys: y_data})
            writer.add_summary(result, i)
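As a quick sanity check (a hypothetical snippet, not part of the tutorial code), you can list the log directory after running the script; the FileWriter should have produced an event file there, which is what TensorBoard reads:
import os
print(os.listdir('logs'))  # expect something like ['events.out.tfevents....']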
3. Viewing the Results
We view the final results with the following command:
tensorboard --logdir logs
Then we can visit the URL it prints to see our results (TensorBoard serves on port 6006 by default, so it is usually http://localhost:6006). Note that I could not see anything with the Safari browser here, but Chrome works fine.
Under the GRAPHS tab we can see the saved data-flow graph of the whole network, and we can expand each layer to inspect its contents, for example layer1:
Under SCALARS, we can see our loss curve:
And under DISTRIBUTIONS, we can see how the distributions of the weights and biases we defined change over training:
There is plenty more for you to explore on your own; we'll stop here for now and continue in the next post!