Saving checkpoints
To produce a checkpoint file that can later be used to restore the model for further training or evaluation, we instantiate a tf.train.Saver.
saver = tf.train.Saver()
In the training loop, saver.save() is called periodically to write a checkpoint file to the training directory containing the current values of all trainable variables.
saver.save(sess, FLAGS.train_dir, global_step=step)
Later, saver.restore() can reload the model's parameters so that training can continue.
saver.restore(sess, tf.train.latest_checkpoint(FLAGS.train_dir))
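Because global_step is passed to saver.save(), each checkpoint is written under a step-suffixed prefix such as model.ckpt-1000 instead of overwriting the previous one. A minimal sketch of that naming scheme (plain Python for illustration, not the TensorFlow API itself):

```python
def checkpoint_prefix(save_path, global_step=None):
    # Saver.save appends the global step to the save path, so periodic
    # saves produce distinct prefixes: model.ckpt-0, model.ckpt-100, ...
    if global_step is None:
        return save_path
    return "{}-{}".format(save_path, global_step)

print(checkpoint_prefix("/tmp/model.ckpt", 1000))  # /tmp/model.ckpt-1000
```

When restoring, tf.train.latest_checkpoint(checkpoint_dir) can be used to locate the most recent of these prefixes.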
Example:
Saving variables
Create a Saver with tf.train.Saver() to manage all the variables in the model.
# Create some variables.
v1 = tf.Variable(..., name="v1")
v2 = tf.Variable(..., name="v2")
...
# Add an op to initialize the variables.
init_op = tf.global_variables_initializer()
# Add ops to save and restore all the variables.
saver = tf.train.Saver()
# Later, launch the model, initialize the variables, do some work, save the
# variables to disk.
with tf.Session() as sess:
  sess.run(init_op)
  # Do some work with the model.
  ...
  # Save the variables to disk.
  save_path = saver.save(sess, "/tmp/model.ckpt")
  print("Model saved in file: ", save_path)
Restoring variables
The same Saver object is used to restore the variables. Note that when you restore variables from a file, you do not have to initialize them beforehand.
# Create some variables.
v1 = tf.Variable(..., name="v1")
v2 = tf.Variable(..., name="v2")
...
# Add ops to save and restore all the variables.
saver = tf.train.Saver()
# Later, launch the model, use the saver to restore variables from disk, and
# do some work with the model.
with tf.Session() as sess:
  # Restore variables from disk.
  saver.restore(sess, "/tmp/model.ckpt")
  print("Model restored.")
  # Do some work with the model.
  ...
Choosing which variables to save and restore
If you pass no arguments to tf.train.Saver(), the saver handles all the variables in the graph. Each of them is saved under the name that was passed when the variable was created.
It is sometimes useful to explicitly specify the names that variables will have in the checkpoint file. For example, you may have trained a model with a variable named "weights" whose value you now want to restore into a new variable named "params".
It is also sometimes useful to save or restore only a subset of a model's variables. For example, you may have trained a 5-layer neural network and now want to train a new 6-layer model, reusing the parameters of the 5 trained layers as the new model's first 5 layers.
You can easily specify the names and variables to save by passing a Python dictionary to the tf.train.Saver() constructor: the keys are the names to use, the values are the variables to manage.
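As a plain-Python analogy (not the TensorFlow API), that dictionary acts as a rename map applied at restore time: keys are the names stored in the checkpoint, values are the in-graph variables that receive those values. The hypothetical "weights"-to-"params" rename mentioned above could be modeled like this:

```python
# Values stored in the checkpoint, keyed by the names used when saving.
checkpoint = {"weights": [0.5, -0.2, 1.3]}

# The new graph holds its variables under different names.
graph_vars = {"params": None}

# The dict passed to Saver, conceptually: checkpoint name -> target variable.
name_map = {"weights": "params"}

# "Restore": copy each checkpointed value into the variable it is mapped to.
for ckpt_name, var_name in name_map.items():
    graph_vars[var_name] = checkpoint[ckpt_name]

print(graph_vars)  # {'params': [0.5, -0.2, 1.3]}
```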
Notes:
- You can create as many Saver objects as you need if you want to save and restore different subsets of the model's variables. The same variable can be listed in multiple saver objects; its value is only changed when a saver's restore() method is run.
- If you only restore a subset of the model's variables at the start of a session, you have to run an initialization op for the remaining variables. See tf.variables_initializer() for details.
# Create some variables.
v1 = tf.Variable(..., name="v1")
v2 = tf.Variable(..., name="v2")
...
# Add ops to save and restore only 'v2' using the name "my_v2"
saver = tf.train.Saver({"my_v2": v2})
# Use the saver object normally after that.
...
Fixing the TensorBoard error "No dashboards are active for the current data set."
Your issue may be related to the drive you are attempting to start tensorboard from and the drive your logdir is on. Tensorboard uses a colon to separate the optional run name and the path in the logdir flag, so your path is being interpreted as \path\to\output\folder with name C. This can be worked around by either starting tensorboard from the same drive as your log directory or by providing an explicit run name, e.g. --logdir=mylogs:C:\path\to\output\folder.
In other words: in the value after --logdir, the optional run name and the directory path are separated by a colon, with the run name first and the path second:
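A rough sketch of the parsing behavior being described (an approximation for illustration, not TensorBoard's actual code): everything before the first colon is taken as the run name and the rest as the path, which is why a bare Windows path like C:\path\... is misread.

```python
def parse_logdir_entry(spec):
    # TensorBoard 1.x splits an optional run name from the path at the
    # first colon, so "C:\path\to\logs" is misread as run "C", path "\path\to\logs".
    if ":" in spec:
        name, path = spec.split(":", 1)
        return name, path
    return None, spec

print(parse_logdir_entry("C:\\path\\to\\output\\folder"))    # misread drive letter
print(parse_logdir_entry("mylogs:C:\\path\\to\\output\\folder"))  # explicit run name
```

Supplying an explicit run name (mylogs:C:\...) moves the drive letter safely into the path part of the split.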
Here is my code:
import tensorflow as tf
a=tf.constant(1,name="input_a")
b=tf.constant(2,name="input_b")
c=tf.multiply(a, b, name='mul_c')
d=tf.multiply(a, b, name='mul_d')
e=tf.add(c, d, name='add_e')
print(e)
sess=tf.Session()
sess.run(e)
writer=tf.summary.FileWriter(‘./my_graph‘,sess.graph)
Then run this in the console (note the form is tensorboard --logdir=run_name:path_name):
tensorboard --logdir=logs:D:/Code/PycharmProjects/test/venv
Here is the complete working code:
import tensorflow as tf
import numpy as np
# Generate phony data with NumPy: 100 random points.
x_data = np.float32(np.random.rand(2, 100))  # random input
y_data = np.dot([0.100, 0.200], x_data) + 0.300
# Build a linear model.
b = tf.Variable(tf.zeros([1]))
W = tf.Variable(tf.random_uniform([1, 2], -1.0, 1.0))
y = tf.matmul(W, x_data) + b
# Minimize the mean squared error.
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)
# Initialize the variables.
init = tf.global_variables_initializer()
# Launch the graph.
sess = tf.Session()
sess.run(init)
# Fit the plane.
for step in range(0, 201):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(W), sess.run(b))
# The best fit converges to W: [[0.100  0.200]], b: [0.300]
summary_writer = tf.summary.FileWriter('logs', sess.graph)
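Since y_data is generated exactly linearly, the coefficients gradient descent should converge to can be cross-checked with a closed-form least-squares fit in NumPy (a verification sketch, separate from the script above):

```python
import numpy as np

np.random.seed(0)
x_data = np.float32(np.random.rand(2, 100))
y_data = np.dot([0.100, 0.200], x_data) + 0.300

# Augment the inputs with a constant row of ones so the bias is fitted jointly.
A = np.vstack([x_data, np.ones(100)]).T          # shape (100, 3)
coef, _, _, _ = np.linalg.lstsq(A, y_data, rcond=None)
W_fit, b_fit = coef[:2], coef[2]
print(W_fit, b_fit)  # close to [0.1 0.2] and 0.3
```

Because the data has no noise, the recovered weights match the generating coefficients up to float32 rounding, which is the target the training loop's printed W and b approach.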