Learning objectives
- Tensors
- Variables
- Graphs
- Saving and restoring
Basic concepts
The core unit of data in TensorFlow is the tensor. A TensorFlow Core program can be seen as consisting of two independent parts: building the computational graph, and running the computational graph.
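A minimal sketch of this two-phase pattern (the values here are illustrative):
import tensorflow as tf
# Phase 1: build the computational graph. Nothing is computed yet.
a = tf.constant(2.0)
b = tf.constant(3.0)
total = a + b
# Phase 2: run the graph inside a session to produce actual values.
with tf.Session() as sess:
    print(sess.run(total)) # => 5.0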
Tensors
A tensor consists of a set of primitive values shaped into an array of any number of dimensions. A tensor's rank is its number of dimensions, and its shape is a tuple of integers specifying the length of each of the array's dimensions.
1 # rank-0 tensor (scalar)
[1,2] # rank-1 tensor (vector)
[[1,2],[3,4]] # rank-2 tensor (matrix)
...
A tf.Tensor has the following properties:
- a data type (e.g. float32, int32, string)
- a shape
Every element of a tensor has the same data type, and the data type is always known. The shape may be only partially known.
a = tf.constant(3.0, dtype=tf.float32) # define a constant
b = tf.constant(4.0) # the data type is also tf.float32, inferred from the value
Commonly used special tensors:
- tf.Variable
- tf.constant
- tf.placeholder
- tf.SparseTensor
Except for tf.Variable, the value of a tensor is immutable. The same tensor can still return different values on different evaluations, however, e.g. when it reads a random number.
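A small sketch of that last point: a tensor backed by a random-number op yields a new value each time it is evaluated (the printed values are illustrative):
rand = tf.random_uniform([1])
with tf.Session() as sess:
    print(sess.run(rand)) # e.g. [0.42]
    print(sess.run(rand)) # almost certainly a different value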
Rank
The rank of a tf.Tensor is its number of dimensions; it is not the same concept as the rank of a matrix in mathematics.
Rank | Math entity |
---|---|
0 | Scalar |
1 | Vector |
2 | Matrix |
3 | 3-tensor (cube of data) |
n | n-tensor |
- Rank 0
mammal = tf.Variable("Elephant", dtype=tf.string)
ignition = tf.Variable(451, dtype=tf.int16)
floating = tf.Variable(3.14159265359, dtype=tf.float64)
its_complicated = tf.Variable(12.3 - 4.85j, dtype=tf.complex64)
- Rank 1
mystr = tf.Variable(["Hello"], dtype=tf.string)
cool_numbers = tf.Variable([3.14159, 2.71828], dtype=tf.float32)
first_primes = tf.Variable([2, 3, 5, 7, 11], dtype=tf.int32)
its_very_complicated = tf.Variable([12.3 - 4.85j, 7.5 - 6.23j], dtype=tf.complex64)
- Rank 2
mymat = tf.Variable([[7],[11]], dtype=tf.int16)
myxor = tf.Variable([[False, True],[True, False]], dtype=tf.bool)
linear_squares = tf.Variable([[4], [9], [16], [25]], dtype=tf.int32)
squarish_squares = tf.Variable([ [4, 9], [16, 25] ], dtype=tf.int32)
rank_of_squares = tf.rank(squarish_squares)
mymatC = tf.Variable([[7],[11]], dtype=tf.int32)
Getting the rank
r = tf.rank(my_image) # after the graph runs, r will evaluate to 4
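For this snippet to run, my_image must already exist; a minimal assumption is a 4-D batch of images:
my_image = tf.zeros([10, 299, 299, 3]) # batch x height x width x channels (assumed shape)
r = tf.rank(my_image)
with tf.Session() as sess:
    print(sess.run(r)) # => 4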
Slicing
my_scalar = my_vector[2] # rank 1: returns a scalar
my_scalar = my_matrix[1, 2] # rank 2: returns a scalar
my_row_vector = my_matrix[2] # rank 2: returns a row
my_column_vector = my_matrix[:, 3] # rank 2: returns a column
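A runnable version of the slices above, with hypothetical values for my_vector and my_matrix:
my_vector = tf.constant([1, 2, 3, 4])
my_matrix = tf.constant([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
with tf.Session() as sess:
    print(sess.run(my_vector[2]))    # => 3
    print(sess.run(my_matrix[1, 2])) # => 7
    print(sess.run(my_matrix[2]))    # => [ 9 10 11 12]
    print(sess.run(my_matrix[:, 3])) # => [ 4  8 12]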
Shape
The shape of a tensor is the number of elements in each of its dimensions.
Example | Rank | Shape | Dimensions | Description |
---|---|---|---|---|
1 | 0 | [] | 0-D | a 0-D (rank-0) tensor |
[1,2] | 1 | [2] | 1-D | a 1-D (rank-1) tensor with shape [2] |
[[1,2,3],[4,5,6]] | 2 | [2,3] | 2-D | a 2-D (rank-2) tensor with shape [2,3] |
[[[1,2],[3,4]],[[5,6],[7,8]]] | 3 | [2,2,2] | 3-D | a 3-D (rank-3) tensor with shape [2,2,2] |
... | n | [D0,...,Dn-1] | n-D | an n-D (rank-n) tensor with shape [D0,...,Dn-1] |
Getting the shape
c = tf.constant([1,2])
print(c.shape) # method 1: returns a TensorShape, (2,)
tf.shape(c) # method 2: returns a tf.Tensor, <tf.Tensor 'Shape:0' shape=(1,) dtype=int32>
Reshaping
The number of elements of a tensor is the product of the sizes of all its dimensions; a scalar always has exactly one element.
rank_three_tensor = tf.ones([3, 4, 5]) # create a rank-3 tensor with shape (3,4,5)
matrix = tf.reshape(rank_three_tensor, [6, 10]) # reshape into a rank-2 tensor with shape (6,10)
matrixB = tf.reshape(matrix, [3, -1]) # reshape into a rank-2 tensor with shape (3,20)
matrixAlt = tf.reshape(matrixB, [4, 3, -1]) # reshape into a rank-3 tensor with shape (4,3,5)
yet_another = tf.reshape(matrixAlt, [13, 2, -1]) # error: the number of elements does not match
Data types
float_tensor = tf.cast(tf.constant([1, 2, 3]), dtype=tf.float32) # cast int32 to float32
print(tf.constant([1, 2, 3]).dtype) # inspect the data type
Data type | Python type | Description |
---|---|---|
DT_FLOAT | tf.float32 | 32-bit floating point. |
DT_DOUBLE | tf.float64 | 64-bit floating point. |
DT_INT64 | tf.int64 | 64-bit signed integer. |
DT_INT32 | tf.int32 | 32-bit signed integer. |
DT_INT16 | tf.int16 | 16-bit signed integer. |
DT_INT8 | tf.int8 | 8-bit signed integer. |
DT_UINT8 | tf.uint8 | 8-bit unsigned integer. |
DT_STRING | tf.string | Variable-length byte array; each tensor element is a byte array. |
DT_BOOL | tf.bool | Boolean. |
DT_COMPLEX64 | tf.complex64 | Complex number made of two 32-bit floats: real and imaginary parts. |
DT_QINT32 | tf.qint32 | 32-bit signed integer used in quantized ops. |
DT_QINT8 | tf.qint8 | 8-bit signed integer used in quantized ops. |
DT_QUINT8 | tf.quint8 | 8-bit unsigned integer used in quantized ops. |
Evaluating tensors
t = tf.constant(42.0)
u = tf.constant(37.0)
tu = tf.multiply(t, u)
ut = tf.multiply(u, t)
sess = tf.Session()
with sess.as_default():
    tu.eval() # runs one step
    ut.eval() # runs one step
    sess.run([tu, ut]) # evaluates both tensors in a single step
Printing tensors
Useful for debugging:
x = tf.constant([2,3,4,5])
x = tf.Print(x, [x, tf.shape(x), 'any thing i want'], message='Debug message:', summarize=100)
with tf.Session() as sess:
    sess.run(x)
Output: Debug message:[2 3 4 5][4][any thing i want]
Variables
Creating variables
my_variable = tf.get_variable("my_variable", [1, 2, 3]) # the initial value is set randomly via tf.glorot_uniform_initializer
my_int_variable = tf.get_variable("my_int_variable", [1, 2, 3], dtype=tf.int32, initializer=tf.zeros_initializer) # specify the dtype and initializer
other_variable = tf.get_variable("other_variable", dtype=tf.int32, initializer=tf.constant([23, 42])) # initialized from a tensor; the shape is taken from the tensor
Variable collections
TensorFlow provides collections for grouping variables. By default, every tf.Variable is placed in the following two collections:
- tf.GraphKeys.GLOBAL_VARIABLES - variables that can be shared across multiple devices
- tf.GraphKeys.TRAINABLE_VARIABLES - variables for which TensorFlow will compute gradients
my_local = tf.get_variable("my_local", shape=(), collections=[tf.GraphKeys.LOCAL_VARIABLES]) # added to the tf.GraphKeys.LOCAL_VARIABLES collection
my_non_trainable = tf.get_variable("my_non_trainable", shape=(), trainable=False) # not added to tf.GraphKeys.TRAINABLE_VARIABLES
tf.add_to_collection("my_collection_name", my_local) # add to a collection with a name of your own choosing
tf.get_collection("my_collection_name") # retrieve all variables in that collection
Device placement
### Place a variable on the second GPU device
with tf.device("/device:GPU:1"):
    v = tf.get_variable("v", [1])
Distributed setting
To be covered in a later update.
Initializing variables
### Initialize all variables in the tf.GraphKeys.GLOBAL_VARIABLES collection
# Create two variables.
weights = tf.Variable(tf.random_normal([784, 200], stddev=0.35),
                      name="weights")
biases = tf.Variable(tf.zeros([200]), name="biases")
...
# Add an op that initializes the variables.
init_op = tf.global_variables_initializer()
# Later, when launching the model
with tf.Session() as sess:
    # Run the init operation.
    sess.run(init_op)
    ...
    # Use the model
    ...
### Initializing variables yourself
session.run(my_variable.initializer)
print(session.run(tf.report_uninitialized_variables())) # query which variables have not been initialized yet
The default tf.global_variables_initializer does not specify the order in which variables are initialized. So if a variable's initial value depends on the value of another variable, you are likely to get an error.
# Create a variable with a random value.
weights = tf.Variable(tf.random_normal([784, 200], stddev=0.35), name="weights")
# Create another variable with the same initial value as weights.
w2 = tf.Variable(weights.initialized_value(), name="w2")
# Create another variable whose initial value is twice that of weights.
w_twice = tf.Variable(weights.initialized_value() * 2.0, name="w_twice")
Using variables
v = tf.get_variable("v", shape=(), initializer=tf.zeros_initializer())
w = v + 1
### Assigning to a variable
v = tf.get_variable("v", shape=(), initializer=tf.zeros_initializer())
assignment = v.assign_add(1)
tf.global_variables_initializer().run()
sess.run(assignment) # or assignment.op.run(), or assignment.eval()
v = tf.get_variable("v", shape=(), initializer=tf.zeros_initializer())
assignment = v.assign_add(1)
with tf.control_dependencies([assignment]):
w = v.read_value() #w在assign_add操作后反映v的值
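A runnable version of the assignment snippets above, inside a session (a sketch; the variable gets a fresh, hypothetical name to avoid clashing with the v defined earlier):
u = tf.get_variable("u_counter", shape=(), initializer=tf.zeros_initializer())
assignment = u.assign_add(1)
with tf.control_dependencies([assignment]):
    w = u.read_value()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(w)) # => 1.0, because the assign_add runs before the read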
Sharing variables
### Create a convolutional layer
def conv_relu(input, kernel_shape, bias_shape):
    # Create variable named "weights".
    weights = tf.get_variable("weights", kernel_shape,
        initializer=tf.random_normal_initializer())
    # Create variable named "biases".
    biases = tf.get_variable("biases", bias_shape,
        initializer=tf.constant_initializer(0.0))
    conv = tf.nn.conv2d(input, weights,
        strides=[1, 1, 1, 1], padding='SAME')
    return tf.nn.relu(conv + biases)
### TensorFlow fails here, because the desired behavior is ambiguous: create new variables or reuse the existing ones?
input1 = tf.random_normal([1,10,10,32])
input2 = tf.random_normal([1,20,20,32])
x = conv_relu(input1, kernel_shape=[5, 5, 32, 32], bias_shape=[32])
x = conv_relu(x, kernel_shape=[5, 5, 32, 32], bias_shape = [32]) # This fails.
Sharing parameters with variable scopes
Two function interfaces are involved here:
- tf.get_variable(<name>, <shape>, <initializer>): creates a variable with the given name, or returns the existing tensor object
- tf.variable_scope(<scope_name>): manages the namespace for variables created with tf.get_variable()
tf.get_variable() behaves quite differently from tf.Variable(): if the given variable name already exists (i.e. a variable was previously created under the same name via get_variable()), get_variable() simply returns that variable; otherwise it creates a new one.
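A small sketch of the difference (variable names are illustrative):
a = tf.Variable(0.0, name="x")
b = tf.Variable(0.0, name="x") # tf.Variable silently creates a second variable, "x_1"
print(a.name, b.name)          # => x:0 x_1:0
with tf.variable_scope("s"):
    c = tf.get_variable("x", shape=[1])
with tf.variable_scope("s", reuse=True):
    d = tf.get_variable("x", shape=[1]) # returns the existing variable
assert c is d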
def conv_relu(input, kernel_shape, bias_shape):
    # Create variable named "weights".
    weights = tf.get_variable("weights", kernel_shape,
        initializer=tf.random_normal_initializer())
    # Create variable named "biases".
    biases = tf.get_variable("biases", bias_shape,
        initializer=tf.constant_initializer(0.0))
    conv = tf.nn.conv2d(input, weights,
        strides=[1, 1, 1, 1], padding='SAME')
    return tf.nn.relu(conv + biases)
def my_image_filter(input_images):
    with tf.variable_scope("conv1"):
        # Variables created here will be named "conv1/weights", "conv1/biases".
        relu1 = conv_relu(input_images, [5, 5, 32, 32], [32])
    with tf.variable_scope("conv2"):
        # Variables created here will be named "conv2/weights", "conv2/biases".
        return conv_relu(relu1, [5, 5, 32, 32], [32])
We first define a conv_relu() function and then use tf.variable_scope() to handle the parameters of the two convolutional layers separately. As the comments point out, this prepends a scope prefix to the names of the variables created inside; for example, conv1/weights is the weight parameter of the first convolutional layer. This lets us tell the parameters of the different layers apart by their scope names.
However, calling my_image_filter directly like this raises an exception:
result1 = my_image_filter(image1)
result2 = my_image_filter(image2)
# Raises ValueError(... conv1/weights already exists ...)
Although tf.get_variable() can share variables, by default it only checks variable names to prevent duplicates. To enable sharing, you must also specify in which scope variables may be reused:
with tf.variable_scope("image_filters") as scope:
result1 = my_image_filter(image1)
scope.reuse_variables()
result2 = my_image_filter(image2)
At this point variable sharing is done. You don't even have to define the variables outside the function: just call the same function with different scope names and let TensorFlow manage the variables for you.
If some variables should be shared and others should not:
def test(mode):
    w = tf.get_variable(name=mode+"w", shape=[1,2])
    u = tf.get_variable(name="u", shape=[1,2])
    return w, u

with tf.variable_scope("test", reuse=tf.AUTO_REUSE) as scope:
    w1, u1 = test("mode1")
    w2, u2 = test("mode2")
All we added here is the argument reuse=tf.AUTO_REUSE, but as the name suggests this is an automatic sharing mechanism: when the system detects that a variable we request was already defined, sharing kicks in; otherwise a new variable is created.
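So in the snippet above, u is shared between the two calls while the two w variables stay distinct; a quick check:
assert u1 is u2     # "test/u" was reused on the second call
assert w1 is not w2 # "test/mode1w" and "test/mode2w" are different variables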
How variable scopes work
First, TensorFlow decides whether sharing is on by checking the value of tf.get_variable_scope().reuse. If it is False (i.e. you did not call scope.reuse_variables() inside the scope), TensorFlow assumes you want to create a new variable and then checks whether a variable with that name already exists. If it does, a ValueError is raised; otherwise the variable is created using the initializer:
with tf.variable_scope("foo"):
v = tf.get_variable("v", [1])
assert v.name == "foo/v:0"
If instead tf.get_variable_scope().reuse == True, TensorFlow does the opposite: it looks for an existing variable named scope name + name. If that variable does not exist, a ValueError is raised; otherwise the found variable is returned:
with tf.variable_scope("foo"):
v = tf.get_variable("v", [1])
with tf.variable_scope("foo", reuse=True):
v1 = tf.get_variable("v", [1])
assert v1 is v
Basic use of variable scopes
- Variable scopes can be nested
with tf.variable_scope("foo"):
with tf.variable_scope("bar"):
v = tf.get_variable("v", [1])
assert v.name == "foo/bar/v:0"
We can also obtain the current variable scope object via tf.get_variable_scope() and turn on sharing with its reuse_variables() method. Note that TensorFlow does not support setting reuse back to False; to stop sharing variables, leave the current variable scope or enter a new one (e.g. enter another with statement and give it a new scope name).
Also note that once reuse is set to True inside a scope, all of its sub-scopes inherit the reuse flag and share variables automatically:
with tf.variable_scope("root"):
# At start, the scope is not reusing.
assert tf.get_variable_scope().reuse == False
with tf.variable_scope("foo"):
# Opened a sub-scope, still not reusing.
assert tf.get_variable_scope().reuse == False
with tf.variable_scope("foo", reuse=True):
# Explicitly opened a reusing scope.
assert tf.get_variable_scope().reuse == True
with tf.variable_scope("bar"):
# Now sub-scope inherits the reuse flag.
assert tf.get_variable_scope().reuse == True
# Exited the reusing scope, back to a non-reusing one.
assert tf.get_variable_scope().reuse == False
捕獲變量域?qū)ο?/h6>
如果一直用字符串來(lái)區(qū)分變量域,寫起來(lái)容易出錯(cuò)。為此鸡挠,TensorFlow 提供了一個(gè)變量域?qū)ο髞?lái)幫助我們管理代碼:
with tf.variable_scope("foo") as foo_scope:
v = tf.get_variable("v", [1])
with tf.variable_scope(foo_scope)
w = tf.get_variable("w", [1])
with tf.variable_scope(foo_scope, reuse=True)
v1 = tf.get_variable("v", [1])
w1 = tf.get_variable("w", [1])
assert v1 is v
assert w1 is w
Note that the scope object also lets us jump out of the current variable scope entirely:
with tf.variable_scope("foo") as foo_scope:
assert foo_scope.name == "foo"
with tf.variable_scope("bar")
with tf.variable_scope("baz") as other_scope:
assert other_scope.name == "bar/baz"
with tf.variable_scope(foo_scope) as foo_scope2:
assert foo_scope2.name == "foo" # Not changed.
Initializing variables inside a variable scope
Passing an initializer every time a variable is created quickly gets tedious. With a variable scope you can set a default initializer for a whole batch of parameters:
with tf.variable_scope("foo", initializer=tf.constant_initializer(0.4)):
v = tf.get_variable("v", [1])
assert v.eval() == 0.4 # Default initializer as set above.
w = tf.get_variable("w", [1], initializer=tf.constant_initializer(0.3)):
assert w.eval() == 0.3 # Specific initializer overrides the default.
with tf.variable_scope("bar"):
v = tf.get_variable("v", [1])
assert v.eval() == 0.4 # Inherited default initializer.
with tf.variable_scope("baz", initializer=tf.constant_initializer(0.2)):
v = tf.get_variable("v", [1])
assert v.eval() == 0.2 # Changed default initializer.
Graphs
TensorFlow uses a dataflow graph to represent a computation in terms of the dependencies between individual operations. This leads to a low-level programming model in which you first define the dataflow graph and then create a TensorFlow session to run parts of the graph across a set of local and remote devices.
Why dataflow graphs
A computational graph is a series of TensorFlow operations arranged into a graph.
A tf.Graph contains two relevant kinds of information:
- The graph structure, made up of two types of objects:
  - Operations: the nodes of the graph; computations that consume and produce tensors.
  - Tensors: the edges of the graph; they represent the values flowing through the graph.
- Graph collections.
Building a tf.Graph
A tf.Graph object defines a namespace for the tf.Operation objects it contains. TensorFlow automatically picks a unique name for every operation in your graph, but giving operations descriptive names can make your program easier to read and debug. The TensorFlow API provides two ways to override the name of an operation:
- Every API function that creates a new tf.Operation or returns a new tf.Tensor accepts an optional name argument. For example, tf.constant(42.0, name="answer") creates a new tf.Operation named "answer" and returns a tf.Tensor named "answer:0". If the default graph already contains an operation named "answer", TensorFlow appends "_1", "_2", and so on to the name to make it unique.
- The tf.name_scope function adds a name scope prefix to all operations created in the same context, as the example below shows.
c_0 = tf.constant(0, name="c") # => operation named "c"
# Already-used names will be "uniquified".
c_1 = tf.constant(2, name="c") # => operation named "c_1"
# Name scopes add a prefix to all operations created in the same context.
with tf.name_scope("outer"):
c_2 = tf.constant(2, name="c") # => operation named "outer/c"
# Name scopes nest like paths in a hierarchical file system.
with tf.name_scope("inner"):
c_3 = tf.constant(3, name="c") # => operation named "outer/inner/c"
# Exiting a name scope context will return to the previous prefix.
c_4 = tf.constant(4, name="c") # => operation named "outer/c_1"
# Already-used name scopes will be "uniquified".
with tf.name_scope("inner"):
c_5 = tf.constant(5, name="c") # => operation named "outer/inner_1/c"
Tensor-like objects
By default, TensorFlow creates a new tf.Tensor each time you use the same tensor-like object. If the tensor-like object is large (e.g. a numpy.ndarray containing a set of training examples) and you use it many times, you can run out of memory. To avoid this, manually call tf.convert_to_tensor on the tensor-like object once and use the returned tf.Tensor instead. Tensor-like objects include:
- tf.Tensor
- tf.Variable
- numpy.ndarray
- list (and lists of tensor-like objects)
- scalar Python types: bool, float, int, str
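A minimal sketch of the fix just described (the array size is illustrative):
import numpy as np
big_array = np.random.rand(1000000)          # a large tensor-like object
big_tensor = tf.convert_to_tensor(big_array) # convert it to a tf.Tensor once
y1 = big_tensor * 2.0                        # reuse the tf.Tensor instead of the ndarray
y2 = big_tensor + 1.0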
tf.Session
# Create a default in-process session.
with tf.Session() as sess:
    # ...
# Create a remote session.
with tf.Session("grpc://example.org:2222"):
    # ...
Because a tf.Session owns physical resources (such as GPUs and network connections), it is typically used as a context manager (in a with block), which automatically closes the session when you exit the block. You can also create a session without a with block, but you should then explicitly call tf.Session.close when you are done with it to free the resources.
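The explicit-close variant, as a sketch:
sess = tf.Session()
try:
    sess.run(tf.constant(1.0)) # ... use the session ...
finally:
    sess.close() # release GPU memory and network connections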
tf.Session.run
The tf.Session.run method is the main mechanism for running a tf.Operation or evaluating a tf.Tensor. You can pass one or more tf.Operation or tf.Tensor objects to tf.Session.run, and TensorFlow will execute the operations that are needed to compute the result.
x = tf.constant([[37.0, -23.0], [1.0, 4.0]])
w = tf.Variable(tf.random_uniform([2, 2]))
y = tf.matmul(x, w)
output = tf.nn.softmax(y)
init_op = w.initializer
with tf.Session() as sess:
    # Run the initializer on `w`.
    sess.run(init_op)
    # Evaluate `output`. `sess.run(output)` will return a NumPy array containing
    # the result of the computation.
    print(sess.run(output))
    # Evaluate `y` and `output`. Note that `y` will only be computed once, and its
    # result used both to return `y_val` and as an input to the `tf.nn.softmax()`
    # op. Both `y_val` and `output_val` will be NumPy arrays.
    y_val, output_val = sess.run([y, output])
tf.Session.run also optionally accepts a dictionary of feeds:
# Define a placeholder that expects a vector of three floating-point values,
# and a computation that depends on it.
x = tf.placeholder(tf.float32, shape=[3])
y = tf.square(x)
with tf.Session() as sess:
    # Feeding a value changes the result that is returned when you evaluate `y`.
    print(sess.run(y, {x: [1.0, 2.0, 3.0]})) # => "[1.0, 4.0, 9.0]"
    print(sess.run(y, {x: [0.0, 0.0, 5.0]})) # => "[0.0, 0.0, 25.0]"
    # Raises `tf.errors.InvalidArgumentError`, because you must feed a value for
    # a `tf.placeholder()` when evaluating a tensor that depends on it.
    sess.run(y)
    # Raises `ValueError`, because the shape of `37.0` does not match the shape
    # of placeholder `x`.
    sess.run(y, {x: 37.0})
Programming with multiple graphs
g_1 = tf.Graph()
with g_1.as_default():
    # Operations created in this scope will be added to `g_1`.
    c = tf.constant("Node in g_1")
    # Sessions created in this scope will run operations from `g_1`.
    sess_1 = tf.Session()

g_2 = tf.Graph()
with g_2.as_default():
    # Operations created in this scope will be added to `g_2`.
    d = tf.constant("Node in g_2")

# Alternatively, you can pass a graph when constructing a `tf.Session`:
# `sess_2` will run operations from `g_2`.
sess_2 = tf.Session(graph=g_2)

assert c.graph is g_1
assert sess_1.graph is g_1

# Print all of the operations in the default graph.
g = tf.get_default_graph()
print(g.get_operations())
Saving and restoring variables
Saving variables
# Create some variables.
v1 = tf.get_variable("v1", shape=[3], initializer=tf.zeros_initializer)
v2 = tf.get_variable("v2", shape=[5], initializer=tf.zeros_initializer)
inc_v1 = v1.assign(v1 + 1)
dec_v2 = v2.assign(v2 - 1)
# Add an op to initialize the variables.
init_op = tf.global_variables_initializer()
# Add ops to save and restore all the variables.
saver = tf.train.Saver()
# Later, launch the model, initialize the variables, do some work, and save the
# variables to disk.
with tf.Session() as sess:
    sess.run(init_op)
    # Do some work with the model.
    inc_v1.op.run()
    dec_v2.op.run()
    # Save the variables to disk.
    save_path = saver.save(sess, "/tmp/model.ckpt")
    print("Model saved in path: %s" % save_path)
Restoring variables
tf.reset_default_graph()
# Create some variables.
v1 = tf.get_variable("v1", shape=[3])
v2 = tf.get_variable("v2", shape=[5])
# Add ops to save and restore all the variables.
saver = tf.train.Saver()
# Later, launch the model, use the saver to restore variables from disk, and
# do some work with the model.
with tf.Session() as sess:
    # Restore variables from disk.
    saver.restore(sess, "/tmp/model.ckpt")
    print("Model restored.")
    # Check the values of the variables
    print("v1 : %s" % v1.eval())
    print("v2 : %s" % v2.eval())