Expanding a Tensor's Length
在進(jìn)行模型結(jié)構(gòu)設(shè)計(jì)的時(shí)人弓,我們時(shí)常需要將一個(gè)變長(zhǎng)的Tensor通過(guò)擴(kuò)張來(lái)與另一個(gè)Tensor維度對(duì)齊抖僵,進(jìn)而方便下一步的計(jì)算督怜。這個(gè)時(shí)候就可以使用tf.tile()
來(lái)進(jìn)行Tensor的復(fù)制性擴(kuò)張摸恍。
import tensorflow as tf
x = tf.constant(['a'], name='x')
y = tf.tile(x, [3], name='y')
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    print(y.eval())
Output:
[b'a' b'a' b'a']
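The multiples argument of tf.tile() takes one entry per dimension, so it can also replicate along a chosen axis of a higher-rank Tensor. As a minimal sketch (the tensors below are made up for illustration), this is how a (1, 2) row can be tiled to match a (3, 2) batch before an element-wise addition:
import tensorflow as tf
batch = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # shape (3, 2)
row = tf.constant([[10.0, 20.0]])                          # shape (1, 2)
# Repeat the row 3 times along axis 0 and once along axis 1 -> shape (3, 2).
row_tiled = tf.tile(row, [3, 1])
with tf.Session() as sess:
    print(sess.run(batch + row_tiled))
Output:
[[11. 22.]
 [13. 24.]
 [15. 26.]]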
Printing Tensors
Printing a Tensor Directly
在初學(xué)TensorFlow時(shí)吱型,我們通常需要頻繁的編寫(xiě)Demo以及打印Tensor來(lái)促使我們快速了解TensorFlow。但是與普通編程框架不同园骆,TensorFlow大體上屬于聲明式編程舔痪,它的基礎(chǔ)數(shù)據(jù)單元Tensor無(wú)法直接通過(guò)print()
進(jìn)行打印。如下代碼將會(huì)輸出該Tensor的結(jié)構(gòu)而非內(nèi)容:
import tensorflow as tf
a = tf.constant(["Hello World"])
print(a)
Output:
Tensor("Const:0", shape=(1,), dtype=string)
In TensorFlow, if we simply want to print the contents of a constant, we can fetch it through the Tensor's eval() method once a Session has been opened.
import tensorflow as tf
a = tf.constant(["Hello World"])
with tf.Session() as sess:
    print(a.eval())
Output:
[b'Hello World']
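Equivalently, the value can be fetched with Session.run(); inside the with block, a.eval() is shorthand for running the Tensor in the default session:
import tensorflow as tf
a = tf.constant(["Hello World"])
with tf.Session() as sess:
    # sess.run(a) is equivalent to a.eval() here.
    print(sess.run(a))
Output:
[b'Hello World']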
更進(jìn)一步晌涕,當(dāng)我們?cè)噲D采用上述方式試圖打印某個(gè)變量的內(nèi)容時(shí):
import tensorflow as tf
a = tf.get_variable(name='a', dtype=tf.string, initializer=["Hello World"])
with tf.Session() as sess:
    print(a.eval())
將會(huì)產(chǎn)生如下異常:
Instructions for updating:
Colocations handled automatically by placer.
2020-03-05 23:29:28.889473: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Traceback (most recent call last):
  File "/Users/tan/anaconda2/envs/tf36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
    return fn(*args)
  File "/Users/tan/anaconda2/envs/tf36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/Users/tan/anaconda2/envs/tf36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value a
	 [[{{node _retval_a_0_0}}]]
...
As noted above, TensorFlow is largely a declarative programming framework. Even though we have given the Tensor variable an initial value, we must still initialize it in the Session before performing any operations on it:
import tensorflow as tf
a = tf.get_variable(name='a', dtype=tf.string, initializer=["Hello World"])
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    print(a.eval())
Output:
[b'Hello World']
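When only a single variable is involved, running that variable's own initializer op also works, in place of tf.global_variables_initializer():
import tensorflow as tf
a = tf.get_variable(name='a', dtype=tf.string, initializer=["Hello World"])
with tf.Session() as sess:
    # Initialize just this one variable.
    sess.run(a.initializer)
    print(a.eval())
Output:
[b'Hello World']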
Getting a Tensor's Shape
When building models with TensorFlow, we often need to obtain a Tensor's shape, for example to construct a new Tensor. There are two common ways to get a Tensor's shape:
import tensorflow as tf
x = tf.constant(['a,b,c,d,e'], name='x')
x_shape = x.get_shape()
print(x_shape)
x_shape = tf.shape(x)
print(x_shape)
Output:
(1,)
Tensor("Shape:0", shape=(1,), dtype=int32)
As the output shows, the two forms return quite different things. When we want to use the shape in ordinary Python logic, we should use the first form, get_shape(); when we want to use the shape inside a TensorFlow computation, we should use the second form, tf.shape(). After all, data in TensorFlow computations is mostly in Tensor form.
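As a brief sketch of the distinction (the placeholder below is made up for illustration): get_shape() gives a static shape usable in plain Python logic before any Session runs, while tf.shape() gives a Tensor that can feed further graph operations, such as building a zeros Tensor whose batch size is only known at run time:
import tensorflow as tf
x = tf.placeholder(tf.float32, shape=[None, 4])
# Static shape: known at graph-build time, usable in plain Python logic.
print(x.get_shape())
# Dynamic shape: a Tensor, usable inside the graph; the batch size is
# resolved only when the placeholder is fed.
zeros_like_x = tf.zeros(tf.shape(x))
with tf.Session() as sess:
    result = sess.run(zeros_like_x, feed_dict={x: [[1, 2, 3, 4], [5, 6, 7, 8]]})
    print(result.shape)
Output:
(?, 4)
(2, 4)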