TensorFlow control flow: tf.case
tf.case
case(pred_fn_pairs,
     default,
     exclusive=False,
     strict=False,
     name='case')
This control-flow statement is quite handy.
Defined in tensorflow/python/ops/control_flow_ops.py.
See the guide: Control Flow > Control Flow Operations
Create a case operation.
The pred_fn_pairs parameter is a dict or list of pairs of size N. Each pair contains a boolean scalar tensor and a python callable that creates the tensors to be returned if the boolean evaluates to True. default is a callable generating a list of tensors. All the callables in pred_fn_pairs as well as default should return the same number and types of tensors.
If exclusive==True, all predicates are evaluated, and an exception is thrown if more than one of the predicates evaluates to True. If exclusive==False, execution stops at the first predicate which evaluates to True, and the tensors generated by the corresponding function are returned immediately. If none of the predicates evaluate to True, this operation returns the tensors generated by default.
tf.case supports nested structures as implemented in tensorflow.python.util.nest. All of the callables must return the same (possibly nested) value structure of lists, tuples, and/or named tuples. Singleton lists and tuples form the only exceptions to this: when returned by a callable, they are implicitly unpacked to single values. This behavior is disabled by passing strict=True.
If an unordered dictionary is used for pred_fn_pairs, the order of the conditional tests is not guaranteed. However, the order is guaranteed to be deterministic, so that variables created in conditional branches are created in fixed order across runs.
Example 1:
Pseudocode:
if (x < y) return 17;
else return 23;
Expressions:
f1 = lambda: tf.constant(17)
f2 = lambda: tf.constant(23)
r = tf.case([(tf.less(x, y), f1)], default=f2)
Example 2:
Pseudocode:
if (x < y && x > z) raise OpError("Only one predicate may evaluate true");
if (x < y) return 17;
else if (x > z) return 23;
else return -1;
Expressions:
def f1(): return tf.constant(17)
def f2(): return tf.constant(23)
def f3(): return tf.constant(-1)
r = tf.case({tf.less(x, y): f1, tf.greater(x, z): f2},  # case1, case2, case3, ...
            default=f3, exclusive=True)
Args:
pred_fn_pairs: Dict or list of pairs of a boolean scalar tensor and a callable which returns a list of tensors.
default: A callable that returns a list of tensors.
exclusive: True iff at most one predicate is allowed to evaluate to True.
strict: A boolean that enables/disables 'strict' mode; see above.
name: A name for this operation (optional).
Returns:
The tensors returned by the first pair whose predicate evaluated to True, or those returned by default if none does.
Raises:
TypeError: If pred_fn_pairs is not a list/dictionary.
TypeError: If pred_fn_pairs is a list but does not contain 2-tuples.
TypeError: If fns[i] is not callable for any i, or default is not callable.
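To make Example 2 runnable end to end, here is a minimal sketch of my own (not from the official docs); x, y and z are constants I chose purely for illustration:
import tensorflow as tf

x = tf.constant(1)
y = tf.constant(2)
z = tf.constant(3)

def f1(): return tf.constant(17)
def f2(): return tf.constant(23)
def f3(): return tf.constant(-1)

# Only x < y is True here, so r evaluates to 17; with exclusive=True an error
# would be raised at run time if more than one predicate were True.
r = tf.case({tf.less(x, y): f1, tf.greater(x, z): f2},
            default=f3, exclusive=True)

with tf.Session() as sess:
    print(sess.run(r))  # 17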
TensorFlow tf.stack / tf.unstack example
import tensorflow as tf
a = tf.constant([1, 2, 3])
b = tf.constant([4, 5, 6])
c = tf.stack([a, b], axis=1)   # shape (3, 2): a and b become the two columns
d = tf.unstack(c, axis=0)      # list of 3 tensors of shape (2,), the rows of c
e = tf.unstack(c, axis=1)      # list of 2 tensors of shape (3,), the columns of c
print(c.get_shape())
with tf.Session() as sess:
    print(sess.run(c))
    print(sess.run(d))
    print(sess.run(e))
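For reference, with these inputs the run should print roughly the following (my own annotation; the values follow from stacking the two vectors column-wise and then unstacking along each axis):
(3, 2)
[[1 4]
 [2 5]
 [3 6]]
[array([1, 4], dtype=int32), array([2, 5], dtype=int32), array([3, 6], dtype=int32)]
[array([1, 2, 3], dtype=int32), array([4, 5, 6], dtype=int32)]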
With the EPEL repository enabled, Python 3.4 can be installed directly with yum:
yum install python34 -y
python3 --version
pip3 is not bundled, so install it from the official bootstrap script:
wget --no-check-certificate https://bootstrap.pypa.io/get-pip.py
python3 get-pip.py
pip3 -V
TensorFlow GPU installation
http://www.linuxidc.com/Linux/2016-11/137561.htm
http://blog.csdn.net/zhaoyu106/article/details/52793183/
http://blog.csdn.net/liaodong2010/article/details/71482304
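After following the guides above, a quick sanity check from Python can confirm that the GPU build is picked up. This is a sketch of my own, assuming the GPU-enabled TensorFlow package plus matching CUDA/cuDNN are already installed; it uses tf.test.is_gpu_available() and tf.test.gpu_device_name() from the 1.x API:
import tensorflow as tf

# Should print True and a device string such as '/device:GPU:0'
# if the GPU build and the CUDA/cuDNN libraries are found correctly.
print(tf.test.is_gpu_available())
print(tf.test.gpu_device_name())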
Configuring CUDA on Deepin 15.4
Deepin 15.4 is not only good-looking but also runs smoothly, which has attracted a large number of Linux users, including quite a few who work with CUDA. Many of them have run into trouble configuring CUDA on Deepin 15.4, so I am taking the time to write up my configuration method. It is mainly aimed at machines with an Intel integrated GPU plus an NVIDIA discrete GPU that need to run CUDA and also want hot switching between the two GPUs.
First, here is my machine's configuration; everyone's hardware is different and I cannot test every combination.
CPU: Intel Core i5-4210U
GPU: NVIDIA GT 840M
OS: Deepin 15.4 x64
Install nvidia-bumblebee for dual-GPU switching
For laptop users, keeping the discrete GPU on all the time noticeably increases heat and drains the battery faster, so Bumblebee is installed to switch between GPUs: the integrated GPU is enough for everyday use, and the discrete GPU is only enabled when running CUDA or playing games.
Install the CUDA development tools
The CUDA development tools available on Linux are basically sufficient: the Eclipse-based Nsight IDE, the Visual Profiler performance analysis tool, and the PyCUDA library for accelerating Python computations. However, my earlier attempts to install the official .run package on Deepin all ended in failure, and that route can easily break the system. I finally found a way to install CUDA directly from the distribution's package repositories.
Install nvidia-bumblebee
sudo apt update
sudo apt install bumblebee bumblebee-nvidia nvidia-smi
A single command takes care of the NVIDIA driver, the Bumblebee switching program, and the GPU status monitor.
There is no need to deal with the nouveau driver; the system blacklists it automatically.
Then reboot:
sudo reboot
After rebooting, test with
nvidia-smi
and
optirun nvidia-smi
If the usual nvidia-smi status table appears, the driver was installed successfully.
Install the CUDA development tools
First install and configure g++ and gcc.
For version-compatibility reasons, CUDA releases before CUDA 8 only support g++-4.8 and gcc-4.8, so gcc needs to be downgraded:
sudo apt install g++-4.8 gcc-4.8
Then update the symlinks:
cd /usr/bin
sudo rm gcc g++
sudo ln -s g++-4.8 g++
sudo ln -s gcc-4.8 gcc
Then install the development tools:
sudo apt install nvidia-cuda-dev nvidia-cuda-toolkit nvidia-nsight nvidia-visual-profiler
To launch Nsight, run the following in a terminal:
optirun nsight
tf.while_loop()
tf.while_loop(cond, body, loop_vars, shape_invariants=None,
              parallel_iterations=10, back_prop=True, swap_memory=False, name=None)
while_loop can be understood roughly as:
loop_vars = [...]
while cond(*loop_vars):
    loop_vars = body(*loop_vars)
Example:
import tensorflow as tf

a = tf.get_variable("a", dtype=tf.int32, shape=[], initializer=tf.ones_initializer())
b = tf.constant(2)
f = tf.constant(6)

# Definition of condition and body
def cond(a, b, f):
    return a < 3

def body(a, b, f):
    # do some stuff with a, b
    a = a + 1
    return a, b, f

# Loop; returns the tensors a, b, f as they are after the while loop finishes
a, b, f = tf.while_loop(cond, body, [a, b, f])

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    res = sess.run([a, b, f])
    print(res)
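The signature above also lists a shape_invariants argument. As a sketch of my own (not part of the original example): when a loop variable changes shape across iterations, its invariant has to be relaxed explicitly, e.g. to tf.TensorShape([None]) for a 1-D tensor that grows each step.
import tensorflow as tf

i = tf.constant(0)
x = tf.zeros([1], dtype=tf.int32)

def cond(i, x):
    return i < 3

def body(i, x):
    # Append the current i, so x grows by one element per iteration.
    return i + 1, tf.concat([x, tf.expand_dims(i, 0)], axis=0)

# Without shape_invariants, while_loop would complain that x's shape changes.
i, x = tf.while_loop(cond, body, [i, x],
                     shape_invariants=[i.get_shape(), tf.TensorShape([None])])

with tf.Session() as sess:
    print(sess.run([i, x]))  # e.g. [3, array([0, 0, 1, 2], dtype=int32)]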