TensorFlow: From Beginner to Beginner

  1. Simple linear regression

     import tensorflow as tf
     import numpy 
    
     # Create the training data
     x_data = numpy.random.rand(100).astype(numpy.float32)
     y_data = x_data*0.1 + 0.3
    
     print(x_data,y_data)
    
     Weights = tf.Variable(tf.random_uniform([1],-1.0,1))
     biases = tf.Variable(tf.zeros([1]))
     y = Weights*x_data+biases
    
     loss = tf.reduce_mean(tf.square(y-y_data))
    
     optimizer = tf.train.GradientDescentOptimizer(0.5)
     train = optimizer.minimize(loss)
     init = tf.global_variables_initializer()
    
     sess = tf.Session()
     sess.run(init)
     for step in range(201):
         sess.run(train)
         if step%20 == 0:
             print(step,sess.run(Weights),sess.run(biases))
    
  2. Matrix multiplication, and two ways of using Session()

     import tensorflow as tf
    
     # Create two matrices
     matrix1 = tf.constant([[3,3]])
     matrix2 = tf.constant([[2],[2]])
     product = tf.matmul(matrix1,matrix2)
     # Up to this point we have only described the computation; nothing has actually been computed yet
    
     #啟動session并計算的第一種形式
     sess = tf.Session()
     result = sess.run(product)
     print(result)
     sess.close()
    
     #啟動session并計算的第二種方法
     with tf.Session() as sess:
         result = sess.run(product)
         print(result)
    
  3. Defining variables, constants, computation steps, and assignment operations; how a Session preserves state between runs

     # In TensorFlow, variables must be defined and added to the graph under construction. Basic syntax: state = tensorflow.Variable()
    
     import tensorflow as tf
     # Define a variable
     state = tf.Variable(0, name='counter')
     # Define a constant
     one = tf.constant(1)
     # Define a computation step
     new_value = tf.add(state,one)
     # Define an assignment operation
     update = tf.assign(state, new_value)
    
     #定義變量以后初始化變量就是必須的
     init = tf.global_variables_initializer()
    
     # Launch the session
     with tf.Session() as sess:
         sess.run(init)
         for _ in range(3):
             sess.run(update)
             print(sess.run(state))
    
  4. placeholder

     # placeholder: sometimes a quantity should not be baked into the graph as a constant at definition time; we would rather feed it in at computation time. That is what placeholder is for: it reserves a spot when the graph is defined
     import tensorflow as tf
     # In TensorFlow a placeholder's type must be specified, usually float32
     input1 = tf.placeholder(tf.float32)
     input2 = tf.placeholder(tf.float32)
    
     # tf.multiply multiplies input1 and input2 element-wise; the result is output
     output = tf.multiply(input1, input2)
     with tf.Session() as sess:
         print(sess.run(output, feed_dict={input1: [7.], input2: [2.]}))
    
  5. Activation functions. These are the oddly shaped functions the field has settled on so that networks can adapt to a complex, ever-changing real world. The requirements: 1. they must be non-linear, because the problems to fit are non-linear; 2. they must be differentiable, because backpropagation relies on differentiability. A short comparison sketch follows.
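
     A minimal sketch of a few built-in activations, assuming the TensorFlow 1.x API used throughout this post (the sample values are arbitrary):

     import tensorflow as tf

     x = tf.constant([-2.0, -0.5, 0.0, 0.5, 2.0])
     with tf.Session() as sess:
         print(sess.run(tf.nn.relu(x)))     # non-linear: clips negatives to 0
         print(sess.run(tf.nn.sigmoid(x)))  # squashes into (0, 1), smooth and differentiable
         print(sess.run(tf.nn.tanh(x)))     # squashes into (-1, 1)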

  6. A helper function for adding layers

     # Building a neural-network layer
     import tensorflow as tf
    
     # Define the layer-adding helper; newer TensorFlow releases ship built-in layers (tf.layers), so this need not be hand-rolled
     def add_layer(inputs, in_size, out_size, activation_function = None):
         Weights = tf.Variable(tf.random_normal([in_size, out_size]))
         biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
         Wx_plus_b = tf.matmul(inputs, Weights)+biases
         if activation_function is None:
             outputs = Wx_plus_b
         else:
             outputs = activation_function(Wx_plus_b)
         return outputs
    
  7. Data visualization

     # Visualizing results: turning data into plots
     # 1. matplotlib
     import tensorflow as tf
     import numpy as np
     import matplotlib.pyplot as plt
    
     x_data = np.linspace(-1,1,300, dtype=np.float32)[:,np.newaxis]
     noise = np.random.normal(0,0.05,x_data.shape).astype(np.float32)
     y_data = np.square(x_data) - 0.5 +noise
    
     plt.figure(1, figsize=(8, 6))
    
     plt.subplot(111)
     plt.plot(x_data, y_data, c='red', label='y_data')
     plt.ylim((-1, 5))
     plt.legend(loc='best')
    
     plt.show()
    

    Animating the training process

     # Building and training the neural network
     import tensorflow as tf
     import numpy as np
     import matplotlib.pyplot as plt
    
     def add_layer(inputs, in_size, out_size, activation_function=None):
         Weights = tf.Variable(tf.random_normal([in_size, out_size]))
         biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
         Wx_plus_b = tf.matmul(inputs, Weights) + biases
         if activation_function is None:
             outputs = Wx_plus_b
         else:
             outputs = activation_function(Wx_plus_b)
         return outputs
    
     x_data = np.linspace(-1,1,300, dtype=np.float32)[:,np.newaxis]
     noise = np.random.normal(0,0.05,x_data.shape).astype(np.float32)
     y_data = np.square(x_data) - 0.5 +noise
    
     xs = tf.placeholder(tf.float32,[None,1])
     ys = tf.placeholder(tf.float32,[None,1])
    
     # Start building the network
     # Hidden layer: 1 input, 10 outputs, activation tf.nn.relu
     l1 = add_layer(xs,1,10,activation_function=tf.nn.relu)
     # Define the output layer
     prediction = add_layer(l1,10,1,activation_function=None)
     # Loss: squared differences summed, then averaged
     loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys-prediction),reduction_indices=[1]))
     # Learning rate, a value between 0 and 1
     train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
    
     # Initialize the variables
     init = tf.global_variables_initializer()
    
     # Prepare the plot
     fig = plt.figure()
     ax = fig.add_subplot(1,1,1)
     ax.scatter(x_data, y_data)
     plt.ion()
     plt.show()
    
     # Launch the session and start training
     with tf.Session() as sess:
         sess.run(init)
         for i in range(1000):
             sess.run(train_step,feed_dict={xs:x_data,ys:y_data})
             # Print status every 50 steps
             if i%50 == 0 :
                 # to visualize the result and improvement
                 try:
                     ax.lines.remove(lines[0])
                 except Exception:
                     pass
                 prediction_value = sess.run(prediction, feed_dict={xs: x_data})
                 # plot the prediction
                 lines = ax.plot(x_data, prediction_value, 'r-', lw=5)
                 plt.pause(0.1)
    
  8. Speeding up neural-network training. Common schemes include the following; the sketch after the list shows the matching tf.train optimizers:

    1. Stochastic Gradient Descent (SGD)
    2. Momentum
    3. AdaGrad
    4. RMSProp
    5. Adam
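
     All five schemes are available as drop-in optimizer classes in TensorFlow 1.x's tf.train module. A minimal sketch (the toy loss and every learning rate below are illustrative, not tuned values):

     import tensorflow as tf

     # A toy scalar loss so the snippet is self-contained
     w = tf.Variable(3.0)
     loss = tf.square(w - 1.0)

     # One training op per scheme in the list above
     sgd      = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
     momentum = tf.train.MomentumOptimizer(0.1, momentum=0.9).minimize(loss)
     adagrad  = tf.train.AdagradOptimizer(0.1).minimize(loss)
     rmsprop  = tf.train.RMSPropOptimizer(0.1).minimize(loss)
     adam     = tf.train.AdamOptimizer(0.001).minimize(loss)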
  9. Graph visualization (TensorBoard)

     # TensorFlow ships with a graph-visualization tool, TensorBoard, which displays the graph you have defined
     # Building and training the neural network
     import tensorflow as tf
     import numpy as np
     import matplotlib.pyplot as plt
    
     def add_layer(inputs, in_size, out_size, activation_function=None):
         # Put everything inside named scopes
         with tf.name_scope('layer'):
             with tf.name_scope('weights'):
                 Weights = tf.Variable(
                 tf.random_normal([in_size, out_size]), 
                 name='W')
             with tf.name_scope('biases'):
                 biases = tf.Variable(
                 tf.zeros([1, out_size]) + 0.1, 
                 name='b')
             with tf.name_scope('Wx_plus_b'):
                 Wx_plus_b = tf.add(
                 tf.matmul(inputs, Weights), 
                 biases)
             if activation_function is None:
                 outputs = Wx_plus_b
             else:
                 outputs = activation_function(Wx_plus_b, )
             return outputs
    
     x_data = np.linspace(-1,1,300, dtype=np.float32)[:,np.newaxis]
     noise = np.random.normal(0,0.05,x_data.shape).astype(np.float32)
     y_data = np.square(x_data) - 0.5 +noise
    
     # Layer the graph structure: group the two placeholders into one box
     with tf.name_scope('inputs'):
         # Give the placeholders names; earlier versions of this code passed no name argument
         xs= tf.placeholder(tf.float32, [None, 1],name='x_in') 
         ys= tf.placeholder(tf.float32, [None, 1],name='y_in')
    
     # Start building the network
     # Hidden layer: 1 input, 10 outputs, activation tf.nn.relu
     l1 = add_layer(xs,1,10,activation_function=tf.nn.relu)
     # Define the output layer
     prediction = add_layer(l1,10,1,activation_function=None)
    
     with tf.name_scope('loss'):
         # Loss: squared differences summed, then averaged
         loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys-prediction),reduction_indices=[1]))
    
     with tf.name_scope('train'):
         # Learning rate, a value between 0 and 1
         train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
    
     # Initialize the variables
     init = tf.global_variables_initializer()
    
     # Prepare the plot
     fig = plt.figure()
     ax = fig.add_subplot(1,1,1)
     ax.scatter(x_data, y_data)
     plt.ion()
     plt.show()
    
     # Launch the session and start training
     with tf.Session() as sess:
         sess.run(init)
         # Create the logs folder by hand; once the script runs without errors, launch: tensorboard --logdir logs
         writer = tf.summary.FileWriter("logs/", sess.graph)
         for i in range(1000):
             sess.run(train_step,feed_dict={xs:x_data,ys:y_data})
             # Print status every 50 steps
             if i%50 == 0 :
                 # to visualize the result and improvement
                 try:
                     ax.lines.remove(lines[0])
                 except Exception:
                     pass
                 prediction_value = sess.run(prediction, feed_dict={xs: x_data})
                 # plot the prediction
                 lines = ax.plot(x_data, prediction_value, 'r-', lw=5)
                 plt.pause(0.1)
    
  10. Training visualization. Inside the graph, record tensors with tf.summary.histogram(layer_name+'/weights',Weights) and scalars with tf.summary.scalar('loss', loss). After the Session starts, merged = tf.summary.merge_all() bundles all summaries (serving as the initialization); each step is then logged with rs = sess.run(merged,feed_dict={xs:x_data,ys:y_data}) followed by writer.add_summary(rs, i).

    # TensorFlow ships with a graph-visualization tool, TensorBoard, which displays the graph you have defined
    # Building and training the neural network
    import tensorflow as tf
    import numpy as np
    import matplotlib.pyplot as plt
    
    def add_layer(inputs, in_size, out_size,layer_n, activation_function=None):
        # Put everything inside named scopes
        layer_name = 'layer%s'%layer_n
        with tf.name_scope('layer'):
            with tf.name_scope('weights'):
                Weights = tf.Variable(
                tf.random_normal([in_size, out_size]), 
                name='W')
            with tf.name_scope('biases'):
                biases = tf.Variable(
                tf.zeros([1, out_size]) + 0.1, 
                name='b')
            with tf.name_scope('Wx_plus_b'):
                Wx_plus_b = tf.add(
                tf.matmul(inputs, Weights), 
                biases)
            if activation_function is None:
                outputs = Wx_plus_b
            else:
                outputs = activation_function(Wx_plus_b, )
            # Record summary data
            tf.summary.histogram(layer_name+'/weights',Weights)
            tf.summary.histogram(layer_name+'/biase',biases)
            tf.summary.histogram(layer_name+'/outputs',outputs)
            return outputs
    
    x_data = np.linspace(-1,1,300, dtype=np.float32)[:,np.newaxis]
    noise = np.random.normal(0,0.05,x_data.shape).astype(np.float32)
    y_data = np.square(x_data) - 0.5 +noise
    
    # Layer the graph structure: group the two placeholders into one box
    with tf.name_scope('inputs'):
        # Give the placeholders names; earlier versions of this code passed no name argument
        xs= tf.placeholder(tf.float32, [None, 1],name='x_in') 
        ys= tf.placeholder(tf.float32, [None, 1],name='y_in')
    
    # Start building the network
    # Hidden layer: 1 input, 10 outputs, activation tf.nn.relu
    l1 = add_layer(xs,1,10,1,activation_function=tf.nn.relu)
    # Define the output layer
    prediction = add_layer(l1,10,1,2,activation_function=None)
    
    with tf.name_scope('loss'):
        # Loss: squared differences summed, then averaged
        loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys-prediction),reduction_indices=[1]))
        # Record summary data
        tf.summary.scalar('loss', loss)
    
    with tf.name_scope('train'):
        # Learning rate, a value between 0 and 1
        train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
    
    # Initialize the variables
    init = tf.global_variables_initializer()
    
    # Prepare the plot
    fig = plt.figure()
    ax = fig.add_subplot(1,1,1)
    ax.scatter(x_data, y_data)
    plt.ion()
    plt.show()
    
    # Launch the session and start training
    with tf.Session() as sess:
        sess.run(init)
        # Initialize summary collection
        merged = tf.summary.merge_all()
        # Create the logs folder by hand; once the script runs without errors, launch: tensorboard --logdir logs
        writer = tf.summary.FileWriter("logs/", sess.graph)
        for i in range(1000):
            sess.run(train_step,feed_dict={xs:x_data,ys:y_data})
            # Print status every 50 steps
            if i%50 == 0 :
                # Collect summary statistics for the charts
                rs = sess.run(merged,feed_dict={xs:x_data,ys:y_data})
                writer.add_summary(rs, i)
                # to visualize the result and improvement
                try:
                    ax.lines.remove(lines[0])
                except Exception:
                    pass
                prediction_value = sess.run(prediction, feed_dict={xs: x_data})
                # plot the prediction
                lines = ax.plot(x_data, prediction_value, 'r-', lw=5)
                plt.pause(0.1)
    
  11. Classifier: testing a classifier on the MNIST data. The main new points: 1. working with the MNIST dataset; 2. a cross-entropy objective function; 3. training by gradient descent.

    import tensorflow as tf
    
    def add_layer(inputs, in_size, out_size,layer_n, activation_function=None):
        # Put everything inside named scopes
        layer_name = 'layer%s'%layer_n
        with tf.name_scope('layer'):
            with tf.name_scope('weights'):
                Weights = tf.Variable(
                tf.random_normal([in_size, out_size]), 
                name='W')
            with tf.name_scope('biases'):
                biases = tf.Variable(
                tf.zeros([1, out_size]) + 0.1, 
                name='b')
            with tf.name_scope('Wx_plus_b'):
                Wx_plus_b = tf.add(
                tf.matmul(inputs, Weights), 
                biases)
            if activation_function is None:
                outputs = Wx_plus_b
            else:
                outputs = activation_function(Wx_plus_b, )
            # Record summary data
            tf.summary.histogram(layer_name+'/weights',Weights)
            tf.summary.histogram(layer_name+'/biase',biases)
            tf.summary.histogram(layer_name+'/outputs',outputs)
            return outputs
    
    def compute_accuracy(v_xs, v_ys):
        global prediction
        # Run the network on the test images, then compare argmax predictions against the one-hot labels
        y_pre = sess.run(prediction, feed_dict={xs: v_xs})
        correct_prediction = tf.equal(tf.argmax(y_pre,1), tf.argmax(v_ys,1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
        result = sess.run(accuracy, feed_dict={xs: v_xs, ys: v_ys})
        return result
    
    from tensorflow.examples.tutorials.mnist import input_data
    mnist = input_data.read_data_sets('MNIST_data',one_hot = True)
    
    xs = tf.placeholder(tf.float32,[None,784])
    ys = tf.placeholder(tf.float32,[None,10])
    
    prediction = add_layer(xs,784,10,1,activation_function=tf.nn.softmax)
    
    # The loss (the optimization objective) is cross-entropy, which measures how close the predicted and true distributions are; if they are identical, the cross-entropy is zero
    cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys*tf.log(prediction),reduction_indices=[1]))
    # The training method (optimizer) is gradient descent.
    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
    
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for i in range(1000):
            batch_xs,batch_ys = mnist.train.next_batch(100)
            sess.run(train_step,feed_dict={xs:batch_xs,ys:batch_ys})
            if i%50 == 0:
                print(compute_accuracy(mnist.test.images, mnist.test.labels))
    
  12. Overfitting (over-learning). When tackling real-world problems, the data source is uncontrollable: there will always be data that works against the network, coming mainly from measurement error, cultural background, and outside interference. In short, data the network was never designed to handle. Several ways to harden a network against overfitting have been found.

    1. Increase the amount of data; machine learning's results come from data. With a large enough dataset, the odd anomalous point no longer matters, or an opposing point turns up to balance it (low-probability data). This does not improve the network itself.
    2. Regularization.
      1. Modify the loss function so the network receives feedback of varying strength. The original cost is computed as cost = (prediction - truth)^2. If W grows too large, we let the cost grow with it, as a penalty, so we fold W itself into the cost; with an absolute value (abs), this form of regularization is called L1. L2 regularization is the same with the absolute value replaced by a square, and L3, L4, and so on use cubes, fourth powers, etc. These methods keep the learned curve from becoming overly contorted. (From https://morvanzhou.github.io/tutorials/machine-learning/tensorflow/5-02-A-overfitting/; a sketch follows this list.)
      2. Dropout, a regularization method designed specifically for neural networks. Dropout denies the network any chance to over-rely on particular nodes: information lives in the network as a whole rather than in a few critical nodes.
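
    A minimal sketch of the penalty idea on a linear model, assuming the same TF 1.x API as the rest of this post (the 0.01 penalty factor is an arbitrary choice, not a value from the source):

    import tensorflow as tf

    x = tf.placeholder(tf.float32, [None, 1])
    y = tf.placeholder(tf.float32, [None, 1])
    W = tf.Variable(tf.random_normal([1, 1]))
    b = tf.Variable(tf.zeros([1]))
    prediction = tf.matmul(x, W) + b

    mse = tf.reduce_mean(tf.square(prediction - y))
    penalty = 0.01 * tf.reduce_sum(tf.square(W))   # L2 penalty; swap in tf.abs(W) for L1
    loss = mse + penalty                           # large weights now raise the cost
    train = tf.train.GradientDescentOptimizer(0.1).minimize(loss)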
  13. Comparing overfitting with and without dropout. Dropout works well on non-repeating data, but if the data is limited and training runs too long, the benefit rebounds.

    import tensorflow as tf
    import numpy as np
    import matplotlib.pyplot as plt
    
    tf.set_random_seed(1)
    np.random.seed(1)
    
    # Hyper parameters
    N_SAMPLES = 20
    N_HIDDEN = 300
    LR = 0.01
    
    # training data
    x = np.linspace(-1, 1, N_SAMPLES)[:, np.newaxis]
    y = x + 0.3*np.random.randn(N_SAMPLES)[:, np.newaxis]
    
    # test data
    test_x = x.copy()
    test_y = test_x + 0.3*np.random.randn(N_SAMPLES)[:, np.newaxis]
    
    # show data
    plt.scatter(x, y, c='magenta', s=50, alpha=0.5, label='train')
    plt.scatter(test_x, test_y, c='cyan', s=50, alpha=0.5, label='test')
    plt.legend(loc='upper left')
    plt.ylim((-2.5, 2.5))
    plt.show()
    
    # tf placeholders
    tf_x = tf.placeholder(tf.float32, [None, 1])
    tf_y = tf.placeholder(tf.float32, [None, 1])
    tf_is_training = tf.placeholder(tf.bool, None)  # to control dropout when training and testing
    
    # overfitting net
    o1 = tf.layers.dense(tf_x, N_HIDDEN, tf.nn.relu)
    o2 = tf.layers.dense(o1, N_HIDDEN, tf.nn.relu)
    o_out = tf.layers.dense(o2, 1)
    o_loss = tf.losses.mean_squared_error(tf_y, o_out)
    o_train = tf.train.AdamOptimizer(LR).minimize(o_loss)
    
    # dropout net
    d1 = tf.layers.dense(tf_x, N_HIDDEN, tf.nn.relu)
    d1 = tf.layers.dropout(d1, rate=0.5, training=tf_is_training)   # drop out 50% of inputs
    d2 = tf.layers.dense(d1, N_HIDDEN, tf.nn.relu)
    d2 = tf.layers.dropout(d2, rate=0.5, training=tf_is_training)   # drop out 50% of inputs
    d_out = tf.layers.dense(d2, 1)
    d_loss = tf.losses.mean_squared_error(tf_y, d_out)
    d_train = tf.train.AdamOptimizer(LR).minimize(d_loss)
    
    sess = tf.Session()
    sess.run(tf.global_variables_initializer())
    
    plt.ion()   # something about plotting
    
    for t in range(5000):
        sess.run([o_train, d_train], {tf_x: x, tf_y: y, tf_is_training: True})  # train, set is_training=True
    
        if t % 50 == 0:
            # plotting
            plt.cla()
            o_loss_, d_loss_, o_out_, d_out_ = sess.run(
                [o_loss, d_loss, o_out, d_out], {tf_x: test_x, tf_y: test_y, tf_is_training: False} # test, set is_training=False
            )
            plt.scatter(x, y, c='magenta', s=50, alpha=0.3, label='train'); 
            plt.scatter(test_x, test_y, c='cyan', s=50, alpha=0.3, label='test')
            plt.plot(test_x, o_out_, 'r-', lw=3, label='overfitting'); 
            plt.plot(test_x, d_out_, 'b--', lw=3, label='dropout(50%)')
            plt.text(0, -1.2, 'overfitting loss=%.4f' % o_loss_, fontdict={'size': 20, 'color':  'red'}); 
            plt.text(0, -1.5, 'dropout loss=%.4f' % d_loss_, fontdict={'size': 20, 'color': 'blue'})
            plt.legend(loc='upper left'); 
            plt.ylim((-2.5, 2.5)); 
            plt.pause(0.1)
    
    plt.ioff()
    plt.show()
    
  14. Convolutional neural networks. Very compute-hungry; a PC already feels slow here. Reference: https://morvanzhou.github.io/tutorials/machine-learning/tensorflow/5-03-A-CNN/

    import tensorflow as tf
    from tensorflow.examples.tutorials.mnist import input_data
    import numpy as np
    import matplotlib.pyplot as plt
    
    tf.set_random_seed(1)
    np.random.seed(1)
    
    BATCH_SIZE = 50
    LR = 0.001              # learning rate
    
    mnist = input_data.read_data_sets('./mnist', one_hot=True)  # they have been normalized to range (0,1)
    test_x = mnist.test.images[:2000]
    test_y = mnist.test.labels[:2000]
    
    # plot one example
    print(mnist.train.images.shape)     # (55000, 28 * 28)
    print(mnist.train.labels.shape)   # (55000, 10)
    plt.imshow(mnist.train.images[0].reshape((28, 28)), cmap='gray')
    plt.title('%i' % np.argmax(mnist.train.labels[0])); plt.show()
    
    tf_x = tf.placeholder(tf.float32, [None, 28*28]) / 255.
    image = tf.reshape(tf_x, [-1, 28, 28, 1])              # (batch, height, width, channel)
    tf_y = tf.placeholder(tf.int32, [None, 10])            # input y
    
    # CNN
    conv1 = tf.layers.conv2d(   # shape (28, 28, 1)
        inputs=image,
        filters=16,
        kernel_size=5,
        strides=1,
        padding='same',
        activation=tf.nn.relu
    )           # -> (28, 28, 16)
    pool1 = tf.layers.max_pooling2d(
        conv1,
        pool_size=2,
        strides=2,
    )           # -> (14, 14, 16)
    conv2 = tf.layers.conv2d(pool1, 32, 5, 1, 'same', activation=tf.nn.relu)    # -> (14, 14, 32)
    pool2 = tf.layers.max_pooling2d(conv2, 2, 2)    # -> (7, 7, 32)
    flat = tf.reshape(pool2, [-1, 7*7*32])          # -> (7*7*32, )
    output = tf.layers.dense(flat, 10)              # output layer
    
    loss = tf.losses.softmax_cross_entropy(onehot_labels=tf_y, logits=output)           # compute cost
    train_op = tf.train.AdamOptimizer(LR).minimize(loss)
    
    accuracy = tf.metrics.accuracy(          # return (acc, update_op), and create 2 local variables
        labels=tf.argmax(tf_y, axis=1), predictions=tf.argmax(output, axis=1),)[1]
    
    sess = tf.Session()
    init_op = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer()) # the local var is for accuracy_op
    sess.run(init_op)     # initialize var in graph
    
    # following function (plot_with_labels) is for visualization, can be ignored if not interested
    from matplotlib import cm
    try: from sklearn.manifold import TSNE; HAS_SK = True
    except: HAS_SK = False; print('\nPlease install sklearn for layer visualization\n')
    def plot_with_labels(lowDWeights, labels):
        plt.cla(); X, Y = lowDWeights[:, 0], lowDWeights[:, 1]
        for x, y, s in zip(X, Y, labels):
            c = cm.rainbow(int(255 * s / 9)); plt.text(x, y, s, backgroundcolor=c, fontsize=9)
        plt.xlim(X.min(), X.max()); plt.ylim(Y.min(), Y.max()); plt.title('Visualize last layer'); plt.show(); plt.pause(0.01)
    
    plt.ion()
    for step in range(600):
        b_x, b_y = mnist.train.next_batch(BATCH_SIZE)
        _, loss_ = sess.run([train_op, loss], {tf_x: b_x, tf_y: b_y})
        if step % 50 == 0:
            accuracy_, flat_representation = sess.run([accuracy, flat], {tf_x: test_x, tf_y: test_y})
            print('Step:', step, '| train loss: %.4f' % loss_, '| test accuracy: %.2f' % accuracy_)
    
            if HAS_SK:
                # Visualization of trained flatten layer (T-SNE)
                tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000); plot_only = 500
                low_dim_embs = tsne.fit_transform(flat_representation[:plot_only, :])
                labels = np.argmax(test_y, axis=1)[:plot_only]; plot_with_labels(low_dim_embs, labels)
    plt.ioff()
    
    # print 10 predictions from test data
    test_output = sess.run(output, {tf_x: test_x[:10]})
    pred_y = np.argmax(test_output, 1)
    print(pred_y, 'prediction number')
    print(np.argmax(test_y[:10], 1), 'real number')
    
  15. Saving and restoring a neural network.
    1. Saving: at heart, this saves the session.

    import tensorflow as tf
    import numpy as np
    
    ## Save to file
    # remember to define the same dtype and shape when restore
    W = tf.Variable([[1,2,3],[3,4,5]], dtype=tf.float32, name='weights')
    b = tf.Variable([[1,2,3]], dtype=tf.float32, name='biases')
    
    # Replaced with the newer form below:
    init = tf.global_variables_initializer()
    
    saver = tf.train.Saver()
    
    with tf.Session() as sess:
        sess.run(init)
        save_path = saver.save(sess, "my_net/save_net.ckpt")
        print("Save to path: ", save_path)
    
    2. Restoring: recovering the session

    import tensorflow as tf
    import numpy as np
    
    # First create containers with matching shapes for W and b
    W = tf.Variable(np.arange(6).reshape((2, 3)), dtype=tf.float32, name="weights")
    b = tf.Variable(np.arange(3).reshape((1, 3)), dtype=tf.float32, name="biases")
    
    # No initialization step init = tf.initialize_all_variables() is needed here
    
    saver = tf.train.Saver()
    with tf.Session() as sess:
        # Restore the variables
        saver.restore(sess, "my_net/save_net.ckpt")
        print("weights:", sess.run(W))
        print("biases:", sess.run(b))
    
  16. Recurrent neural networks. Reference: https://morvanzhou.github.io/tutorials/machine-learning/tensorflow/5-07-A-RNN/ (a minimal sketch follows).
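
    Since the post stops at the reference link, here is a minimal LSTM-classifier sketch in the same TensorFlow 1.x style used above; the 64-unit cell and the learning rate are arbitrary illustrative choices. Each 28x28 MNIST image is read as a sequence of 28 rows:

    import tensorflow as tf

    tf_x = tf.placeholder(tf.float32, [None, 28, 28])   # (batch, time steps, inputs per step)
    tf_y = tf.placeholder(tf.int32, [None, 10])         # one-hot labels

    # An LSTM cell unrolled across the 28 rows of each image
    rnn_cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=64)
    outputs, _ = tf.nn.dynamic_rnn(rnn_cell, tf_x, dtype=tf.float32)

    # Classify from the output of the last time step
    logits = tf.layers.dense(outputs[:, -1, :], 10)
    loss = tf.losses.softmax_cross_entropy(onehot_labels=tf_y, logits=logits)
    train_op = tf.train.AdamOptimizer(0.001).minimize(loss)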

最后編輯于
?著作權歸作者所有,轉載或內容合作請聯(lián)系作者
  • 序言:七十年代末,一起剝皮案震驚了整個濱河市沟优,隨后出現(xiàn)的幾起案子涕滋,更是在濱河造成了極大的恐慌,老刑警劉巖净神,帶你破解...
    沈念sama閱讀 218,682評論 6 507
  • 序言:濱河連續(xù)發(fā)生了三起死亡事件何吝,死亡現(xiàn)場離奇詭異,居然都是意外死亡鹃唯,警方通過查閱死者的電腦和手機爱榕,發(fā)現(xiàn)死者居然都...
    沈念sama閱讀 93,277評論 3 395
  • 文/潘曉璐 我一進店門,熙熙樓的掌柜王于貴愁眉苦臉地迎上來坡慌,“玉大人黔酥,你說我怎么就攤上這事『殚伲” “怎么了跪者?”我有些...
    開封第一講書人閱讀 165,083評論 0 355
  • 文/不壞的土叔 我叫張陵,是天一觀的道長熄求。 經常有香客問我渣玲,道長,這世上最難降的妖魔是什么弟晚? 我笑而不...
    開封第一講書人閱讀 58,763評論 1 295
  • 正文 為了忘掉前任忘衍,我火速辦了婚禮,結果婚禮上卿城,老公的妹妹穿的比我還像新娘枚钓。我一直安慰自己,他們只是感情好瑟押,可當我...
    茶點故事閱讀 67,785評論 6 392
  • 文/花漫 我一把揭開白布搀捷。 她就那樣靜靜地躺著,像睡著了一般多望。 火紅的嫁衣襯著肌膚如雪嫩舟。 梳的紋絲不亂的頭發(fā)上,一...
    開封第一講書人閱讀 51,624評論 1 305
  • 那天便斥,我揣著相機與錄音至壤,去河邊找鬼。 笑死枢纠,一個胖子當著我的面吹牛像街,可吹牛的內容都是我干的。 我是一名探鬼主播晋渺,決...
    沈念sama閱讀 40,358評論 3 418
  • 文/蒼蘭香墨 我猛地睜開眼镰绎,長吁一口氣:“原來是場噩夢啊……” “哼!你這毒婦竟也來了木西?” 一聲冷哼從身側響起畴栖,我...
    開封第一講書人閱讀 39,261評論 0 276
  • 序言:老撾萬榮一對情侶失蹤,失蹤者是張志新(化名)和其女友劉穎八千,沒想到半個月后吗讶,有當?shù)厝嗽跇淞掷锇l(fā)現(xiàn)了一具尸體燎猛,經...
    沈念sama閱讀 45,722評論 1 315
  • 正文 獨居荒郊野嶺守林人離奇死亡,尸身上長有42處帶血的膿包…… 初始之章·張勛 以下內容為張勛視角 年9月15日...
    茶點故事閱讀 37,900評論 3 336
  • 正文 我和宋清朗相戀三年照皆,在試婚紗的時候發(fā)現(xiàn)自己被綠了重绷。 大學時的朋友給我發(fā)了我未婚夫和他白月光在一起吃飯的照片。...
    茶點故事閱讀 40,030評論 1 350
  • 序言:一個原本活蹦亂跳的男人離奇死亡膜毁,死狀恐怖昭卓,靈堂內的尸體忽然破棺而出,到底是詐尸還是另有隱情瘟滨,我是刑警寧澤候醒,帶...
    沈念sama閱讀 35,737評論 5 346
  • 正文 年R本政府宣布,位于F島的核電站杂瘸,受9級特大地震影響倒淫,放射性物質發(fā)生泄漏。R本人自食惡果不足惜败玉,卻給世界環(huán)境...
    茶點故事閱讀 41,360評論 3 330
  • 文/蒙蒙 一昌简、第九天 我趴在偏房一處隱蔽的房頂上張望。 院中可真熱鬧绒怨,春花似錦纯赎、人聲如沸。這莊子的主人今日做“春日...
    開封第一講書人閱讀 31,941評論 0 22
  • 文/蒼蘭香墨 我抬頭看了看天上的太陽。三九已至六剥,卻和暖如春晚顷,著一層夾襖步出監(jiān)牢的瞬間,已是汗流浹背疗疟。 一陣腳步聲響...
    開封第一講書人閱讀 33,057評論 1 270
  • 我被黑心中介騙來泰國打工该默, 沒想到剛下飛機就差點兒被人妖公主榨干…… 1. 我叫王不留,地道東北人策彤。 一個月前我還...
    沈念sama閱讀 48,237評論 3 371
  • 正文 我出身青樓栓袖,卻偏偏與公主長得像,于是被迫代替她去往敵國和親店诗。 傳聞我的和親對象是個殘疾皇子裹刮,可洞房花燭夜當晚...
    茶點故事閱讀 44,976評論 2 355

推薦閱讀更多精彩內容