MNIST dataset test demo
Based on the mnist demo from the official TensorFlow tutorials.
Format of the MNIST dataset:
Each image is a 28*28 matrix, where 1 means the pixel is black and 0 means it is white.
Next, load the dataset:
import tensorflow as tf
import tensorflow.examples.tutorials.mnist.input_data as input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
Set up the placeholders:
Since 28*28 = 784, each example fed into the network is a vector of 784 values.
x = tf.placeholder(tf.float32, [None, 784])               # placeholder for the input images
y_actual = tf.placeholder(tf.float32, shape=[None, 10])   # placeholder for the input labels
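A quick look at what read_data_sets returns, and how a training batch lines up with the placeholders (shapes assume the standard 55000/5000/10000 split; run this after loading mnist above):
print(mnist.train.images.shape)    # (55000, 784): each image flattened into 784 floats in [0, 1]
print(mnist.train.labels.shape)    # (55000, 10): one_hot=True turns each digit into a 10-dim one-hot vector
batch_xs, batch_ys = mnist.train.next_batch(50)
print(batch_xs.shape)              # (50, 784)  -- matches the x placeholder
print(batch_ys.shape)              # (50, 10)   -- matches the y_actual placeholder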
Weight and bias initialization functions
Weights are initialized with truncated_normal using a standard deviation (stddev) of 0.1,
and biases are initialized to the constant 0.1:
'''Weight initialization function'''
def weight_variable(shape):
    inital = tf.truncated_normal(shape, stddev=0.1)  # initialize from a truncated normal distribution
    return tf.Variable(inital)

'''Bias initialization function'''
def bias_variable(shape):
    inital = tf.constant(0.1, shape=shape)  # initialize the bias to a constant 0.1
    return tf.Variable(inital)
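truncated_normal re-draws any sample that falls more than two standard deviations from the mean, so with stddev=0.1 every initial weight stays inside (-0.2, 0.2). A small standalone check, reusing weight_variable from above (numpy assumed available):
import numpy as np
w_check = weight_variable([5, 5, 1, 32])
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(np.abs(sess.run(w_check)).max())   # always below 0.2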
The convolution function
The 1s in strides[0] and strides[3] are the defaults (batch and channel dimensions); the two middle 1s mean the filter slides 1 step in the x direction and 1 step in the y direction.
padding='SAME' means the convolution output has the same height and width as the input image.
# Helper function that builds a convolution layer
def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
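With stride 1 and padding='SAME', the output height and width equal ceil(input / stride), so a 28x28 input stays 28x28 and only the channel count changes. A quick static-shape check with dummy tensors (the names here are just for illustration):
dummy_img = tf.zeros([1, 28, 28, 1])    # one 28x28 single-channel image
dummy_w = tf.zeros([5, 5, 1, 32])       # 5x5 filters, 1 input channel, 32 output channels
print(conv2d(dummy_img, dummy_w).get_shape())   # (1, 28, 28, 32): spatial size unchanged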
Define the pooling function:
def max_pool(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
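ksize=[1, 2, 2, 1] with stride 2 is 2x2 max pooling, which halves the height and width each time it is applied: 28 → 14 → 7 over the two pooling layers used below. Another quick shape check:
dummy = tf.zeros([1, 28, 28, 32])
pooled_once = max_pool(dummy)
pooled_twice = max_pool(pooled_once)
print(pooled_once.get_shape())     # (1, 14, 14, 32)
print(pooled_twice.get_shape())    # (1, 7, 7, 32)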
Build the network:
# Build the network
x_image = tf.reshape(x, [-1, 28, 28, 1])                       # reshape the flat input into 28x28x1 images for the network
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)       # first convolution layer
h_pool1 = max_pool(h_conv1)                                    # first pooling layer
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)       # second convolution layer
h_pool2 = max_pool(h_conv2)                                    # second pooling layer
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])               # flatten into a vector
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)     # first fully connected layer
keep_prob = tf.placeholder("float")
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)                   # dropout layer
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_predict = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)   # softmax output layer
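The 7 * 7 * 64 in W_fc1 follows from the pooling arithmetic: two 2x2 poolings shrink 28x28 to 7x7, and the second convolution layer outputs 64 channels. As a sanity check, you can print the static shapes of the tensors defined above:
print(h_pool1.get_shape())     # (?, 14, 14, 32)
print(h_pool2.get_shape())     # (?, 7, 7, 64)
print(h_fc1.get_shape())       # (?, 1024)
print(y_predict.get_shape())   # (?, 10)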
Start training:
cross_entropy = -tf.reduce_sum(y_actual*tf.log(y_predict))                      # cross-entropy loss
train_step = tf.train.GradientDescentOptimizer(1e-3).minimize(cross_entropy)   # gradient descent
correct_prediction = tf.equal(tf.argmax(y_predict,1), tf.argmax(y_actual,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))                 # accuracy
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
for i in range(20000):
  batch = mnist.train.next_batch(50)
  if i % 100 == 0:                  # evaluate training accuracy every 100 steps
    train_acc = accuracy.eval(feed_dict={x: batch[0], y_actual: batch[1], keep_prob: 1.0})
    print('step %d, training accuracy %g' % (i, train_acc))
  train_step.run(feed_dict={x: batch[0], y_actual: batch[1], keep_prob: 0.5})   # train on every batch, with dropout active
test_acc=accuracy.eval(feed_dict={x: mnist.test.images, y_actual: mnist.test.labels, keep_prob: 1.0})
print("test accuracy",test_acc)
Results:
The accuracy turns out to be reasonably good.