The previous two posts, "tensorflow入門應(yīng)用方法(二)——全連接深度網(wǎng)絡(luò)搭建" (building a fully connected deep network) and "tensorflow入門應(yīng)用方法——線性回歸和邏輯回歸" (linear and logistic regression), described how to use TensorFlow to build a fully connected deep model with two hidden layers and single-layer linear and logistic regression models. This post describes how to build and train a convolutional neural network in TensorFlow to recognize MNIST handwritten digits.
Building the convolutional network
The convolutional model built in this post has three hidden layers: two convolutional layers (conv_layers) and one fully connected layer (full_conn_layers). Together with the final output layer, the network has four layers in total.
Network architecture
As shown in the architecture diagram, the four layers are: a convolutional layer with 64 feature maps + a convolutional layer with 128 feature maps + a fully connected layer with 1024 units + the output layer. The output layer has 10 units, one for each of the handwritten digits 0~9.
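To make the tensor shapes concrete, the feature maps flow through the network roughly as follows (sizes follow from the 28×28 MNIST input and the weights defined below):

- input: 28×28×1
- conv1 + 2×2 max pool: 28×28×64 → 14×14×64
- conv2 + 2×2 max pool: 14×14×128 → 7×7×128
- fully connected: 7×7×128 = 6272 → 1024
- output: 1024 → 10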
As the diagram shows, each convolutional layer is followed by a max pooling layer. This downsampling: 1. keeps the extracted features approximately invariant, so small translations or rotations of the image have little effect; 2. reduces the number of parameters and the amount of computation, which helps prevent overfitting and improves the model's ability to generalize.
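As a minimal, self-contained sketch of what max pooling does (a toy 4×4 input, not part of the model code), a 2×2 pool with stride 2 keeps only the largest value in each 2×2 block and halves the spatial size:

import tensorflow as tf

toy = tf.constant([[1., 2., 3., 4.],
                   [5., 6., 7., 8.],
                   [9., 10., 11., 12.],
                   [13., 14., 15., 16.]])
toy = tf.reshape(toy, [1, 4, 4, 1])  # [batch, height, width, channels]
pooled = tf.nn.max_pool(toy, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
with tf.Session() as sess:
    print(sess.run(tf.squeeze(pooled)))  # [[ 6.  8.] [14. 16.]]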
Code implementation
First, load the MNIST data.
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('data/', one_hot=True)
trainimg = mnist.train.images
trainlabel = mnist.train.labels
testimg = mnist.test.images
testlabel = mnist.test.labels
print('MNIST loaded')
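As a quick sanity check (the shapes below assume the standard MNIST split of 55,000 training and 10,000 test examples):

print(trainimg.shape)    # (55000, 784) -- each image is a flattened 28*28 vector
print(trainlabel.shape)  # (55000, 10)  -- one-hot labels
print(testimg.shape)     # (10000, 784)
print(testlabel.shape)   # (10000, 10)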
Next, initialize the parameters.
n_input = 784   # flattened 28*28 MNIST image
n_output = 10   # digit classes 0-9
# inputs and outputs
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_output])
keepratio = tf.placeholder(tf.float32)
Weights = {
    'wc1': tf.Variable(tf.random_normal([3, 3, 1, 64], stddev=0.1)),
    # 3, 3, 1, 64 -- f_height, f_width, input_channel, output_channel
    'wc2': tf.Variable(tf.random_normal([3, 3, 64, 128], stddev=0.1)),
    'wd1': tf.Variable(tf.random_normal([7*7*128, 1024], stddev=0.1)),
    # two 2x2 max pools shrink 28x28 down to 7x7; the conv output size formula is
    # (input_(h,w) - f_(h,w) + 2*padding)/stride + 1, so with 'SAME' padding the
    # convolutions themselves do not change the feature map size
    'wd2': tf.Variable(tf.random_normal([1024, n_output], stddev=0.1))
}
biases = {
    'bc1': tf.Variable(tf.random_normal([64], stddev=0.1)),
    'bc2': tf.Variable(tf.random_normal([128], stddev=0.1)),
    'bd1': tf.Variable(tf.random_normal([1024], stddev=0.1)),
    'bd2': tf.Variable(tf.random_normal([n_output], stddev=0.1))
}
When initializing the network weights, getting the parameter dimensions right matters. In the convolutional layer wc1, for example, [3, 3, 1, 64] stands for the filter height, filter width, number of input channels, and number of output channels, so each convolution kernel is a four-dimensional tensor.
When going from the convolutional layers to the fully connected layer, the parameter dimensions also have to be computed carefully, working forward from the dimensions of the network's input. For the fully connected layer wd1 the first dimension is 7×7×128: the original input is 28×28, two 2×2 max pooling operations reduce it to 28/2/2 = 7, hence 7×7, and 128 is the number of output channels of the previous layer, giving 7×7×128 in total.
In addition, the convolution itself can change the feature map size, according to layer_(n+1)_(h,w) = (layer_n_(h,w) - f_(h,w) + 2*padding)/stride + 1, where layer_(n+1)_(h,w) is the height/width of the layer n+1 feature map, layer_n_(h,w) is the height/width of the layer n feature map, f_(h,w) is the filter height/width, padding is the padding size, and stride is the step by which the filter moves. Here layer_(n+1)_(h,w) = (28 - 3 + 2*1)/1 + 1 = 28, so each convolution leaves the feature map size unchanged.
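A quick numerical check of the formula (conv_out_size is just an illustrative helper, not part of the model code):

def conv_out_size(in_size, f_size, padding, stride):
    # (input - filter + 2*padding) / stride + 1
    return (in_size - f_size + 2 * padding) // stride + 1

print(conv_out_size(28, 3, 1, 1))  # 28: a 3x3 'SAME' convolution keeps the 28x28 size
print(28 // 2 // 2)                # 7: two 2x2 max pools shrink 28 -> 14 -> 7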
Next, define the convolutional layers and the rest of the network.
# define the convolutional network
def conv_basic(_input, _w, _b, _keepratio):
    # input
    _input_r = tf.reshape(_input, shape=[-1, 28, 28, 1])
    # shape = [batch_size, input_height, input_width, input_channel]
    # conv layer 1
    _conv1 = tf.nn.conv2d(_input_r, _w['wc1'], strides=[1, 1, 1, 1], padding='SAME')
    # strides = [batch_stride, height_stride, width_stride, channel_stride]
    _conv1 = tf.nn.relu(tf.nn.bias_add(_conv1, _b['bc1']))
    _pool1 = tf.nn.max_pool(_conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    # padding='SAME' pads with zeros; padding='VALID' does not pad and drops the trailing rows/columns
    _pool_dr1 = tf.nn.dropout(_pool1, _keepratio)
    # conv layer 2
    _conv2 = tf.nn.conv2d(_pool_dr1, _w['wc2'], strides=[1, 1, 1, 1], padding='SAME')
    _conv2 = tf.nn.relu(tf.nn.bias_add(_conv2, _b['bc2']))
    _pool2 = tf.nn.max_pool(_conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    # ksize = [batch, pool_height, pool_width, channels] (the pooling window size)
    _pool_dr2 = tf.nn.dropout(_pool2, _keepratio)
    # vectorize
    _dense1 = tf.reshape(_pool_dr2, [-1, _w['wd1'].get_shape().as_list()[0]])  # flatten to [batch, 7*7*128]
    # fully connected layer
    _fc1 = tf.nn.relu(tf.add(tf.matmul(_dense1, _w['wd1']), _b['bd1']))
    _fc_dr1 = tf.nn.dropout(_fc1, _keepratio)
    _out = tf.add(tf.matmul(_fc_dr1, _w['wd2']), _b['bd2'])
    # result
    out = {
        'input_r': _input_r, 'conv1': _conv1, 'pool1': _pool1, 'pool_dr1': _pool_dr1,
        'conv2': _conv2, 'pool2': _pool2, 'pool_dr2': _pool_dr2, 'dense1': _dense1,
        'fc1': _fc1, 'fc_dr1': _fc_dr1, 'out': _out
    }
    return out

print('CNN ready')
Each convolutional block consists of the convolution itself, pooling, and dropout. During training, dropout randomly zeroes a fraction (1 - keepratio) of the activations, which discourages co-adaptation and gives the learned weights better generalization.
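A minimal sketch of tf.nn.dropout behaviour (a toy tensor, not part of the model code): with keep_prob = 0.5 roughly half of the values are zeroed and the survivors are scaled by 1/keep_prob, so the expected magnitude is preserved.

acts = tf.ones([1, 8])
dropped = tf.nn.dropout(acts, keep_prob=0.5)
with tf.Session() as sess:
    print(sess.run(dropped))  # e.g. [[2. 0. 2. 2. 0. 0. 2. 2.]] -- the zero pattern is random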
Defining the network structure
# build the graph
_pred = conv_basic(x, Weights, biases, keepratio)['out']
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=_pred, labels=y))
optm = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)
_corr = tf.equal(tf.argmax(_pred, 1), tf.argmax(y, 1))
accr = tf.reduce_mean(tf.cast(_corr, tf.float32))
init = tf.global_variables_initializer()
print('graph ready')
As in the previous posts, the graph consists of:
- the forward computation
- the loss computation
- the gradient optimization step
- the model accuracy computation (see the small sketch after this list)
- the operation that initializes all variables
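A small sketch of how the accuracy op behaves on made-up logits (the values are chosen only for illustration):

logits = tf.constant([[0.1, 2.0, 0.3],
                      [1.5, 0.2, 0.1]])           # 2 samples, 3 classes
labels = tf.constant([[0., 1., 0.],
                      [0., 0., 1.]])              # one-hot ground truth
corr = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))
acc = tf.reduce_mean(tf.cast(corr, tf.float32))
with tf.Session() as sess:
    print(sess.run(acc))  # 0.5 -- the first prediction is right, the second is wrong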
Finally, run the training loop.
# training loop
training_epochs = 30
batch_size = 20
display_step = 5
sess = tf.Session()
sess.run(init)
for epoch in range(training_epochs):
    avg_cost = 0
    # total_batch = int(mnist.train.num_examples/batch_size)
    total_batch = 5
    for i in range(total_batch):
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)
        feeds = {x: batch_xs, y: batch_ys, keepratio: 0.5}
        sess.run(optm, feed_dict=feeds)
        avg_cost += sess.run(cost, feed_dict=feeds) / total_batch
    # display progress
    if (epoch+1) % display_step == 0:
        print('Epoch: %03d/%03d cost: %.9f' % (epoch+1, training_epochs, avg_cost))
        feeds = {x: batch_xs, y: batch_ys, keepratio: 1.0}
        train_acc = sess.run(accr, feed_dict=feeds)
        print('Train accuracy: %.3f' % train_acc)
        # feeds = {x: mnist.test.images, y: mnist.test.labels, keepratio: 1.0}
        # test_acc = sess.run(accr, feed_dict=feeds)
        # print('Test accuracy: %.3f' % test_acc)
print('optimization finished')
In the training loop, the dropout keep ratio keepratio is set to 0.5.
Results
Because my laptop did not have enough memory, the total_batch variable in the code above was manually set to 5. Below is the output of training for 30 epochs.
Epoch: 005/030 cost: 2.127988672
Train accuracy: 0.450
Epoch: 010/030 cost: 1.514546847
Train accuracy: 0.600
Epoch: 015/030 cost: 1.691785026
Train accuracy: 0.600
Epoch: 020/030 cost: 1.580975151
Train accuracy: 0.750
Epoch: 025/030 cost: 1.391473675
Train accuracy: 0.800
Epoch: 030/030 cost: 1.286683488
Train accuracy: 0.750
optimization finished
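Given the memory limitation mentioned above, one way to still get a test-set number is to evaluate accuracy in mini-batches instead of feeding the whole test set at once. A hedged sketch, reusing the session, graph and keepratio defined earlier (the batching loop here is written only for illustration, not part of the original code):

eval_batch = 100
n_batches = int(mnist.test.num_examples / eval_batch)
acc_sum = 0.0
for _ in range(n_batches):
    bx, by = mnist.test.next_batch(eval_batch)
    acc_sum += sess.run(accr, feed_dict={x: bx, y: by, keepratio: 1.0})
print('Test accuracy: %.3f' % (acc_sum / n_batches))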