This post is based on a digit-gesture recognition app I built. It walks through, step by step, how to train a convolutional neural network (CNN) model and then integrate that model into Android Studio to build a digit-gesture recognition app. The project's full source is open-sourced on GitHub: Chinese-number-gestures-recognition (stars welcome, haha). First, what the app does: it recognizes the 11 hand gestures for the numbers 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, and 10.
I. Collecting the Dataset
Training a model on this small a set of photos would be a pipe dream, so the only option was the data augmentation workhorse: through rotation, translation, stretching, and similar transforms, each photo generates 100 new images, bringing the total to 21,500. Here is the data augmentation code:
from keras.preprocessing.image import ImageDataGenerator, img_to_array, load_img
import os

datagen = ImageDataGenerator(
    rotation_range=20,
    width_shift_range=0.15,
    height_shift_range=0.15,
    zoom_range=0.15,
    shear_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')

dirs = os.listdir("picture")
print(len(dirs))
for filename in dirs:
    img = load_img("picture//{}".format(filename))
    x = img_to_array(img)
    x = x.reshape((1,) + x.shape)  # datagen.flow expects a rank-4 array
    datagen.fit(x)
    prefix = filename.split('.')[0]
    print(prefix)
    counter = 0
    for batch in datagen.flow(x, batch_size=4, save_to_dir='generater_pic', save_prefix=prefix, save_format='jpg'):
        counter += 1
        if counter > 100:
            break  # without this, the generator loops forever
二倒脓、數(shù)據(jù)集的處理
1. Resizing the images
Next, these 21,500 photos need processing. First, every photo is resized to 64×64 pixels, for the following reasons:
- Photos taken by different phones come in different sizes and need to be unified.
- High-resolution photos from phones are too large; GPU memory is limited, so they must be shrunk to cut the size.
- Photos captured by the app through the camera vary across device models and need to be unified.
Resizing can't just crudely shrink the image, or it will be badly distorted, so a proper resampling algorithm is needed. TensorFlow already provides four: bilinear interpolation, nearest-neighbor interpolation, bicubic interpolation, and area interpolation. I used area interpolation. The code:
# Shrink each image to 64x64 (TensorFlow 1.x)
import os
import scipy.misc
import tensorflow as tf

def resize_img():
    dirs = os.listdir("split_pic//6")
    for filename in dirs:
        im = tf.gfile.FastGFile("split_pic//6//{}".format(filename), 'rb').read()
        with tf.Session() as sess:
            img_data = tf.image.decode_jpeg(im)
            image_float = tf.image.convert_image_dtype(img_data, tf.float32)
            resized = tf.image.resize_images(image_float, [64, 64], method=3)  # method=3: area interpolation
            resized_im = resized.eval()
            scipy.misc.imsave("resized_img6//{}".format(filename), resized_im)
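A quick note on method=3 in the call above: in the TensorFlow 1.x API it selects the area method. Passing the enum member instead of the bare index makes that explicit; this line is a drop-in replacement for the resize call above:

# Equivalent to method=3, but self-documenting (TF 1.x):
resized = tf.image.resize_images(image_float, [64, 64],
                                 method=tf.image.ResizeMethod.AREA)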
2. Converting the images to an .h5 file
The benefits of the h5 format are well documented elsewhere, so I won't repeat them here. First convert each image to an RGB matrix, i.e., a 64×64×3 matrix per image (3 channels, since the photos are color). I do no normalization at this stage: in my view, normalization belongs in the code that consumes the data; baking it into the dataset itself is rigid and inflexible. When writing the matrices into the h5 file, make sure each label stays aligned with its image (matrix). Straight to the code:
# Convert the images to an h5 file
import os
import h5py
import numpy as np
from PIL import Image

def image_to_h5():
    dirs = os.listdir("resized_img")
    Y = []  # labels
    X = []  # data
    print(len(dirs))
    for filename in dirs:
        label = int(filename.split('_')[0])
        Y.append(label)
        im = Image.open("resized_img//{}".format(filename)).convert('RGB')
        mat = np.asarray(im)  # image -> matrix
        X.append(mat)
    file = h5py.File("dataset//data.h5", "w")
    file.create_dataset('X', data=np.array(X))
    file.create_dataset('Y', data=np.array(Y))
    file.close()

# test: read the file back and display one sample
# data = h5py.File("dataset//data.h5", "r")
# X_data = data['X']
# print(X_data.shape)
# Y_data = data['Y']
# print(Y_data[123])
# image = Image.fromarray(X_data[123])  # matrix -> image, then display
# image.show()
III. Training the Model
Next comes training. First split the dataset into training and test sets, then normalize and convert the labels to one-hot vectors. The code:
# Load the dataset, split it, normalize, and one-hot encode the labels
import h5py
import numpy as np
from sklearn.model_selection import train_test_split
from keras.utils import np_utils

def load_dataset():
    # split into training and test sets
    data = h5py.File("dataset//data.h5", "r")
    X_data = np.array(data['X'])  # data['X'] is an h5py Dataset; convert to ndarray
    Y_data = np.array(data['Y'])
    X_train, X_test, y_train, y_test = train_test_split(
        X_data, Y_data, train_size=0.9, test_size=0.1, random_state=22)
    print(X_train.shape)
    X_train = X_train / 255.  # normalize to [0, 1]
    X_test = X_test / 255.
    # one-hot encode the labels
    y_train = np_utils.to_categorical(y_train, num_classes=11)
    print(y_train.shape)
    y_test = np_utils.to_categorical(y_test, num_classes=11)
    print(y_test.shape)
    return X_train, X_test, y_train, y_test
Now build the CNN. I used the simplest LeNet-5-style network: two convolutional layers, two pooling layers, one fully connected layer, and a softmax output. The tricks involved: dropout, ReLU, regularization, mini-batches, and Adam. See the code:
import math
import time
import numpy as np
import tensorflow as tf
from tensorflow.python.framework import graph_util

def weight_variable(shape):
    tf.set_random_seed(1)
    return tf.Variable(tf.truncated_normal(shape, stddev=0.1))

def bias_variable(shape):
    return tf.Variable(tf.constant(0.0, shape=shape))

def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(z):
    return tf.nn.max_pool(z, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

def random_mini_batches(X, Y, mini_batch_size=16, seed=0):
    """
    Creates a list of random minibatches from (X, Y)

    Arguments:
    X -- input data, of shape (number of examples, ...)
    Y -- one-hot label matrix, of shape (number of examples, number of classes)
    mini_batch_size -- size of the mini-batches, integer
    seed -- reseeds the shuffle so that runs are reproducible

    Returns:
    mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
    """
    m = X.shape[0]  # number of training examples
    mini_batches = []
    np.random.seed(seed)

    # Step 1: Shuffle (X, Y)
    permutation = list(np.random.permutation(m))
    shuffled_X = X[permutation]
    shuffled_Y = Y[permutation, :].reshape((m, Y.shape[1]))

    # Step 2: Partition (shuffled_X, shuffled_Y), minus the end case.
    num_complete_minibatches = math.floor(m / mini_batch_size)
    for k in range(0, num_complete_minibatches):
        mini_batch_X = shuffled_X[k * mini_batch_size: (k + 1) * mini_batch_size]
        mini_batch_Y = shuffled_Y[k * mini_batch_size: (k + 1) * mini_batch_size]
        mini_batches.append((mini_batch_X, mini_batch_Y))

    # Handle the end case (last mini-batch < mini_batch_size)
    if m % mini_batch_size != 0:
        mini_batch_X = shuffled_X[num_complete_minibatches * mini_batch_size: m]
        mini_batch_Y = shuffled_Y[num_complete_minibatches * mini_batch_size: m]
        mini_batches.append((mini_batch_X, mini_batch_Y))

    return mini_batches
def cnn_model(X_train, y_train, X_test, y_test, keep_prob, lamda, num_epochs=450, minibatch_size=16):
    X = tf.placeholder(tf.float32, [None, 64, 64, 3], name="input_x")
    y = tf.placeholder(tf.float32, [None, 11], name="input_y")
    kp = tf.placeholder_with_default(1.0, shape=(), name="keep_prob")
    lam = tf.placeholder(tf.float32, name="lamda")

    # conv1
    W_conv1 = weight_variable([5, 5, 3, 32])
    b_conv1 = bias_variable([32])
    z1 = tf.nn.relu(conv2d(X, W_conv1) + b_conv1)
    maxpool1 = max_pool_2x2(z1)  # shape after pooling: [?, 32, 32, 32]

    # conv2
    W_conv2 = weight_variable([5, 5, 32, 64])
    b_conv2 = bias_variable([64])
    z2 = tf.nn.relu(conv2d(maxpool1, W_conv2) + b_conv2)
    maxpool2 = max_pool_2x2(z2)  # shape: [?, 16, 16, 64]

    # conv3 -- the best-performing run omitted this layer: just two conv layers,
    # 100 hidden units, 20 training epochs
    # W_conv3 = weight_variable([5, 5, 64, 128])
    # b_conv3 = bias_variable([128])
    # z3 = tf.nn.relu(conv2d(maxpool2, W_conv3) + b_conv3)
    # maxpool3 = max_pool_2x2(z3)  # shape: [?, 8, 8, 128]

    # fully connected layer 1
    W_fc1 = weight_variable([16 * 16 * 64, 200])
    b_fc1 = bias_variable([200])
    maxpool2_flat = tf.reshape(maxpool2, [-1, 16 * 16 * 64])
    z_fc1 = tf.nn.relu(tf.matmul(maxpool2_flat, W_fc1) + b_fc1)
    z_fc1_drop = tf.nn.dropout(z_fc1, keep_prob=kp)

    # softmax layer
    W_fc2 = weight_variable([200, 11])
    b_fc2 = bias_variable([11])
    z_fc2 = tf.add(tf.matmul(z_fc1_drop, W_fc2), b_fc2, name="outlayer")
    prob = tf.nn.softmax(z_fc2, name="probability")

    # cost function with L2 regularization
    regularizer = tf.contrib.layers.l2_regularizer(lam)
    regularization = regularizer(W_fc1) + regularizer(W_fc2)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=z_fc2)) + regularization
    train = tf.train.AdamOptimizer().minimize(cost)

    # name the output node "predict" so the graph can be frozen to a .pb file later
    pred = tf.argmax(prob, 1, output_type="int32", name="predict")
    correct_prediction = tf.equal(pred, tf.argmax(y, 1, output_type='int32'))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    tf.set_random_seed(1)  # to keep results consistent
    seed = 0
    init = tf.global_variables_initializer()

    with tf.Session() as sess:
        sess.run(init)
        for epoch in range(num_epochs):
            seed = seed + 1
            epoch_cost = 0.
            num_minibatches = int(X_train.shape[0] / minibatch_size)
            minibatches = random_mini_batches(X_train, y_train, minibatch_size, seed)
            for minibatch in minibatches:
                (minibatch_X, minibatch_Y) = minibatch
                _, minibatch_cost = sess.run([train, cost],
                                             feed_dict={X: minibatch_X, y: minibatch_Y, kp: keep_prob, lam: lamda})
                epoch_cost += minibatch_cost / num_minibatches
            if epoch % 10 == 0:
                print("Cost after epoch %i: %f" % (epoch, epoch_cost))
                print(str(time.strftime('%Y-%m-%d %H:%M:%S')))

        # evaluate with the accuracy tensor defined above (tensor.eval() and
        # Session.run differ very little); kp is deliberately not fed, so it
        # falls back to its default of 1.0 and dropout is off at evaluation time
        train_acc = accuracy.eval(feed_dict={X: X_train[:1000], y: y_train[:1000], lam: lamda})
        print("train accuracy", train_acc)
        test_acc = accuracy.eval(feed_dict={X: X_test[:1000], y: y_test[:1000], lam: lamda})
        print("test accuracy", test_acc)

        # save checkpoint
        saver = tf.train.Saver({'W_conv1': W_conv1, 'b_conv1': b_conv1, 'W_conv2': W_conv2, 'b_conv2': b_conv2,
                                'W_fc1': W_fc1, 'b_fc1': b_fc1, 'W_fc2': W_fc2, 'b_fc2': b_fc2})
        saver.save(sess, "model_500_200_c3//cnn_model.ckpt")

        # freeze the trained model to a .pb file for use in Android Studio
        output_graph_def = graph_util.convert_variables_to_constants(sess, sess.graph_def, output_node_names=['predict'])
        with tf.gfile.FastGFile('model_500_200_c3//digital_gesture.pb', mode='wb') as f:  # 'w' = write, 'b' = binary
            f.write(output_graph_def.SerializeToString())
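For completeness, here is one plausible way to wire the two functions together. The keep_prob and lamda values below are illustrative placeholders, not necessarily the ones used for the released model:

if __name__ == "__main__":
    # Split, normalize, and one-hot encode as defined earlier.
    X_train, X_test, y_train, y_test = load_dataset()
    # keep_prob / lamda are hypothetical example values.
    cnn_model(X_train, y_train, X_test, y_test,
              keep_prob=0.8, lamda=0.001,
              num_epochs=450, minibatch_size=16)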
One extremely important caveat here: see section "2. Model training notes" in my previous post, 將TensorFlow訓(xùn)練好的模型遷移到Android APP上(TensorFlowLite). The whole model trains in a few hours; tuning the hyperparameters, of course, is an art of its own, so I'll leave it at that.
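Before wiring the .pb file into the app, it is worth a quick sanity check on the PC side. A minimal sketch, assuming the node names input_x and predict from the training code above; "test.jpg" is a placeholder path, and PIL's resize is close enough for a smoke test even though it is not area interpolation:

import numpy as np
import tensorflow as tf
from PIL import Image

# Load the frozen graph produced by cnn_model() above.
with tf.gfile.FastGFile("model_500_200_c3//digital_gesture.pb", 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

with tf.Session() as sess:
    # "test.jpg" is a placeholder; normalize exactly as in load_dataset().
    img = np.asarray(Image.open("test.jpg").convert('RGB').resize((64, 64))) / 255.
    pred = sess.run("predict:0", feed_dict={"input_x:0": img.reshape(1, 64, 64, 3)})
    print("predicted gesture:", pred)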
A small aside on hardware: one epoch takes about 2 minutes on an i7-7700K, 36 seconds on a GTX 750 Ti, and 6 seconds on a GTX 1070... Thanks again to 宋俞璋 for lending the beefy machine. On how to set up a TensorFlow GPU environment, see my post: ubuntu16.04+GTX750ti+python3.6.5配置cuda9.0+cudnn7.05+TensorFlow-gpu1.8.0
Performance of the trained model:
In the app, however, the environment it faces is far messier, and accuracy is nowhere near this high.
Some quick real-world test screenshots on the PC side:
IV. Calling the Trained Model in Android Studio
For how to migrate the model into Android Studio, see my previous post: 將TensorFlow訓(xùn)練好的模型遷移到Android APP上(TensorFlowLite). Here I'll explain why OpenCV gets involved. It all comes back to image resizing: remember the area interpolation mentioned above? Unlike bilinear interpolation and the others, there is no Java implementation of it floating around online, so I combed through the TensorFlow API docs and found this passage:
Each output pixel is computed by first transforming the pixel's footprint into the input tensor and then averaging the pixels that intersect the footprint. An input pixel's contribution to the average is weighted by the fraction of its area that intersects the footprint. This is the same as OpenCV's INTER_AREA.
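In other words, TensorFlow's area resize and OpenCV's INTER_AREA should produce the same result. A quick way to convince yourself on the PC side, sketched in Python ("test.jpg" is a placeholder path):

import cv2

# Resize with OpenCV's INTER_AREA and compare against the 64x64 output of
# tf.image.resize_images(..., method=3) on the same image.
img = cv2.imread("test.jpg")  # placeholder path
small = cv2.resize(img, (64, 64), interpolation=cv2.INTER_AREA)
print(small.shape)  # (64, 64, 3)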
So that's why OpenCV is used. Configuring OpenCV in Android Studio is also riddled with pitfalls; for the details, see my post: Android Studio中配置OpenCV. One more note: TensorFlowLite only provides a few simple interfaces. I already covered them in 將TensorFlow訓(xùn)練好的模型遷移到Android APP上(TensorFlowLite), but they are worth repeating here (the official documentation page for these interfaces is linked in that post):
// Load the model from disk.
TensorFlowInferenceInterface inferenceInterface =
new TensorFlowInferenceInterface(assetManager, modelFilename);
// Copy the input data into TensorFlow.
inferenceInterface.feed(inputName, floatValues, 1, inputSize, inputSize, 3);
// Run the inference call.
inferenceInterface.run(outputNames, logStats);
// Copy the output Tensor back into the output array.
inferenceInterface.fetch(outputName, outputs);
The comments already explain what each call does, so I won't belabor them.
I can't tell whether it's because OpenCV's area interpolation is implemented differently from TensorFlow's or for some other reason, but the model always seems to perform a bit worse in the app than on PC. Then again, that may just be my impression.
There isn't much to say about the Android app code itself; it's all on GitHub: Chinese-number-gestures-recognition (stars welcome, haha).
Below are a few test screenshots; see the GitHub repo for more: Chinese-number-gestures-recognition