Handwritten Digit Recognition: Convolutional Neural Network Version
Reference code:
import numpy
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import Flatten
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import MaxPooling2D
from keras.utils import np_utils
from keras import backend as K # the backend is the computation engine; Keras supports three: Theano / TensorFlow / CNTK
K.set_image_dim_ordering('th') # 'th' vs 'tf' differ only in dimension order: 'th' puts channels first, (3, 28, 28); 'tf' puts channels last, (28, 28, 3) [3 is the channel count, the other two dimensions are the image size in pixels]
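# Illustration (assuming a batch of N RGB 28x28 images):
#   channels-first ('th'): batch shape (N, 3, 28, 28)
#   channels-last  ('tf'): batch shape (N, 28, 28, 3)
# This script uses 'th', which matches the reshape to (samples, 1, 28, 28) below.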
# Fix the random seed so results are reproducible
seed = 7
numpy.random.seed(seed)
# Load the data
(X_train, y_train), (X_test, y_test) = mnist.load_data() # X_train holds the training images and y_train their digit labels; X_test and y_test are the test images and labels
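# Shapes as loaded from MNIST: X_train (60000, 28, 28), y_train (60000,);
# X_test (10000, 28, 28), y_test (10000,).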
# Reshape to: [samples][channels][image width][image height]
X_train = X_train.reshape(X_train.shape[0], 1, 28, 28).astype('float32')
X_test = X_test.reshape(X_test.shape[0], 1, 28, 28).astype('float32')
# Normalize grayscale values from the 0-255 range to 0-1
X_train = X_train / 255
X_test = X_test / 255
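# For example, a raw pixel value of 128 becomes 128 / 255 ≈ 0.502 after scaling.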
# The model outputs a score for each class: for every digit 0-9 there is a predicted value giving the probability that the input belongs to that class; the higher the probability, the more confident the prediction.
# Since the raw labels are integers 0-9, they are usually converted to one-hot vectors. For example, the first training label is 5, whose one-hot encoding is [0,0,0,0,0,1,0,0,0,0].
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
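# A quick illustrative check of what to_categorical does (assuming 10 classes):
#   np_utils.to_categorical([5], num_classes=10)
#   -> array([[0., 0., 0., 0., 0., 1., 0., 0., 0., 0.]], dtype=float32)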
num_classes = y_test.shape[1] # y_test.shape == (10000, 10) after one-hot encoding
# Define the model-building function
def baseline_model():
    # Create the model
    model = Sequential()
    # Set up the model's layers
    model.add(Conv2D(32, (5, 5), input_shape=(1, 28, 28), activation='relu')) # 32 filters, each 5 x 5; single channel, 28 x 28 images
    model.add(MaxPooling2D(pool_size=(2, 2))) # 2 x 2 max pooling; pooling mainly compresses the data and the parameter count
    model.add(Dropout(0.3)) # fraction of connections to drop; reduces overfitting, commonly set to 0.3 or 0.5
    model.add(Flatten()) # Flatten "squashes" the input, turning the multi-dimensional tensor into a 1-D vector; commonly used in the transition from convolutional layers to fully connected layers
    model.add(Dense(128, activation='relu')) # fully connected layer with 128 units
    model.add(Dense(num_classes, activation='softmax')) # output layer must have 10 units, one per class
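    # Shape trace through the network (channels-first, 'valid' padding):
    #   input           (1, 28, 28)
    #   Conv2D 32@5x5   (32, 24, 24)   since 28 - 5 + 1 = 24
    #   MaxPooling 2x2  (32, 12, 12)
    #   Flatten         4608
    #   Dense           128
    #   Dense           10 (softmax)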
    # Compile the model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) # categorical cross-entropy loss with the Adam optimizer
    return model
# Build and compile the model
model = baseline_model()
# Train the model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=200, verbose=2)
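# With 60000 training samples and batch_size=200, each epoch performs 60000 / 200 = 300 weight updates.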
# Evaluate the model
loss, scores = model.evaluate(X_test, y_test, verbose=1) # evaluate() returns [loss, accuracy] for the metrics configured above
# Print the loss value and the accuracy score
print("Loss: ", loss)
print("scores: ", scores)
Sample output:
Train on 60000 samples, validate on 10000 samples
Epoch 1/10
- 6s - loss: 0.2400 - acc: 0.9314 - val_loss: 0.0837 - val_acc: 0.9740
Epoch 2/10
- 6s - loss: 0.0779 - acc: 0.9765 - val_loss: 0.0482 - val_acc: 0.9843
Epoch 3/10
- 6s - loss: 0.0560 - acc: 0.9827 - val_loss: 0.0431 - val_acc: 0.9861
Epoch 4/10
- 6s - loss: 0.0433 - acc: 0.9866 - val_loss: 0.0423 - val_acc: 0.9857
Epoch 5/10
- 6s - loss: 0.0363 - acc: 0.9883 - val_loss: 0.0333 - val_acc: 0.9884
Epoch 6/10
- 6s - loss: 0.0301 - acc: 0.9906 - val_loss: 0.0310 - val_acc: 0.9893
Epoch 7/10
- 6s - loss: 0.0246 - acc: 0.9922 - val_loss: 0.0316 - val_acc: 0.9888
Epoch 8/10
- 6s - loss: 0.0232 - acc: 0.9927 - val_loss: 0.0296 - val_acc: 0.9896
Epoch 9/10
- 6s - loss: 0.0195 - acc: 0.9937 - val_loss: 0.0284 - val_acc: 0.9906
Epoch 10/10
- 6s - loss: 0.0164 - acc: 0.9949 - val_loss: 0.0282 - val_acc: 0.9911
10000/10000 [==============================] - 1s 85us/step
Loss: 0.0281819745783
scores: 0.9911
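To classify a single image with the trained model, here is a minimal sketch (assuming model and X_test as defined in the script above):
import numpy
# predict() expects a batch, so keep the leading axis: shape (1, 1, 28, 28)
sample = X_test[0:1]
probs = model.predict(sample)             # shape (1, 10): one softmax score per digit
predicted_digit = numpy.argmax(probs[0])  # class with the highest probability
print("Predicted digit:", predicted_digit)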