The MNIST dataset contains 70,000 images in total:
- 60,000 28*28-pixel images of handwritten digits 0~9, with labels, used for training
- 10,000 28*28-pixel images of handwritten digits 0~9, with labels, used for testing
import tensorflow as tf
import matplotlib.pyplot as plt
# Load the data
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Take a quick look at the data to get a rough sense of it
print(x_train.shape, y_train.shape, x_test.shape, y_test.shape)
print(x_train[0].shape)
print(x_train[0])
# Visualize one sample (any index will do)
plt.imshow(x_train[24], cmap="gray")
plt.show()
Output:
(60000, 28, 28) (60000,) (10000, 28, 28) (10000,)
(28, 28)
[[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 3 18 18 18 126 136
175 26 166 255 247 127 0 0 0 0]
[ 0 0 0 0 0 0 0 0 30 36 94 154 170 253 253 253 253 253
225 172 253 242 195 64 0 0 0 0]
[ 0 0 0 0 0 0 0 49 238 253 253 253 253 253 253 253 253 251
93 82 82 56 39 0 0 0 0 0]
[ 0 0 0 0 0 0 0 18 219 253 253 253 253 253 198 182 247 241
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 80 156 107 253 253 205 11 0 43 154
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 14 1 154 253 90 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 139 253 190 2 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 11 190 253 70 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 35 241 225 160 108 1
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 81 240 253 253 119
25 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 45 186 253 253
150 27 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 16 93 252
253 187 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 249
253 249 64 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 46 130 183 253
253 207 2 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 39 148 229 253 253 253
250 182 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 24 114 221 253 253 253 253 201
78 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 23 66 213 253 253 253 253 198 81 2
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 18 171 219 253 253 253 253 195 80 9 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 55 172 226 253 253 253 253 244 133 11 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 136 253 253 253 212 135 132 16 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]]
The image looks like this:
(figure 1.png: the x_train[24] sample rendered by plt.imshow)
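To get more than a single glance at the data, the snippet below plots the first few training images together with their labels. This is a small sketch added for illustration; it only assumes the same mnist arrays and matplotlib used above.
import tensorflow as tf
import matplotlib.pyplot as plt

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()

# Show the first 6 training images in a 2x3 grid, titled with their labels
fig, axes = plt.subplots(2, 3, figsize=(6, 4))
for i, ax in enumerate(axes.flat):
    ax.imshow(x_train[i], cmap="gray")    # each image is a 28*28 uint8 array
    ax.set_title(f"label: {y_train[i]}")  # y_train[i] is the digit 0~9
    ax.axis("off")
plt.show()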
Handwritten digit recognition with Sequential
import tensorflow as tf
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # Normalize the input features: map pixel values from 0~255 into 0~1, which is easier for the network to learn from
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),                       # flatten each 28*28 image into a 1-D vector
    tf.keras.layers.Dense(128, activation="relu"),   # first layer: 128 neurons with ReLU activation
    tf.keras.layers.Dense(10, activation="softmax")  # second layer: 10 neurons, softmax makes the outputs a probability distribution
])
# Configure training: Adam optimizer, loss function, evaluation metric
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),  # the softmax layer already outputs probabilities, so from_logits=False
              metrics=["sparse_categorical_accuracy"])
model.fit(x_train, y_train, batch_size=32, epochs=5,
          validation_data=(x_test, y_test), validation_freq=1)
model.summary()
Output (only the final epoch is shown):
Epoch 5/5
1875/1875 [==============================] - 3s 1ms/step - loss: 0.0467 - sparse_categorical_accuracy: 0.9859 - val_loss: 0.0869 - val_sparse_categorical_accuracy: 0.9750
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
flatten (Flatten)            multiple                  0
_________________________________________________________________
dense (Dense)                multiple                  100480
_________________________________________________________________
dense_1 (Dense)              multiple                  1290
=================================================================
Total params: 101,770
Trainable params: 101,770
Non-trainable params: 0
_________________________________________________________________
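Once training finishes, the model can be evaluated on the test set and used for prediction. The sketch below is an added example, not part of the original script; it assumes it runs right after model.fit, so model, x_test and y_test are still in scope.
import numpy as np

# Overall loss and accuracy on the 10,000 test images
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)
print("test accuracy:", test_acc)

# Predict the first 5 test images: the softmax output is a probability
# distribution over the 10 classes, argmax picks the most likely digit
probs = model.predict(x_test[:5])
print("predicted:", np.argmax(probs, axis=1))
print("actual:   ", y_test[:5])
Because the last Dense layer already applies softmax, the loss was compiled with from_logits=False; if that softmax were removed, from_logits=True would be the matching setting.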