This guide trains a neural network model to classify images of clothing, such as sneakers and shirts. It uses the Fashion MNIST dataset, which contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28x28 pixels).
Fashion MNIST is intended as a drop-in replacement for the classic MNIST dataset, which is often used as the "Hello, World" of machine learning programs for computer vision. The MNIST dataset contains images of handwritten digits (0, 1, 2, etc.) in a format identical to that of the clothing images you will use here.
This guide uses Fashion MNIST for variety, and because it is slightly more challenging than regular MNIST. Both datasets are relatively small and are used to verify that an algorithm works as expected. They are good starting points for testing and debugging code.
In this example, 60,000 images are used to train the network and 10,000 images to evaluate how accurately the network learned to classify images.
%matplotlib inline  # Needed to display plots inline in a Jupyter notebook; remove this line in other environments.
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
# The images are 28x28 NumPy arrays, with pixel values ranging from 0 to 255. The labels are an array
# of integers, ranging from 0 to 9. These correspond to the class of clothing the image represents:
# Label  Class
# 0      T-shirt/top
# 1      Trouser
# 2      Pullover
# 3      Dress
# 4      Coat
# 5      Sandal
# 6      Shirt
# 7      Sneaker
# 8      Bag
# 9      Ankle boot
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
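As a quick sanity check, the integer labels can be mapped to their names with plain list indexing. A minimal sketch, independent of TensorFlow, using a few made-up labels in place of `train_labels`:

```python
import numpy as np

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

# A few hypothetical labels, shaped like entries of train_labels
labels = np.array([9, 0, 3])

# Convert each integer label to its human-readable class name
names = [class_names[l] for l in labels]
print(names)  # → ['Ankle boot', 'T-shirt/top', 'Dress']
```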
# The data must be preprocessed before training the network. If you inspect the first image in the
# training set, you will see that the pixel values fall in the range 0 to 255:
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
# Scale these values to a range of 0 to 1 before feeding them to the neural network model. To do so,
# divide the values by 255. It is important that the training set and the test set be preprocessed
# in the same way:
train_images = train_images / 255.0
test_images = test_images / 255.0
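The same scaling can be seen on a tiny stand-in array. A sketch, assuming pixel values in [0, 255], showing that division by 255.0 maps them into [0, 1]:

```python
import numpy as np

# A tiny stand-in for an image batch: integer pixel values in [0, 255]
pixels = np.array([[0, 128, 255]], dtype=np.float64)

# Same transform as train_images / 255.0
scaled = pixels / 255.0

print(scaled.min(), scaled.max())  # → 0.0 1.0
```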
# To verify that the data is in the correct format and that you are ready to build and train the
# network, let's display the first 25 images from the training set with the class name below each one.
plt.figure(figsize=(10,10))
for i in range(25):
    plt.subplot(5, 5, i+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i], cmap=plt.cm.binary)
    plt.xlabel(class_names[train_labels[i]])
plt.show()
# Building the neural network requires configuring the layers of the model, then compiling the model.
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(10)
])
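The Flatten layer turns each 28x28 image into a vector of 784 values, and each Dense layer then has one weight per (input, unit) pair plus one bias per unit. The parameter counts of this architecture can be checked with plain arithmetic (the same numbers `model.summary()` would report):

```python
# Flatten: 28*28 = 784 inputs, no trainable parameters of its own
flat = 28 * 28

# Dense(128): 784 inputs * 128 units weights, plus 128 biases
dense1 = flat * 128 + 128

# Dense(10): 128 inputs * 10 units weights, plus 10 biases (the logits layer)
dense2 = 128 * 10 + 10

print(dense1, dense2, dense1 + dense2)  # → 100480 1290 101770
```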
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
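Because the last Dense layer outputs raw logits rather than probabilities, the loss is built with `from_logits=True`: internally it applies a softmax and then takes the negative log-probability of the true class. A NumPy sketch of that computation on a single made-up logit vector (values chosen only for illustration):

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.1])  # made-up logits for 3 classes
true_label = 0                      # index of the correct class

# Numerically stable softmax: shift by the max before exponentiating
z = logits - logits.max()
probs = np.exp(z) / np.exp(z).sum()

# Sparse categorical crossentropy: -log p(true class)
loss = -np.log(probs[true_label])
print(loss)
```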
# To start training, call the model.fit method, so called because it "fits" the model to the training data:
model.fit(train_images, train_labels, epochs=10)
# Evaluate accuracy on the test set
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
# With the model trained, you can use it to make predictions about some images.
probability_model = tf.keras.Sequential([model,
                                         tf.keras.layers.Softmax()])
predictions = probability_model.predict(test_images)
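Since a Softmax layer was appended, each row of `predictions` is a length-10 probability vector, and `np.argmax` picks the most likely class. A sketch with a made-up row standing in for one entry of `predictions`:

```python
import numpy as np

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

# A made-up prediction row: probabilities over the 10 classes (sums to 1)
row = np.array([0.01, 0.0, 0.02, 0.0, 0.01, 0.01, 0.02, 0.0, 0.03, 0.90])

pred = np.argmax(row)      # index of the most likely class
print(class_names[pred])   # → Ankle boot
```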
# Graph this to look at the model's predictions across the full set of 10 classes.
def plot_image(i, predictions_array, true_label, img):
    true_label, img = true_label[i], img[i]
    plt.grid(False)
    plt.xticks([])
    plt.yticks([])
    plt.imshow(img, cmap=plt.cm.binary)
    predicted_label = np.argmax(predictions_array)
    if predicted_label == true_label:
        color = 'blue'
    else:
        color = 'red'
    plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
                                         100*np.max(predictions_array),
                                         class_names[true_label]),
               color=color)
def plot_value_array(i, predictions_array, true_label):
    true_label = true_label[i]
    plt.grid(False)
    plt.xticks(range(10))
    plt.yticks([])
    thisplot = plt.bar(range(10), predictions_array, color="#777777")
    plt.ylim([0, 1])
    predicted_label = np.argmax(predictions_array)
    thisplot[predicted_label].set_color('red')
    thisplot[true_label].set_color('blue')
# Let's look at the 0th image, its prediction, and the prediction array. Correct prediction labels
# are blue and incorrect ones are red. The number gives the percentage (out of 100) for the predicted label.
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
# Let's plot several images with their predictions. Note that the model can be wrong even when it is very confident.
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
    plt.subplot(num_rows, 2*num_cols, 2*i+1)
    plot_image(i, predictions[i], test_labels, test_images)
    plt.subplot(num_rows, 2*num_cols, 2*i+2)
    plot_value_array(i, predictions[i], test_labels)
plt.tight_layout()
plt.show()
# MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
Output:
Epoch 1/10
1875/1875 [==============================] - 5s 3ms/step - loss: 0.4947 - accuracy: 0.8247
Epoch 2/10
1875/1875 [==============================] - 5s 3ms/step - loss: 0.3696 - accuracy: 0.8665
Epoch 3/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.3331 - accuracy: 0.8787
Epoch 4/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.3118 - accuracy: 0.8868
Epoch 5/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.2939 - accuracy: 0.8910
Epoch 6/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.2805 - accuracy: 0.8954
Epoch 7/10
1875/1875 [==============================] - 5s 3ms/step - loss: 0.2668 - accuracy: 0.9017
Epoch 8/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.2551 - accuracy: 0.9050
Epoch 9/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.2471 - accuracy: 0.9079
Epoch 10/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.2388 - accuracy: 0.9098
313/313 - 0s - loss: 0.3377 - accuracy: 0.8837
Test accuracy: 0.8837000131607056
Full example code: https://github.com/wennaz/Deep_Learning