Source: Pinecone628 - kesci.com
Original link: Trash Classification with a CNN
Click the link above to run this notebook online without setting up an environment.
Dataset download: another trash-classification dataset, with more images of household waste
1. Introduction
Shanghai's mandatory waste-sorting program has been in effect for almost two weeks. Can we use the machine learning and deep learning algorithms we normally study to build a simple trash-classification model?
Below we use a CNN to classify trash. The dataset contains six categories (different from Shanghai's standard): glass, paper, cardboard, plastic, metal, and general trash.
The implementation uses Keras.
2.導(dǎo)入包和數(shù)據(jù)
import numpy as np
import matplotlib.pyplot as plt
from keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array, array_to_img
from keras.layers import Conv2D, Flatten, MaxPooling2D, Dense
from keras.models import Sequential
import glob, os, random
Using TensorFlow backend.
base_path = '../input/trash_div7612/dataset-resized'
img_list = glob.glob(os.path.join(base_path, '*/*.jpg'))
We have 2,527 images in total. Below we display 6 of them at random.
print(len(img_list))
2527
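The glob pattern above relies on the dataset keeping one subdirectory per class. A quick sketch (using a hypothetical local path in place of the notebook's `base_path`) shows what the joined pattern matches:

```python
import os

base_path = 'dataset-resized'  # hypothetical local copy of the dataset
# '*/*.jpg' descends exactly one directory level, so it matches files like
# dataset-resized/glass/glass1.jpg, i.e. <class folder>/<image>
pattern = os.path.join(base_path, '*', '*.jpg')
print(pattern)
```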
for i, img_path in enumerate(random.sample(img_list, 6)):
    img = load_img(img_path)
    img = img_to_array(img, dtype=np.uint8)
    plt.subplot(2, 3, i+1)
    plt.imshow(img.squeeze())
3. Splitting the Data
train_datagen = ImageDataGenerator(
    rescale=1./255, shear_range=0.1, zoom_range=0.1,
    width_shift_range=0.1, height_shift_range=0.1, horizontal_flip=True,
    vertical_flip=True, validation_split=0.1)
test_datagen = ImageDataGenerator(
    rescale=1./255, validation_split=0.1)
train_generator = train_datagen.flow_from_directory(
    base_path, target_size=(300, 300), batch_size=16,
    class_mode='categorical', subset='training', seed=0)
validation_generator = test_datagen.flow_from_directory(
    base_path, target_size=(300, 300), batch_size=16,
    class_mode='categorical', subset='validation', seed=0)
labels = train_generator.class_indices
labels = dict((v, k) for k, v in labels.items())
print(labels)
Found 2276 images belonging to 6 classes.
Found 251 images belonging to 6 classes.
{0: 'cardboard', 1: 'glass', 2: 'metal', 3: 'paper', 4: 'plastic', 5: 'trash'}
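As a sanity check, the two subsets reported by `flow_from_directory` with `validation_split=0.1` should together cover all 2,527 images, with roughly 10% held out (the split is applied per class, so the fraction is only approximately 0.1):

```python
n_train, n_val = 2276, 251         # counts reported by flow_from_directory
total = n_train + n_val
val_fraction = n_val / total
print(total, round(val_fraction, 3))  # → 2527 0.099
```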
4. Building and Training the Model
model = Sequential([
    Conv2D(filters=32, kernel_size=3, padding='same', activation='relu', input_shape=(300, 300, 3)),
    MaxPooling2D(pool_size=2),
    Conv2D(filters=64, kernel_size=3, padding='same', activation='relu'),
    MaxPooling2D(pool_size=2),
    Conv2D(filters=32, kernel_size=3, padding='same', activation='relu'),
    MaxPooling2D(pool_size=2),
    Conv2D(filters=32, kernel_size=3, padding='same', activation='relu'),
    MaxPooling2D(pool_size=2),
    Flatten(),
    Dense(64, activation='relu'),
    Dense(6, activation='softmax')
])
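With `padding='same'`, each Conv2D preserves the spatial size, and each 2×2 max-pool halves it (flooring odd sizes). A small sketch of the arithmetic shows what the Flatten layer receives:

```python
def after_pools(size, n_pools):
    # 'same' convolutions preserve spatial size; each 2x2 max-pool
    # with stride 2 halves it, flooring when the size is odd
    for _ in range(n_pools):
        size //= 2
    return size

side = after_pools(300, 4)        # 300 -> 150 -> 75 -> 37 -> 18
flat_features = side * side * 32  # last conv block has 32 filters
print(side, flat_features)        # → 18 10368
```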
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc'])
model.fit_generator(train_generator, epochs=100, steps_per_epoch=2276//32,
                    validation_data=validation_generator, validation_steps=251//32)
Epoch 1/100
71/71 [==============================] - 29s 404ms/step - loss: 1.7330 - acc: 0.2236 - val_loss: 1.6778 - val_acc: 0.3393
Epoch 2/100
71/71 [==============================] - 25s 359ms/step - loss: 1.5247 - acc: 0.3415 - val_loss: 1.4649 - val_acc: 0.3750
Epoch 3/100
71/71 [==============================] - 24s 344ms/step - loss: 1.4455 - acc: 0.4006 - val_loss: 1.4694 - val_acc: 0.3832
...
Epoch 98/100
71/71 [==============================] - 24s 335ms/step - loss: 0.3936 - acc: 0.8583 - val_loss: 0.7845 - val_acc: 0.7321
Epoch 99/100
71/71 [==============================] - 24s 332ms/step - loss: 0.4013 - acc: 0.8503 - val_loss: 0.6881 - val_acc: 0.7664
Epoch 100/100
71/71 [==============================] - 24s 335ms/step - loss: 0.3275 - acc: 0.8768 - val_loss: 0.9691 - val_acc: 0.6696
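One detail worth flagging: the generators use `batch_size=16`, but `steps_per_epoch` is computed as `2276//32`. Each "epoch" in the log above therefore draws only about half of the 2,276 training images; `2276//16` would cover the full training set once per epoch:

```python
batch_size = 16
n_train = 2276

steps_used = n_train // 32              # 71, matching the training log
images_per_epoch = steps_used * batch_size
full_epoch_steps = n_train // batch_size
print(steps_used, images_per_epoch, full_epoch_steps)  # → 71 1136 142
```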
5. Results
Below we randomly take 16 images from the validation set, display them with their true labels, and show the model's predictions alongside.
The predictions are fairly accurate: the model identifies the correct category for most of the images.
test_x, test_y = validation_generator.__getitem__(1)
preds = model.predict(test_x)

plt.figure(figsize=(16, 16))
for i in range(16):
    plt.subplot(4, 4, i+1)
    plt.title('pred:%s / truth:%s' % (labels[np.argmax(preds[i])], labels[np.argmax(test_y[i])]))
    plt.imshow(test_x[i])
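The `labels` lookup in the loop above works on any softmax output. With a made-up probability vector (purely illustrative, not a real model output), the argmax-to-name mapping looks like this:

```python
import numpy as np

labels = {0: 'cardboard', 1: 'glass', 2: 'metal', 3: 'paper', 4: 'plastic', 5: 'trash'}
pred = np.array([0.05, 0.10, 0.60, 0.10, 0.10, 0.05])  # hypothetical softmax output
print(labels[np.argmax(pred)])  # → metal
```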