AlexNet applied the basic principles of CNNs to a much deeper network, and at the same time introduced a number of new techniques:
- ReLU is used as the CNN's activation function, successfully overcoming the vanishing-gradient problem that Sigmoid suffers from in deeper networks (a numeric sketch follows this list).
- Dropout is used during training to randomly ignore a subset of neurons, which helps avoid overfitting. Overfitting is a common problem in machine learning; Dropout, proposed by Hinton's team, is a simple and effective remedy: a random fraction of a layer's output values is discarded. In effect this creates many new random samples, preventing overfitting by enlarging the sample set and reducing the number of features; each random drop can be seen as sampling the features (see the sketch after this list).
- Overlapping max pooling is used in the CNN. Max pooling avoids the blurring effect of average pooling, and setting the stride smaller than the pooling kernel makes neighboring pooling outputs overlap, which enriches the features (see the pooling example after this list).
- LRN (Local Response Normalization) layers create a competition mechanism among the activities of local neurons: values with larger responses become relatively larger while neurons with smaller responses are suppressed, which improves the model's ability to generalize (the formula is given after this list).
- Data augmentation, mainly random cropping and flipping of the original images; this greatly reduces overfitting and improves generalization.
- PCA is applied to the RGB values of the images, and a Gaussian perturbation with standard deviation 0.1 is added along the principal components to inject some noise (see the PCA sketch after this list).
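A quick numeric sketch of the vanishing-gradient point in the first bullet (plain NumPy, my own illustration): the derivative of Sigmoid never exceeds 0.25, so backpropagating through many Sigmoid layers multiplies together many small factors, while ReLU passes a gradient of exactly 1 for positive inputs:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = 2.0
sig_grad = sigmoid(x) * (1.0 - sigmoid(x))  # ~0.105, never above 0.25
relu_grad = 1.0 if x > 0 else 0.0           # exactly 1.0 for positive inputs
print(sig_grad ** 10)   # ~1.6e-10 after 10 layers: the gradient vanishes
print(relu_grad ** 10)  # 1.0: the gradient survives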
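The "Dropout as feature sampling" view can be made concrete with a minimal sketch (an illustration of the mechanism, not the tf.nn.dropout implementation the network below actually uses; note the rescaling by keep_prob so the expected activation is unchanged):

import numpy as np

def dropout(x, keep_prob=0.5):
    # Each call samples a different random subset of the features,
    # so every training step effectively sees a new random sample.
    mask = (np.random.uniform(size=x.shape) < keep_prob).astype(x.dtype)
    return x * mask / keep_prob

print(dropout(np.ones(8)))  # e.g. [2. 0. 2. 2. 0. 0. 2. 2.]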
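Overlapping pooling only requires the stride to be smaller than the pooling kernel. A minimal sketch using the same 3x3 kernel / stride 2 configuration as the code below (the placeholder input is assumed for illustration):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 56, 56, 64])
# Kernel 3x3 but stride 2: adjacent 3x3 windows share one row/column.
pool = tf.nn.max_pool(x, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='VALID')
print(pool.get_shape().as_list())  # [None, 27, 27, 64]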
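For reference, LRN as defined in the AlexNet paper normalizes the activity $a^i_{x,y}$ of kernel $i$ at position $(x, y)$ across $n$ neighboring channels:

$$b^i_{x,y} = a^i_{x,y} \Big/ \Big( k + \alpha \sum_{j=\max(0,\, i-n/2)}^{\min(N-1,\, i+n/2)} \big( a^j_{x,y} \big)^2 \Big)^{\beta}$$

In the tf.nn.lrn calls below, depth_radius plays the role of n/2, bias is k, and alpha and beta keep their names.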
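A minimal NumPy sketch of the PCA color perturbation (my own illustration of the scheme from the AlexNet paper; the function name and details are assumptions, not code from the book):

import numpy as np

def pca_color_augment(image, sigma=0.1):
    # image: float array of shape (H, W, 3) holding RGB values.
    flat = image.reshape(-1, 3)
    cov = np.cov(flat - flat.mean(axis=0), rowvar=False)  # 3x3 channel covariance
    eigvals, eigvecs = np.linalg.eigh(cov)                # principal components
    alphas = np.random.normal(0.0, sigma, size=3)         # stddev-0.1 Gaussian noise
    delta = eigvecs.dot(alphas * eigvals)                 # shift along each component
    return image + delta                                  # same shift for every pixel

augmented = pca_color_augment(np.random.rand(224, 224, 3))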
The input is a 224x224x3 image; the output shape of each layer is listed below (the leading 32 is the batch size):
conv1 [32, 56, 56, 64]
pool1 [32, 27, 27, 64]
conv2 [32, 27, 27, 192]
pool2 [32, 13, 13, 192]
conv3 [32, 13, 13, 384]
conv4 [32, 13, 13, 256]
conv5 [32, 13, 13, 256]
pool5 [32, 6, 6, 256]
fcl1 [32, 4096]
fcl2 [32, 4096]
fcl3 [32, 1000]
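These shapes follow from the usual convolution/pooling arithmetic: a SAME-padded convolution with stride s outputs ceil(input / s), and a VALID 3x3 pooling with stride 2 outputs (input - 3) // 2 + 1. A quick check of the table above:

import math

def same_conv(size, stride):
    return math.ceil(size / stride)

def valid_pool(size, kernel=3, stride=2):
    return (size - kernel) // stride + 1

s = same_conv(224, 4)  # conv1 -> 56
s = valid_pool(s)      # pool1 -> 27 (conv2 keeps 27: stride 1, SAME)
s = valid_pool(s)      # pool2 -> 13 (conv3-5 keep 13)
s = valid_pool(s)      # pool5 -> 6
print(s, s * s * 256)  # 6 9216, the flattened input size of fcl1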
Below is an implementation using TensorFlow:
# -*- coding: utf-8 -*-
import math
import time
import tensorflow as tf
from datetime import datetime
batch_size = 32
num_batches = 100
# Print the name and output shape of each layer
def print_activations(t):
    print(t.op.name, ' ', t.get_shape().as_list())
def inference(images):
    parameters = []
    # First convolutional layer: 11x11 kernel, stride 4, 64 output channels
    with tf.name_scope('conv1') as scope:
        kernel = tf.Variable(tf.truncated_normal([11, 11, 3, 64], dtype=tf.float32, stddev=1e-1),
                             name='weights')
        conv = tf.nn.conv2d(images, kernel, [1, 4, 4, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32),
                             trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv1 = tf.nn.relu(bias, name=scope)
        parameters += [kernel, biases]
        print_activations(conv1)
    lrn1 = tf.nn.lrn(conv1, 4, bias=1.0, alpha=0.001/9, beta=0.75, name='lrn1')
    pool1 = tf.nn.max_pool(lrn1, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='VALID', name='pool1')
    print_activations(pool1)
    # Second convolutional layer: 5x5 kernel, stride 1, 192 output channels
    with tf.name_scope('conv2') as scope:
        kernel = tf.Variable(tf.truncated_normal([5, 5, 64, 192], dtype=tf.float32, stddev=1e-1),
                             name='weights')
        conv = tf.nn.conv2d(pool1, kernel, [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[192], dtype=tf.float32),
                             trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv2 = tf.nn.relu(bias, name=scope)
        parameters += [kernel, biases]
        print_activations(conv2)
    lrn2 = tf.nn.lrn(conv2, 4, bias=1.0, alpha=0.001/9, beta=0.75, name='lrn2')
    pool2 = tf.nn.max_pool(lrn2, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='VALID', name='pool2')
    print_activations(pool2)
    # Third convolutional layer: 3x3 kernel, 384 output channels (no LRN or pooling);
    # truncated_normal is used here for consistency with the other layers
    with tf.name_scope('conv3') as scope:
        kernel = tf.Variable(tf.truncated_normal([3, 3, 192, 384], dtype=tf.float32, stddev=1e-1),
                             name='weights')
        conv = tf.nn.conv2d(pool2, kernel, [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[384], dtype=tf.float32),
                             trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv3 = tf.nn.relu(bias, name=scope)
        parameters += [kernel, biases]
        print_activations(conv3)
    # Fourth convolutional layer: 3x3 kernel, 256 output channels
    with tf.name_scope('conv4') as scope:
        kernel = tf.Variable(tf.truncated_normal([3, 3, 384, 256], dtype=tf.float32, stddev=1e-1),
                             name='weights')
        conv = tf.nn.conv2d(conv3, kernel, [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32),
                             trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv4 = tf.nn.relu(bias, name=scope)
        parameters += [kernel, biases]
        print_activations(conv4)
    # Fifth convolutional layer: 3x3 kernel, 256 output channels, then overlapping pooling
    with tf.name_scope('conv5') as scope:
        kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256], dtype=tf.float32, stddev=1e-1),
                             name='weights')
        conv = tf.nn.conv2d(conv4, kernel, [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32),
                             trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv5 = tf.nn.relu(bias, name=scope)
        parameters += [kernel, biases]
        print_activations(conv5)
    pool5 = tf.nn.max_pool(conv5, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='VALID', name='pool5')
    print_activations(pool5)
    # First fully connected layer: 6*6*256 = 9216 inputs, 4096 outputs
    with tf.name_scope('fcl1') as scope:
        weight = tf.Variable(tf.truncated_normal([6 * 6 * 256, 4096], stddev=0.1), name='weights')
        biases = tf.Variable(tf.constant(0.0, shape=[4096], dtype=tf.float32), trainable=True, name='biases')
        h_pool5_flat = tf.reshape(pool5, [-1, 6 * 6 * 256])
        fcl1 = tf.nn.relu(tf.matmul(h_pool5_flat, weight) + biases, name=scope)
        drop1 = tf.nn.dropout(fcl1, 0.7)  # keep_prob = 0.7
        parameters += [weight, biases]
        print_activations(fcl1)
    # Second fully connected layer: 4096 -> 4096
    with tf.name_scope('fcl2') as scope:
        weight = tf.Variable(tf.truncated_normal([4096, 4096], stddev=0.1), name='weights')
        biases = tf.Variable(tf.constant(0.0, shape=[4096], dtype=tf.float32), trainable=True, name='biases')
        fcl2 = tf.nn.relu(tf.matmul(drop1, weight) + biases, name=scope)
        drop2 = tf.nn.dropout(fcl2, 0.7)  # keep_prob = 0.7
        parameters += [weight, biases]
        print_activations(fcl2)
    # Third fully connected layer: 1000-way output. No ReLU here, so the
    # layer returns raw logits and negative scores are preserved for softmax
    with tf.name_scope('fcl3') as scope:
        weight = tf.Variable(tf.truncated_normal([4096, 1000], stddev=0.1), name='weights')
        biases = tf.Variable(tf.constant(0.0, shape=[1000], dtype=tf.float32), trainable=True, name='biases')
        fcl3 = tf.add(tf.matmul(drop2, weight), biases, name=scope)
        parameters += [weight, biases]
        print_activations(fcl3)
    return fcl3, parameters
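To exercise the network, a minimal driver in the spirit of the book's benchmark (the random images are an assumption standing in for real data):

if __name__ == '__main__':
    with tf.Graph().as_default():
        images = tf.Variable(tf.random_normal([batch_size, 224, 224, 3],
                                              dtype=tf.float32, stddev=1e-1))
        fcl3, parameters = inference(images)
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            sess.run(fcl3)  # one forward pass; print_activations reports each shape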
Adapted from the book 《TensorFlow实战》.