Autoencoders and Multi-Layer Perceptrons
整個(gè)神經(jīng)網(wǎng)絡(luò)的流程:
定義算法公式,也就是神經(jīng)網(wǎng)絡(luò)的forward時(shí)的計(jì)算
定義loss,選定優(yōu)化器,并指定優(yōu)化器優(yōu)化loss
迭代地對(duì)數(shù)據(jù)進(jìn)行訓(xùn)練
在測(cè)試集或驗(yàn)證集上對(duì)準(zhǔn)確率進(jìn)行評(píng)測(cè)
1.1 自編碼簡(jiǎn)介
Sparse Coding found that image patches can be assembled from combinations of 64 orthogonal edges; audio likewise has basic structures that combine linearly.
原本通過標(biāo)注的數(shù)據(jù)负芋,我們可以訓(xùn)練一個(gè)深層的神經(jīng)網(wǎng)絡(luò)漫蛔,現(xiàn)在對(duì)于沒有標(biāo)注的數(shù)據(jù),我們可以用無(wú)監(jiān)督的自編碼器來提取特征旧蛾。自編碼器(AutoEncoder)可以使用自身高階特征編碼自己莽龟。
自編碼器也是一種神經(jīng)網(wǎng)絡(luò),它有兩個(gè)明顯特征:1.期望輸入/輸出一致锨天;2.希望使用高階特征來重構(gòu)自己毯盈,而不是復(fù)制像素點(diǎn)。
無(wú)監(jiān)督的逐層訓(xùn)練:1病袄。如果限制中間隱含層的數(shù)量搂赋,這樣就只能學(xué)習(xí)數(shù)據(jù)中最重要的特征然后復(fù)原赘阀。如果給中間的隱藏層加一個(gè)L1的正則,嘖可以根據(jù)懲罰系數(shù)來調(diào)整學(xué)到特征組合的稀疏程度脑奠。
2基公。如果給數(shù)據(jù)加入噪聲,就變成了去噪自編碼器(Denoising AutoEncoder)捺信,完全復(fù)制是不能去除噪聲的酌媒,只有學(xué)習(xí)數(shù)據(jù)頻繁出現(xiàn)的模式和結(jié)構(gòu),將無(wú)律的噪聲略去迄靠,才能復(fù)原數(shù)據(jù)秒咨。
The most common choice for a denoising autoencoder is Additive Gaussian Noise (AGN); random Masking Noise, which zeroes out parts of the input, can also be used. A small sketch of both noise types follows below.
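A toy NumPy illustration of the two noise types (not part of the model code below; noise_scale and mask_prob are illustrative parameters):

import numpy as np

x = np.random.rand(4, 8)                          # a toy batch of data

# Additive Gaussian noise (AGN): add zero-mean Gaussian perturbations
noise_scale = 0.1
x_agn = x + noise_scale * np.random.normal(size=x.shape)

# Masking noise: randomly zero out a fraction of the inputs
mask_prob = 0.25
x_masked = x * (np.random.rand(*x.shape) >= mask_prob)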
If the autoencoder has only one hidden layer, it works much like Principal Component Analysis (PCA). The DBN model has multiple hidden layers, each of which is a Restricted Boltzmann Machine (RBM).
1.2 實(shí)現(xiàn)自編碼器
Related variants: Variational AutoEncoder (VAE), Stochastic Gradient Variational Bayes (SGVB)
Code:
mnistAutoEncoder.py
import numpy as np
import sklearn.preprocessing as prep
# sklearn.preprocessing provides data-preprocessing utilities, including standardization
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Xavier initialization: draw weights uniformly from a range scaled by fan-in and fan-out
def xavier_init(fan_in, fan_out, constant=1):
    low = -constant * np.sqrt(6.0 / (fan_in + fan_out))
    high = constant * np.sqrt(6.0 / (fan_in + fan_out))
    return tf.random_uniform((fan_in, fan_out),
                             minval=low, maxval=high,
                             dtype=tf.float32)
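# Why these bounds: a Uniform(-a, a) variable has variance a^2/3, so with
# a = sqrt(6 / (fan_in + fan_out)) the weights get variance 2 / (fan_in + fan_out),
# which keeps the scale of activations and gradients roughly constant across layers.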
class AdditiveGaussianNoiseAutoencoder(object):
    def __init__(self, n_input, n_hidden, transfer_function=tf.nn.softplus,
                 optimizer=tf.train.AdamOptimizer(), scale=0.1):
        self.n_input = n_input                    # number of input variables
        self.n_hidden = n_hidden                  # number of hidden-layer nodes
        self.transfer = transfer_function         # hidden-layer activation function
        self.scale = tf.placeholder(tf.float32)   # placeholder for the Gaussian noise scale
        self.training_scale = scale               # noise scale used during training
        network_weights = self._initialize_weights()
        self.weights = network_weights
        # Define the network: create a placeholder of n_input dimensions, add noise
        # to the input x, then compute x*w1 + b1 and apply the transfer (activation) function
        self.x = tf.placeholder(tf.float32, [None, self.n_input])
        self.hidden = self.transfer(tf.add(tf.matmul(
            self.x + self.scale * tf.random_normal((n_input,)),
            self.weights['w1']), self.weights['b1']))
        # The output layer restores the data (i.e., the reconstruction): hidden*w2 + b2
        self.reconstruction = tf.add(tf.matmul(self.hidden,
                                               self.weights['w2']), self.weights['b2'])
        # Define the autoencoder's loss function:
        # squared-error cost, minimized by the optimizer (Adam by default)
        self.cost = 0.5 * tf.reduce_sum(tf.pow(tf.subtract(
            self.reconstruction, self.x), 2.0))
        self.optimizer = optimizer.minimize(self.cost)
        # Initialize the model
        init = tf.global_variables_initializer()
        self.sess = tf.Session()
        self.sess.run(init)
# 創(chuàng)建初始化函數(shù)
def _initialize_weights(self):
# 把w1,b1,w2,b2存入all_weights,w1要用到xavier初始化冤今,后三個(gè)變量使用tf.zeros置0
all_weights = dict()
all_weights['w1'] = tf.Variable(xavier_init(self.n_input,
self.n_hidden))
all_weights['b1'] = tf.Variable(tf.zeros([self.n_hidden],
dtype=tf.float32))
all_weights['w2'] = tf.Variable(tf.zeros([self.n_hidden,
self.n_input], dtype=tf.float32))
all_weights['b2'] = tf.Variable(tf.zeros([self.n_input],
dtype=tf.float32))
return all_weights
# 計(jì)算損失cost以及執(zhí)行一步訓(xùn)練的函數(shù)partial_fit
# feed_dict輸入了數(shù)據(jù)x和噪聲系數(shù)sacle
def partial_fit(self, X):
cost, opt = self.sess.run((self.cost, self.optimizer),
feed_dict={self.x: X, self.scale:self.training_scale})
return cost
# 只求損失cost的函數(shù),評(píng)測(cè)性能會(huì)用到
def calc_total_cost(self, X):
return self.sess.run(self.cost, feed_dict={self.x: X,
self.scale: self.training_scale})
# 提供一個(gè)接口獲取抽象后的特征
def transform(self, X):
return self.sess.run(self.hidden, feed_dict={self.x: X,
self.scale: self.training_scale
})
# 將高階特征復(fù)原為原始數(shù)據(jù)
def generate(self, hidden = None):
if hidden is None:
hidden = np.random.normal(size=self.weights["b1"])
return self.sess.run(self.reconstruction,
feed_dict={self.hidden: hidden})
# 整體運(yùn)行一遍復(fù)原過程,包括提取高階特征和通過高階特征復(fù)原數(shù)據(jù)
def reconstruct(self, X):
return self.sess.run(self.reconstruction, feed_dict={self.x: X,
self.scale: self.training_scale
})
# 獲取隱含層權(quán)重w1
def getWeights(self):
return self.sess.run(self.weights['w1'])
# 獲取隱含層偏置系數(shù)b1
def getBiases(self):
return self.sess.run(self.weights['b1'])
# Use the AGN autoencoder defined above
mnist = input_data.read_data_sets('MNIST-data', one_hot=True)

# Standardization: transform the data to zero mean and unit standard deviation
def standard_scale(X_train, X_test):
    preprocessor = prep.StandardScaler().fit(X_train)
    X_train = preprocessor.transform(X_train)
    X_test = preprocessor.transform(X_test)
    return X_train, X_test
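# Note: the StandardScaler is fit on the training images only and then applied
# to both splits, so no test-set statistics leak into preprocessing.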
# Random-block sampler: pick a random integer in [0, len(data) - batch_size)
# and take a contiguous batch starting there
def get_random_block_from_data(data, batch_size_1):
    start_index = np.random.randint(0, len(data) - batch_size_1)
    return data[start_index:(start_index + batch_size_1)]
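# Note: blocks are drawn independently on each call, so within one 'epoch' below
# some samples may repeat and others be skipped; it is only an approximate pass
# over the training set.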
X_train, X_test = standard_scale(mnist.train.images, mnist.test.images)
n_samples = int(mnist.train.num_examples)
training_epochs = 40
batch_size = 128
display_step = 1

# Create an AGN autoencoder instance
autoencoder = AdditiveGaussianNoiseAutoencoder(n_input=784,
                                               n_hidden=200,
                                               transfer_function=tf.nn.softplus,
                                               optimizer=tf.train.AdamOptimizer(learning_rate=0.001),
                                               scale=0.01)
# Training loop
for epoch in range(training_epochs):
    avg_cost = 0.
    total_batch = int(n_samples / batch_size)
    for i in range(total_batch):
        batch_xs = get_random_block_from_data(X_train, batch_size)
        cost = autoencoder.partial_fit(batch_xs)
        avg_cost += cost / n_samples * batch_size
    if epoch % display_step == 0:
        print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost))
print("Total cost: " + str(autoencoder.calc_total_cost(X_test)))
2.1 Multi-layer perceptron (MLP) neural networks, also known as fully connected networks (FCN)
Ultimately, the model reaches about 98% accuracy on the test set, achieved simply by adding one hidden layer. A few tricks help along the way, such as Dropout, Adagrad, and ReLU, but the decisive factor is the hidden layer itself, which can abstract and transform features.
Compared with Softmax Regression, which can only infer the digit from raw pixel values, an MLP can combine neural layers into higher-level features, such as horizontal strokes, vertical strokes, and circles.
mnistMLP.py
# Multi-layer perceptron (MLP) neural network
# Create a default TensorFlow InteractiveSession so later calls need not specify a Session
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
mnist = input_data.read_data_sets("MNIST-data", one_hot=True)
sess = tf.InteractiveSession()
# ### Step 1: define the model (forward computation) ####
in_units = 784   # number of input nodes
h1_units = 300   # number of hidden-layer nodes
# W1 and b1 are the hidden layer's weights and biases; the biases are set to zero
# and the weights are drawn from a truncated normal distribution with stddev 0.1,
# via tf.truncated_normal
W1 = tf.Variable(tf.truncated_normal([in_units, h1_units], stddev=0.1))
b1 = tf.Variable(tf.zeros([h1_units]))
# Because the model uses the ReLU activation function, the weights need some
# normally-distributed noise to break complete symmetry and avoid zero gradients.
# Other models may also need small nonzero biases to avoid dead neurons.
# The output layer is Softmax, so W2 and b2 are simply initialized to zero
W2 = tf.Variable(tf.zeros([h1_units, 10]))
b2 = tf.Variable(tf.zeros([10]))
# The Dropout keep probability keep_prob differs between phases:
# usually less than 1 during training and equal to 1 at prediction time
x = tf.placeholder(tf.float32, [None, in_units])
keep_prob = tf.placeholder(tf.float32)
# First a ReLU hidden layer, then Dropout; keep_prob is the fraction of units kept.
# At prediction time it should be 1, so all features are used to classify the sample
hidden1 = tf.nn.relu(tf.matmul(x, W1) + b1)
hidden1_drop = tf.nn.dropout(hidden1, keep_prob)
y = tf.nn.softmax(tf.matmul(hidden1_drop, W2) + b2)
# ### Step 2: define the loss and choose the optimizer ####
# Cross-entropy loss, with AdagradOptimizer at learning rate 0.3
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y),
                                              reduction_indices=[1]))
train_step = tf.train.AdagradOptimizer(0.3).minimize(cross_entropy)
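# Note: computing softmax and then -sum(y_ * tf.log(y)) by hand can hit log(0)
# when a predicted probability underflows to zero; applying
# tf.nn.softmax_cross_entropy_with_logits to the pre-softmax logits is the
# numerically safer formulation.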
# ### Step 3: training ####
# Feed the data, with keep_prob set to 0.75
tf.global_variables_initializer().run()
for i in range(10000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    train_step.run({x: batch_xs, y_: batch_ys, keep_prob: 0.75})
    if i % 1000 == 0:
        correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
        print(i, accuracy.eval({x: batch_xs, y_: batch_ys, keep_prob: 1.0}))
# ### Step 4: evaluate accuracy on the test or validation set ####
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval({x: mnist.test.images, y_: mnist.test.labels,
                     keep_prob: 1.0}))