Trying TF + CNN on MNIST, instead of the KNN from Machine Learning in Action (《机器学习实战》).
First, import the packages needed for this experiment:
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn.model_selection import ShuffleSplit
from sklearn.preprocessing import StandardScaler, LabelEncoder, OneHotEncoder
Next, define some variables that will be used later; see the comments for what they mean:
LABELS = 10 # 10 classes of digits
WIDTH = 28 # image width/height
CHANNELS = 1 # grayscale images, so only one channel
VALID = 10000 # validation set size
STEPS = 3500 # number of training steps
BATCH = 100 # SGD batch size
PATCH = 5 # convolution kernel size
DEPTH = 8 #32 # convolution depth == number of kernels
HIDDEN = 100 #1024 # number of hidden units in the fully connected layer
LR = 0.001 # learning rate
Then read the data and do some light preprocessing:
data = pd.read_csv('input/train.csv') # read the csv file into a DataFrame
labels = np.array(data.pop('label')) # remove the label column and return it as an array
labels = LabelEncoder().fit_transform(labels)[:, None]
labels = OneHotEncoder().fit_transform(labels).todense()
data = StandardScaler().fit_transform(np.float32(data.values)) # DataFrame to standardized array
data = data.reshape(-1, WIDTH, WIDTH, CHANNELS) # reshape into 2-D images x channels
train_data, valid_data = data[:-VALID], data[-VALID:]
train_labels, valid_labels = labels[:-VALID], labels[-VALID:]
sklearn.preprocessing.LabelEncoder() normalizes labels, mapping each label value to an integer in range(n_classes), i.e. 0 to n_classes-1. For example:
>> le = preprocessing.LabelEncoder()
>> le.fit(["paris", "paris", "tokyo", "amsterdam"])
LabelEncoder()
>> list(le.classes_)
['amsterdam', 'paris', 'tokyo'] # the three classes map to 0, 1, 2
>> le.transform(["tokyo", "tokyo", "paris"])
array([2, 2, 1]...)
>> list(le.inverse_transform([2, 2, 1])) # the inverse mapping
['tokyo', 'tokyo', 'paris']
One-hot encoding is used because most algorithms work with distances in a vector space: values of a categorical feature that have no ordinal relationship should stay non-ordinal and be equidistant from the origin. One-hot encoding embeds the values of a discrete feature into Euclidean space, so each value corresponds to one point in that space, which makes distance computations more reasonable. After one-hot encoding, every dimension of the encoded feature can be treated as a continuous feature, so it can be normalized the same way continuous features are, e.g. scaled to [-1, 1] or standardized to zero mean and unit variance.
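As a minimal sketch of what the two encoders above do (toy labels invented here, not the actual data, reusing the imports at the top):
toy = np.array([3, 0, 3, 1])                           # hypothetical digit labels
toy_enc = LabelEncoder().fit_transform(toy)[:, None]   # classes {0, 1, 3} -> {0, 1, 2}, as a column vector
toy_onehot = OneHotEncoder().fit_transform(toy_enc).todense()
print(toy_onehot)
# expected (up to float formatting):
# [[0. 0. 1.]
#  [1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]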
Then print the shapes:
print('train data shape = ' + str(train_data.shape) + ' = (TRAIN, WIDTH, WIDTH, CHANNELS)')
print('labels shape = ' + str(labels.shape) + ' = (TRAIN, LABELS)')
Define the placeholders (the "formal parameters") used later:
tf_data = tf.placeholder(tf.float32, shape=(None, WIDTH, WIDTH, CHANNELS))
tf_labels = tf.placeholder(tf.float32, shape=(None, LABELS))
Create the weights:
w1 = tf.Variable(tf.truncated_normal([PATCH, PATCH, CHANNELS, DEPTH], stddev=0.1))
b1 = tf.Variable(tf.zeros([DEPTH]))
w2 = tf.Variable(tf.truncated_normal([PATCH, PATCH, DEPTH, 2*DEPTH], stddev=0.1))
b2 = tf.Variable(tf.constant(1.0, shape=[2*DEPTH]))
w3 = tf.Variable(tf.truncated_normal([WIDTH // 4 * WIDTH // 4 * 2*DEPTH, HIDDEN], stddev=0.1))
b3 = tf.Variable(tf.constant(1.0, shape=[HIDDEN]))
w4 = tf.Variable(tf.truncated_normal([HIDDEN, LABELS], stddev=0.1))
b4 = tf.Variable(tf.constant(1.0, shape=[LABELS]))
tf.truncated_normal(shape, mean, stddev): shape is the dimensions of the generated tensor, mean is the mean and stddev is the standard deviation. The function samples from a truncated normal distribution with the given mean and standard deviation (values more than two standard deviations from the mean are dropped and re-drawn).
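A quick standalone check of that behavior (the samples are random, so this is just a sketch, assuming the TF 1.x API used throughout this post):
sample = tf.truncated_normal([10000], mean=0.0, stddev=0.1)
with tf.Session() as s:
    vals = s.run(sample)
print(vals.mean(), vals.std())          # close to 0 and a little under 0.1
print((np.abs(vals) <= 2 * 0.1).all())  # True: no sample lies beyond two standard deviations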
Now define the convolutional layers:
def logits(data):
    # convolution layer 1
    x = tf.nn.conv2d(data, w1, [1, 1, 1, 1], padding='SAME')
    x = tf.nn.max_pool(x, [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')
    x = tf.nn.relu(x + b1)
    # convolution layer 2
    x = tf.nn.conv2d(x, w2, [1, 1, 1, 1], padding='SAME')
    x = tf.nn.max_pool(x, [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')
    x = tf.nn.relu(x + b2)
    # fully connected layer
    x = tf.reshape(x, (-1, WIDTH // 4 * WIDTH // 4 * 2*DEPTH))
    x = tf.nn.relu(tf.matmul(x, w3) + b3)
    return tf.matmul(x, w4) + b4
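The WIDTH // 4 in the reshape comes from the two 2x2 max-pooling layers: the SAME-padded convolutions keep the 28x28 spatial size, and each pooling halves it, so 28 -> 14 -> 7 = WIDTH // 4. A throwaway shape check (it only adds extra ops to the graph for inspection, assuming the placeholders and weights defined above):
probe = tf.nn.max_pool(tf.nn.conv2d(tf_data, w1, [1, 1, 1, 1], padding='SAME'),
                       [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')
print(probe.get_shape())   # (?, 14, 14, 8)
probe = tf.nn.max_pool(tf.nn.conv2d(probe, w2, [1, 1, 1, 1], padding='SAME'),
                       [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')
print(probe.get_shape())   # (?, 7, 7, 16) -> flattens to 7*7*16 = WIDTH // 4 * WIDTH // 4 * 2*DEPTH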
The prediction, loss, and accuracy ops:
tf_pred = tf.nn.softmax(logits(tf_data))
tf_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits(tf_data),
labels=tf_labels))
tf_acc = 100*tf.reduce_mean(tf.to_float(tf.equal(tf.argmax(tf_pred, 1), tf.argmax(tf_labels, 1))))
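One small note: tf_pred and tf_loss each call logits(tf_data), so the convolution ops are built twice in the graph. Because the weights are shared tf.Variable objects the results are identical, just redundant; an equivalent variant would compute the logits tensor once and reuse it, e.g.:
tf_logits = logits(tf_data)
tf_pred = tf.nn.softmax(tf_logits)
tf_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=tf_logits, labels=tf_labels))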
Below are a few optimizer choices (RMSProp is the one actually used):
#tf_opt = tf.train.GradientDescentOptimizer(LR)
#tf_opt = tf.train.AdamOptimizer(LR)
tf_opt = tf.train.RMSPropOptimizer(LR)
tf_step = tf_opt.minimize(tf_loss)
Initialize the session:
init = tf.global_variables_initializer()
session = tf.Session()
session.run(init)
ShuffleSplit is normally used to generate randomized cross-validation splits; here it is used to draw a random mini-batch of BATCH samples for each training step. get_n_splits returns the number of splitting iterations:
ss = ShuffleSplit(n_splits=STEPS, train_size=BATCH)
ss.get_n_splits(train_data, train_labels)
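A toy sketch (hypothetical 6-element array, nothing to do with the real data) of what ss.split yields; the training loop below only consumes the first index array of each split as a mini-batch:
toy_ss = ShuffleSplit(n_splits=2, train_size=3, test_size=3)
for batch_idx, rest_idx in toy_ss.split(np.arange(6)):
    print(batch_idx, rest_idx)   # e.g. [4 0 5] [1 2 3] -- a fresh random split each iteration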
history = [(0, np.nan, 10)] # history of (step, validation loss, validation accuracy)
for step, (idx, _) in enumerate(ss.split(train_data, train_labels), start=1):
    fd = {tf_data:train_data[idx], tf_labels:train_labels[idx]}
    session.run(tf_step, feed_dict=fd)
    if step%500 == 0:
        fd = {tf_data:valid_data, tf_labels:valid_labels}
        valid_loss, valid_accuracy = session.run([tf_loss, tf_acc], feed_dict=fd)
        history.append((step, valid_loss, valid_accuracy))
        print('Step %i \t Valid. Acc. = %f'%(step, valid_accuracy), end='\n')
steps, loss, acc = zip(*history)
The rest of this part is just plotting:
fig = plt.figure()
plt.title('Validation Loss / Accuracy')
ax_loss = fig.add_subplot(111)
ax_acc = ax_loss.twinx()
plt.xlabel('Training Steps')
plt.xlim(0, max(steps))
ax_loss.plot(steps, loss, '-o', color='C0')
ax_loss.set_ylabel('Log Loss', color='C0');
ax_loss.tick_params('y', colors='C0')
ax_loss.set_ylim(0.01, 0.5)
ax_acc.plot(steps, acc, '-o', color='C1')
ax_acc.set_ylabel('Accuracy [%]', color='C1');
ax_acc.tick_params('y', colors='C1')
ax_acc.set_ylim(1,100)
plt.show()
Finally, predict on the test set and write out the submission file:
test = pd.read_csv('input/test.csv')
test_data = StandardScaler().fit_transform(np.float32(test.values)) # Convert the dataframe to a numpy array
test_data = test_data.reshape(-1, WIDTH, WIDTH, CHANNELS) # Reshape into 2d images of shape (WIDTH, WIDTH, CHANNELS)
test_pred = session.run(tf_pred, feed_dict={tf_data:test_data})
test_labels = np.argmax(test_pred, axis=1)
k = 0
print("Label Prediction: %i"%test_labels[k])
fig = plt.figure(figsize=(2,2)); plt.axis('off')
plt.imshow(test_data[k,:,:,0]); plt.show()
submission = pd.DataFrame(data={'ImageId':(np.arange(test_labels.shape[0])+1), 'Label':test_labels})
submission.to_csv('submission.csv', index=False)
submission.tail()
Been a bit busy lately... to be continued.