TensorFlow in Practice: Training a Neural Network Classifier

Task:

Train a neural network classifier with TensorFlow. The data points to classify are shown below:


[Figure: spiral data points]

Approach:

There are three classes of points, and they spiral around one another, so the data is clearly not linearly separable; a non-linear classifier is needed. Here we use a neural network.
The input points are two-dimensional, so each point has only its x and y coordinates as raw features. The network designed here has two hidden layers of 50 neurons each, which is plenty to capture the higher-level structure of the data (in fact 10 per layer would suffice). The output layer is a softmax (logistic) regression that predicts each point's class (red, yellow, or blue) from the 50 features computed by the last hidden layer.
With a large training set you would normally train the network with stochastic (mini-batch) gradient descent; since the training set here is small (300 points), we simply use full-batch gradient descent. A sketch of the mini-batch variant follows below.
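For reference, here is a minimal sketch of what the mini-batch variant could look like in the same graph-mode TensorFlow 1.x style. The placeholder names (x_ph, labels_ph) and the batch size are illustrative assumptions, not part of the original code:

import numpy as np
import tensorflow as tf

batch_size = 32  # illustrative choice

# Placeholders let a different mini-batch be fed at every step,
# instead of baking the whole training set into the graph as a constant.
x_ph = tf.placeholder(tf.float32, shape=(None, 2))
labels_ph = tf.placeholder(tf.float32, shape=(None, 3))
# ... build the same network on x_ph instead of tf.constant(X) ...

# Inside the training loop, sample a random mini-batch and feed it:
# idx = np.random.choice(num_train_examples, batch_size)
# session.run(optimizer, feed_dict={x_ph: X[idx], labels_ph: labels[idx]})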

# Imports and initialization
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# Generate spiral-shaped, linearly non-separable data points
np.random.seed(0)
N = 100 # number of points per class
D = 2 # input dimensionality
K = 3 # number of classes
X = np.zeros((N*K,D))
num_train_examples = X.shape[0]
y = np.zeros(N*K, dtype='uint8')
for j in range(K):  # use range: xrange is Python 2 only, and the print() calls below assume Python 3
  ix = range(N*j,N*(j+1))
  r = np.linspace(0.0,1,N) # radius
  t = np.linspace(j*4,(j+1)*4,N) + np.random.randn(N)*0.2 # theta
  X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
  y[ix] = j
fig = plt.figure()
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.xlim([-1,1])
plt.ylim([-1,1])
[Figure: spiral data points]
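Each class j traces one arm of the spiral: the radius r grows linearly from 0 to 1 while the angle t sweeps through [4j, 4(j+1)] radians with a little Gaussian noise, so the three arms interleave as shown above.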

Print the shapes of the input X and of the one-hot labels:

num_label = 3
labels = (np.arange(num_label) == y[:,None]).astype(np.float32)  # one-hot encode y by broadcasting
labels.shape
(300, 3)
X.shape
(300, 2)
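The broadcasting one-liner above builds the whole one-hot matrix in a single step. A tiny self-contained example of what it produces:

import numpy as np

y_small = np.array([0, 2, 1], dtype='uint8')
onehot = (np.arange(3) == y_small[:, None]).astype(np.float32)
# onehot is now:
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]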

Build the neural network with TensorFlow

import math

N = 100 # number of points per class
D = 2 # input dimensionality
num_label = 3 # number of classes
num_data = N * num_label
hidden_size_1 = 50
hidden_size_2 = 50

beta = 0.001 # L2 regularization strength
learning_rate = 0.1 # learning rate

labels = (np.arange(num_label) == y[:,None]).astype(np.float32)

graph = tf.Graph()
with graph.as_default():
    x = tf.constant(X.astype(np.float32))
    tf_labels = tf.constant(labels)
    
    # Hidden layer 1
    hidden_layer_weights_1 = tf.Variable(
        tf.truncated_normal([D, hidden_size_1], stddev=math.sqrt(2.0/num_data)))
    hidden_layer_bias_1 = tf.Variable(tf.zeros([hidden_size_1]))
    
    # Hidden layer 2
    hidden_layer_weights_2 = tf.Variable(
        tf.truncated_normal([hidden_size_1, hidden_size_2], stddev=math.sqrt(2.0/hidden_size_1)))
    hidden_layer_bias_2 = tf.Variable(tf.zeros([hidden_size_2]))
    
    # Output layer
    out_weights = tf.Variable(
        tf.truncated_normal([hidden_size_2, num_label], stddev=math.sqrt(2.0/hidden_size_2)))
    out_bias = tf.Variable(tf.zeros([num_label]))
    
    # Forward pass: two ReLU hidden layers, then linear logits
    z1 = tf.matmul(x, hidden_layer_weights_1) + hidden_layer_bias_1
    h1 = tf.nn.relu(z1)
    
    z2 = tf.matmul(h1, hidden_layer_weights_2) + hidden_layer_bias_2
    h2 = tf.nn.relu(z2)
    
    logits = tf.matmul(h2, out_weights) + out_bias
    
    # L2 regularization (tf.nn.l2_loss(w) computes sum(w**2) / 2)
    regularization = tf.nn.l2_loss(hidden_layer_weights_1) + tf.nn.l2_loss(hidden_layer_weights_2) + tf.nn.l2_loss(out_weights)
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=tf_labels, logits=logits) + beta * regularization)
    
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
    
    train_prediction = tf.nn.softmax(logits)

    # Keep the variables so the learned parameters can be read out after training
    weights = [hidden_layer_weights_1, hidden_layer_bias_1, hidden_layer_weights_2, hidden_layer_bias_2, out_weights, out_bias]

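A note on the loss: tf.nn.softmax_cross_entropy_with_logits returns one cross-entropy value per example, so the expression above evaluates to mean(cross_entropy) + beta * regularization; the scalar penalty is broadcast across the 300 examples and passes through the mean unchanged.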
The previous step only built the skeleton of the network; it still has to be trained. Every 1000 steps we print the cross-entropy loss and the training accuracy.

num_steps = 50000

def accuracy(predictions, labels):
    # Percentage of examples whose predicted class matches the one-hot label
    return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
            / predictions.shape[0])

def relu(x):
    # NumPy ReLU, used below to replay the trained network on a grid of points
    return np.maximum(0, x)

with tf.Session(graph=graph) as session:
    tf.global_variables_initializer().run()
    print('Initialized')
    for step in range(num_steps):
        _, l, predictions = session.run([optimizer, loss, train_prediction])
    
        if step % 1000 == 0:
            print('Loss at step %d: %f' % (step, l))
            print('Training accuracy: %.1f%%' % accuracy(predictions, labels))
        
    w1, b1, w2, b2, w3, b3 = weights
    # Visualize the classifier: evaluate the trained network on a dense grid
    h = 0.02
    x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
    y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                         np.arange(y_min, y_max, h))

    # Replay the forward pass in NumPy with the learned parameters
    grid = np.c_[xx.ravel(), yy.ravel()]
    h1 = relu(np.dot(grid, w1.eval()) + b1.eval())
    h2 = relu(np.dot(h1, w2.eval()) + b2.eval())
    Z = np.dot(h2, w3.eval()) + b3.eval()
    Z = np.argmax(Z, axis=1)
    Z = Z.reshape(xx.shape)
    fig = plt.figure()
    plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.8)
    plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
    plt.xlim(xx.min(), xx.max())
    plt.ylim(yy.min(), yy.max())

Initialized
Loss at step 0: 1.132545
Training accuracy: 43.7%
Loss at step 1000: 0.257016
Training accuracy: 94.0%
Loss at step 2000: 0.165511
Training accuracy: 98.0%
Loss at step 3000: 0.149266
Training accuracy: 99.0%
Loss at step 4000: 0.142311
Training accuracy: 99.3%
Loss at step 5000: 0.137762
Training accuracy: 99.3%
Loss at step 6000: 0.134356
Training accuracy: 99.3%
Loss at step 7000: 0.131588
Training accuracy: 99.3%
Loss at step 8000: 0.129299
Training accuracy: 99.3%
Loss at step 9000: 0.127340
Training accuracy: 99.3%
Loss at step 10000: 0.125686
Training accuracy: 99.3%
Loss at step 11000: 0.124293
Training accuracy: 99.3%
Loss at step 12000: 0.123130
Training accuracy: 99.3%
Loss at step 13000: 0.122149
Training accuracy: 99.3%
Loss at step 14000: 0.121309
Training accuracy: 99.3%
Loss at step 15000: 0.120542
Training accuracy: 99.3%
Loss at step 16000: 0.119895
Training accuracy: 99.3%
Loss at step 17000: 0.119335
Training accuracy: 99.3%
Loss at step 18000: 0.118836
Training accuracy: 99.3%
Loss at step 19000: 0.118376
Training accuracy: 99.3%
Loss at step 20000: 0.117974
Training accuracy: 99.3%
Loss at step 21000: 0.117601
Training accuracy: 99.3%
Loss at step 22000: 0.117253
Training accuracy: 99.3%
Loss at step 23000: 0.116887
Training accuracy: 99.3%
Loss at step 24000: 0.116561
Training accuracy: 99.3%
Loss at step 25000: 0.116265
Training accuracy: 99.3%
Loss at step 26000: 0.115995
Training accuracy: 99.3%
Loss at step 27000: 0.115750
Training accuracy: 99.3%
Loss at step 28000: 0.115521
Training accuracy: 99.3%
Loss at step 29000: 0.115310
Training accuracy: 99.3%
Loss at step 30000: 0.115111
Training accuracy: 99.3%
Loss at step 31000: 0.114922
Training accuracy: 99.3%
Loss at step 32000: 0.114743
Training accuracy: 99.3%
Loss at step 33000: 0.114567
Training accuracy: 99.3%
Loss at step 34000: 0.114401
Training accuracy: 99.3%
Loss at step 35000: 0.114242
Training accuracy: 99.3%
Loss at step 36000: 0.114086
Training accuracy: 99.3%
Loss at step 37000: 0.113933
Training accuracy: 99.3%
Loss at step 38000: 0.113785
Training accuracy: 99.3%
Loss at step 39000: 0.113644
Training accuracy: 99.3%
Loss at step 40000: 0.113504
Training accuracy: 99.3%
Loss at step 41000: 0.113366
Training accuracy: 99.3%
Loss at step 42000: 0.113229
Training accuracy: 99.3%
Loss at step 43000: 0.113096
Training accuracy: 99.3%
Loss at step 44000: 0.112966
Training accuracy: 99.3%
Loss at step 45000: 0.112838
Training accuracy: 99.3%
Loss at step 46000: 0.112711
Training accuracy: 99.3%
Loss at step 47000: 0.112590
Training accuracy: 99.3%
Loss at step 48000: 0.112472
Training accuracy: 99.3%
Loss at step 49000: 0.112358
Training accuracy: 99.3%
[Figure: decision regions learned by the classifier]
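One caveat: all 300 points are used for training, so the 99.3% reported above is training accuracy. On a real task you would hold out a validation split before training; a hypothetical sketch (the 80/20 split and variable names are illustrative, not from the original):

perm = np.random.permutation(num_train_examples)
split = int(0.8 * num_train_examples)
train_idx, val_idx = perm[:split], perm[split:]
X_train, labels_train = X[train_idx], labels[train_idx]
X_val, labels_val = X[val_idx], labels[val_idx]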