上面學(xué)習(xí)了在Window和Linux上安裝keras環(huán)境蕴坪。既然裝了还栓,下面花5分鐘學(xué)習(xí)如何入門使用keras碌廓,很簡(jiǎn)單,不要怕剩盒!就像搭積木一樣簡(jiǎn)單谷婆。
Keras Deep Learning Basics
The main structure in Keras is the model, which defines the layer graph of a deep learning network. You can add more layers to an existing model, just like stacking building blocks, to build the custom model your project needs.
Here is how to create a Sequential model and some commonly used layers in deep learning.
1. Sequential model
from keras.models import Sequential
from keras.layers import Dense, Activation, Conv2D, MaxPooling2D, Flatten, Dropout
model = Sequential()
2. Convolutional layer
Here is an example of a convolutional layer used as the input layer: the input shape is 320x320x3, with 48 filters of size 3x3 and ReLU as the activation function.
input_shape=(320,320,3) #this is the input shape of an image 320x320x3
model.add(Conv2D(48, (3, 3), activation='relu', input_shape= input_shape))
Another form, used for convolutional layers after the first (the input shape is inferred from the previous layer):
model.add(Conv2D(48, (3, 3), activation='relu'))
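As a quick sanity check on the shapes above: with Keras's default 'valid' padding and stride 1, a 3x3 convolution shrinks each spatial dimension by 2, so a 320x320x3 input passed through 48 filters yields a 318x318x48 feature map. A small sketch of the arithmetic:

```python
# Spatial output size of a 'valid' (no padding) convolution:
# out = (in - kernel) // stride + 1
def conv_output_size(in_size, kernel_size, stride=1):
    return (in_size - kernel_size) // stride + 1

side = conv_output_size(320, 3)  # 318
print((side, side, 48))          # feature-map shape after Conv2D(48, (3, 3))
```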
3. MaxPooling layer
To downsample the input representation, use MaxPooling2D and specify the pool size:
model.add(MaxPooling2D(pool_size=(2, 2)))
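To see what the pooling step does, here is a pure-NumPy sketch (not the Keras implementation) of 2x2 max pooling with stride 2 on a toy 4x4 feature map; each 2x2 block collapses to its maximum, halving both spatial dimensions:

```python
import numpy as np

# A 4x4 single-channel feature map (made-up values for illustration).
feature_map = np.array([
    [1, 3, 2, 4],
    [5, 6, 1, 2],
    [7, 2, 9, 1],
    [3, 4, 5, 6],
], dtype=float)

def max_pool_2x2(x):
    """2x2 max pooling with stride 2, as in MaxPooling2D(pool_size=(2, 2))."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

pooled = max_pool_2x2(feature_map)
print(pooled)  # [[6. 4.]
               #  [7. 9.]]
```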
4. Dense layer
To add a fully connected layer, just specify the output dimension:
model.add(Dense(256, activation='relu'))
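Under the hood a dense layer is just a matrix multiply plus a bias, followed by the activation. A NumPy sketch with hypothetical sizes (128 inputs mapped to 256 outputs, matching Dense(256)):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration: 128 inputs -> 256 outputs.
W = rng.standard_normal((128, 256)) * 0.05  # weight matrix
b = np.zeros(256)                           # bias vector
x = rng.standard_normal(128)                # one input sample

# A dense layer computes activation(x @ W + b); here the activation is ReLU.
out = np.maximum(x @ W + b, 0.0)
print(out.shape)  # (256,)
```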
5. Dropout layer
Add a Dropout layer with a 50% drop rate:
model.add(Dropout(0.5))
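For intuition, here is a pure-NumPy sketch (not the Keras implementation) of inverted dropout, the training-time behavior behind Dropout(0.5): each unit is zeroed with the given probability, and the survivors are scaled up so the expected activation stays the same.

```python
import numpy as np

rng = np.random.default_rng(0)
activations = np.ones(10)  # pretend these came from the previous layer

def dropout(x, rate=0.5):
    """Zero each unit with probability `rate`; scale survivors by
    1/(1-rate) so the expected activation is unchanged."""
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

out = dropout(activations, rate=0.5)
print(out)  # roughly half the entries are 0.0, the rest are 2.0
```

At inference time dropout is disabled, which is why the rescaling is done during training.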
Compile, Train, and Evaluate
After defining the model, we can start training it. First, compile the network with a loss function and an optimizer; this allows the network to update its weights and minimize the loss.
model.compile(loss='mean_squared_error', optimizer='adam')
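For intuition, 'mean_squared_error' is just the average of the squared differences between predictions and targets; the 'adam' optimizer adjusts the weights to drive this number down. A tiny sketch with made-up values:

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0])   # made-up targets
y_pred = np.array([1.5, 2.0, 2.0])   # made-up predictions

# mean_squared_error = mean((y_true - y_pred)^2)
mse = np.mean((y_true - y_pred) ** 2)
print(mse)  # (0.25 + 0.0 + 1.0) / 3 ≈ 0.4167
```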
Now start training: use fit to feed the training and validation data to the model. This lets you train the network in batches and set the number of epochs.
model.fit(X_train, y_train, batch_size=32, epochs=10, validation_data=(x_val, y_val))
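With batch_size=32 and epochs=10, the data is processed in chunks of 32 samples, and each full pass over the training set counts as one epoch. Assuming, say, 200 training samples (a made-up figure for illustration), the number of weight updates works out as:

```python
import math

n_samples, batch_size, epochs = 200, 32, 10  # 200 samples is an assumed figure

steps_per_epoch = math.ceil(n_samples / batch_size)  # last batch may be smaller
total_updates = steps_per_epoch * epochs
print(steps_per_epoch, total_updates)  # 7 70
```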
The last step is to evaluate the model with the test data:
score = model.evaluate(x_test, y_test, batch_size=32)
Let's try it out with a simple linear regression:
import keras
from keras.models import Sequential
from keras.layers import Dense, Activation
import numpy as np
import matplotlib.pyplot as plt
x = data = np.linspace(1,2,200)
y = x * 4 + np.random.randn(*x.shape) * 0.3
model = Sequential()
model.add(Dense(1, input_dim=1, activation='linear'))
model.compile(optimizer='sgd', loss='mse', metrics=['mse'])
weights = model.layers[0].get_weights()
w_init = weights[0][0][0]
b_init = weights[1][0]
print('Linear regression model is initialized with weights w: %.2f, b: %.2f' % (w_init, b_init))
model.fit(x,y, batch_size=1, epochs=30, shuffle=False)
weights = model.layers[0].get_weights()
w_final = weights[0][0][0]
b_final = weights[1][0]
print('Linear regression model is trained to have weight w: %.2f, b: %.2f' % (w_final, b_final))
predict = model.predict(data)
plt.plot(data, predict, 'b', data , y, 'k.')
plt.show()
After training, the output should look like the following.
The initial weights:
Linear regression model is initialized with weights w: 0.37, b: 0.00
And the final weights:
Linear regression model is trained to have weight w: 3.70, b: 0.61
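The Keras model above is a single weight and bias trained with SGD on MSE. The same fit can be sketched in plain NumPy (a simplification: fixed learning rate, full-batch gradients, made-up seed) to show what the network is doing under the hood:

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(1, 2, 200)
y = x * 4 + rng.standard_normal(x.shape) * 0.3  # same data-generating process

# One weight, one bias, trained by gradient descent on the MSE loss.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(5000):
    y_hat = w * x + b
    grad_w = 2 * np.mean((y_hat - y) * x)  # dMSE/dw
    grad_b = 2 * np.mean(y_hat - y)        # dMSE/db
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # w lands near 4 and b near 0, up to the injected noise
```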