Copyright notice: this blogger's skills are limited — suggestions and corrections are welcome.
1、Note!!
Keras runs on top of a backend — generally TensorFlow, Theano, or CNTK. Following the mainstream choice, TensorFlow is recommended as the Keras backend.
So install TensorFlow first, then install Keras inside the TensorFlow virtual environment.
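For reference, the active backend is recorded in a `keras.json` config file (typically under `~/.keras/`). Here is a minimal sketch of what that file looks like and how its `"backend"` key can be read — the sample contents below follow the standard keys, not a file from my machine:

```python
import json

# A typical ~/.keras/keras.json; the "backend" key selects the backend.
sample = """{
    "floatx": "float32",
    "epsilon": 1e-07,
    "backend": "tensorflow",
    "image_data_format": "channels_last"
}"""

config = json.loads(sample)
print(config["backend"])  # tensorflow
```

Editing that one key (or setting the `KERAS_BACKEND` environment variable) is how Keras is pointed at a different backend.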
2键俱、參考:
1兰绣、anaconda 下安裝tensorflow & keras - qy13913453196的博客 - CSDN博客
https://blog.csdn.net/qy13913453196/article/details/82589792?blog.csdn.net
2、Setting up a Keras environment with Windows + Anaconda - qq_22885109's blog - CSDN blog
https://blog.csdn.net/qq_22885109/article/details/80995134
3编振、安裝教程
【1】安裝tensorflow
怎么安裝tensorflow缀辩?我寫(xiě)了專(zhuān)門(mén)的博客:
BG大龍:【TensorFlow】用Anaconda安裝tensorflow,并在IDE(VScode)運(yùn)行?zhuanlan.zhihu.com
【2】Activate the TensorFlow virtual environment and install Keras inside it
Activate the environment — in cmd, type: conda activate tensorflow_env
Then type: pip install keras
The installation messages appear. Since the Tsinghua mirror channel went offline in May 2019, packages are downloaded from servers overseas, so be patient while the progress bar crawls along…
You may run into an error like this — don't worry, just rerun pip install keras.
Once you see this, the installation succeeded.
【3】Verify from the command line
Activate the environment — in cmd, type: conda activate tensorflow_env
Then type: python
Then type: import keras
If "Using TensorFlow backend" appears, it worked.
【4】Verify in a Jupyter notebook running on the tensorflow environment
Type: import keras
If "Using TensorFlow backend" appears as below, it worked.
【5】Run a real example to verify
(1) Here is the official link — the Keras Chinese documentation:
Keras: a deep-learning library for Python - Keras Chinese documentation (keras-cn.readthedocs.io)
(2) The example comes from:
Sequential model - Keras Chinese documentation
Binary-classification MLP code:
One detail from my own testing: with VSCode as my Python IDE I wrote activation="relu" with double quotes, while with PyCharm I wrote activation='relu' with single quotes. Note, though, that Python itself treats single and double quotes identically — any complaints about quote style come from the IDE's formatter or linter settings, not from the language.
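A quick check that the quote style makes no difference to Python itself:

```python
# Single- and double-quoted literals produce identical string values;
# only IDE formatters/linters express a preference between them.
a = "relu"
b = 'relu'
print(a == b)  # True
```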
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout

# Generate dummy data
x_train = np.random.random((1000, 20))
y_train = np.random.randint(2, size=(1000, 1))
x_test = np.random.random((100, 20))
y_test = np.random.randint(2, size=(100, 1))

model = Sequential()
model.add(Dense(64, input_dim=20, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(64, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(1, activation="sigmoid"))

model.compile(loss="binary_crossentropy",
              optimizer="rmsprop",
              metrics=["accuracy"])
model.fit(x_train, y_train,
          epochs=20,
          batch_size=128)
score = model.evaluate(x_test, y_test, batch_size=128)
Output:
Epoch 1/20
1000/1000 [==============================] - 1s 882us/step - loss: 0.7117 - acc: 0.5040
Epoch 2/20
1000/1000 [==============================] - 0s 39us/step - loss: 0.7049 - acc: 0.5020
Epoch 3/20
1000/1000 [==============================] - 0s 43us/step - loss: 0.7016 - acc: 0.5000
Epoch 4/20
1000/1000 [==============================] - 0s 39us/step - loss: 0.7031 - acc: 0.5260
Epoch 5/20
1000/1000 [==============================] - ETA: 0s - loss: 0.7046 - acc: 0.515 - 0s 41us/step - loss: 0.7024 - acc: 0.4930
Epoch 6/20
1000/1000 [==============================] - 0s 52us/step - loss: 0.6999 - acc: 0.5040
Epoch 7/20
1000/1000 [==============================] - 0s 47us/step - loss: 0.6974 - acc: 0.5150
Epoch 8/20
1000/1000 [==============================] - 0s 40us/step - loss: 0.6937 - acc: 0.5250
Epoch 9/20
1000/1000 [==============================] - 0s 39us/step - loss: 0.6912 - acc: 0.5260
Epoch 10/20
1000/1000 [==============================] - 0s 37us/step - loss: 0.6891 - acc: 0.5260
Epoch 11/20
1000/1000 [==============================] - 0s 41us/step - loss: 0.6919 - acc: 0.5210
Epoch 12/20
1000/1000 [==============================] - 0s 43us/step - loss: 0.6926 - acc: 0.5190
Epoch 13/20
1000/1000 [==============================] - 0s 44us/step - loss: 0.6897 - acc: 0.5350
Epoch 14/20
1000/1000 [==============================] - 0s 41us/step - loss: 0.6940 - acc: 0.5140
Epoch 15/20
1000/1000 [==============================] - 0s 44us/step - loss: 0.6928 - acc: 0.5300
Epoch 16/20
1000/1000 [==============================] - 0s 56us/step - loss: 0.6925 - acc: 0.5360
Epoch 17/20
1000/1000 [==============================] - 0s 50us/step - loss: 0.6906 - acc: 0.5400
Epoch 18/20
1000/1000 [==============================] - 0s 44us/step - loss: 0.6882 - acc: 0.5330
Epoch 19/20
1000/1000 [==============================] - 0s 37us/step - loss: 0.6923 - acc: 0.5420
Epoch 20/20
1000/1000 [==============================] - 0s 40us/step - loss: 0.6893 - acc: 0.5280
100/100 [==============================] - 0s 10us/step
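Don't be alarmed that the loss hovers around 0.69 and accuracy around 0.5 — the labels here are random, so that is exactly chance level: the binary cross-entropy of an uninformative prediction p = 0.5 is ln 2 ≈ 0.693. A quick NumPy check (my own sanity check, not part of the original example):

```python
import numpy as np

# Binary cross-entropy -(y*log(p) + (1-y)*log(1-p)) with p = 0.5
# equals ln 2 for either label value, so ~0.693 is chance level.
p = 0.5
bce_y0 = -np.log(1 - p)   # loss when the true label is 0
bce_y1 = -np.log(p)       # loss when the true label is 1
print(round(float(bce_y1), 4))  # 0.6931
```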
Good luck with your studies…