Convolutional neural networks (CNN), long short-term memory networks (LSTM), and gated recurrent unit networks (GRU) are among the most commonly used algorithms for prediction and regression, and they appear frequently in Kaggle competitions. In this post we give a simple tutorial on how to use these networks to forecast a time series. Because it is only meant as a simple tutorial, the number of layers and the number of neurons per layer in this example have not been tuned to their optimum; you can adjust the network structure and the specific parameters yourself for your own dataset.
1. Environment setup
Our setup is to download Anaconda, install Keras inside it, and then open Spyder to run the programs. How to download Anaconda and install Keras was already covered in our earlier post "用CNN做電能質量擾動分類 (2019-03-28)" (using a CNN to classify power quality disturbances), so it is not repeated here.
2. Downloading the dataset
Download the time-series dataset and the programs from the Baidu netdisk link:
https://pan.baidu.com/s/1TASK3gMZoDFvoE89LzR-5A (password: o0sl).
"nihe.csv" is a time-series dataset I made myself. It has 1000 rows and 4 columns; columns 1-3 can be treated as the inputs X and column 4 as the target Y. The task is to learn the relationship between the three X features and Y, and then predict Y for a given X.
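Before training, it is worth a quick sanity check that the file loads and has the expected shape. A minimal sketch (it assumes nihe.csv is in the current working directory, as described in the next step):

import pandas as pd
dataset = pd.read_csv('nihe.csv')
print(dataset.shape)   # roughly (1000, 4); the exact row count depends on whether the file has a header line
print(dataset.head())  # columns 1-3 are the inputs X, column 4 is the target Y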
3. Prediction
Put the downloaded nihe.csv file in Spyder's default working directory (mine is "D:\Matlab2018a\42"), create a new .py file, paste the program into it, and run it.
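If you are not sure which folder Spyder treats as its working directory, you can check and change it from Python itself; a small sketch using only the standard library:

import os
print(os.getcwd())                # this is where 'nihe.csv' must be placed
#os.chdir(r'D:\Matlab2018a\42')   # or point Python at the folder that actually holds the file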
4. CNN, LSTM, and GRU programs for time-series forecasting
1) GRU program
#1. load dataset
from pandas import read_csv
dataset = read_csv('nihe.csv')
values = dataset.values
#2. transform data to [0,1]
from sklearn.preprocessing import MinMaxScaler
scaler=MinMaxScaler(feature_range=(0, 1))
XY= scaler.fit_transform(values)
X = XY[:,0:3]
Y = XY[:,3]
#3.split into train and test sets
n_train_hours = 950
trainX = X[:n_train_hours, :]
trainY =Y[:n_train_hours]
testX = X[n_train_hours:, :]
testY =Y[n_train_hours:]
# reshape to 3-D [samples, timesteps, features] as required by the recurrent layers
train3DX = trainX.reshape((trainX.shape[0], 1, trainX.shape[1]))
test3DX = testX.reshape((testX.shape[0], 1, testX.shape[1]))
#4. Define Network
from keras.models import Sequential
from keras.layers import Dense
from keras.layers.recurrent import GRU
model = Sequential()
model.add(GRU(units=5,input_shape=(train3DX.shape[1],train3DX.shape[2]),return_sequences=True))
model.add(GRU(units=3))
model.add(Dense(units=4, kernel_initializer='normal', activation='relu'))
model.add(Dense(units=1, kernel_initializer='normal', activation='sigmoid'))
# the output layer has 1 neuron, matching the number of outputs
# 5. compile the network
model.compile(loss='mae',optimizer='adam')
# 6. fit the network
history =model.fit(train3DX,trainY, epochs=100, batch_size=10,validation_data=(test3DX,testY), verbose=2, shuffle=False)
# 7. evaluate the network
from matplotlib import pyplot
pyplot.plot(history.history['loss'],label='train')
pyplot.plot(history.history['val_loss'],label='test')
pyplot.legend()
pyplot.show()
#8. make a prediction and invert scaling for the forecast
import numpy as np
forecasttestY0 = model.predict(test3DX)
inv_yhat = np.concatenate((testX, forecasttestY0), axis=1)
inv_y =scaler.inverse_transform(inv_yhat)
forecasttestY = inv_y[:,3]
# calculate RMSE
from math import sqrt
from sklearn.metrics import mean_squared_error
actualtestY=values[n_train_hours:,3]
rmse = sqrt(mean_squared_error(forecasttestY,actualtestY))
print('Test RMSE: %.3f' % rmse)
#plot the testY and actualtestY
pyplot.plot(actualtestY, label='actual')
pyplot.plot(forecasttestY, label='forecast')
pyplot.legend()
pyplot.show()
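Once training has finished, the same scaling trick used in step 8 lets you predict Y for a brand-new X row. The sketch below is only an illustration and assumes the model and scaler objects defined above are still in memory; the three feature values are made-up numbers:

import numpy as np
new_row = np.array([[0.5, 1.2, 0.8, 0.0]])        # hypothetical X values, dummy 0 in the Y column
scaled_row = scaler.transform(new_row)            # scale with the scaler already fitted on the full dataset
x_scaled = scaled_row[:, 0:3].reshape((1, 1, 3))  # 3-D input [samples, timesteps, features] expected by the GRU
yhat_scaled = model.predict(x_scaled)
inv = scaler.inverse_transform(np.concatenate((scaled_row[:, 0:3], yhat_scaled), axis=1))
print('Predicted Y: %.3f' % inv[0, 3])            # take the prediction back to the original units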
2) LSTM program
#1. load dataset
from pandas import read_csv
dataset = read_csv('nihe.csv')
values = dataset.values
#2. transform data to [0,1]
from sklearn.preprocessing import MinMaxScaler
scaler=MinMaxScaler(feature_range=(0, 1))
XY= scaler.fit_transform(values)
X = XY[:,0:3]
Y = XY[:,3]
#3.split into train and test sets
n_train_hours = 950
trainX = X[:n_train_hours, :]
trainY =Y[:n_train_hours]
testX = X[n_train_hours:, :]
testY =Y[n_train_hours:]
train3DX =trainX.reshape((trainX.shape[0], 1, trainX.shape[1]))
test3DX = testX.reshape((testX.shape[0],1, testX.shape[1]))
#4. Define Network
from keras.models import Sequential
from keras.layers import Dense
from keras.layers.recurrent import LSTM
model = Sequential()
model.add(LSTM(units=5,input_shape=(train3DX.shape[1],train3DX.shape[2]),return_sequences=True))
model.add(LSTM(units=3))
model.add(Dense(units=4, kernel_initializer='normal', activation='relu'))
model.add(Dense(units=1, kernel_initializer='normal', activation='sigmoid'))
# the output layer has 1 neuron, matching the number of outputs
# 5. compile the network
model.compile(loss='mae',optimizer='adam')
# 6. fit the network
history =model.fit(train3DX,trainY, epochs=100, batch_size=10,validation_data=(test3DX,testY), verbose=2, shuffle=False)
# 7. evaluate the network
from matplotlib import pyplot
pyplot.plot(history.history['loss'],label='train')
pyplot.plot(history.history['val_loss'],label='test')
pyplot.legend()
pyplot.show()
#8. make a prediction and invert scaling for the forecast
import numpy as np
forecasttestY0 = model.predict(test3DX)
inv_yhat=np.concatenate((testX,forecasttestY0), axis=1)
inv_y =scaler.inverse_transform(inv_yhat)
forecasttestY = inv_y[:,3]
# calculate RMSE
from math import sqrt
from sklearn.metrics import mean_squared_error
actualtestY=values[n_train_hours:,3]
rmse =sqrt(mean_squared_error(forecasttestY, actualtestY))
print('Test RMSE: %.3f' % rmse)
#plot the testY and actualtestY
pyplot.plot(actualtestY, label='actual')
pyplot.plot(forecasttestY, label='forecast')
pyplot.legend()
pyplot.show()
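If you want to reuse the trained network later without running model.fit again, Keras can save the whole model to a single file (this needs the h5py package; the file name here is arbitrary):

model.save('lstm_nihe.h5')            # stores architecture, weights and optimizer state
#in a later session:
#from keras.models import load_model
#model = load_model('lstm_nihe.h5')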
3) Combined CNN and LSTM program
#1. load dataset
from pandas import read_csv
dataset = read_csv('nihe.csv')
values = dataset.values
#2. transform data to [0,1]; columns 1-3 are the input features, column 4 is the quantity to predict
from sklearn.preprocessing import MinMaxScaler
scaler=MinMaxScaler(feature_range=(0, 1))
XY= scaler.fit_transform(values)
X = XY[:,0:3]
Y = XY[:,3]
#3.split into train and test sets
#950 samples for training, the rest for testing
n_train_hours = 950
trainX = X[:n_train_hours, :]
trainY =Y[:n_train_hours]
testX = X[n_train_hours:, :]
testY =Y[n_train_hours:]
# the LSTM input must be 3-D, so reshape first
train3DX =trainX.reshape((trainX.shape[0], 1, trainX.shape[1]))
test3DX =testX.reshape((testX.shape[0], 1, testX.shape[1]))
#4. Define Network
from keras.models import Sequential
from keras.layers import Dense
from keras.layers.recurrent import LSTM
from keras.layers.convolutional import Conv1D
from keras.layers.convolutional import MaxPooling1D
from keras.layers import Flatten
model = Sequential()
model.add(Conv1D(filters=10,kernel_size=1, padding='same', strides=1, activation='relu',input_shape=(1,3)))
model.add(MaxPooling1D(pool_size=1))
model.add(LSTM(units=3,return_sequences=True))
model.add(Flatten())
# alternatively, delete the LSTM(return_sequences=True) and Flatten layers above and keep only a plain LSTM layer:
#model.add(LSTM(units=3))
model.add(Dense(5, activation='relu'))
# a hidden Dense layer can be added after the LSTM, or it can be omitted and the output layer added directly
#model.add(Dense(units=4,kernel_initializer='normal',activation='relu'))
model.add(Dense(units=1, kernel_initializer='normal', activation='sigmoid'))
# the output layer has 1 neuron, matching the number of outputs
# 5. compile the network
model.compile(loss='mae',optimizer='adam')
# 6. fit the network
history =model.fit(train3DX,trainY, epochs=100, batch_size=10,validation_data=(test3DX,testY), verbose=2, shuffle=False)
# 7. evaluate the network
from matplotlib import pyplot
pyplot.plot(history.history['loss'],label='train')
pyplot.plot(history.history['val_loss'],label='test')
pyplot.legend()
pyplot.show()
#8. make a prediction and invert scaling for the forecast
import numpy as np
forecasttestY0 =model.predict(test3DX)
inv_yhat=np.concatenate((testX,forecasttestY0), axis=1)
inv_y =scaler.inverse_transform(inv_yhat)
forecasttestY = inv_y[:,3]
# calculate RMSE
from math import sqrt
from sklearn.metrics import mean_squared_error
actualtestY=values[n_train_hours:,3]
rmse =sqrt(mean_squared_error(forecasttestY, actualtestY))
print('Test RMSE: %.3f' % rmse)
#plot the testY and actualtestY
pyplot.plot(actualtestY, label='actual')
pyplot.plot(forecasttestY, label='forecast')
pyplot.legend()
pyplot.show()
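Because the Conv1D, MaxPooling1D, LSTM and Flatten layers all change the tensor shape, it is helpful to print the layer-by-layer output shapes whenever you modify this hybrid network. One line does it, placed right after the model is defined:

model.summary()   # prints each layer with its output shape and parameter count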