RNN Example

The complete machine-learning algorithm collection is available at fenghaootong-github.

Airline Passenger Traffic Prediction

Dataset

The dataset has two columns, time and passenger traffic; we mainly use the passenger-traffic column.

Importing Modules

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import LSTM, Dense, Activation
import warnings
warnings.filterwarnings('ignore')
Using TensorFlow backend.

Importing the Data

# import the data
df = pd.read_csv('../DATA/airData.csv', sep=',')
df = df.set_index('time')
df
        passengers
time
1949-01        112
1949-02        118
1949-03        132
1949-04        129
1949-05        121
1949-06        135
1949-07        148
1949-08        148
1949-09        136
1949-10        119
1949-11        104
1949-12        118
1950-01        115
1950-02        126
1950-03        141
1950-04        135
1950-05        125
1950-06        149
1950-07        170
1950-08        170
1950-09        158
1950-10        133
1950-11        114
1950-12        140
1951-01        145
1951-02        150
1951-03        178
1951-04        163
1951-05        172
1951-06        178
...            ...
1958-07        491
1958-08        505
1958-09        404
1958-10        359
1958-11        310
1958-12        337
1959-01        360
1959-02        342
1959-03        406
1959-04        396
1959-05        420
1959-06        472
1959-07        548
1959-08        559
1959-09        463
1959-10        407
1959-11        362
1959-12        405
1960-01        417
1960-02        391
1960-03        419
1960-04        461
1960-05        472
1960-06        535
1960-07        622
1960-08        606
1960-09        508
1960-10        461
1960-11        390
1960-12        432

144 rows × 1 columns

Plotting

# plot the series
df['passengers'].plot()
plt.show()

(Figure: line plot of the monthly passenger series)

Data Preprocessing

# use only the passenger-traffic column
df = pd.read_csv('DATA/data.csv', sep=',', usecols=[1])
data_all = np.array(df).astype(float)
# scale the data to [0, 1]
scaler = MinMaxScaler()
data_all = scaler.fit_transform(data_all)
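MinMaxScaler rescales each column to [0, 1] via x → (x − min) / (max − min). A minimal sketch with three made-up values (104 and 622 are the smallest and largest counts in the table above, so they map to 0 and 1 exactly):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Three toy values standing in for passenger counts; 104 and 622 are the
# series minimum and maximum, so they scale to 0 and 1 exactly
values = np.array([[104.0], [250.0], [622.0]])

scaler = MinMaxScaler()  # default feature_range=(0, 1)
scaled = scaler.fit_transform(values)  # (x - min) / (max - min) per column

print(scaled.ravel())  # [0.         0.28185328 1.        ]
```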

Time-Series Windows

# build sliding windows of sequence_length inputs plus one target
sequence_length = 10
data = []
for i in range(len(data_all) - sequence_length - 1):
    data.append(data_all[i: i + sequence_length + 1])
reshaped_data = np.array(data).astype('float64')
reshaped_data
array([[[ 0.01544402],
        [ 0.02702703],
        [ 0.05405405],
        ..., 
        [ 0.06177606],
        [ 0.02895753],
        [ 0.        ]],

       [[ 0.02702703],
        [ 0.05405405],
        [ 0.04826255],
        ..., 
        [ 0.02895753],
        [ 0.        ],
        [ 0.02702703]],

       [[ 0.05405405],
        [ 0.04826255],
        [ 0.03281853],
        ..., 
        [ 0.        ],
        [ 0.02702703],
        [ 0.02123552]],

       ..., 
       [[ 0.4980695 ],
        [ 0.58108108],
        [ 0.6042471 ],
        ..., 
        [ 1.        ],
        [ 0.96911197],
        [ 0.77992278]],

       [[ 0.58108108],
        [ 0.6042471 ],
        [ 0.55405405],
        ..., 
        [ 0.96911197],
        [ 0.77992278],
        [ 0.68918919]],

       [[ 0.6042471 ],
        [ 0.55405405],
        [ 0.60810811],
        ..., 
        [ 0.77992278],
        [ 0.68918919],
        [ 0.55212355]]])
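The windowing loop is easier to see on a toy series. This sketch (hypothetical values) shows that each window holds `sequence_length` inputs plus the next value as its target, and that the loop's `- 1` leaves the final element of the series unused:

```python
import numpy as np

# Toy series of six points, shaped (6, 1) like data_all
series = np.arange(6, dtype=float).reshape(-1, 1)

sequence_length = 3
windows = []
for i in range(len(series) - sequence_length - 1):  # range(2): two windows
    windows.append(series[i: i + sequence_length + 1])
windows = np.array(windows)

print(windows.shape)        # (2, 4, 1) -- 3 inputs + 1 target per window
print(windows[0].ravel())   # [0. 1. 2. 3.]
print(windows[1].ravel())   # [1. 2. 3. 4.]  (element 5.0 is never used)
```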

Training and Test Sets

split = 0.8

split = 0.8
np.random.shuffle(reshaped_data)
x = reshaped_data[:, :-1]
y = reshaped_data[:, -1]
split_boundary = int(reshaped_data.shape[0] * split)
train_x = x[: split_boundary]
test_x = x[split_boundary:]

train_y = y[: split_boundary]
test_y = y[split_boundary:]
train_x = np.reshape(train_x, (train_x.shape[0], train_x.shape[1], 1))
test_x = np.reshape(test_x, (test_x.shape[0], test_x.shape[1], 1))
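A quick sanity check on the resulting sizes, assuming the 144-row dataset shown above. Note that `validation_split=0.1` in the `fit` call later carves a further 11 windows out of the 106 training windows, which is consistent with the "Train on 95 samples, validate on 11 samples" line in the training log:

```python
# Bookkeeping for the 80/20 split above (144 monthly records assumed)
n_records, sequence_length, split = 144, 10, 0.8

n_windows = n_records - sequence_length - 1   # windows produced by the loop
split_boundary = int(n_windows * split)       # training windows
n_test = n_windows - split_boundary           # test windows

print(n_windows, split_boundary, n_test)  # 133 106 27
```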

Building the LSTM Model

# build the LSTM model
model = Sequential()
model.add(LSTM(50, input_shape=(sequence_length, 1), return_sequences=True))
print(model.layers)
model.add(LSTM(100, return_sequences=False))
model.add(Dense(1))
model.add(Activation('linear'))

model.compile(loss='mse', optimizer='rmsprop')
[<keras.layers.recurrent.LSTM object at 0x000001D78422DC50>]
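As a cross-check, the layer sizes can be verified by counting parameters by hand: a Keras LSTM layer with u units and d input features holds 4·(u·(u + d) + u) weights (four gates, each with an input kernel, a recurrent kernel, and a bias), and a Dense layer holds in_features·out_features + out_features:

```python
# Hand-counted parameters for the LSTM(50) -> LSTM(100) -> Dense(1) stack
def lstm_params(units, input_dim):
    # 4 gates x (input kernel + recurrent kernel + bias)
    return 4 * (units * (units + input_dim) + units)

p1 = lstm_params(50, 1)    # first LSTM, 1 input feature
p2 = lstm_params(100, 50)  # second LSTM, fed 50 features by the first
p3 = 100 * 1 + 1           # Dense(1) on 100 features, plus bias

print(p1, p2, p3)  # 10400 60400 101
```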

Training the Model

model.fit(train_x, train_y, batch_size=512, epochs=100, validation_split=0.1)
predict = model.predict(test_x)
predict = np.reshape(predict, (predict.size, ))
Train on 95 samples, validate on 11 samples
Epoch 1/100
95/95 [==============================] - 0s 253us/step - loss: 0.0117 - val_loss: 0.0073
Epoch 2/100
95/95 [==============================] - 0s 248us/step - loss: 0.0121 - val_loss: 0.0093
Epoch 3/100
95/95 [==============================] - 0s 242us/step - loss: 0.0116 - val_loss: 0.0073
...
Epoch 98/100
95/95 [==============================] - 0s 269us/step - loss: 0.0060 - val_loss: 0.0046
Epoch 99/100
95/95 [==============================] - 0s 258us/step - loss: 0.0056 - val_loss: 0.0069
Epoch 100/100
95/95 [==============================] - 0s 269us/step - loss: 0.0059 - val_loss: 0.0046

Comparison

predict_y = scaler.inverse_transform([[i] for i in predict])
test = scaler.inverse_transform(test_y)

plt.plot(predict_y, 'g:', label='predict')
plt.plot(test, 'r-', label='true')
plt.legend()
plt.show()
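`inverse_transform` maps the scaled predictions back to passenger counts by inverting the earlier min-max scaling; a quick round-trip check on toy values:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Round trip: scaling followed by inverse_transform recovers the input
raw = np.array([[104.0], [300.0], [622.0]])
scaler = MinMaxScaler()
scaled = scaler.fit_transform(raw)
restored = scaler.inverse_transform(scaled)

print(np.allclose(restored, raw))  # True
```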

(Figure: predicted vs. true passenger counts on the test set)

Reposted from: https://www.cnblogs.com/htfeng/p/9931744.html
