Shandong University Summer Project Practicum — Cloud Host Price Comparison and Prediction System (14)
Forecasting with TensorFlow
3.2 The models
1) Single-step model
[Figure: single-step "narrow window" diagram from the TensorFlow time-series tutorial (https://www.tensorflow.org/tutorials/structured_data/images/narrow_window.png)]
```python
single_step_window = WindowGenerator(
    input_width=1, label_width=1, shift=1,
    label_columns=['T (degC)'])
single_step_window
```
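As a rough illustration of what such a window produces (a numpy sketch, not the tutorial's actual `WindowGenerator` class), the `input_width=1, label_width=1, shift=1` setting pairs each timestep with the one immediately after it:

```python
import numpy as np

def make_windows(series, input_width, label_width, shift):
    """Slice a 1-D series into (inputs, labels) pairs.

    Each window spans input_width + shift steps: the first
    input_width values are the inputs, the last label_width
    values are the labels.
    """
    total = input_width + shift
    inputs, labels = [], []
    for start in range(len(series) - total + 1):
        window = series[start:start + total]
        inputs.append(window[:input_width])
        labels.append(window[total - label_width:])
    return np.array(inputs), np.array(labels)

series = np.arange(10.0)  # toy stand-in for the temperature column
X, y = make_windows(series, input_width=1, label_width=1, shift=1)
print(X[:3].ravel())  # [0. 1. 2.]
print(y[:3].ravel())  # [1. 2. 3.]
```

The same helper with `input_width=24, label_width=24, shift=1` would reproduce the "wide window" used further below.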
2) Baseline
[Figure: baseline model diagram from the TensorFlow time-series tutorial (https://www.tensorflow.org/tutorials/structured_data/images/baseline.png)]
```python
class Baseline(tf.keras.Model):
    def __init__(self, label_index=None):
        super().__init__()
        self.label_index = label_index

    def call(self, inputs):
        if self.label_index is None:
            return inputs
        result = inputs[:, :, self.label_index]
        return result[:, :, tf.newaxis]

baseline = Baseline(label_index=column_indices['T (degC)'])

# Compile the model
baseline.compile(loss=tf.losses.MeanSquaredError(),
                 metrics=[tf.metrics.MeanAbsoluteError()])

val_performance = {}
performance = {}
val_performance['Baseline'] = baseline.evaluate(single_step_window.val)
performance['Baseline'] = baseline.evaluate(single_step_window.test, verbose=0)
```
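The baseline simply predicts "the next value equals the current value." A numpy sketch of that persistence forecast and its mean absolute error, on hypothetical toy readings rather than the tutorial's weather dataset:

```python
import numpy as np

# Toy hourly temperature readings (hypothetical values).
temps = np.array([10.0, 12.0, 11.0, 13.0, 14.0, 13.5])

# Persistence baseline: the prediction for step t+1 is the value at step t.
preds = temps[:-1]
actual = temps[1:]

mae = np.mean(np.abs(preds - actual))
print(f"Baseline MAE: {mae:.3f}")  # Baseline MAE: 1.300
```

Any trained model below has to beat this number to be worth keeping.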
```python
wide_window = WindowGenerator(
    input_width=24, label_width=24, shift=1,
    label_columns=['T (degC)'])
print(wide_window)
```
3) Linear model
[Figure: single-step window diagram from the TensorFlow time-series tutorial (https://www.tensorflow.org/tutorials/structured_data/images/narrow_window.png)]
```python
def linear():
    """
    Train a linear model.
    :return: the training history
    """
    linear = tf.keras.Sequential([
        tf.keras.layers.Dense(units=1)
    ])
    print('Input shape:', single_step_window.example[0].shape)
    print('Output shape:', linear(single_step_window.example[0]).shape)

    history = compile_and_fit(linear, single_step_window)

    val_performance['Linear'] = linear.evaluate(single_step_window.val)
    performance['Linear'] = linear.evaluate(single_step_window.test, verbose=0)

    # Because the Dense layer is applied per timestep, the same model
    # also works on the wider 24-step window.
    print('Input shape:', wide_window.example[0].shape)
    print('Output shape:', linear(wide_window.example[0]).shape)
    wide_window.plot(linear)
    return history
```
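The linear model above is a single `Dense(units=1)` layer: one weighted sum of the features, applied independently at every timestep. In numpy terms (toy shapes and random weights, purely illustrative):

```python
import numpy as np

batch, time, features = 4, 1, 3   # matches the single-step window shape
x = np.random.rand(batch, time, features)

# Dense(units=1): one weight per feature plus a bias, shared across
# the batch and time axes.
W = np.random.rand(features, 1)
b = np.zeros(1)

y = x @ W + b   # matrix multiply over the last (feature) axis
print(y.shape)  # (4, 1, 1)
```

This is also why the same layer accepts the 24-step wide window unchanged: only the last axis participates in the weighted sum.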
The helper used to compile and train every model:
```python
def compile_and_fit(model, window, patience=2):
    """
    Compile and train a model.
    :param model: the model to train
    :param window: the data window to train on
    :param patience: epochs to wait for improvement before stopping early
    :return: the training history
    """
    MAX_EPOCHS = 20
    early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
                                                      patience=patience,
                                                      mode='min')
    model.compile(loss=tf.losses.MeanSquaredError(),
                  optimizer=tf.optimizers.Adam(),
                  metrics=[tf.metrics.MeanAbsoluteError()])
    history = model.fit(window.train, epochs=MAX_EPOCHS,
                        validation_data=window.val,
                        callbacks=[early_stopping])
    return history
```
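`EarlyStopping` with `patience=2` halts training once the validation loss has failed to improve for two consecutive epochs. Its core decision logic can be sketched as follows (a simplification of the Keras callback, not its actual implementation):

```python
def early_stop_epoch(val_losses, patience=2):
    """Return the 1-based epoch at which training would stop:
    the first epoch after `patience` consecutive epochs without
    improvement over the best validation loss seen so far."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return len(val_losses)  # patience never exhausted

losses = [0.9, 0.7, 0.6, 0.65, 0.64, 0.3]
print(early_stop_epoch(losses))  # 5 -- two epochs without beating 0.6
```

Note the trade-off: the late improvement at epoch 6 is never seen, which is why `patience` should be tuned rather than set as small as possible.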
4) Convolutional neural network
[Figure: max-pooling illustration from Wikipedia (https://upload.wikimedia.org/wikipedia/commons/thumb/e/e9/Max_pooling.png/314px-Max_pooling.png)]
A convolutional neural network (CNN) is a class of deep feedforward neural network built around convolution operations, and is one of the representative algorithms of deep learning.
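For the `Conv1D` layer below, a "valid" convolution with kernel size k over a length-L input yields L - k + 1 output steps, which is why a 7-step window with `kernel_size=7` collapses to a single prediction. A minimal numpy sketch of a 1-D valid convolution (single filter, no bias, toy data):

```python
import numpy as np

def conv1d_valid(x, kernel):
    """1-D 'valid' convolution (really cross-correlation, as in
    deep-learning frameworks): slide the kernel over x with stride 1."""
    k = len(kernel)
    out_len = len(x) - k + 1
    return np.array([np.dot(x[i:i + k], kernel) for i in range(out_len)])

x = np.arange(7.0)           # a 7-step input window: 0..6
kernel = np.ones(7) / 7.0    # a simple averaging filter
out = conv1d_valid(x, kernel)
print(out.shape)  # (1,) -- kernel_size == input_width leaves one step
print(out[0])     # 3.0, the mean of 0..6
```

A real `Conv1D(filters=32, ...)` layer learns 32 such kernels per input feature, but the output-length arithmetic is the same.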
```python
def CNN():
    """
    Forecast with a convolutional neural network.
    :return: the training history
    """
    CONV_WIDTH = 7
    conv_window = WindowGenerator(
        input_width=CONV_WIDTH,
        label_width=1,
        shift=1,
        label_columns=['T (degC)'])

    conv_model = tf.keras.Sequential([
        tf.keras.layers.Conv1D(filters=32,
                               kernel_size=(CONV_WIDTH,),
                               activation='relu'),
        tf.keras.layers.Dense(units=32, activation='relu'),
        tf.keras.layers.Dense(units=1),
    ])

    print("Conv model on `conv_window`")
    print('Input shape:', conv_window.example[0].shape)
    print('Output shape:', conv_model(conv_window.example[0]).shape)

    history = compile_and_fit(conv_model, conv_window)

    IPython.display.clear_output()
    val_performance['Conv'] = conv_model.evaluate(conv_window.val)
    performance['Conv'] = conv_model.evaluate(conv_window.test, verbose=0)

    conv_window.plot(conv_model)
    return history
```
5) Recurrent neural network
A recurrent neural network (RNN) is an artificial neural network that processes sequential input one step at a time, passing a hidden state from each timestep to the next; it is one of the core deep-learning architectures for sequence data. (The LSTM used below is a recurrent network, not a tree-structured "recursive" network.)
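The LSTM below is exactly such a layer: it walks the sequence step by step, carrying a hidden state forward. A stripped-down numpy sketch of the simplest recurrent cell (a vanilla RNN, not a full LSTM; shapes and weights are illustrative):

```python
import numpy as np

def simple_rnn(x, units, seed=0):
    """Run a vanilla RNN cell over x of shape (time, features):
    h_t = tanh(x_t @ Wx + h_{t-1} @ Wh + b)."""
    rng = np.random.default_rng(seed)
    time, features = x.shape
    Wx = rng.standard_normal((features, units)) * 0.1
    Wh = rng.standard_normal((units, units)) * 0.1
    b = np.zeros(units)

    h = np.zeros(units)
    outputs = []
    for t in range(time):
        h = np.tanh(x[t] @ Wx + h @ Wh + b)  # state depends on all prior steps
        outputs.append(h)
    return np.stack(outputs)  # (time, units): one state per step

x = np.random.rand(24, 3)       # 24 timesteps, 3 features
h_seq = simple_rnn(x, units=32)
print(h_seq.shape)  # (24, 32) -- like return_sequences=True
```

Returning the full `(time, units)` stack mirrors `return_sequences=True` in the Keras layer, which is what lets the following `Dense(units=1)` produce a prediction at every timestep of the wide window.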
```python
def RNN():
    """
    Forecast with a recurrent neural network (LSTM).
    :return: the training history
    """
    lstm_model = tf.keras.models.Sequential([
        # Shape [batch, time, features] => [batch, time, lstm_units]
        tf.keras.layers.LSTM(32, return_sequences=True),
        # Shape => [batch, time, 1]
        tf.keras.layers.Dense(units=1)
    ])

    print('Input shape:', wide_window.example[0].shape)
    print('Output shape:', lstm_model(wide_window.example[0]).shape)

    history = compile_and_fit(lstm_model, wide_window)

    IPython.display.clear_output()
    val_performance['LSTM'] = lstm_model.evaluate(wide_window.val)
    performance['LSTM'] = lstm_model.evaluate(wide_window.test, verbose=0)

    wide_window.plot(lstm_model)
    return history
```