Multi-input single-output data regression prediction with a CNN; the code is commented in detail, the data is stored in Excel for easy replacement, and R2, MAE and MBE are computed as evaluation metrics.

Data regression prediction based on a convolutional neural network (CNN)
Multi-input, single-output prediction
The code contains detailed comments (no support is provided)
The data is stored in Excel so it can be swapped out easily; evaluation metrics include the coefficient of determination (R2), the mean absolute error (MAE) and the mean bias error (MBE)
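
Of the three metrics above, R2 and MAE are available directly in scikit-learn, while MBE (mean bias error, the signed average of predicted minus true values) has to be written by hand. Below is a minimal sketch of how they can be computed; the array names y_true and y_pred are placeholders, not part of the packaged code sold here:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, r2_score

def evaluate_regression(y_true, y_pred):
    """Compute R2, MAE and MBE for a single-output regression.

    MBE = mean(y_pred - y_true): a positive value means the model
    over-predicts on average, a negative value means it under-predicts.
    """
    y_true = np.asarray(y_true, dtype=float).ravel()
    y_pred = np.asarray(y_pred, dtype=float).ravel()
    r2 = r2_score(y_true, y_pred)               # coefficient of determination
    mae = mean_absolute_error(y_true, y_pred)   # mean absolute error
    mbe = float(np.mean(y_pred - y_true))       # mean bias error (not provided by sklearn)
    return r2, mae, mbe

# Example with placeholder arrays:
# r2, mae, mbe = evaluate_regression(test_y, y_pred)
# print(f'R2: {r2:.4f}, MAE: {mae:.4f}, MBE: {mbe:.4f}')
```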

ID:569727404808395

琵琶巷多话的苁蓉


Below is a complete example that uses TensorFlow to implement a CNN-GRU-Attention model for multivariate time-series feature extraction, including input-data preprocessing and evaluation of the predicted prices:

```python
import pandas as pd
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

# Data preprocessing: slice the series into (n_steps x features) windows;
# the last column at the step right after each window is the target
def preprocess_data(data, n_steps, n_features):
    X, y = list(), list()
    for i in range(len(data)):
        end_ix = i + n_steps
        if end_ix > len(data) - 1:
            break
        seq_x, seq_y = data[i:end_ix, :-1], data[end_ix, -1]
        X.append(seq_x)
        y.append(seq_y)
    return tf.convert_to_tensor(X, dtype=tf.float32), tf.convert_to_tensor(y, dtype=tf.float32)

# Build the CNN-GRU-Attention model
def build_model(n_steps, n_features, n_outputs):
    # CNN branch: 2D convolution over the (time step, feature) grid
    inputs1 = layers.Input(shape=(n_steps, n_features, 1))
    conv1 = layers.Conv2D(filters=64, kernel_size=(1, 3), activation='relu')(inputs1)
    drop1 = layers.Dropout(0.5)(conv1)
    pool1 = layers.MaxPooling2D(pool_size=(1, 2))(drop1)
    flat1 = layers.Flatten()(pool1)

    # GRU branch: sequential features from the raw windows
    inputs2 = layers.Input(shape=(n_steps, n_features))
    gru1 = layers.GRU(128, return_sequences=True)(inputs2)
    drop2 = layers.Dropout(0.5)(gru1)
    gru2 = layers.GRU(128)(drop2)

    # Attention: learn one softmax weight per time step from the combined
    # CNN/GRU features, then use those weights to pool the input sequence
    attention = layers.concatenate([flat1, gru2])
    attention = layers.Dense(64, activation='tanh')(attention)
    attention = layers.Dense(n_steps, activation='softmax')(attention)   # one weight per time step
    attention = layers.Reshape((n_steps, 1))(attention)
    attention = layers.Lambda(
        lambda x: tf.keras.backend.repeat_elements(x, n_features, 2))(attention)  # (batch, n_steps, n_features)
    attention = layers.multiply([attention, inputs2])
    attention = layers.Lambda(lambda x: tf.keras.backend.sum(x, axis=1))(attention)  # weighted sum over time

    # Output layer
    outputs = layers.Dense(n_outputs, activation='linear')(attention)
    model = models.Model(inputs=[inputs1, inputs2], outputs=outputs)
    model.compile(optimizer='adam', loss='mse')
    return model

# Load the data
df = pd.read_csv('data.csv')
df['date'] = pd.to_datetime(df['date'])
df = df.set_index('date')
df = df.dropna()
raw = df.values

# Normalize features and target with separate scalers so the
# target can be inverse-transformed on its own later
scaler_X = MinMaxScaler()
scaler_y = MinMaxScaler()
features = scaler_X.fit_transform(raw[:, :-1])
target = scaler_y.fit_transform(raw[:, -1:])
data = np.hstack([features, target])

# Split into training and test sets
train_size = int(len(data) * 0.8)
train_data = data[:train_size, :]
test_data = data[train_size:, :]

# Build the windowed samples
n_steps = 30
n_features = data.shape[1] - 1
train_X, train_y = preprocess_data(train_data, n_steps, n_features)
test_X, test_y = preprocess_data(test_data, n_steps, n_features)

# Build the model
n_outputs = 1
model = build_model(n_steps, n_features, n_outputs)

# Train: the CNN branch gets the windows with an extra channel axis,
# the GRU branch gets the same windows unchanged
model.fit([train_X[..., np.newaxis], train_X], train_y, epochs=50, batch_size=32)

# Predict on the test set
y_pred = model.predict([test_X[..., np.newaxis], test_X])

# Invert the normalization of the target
test_y = scaler_y.inverse_transform(test_y.numpy().reshape(-1, 1))
y_pred = scaler_y.inverse_transform(y_pred.reshape(-1, 1))

# Evaluate the predictions
mse = mean_squared_error(test_y, y_pred)
mae = mean_absolute_error(test_y, y_pred)
r2 = r2_score(test_y, y_pred)
print(f'MSE: {mse:.4f}, MAE: {mae:.4f}, R2: {r2:.4f}')
```

In the code above, we first load and preprocess the data. The preprocess_data() function converts it into the format the model expects, where n_steps is the number of time steps per sample, n_features is the number of features per time step, and n_outputs is the dimensionality of the model output. We then build and train the CNN-GRU-Attention model and use it to predict on the test set. Finally, we invert the normalization of the predictions and evaluate them with the metric functions from sklearn.
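
The description at the top says the data is stored in Excel for easy replacement, whereas the example above reads data.csv. If your data lives in an Excel workbook, only the loading step changes; here is a minimal sketch, assuming a hypothetical file data.xlsx with a date column and the target variable in the last column:

```python
import pandas as pd

# Hypothetical file name and layout: a 'date' column, feature columns,
# and the target variable in the last column (adjust to your workbook)
df = pd.read_excel('data.xlsx', sheet_name=0)  # reading .xlsx files requires the openpyxl package
df['date'] = pd.to_datetime(df['date'])
df = df.set_index('date').dropna()
raw = df.values  # same array that the CSV-based example continues from
```

Everything after this point (scaling, windowing, training, evaluation) stays the same.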