Datawhale AI Summer Camp: Social Science Track Wrap-up

   In the blink of an eye, the social science series is drawing to a close. At first I wanted to try my hand at time series forecasting, but one look at the data told me it was no small challenge, and simply running the baseline already gave solid results. Then came the second stage, interpreting the data features, an approach I had never considered before: it actually took a dedicated chapter of writing and a full notebook of code just to study the data's distribution and periodicity. Along the way I also picked up a lot of electricity-market fundamentals, such as the classic duck curve, the substitution effect between wind and thermal power, and agent-based models (ABM). I genuinely learned a great deal.

  Back to the code itself: the provided baseline was already quite strong. During the final score-climbing stage I tried tweaking parts of it, but that backfired and my score dropped a little. So I decided to start from what I had learned earlier and try time-series forecasting models to see whether they would help.

   First up was Prophet, Facebook's open-source time series forecasting library. According to the official documentation, it can handle holiday effects and fit weekly, monthly, and yearly trends in time series data. Prophet fits strongly periodic historical data well, and it is robust both to outliers and to some missing values. The algorithm combines time series decomposition with machine-learning curve fitting, and it uses the open-source PyStan tool for model fitting, so it can produce forecasts fairly quickly.

   Reading this introduction, I was thrilled, convinced I had finally acquired a dragon-slaying skill, and eagerly set about running and debugging. But installing PyStan was an ordeal: after struggling in vain in VS Code, I switched to ModelScope, where it installed successfully. Clearly, though, I had underestimated the data: Prophet's results were disappointing, with the score soaring (well, crashing) to around 50,000. Stunned, I decided to look elsewhere.

!pip install prophet
!pip install pystan

import os
import warnings
from pathlib import Path

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from prophet import Prophet
from tqdm import tqdm

warnings.filterwarnings('ignore')
plt.style.use('ggplot')
plt.rcParams['font.sans-serif'] = ["WenQuanYi Micro Hei", 'SimHei']
plt.rcParams['axes.unicode_minus'] = False

base_path=Path("data")
train_length=55392

electricity_price=pd.read_csv('../dataset/electricity_price_parsed.csv',parse_dates=["timestamp"],index_col=0)
electricity_price.columns=["demand","price"]
# sample_submit=pd.read_csv(base_path/"sample_submit.csv")

train_data=electricity_price.copy(deep=True)
electricity_price.head()

# Reshape the data into the columns Prophet expects (ds, y)
data = electricity_price.reset_index().rename(columns={'timestamp': 'ds', 'price': 'y'})

# Add hour, day, month, year, and weekday features
data['hour'] = data['ds'].dt.hour
data['day'] = data['ds'].dt.day
data['month'] = data['ds'].dt.month
data['year'] = data['ds'].dt.year
data['weekday'] = data['ds'].dt.weekday

# Add windy-season, midday-valley, and quarter features
data['is_windy_season'] = data['month'].isin([1, 2, 3, 4, 5, 9, 10, 11, 12]) | \
                          ((data['month'] == 7) & (data['day'] > 15)) | \
                          ((data['month'] == 8) & (data['day'] < 15))
data['is_valley'] = data['hour'].isin([10, 11, 12, 13, 14, 15])
data['quarter'] = data['ds'].dt.quarter

# Add holiday information (7-day Spring Festival, 5-day Labor Day)
spring_festival_start_dates = ["2022-01-31", "2023-01-21", "2024-02-10"]
labor_start_dates = ["2022-04-30", "2023-04-29"]

def generate_holiday_dates(start_dates, duration):
    holidays = []
    for start_date in start_dates:
        holidays.extend(pd.date_range(start=start_date, periods=duration).tolist())
    return holidays

spring_festivals = generate_holiday_dates(spring_festival_start_dates, 7)
labor = generate_holiday_dates(labor_start_dates, 5)

# Normalise timestamps to midnight so every 15-minute slot within a holiday
# matches (otherwise only the 00:00 rows would be flagged)
data['is_spring_festival'] = data['ds'].dt.normalize().isin(spring_festivals)
data['is_labor'] = data['ds'].dt.normalize().isin(labor)

# Define the Prophet model
model = Prophet()

# Add extra regressors
model.add_regressor('hour')
model.add_regressor('day')
model.add_regressor('month')
model.add_regressor('year')
model.add_regressor('weekday')
model.add_regressor('is_windy_season')
model.add_regressor('is_valley')
model.add_regressor('quarter')
model.add_regressor('is_spring_festival')
model.add_regressor('is_labor')

# Fit the model
model.fit(data)

# Build a future dataframe one year ahead at 15-minute resolution
future = model.make_future_dataframe(periods=365 * 24 * 4, freq='15T')

# Add the same regressors to the future dataframe
future['hour'] = future['ds'].dt.hour
future['day'] = future['ds'].dt.day
future['month'] = future['ds'].dt.month
future['year'] = future['ds'].dt.year
future['weekday'] = future['ds'].dt.weekday
future['is_windy_season'] = future['month'].isin([1, 2, 3, 4, 5, 9, 10, 11, 12]) | \
                            ((future['month'] == 7) & (future['day'] > 15)) | \
                            ((future['month'] == 8) & (future['day'] < 15))
future['is_valley'] = future['hour'].isin([10, 11, 12, 13, 14, 15])
future['quarter'] = future['ds'].dt.quarter
# Normalise to midnight so every 15-minute slot within a holiday matches
future['is_spring_festival'] = future['ds'].dt.normalize().isin(spring_festivals)
future['is_labor'] = future['ds'].dt.normalize().isin(labor)

# Predict
forecast = model.predict(future)

# Use the midpoint of the uncertainty interval as the clearing price
forecast['clearing_price'] = (forecast['yhat_lower'] + forecast['yhat_upper']) / 2

# Ensure the date column is datetime for range filtering
forecast['ds'] = pd.to_datetime(forecast['ds'])

# Filter to the submission date range (.copy() avoids SettingWithCopyWarning below)
start_date = '2023-06-30'
end_date = '2024-04-19'
filtered_forecast = forecast[(forecast['ds'] >= start_date) & (forecast['ds'] <= end_date)].copy()

# Generate the required 'day' and 'time' columns
# (note: the '%-d' / '%-H' formats work on Linux/macOS but not on Windows)
filtered_forecast['day'] = filtered_forecast['ds'].dt.strftime('%-d/%-m/%Y')
filtered_forecast['time'] = filtered_forecast['ds'].dt.strftime('%-H:%M')

# Keep only the columns to be saved
result = filtered_forecast[['day', 'time', 'clearing_price']]

# Save to CSV
result.to_csv('../dataset/pre1.csv', index=False)
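Besides the boolean regressors used above, Prophet also supports holidays natively via a `holidays` DataFrame passed to its constructor, which lets the model learn a separate effect per holiday. A minimal sketch using the same festival start dates (the DataFrame construction is pandas-only; the Prophet call is shown for context, and `holiday_frame` is a helper introduced here, not part of the original code):

```python
import pandas as pd

# Same festival start dates as in the code above
spring_festival_start_dates = ["2022-01-31", "2023-01-21", "2024-02-10"]
labor_start_dates = ["2022-04-30", "2023-04-29"]

def holiday_frame(name, start_dates, duration):
    """One row per (holiday name, date), which is the shape Prophet expects."""
    frames = [pd.DataFrame({"holiday": name,
                            "ds": pd.date_range(start=s, periods=duration)})
              for s in start_dates]
    return pd.concat(frames, ignore_index=True)

holidays_df = pd.concat([
    holiday_frame("spring_festival", spring_festival_start_dates, 7),
    holiday_frame("labor", labor_start_dates, 5),
], ignore_index=True)

print(holidays_df.shape)  # 3*7 + 2*5 holiday days -> (31, 2)
# Then, instead of the boolean regressors:
# model = Prophet(holidays=holidays_df)
```

This keeps the holiday effect inside Prophet's own decomposition rather than treating it as an external regressor.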

   Next, recalling material from my final exam, I thought of the Transformer model. I had actually dabbled with a GRU model earlier, but its performance was on par with Prophet's (well, slightly better). Rather than improve it step by step, I decided to go all in on a Transformer. Who knows, maybe a miracle would happen.

  The Transformer model was proposed by Google in 2017 to address the shortcomings of traditional sequence-to-sequence models on long-range dependencies. Conventional RNN and LSTM models tend to suffer from vanishing or exploding gradients on long sequences, degrading performance. The Transformer solves this by introducing self-attention and multi-head attention. Originally confined to natural language processing, it has gradually spread to time series forecasting as well.
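The self-attention mechanism mentioned above can be illustrated in a few lines. This is a NumPy toy of scaled dot-product attention, purely for intuition, not part of the model trained below:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # similarity of each query to each key
    scores -= scores.max(axis=-1, keepdims=True)    # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V, weights                     # weighted sum of the values

rng = np.random.default_rng(0)
seq_len, d_k = 4, 8
Q, K, V = (rng.normal(size=(seq_len, d_k)) for _ in range(3))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)       # (4, 8): one context vector per position
print(w.sum(axis=-1))  # each row of attention weights sums to 1
```

Multi-head attention simply runs several of these in parallel on learned projections of the input and concatenates the results.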

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
from sklearn.preprocessing import MinMaxScaler
from torch.utils.data import TensorDataset, DataLoader
from tqdm import tqdm
import time

class Config():
    data_path = '../dataset/electricity_price_parsed.csv'  # update to the actual data path
    timestep = 96  # longer window (one day of 15-minute slots) to capture longer dependencies
    batch_size = 32
    feature_size = 2  # two input features: demand and clearing price
    hidden_size = 128  # hidden size chosen to be divisible by the number of heads
    num_heads = 8
    output_size = 2  # predict two values: demand and clearing price
    num_layers = 2
    epochs = 25
    best_loss = float('inf')  # initialised to infinity
    learning_rate = 0.0001  # tuned learning rate
    model_name = 'transformer'
    save_path = './{}.pth'.format(model_name)

config = Config()

# 1. Load the time series data
df = pd.read_csv(config.data_path)
df['timestamp'] = pd.to_datetime(df['timestamp'])
df.set_index('timestamp', inplace=True)

# Print basic statistics
print(df.head())
print(df.describe())

# Check for NaN or infinite values
if df.isnull().values.any() or not np.isfinite(df.values).all():
    print("Data contains NaN or infinite values; filling missing values.")
    # Fill NaN values with column means
    df.fillna(df.mean(), inplace=True)

# 2. Normalise the data
scaler = MinMaxScaler()
data = scaler.fit_transform(df)

# Check for NaN or infinite values after scaling
if np.isnan(data).any() or np.isinf(data).any():
    print("Scaled data contains NaN or infinite values.")
    exit()

# Optionally compare statistics before and after normalisation
# print(df.describe())
# print(pd.DataFrame(data, columns=df.columns).describe())

# Split the data into sliding windows of length `timestep`
def split_data(data, timestep):
    dataX, dataY = [], []
    for index in range(len(data) - timestep):
        dataX.append(data[index: index + timestep])
        dataY.append(data[index + timestep])
    dataX = np.array(dataX)
    dataY = np.array(dataY)
    train_size = int(np.round(0.8 * dataX.shape[0]))
    x_train = dataX[: train_size]
    y_train = dataY[: train_size]
    x_test = dataX[train_size:]
    y_test = dataY[train_size:]
    return [x_train, y_train, x_test, y_test]

# 3. Build training and test sets
x_train, y_train, x_test, y_test = split_data(data, config.timestep)

# 4. Convert the data to tensors
x_train_tensor = torch.from_numpy(x_train).to(torch.float32)
y_train_tensor = torch.from_numpy(y_train).to(torch.float32)
x_test_tensor = torch.from_numpy(x_test).to(torch.float32)
y_test_tensor = torch.from_numpy(y_test).to(torch.float32)

# 5. Create data loaders (no shuffling, to preserve temporal order)
train_data = TensorDataset(x_train_tensor, y_train_tensor)
test_data = TensorDataset(x_test_tensor, y_test_tensor)

train_loader = DataLoader(train_data, config.batch_size, shuffle=False)
test_loader = DataLoader(test_data, config.batch_size, shuffle=False)

# 7. Define the Transformer network
class TransformerModel(nn.Module):
    def __init__(self, feature_size, hidden_size, num_heads, num_layers, output_size):
        super(TransformerModel, self).__init__()
        self.embedding = nn.Linear(feature_size, hidden_size)
        # batch_first=True so inputs are (batch, seq, feature); without it the
        # encoder would treat the batch dimension as the sequence dimension
        self.encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden_size, nhead=num_heads,
            dim_feedforward=hidden_size, batch_first=True)
        self.transformer = nn.TransformerEncoder(self.encoder_layer, num_layers=num_layers)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        x = self.embedding(x)
        x = self.transformer(x)
        x = self.fc(x[:, -1, :])  # take the output at the last time step
        return x

model = TransformerModel(config.feature_size, config.hidden_size, config.num_heads, config.num_layers, config.output_size)
loss_function = nn.MSELoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=config.learning_rate)

# 8. Train the model
for epoch in range(config.epochs):
    start_time = time.time()
    model.train()
    running_loss = 0
    train_bar = tqdm(train_loader)
    for data in train_bar:
        x_train, y_train = data
        optimizer.zero_grad()
        y_train_pred = model(x_train)
        
        # Optionally inspect model outputs against the targets
        # print(f"model output: {y_train_pred[:5]}")
        # print(f"targets: {y_train[:5]}")
        
        loss = loss_function(y_train_pred, y_train)
        
        if torch.isnan(loss):
            print("NaN loss encountered; aborting.")
            exit()
        loss.backward()
        
        # Clip gradients to stabilise training
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
        
        optimizer.step()
        running_loss += loss.item()
        train_bar.set_description("train epoch[{}/{}] loss:{:.3f}".format(epoch + 1, config.epochs, loss))

    avg_train_loss = running_loss / len(train_loader)
    train_ppl = np.exp(avg_train_loss)
    
    model.eval()
    test_loss = 0
    with torch.no_grad():
        test_bar = tqdm(test_loader)
        for data in test_bar:
            x_test, y_test = data
            y_test_pred = model(x_test)
            loss = loss_function(y_test_pred, y_test)
            test_loss += loss.item()

    avg_test_loss = test_loss / len(test_loader)
    test_ppl = np.exp(avg_test_loss)
    
    end_time = time.time()
    epoch_mins, epoch_secs = divmod(end_time - start_time, 60)
    
    print(f'Epoch: {epoch + 1:02} | Time: {int(epoch_mins)}m {int(epoch_secs)}s')
    print(f'\tTrain Loss: {avg_train_loss:.3f} | Train PPL: {train_ppl:.3f}')
    print(f'\t Val. Loss: {avg_test_loss:.3f} |  Val. PPL: {test_ppl:.3f}')
    
    if avg_test_loss < config.best_loss:
        config.best_loss = avg_test_loss
        torch.save(model.state_dict(), config.save_path)

print('Training finished')

# 9. Generate predictions and save to CSV
model.eval()
predictions = []

# New timestamps from 2023-07-01 to 2024-04-20, one every 15 minutes
new_timestamps = pd.date_range(start='2023-07-01', end='2024-04-20', freq='15T')

# Concatenate training and test windows so there are enough inputs to predict from
all_data_tensor = torch.cat((x_train_tensor, x_test_tensor), dim=0)

if len(new_timestamps) > len(all_data_tensor):
    print("Not enough data to generate the required number of predictions.")
    exit()


with torch.no_grad():
    for i in range(len(new_timestamps)):
        x_input = all_data_tensor[i:i+1]
        y_pred = model(x_input)
        predictions.append(y_pred.numpy().flatten())

# Invert the normalisation
predictions = scaler.inverse_transform(np.array(predictions))

# Shift negative predicted prices up by 200 (a crude floor adjustment)
predictions[predictions[:, 1] < 0, 1] += 200

# Assemble the results dataframe
results = pd.DataFrame(predictions, columns=['predicted_demand', 'predicted_clearing_price'], index=new_timestamps)

# Save the results to CSV
results.to_csv('../dataset/pre3.csv', index=True)
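One caveat about the prediction loop above: it slides over historical windows, so it never feeds its own predictions back in when forecasting beyond the available data. A genuinely autoregressive multi-step forecast would look like this sketch, where `predict_one` and `rolling_forecast` are hypothetical stand-ins introduced here for illustration:

```python
import numpy as np

def rolling_forecast(history, steps, timestep, predict_one):
    """Autoregressive multi-step forecast: each prediction is appended to
    the input window and fed back in for the next step."""
    window = list(history[-timestep:])     # most recent `timestep` observations
    out = []
    for _ in range(steps):
        y = predict_one(np.array(window))  # stand-in for model(x_input)
        out.append(y)
        window = window[1:] + [y]          # drop the oldest, append the prediction
    return np.array(out)

# Toy stand-in model: predict the mean of the current window
preds = rolling_forecast(np.arange(10, dtype=float), steps=3, timestep=4,
                         predict_one=lambda w: float(w.mean()))
print(preds)  # first step: mean of [6, 7, 8, 9] = 7.5
```

Errors compound step by step under this scheme, but it is the honest way to forecast a horizon longer than the observed history.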

   It looks like all that effort was for little: the Transformer improved the score from about 30,000 to about 20,000, but that is still a long way from the baseline. The road ahead is long, and even the starting line feels out of reach!

   Back to the point: apparently not every problem yields a general solution to the most fashionable model; the best fit is whatever suits the task. Enough said, back to studying the baseline!
