Datawhale AI Summer Camp Task 03 Notes

This task builds on the previous ones and applies optimization techniques to pursue the best forecasting performance.

Optimization Techniques

Some common ways to optimize machine learning models:

  1. Data preprocessing

    • Data cleaning: remove noisy and inconsistent records.
    • Feature engineering: create or select features that improve model performance.
    • Data augmentation: increase the diversity of the dataset to improve generalization.
  2. Model selection

    • Choose an architecture suited to the task, e.g. linear models, decision trees, ensemble methods (such as random forests and gradient boosting), or deep learning models.
    • Try several models and compare them to pick the best one.
  3. Hyperparameter tuning

    • Grid search and random search: systematically explore combinations of hyperparameters.
    • Bayesian optimization: search for good hyperparameters more efficiently.
    • Automated tuning tools: e.g. Optuna, Hyperopt, Keras Tuner.
  4. Regularization

    • L1/L2 regularization: guard against overfitting.
    • Dropout: an overfitting countermeasure specific to neural networks.
    • Data augmentation: transform the training data to reduce overfitting.
  5. Model compression and acceleration

    • Knowledge distillation: transfer the knowledge of a large, complex model into a smaller one.
    • Network pruning: remove unimportant connections or neurons.
    • Quantization: convert model parameters from floating point to low-precision types (e.g. 8-bit integers).
    • Efficient architectures: e.g. MobileNet, EfficientNet.
  6. Optimization algorithms

    • Use an appropriate optimizer such as Adam, RMSprop, or SGD.
    • Adaptive learning rates: use a scheduler, e.g. step decay or cosine annealing.
  7. Cross-validation

    • Use cross-validation to evaluate model performance and select the best model and hyperparameters.
  8. Transfer learning

    • Use a pretrained model and fine-tune it on the specific task.
  9. Ensemble methods

    • Bagging and boosting: e.g. random forests, XGBoost, LightGBM.
    • Stacking: use the predictions of several models as inputs to train a new model.
  10. Distributed training

    • Use distributed computing resources to train large models efficiently, with frameworks such as Horovod or DistBelief.
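As a concrete illustration of point 3 above, hyperparameter search can be sketched with scikit-learn's GridSearchCV. The toy data, model, and parameter grid below are made up purely for demonstration; in practice the grid is tailored to the task:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# Hypothetical toy regression data, only for demonstration
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2 * X[:, 0] + rng.normal(scale=0.1, size=200)

# Systematically try every combination in the grid with 3-fold CV
param_grid = {'n_estimators': [50, 100], 'max_depth': [3, 5]}
search = GridSearchCV(RandomForestRegressor(random_state=0),
                      param_grid, cv=3, scoring='neg_mean_absolute_error')
search.fit(X, y)
print(search.best_params_)
```

Random search (RandomizedSearchCV) follows the same pattern but samples the grid instead of enumerating it, which scales better to large search spaces.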

Feature Optimization

(1) Lag features: shift the series to bring in information from earlier periods;

(2) Difference features: capture the change between adjacent periods, describing how the data rises and falls. On top of these you can also build ratio-change features between adjacent values, second-order differences, and so on;

(3) Rolling-window statistics: choose several window sizes and compute the mean, max, min, median, and variance within each window, reflecting how the data has moved over the most recent periods.
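On a toy single-id series (the values are invented for illustration, and the shift/window sizes are smaller than in the real code below), the three kinds of features look like this:

```python
import pandas as pd

# Toy data: one id, five days sorted by dt descending, values made up
df = pd.DataFrame({'id': ['A'] * 5,
                   'dt': [5, 4, 3, 2, 1],
                   'target': [10.0, 12.0, 11.0, 15.0, 14.0]})

# (1) Lag feature: the value one step earlier within each id
df['target_shift1'] = df.groupby('id')['target'].shift(1)
# (2) Difference feature: the change between adjacent steps
df['target_diff1'] = df.groupby('id')['target'].diff(1)
# (3) Rolling statistic: mean over the previous 3 values (current row excluded)
df['target_win3_mean'] = (df.groupby('id')['target']
                            .rolling(window=3, min_periods=1, closed='left')
                            .mean().values)
print(df)
```

Note that `closed='left'` excludes the current row from the window, so each rolling statistic only sees strictly earlier values, which avoids leaking the current target into its own feature.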

Reference code:

1. Import the required libraries
import pandas as pd
import lightgbm as lgb
from sklearn.metrics import mean_squared_error
import warnings

warnings.filterwarnings('ignore')

2. Read the data
train = pd.read_csv('./train.csv')
test = pd.read_csv('./test.csv')

3. Concatenate and sort the data
# Concatenate train and test, then sort by id and dt in descending order
data = pd.concat([train, test], axis=0).reset_index(drop=True)
data = data.sort_values(['id', 'dt'], ascending=False).reset_index(drop=True)

4. Create lag features
# Lag features shifted by 10 to 35 steps
for i in range(10, 36):
    data[f'target_shift{i}'] = data.groupby('id')['target'].shift(i)

5. Create difference features
# First- to third-order differences of the 10-step lag feature
for i in range(1, 4):
    data[f'target_shift10_diff{i}'] = data.groupby('id')['target_shift10'].diff(i)

6. Create rolling-window statistics
# Mean, max, min, and std of the target over window sizes 15, 30, 50, and 70
for win in [15, 30, 50, 70]:
    data[f'target_win{win}_mean'] = data.groupby('id')['target'].rolling(window=win, min_periods=3,
                                                                         closed='left').mean().values
    data[f'target_win{win}_max'] = data.groupby('id')['target'].rolling(window=win, min_periods=3,
                                                                        closed='left').max().values
    data[f'target_win{win}_min'] = data.groupby('id')['target'].rolling(window=win, min_periods=3,
                                                                        closed='left').min().values
    data[f'target_win{win}_std'] = data.groupby('id')['target'].rolling(window=win, min_periods=3,
                                                                        closed='left').std().values

7. Create rolling statistics of the lag feature
# Mean, max, min, sum, and std of the 10-step lag feature over several window sizes
for win in [7, 14, 28, 35, 50, 70]:
    data[f'target_shift10_win{win}_mean'] = data.groupby('id')['target_shift10'].rolling(window=win, min_periods=3,
                                                                                         closed='left').mean().values
    data[f'target_shift10_win{win}_max'] = data.groupby('id')['target_shift10'].rolling(window=win, min_periods=3,
                                                                                        closed='left').max().values
    data[f'target_shift10_win{win}_min'] = data.groupby('id')['target_shift10'].rolling(window=win, min_periods=3,
                                                                                        closed='left').min().values
    data[f'target_shift10_win{win}_sum'] = data.groupby('id')['target_shift10'].rolling(window=win, min_periods=3,
                                                                                        closed='left').sum().values
    data[f'target_shift10_win{win}_std'] = data.groupby('id')['target_shift10'].rolling(window=win, min_periods=3,
                                                                                        closed='left').std().values

8. Split the data
# Split back into training and test sets
train = data[data.target.notnull()].reset_index(drop=True)
test = data[data.target.isnull()].reset_index(drop=True)

9. Select the input features
# Feature columns used for training, excluding id and target
train_cols = [f for f in data.columns if f not in ['id', 'target']]

10. Define the training function
def time_model(lgb, train_df, test_df, cols):
    # Split into training and validation sets by dt
    trn_x, trn_y = train_df[train_df.dt >= 31][cols], train_df[train_df.dt >= 31]['target']
    val_x, val_y = train_df[train_df.dt <= 30][cols], train_df[train_df.dt <= 30]['target']
    # Build model input datasets
    train_matrix = lgb.Dataset(trn_x, label=trn_y)
    valid_matrix = lgb.Dataset(val_x, label=val_y)
    # LightGBM parameters
    lgb_params = {
        'boosting_type': 'gbdt',
        'objective': 'regression',
        'metric': 'mse',
        'min_child_weight': 5,
        'num_leaves': 2 ** 5,
        'lambda_l2': 10,
        'feature_fraction': 0.8,
        'bagging_fraction': 0.8,
        'bagging_freq': 4,
        'learning_rate': 0.05,
        'seed': 2024,
        'nthread': 16,
        'verbose': -1,
        'device_type': 'gpu'  # requires a GPU build of LightGBM; use 'cpu' otherwise
    }
    # Train the model (LightGBM >= 4.0 passes logging and early stopping via callbacks)
    model = lgb.train(lgb_params, train_matrix, 50000, valid_sets=[train_matrix, valid_matrix],
                      categorical_feature=[],
                      callbacks=[lgb.log_evaluation(500), lgb.early_stopping(500)])
    # Predict on the validation and test sets
    val_pred = model.predict(val_x, num_iteration=model.best_iteration)
    test_pred = model.predict(test_df[cols], num_iteration=model.best_iteration)
    # Offline score on the validation set
    score = mean_squared_error(val_y, val_pred)
    print(score)

    return val_pred, test_pred


lgb_oof, lgb_test = time_model(lgb, train, test, train_cols)

11. Save the prediction results
# Write the test-set predictions to the submission file
test['target'] = lgb_test
test[['id', 'dt', 'target']].to_csv('submit.csv', index=None)

Model Fusion

Model fusion requires the outputs of several models: for example, run CatBoost, XGBoost, and LightGBM separately to get three sets of predictions, then fuse them. The most common approach is a simple weighted average of the results.
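A minimal sketch of the weighted-average fusion just described (the three prediction arrays and the weights here are made up; in practice the weights are usually chosen from each model's offline validation score):

```python
import numpy as np

# Hypothetical predictions from three models on the same test rows
cat_pred = np.array([1.0, 2.0, 3.0])
xgb_pred = np.array([1.2, 1.8, 3.2])
lgb_pred = np.array([0.8, 2.2, 2.8])

# Simple average: equal weights
avg_pred = (cat_pred + xgb_pred + lgb_pred) / 3

# Weighted average: weights must sum to 1; 0.2/0.3/0.5 are illustrative,
# typically assigned so that better-validating models get more weight
w_pred = 0.2 * cat_pred + 0.3 * xgb_pred + 0.5 * lgb_pred
print(avg_pred, w_pred)
```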

Below we build a cv_model function that can run LightGBM, XGBoost, and CatBoost in turn; the three sets of predictions are then averaged to produce the fused result.

Each model is evaluated offline with the classic K-fold cross-validation procedure:

1. K-fold cross-validation randomly splits the samples into K folds;

2. Each round uses K-1 folds as the training set and the remaining fold as the validation set;

3. This is repeated K times, so that every fold serves as the validation set exactly once;

4. Finally, the K test-set predictions are averaged to produce the submission.
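The four steps above can be sketched with scikit-learn's KFold; toy data and a Ridge model stand in here for the real features and GBDT models:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

# Hypothetical toy regression data, only for demonstration
rng = np.random.default_rng(2024)
X = rng.normal(size=(100, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=100)
X_test = rng.normal(size=(20, 4))

kf = KFold(n_splits=5, shuffle=True, random_state=2024)
oof = np.zeros(len(X))          # out-of-fold predictions on the training set
test_pred = np.zeros(len(X_test))

for trn_idx, val_idx in kf.split(X):
    # Train on K-1 folds, predict the held-out fold and the test set
    model = Ridge().fit(X[trn_idx], y[trn_idx])
    oof[val_idx] = model.predict(X[val_idx])
    test_pred += model.predict(X_test) / kf.n_splits  # average over the K folds

print(test_pred[:3])
```

The `oof` array is exactly the out-of-fold prediction vector that the stacking section later reuses as a second-layer feature.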

Reference code:

1. Import the required libraries
import numpy as np
import pandas as pd
from sklearn.model_selection import StratifiedKFold, KFold, GroupKFold, GridSearchCV
import lightgbm as lgb
import xgboost as xgb
from catboost import CatBoostRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error
from sklearn.preprocessing import StandardScaler
import logging
import warnings

warnings.filterwarnings('ignore')
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')

2. Data preprocessing function
def preprocess_data(train, test):
    # If the train or test set contains missing values, fill them with column means and log it
    if train.isnull().sum().sum() or test.isnull().sum().sum():
        logging.warning('Data contains missing values; filling them in.')
        train = train.fillna(train.mean())
        test = test.fillna(test.mean())

    # Standardize the features to zero mean and unit variance, excluding id, target, and dt
    scaler = StandardScaler()
    train[train.columns.difference(['id', 'target', 'dt'])] = scaler.fit_transform(
        train[train.columns.difference(['id', 'target', 'dt'])])
    test[test.columns.difference(['id', 'target', 'dt'])] = scaler.transform(
        test[test.columns.difference(['id', 'target', 'dt'])])

    return train, test

3. Model cross-validation function
def cv_model(clf, train_x, train_y, test_x, clf_name, seed=2024):
    folds = 5
    kf = KFold(n_splits=folds, shuffle=True, random_state=seed)
    oof = np.zeros(train_x.shape[0])
    test_predict = np.zeros(test_x.shape[0])
    cv_scores = []

    for i, (train_index, valid_index) in enumerate(kf.split(train_x, train_y)):
        logging.info(f'Starting fold {i + 1}/{folds}')
        trn_x, trn_y, val_x, val_y = (train_x.iloc[train_index], train_y[train_index],
                                      train_x.iloc[valid_index], train_y[valid_index])

        # Train and predict with the chosen model (LightGBM, XGBoost, or CatBoost):
        if clf_name == "lgb":
            train_matrix = clf.Dataset(trn_x, label=trn_y)
            valid_matrix = clf.Dataset(val_x, label=val_y)
            params = {
                'boosting_type': 'gbdt',
                'objective': 'regression',
                'metric': 'mae',
                'min_child_weight': 6,
                'num_leaves': 2 ** 6,
                'lambda_l2': 10,
                'feature_fraction': 0.8,
                'bagging_fraction': 0.8,
                'bagging_freq': 4,
                'learning_rate': 0.05,
                'seed': seed,
                'nthread': -1,
                'verbose': -1,
                'device_type': 'gpu'  # requires a GPU build of LightGBM; use 'cpu' otherwise
            }
            model = clf.train(params, train_matrix, 1000, valid_sets=[train_matrix, valid_matrix],
                              categorical_feature=[],
                              callbacks=[clf.log_evaluation(200), clf.early_stopping(100)])
            val_pred = model.predict(val_x, num_iteration=model.best_iteration)
            test_pred = model.predict(test_x, num_iteration=model.best_iteration)

        if clf_name == "xgb":
            xgb_params = {
                'booster': 'gbtree',
                'objective': 'reg:squarederror',
                'eval_metric': 'mae',
                'max_depth': 5,
                'lambda': 10,
                'subsample': 0.7,
                'colsample_bytree': 0.7,
                'colsample_bylevel': 0.7,
                'eta': 0.05,
                'tree_method': 'hist',
                'seed': seed,
                'nthread': -1,
            }
            train_matrix = clf.DMatrix(trn_x, label=trn_y)
            valid_matrix = clf.DMatrix(val_x, label=val_y)
            test_matrix = clf.DMatrix(test_x)

            watchlist = [(train_matrix, 'train'), (valid_matrix, 'eval')]

            model = clf.train(xgb_params, train_matrix, num_boost_round=1000, evals=watchlist, verbose_eval=200,
                              early_stopping_rounds=100)
            val_pred = model.predict(valid_matrix)
            test_pred = model.predict(test_matrix)

        if clf_name == "cat":
            params = {'learning_rate': 0.05,
                      'depth': 5,
                      'bootstrap_type': 'Bernoulli',
                      'random_seed': seed,
                      'od_type': 'Iter',
                      'od_wait': 100,
                      'random_seed': 11,
                      'allow_writing_files': False,
                      "task_type": "GPU"}

            model = clf(iterations=1000, **params)
            model.fit(trn_x, trn_y, eval_set=(val_x, val_y),
                      metric_period=200,
                      use_best_model=True,
                      cat_features=[],
                      verbose=1)

            val_pred = model.predict(val_x)
            test_pred = model.predict(test_x)

        oof[valid_index] = val_pred
        test_predict += test_pred / kf.n_splits

        score = mean_absolute_error(val_y, val_pred)
        cv_scores.append(score)
        logging.info(f'Fold {i + 1} MAE: {score}')

    mean_cv_score = np.mean(cv_scores)
    logging.info(f'{clf_name} Mean CV MAE: {mean_cv_score}')

    return oof, test_predict

4. Main program
def main():
    train = pd.read_csv('./train.csv')
    test = pd.read_csv('./test.csv')

    # Data preprocessing
    train, test = preprocess_data(train, test)

    # Concatenate train and test data
    data = pd.concat([train, test], axis=0).reset_index(drop=True)
    data = data.sort_values(['id', 'dt'], ascending=False).reset_index(drop=True)

    # Split back into train and test
    train = data[data.target.notnull()].reset_index(drop=True)
    test = data[data.target.isnull()].reset_index(drop=True)

    # Select the input feature columns
    train_cols = [f for f in data.columns if f not in ['id', 'target', 'dt']]

    # LightGBM
    lgb_oof, lgb_test = cv_model(lgb, train[train_cols], train['target'], test[train_cols], 'lgb')
    # XGBoost
    xgb_oof, xgb_test = cv_model(xgb, train[train_cols], train['target'], test[train_cols], 'xgb')
    # CatBoost
    cat_oof, cat_test = cv_model(CatBoostRegressor, train[train_cols], train['target'], test[train_cols], 'cat')

    # Fuse by simple averaging
    final_test = (lgb_test + xgb_test + cat_test) / 3

    test['target'] = final_test
    test[['id', 'dt', 'target']].to_csv('submit.csv', index=None)
    logging.info('Submission file created.')


if __name__ == "__main__":
    main()

The other approach is stacking, a layered model-ensembling framework. With two layers, the first layer consists of several base learners trained on the original training set; the second-layer model then takes the first layer's outputs as features added to the training set and is trained on them, yielding the complete stacking model.

Layer 1 (analogous to the cv_model function):

  1. Split the training data into K folds (with 5 folds, each round uses four folds for training and one for validation).

  2. Train each model (e.g. RF, ET, GBDT, XGB) 5 times, each time holding out one fold for validation. After training, predict on the validation set and the test set. For the test set, each model produces 5 predictions, which are averaged; for the validation sets, after 5 rounds every training sample has exactly one out-of-fold prediction. At the end of this step, each training row carries 4 prediction labels (one per model), and each test row likewise carries 4.

Layer 2 (analogous to the stack_model function):

  1. Use the four prediction columns of the training set as new features with the true label as the target, pick a model and train it on this new training set, then predict on the test set's four prediction columns to produce the final result.

Reference code:

import numpy as np
import pandas as pd
from sklearn.model_selection import StratifiedKFold, KFold, GroupKFold
import lightgbm as lgb
import xgboost as xgb
from catboost import CatBoostRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.preprocessing import StandardScaler
import logging
import warnings
from sklearn.model_selection import RepeatedKFold

warnings.filterwarnings('ignore')
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')


def preprocess_data(train, test):
    # Check for and fill missing values
    if train.isnull().sum().sum() or test.isnull().sum().sum():
        logging.warning('Data contains missing values; filling them in.')
        train = train.fillna(train.mean())
        test = test.fillna(test.mean())

    # Standardize the features, excluding id, target, and dt
    scaler = StandardScaler()
    train[train.columns.difference(['id', 'target', 'dt'])] = scaler.fit_transform(
        train[train.columns.difference(['id', 'target', 'dt'])])
    test[test.columns.difference(['id', 'target', 'dt'])] = scaler.transform(
        test[test.columns.difference(['id', 'target', 'dt'])])

    return train, test


def cv_model(clf, train_x, train_y, test_x, clf_name, seed=2024):
    folds = 5
    kf = KFold(n_splits=folds, shuffle=True, random_state=seed)
    oof = np.zeros(train_x.shape[0])
    test_predict = np.zeros(test_x.shape[0])
    cv_scores = []

    for i, (train_index, valid_index) in enumerate(kf.split(train_x, train_y)):
        logging.info(f'Starting fold {i + 1}/{folds}')
        trn_x, trn_y, val_x, val_y = (train_x.iloc[train_index], train_y[train_index],
                                      train_x.iloc[valid_index], train_y[valid_index])

        if clf_name == "lgb":
            train_matrix = clf.Dataset(trn_x, label=trn_y)
            valid_matrix = clf.Dataset(val_x, label=val_y)
            params = {
                'boosting_type': 'gbdt',
                'objective': 'regression',
                'metric': 'mae',
                'min_child_weight': 6,
                'num_leaves': 2 ** 6,
                'lambda_l2': 10,
                'feature_fraction': 0.8,
                'bagging_fraction': 0.8,
                'bagging_freq': 4,
                'learning_rate': 0.05,
                'seed': seed,
                'nthread': -1,
                'verbose': -1,
                'device_type': 'gpu'  # requires a GPU build of LightGBM; use 'cpu' otherwise
            }
            model = clf.train(params, train_matrix, 1000, valid_sets=[train_matrix, valid_matrix],
                              categorical_feature=[],
                              callbacks=[clf.log_evaluation(200), clf.early_stopping(100)])
            val_pred = model.predict(val_x, num_iteration=model.best_iteration)
            test_pred = model.predict(test_x, num_iteration=model.best_iteration)

        if clf_name == "xgb":
            xgb_params = {
                'booster': 'gbtree',
                'objective': 'reg:squarederror',
                'eval_metric': 'mae',
                'max_depth': 5,
                'lambda': 10,
                'subsample': 0.7,
                'colsample_bytree': 0.7,
                'colsample_bylevel': 0.7,
                'eta': 0.05,
                'tree_method': 'hist',
                'seed': seed,
                'nthread': -1,
            }
            train_matrix = clf.DMatrix(trn_x, label=trn_y)
            valid_matrix = clf.DMatrix(val_x, label=val_y)
            test_matrix = clf.DMatrix(test_x)

            watchlist = [(train_matrix, 'train'), (valid_matrix, 'eval')]

            model = clf.train(xgb_params, train_matrix, num_boost_round=1000, evals=watchlist, verbose_eval=200,
                              early_stopping_rounds=100)
            val_pred = model.predict(valid_matrix)
            test_pred = model.predict(test_matrix)

        if clf_name == "cat":
            params = {'learning_rate': 0.05,
                      'depth': 5,
                      'bootstrap_type': 'Bernoulli',
                      'random_seed': seed,
                      'od_type': 'Iter',
                      'od_wait': 100,
                      'random_seed': 11,
                      'allow_writing_files': False,
                      "task_type": "GPU"}

            model = clf(iterations=1000, **params)
            model.fit(trn_x, trn_y, eval_set=(val_x, val_y),
                      metric_period=200,
                      use_best_model=True,
                      cat_features=[],
                      verbose=1)

            val_pred = model.predict(val_x)
            test_pred = model.predict(test_x)

        oof[valid_index] = val_pred
        test_predict += test_pred / kf.n_splits

        score = mean_absolute_error(val_y, val_pred)
        cv_scores.append(score)
        logging.info(f'Fold {i + 1} MAE: {score}')

    mean_cv_score = np.mean(cv_scores)
    logging.info(f'{clf_name} Mean CV MAE: {mean_cv_score}')

    return oof, test_predict


def stack_model(oof_1, oof_2, oof_3, predictions_1, predictions_2, predictions_3, y):
    '''
    oof_1, oof_2, oof_3 correspond to lgb_oof, xgb_oof, cat_oof;
    predictions_1, predictions_2, predictions_3 correspond to lgb_test, xgb_test, cat_test
    '''
    train_stack = pd.concat([pd.Series(oof_1), pd.Series(oof_2), pd.Series(oof_3)], axis=1)
    test_stack = pd.concat([pd.Series(predictions_1), pd.Series(predictions_2), pd.Series(predictions_3)], axis=1)

    oof = np.zeros((train_stack.shape[0],))
    predictions = np.zeros((test_stack.shape[0],))
    scores = []

    folds = RepeatedKFold(n_splits=5, n_repeats=2, random_state=2021)

    for fold_, (trn_idx, val_idx) in enumerate(folds.split(train_stack)):
        logging.info("fold n°{}".format(fold_ + 1))
        trn_data, trn_y = train_stack.iloc[trn_idx], y.iloc[trn_idx]
        val_data, val_y = train_stack.iloc[val_idx], y.iloc[val_idx]

        clf = Ridge(random_state=2021)
        clf.fit(trn_data, trn_y)

        oof[val_idx] = clf.predict(val_data)
        predictions += clf.predict(test_stack) / (5 * 2)

        score_single = mean_absolute_error(val_y, oof[val_idx])
        scores.append(score_single)
        logging.info(f'Fold {fold_ + 1}/5 MAE: {score_single}')

    mean_score = np.mean(scores)
    logging.info(f'Mean Stacking MAE: {mean_score}')

    return oof, predictions


def main():
    train = pd.read_csv('./train.csv')
    test = pd.read_csv('./test.csv')

    # Data preprocessing
    train, test = preprocess_data(train, test)

    # Concatenate train and test data
    data = pd.concat([train, test], axis=0).reset_index(drop=True)
    data = data.sort_values(['id', 'dt'], ascending=False).reset_index(drop=True)

    # Split back into train and test
    train = data[data.target.notnull()].reset_index(drop=True)
    test = data[data.target.isnull()].reset_index(drop=True)

    # Select the input feature columns
    train_cols = [f for f in data.columns if f not in ['id', 'target', 'dt']]

    # LightGBM
    lgb_oof, lgb_test = cv_model(lgb, train[train_cols], train['target'], test[train_cols], 'lgb')
    # XGBoost
    xgb_oof, xgb_test = cv_model(xgb, train[train_cols], train['target'], test[train_cols], 'xgb')
    # CatBoost
    cat_oof, cat_test = cv_model(CatBoostRegressor, train[train_cols], train['target'], test[train_cols], 'cat')

    # Fuse with stacking
    stack_oof, stack_pred = stack_model(pd.Series(lgb_oof), pd.Series(xgb_oof), pd.Series(cat_oof),
                                        pd.Series(lgb_test), pd.Series(xgb_test), pd.Series(cat_test), train['target'])

    test['target'] = stack_pred
    test[['id', 'dt', 'target']].to_csv('submit.csv', index=None)
    logging.info('Submission file created.')


if __name__ == "__main__":
    main()

Deep Learning

Finally, a deep learning approach: an LSTM encoder-decoder that takes the previous 100 observations of each series as input and predicts the next 10 steps.

Reference code:

import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, RepeatVector, TimeDistributed, Dropout
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau

# Read the train and test data files
train = pd.read_csv('./train.csv')
test = pd.read_csv('./test.csv')


# Data preprocessing function
def preprocess_data(df, look_back=100):
    # Scale the target (assumed to be the 4th column) to [0, 1]
    scaler = MinMaxScaler(feature_range=(0, 1))
    df[df.columns[3]] = scaler.fit_transform(df[df.columns[3]].values.reshape(-1, 1))

    # Group the data by id
    grouped = df.groupby('id')
    datasets = {}
    for id, group in grouped:
        datasets[id] = group.values

    # Build the training sequences
    X, Y = [], []
    for id, data in datasets.items():
        for i in range(10, 15):  # build 5 sequences per id
            a = data[i:(i + look_back), 3]
            a = np.append(a, np.array([0] * (look_back - len(a))))
            X.append(a[::-1])
            Y.append(data[i - 10:i, 3][::-1])

    # Build the inference sequences
    OOT = []
    for id, data in datasets.items():
        a = data[:100, 3]
        a = np.append(a, np.array([0] * (100 - len(a))))
        OOT.append(a[::-1])

    return np.array(X, dtype=np.float64), np.array(Y, dtype=np.float64), np.array(OOT, dtype=np.float64), scaler


# Define the model
def build_model(look_back, n_features, n_output):
    model = Sequential()
    model.add(LSTM(100, input_shape=(look_back, n_features), return_sequences=True))
    model.add(Dropout(0.3))
    model.add(LSTM(100, return_sequences=False))
    model.add(Dropout(0.3))
    model.add(RepeatVector(n_output))
    model.add(LSTM(100, return_sequences=True))
    model.add(Dropout(0.3))
    model.add(TimeDistributed(Dense(1)))
    model.compile(loss='mean_squared_error', optimizer=Adam(0.001))
    return model


# Hyperparameters
look_back = 100  # input sequence length
n_features = 1  # one feature per time step
n_output = 10  # predict the next 10 time steps

# Preprocess the data
X, Y, OOT, scaler = preprocess_data(train, look_back=look_back)

# Reshape the inputs to (samples, timesteps, features)
X = X.reshape((X.shape[0], look_back, n_features))
Y = Y.reshape((Y.shape[0], n_output, n_features))
OOT = OOT.reshape((OOT.shape[0], look_back, n_features))

# Build the model
model = build_model(look_back, n_features, n_output)

# Early stopping and learning-rate scheduling
early_stopping = EarlyStopping(monitor='val_loss', patience=5, verbose=1)
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=3, verbose=1)

# Train the model
history = model.fit(X, Y, epochs=100, batch_size=64, validation_split=0.2, verbose=1,
                    callbacks=[early_stopping, reduce_lr])

# Predict
predicted_values = model.predict(OOT)

# Inverse-transform the predictions back to the original scale
predicted_values = scaler.inverse_transform(predicted_values.reshape(-1, 1)).reshape(predicted_values.shape)

# Assemble the predictions into the submission format and save to CSV
test_ids = test['id'].unique()
predicted_df = pd.DataFrame({
    'id': np.repeat(test_ids, n_output),
    'dt': np.tile(range(n_output), len(test_ids)),
    'target': predicted_values.flatten()
})

predicted_df.to_csv('submit.csv', index=None)

print(predicted_values)
