Preface
The previous post processed the three datasets: label transformation, outlier removal, building the complete time series, regression-based imputation of missing values, and feature construction, saving the result as com_training.txt. If you haven't read it, see the previous post.
The features in the current dataset are ready for modeling. Among them, lagging1 through lagging5 hold the travel_time from 2*i minutes before the current interval: lagging1 is the previous 2-minute slot, lagging2 the one before that, and so on.
After carefully working through Di Ge's tutorial, I finally have a clear picture of the problem. Di Ge is the GOAT!!
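For intuition, here is a sketch of how lagging features like these are typically built. This is my own illustration, not the shared code (the actual construction happened in the previous post); it assumes the rows are sorted by time_interval_begin within each link_ID and the intervals are a uniform 2 minutes apart:

# Hypothetical sketch, not the actual code from the previous post:
# lagging_i = travel_time of the interval 2*i minutes earlier, per link
for i in range(1, 6):
    df['lagging' + str(i)] = df.groupby('link_ID')['travel_time'].shift(i)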
1. Preparation
As usual, step one is importing the packages and the dataset. Note: ultis is an external Python file containing a few helper functions (see the resource link at the end of this post).
import pandas as pd
import numpy as np
import xgboost as xgb
import joblib
from sklearn.model_selection import ParameterGrid
from ultis import *  # helper functions, shared at the end of the post

# keep link_ID as a string rather than letting pandas parse it as a number
df = pd.read_csv('com_training.txt', delimiter=';',
                 parse_dates=['time_interval_begin'], dtype={'link_ID': object})
df.head()
df.head() gives a preview of the dataset.
2. Splitting the Feature Sets
Here the columns are split into two feature lists. One, named train_feature, contains lagging1 through lagging5 and the rest of the time-series features; it is used to build the XGBoost regression model and to make predictions. The other, named valid_feature, is used to compute the model's cross-validation loss, which tells us how good a model is during training; see ultis.py for the function details.
base_feature = [x for x in df.columns.values.tolist()
                if x not in ['link_ID', 'date', 'time_interval_begin', 'imputation1',
                             'minute_series', 'area', 'hour_en', 'day_of_week', 'travel_time']]
base_feature = [x for x in base_feature
                if x not in ['lagging1', 'lagging2', 'lagging3', 'lagging4', 'lagging5']]

# features fed to the XGBoost regressor
train_feature = list(base_feature)
train_feature.extend(['lagging1', 'lagging2', 'lagging3', 'lagging4', 'lagging5'])

# columns needed by the custom cross-validation in ultis.py
valid_feature = list(base_feature)
valid_feature.extend(['minute_series', 'travel_time'])
3. Training the Model
We use a ParameterGrid grid search to find suitable parameters; feel free to add more candidate values inside the brackets to widen the search (my poor little laptop can't take any more ==).
params_grid = {
    'learning_rate': [0.05], 'n_estimators': [100],
    'max_depth': [7], 'min_child_weight': [1],
    'subsample': [0.6], 'reg_alpha': [2]
}
grid = ParameterGrid(params_grid)
Modeling uses cross-validation, but (important!) we cannot use the cross-validation helpers from sklearn.model_selection here: if the time series is chopped into discontinuous pieces, how could the resulting model predict what happens in the next time period? Instead we split the data by hand into five chunks of consecutive time intervals, assigning each date range to its own dataframe; in each fold, four of the dataframes are concatenated into the training set and the remaining one serves as the validation set. The code:
result = {}

def train(df, params, best=1):
    # five chunks of consecutive time intervals, roughly 25 days each
    train1 = df.loc[df['time_interval_begin'] <= pd.to_datetime('2017-03-24')]
    train2 = df.loc[(df['time_interval_begin'] > pd.to_datetime('2017-03-24')) & (df['time_interval_begin'] <= pd.to_datetime('2017-04-18'))]
    train3 = df.loc[(df['time_interval_begin'] > pd.to_datetime('2017-04-18')) & (df['time_interval_begin'] <= pd.to_datetime('2017-05-12'))]
    train4 = df.loc[(df['time_interval_begin'] > pd.to_datetime('2017-05-12')) & (df['time_interval_begin'] <= pd.to_datetime('2017-06-06'))]
    train5 = df.loc[(df['time_interval_begin'] > pd.to_datetime('2017-06-06')) & (df['time_interval_begin'] <= pd.to_datetime('2017-06-30'))]

    # each fold trains on four chunks and validates on the fifth
    model_1, loss_1, best_score_1, best_iteration_1 = fit_evalute(pd.concat([train1, train2, train3, train4]), train5, params)
    print(loss_1, best_score_1, best_iteration_1)
    model_2, loss_2, best_score_2, best_iteration_2 = fit_evalute(pd.concat([train2, train3, train1, train5]), train4, params)
    print(loss_2, best_score_2, best_iteration_2)
    model_3, loss_3, best_score_3, best_iteration_3 = fit_evalute(pd.concat([train2, train1, train4, train5]), train3, params)
    print(loss_3, best_score_3, best_iteration_3)
    model_4, loss_4, best_score_4, best_iteration_4 = fit_evalute(pd.concat([train1, train3, train4, train5]), train2, params)
    print(loss_4, best_score_4, best_iteration_4)
    model_5, loss_5, best_score_5, best_iteration_5 = fit_evalute(pd.concat([train2, train3, train4, train5]), train1, params)
    print(loss_5, best_score_5, best_iteration_5)

    # aggregate the metrics across the five folds
    loss = [loss_1, loss_2, loss_3, loss_4, loss_5]
    result['loss_std'] = np.std(loss)
    result['loss_mean'] = np.mean(loss)
    result['n_estimators'] = str([best_iteration_1, best_iteration_2, best_iteration_3, best_iteration_4, best_iteration_5])
    result['best_score'] = str([best_score_1, best_score_2, best_score_3, best_score_4, best_score_5])
    print(result)

    # keep track of the best mean loss seen so far
    if np.mean(loss) < best:
        best = np.mean(loss)
        print("best with: " + str(result))
    return best
The fit_evalute function runs one training pass and returns the fitted model, its loss, its best_score, and its best iteration:
from sklearn.model_selection import train_test_split

def fit_evalute(df, df_valid, params):
    x = df[train_feature].values
    y = df['travel_time'].values
    # hold out 20% of the training chunks as the early-stopping eval set
    x_train, x_valid, y_train, y_valid = train_test_split(x, y, test_size=0.2, random_state=929)

    # bucket_data, cross_valid, and mape_ln come from ultis.py
    df_valid = df_valid[valid_feature].values
    valid_data = bucket_data(df_valid)

    eval_set = [(x_valid, y_valid)]
    model = xgb.XGBRegressor(
        learning_rate=params['learning_rate'], n_estimators=params['n_estimators'],
        max_depth=params['max_depth'], min_child_weight=params['min_child_weight'],
        subsample=params['subsample'], reg_alpha=params['reg_alpha'])
    model.fit(x_train, y_train, early_stopping_rounds=10, verbose=False,
              eval_metric=mape_ln, eval_set=eval_set)
    return model, cross_valid(model, valid_data, lagging=5), model.best_score, model.best_iteration
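The actual mape_ln lives in ultis.py (shared at the end). As a rough idea of its shape: travel_time is modeled in log1p space (note the np.expm1 in the prediction step later), so a MAPE-style metric has to invert that transform first. Something like the sketch below, which is my own guess, not the real file:

# Hypothetical sketch of a mape_ln-style metric, NOT the actual ultis.py code.
# Older xgboost calls a custom eval_metric as func(predictions, dtrain).
def mape_ln_sketch(y_predicted, y_true):
    y_true = y_true.get_label()  # labels are in log1p space
    mape = np.mean(np.abs(np.expm1(y_predicted) - np.expm1(y_true))
                   / np.expm1(y_true))
    return 'mape', mape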
Each fold's results are printed as it finishes, and the metrics from all five folds are collected in result and printed together at the end. The overall loss and best_score values fall within a reasonable range, which suggests the model is valid.
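To actually run the search, loop over grid and carry the best mean loss along; with train and fit_evalute defined as above, the driver is just:

best = 1
for params in grid:
    print(params)
    best = train(df, params, best=best)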
4. Time-Series Prediction
The model is built! Now we want to generate travel_time for July 1 through July 31; how do we do that?
The idea: use the travel_time from 2, 4, 6, 8, and 10 minutes before an interval (for example, to predict July 1 0:00-0:02), and then feed the first prediction back in as a lagging feature for the second prediction, and so on, rolling the whole sequence forward. In the code below, the seed rows are the last observed interval (minute 58) of hours 7, 14, and 17, so each rolling run generates the following hour.
Note: this step should use the best model parameters found in the previous section.
The code:
def submission(train_feature, df, model):
    # seed rows: the last observed 2-minute interval (minute 58) before each
    # target hour of every July day
    test_df = df.loc[((df['time_interval_begin'].dt.year == 2017) &
                      (df['time_interval_begin'].dt.month == 7) &
                      (df['time_interval_begin'].dt.hour.isin([7, 14, 17])) &
                      (df['time_interval_begin'].dt.minute == 58))].copy()

    # shift the laggings once so lagging1 holds the last observed travel_time
    test_df['lagging5'] = test_df['lagging4']
    test_df['lagging4'] = test_df['lagging3']
    test_df['lagging3'] = test_df['lagging2']
    test_df['lagging2'] = test_df['lagging1']
    test_df['lagging1'] = test_df['travel_time']

    # roll forward 30 steps of 2 minutes = one full hour of predictions
    for i in range(30):
        x_test = test_df[train_feature]
        y_prediction = model.predict(x_test)
        # lagging1-5 are passed along on each iteration
        test_df['lagging5'] = test_df['lagging4']
        test_df['lagging4'] = test_df['lagging3']
        test_df['lagging3'] = test_df['lagging2']
        test_df['lagging2'] = test_df['lagging1']
        test_df['lagging1'] = y_prediction

        # travel_time was log1p-transformed, so invert with expm1
        test_df['predict'] = np.expm1(y_prediction)
        test_df['time_interval_begin'] = test_df['time_interval_begin'] + pd.DateOffset(minutes=2)
        test_df['time_interval'] = test_df['time_interval_begin'].map(
            lambda x: '[' + str(x) + ',' + str(x + pd.DateOffset(minutes=2)) + ')')
        test_df.time_interval = test_df.time_interval.astype(object)
        # append (mode='a') so all 30 steps end up in the file; without it each
        # iteration would overwrite the previous ones. Delete re.csv before rerunning.
        test_df.to_csv('re.csv', sep=';', header=False, index=False, mode='a')
# use the best parameters found by the grid search in Section 3
params = {'learning_rate': 0.05, 'n_estimators': 100, 'max_depth': 7,
          'min_child_weight': 1, 'subsample': 0.6, 'reg_alpha': 2}

# train on everything before July
train_df = df.loc[df['time_interval_begin'] < pd.to_datetime('2017-07-01')]
train_df = train_df.dropna()
x = train_df[train_feature].values
y = train_df['travel_time'].values
x_train, x_valid, y_train, y_valid = train_test_split(x, y, test_size=0.2, random_state=929)
eval_set = [(x_valid, y_valid)]

model = xgb.XGBRegressor(
    learning_rate=params['learning_rate'], n_estimators=params['n_estimators'],
    max_depth=params['max_depth'], min_child_weight=params['min_child_weight'],
    subsample=params['subsample'], reg_alpha=params['reg_alpha'])
model.fit(x_train, y_train, early_stopping_rounds=10, verbose=False,
          eval_metric=mape_ln, eval_set=eval_set)
submission(train_feature, df, model)
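Since joblib is imported at the top, the fitted model can also be saved for later reuse (the filename below is my own choice):

# optional: persist the fitted model with joblib
joblib.dump(model, 'xgb_travel_time.pkl')  # filename is arbitrary
# later: model = joblib.load('xgb_travel_time.pkl')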
And that's everything; the results are saved to re.csv. Give it a try!
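As a quick sanity check, you can read the submission back (re.csv is written without a header row, and its columns follow test_df's column order):

sub = pd.read_csv('re.csv', sep=';', header=None)
print(sub.shape)  # rows = 30 steps x seed rows
sub.head()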
The dataset and .py files used in this post are available via Baidu Netdisk:
Link: https://pan.baidu.com/s/18e8pO6h-LAVvsoJhsTnxqA
Extraction code: JRSY