Task 2: Getting Started with LightGBM
Basic Concepts
GBDT
GBDT (Gradient Boosting Decision Tree) is an enduringly popular model in machine learning. Its main idea is to iteratively train weak learners (decision trees) and combine them into a strong model; it trains well and is not prone to overfitting.
GBDT is widely used in industry, typically for tasks such as multi-class classification, click-through-rate (CTR) prediction, and search ranking. It is also a lethal weapon in data mining competitions: reportedly, more than half of the winning solutions on Kaggle are based on GBDT.
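To make the idea concrete, here is a toy sketch of gradient boosting for squared loss (this is only an illustration of the principle, not how LightGBM is implemented): each new tree is fit to the residuals of the current ensemble, which are exactly the negative gradients of the loss, and its predictions are added with a shrinkage factor (the learning rate). The helper names `gbdt_fit` and `gbdt_predict` are invented for this sketch.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gbdt_fit(X, y, n_trees=100, learning_rate=0.1, max_depth=3):
    """Toy GBDT for squared loss (illustration only)."""
    init = y.mean()                      # start from a constant prediction
    pred = np.full(len(y), init)
    trees = []
    for _ in range(n_trees):
        residuals = y - pred             # negative gradient of 0.5 * (y - pred)^2
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residuals)
        pred += learning_rate * tree.predict(X)   # shrunken additive update
        trees.append(tree)
    return init, trees

def gbdt_predict(init, trees, X, learning_rate=0.1):
    pred = np.full(len(X), init)
    for tree in trees:
        pred += learning_rate * tree.predict(X)
    return pred

# usage (X_train, y_train, X_test are numpy arrays):
# init, trees = gbdt_fit(X_train, y_train)
# y_pred = gbdt_predict(init, trees, X_test)
```

LightGBM builds on this same additive scheme, adding histogram-based split finding, leaf-wise tree growth, and other engineering optimizations.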
LightGBM
LightGBM (Light Gradient Boosting Machine) is a framework that implements the GBDT algorithm. It supports efficient parallel training and offers faster training speed, lower memory consumption, better accuracy, and distributed support for quickly processing massive data.
The LightGBM framework also includes models such as random forest and logistic regression, and it is typically applied to binary classification, multi-class classification, and ranking scenarios.
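As a minimal, self-contained sketch of the native `lgb.train` API (the synthetic data and parameter values below are invented for illustration, not a recommended configuration):

```python
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import train_test_split

# synthetic binary-classification data: 1000 samples, 10 numeric features
X = np.random.rand(1000, 10)
y = (X[:, 0] + X[:, 1] > 1).astype(int)
X_trn, X_val, y_trn, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

train_set = lgb.Dataset(X_trn, label=y_trn)
valid_set = lgb.Dataset(X_val, label=y_val, reference=train_set)
params = {'objective': 'binary', 'metric': 'auc', 'learning_rate': 0.1, 'num_leaves': 31}

# train with early stopping on the validation set
model = lgb.train(params, train_set, num_boost_round=200, valid_sets=[valid_set],
                  callbacks=[lgb.early_stopping(stopping_rounds=20)])
pred = model.predict(X_val, num_iteration=model.best_iteration)  # probabilities of the positive class
```

The `early_stopping` callback stops training once the validation metric has not improved for the given number of rounds; the same pattern appears in the full code later in this section.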
For example, in a personalized product recommendation scenario, a click-through prediction model is usually needed: the user's past behavior (clicks, impressions without clicks, purchases, and so on) serves as training data to predict the probability that the user will click or buy. Features are extracted from user behavior and user attributes, including (see the sketch after this list):
- Categorical features: string type, e.g. gender (male/female) or item type (clothing, toys, electronics, etc.).
- Numerical features: integer or float type, e.g. user activity level or item price.
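The sketch below shows one way such features might be passed to LightGBM: columns cast to pandas' `category` dtype are treated as categorical features natively, while numeric columns are used as-is. The column names and toy data are invented for illustration.

```python
import pandas as pd
import lightgbm as lgb

# toy click-prediction data; columns are illustrative only
df = pd.DataFrame({
    'gender':    ['male', 'female', 'female', 'male', 'female', 'male'],
    'item_type': ['clothing', 'toys', 'electronics', 'toys', 'clothing', 'electronics'],
    'price':     [59.0, 23.5, 199.0, 45.0, 89.9, 320.0],   # numerical feature
    'clicked':   [1, 0, 1, 0, 1, 0],                       # label: clicked or not
})

# cast string columns to 'category' so LightGBM splits on them natively
for col in ['gender', 'item_type']:
    df[col] = df[col].astype('category')

feature_cols = ['gender', 'item_type', 'price']
train_set = lgb.Dataset(df[feature_cols], label=df['clicked'])
params = {'objective': 'binary', 'min_data_in_leaf': 2, 'verbose': -1}
model = lgb.train(params, train_set, num_boost_round=10)
click_prob = model.predict(df[feature_cols])   # predicted click probabilities
```

Casting to `category` lets LightGBM split directly on category values, with no need for one-hot encoding.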
The complete code is as follows:

import numpy as np
import pandas as pd
import sys
!{sys.executable} -m pip install lightgbm
import lightgbm as lgb
from sklearn.metrics import mean_squared_log_error, mean_absolute_error, mean_squared_error
import tqdm
import os
import gc
import argparse
import warnings
warnings.filterwarnings('ignore')
import matplotlib.pyplot as plt
# Read the training and test sets (file names are assumed; adjust to your data paths)
train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')

# Bar chart of the average target for each type
type_target_df = train.groupby('type')['target'].mean().reset_index()
plt.figure(figsize=(8, 4))
plt.bar(type_target_df['type'], type_target_df['target'], color=['blue', 'green'])
plt.xlabel('Type')
plt.ylabel('Average Target Value')
plt.title('Bar Chart of Target by Type')
plt.show()
# Line chart of target over time for a single id
specific_id_df = train[train['id'] == '00037f39cf']
plt.figure(figsize=(10, 5))
plt.plot(specific_id_df['dt'], specific_id_df['target'], marker='o', linestyle='-')
plt.xlabel('DateTime')
plt.ylabel('Target Value')
plt.title("Line Chart of Target for ID '00037f39cf'")
plt.show()
# Merge the training and test data, then sort by id and dt
data = pd.concat([test, train], axis=0, ignore_index=True)
data = data.sort_values(['id','dt'], ascending=False).reset_index(drop=True)
# Lag (history-shift) features: target shifted by 10-29 steps within each id
for i in range(10, 30):
    data[f'last{i}_target'] = data.groupby(['id'])['target'].shift(i)

# Window statistic: mean of three lag features
data['win3_mean_target'] = (data['last10_target'] + data['last11_target'] + data['last12_target']) / 3
# Split the data back: rows with a non-null target are training data, the rest are test data
train = data[data.target.notnull()].reset_index(drop=True)
test = data[data.target.isnull()].reset_index(drop=True)

# Select the input feature columns (everything except id and target)
train_cols = [f for f in data.columns if f not in ['id','target']]
def time_model(lgb, train_df, test_df, cols):
    # Split into training and validation sets by dt
    trn_x, trn_y = train_df[train_df.dt>=31][cols], train_df[train_df.dt>=31]['target']
    val_x, val_y = train_df[train_df.dt<=30][cols], train_df[train_df.dt<=30]['target']
    # Build the model input data
    train_matrix = lgb.Dataset(trn_x, label=trn_y)
    valid_matrix = lgb.Dataset(val_x, label=val_y)
    # LightGBM parameters
    lgb_params = {
        'boosting_type': 'gbdt',
        'objective': 'regression',
        'metric': 'mse',
        'min_child_weight': 5,
        'num_leaves': 2 ** 5,        # at most 32 leaves per tree
        'lambda_l2': 10,             # L2 regularization
        'feature_fraction': 0.8,     # fraction of features sampled per tree
        'bagging_fraction': 0.8,     # fraction of rows sampled
        'bagging_freq': 4,           # perform bagging every 4 iterations
        'learning_rate': 0.05,
        'seed': 2024,
        'nthread': 16
    }
    # Train the model with early stopping
    model = lgb.train(
        lgb_params,
        train_matrix,
        num_boost_round=50000,
        valid_sets=[train_matrix, valid_matrix],
        categorical_feature=[],
        callbacks=[lgb.early_stopping(stopping_rounds=500)]
    )
    # Predict on the validation and test sets
    val_pred = model.predict(val_x, num_iteration=model.best_iteration)
    test_pred = model.predict(test_df[cols], num_iteration=model.best_iteration)
    # Offline score on the validation set (MSE)
    score = mean_squared_error(val_pred, val_y)
    print(score)
    return val_pred, test_pred
lgb_oof, lgb_test = time_model(lgb, train, test, train_cols)
# Save the submission file locally
test['target'] = lgb_test
test[['id','dt','target']].to_csv('submit.csv', index=None)
Run output:
[LightGBM] [Info] Auto-choosing col-wise multi-threading, the overhead of testing was 0.477147 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 5627
[LightGBM] [Info] Number of data points in the train set: 2760666, number of used features: 23
[LightGBM] [Info] Start training from score 32.114748
Training until validation scores don't improve for 500 rounds
Early stopping, best iteration is:
[956]	training's l2: 171.445	valid_1's l2: 183.575
183.5745442665828