LightGBM Usage Cheat Sheet

This article covers practical usage of the LightGBM library: reading data, training models, setting sample weights, saving and loading models, continuing training with adjusted parameters, custom loss functions, and integration with sklearn. The examples walk through key steps such as early stopping, feature importance inspection, and hyperparameter tuning, and use sklearn for model evaluation and grid search.



1. Read CSV data and train with specified parameters
# coding: utf-8
import json
import lightgbm as lgb
import pandas as pd
from sklearn.metrics import mean_squared_error
# Load the datasets
print('Load data...')
df_train = pd.read_csv('regression.train.txt', header=None, sep='\t')
df_test = pd.read_csv('regression.test.txt', header=None, sep='\t')
# Split out labels and features for the train and test sets
y_train = df_train[0].values
y_test = df_test[0].values
X_train = df_train.drop(0, axis=1).values
X_test = df_test.drop(0, axis=1).values
# Build LightGBM Dataset objects
lgb_train = lgb.Dataset(X_train, y_train)
lgb_eval = lgb.Dataset(X_test, y_test, reference=lgb_train)

# Pick a set of parameters
params = {
    'task': 'train',
    'boosting_type': 'gbdt',
    'objective': 'regression',
    'metric': {'l2', 'auc'},
    'num_leaves': 31,
    'learning_rate': 0.05,
    'feature_fraction': 0.9,
    'bagging_fraction': 0.8,
    'bagging_freq': 5,
    'verbose': 0
}

print('Start training...')
# Train
gbm = lgb.train(params,
                lgb_train,
                num_boost_round=20,
                valid_sets=lgb_eval,
                early_stopping_rounds=5)

# Save the model
print('Saving model...')
# Write the model to a file
gbm.save_model('model.txt')

print('Start predicting...')
# Predict
y_pred = gbm.predict(X_test, num_iteration=gbm.best_iteration)
# Evaluate
print('The RMSE of prediction is:')
print(mean_squared_error(y_test, y_pred) ** 0.5)
#Load data...
#Start training...
#[1]	valid_0's l2: 0.24288	valid_0's auc: 0.764496
#Training until validation scores don't improve for 5 rounds.
#[2]	valid_0's l2: 0.239307	valid_0's auc: 0.766173
#[3]	valid_0's l2: 0.235559	valid_0's auc: 0.785547
#[4]	valid_0's l2: 0.230771	valid_0's auc: 0.797786
#[5]	valid_0's l2: 0.226297	valid_0's auc: 0.805155
#[6]	valid_0's l2: 0.223692	valid_0's auc: 0.800979
#[7]	valid_0's l2: 0.220941	valid_0's auc: 0.806566
#[8]	valid_0's l2: 0.217982	valid_0's auc: 0.808566
#[9]	valid_0's l2: 0.215351	valid_0's auc: 0.809041
#[10]	valid_0's l2: 0.213064	valid_0's auc: 0.805953
#[11]	valid_0's l2: 0.211053	valid_0's auc: 0.804631
#[12]	valid_0's l2: 0.209336	valid_0's auc: 0.802922
#[13]	valid_0's l2: 0.207492	valid_0's auc: 0.802011
#[14]	valid_0's l2: 0.206016	valid_0's auc: 0.80193
#Early stopping, best iteration is:
#[9]	valid_0's l2: 0.215351	valid_0's auc: 0.809041
#Saving model...
#Start predicting...
#The RMSE of prediction is:
#0.4640593794679212
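
Note: in LightGBM 4.x the early_stopping_rounds argument was removed from lgb.train in favor of callbacks. A minimal sketch of the same training call written with the callback API, assuming LightGBM >= 4.0:

# The same training run expressed with callbacks (LightGBM >= 4.0)
gbm = lgb.train(params,
                lgb_train,
                num_boost_round=20,
                valid_sets=[lgb_eval],
                callbacks=[lgb.early_stopping(stopping_rounds=5),  # stop after 5 rounds without improvement
                           lgb.log_evaluation(period=1)])          # log metrics every round
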
2. Train with sample weights
# coding: utf-8
import json
import lightgbm as lgb
import pandas as pd
import numpy as np
from sklearn.metrics import mean_squared_error
import warnings
warnings.filterwarnings("ignore")
# Load the datasets
print('Load data...')
df_train = pd.read_csv('binary.train', header=None, sep='\t')
df_test = pd.read_csv('binary.test', header=None, sep='\t')
W_train = pd.read_csv('binary.train.weight', header=None)[0]
W_test = pd.read_csv('binary.test.weight', header=None)[0]

y_train = df_train[0].values
y_test = df_test[0].values
X_train = df_train.drop(0, axis=1).values
X_test = df_test.drop(0, axis=1).values
num_train, num_feature = X_train.shape

# Attach the weights when building the Dataset objects
lgb_train = lgb.Dataset(X_train, y_train,
                        weight=W_train, free_raw_data=False)
lgb_eval = lgb.Dataset(X_test, y_test, reference=lgb_train,
                       weight=W_test, free_raw_data=False)

# Set parameters
params = {
    'boosting_type': 'gbdt',
    'objective': 'binary',
    'metric': 'binary_logloss',
    'num_leaves': 31,
    'learning_rate': 0.05,
    'feature_fraction': 0.9,
    'bagging_fraction': 0.8,
    'bagging_freq': 5,
    'verbose': 0
}

# Generate feature names
feature_name = ['feature_' + str(col) for col in range(num_feature)]

print('Start training...')
gbm = lgb.train(params,
                lgb_train,
                num_boost_round=10,
                valid_sets=lgb_train,  # evaluate on the training set
                feature_name=feature_name,
                categorical_feature=[21])
#Load data...
#Start training...
#[1]	valid_0's binary_logloss: 0.681265
#[2]	valid_0's binary_logloss: 0.673318
#[3]	valid_0's binary_logloss: 0.664193
#[4]	valid_0's binary_logloss: 0.655501
#[5]	valid_0's binary_logloss: 0.650956
#[6]	valid_0's binary_logloss: 0.644803
#[7]	valid_0's binary_logloss: 0.637567
#[8]	valid_0's binary_logloss: 0.631224
#[9]	valid_0's binary_logloss: 0.624958
#[10]	valid_0's binary_logloss: 0.619398
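
The weight files above ship with the LightGBM example data. When no weight file is available, one common alternative (a sketch, not part of the original example) is to derive per-sample weights from class frequencies:

# Hypothetical inverse-frequency class weights for a 0/1 label vector
classes, counts = np.unique(y_train, return_counts=True)
class_weight = {c: len(y_train) / (len(classes) * n) for c, n in zip(classes, counts)}
W_train_balanced = np.array([class_weight[y] for y in y_train])
lgb_train_balanced = lgb.Dataset(X_train, y_train,
                                 weight=W_train_balanced, free_raw_data=False)
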
3. Model saving, loading, and prediction
# Check the feature names
print('Finished first 10 rounds...')
print('The 7th feature name is:')
print(repr(lgb_train.feature_name[6]))

# Save the model
gbm.save_model('./model/lgb_model.txt')

# Feature names
print('Feature names:')
print(gbm.feature_name())

# Feature importances
print('Feature importances:')
print(list(gbm.feature_importance()))

# Load the model back
print('Load model to predict')
bst = lgb.Booster(model_file='./model/lgb_model.txt')
# Predict
y_pred = bst.predict(X_test)
# Evaluate on the test set
print('RMSE on the test set:')
print(mean_squared_error(y_test, y_pred) ** 0.5)
#Finished first 10 rounds...
#The 7th feature name is:
#'feature_6'
#Feature names:
#[u'feature_0', u'feature_1', u'feature_2', u'feature_3', u'feature_4', u'feature_5', u'feature_6', u'feature_7', u'feature_8', u'feature_9', u'feature_10', u'feature_11', u'feature_12', u'feature_13', u'feature_14', u'feature_15', u'feature_16', u'feature_17', u'feature_18', u'feature_19', u'feature_20', u'feature_21', u'feature_22', u'feature_23', u'feature_24', u'feature_25', u'feature_26', u'feature_27']
#Feature importances:
#[8, 5, 1, 19, 7, 33, 2, 0, 2, 10, 5, 2, 0, 9, 3, 3, 0, 2, 2, 5, 1, 0, 36, 3, 33, 45, 29, 35]
#Load model to predict
#RMSE on the test set:
#0.4629245607636925
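
The script imports json at the top but never uses it; one natural use (a sketch, assuming you also want a human-readable export) is dumping the booster structure to JSON:

# Export the trained booster as JSON alongside the text model
model_json = gbm.dump_model()  # returns the model as a Python dict
with open('./model/lgb_model.json', 'w') as f:
    json.dump(model_json, f, indent=4)
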
4. Continue training from a previous model
# Continue training
# initialize from the model saved at ./model/lgb_model.txt
gbm = lgb.train(params,
                lgb_train,
                num_boost_round=10,
                init_model='./model/lgb_model.txt',
                valid_sets=lgb_eval)

print('Finished rounds 10-20 with model loaded from file as the initial model...')

# Adjust hyperparameters while training continues
# here the learning rate is decayed at each iteration
gbm = lgb.train(params,
                lgb_train,
                num_boost_round=10,
                init_model=gbm,
                learning_rates=lambda iter: 0.05 * (0.99 ** iter),
                valid_sets=lgb_eval)

print('Finished rounds 20-30 with a decaying learning rate...')

# Adjust other hyperparameters mid-training
gbm = lgb.train(params,
                lgb_train,
                num_boost_round=10,
                init_model=gbm,
                valid_sets=lgb_eval,
                callbacks=[lgb.reset_parameter(bagging_fraction=[0.7] * 5 + [0.6] * 5)])
print('Finished rounds 30-40 with changing bagging_fraction...')
#[11]	valid_0's binary_logloss: 0.616177
#[12]	valid_0's binary_logloss: 0.611792
#[13]	valid_0's binary_logloss: 0.607043
#[14]	valid_0's binary_logloss: 0.602314
#[15]	valid_0's binary_logloss: 0.598433
#[16]	valid_0's binary_logloss: 0.595238
#[17]	valid_0's binary_logloss: 0.592047
#[18]	valid_0's binary_logloss: 0.588673
#[19]	valid_0's binary_logloss: 0.586084
#[20]	valid_0's binary_logloss: 0.584033
#Finished rounds 10-20 with model loaded from file as the initial model...
#[21]	valid_0's binary_logloss: 0.616177
#[22]	valid_0's binary_logloss: 0.611834
#[23]	valid_0's binary_logloss: 0.607177
#[24]	valid_0's binary_logloss: 0.602577
#[25]	valid_0's binary_logloss: 0.59831
#[26]	valid_0's binary_logloss: 0.595259
#[27]	valid_0's binary_logloss: 0.592201
#[28]	valid_0's binary_logloss: 0.589017
#[29]	valid_0's binary_logloss: 0.586597
#[30]	valid_0's binary_logloss: 0.584454
#Finished rounds 20-30 with a decaying learning rate...
#[31]	valid_0's binary_logloss: 0.616053
#[32]	valid_0's binary_logloss: 0.612291
#[33]	valid_0's binary_logloss: 0.60856
#[34]	valid_0's binary_logloss: 0.605387
#[35]	valid_0's binary_logloss: 0.601744
#[36]	valid_0's binary_logloss: 0.598556
#[37]	valid_0's binary_logloss: 0.595585
#[38]	valid_0's binary_logloss: 0.593228
#[39]	valid_0's binary_logloss: 0.59018
#[40]	valid_0's binary_logloss: 0.588391
#Finished rounds 30-40 with changing bagging_fraction...
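
In LightGBM 4.x the learning_rates argument was also removed; the same per-iteration schedule can be expressed with the reset_parameter callback. A sketch assuming LightGBM >= 4.0:

# Decay the learning rate each iteration via a callback (LightGBM >= 4.0)
gbm = lgb.train(params,
                lgb_train,
                num_boost_round=10,
                init_model=gbm,
                valid_sets=lgb_eval,
                callbacks=[lgb.reset_parameter(learning_rate=lambda it: 0.05 * (0.99 ** it))])
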
5. Custom loss functions
# Similar in form to custom objectives in xgboost:
# a custom objective returns the gradient and hessian of the loss
# with respect to the raw score of each sample
def loglikelihood(preds, train_data):
    labels = train_data.get_label()
    preds = 1. / (1. + np.exp(-preds))  # sigmoid: raw score -> probability
    grad = preds - labels               # first derivative of logloss w.r.t. the raw score
    hess = preds * (1. - preds)         # second derivative of logloss w.r.t. the raw score
    return grad, hess


# Custom evaluation function: returns (name, value, is_higher_better)
def binary_error(preds, train_data):
    labels = train_data.get_label()
    # with a custom objective, preds are raw scores, so map them to probabilities first
    preds = 1. / (1. + np.exp(-preds))
    return 'error', np.mean(labels != (preds > 0.5)), False


gbm = lgb.train(params,
                lgb_train,
                num_boost_round=10,
                init_model=gbm,
                fobj=loglikelihood,
                feval=binary_error,
                valid_sets=lgb_eval)

print('Finished rounds 40-50 with a custom objective and evaluation function...')
#[41]	valid_0's binary_logloss: 0.614429	valid_0's error: 0.268
#[42]	valid_0's binary_logloss: 0.610689	valid_0's error: 0.26
#[43]	valid_0's binary_logloss: 0.606267	valid_0's error: 0.264
#[44]	valid_0's binary_logloss: 0.601949	valid_0's error: 0.258
#[45]	valid_0's binary_logloss: 0.597271	valid_0's error: 0.266
#[46]	valid_0's binary_logloss: 0.593971	valid_0's error: 0.276
#[47]	valid_0's binary_logloss: 0.591427	valid_0's error: 0.278
#[48]	valid_0's binary_logloss: 0.588301	valid_0's error: 0.284
#[49]	valid_0's binary_logloss: 0.586562	valid_0's error: 0.288
#[50]	valid_0's binary_logloss: 0.584056	valid_0's error: 0.288
#Finished rounds 40-50 with a custom objective and evaluation function...
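
LightGBM 4.x also removed the fobj argument from lgb.train; a custom objective is now passed through params, while feval is unchanged. A sketch under that assumption:

# LightGBM >= 4.0: pass the custom objective via params instead of fobj
params_custom = dict(params, objective=loglikelihood)
gbm = lgb.train(params_custom,
                lgb_train,
                num_boost_round=10,
                init_model=gbm,
                feval=binary_error,
                valid_sets=lgb_eval)
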

Using sklearn together with LightGBM

1. Build the model with LightGBM, evaluate with sklearn
# coding: utf-8
import lightgbm as lgb
import pandas as pd
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV

# Load the datasets
print('Load data...')
df_train = pd.read_csv('regression.train.txt', header=None, sep='\t')
df_test = pd.read_csv('regression.test.txt', header=None, sep='\t')

# Split out labels and features
y_train = df_train[0].values
y_test = df_test[0].values
X_train = df_train.drop(0, axis=1).values
X_test = df_test.drop(0, axis=1).values

print('Start training...')
# Instantiate LGBMRegressor directly;
# this LightGBM regressor works like any other sklearn regressor
gbm = lgb.LGBMRegressor(objective='regression',
                        num_leaves=31,
                        learning_rate=0.05,
                        n_estimators=20)

# Fit with the familiar fit() call
gbm.fit(X_train, y_train,
        eval_set=[(X_test, y_test)],
        eval_metric='l1',
        early_stopping_rounds=5)

# Predict
print('Start predicting...')
y_pred = gbm.predict(X_test, num_iteration=gbm.best_iteration_)
# Evaluate the predictions
print('The RMSE of prediction is:')
print(mean_squared_error(y_test, y_pred) ** 0.5)

#Load data...
#Start training...
#[1]	valid_0's l1: 0.491735
#Training until validation scores don't improve for 5 rounds.
#[2]	valid_0's l1: 0.486563
#[3]	valid_0's l1: 0.481489
#[4]	valid_0's l1: 0.476848
#[5]	valid_0's l1: 0.47305
#[6]	valid_0's l1: 0.469049
#[7]	valid_0's l1: 0.465556
#[8]	valid_0's l1: 0.462208
#[9]	valid_0's l1: 0.458676
#[10]	valid_0's l1: 0.454998
#[11]	valid_0's l1: 0.452047
#[12]	valid_0's l1: 0.449158
#[13]	valid_0's l1: 0.44608
#[14]	valid_0's l1: 0.443554
#[15]	valid_0's l1: 0.440643
#[16]	valid_0's l1: 0.437687
#[17]	valid_0's l1: 0.435454
#[18]	valid_0's l1: 0.433288
#[19]	valid_0's l1: 0.431297
#[20]	valid_0's l1: 0.428946
#Did not meet early stopping. Best iteration is:
#[20]	valid_0's l1: 0.428946
#Start predicting...
#The RMSE of prediction is:
#0.4441153344254208
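
The same caveat applies to the sklearn wrapper: in LightGBM 4.x, fit no longer accepts early_stopping_rounds and takes callbacks instead. A sketch assuming LightGBM >= 4.0:

# Early stopping via callbacks in the sklearn API (LightGBM >= 4.0)
gbm.fit(X_train, y_train,
        eval_set=[(X_test, y_test)],
        eval_metric='l1',
        callbacks=[lgb.early_stopping(stopping_rounds=5)])
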
2. Grid search for the best hyperparameters
# Use scikit-learn's grid search with cross-validation to pick the best hyperparameters
estimator = lgb.LGBMRegressor(num_leaves=31)

param_grid = {
    'learning_rate': [0.01, 0.1, 1],
    'n_estimators': [20, 40]
}

gbm = GridSearchCV(estimator, param_grid)

gbm.fit(X_train, y_train)

print('Best hyperparameters found by grid search:')
print(gbm.best_params_)
#Best hyperparameters found by grid search:
#{'n_estimators': 40, 'learning_rate': 0.1}
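
For larger search spaces an exhaustive grid quickly becomes expensive; a randomized search over distributions is a common alternative. A sketch (the ranges below are hypothetical, and scipy is assumed to be installed):

# Randomized search over hypothetical ranges (requires scipy)
from scipy.stats import randint, uniform
from sklearn.model_selection import RandomizedSearchCV

param_dist = {
    'learning_rate': uniform(0.01, 0.2),  # samples from [0.01, 0.21)
    'n_estimators': randint(20, 200),
    'num_leaves': randint(15, 63)
}
rand_search = RandomizedSearchCV(lgb.LGBMRegressor(), param_dist,
                                 n_iter=20, cv=3, random_state=42)
rand_search.fit(X_train, y_train)
print('Best hyperparameters found by randomized search:')
print(rand_search.best_params_)
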
3. Plotting and interpretation
# coding: utf-8
import lightgbm as lgb
import pandas as pd

try:
    import matplotlib.pyplot as plt
except ImportError:
    raise ImportError('You need to install matplotlib for plotting.')

# Load the datasets
print('Load data...')
df_train = pd.read_csv('./data/regression.train.txt', header=None, sep='\t')
df_test = pd.read_csv('./data/regression.test.txt', header=None, sep='\t')

# Split out labels and features
y_train = df_train[0].values
y_test = df_test[0].values
X_train = df_train.drop(0, axis=1).values
X_test = df_test.drop(0, axis=1).values

# Build LightGBM Dataset objects
lgb_train = lgb.Dataset(X_train, y_train)
lgb_test = lgb.Dataset(X_test, y_test, reference=lgb_train)

# Set parameters
params = {
    'num_leaves': 5,
    'metric': ('l1', 'l2'),
    'verbose': 0
}

evals_result = {}  # to record eval results for plotting

print('Start training...')
# Train
gbm = lgb.train(params,
                lgb_train,
                num_boost_round=100,
                valid_sets=[lgb_train, lgb_test],
                feature_name=['f' + str(i + 1) for i in range(28)],
                categorical_feature=[21],
                evals_result=evals_result,
                verbose_eval=10)

print('Plotting metrics recorded during training...')
ax = lgb.plot_metric(evals_result, metric='l1')
plt.show()

print('Plotting feature importances...')
ax = lgb.plot_importance(gbm, max_num_features=10)
plt.show()

print('Plotting the 84th tree...')
ax = lgb.plot_tree(gbm, tree_index=83, figsize=(20, 8), show_info=['split_gain'])
plt.show()

#print('Plotting the 84th tree with graphviz...')
#graph = lgb.create_tree_digraph(gbm, tree_index=83, name='Tree84')
#graph.render(view=True)

#Load data...
#Start training...
#[10]	training's l2: 0.217995	training's l1: 0.457448	valid_1's l2: 0.21641	valid_1's l1: 0.456464
#[20]	training's l2: 0.205099	training's l1: 0.436869	valid_1's l2: 0.201616	valid_1's l1: 0.434057
#[30]	training's l2: 0.197421	training's l1: 0.421302	valid_1's l2: 0.192514	valid_1's l1: 0.417019
#[40]	training's l2: 0.192856	training's l1: 0.411107	valid_1's l2: 0.187258	valid_1's l1: 0.406303
#[50]	training's l2: 0.189593	training's l1: 0.403695	valid_1's l2: 0.183688	valid_1's l1: 0.398997
#[60]	training's l2: 0.187043	training's l1: 0.398704	valid_1's l2: 0.181009	valid_1's l1: 0.393977
#[70]	training's l2: 0.184982	training's l1: 0.394876	valid_1's l2: 0.178803	valid_1's l1: 0.389805
#[80]	training's l2: 0.1828	training's l1: 0.391147	valid_1's l2: 0.176799	valid_1's l1: 0.386476
#[90]	training's l2: 0.180817	training's l1: 0.388101	valid_1's l2: 0.175775	valid_1's l1: 0.384404
#[100]	training's l2: 0.179171	training's l1: 0.385174	valid_1's l2: 0.175321	valid_1's l1: 0.382929
#Plotting metrics recorded during training...

[Figure: l1 metric curves for the training and validation sets]
Plotting feature importances...
[Figure: top-10 feature importance bar chart]
Plotting the 84th tree...
[Figure: structure of tree 84]
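
For non-interactive runs, the figures can be written to disk instead of shown; a sketch (the file name is arbitrary):

# Save a figure to a file instead of calling plt.show()
ax = lgb.plot_importance(gbm, max_num_features=10)
ax.figure.savefig('feature_importance.png', bbox_inches='tight')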
