I. The LightGBM Model
1. Overview
As the name suggests, LightGBM combines two ideas: "light", meaning lightweight, and GBM, the Gradient Boosting Machine.
LightGBM is a gradient boosting framework that uses tree-based learning algorithms. It is distributed and efficient, with the following advantages:
Faster training speed
Lower memory usage
Better accuracy
Support for parallel learning
Capable of handling large-scale data
2. Features
In summary, LightGBM's main features are:
Histogram-based decision tree algorithm
Leaf-wise tree growth strategy with a depth limit
Histogram subtraction for speedup (a leaf's histogram is its parent's minus its sibling's)
Native support for categorical features
Cache hit-rate optimization
Histogram-based sparse feature optimization
Multi-threading optimization
Compared with other commonly used machine learning algorithms, it is dramatically faster.
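To make the histogram-based idea concrete, here is a minimal NumPy sketch (not LightGBM's actual implementation, which uses its own binning logic) of discretizing a continuous feature into bins and accumulating per-bin gradient sums; split search then scans a few hundred bins instead of every sorted feature value:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)      # one continuous feature
grad = rng.normal(size=1000)   # per-sample gradients from the current boosting step

# Discretize the feature into 255 bins via quantiles
# (255 is LightGBM's default max_bin; quantile edges are an illustrative choice)
edges = np.quantile(x, np.linspace(0.0, 1.0, 256))
bins = np.clip(np.searchsorted(edges, x, side='right') - 1, 0, 254)

# Histogram: per-bin gradient sum and sample count; candidate splits are
# now evaluated over 255 bins rather than up to 1000 distinct values
grad_hist = np.bincount(bins, weights=grad, minlength=255)
cnt_hist = np.bincount(bins, minlength=255)

print(grad_hist.shape, cnt_hist.sum())  # (255,) 1000
```

The "histogram subtraction" trick in the feature list follows directly: once a parent node's histogram and one child's histogram are known, the sibling's is just their element-wise difference.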
II. LightGBM in Practice
import pickle

import numpy as np
import lightgbm as LGB
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
feat_path="E:/Jupyter/anava/DaGuanBei_Analysis/feat/"
model_path="E:/Jupyter/anava/DaGuanBei_Analysis/model/"
result_path="E:/Jupyter/anava/DaGuanBei_Analysis/result/"
# Custom evaluation function (macro-F1) on the validation set
def f1_score_vali(preds, data_vali):
    labels = data_vali.get_label()
    # LightGBM passes multiclass scores to feval as one flat array,
    # grouped by class first; reshape to (num_class, n_samples)
    preds = np.argmax(preds.reshape(19, -1), axis=0)
    score_vali = f1_score(y_true=labels, y_pred=preds, average='macro')
    return 'f1_score', score_vali, True
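A quick NumPy illustration of the reshape used above. Per the LightGBM docs, for a multiclass task the flat `preds` array handed to a custom `feval` is grouped by class first, then by row, so `reshape(num_class, -1)` recovers a (num_class, n_samples) matrix and `argmax(axis=0)` picks the winning class per sample (toy numbers, 3 classes and 4 samples):

```python
import numpy as np

num_class = 3
# Scores laid out as LightGBM passes them to feval: all samples' class-0
# scores, then all class-1 scores, then all class-2 scores
flat = np.array([
    0.90, 0.10, 0.20, 0.30,  # class 0 scores for the 4 samples
    0.05, 0.80, 0.30, 0.30,  # class 1 scores
    0.05, 0.10, 0.50, 0.40,  # class 2 scores
])

pred_class = np.argmax(flat.reshape(num_class, -1), axis=0)
print(pred_class)  # [0 1 2 2]
```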
# Load the pre-built features
with open(feat_path + "data_w_tfidf.pkl", 'rb') as data_fp:
    x_train, y_train, x_test = pickle.load(data_fp)
xTrain, xVal, yTrain, yVal = train_test_split(x_train, y_train, test_size=0.30, random_state=531)
d_train = LGB.Dataset(data=xTrain, label=yTrain)
d_vali = LGB.Dataset(data=xVal, label=yVal)
# Build and train the model
params = {
    'boosting': 'gbdt',
    'application': 'multiclassova',  # one-vs-all multiclass objective
    'num_class': 19,
    'learning_rate': 0.1,
    'num_leaves': 31,
    'max_depth': -1,                 # no depth limit
    'lambda_l1': 0,
    'lambda_l2': 0.5,
    'bagging_fraction': 1.0,
}
# The training call was missing from the original snippet; `bst` must exist
# before `bst.predict` below. The round counts here are illustrative values;
# note that LightGBM >= 4.0 moves early stopping into callbacks.
bst = LGB.train(params, d_train, num_boost_round=800,
                valid_sets=[d_vali], feval=f1_score_vali,
                early_stopping_rounds=100)
y_prob = bst.predict(xVal)  # Booster.predict returns shape (n_samples, num_class)
predict = np.argmax(y_prob, axis=1)
print(len(predict), len(yVal))
accuracy = accuracy_score(yVal, predict)
f1score = f1_score(yVal, predict, average='micro')
print('Accuracy: %.2f%%' % (accuracy * 100.0))
print('F1_score: %f' % f1score)
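Note that the script monitors macro-F1 during training but reports micro-F1 at the end; with 19 classes of uneven size these can differ substantially, since macro averages per-class F1 equally while micro-F1 on single-label multiclass data reduces to plain accuracy. A small hand-rolled NumPy check (toy labels, not sklearn) of the difference:

```python
import numpy as np

def per_class_f1(y_true, y_pred, cls):
    # F1 for one class, treated as a one-vs-rest binary problem
    tp = np.sum((y_pred == cls) & (y_true == cls))
    fp = np.sum((y_pred == cls) & (y_true != cls))
    fn = np.sum((y_pred != cls) & (y_true == cls))
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

# Imbalanced toy data: the rare classes 1 and 2 are mostly missed
y_true = np.array([0, 0, 0, 0, 0, 0, 1, 1, 2])
y_pred = np.array([0, 0, 0, 0, 0, 0, 1, 0, 0])

macro = np.mean([per_class_f1(y_true, y_pred, c) for c in np.unique(y_true)])
# For single-label multiclass, micro-F1 equals accuracy
micro = np.mean(y_true == y_pred)
print(round(macro, 3), round(micro, 3))  # 0.508 0.778
```

Here micro-F1 looks healthy while macro-F1 exposes the failure on the rare classes, which is why the macro variant is the better quantity to monitor on this kind of task.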