Machine Learning Notes: XGBoost + lightGBM

This post uses worked examples to show how to apply XGBoost and lightGBM: data preprocessing, dictionary feature extraction, and model training (tuning max_depth for XGBoost; basic training plus grid search for lightGBM), illustrating how the two algorithms are used on practical problems.

XGBoost

  • Example: predicting Titanic survival, from loading the data through dictionary feature extraction to training and tuning an XGBoost classifier
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction import DictVectorizer

# 1. Load the data
# Note: this historical Vanderbilt URL may no longer be reachable;
# point read_csv at a local copy of titanic.txt if it is not.
titan = pd.read_csv("http://biostat.mc.vanderbilt.edu/wiki/pub/Main/DataSets/titanic.txt")

# 2. Basic data preparation
# 2.1 Select the feature columns and the target column
x = titan[["pclass", "age", "sex"]].copy()  # copy so later edits don't touch titan
y = titan["survived"]

# 2.2 Fill missing ages with the column mean (direct assignment avoids
# pandas' SettingWithCopyWarning from chained fillna(inplace=True))
x["age"] = x["age"].fillna(x["age"].mean())

# 2.3 Split into training and test sets
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=22, test_size=0.2)

# 3. Feature engineering (dictionary feature extraction)
x_train = x_train.to_dict(orient="records")
x_test = x_test.to_dict(orient="records")

transfer = DictVectorizer()

x_train = transfer.fit_transform(x_train)
# transform (not fit_transform), so the test set reuses the
# vocabulary learned on the training set
x_test = transfer.transform(x_test)
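
# (Illustrative) DictVectorizer one-hot encodes the string-valued columns
# ("pclass", "sex") and passes the numeric "age" column through; the
# resulting feature names can be inspected with (sklearn >= 1.0):
print(transfer.get_feature_names_out())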

# 4. Train the XGBoost model
# 4.1 Baseline model with default parameters
from xgboost import XGBClassifier

xg = XGBClassifier()

xg.fit(x_train, y_train)

print(xg.score(x_test, y_test))

# 4.2 Tune max_depth
depth_range = range(1, 11)  # max_depth must be at least 1
score = []

for i in depth_range:
    xg = XGBClassifier(eta=1, gamma=0, max_depth=i)
    xg.fit(x_train, y_train)

    s = xg.score(x_test, y_test)

    print(s)
    score.append(s)
    
# 4.3 Plot the tuning results
import matplotlib.pyplot as plt

plt.plot(depth_range, score)
plt.xlabel("max_depth")
plt.ylabel("test accuracy")
plt.show()
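
The loop above sweeps a single hyperparameter by hand. A cross-validated grid search can tune several parameters jointly; the sketch below is a minimal illustration, assuming the x_train/x_test matrices built earlier, with the parameter values chosen purely for demonstration:

from sklearn.model_selection import GridSearchCV

param_grid = {
    "max_depth": [3, 5, 7],            # illustrative values
    "learning_rate": [0.1, 0.3, 1.0],  # illustrative values
}
search = GridSearchCV(XGBClassifier(), param_grid, cv=5)
search.fit(x_train, y_train)

# best_params_ and score() report the winning combination and its test accuracy
print(search.best_params_)
print(search.score(x_test, y_test))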

lightGBM

  • Example: fitting an LGBMRegressor on the iris dataset, first with basic training and early stopping, then with a grid search over learning_rate and n_estimators

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import mean_squared_error
import lightgbm as lgb

# Load the data
iris = load_iris()
data = iris.data
target = iris.target

# Basic data preparation
X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.2)

# Model training
# Basic training, with early stopping on the held-out evaluation set
gbm = lgb.LGBMRegressor(objective="regression", learning_rate=0.05, n_estimators=20)

# lightgbm >= 4 takes early stopping as a callback rather than the old
# early_stopping_rounds= fit argument
gbm.fit(X_train, y_train, eval_set=[(X_test, y_test)], eval_metric="l1",
        callbacks=[lgb.early_stopping(stopping_rounds=3)])
print(gbm.score(X_test, y_test))

# Tune learning_rate and n_estimators with a cross-validated grid search
estimator = lgb.LGBMRegressor(num_leaves=31)
param_grid = {
    "learning_rate": [0.01, 0.1, 1],
    "n_estimators": [20, 40, 60, 80]
}
gbm = GridSearchCV(estimator, param_grid, cv=5)
gbm.fit(X_train, y_train)
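
# Report the best combination found by the cross-validated search; the
# retrain below plugs in the learning_rate it is expected to select.
print(gbm.best_params_)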

# Retrain with the selected parameters and evaluate on the test set
gbm = lgb.LGBMRegressor(objective="regression", learning_rate=0.1, n_estimators=20)

gbm.fit(X_train, y_train, eval_set=[(X_test, y_test)], eval_metric="l1",
        callbacks=[lgb.early_stopping(stopping_rounds=3)])
print(gbm.score(X_test, y_test))
print(mean_squared_error(y_test, gbm.predict(X_test)))
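
The regressor above fits the iris class labels as continuous targets, which is how the tutorial exercises the lightGBM regression API. Since iris is actually a classification dataset, a classifier is the more natural choice; here is a minimal sketch under that assumption, reusing the split created above:

clf = lgb.LGBMClassifier(learning_rate=0.05, n_estimators=20)
clf.fit(X_train, y_train)

# for classifiers, score() reports accuracy rather than R^2
print(clf.score(X_test, y_test))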