[Data Mining] Heartbeat Signal Classification — My_Task4: Modeling and Parameter Tuning

4.1 Learning Objectives

  • Learn the modeling process and tuning workflow for machine learning models
  • Complete the corresponding check-in task

4.2 Content Overview

  • Logistic regression model:

    • understanding the logistic regression model;
    • applications of logistic regression;
    • pros and cons of logistic regression;
  • Tree models:

    • understanding tree models;
    • applications of tree models;
    • pros and cons of tree models;
  • Ensemble models:

    • bagging-based ensemble models
      • random forest
    • boosting-based ensemble models
      • XGBoost
      • LightGBM
      • CatBoost
  • Model comparison and performance evaluation:

    • regression models / tree models / ensemble models;
    • model evaluation methods;
    • model evaluation results;
  • Model tuning:

    • greedy tuning;

    • grid-search tuning;

    • Bayesian tuning;

4.5 Code Examples

4.5.1 Imports and Settings

import pandas as pd
import numpy as np
from sklearn.metrics import f1_score

import os 
import seaborn as sns
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")

4.5.2 Reading the Data

The reduce_mem_usage function reduces the dataframe's memory footprint by downcasting each column to the smallest data type that can hold its values.

def reduce_mem_usage(df):
    start_mem = df.memory_usage().sum() / 1024**2 
    print('Memory usage of dataframe is {:.2f} MB'.format(start_mem))
    
    for col in df.columns:
        col_type = df[col].dtype
        
        if col_type != object:
            c_min = df[col].min()
            c_max = df[col].max()
            if str(col_type)[:3] == 'int':
                if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
                    df[col] = df[col].astype(np.int8)
                elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
                    df[col] = df[col].astype(np.int16)
                elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
                    df[col] = df[col].astype(np.int32)
                elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
                    df[col] = df[col].astype(np.int64)  
            else:
                if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
                    df[col] = df[col].astype(np.float16)
                elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
                    df[col] = df[col].astype(np.float32)
                else:
                    df[col] = df[col].astype(np.float64)
        else:
            df[col] = df[col].astype('category')

    end_mem = df.memory_usage().sum() / 1024**2 
    print('Memory usage after optimization is: {:.2f} MB'.format(end_mem))
    print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem) / start_mem))
    
    return df
# read the data
data = pd.read_csv('./train.csv')
# simple preprocessing: each row holds an id, a comma-separated signal string, and a label
data_list = []
for items in data.values:
    data_list.append([items[0]] + [float(i) for i in items[1].split(',')] + [items[2]])

data = pd.DataFrame(np.array(data_list))
data.columns = ['id'] + ['s_'+str(i) for i in range(len(data_list[0])-2)] + ['label']

data = reduce_mem_usage(data)
Memory usage of dataframe is 157.93 MB
Memory usage after optimization is: 39.67 MB
Decreased by 74.9%
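As an optional sanity check (not in the original notebook), the resulting column dtypes can be inspected; note that float16 has limited precision, which may matter for fine-grained signal values:

# confirm which dtypes the columns were downcast to
print(data.dtypes.value_counts())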

4.5.3 Basic Modeling

Given how tree-based models work, outlier and missing-value handling can be skipped; readers who know the business domain well can still clean them manually, which may beat the model's built-in handling.

Note: the dataset used for modeling below has no engineered features; the raw features are fed in directly. The main task here is model building and tuning.
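Before skipping that step, a one-line check (a minimal sketch, not part of the original code) confirms whether any values are actually missing:

# count missing values across the whole dataframe
print('missing values:', data.isnull().sum().sum())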

  • Preparation before modeling
from sklearn.model_selection import KFold
# separate features and labels for cross-validation
x_train = data.drop(['id','label'], axis=1)
y_train = data['label']

# 5-fold cross-validation
folds = 5
seed = 2021
kf = KFold(n_splits=folds, shuffle=True, random_state=seed)
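Since this is a classification task, a stratified splitter is a reasonable alternative (the lgb.cv calls later in this section do use stratified folds). A minimal sketch, assuming the same fold count and seed:

from sklearn.model_selection import StratifiedKFold
# keeps the class proportions of y_train in every fold; could replace kf above
skf = StratifiedKFold(n_splits=folds, shuffle=True, random_state=seed)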

  • The tree-boosting library has no built-in f1-score metric, so we define a custom evaluation function that reports the validation-set f1-score as training iterates.
def f1_score_vali(preds, data_vali):
    labels = data_vali.get_label()
    # preds arrives as a flat array; reshape to (num_class, n_samples) and
    # take the argmax over the class axis to recover the predicted label
    preds = np.argmax(preds.reshape(4, -1), axis=0)
    score_vali = f1_score(y_true=labels, y_pred=preds, average='macro')
    return 'f1_score', score_vali, True
  • Modeling with LightGBM
"""Split the training data into a training set and a validation set"""
from sklearn.model_selection import train_test_split
import lightgbm as lgb
# train/validation split
x_train_split, x_val, y_train_split, y_val = train_test_split(x_train, y_train, test_size=0.2)
train_matrix = lgb.Dataset(x_train_split,label=y_train_split)
valid_matrix = lgb.Dataset(x_val,label=y_val)

params = {
    "learning_rate": 0.1,
    "boosting": 'gbdt',  
    "lambda_l2": 0.1,
    "max_depth": -1,
    "num_leaves": 128,
    "bagging_fraction": 0.8,
    "feature_fraction": 0.8,
    "metric": None,
    "objective": "multiclass",
    "num_class": 4,
    "nthread": 10,
    "verbose": -1,
}

"""使用训练集数据进行模型训练"""
model = lgb.train(params,
                  train_set=train_matrix,
                  valid_sets=valid_matrix,
                  num_boost_round=2000,
                  verbose_eval=50,
                  early_stopping_rounds=200,
                  feval=f1_score_vail
)
Training until validation scores don't improve for 200 rounds
[50]	valid_0's multi_logloss: 0.048446	valid_0's f1_score: 0.96088
[100]	valid_0's multi_logloss: 0.0430823	valid_0's f1_score: 0.966723
[150]	valid_0's multi_logloss: 0.0448085	valid_0's f1_score: 0.969935
[200]	valid_0's multi_logloss: 0.0464403	valid_0's f1_score: 0.971709
[250]	valid_0's multi_logloss: 0.0478184	valid_0's f1_score: 0.972054
Early stopping, best iteration is:
[86]	valid_0's multi_logloss: 0.0428137	valid_0's f1_score: 0.965724
  • Predicting on the validation set

A quick refresher on np.argmax before decoding the class probabilities:

a = np.array([[1, 5, 5, 2],
              [9, 6, 2, 8],
              [3, 7, 9, 1]])
b = np.argmax(a, axis=0)  # axis=0 searches down each column
# column 0 is 1, 9, 3 -> max is 9, at index 1
# column 1 is 5, 6, 7 -> max is 7, at index 2
# and so on; a has 4 columns, so b has 4 entries
print(b)  # [1 2 2 1]

c = np.argmax(a, axis=1)  # axis=1 searches across each row
# row 0 is 1, 5, 5, 2 -> max is 5 (ties take the first occurrence), index 1
# row 1 is 9, 6, 2, 8 -> max is 9, index 0
# a has 3 rows, so c has 3 entries
print(c)  # [1 0 2]

Reference on argmax: https://blog.csdn.net/weixin_38145317/article/details/79650188

val_pre_lgb = model.predict(x_val, num_iteration=model.best_iteration)
preds = np.argmax(val_pre_lgb, axis=1)
score = f1_score(y_true=y_val, y_pred=preds, average='macro')
print('f1 of the untuned lightgbm model on the validation set: {}'.format(score))
f1 of the untuned lightgbm model on the validation set: 0.9657243693065285
  • Going further, model and predict with 5-fold cross-validation
cv_scores = []
for i,(train_index,valid_index) in enumerate(kf.split(x_train,y_train)):
    print('************************************ {} ************************************'.format(str(i+1)))
    x_train_split,y_train_split,x_val,y_val = x_train.iloc[train_index],y_train[train_index],x_train.iloc[valid_index],y_train[valid_index]
    
    train_matrix = lgb.Dataset(x_train_split,label=y_train_split)
    valid_matrix = lgb.Dataset(x_val,label=y_val)
    
    params = {
                "learning_rate": 0.1,
                "boosting": 'gbdt',  
                "lambda_l2": 0.1,
                "max_depth": -1,
                "num_leaves": 128,
                "bagging_fraction": 0.8,
                "feature_fraction": 0.8,
                "metric": None,
                "objective": "multiclass",
                "num_class": 4,
                "nthread": 10,
                "verbose": -1,
            }
    
    model = lgb.train(
                       params,
                       train_set=train_matrix,
                       valid_sets=valid_matrix,
                       num_boost_round=2000,
                       verbose_eval=100,
                       early_stopping_rounds=200,
                       feval=f1_score_vali
    )
    val_pred = model.predict(x_val,num_iteration=model.best_iteration)
    
    val_pred = np.argmax(val_pred,axis=1)
    cv_scores.append(f1_score(y_true=y_val, y_pred=val_pred, average='macro'))
    print(cv_scores)
print("lgb_scotrainre_list:{}".format(cv_scores))
print("lgb_score_mean:{}".format(np.mean(cv_scores)))
print("lgb_score_std:{}".format(np.std(cv_scores)))
************************************ 1 ************************************
Training until validation scores don't improve for 200 rounds
[100]	valid_0's multi_logloss: 0.0408155	valid_0's f1_score: 0.966797
[200]	valid_0's multi_logloss: 0.0437957	valid_0's f1_score: 0.971239
Early stopping, best iteration is:
[96]	valid_0's multi_logloss: 0.0406453	valid_0's f1_score: 0.967452
[0.9674515729721614]
************************************ 2 ************************************
Training until validation scores don't improve for 200 rounds
[100]	valid_0's multi_logloss: 0.0472933	valid_0's f1_score: 0.965828
[200]	valid_0's multi_logloss: 0.0514952	valid_0's f1_score: 0.968138
Early stopping, best iteration is:
[87]	valid_0's multi_logloss: 0.0467472	valid_0's f1_score: 0.96567
[0.9674515729721614, 0.9656700872844327]
************************************ 3 ************************************
Training until validation scores don't improve for 200 rounds
[100]	valid_0's multi_logloss: 0.0378154	valid_0's f1_score: 0.971004
[200]	valid_0's multi_logloss: 0.0405053	valid_0's f1_score: 0.973736
Early stopping, best iteration is:
[93]	valid_0's multi_logloss: 0.037734	valid_0's f1_score: 0.970004
[0.9674515729721614, 0.9656700872844327, 0.9700043639844769]
************************************ 4 ************************************
Training until validation scores don't improve for 200 rounds
[100]	valid_0's multi_logloss: 0.0495142	valid_0's f1_score: 0.967106
[200]	valid_0's multi_logloss: 0.0542324	valid_0's f1_score: 0.969746
Early stopping, best iteration is:
[84]	valid_0's multi_logloss: 0.0490886	valid_0's f1_score: 0.965566
[0.9674515729721614, 0.9656700872844327, 0.9700043639844769, 0.9655663272378014]
************************************ 5 ************************************
Training until validation scores don't improve for 200 rounds
[100]	valid_0's multi_logloss: 0.0412544	valid_0's f1_score: 0.964054
[200]	valid_0's multi_logloss: 0.0443025	valid_0's f1_score: 0.965507
Early stopping, best iteration is:
[96]	valid_0's multi_logloss: 0.0411855	valid_0's f1_score: 0.963114
[0.9674515729721614, 0.9656700872844327, 0.9700043639844769, 0.9655663272378014, 0.9631137190307674]
lgb_score_list:[0.9674515729721614, 0.9656700872844327, 0.9700043639844769, 0.9655663272378014, 0.9631137190307674]
lgb_score_mean:0.9663612141019279
lgb_score_std:0.0022854824074775683

4.5.4 Model Tuning

  • 1. Greedy tuning
    Tune the parameter that currently affects the model most until the model is optimal under that parameter, then move on to the next most influential parameter, and so on until all parameters have been adjusted.

The drawback of this method is that it may land on a local optimum rather than the global one, but it only requires optimizing parameters step by step, which makes it easy to understand.

What needs attention in tree models is the tuning order, i.e. how much each parameter affects the model. Here are the commonly tuned parameters and their usual order (the code sketch follows this list):

  • ①: max_depth, num_leaves
  • ②: min_data_in_leaf, min_child_weight
  • ③: bagging_fraction, feature_fraction, bagging_freq
  • ④: reg_lambda, reg_alpha
  • ⑤: min_split_gain
from sklearn.model_selection import cross_val_score
from lightgbm import LGBMClassifier

# candidate values to search (the original notebook left these lists
# undefined, which raised a NameError; the ranges below mirror the
# grid-search section later on)
objective = ['multiclass']
num_leaves = range(10, 80, 5)
max_depth = range(3, 10, 2)

# tune objective
best_obj = dict()
for obj in objective:
    model = LGBMClassifier(objective=obj)
    # cross-validate and record the macro f1 score
    score = cross_val_score(model, x_train, y_train, cv=5, scoring='f1_macro').mean()
    best_obj[obj] = score

# num_leaves (keep the best objective found so far; f1 is higher-is-better, hence max)
best_leaves = dict()
for leaves in num_leaves:
    model = LGBMClassifier(objective=max(best_obj.items(), key=lambda x: x[1])[0],
                           num_leaves=leaves)
    score = cross_val_score(model, x_train, y_train, cv=5, scoring='f1_macro').mean()
    best_leaves[leaves] = score

# max_depth
best_depth = dict()
for depth in max_depth:
    model = LGBMClassifier(objective=max(best_obj.items(), key=lambda x: x[1])[0],
                           num_leaves=max(best_leaves.items(), key=lambda x: x[1])[0],
                           max_depth=depth)
    score = cross_val_score(model, x_train, y_train, cv=5, scoring='f1_macro').mean()
    best_depth[depth] = score

The remaining parameters can be tuned and optimized one by one in the same way, and the model's score under each best parameter can be inspected visually, as in the sketch below.
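For example, the scores recorded while tuning num_leaves can be plotted directly; a minimal sketch using the best_leaves dict from the loop above:

import matplotlib.pyplot as plt

# sort candidates so the curve is drawn left to right
leaves, scores = zip(*sorted(best_leaves.items()))
plt.plot(leaves, scores, marker='o')
plt.xlabel('num_leaves')
plt.ylabel('mean CV f1 (macro)')
plt.title('Greedy tuning: score vs. num_leaves')
plt.show()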

  • 2. Grid search
    sklearn provides GridSearchCV for grid search: pass in the model's parameters and it returns the optimal result and parameters. Compared with greedy tuning, grid search gives better results, but it only suits small datasets; once the data volume grows, it becomes very hard to get results.

Again taking the LightGBM algorithm as the example, grid-search tuning goes as follows:

"""通过网格搜索确定最优参数"""
from sklearn.model_selection import GridSearchCV

def get_best_cv_params(learning_rate=0.1, n_estimators=581,num_leaves=31,max_depth=-1,bagging_fraction=1.0,feature_fraction=1.0,bagging_fraction_freq=0,min_data_in_leaf=20,min_child_weight=0.001,min_split_gain=0,reg_lambda=0,reg_alpha=0,pagram_grid=None):
    # 设置5折交叉验证
    cv_fold = KFold(n_splits=5,shuffle=True,random_state=2021)
    
    model_lgb = lgb.LGBMClassifier(learning_rate = learning_rate,
                                   n_estimators = n_estimators,
                                   num_leaves = num_leaves,
                                   max_depth = max_depth,
                                   bagging_fraction = bagging_fraction,
                                   feature_fraction = feature_fraction,
                                   bagging_freq = bagging_feq,
                                   min_data_in_leaf = min_data_in_leaf,
                                   min_child_weight = min_child_weight,
                                   min_split_gain = min_split_gain,
                                   reg_lambda = reg_lambda,
                                   reg_alpha = reg_alpha,
                                   n_jobs = 8)
    
    f1 = make_scorer(f1_score,average = 'micro')
    grid_search = GridSearchCV(estimator = model_lgb,
                               cv = cv_fold,
                               pagram_grid = pagram_grid,
                               scoring = f1)
    
    grid_search.fit(x_train,y_train)
    
    print('模型当前最优参数为:{}'.format(grid_search.best_params_))
    print('模型当前最优得分为:{}'.format(grid_search.best_score_))
"""以下代码未运行,耗时较长,请谨慎运行,且每一步的最优参数需要在下一步进行手动更新,请注意"""

"""
需要注意一下的是,除了获取上面的获取num_boost_round时候用的是原生的lightgbm(因为要用自带的cv)
下面配合GridSearchCV时必须使用sklearn接口的lightgbm。
"""
"""设置n_estimators 为581,调整num_leaves和max_depth,这里选择先粗调再细调"""
lgb_params = {'num_leaves': range(10, 80, 5), 'max_depth': range(3,10,2)}
get_best_cv_params(learning_rate=0.1, n_estimators=581, num_leaves=None, max_depth=None, min_data_in_leaf=20, 
                   min_child_weight=0.001,bagging_fraction=1.0, feature_fraction=1.0, bagging_freq=0, 
                   min_split_gain=0, reg_lambda=0, reg_alpha=0, param_grid=lgb_params)

"""num_leaves为30,max_depth为7,进一步细调num_leaves和max_depth"""
lgb_params = {'num_leaves': range(25, 35, 1), 'max_depth': range(5,9,1)}
get_best_cv_params(learning_rate=0.1, n_estimators=85, num_leaves=None, max_depth=None, min_data_in_leaf=20, 
                   min_child_weight=0.001,bagging_fraction=1.0, feature_fraction=1.0, bagging_freq=0, 
                   min_split_gain=0, reg_lambda=0, reg_alpha=0, param_grid=lgb_params)

"""
确定min_data_in_leaf为45,min_child_weight为0.001 ,下面进行bagging_fraction、feature_fraction和bagging_freq的调参
"""
lgb_params = {'bagging_fraction': [i/10 for i in range(5,10,1)], 
              'feature_fraction': [i/10 for i in range(5,10,1)],
              'bagging_freq': range(0,81,10)
             }
get_best_cv_params(learning_rate=0.1, n_estimators=85, num_leaves=29, max_depth=7, min_data_in_leaf=45, 
                   min_child_weight=0.001,bagging_fraction=None, feature_fraction=None, bagging_freq=None, 
                   min_split_gain=0, reg_lambda=0, reg_alpha=0, param_grid=lgb_params)

"""
确定bagging_fraction为0.4、feature_fraction为0.6、bagging_freq为 ,下面进行reg_lambda、reg_alpha的调参
"""
lgb_params = {'reg_lambda': [0,0.001,0.01,0.03,0.08,0.3,0.5], 'reg_alpha': [0,0.001,0.01,0.03,0.08,0.3,0.5]}
get_best_cv_params(learning_rate=0.1, n_estimators=85, num_leaves=29, max_depth=7, min_data_in_leaf=45, 
                   min_child_weight=0.001,bagging_fraction=0.9, feature_fraction=0.9, bagging_freq=40, 
                   min_split_gain=0, reg_lambda=None, reg_alpha=None, param_grid=lgb_params)

"""
确定reg_lambda、reg_alpha都为0,下面进行min_split_gain的调参
"""
lgb_params = {'min_split_gain': [i/10 for i in range(0,11,1)]}
get_best_cv_params(learning_rate=0.1, n_estimators=85, num_leaves=29, max_depth=7, min_data_in_leaf=45, 
                   min_child_weight=0.001,bagging_fraction=0.9, feature_fraction=0.9, bagging_freq=40, 
                   min_split_gain=None, reg_lambda=0, reg_alpha=0, param_grid=lgb_params)
"""
参数确定好了以后,我们设置一个比较小的learning_rate 0.005,来确定最终的num_boost_round
"""
# 设置5折交叉验证
# cv_fold = StratifiedKFold(n_splits=5, random_state=0, shuffle=True, )
final_params = {
                 'boosting_type':'gbdt',
                 'learning_rate':0.01,
                 'num_leaves':29,
                 'max_depth':7,
                 'objective':'multiclass',
                 'num_class':4,
                 'min_data_in_leaf':45,
                 'min_child_weight':0.001,
                 'bagging_fraction':0.9,
                 'feature_fraction':0.9,
                 'bagging_freq':40,
                 'min_split_gain':0,
                 'reg_lambda':0,
                 'reg_alpha':0,
                 'nthread':6
}

cv_result = lgb.cv(train_set=train_matrix,
                   early_stopping_rounds=20,
                   num_boost_round=5000,
                   nfold=5,
                   stratified=True,
                   params = final_params,
                   feval=f1_score_vali,
                   seed=0,
                  )

In practice, first set a relatively large learning rate (0.1 in the example above) and use lgb's native cv function to fix the number of trees, then tune the remaining parameters with the example code above.

Finally, set a smaller learning rate (e.g. 0.05) for the best parameters, determine the tree count with the cv function again, and fix the final configuration.

Note that on large datasets, each tuning stage above can take a long time.
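For that first step (fixing the tree count under a large learning rate), here is a minimal sketch, assuming the train_matrix built earlier and illustrative parameter values rather than the author's exact setup:

params_hi_lr = {
    'boosting_type': 'gbdt',
    'objective': 'multiclass',
    'num_class': 4,
    'learning_rate': 0.1,
    'nthread': 10,
    'verbose': -1,
}
cv_rounds = lgb.cv(params=params_hi_lr,
                   train_set=train_matrix,
                   num_boost_round=5000,
                   nfold=5,
                   stratified=True,
                   shuffle=True,
                   early_stopping_rounds=50,
                   seed=0)
# with early stopping, the returned metric lists are truncated at the best
# iteration, so their length gives the tree count to reuse as n_estimators
print('suggested n_estimators:', len(cv_rounds['multi_logloss-mean']))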

  • 3. Bayesian tuning
    Before using it, install the bayesian-optimization package by running: pip install bayesian-optimization

The main idea of Bayesian tuning: given an objective function to optimize (in the broad sense — only its inputs and outputs need to be specified, with no knowledge of its internals or mathematical properties), keep adding sample points to update the posterior distribution of the objective (a Gaussian process) until the posterior essentially fits the true distribution. Put simply, it takes the information from previous trials into account when adjusting the current parameters.

The steps of Bayesian tuning are as follows: 😅

  • define the optimization function (rf_cv_lgb)
  • build the model
  • define the parameters to optimize
  • obtain the optimization result and return the score metric being optimized
from sklearn.model_selection import cross_val_score
from sklearn.metrics import make_scorer

"""Define the optimization function"""
def rf_cv_lgb(num_leaves, max_depth, bagging_fraction, feature_fraction, bagging_freq, min_data_in_leaf, 
              min_child_weight, min_split_gain, reg_lambda, reg_alpha):
    # build the model; the optimizer proposes floats, so cast integer parameters
    model_lgb = lgb.LGBMClassifier(boosting_type='gbdt', objective='multiclass', num_class=4,
                                   learning_rate=0.1, n_estimators=5000,
                                   num_leaves=int(num_leaves), max_depth=int(max_depth),
                                   bagging_fraction=round(bagging_fraction, 2),
                                   feature_fraction=round(feature_fraction, 2),
                                   bagging_freq=int(bagging_freq),
                                   min_data_in_leaf=int(min_data_in_leaf),
                                   min_child_weight=min_child_weight, min_split_gain=min_split_gain,
                                   reg_lambda=reg_lambda, reg_alpha=reg_alpha,
                                   n_jobs= 8
                                  )
    f1 = make_scorer(f1_score, average='micro')
    val = cross_val_score(model_lgb, x_train_split, y_train_split, cv=5, scoring=f1).mean()

    return val
from bayes_opt import BayesianOptimization
"""定义优化参数"""
bayes_lgb = BayesianOptimization(
    rf_cv_lgb, 
    {
        'num_leaves':(10, 200),
        'max_depth':(3, 20),
        'bagging_fraction':(0.5, 1.0),
        'feature_fraction':(0.5, 1.0),
        'bagging_freq':(0, 100),
        'min_data_in_leaf':(10,100),
        'min_child_weight':(0, 10),
        'min_split_gain':(0.0, 1.0),
        'reg_alpha':(0.0, 10),
        'reg_lambda':(0.0, 10),
    }
)

"""开始优化"""
bayes_lgb.maximize(n_iter=10)
"""显示优化结果"""
bayes_lgb.max
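bayes_lgb.max returns a dict holding the best target value and the raw parameters. Since the optimizer searches a continuous space, integer-valued parameters must be cast back before reuse; a minimal sketch, assuming the optimization ran to completion:

best = bayes_lgb.max['params']
# round integer-valued parameters back from the continuous search space
tuned = {
    'num_leaves': int(best['num_leaves']),
    'max_depth': int(best['max_depth']),
    'bagging_freq': int(best['bagging_freq']),
    'min_data_in_leaf': int(best['min_data_in_leaf']),
    'bagging_fraction': round(best['bagging_fraction'], 2),
    'feature_fraction': round(best['feature_fraction'], 2),
    'min_child_weight': best['min_child_weight'],
    'min_split_gain': best['min_split_gain'],
    'reg_lambda': best['reg_lambda'],
    'reg_alpha': best['reg_alpha'],
}
print(tuned)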

After the optimization finishes, we can build a new model with the tuned parameters, lower the learning rate, and search for the optimal number of boosting iterations.

"""调整一个较小的学习率,并通过cv函数确定当前最优的迭代次数"""
base_params_lgb = {
                    'boosting_type': 'gbdt',
                    'objective': 'multiclass',
                    'num_class': 4,
                    'learning_rate': 0.01,
                    'num_leaves': 138,
                    'max_depth': 11,
                    'min_data_in_leaf': 43,
                    'min_child_weight':6.5,
                    'bagging_fraction': 0.64,
                    'feature_fraction': 0.93,
                    'bagging_freq': 49,
                    'reg_lambda': 7,
                    'reg_alpha': 0.21,
                    'min_split_gain': 0.288,
                    'nthread': 10,
                    'verbose': -1,
}

cv_result_lgb = lgb.cv(
    train_set=train_matrix,
    early_stopping_rounds=1000, 
    num_boost_round=20000,
    nfold=5,
    stratified=True,
    shuffle=True,
    params=base_params_lgb,
    feval=f1_score_vali,
    seed=0
)
print('number of iterations: {}'.format(len(cv_result_lgb['f1_score-mean'])))
print('final model f1: {}'.format(max(cv_result_lgb['f1_score-mean'])))

With the model parameters fixed, build the final model and evaluate it on the validation folds.

import lightgbm as lgb
"""Build the final model with lightgbm 5-fold cross-validation"""
cv_scores = []
for i, (train_index, valid_index) in enumerate(kf.split(x_train, y_train)):
    print('************************************ {} ************************************'.format(str(i+1)))
    x_train_split, y_train_split, x_val, y_val = x_train.iloc[train_index], y_train[train_index], x_train.iloc[valid_index], y_train[valid_index]

    train_matrix = lgb.Dataset(x_train_split, label=y_train_split)
    valid_matrix = lgb.Dataset(x_val, label=y_val)

    params = {
                'boosting_type': 'gbdt',
                'objective': 'multiclass',
                'num_class': 4,
                'learning_rate': 0.01,
                'num_leaves': 138,
                'max_depth': 11,
                'min_data_in_leaf': 43,
                'min_child_weight':6.5,
                'bagging_fraction': 0.64,
                'feature_fraction': 0.93,
                'bagging_freq': 49,
                'reg_lambda': 7,
                'reg_alpha': 0.21,
                'min_split_gain': 0.288,
                'nthread': 10,
                'verbose': -1,
    }

    model = lgb.train(params, train_set=train_matrix, num_boost_round=4833, valid_sets=valid_matrix, 
                      verbose_eval=1000, early_stopping_rounds=200, feval=f1_score_vali)
    val_pred = model.predict(x_val, num_iteration=model.best_iteration)
    val_pred = np.argmax(val_pred, axis=1)
    cv_scores.append(f1_score(y_true=y_val, y_pred=val_pred, average='macro'))
    print(cv_scores)

print("lgb_scotrainre_list:{}".format(cv_scores))
print("lgb_score_mean:{}".format(np.mean(cv_scores)))
print("lgb_score_std:{}".format(np.std(cv_scores)))

Tuning takeaways

  • The built-in cv function of the boosting libraries tunes a single parameter fairly quickly; it is usually used first to fix the number of boosting iterations of the tree model.

  • When the data volume is large (as in this project), grid-search tuning becomes extremely slow and is not recommended.

  • Some parameter names differ between the native boosting libraries and their sklearn wrappers; check the official APIs for details (a sketch of the mapping follows):

    XGBoost native API / sklearn XGBoost API

    LightGBM native API / sklearn LightGBM API
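As a concrete illustration of those naming differences, here is a partial alias mapping for LightGBM that I believe to be correct; consult the official docs to confirm:

# native-API name          -> sklearn-wrapper name
# num_iterations           -> n_estimators
# lambda_l1 / lambda_l2    -> reg_alpha / reg_lambda
# bagging_fraction         -> subsample
# feature_fraction         -> colsample_bytree
# min_data_in_leaf         -> min_child_samples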

4.6 Lessons Learned

In this section we completed the modeling and tuning work. During modeling, we evaluated and validated model performance through dataset splitting, cross-validation, and related techniques.

Finally we tuned the model. This part covered three tuning approaches — greedy tuning, grid search, and Bayesian tuning — with Bayesian tuning as the main method for lightly optimizing this project. In practice you can take these tuning ideas as a reference rather than sticking to the exact examples in this tutorial.

<1> I now truly get the saying that we are alchemists [parameter tuners].
<2> Data and features determine the upper bound of machine learning, while models and algorithms merely approximate that bound. Feature engineering and model tuning both feel really hard; I'll keep working toward becoming a good alchemist.
<3> For some reason, plugging the best parameters into the Task 1 baseline actually lowered my score. Oh well.
