Task 4: Model Building and Parameter Tuning

Modeling and Parameter Tuning

1. Learning objectives
Get to know the commonly used machine learning models and master the workflow of building and tuning them.

2. Content overview
(figure omitted)

3. Background reading and recommendations

3.1 Linear regression

https://zhuanlan.zhihu.com/p/49480391

3.2 Decision trees

https://zhuanlan.zhihu.com/p/65304798

3.3 GBDT

https://zhuanlan.zhihu.com/p/45145899

3.4 XGBoost

https://zhuanlan.zhihu.com/p/86816771

3.5 LightGBM

https://zhuanlan.zhihu.com/p/89360721

3.6 Recommended books:

  • 《机器学习》 (Machine Learning) https://book.douban.com/subject/26708119/
  • 《统计学习方法》 (Statistical Learning Methods) https://book.douban.com/subject/10590856/
  • 《Python大战机器学习》 (Python vs. Machine Learning) https://book.douban.com/subject/26987890/
  • 《面向机器学习的特征工程》 (Feature Engineering for Machine Learning) https://book.douban.com/subject/26826639/
  • 《数据科学家访谈录》 (Interviews with Data Scientists) https://book.douban.com/subject/30129410/

4. Code examples

# 4.1 Load the data
import pandas as pd
import numpy as np
import warnings
warnings.filterwarnings('ignore')

# reduce_mem_usage downcasts column dtypes to reduce the dataframe's memory footprint
def reduce_mem_usage(df):
    """ iterate through all the columns of a dataframe and modify the data type
        to reduce memory usage.        
    """
    start_mem = df.memory_usage().sum() / 1024**2  # convert bytes to MB
    print('Memory usage of dataframe is {:.2f} MB'.format(start_mem))
    
    for col in df.columns:
        col_type = df[col].dtype
        
        if col_type != object:
            c_min = df[col].min()
            c_max = df[col].max()
            if str(col_type)[:3] == 'int':
                if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
                    df[col] = df[col].astype(np.int8)
                elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
                    df[col] = df[col].astype(np.int16)
                elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
                    df[col] = df[col].astype(np.int32)
                elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
                    df[col] = df[col].astype(np.int64)  
            else:
                if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
                    df[col] = df[col].astype(np.float16)
                elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
                    df[col] = df[col].astype(np.float32)
                else:
                    df[col] = df[col].astype(np.float64)
        else:
            df[col] = df[col].astype('category')

    end_mem = df.memory_usage().sum() / 1024**2  # convert bytes to MB
    print('Memory usage after optimization is: {:.2f} MB'.format(end_mem))
    print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem) / start_mem))
    return df
sample_feature = reduce_mem_usage(pd.read_csv('data_for_tree.csv'))
Memory usage of dataframe is 57.70 MB
Memory usage after optimization is: 15.00 MB
Decreased by 74.0%
continuous_feature_names = [x for x in sample_feature.columns if x not in ['price', 'brand', 'model']]
# 4.2 Linear regression, 5-fold cross-validation, and simulating the real business setting
sample_feature=sample_feature.dropna().replace('-',0).reset_index(drop=True)
sample_feature['notRepairedDamage']=sample_feature['notRepairedDamage'].astype(np.float32)
train=sample_feature[continuous_feature_names+['price']]

train_x=train[continuous_feature_names]
train_y=train['price']
# 1) A simple baseline model
from sklearn.linear_model import LinearRegression
model=LinearRegression(normalize=True)  # note: the `normalize` argument was later removed from scikit-learn; newer versions need a separate scaler (e.g. StandardScaler)
model=model.fit(train_x,train_y)

# Inspect the intercept and weights (coef) of the fitted linear regression model
'intercept:'+ str(model.intercept_)

sorted(dict(zip(continuous_feature_names, model.coef_)).items(), key=lambda x:x[1], reverse=True)
[('v_6', 3367077.440622133),
 ('v_8', 700656.3543473331),
 ('v_9', 170626.2409139148),
 ('v_7', 32318.17310353781),
 ('v_12', 20480.562677916147),
 ('v_3', 17871.475646823015),
 ('v_11', 11482.12313883297),
 ('v_13', 11263.399851890374),
 ('v_10', 2681.35398073225),
 ('gearbox', 881.8328203286436),
 ('fuelType', 363.90247374082395),
 ('bodyType', 189.5855271618518),
 ('city', 44.95362239827937),
 ('power', 28.557627369994453),
 ('brand_price_median', 0.5103099160113423),
 ('brand_price_std', 0.45032755468657987),
 ('brand_amount', 0.1488113889360842),
 ('brand_price_max', 0.003190205361316529),
 ('train', 4.0978193283081055e-08),
 ('seller', -2.151820808649063e-06),
 ('offerType', -2.409564331173897e-06),
 ('brand_price_sum', -2.1750008141260988e-05),
 ('name', -0.00029815823324172306),
 ('used_time', -0.002526148775599711),
 ('brand_price_average', -0.4048195975445554),
 ('brand_price_min', -2.246718360060834),
 ('power_bin', -34.4567603971541),
 ('v_14', -274.91399236939696),
 ('kilometer', -372.8976211832127),
 ('notRepairedDamage', -495.22823840863475),
 ('v_0', -2044.6895623854225),
 ('v_5', -11046.342844305635),
 ('v_4', -15123.010532415077),
 ('v_2', -26106.906443720665),
 ('v_1', -45560.92511432782)]
from matplotlib import pyplot as plt

subsample_index=np.random.randint(low=0,high=len(train_y),size=50)


plt.scatter(train_x['v_9'][subsample_index], train_y[subsample_index], color='black')
plt.scatter(train_x['v_9'][subsample_index], model.predict(train_x.loc[subsample_index]), color='blue')
plt.xlabel('v_9')
plt.ylabel('price')
plt.legend(['True Price','Predicted Price'],loc='upper right')
print('The predicted price is obviously different from the true price')
plt.show()
The predicted price is obviously different from the true price

(Figure: true price vs. predicted price plotted against v_9)

Some of the predicted prices are negative, which indicates our model has problems.
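
As a quick sanity check, a minimal sketch (reusing the fitted model and train_x from above) can count how many in-sample predictions fall below zero:

# Count negative in-sample predictions (illustrative check, not part of the original run)
pred = model.predict(train_x)
print('negative predictions: {} out of {}'.format((pred < 0).sum(), len(pred)))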

import seaborn as sns
print('It is clear that the price follows a long-tailed distribution')
plt.figure(figsize=(15,5))
plt.subplot(1,2,1)
sns.distplot(train_y)
plt.subplot(1,2,2)
sns.distplot(train_y[train_y < np.quantile(train_y, 0.9)])
It is clear that the price follows a long-tailed distribution
<AxesSubplot:xlabel='price', ylabel='Density'>

(Figure: distribution of price, full range and below the 90% quantile)

The plots show that the target (price) follows a long-tailed distribution, which is bad for modeling: many models assume the error term is normally distributed, and a long-tailed target violates that assumption. Reference: https://blog.csdn.net/Noob_daniel/article/details/76087829

train_y_ln = np.log(train_y + 1)
import seaborn as sns
print('The transformed price looks close to a normal distribution')
plt.figure(figsize=(15,5))
plt.subplot(1,2,1)
sns.distplot(train_y_ln)
plt.subplot(1,2,2)
sns.distplot(train_y_ln[train_y_ln < np.quantile(train_y_ln, 0.9)])
The transformed price looks close to a normal distribution
<AxesSubplot:xlabel='price', ylabel='Density'>

(Figure: distribution of the log-transformed price, full range and below the 90% quantile)

Here we applied a log(x+1) transform to the target, which brings it much closer to a normal distribution.
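
As a side note, np.log1p and np.expm1 are the numerically stable equivalents of this transform and its inverse; a minimal sketch (reusing train_y and train_y_ln from above):

# np.log1p(x) == np.log(x + 1); np.expm1 is its inverse
assert np.allclose(np.log1p(train_y), np.log(train_y + 1))
price_back = np.expm1(train_y_ln)   # maps log-scale predictions back to the original price scale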

model = model.fit(train_x, train_y_ln)

print('intercept:'+ str(model.intercept_))
sorted(dict(zip(continuous_feature_names, model.coef_)).items(), key=lambda x:x[1], reverse=True)
intercept:18.748799410584716





[('v_9', 8.050814718162309),
 ('v_5', 5.755006071718437),
 ('v_12', 1.6209338574036165),
 ('v_1', 1.4779562855683),
 ('v_11', 1.1697442831313065),
 ('v_13', 0.9411177044348086),
 ('v_7', 0.7119532586067604),
 ('v_3', 0.6851304032602217),
 ('v_0', 0.008645115506763867),
 ('power_bin', 0.0084836756704011),
 ('gearbox', 0.007926460119203465),
 ('fuelType', 0.0066840651104466756),
 ('bodyType', 0.004516721198707884),
 ('power', 0.000717663927784045),
 ('brand_price_min', 3.336608177047885e-05),
 ('brand_amount', 2.897953210744573e-06),
 ('brand_price_median', 1.2322229009759989e-06),
 ('brand_price_std', 6.517010298355936e-07),
 ('brand_price_average', 6.336762187140104e-07),
 ('brand_price_max', 6.191738899756211e-07),
 ('train', -2.9558577807620168e-12),
 ('offerType', -1.1120704357381328e-10),
 ('brand_price_sum', -1.5124114795960058e-10),
 ('seller', -1.688249540165998e-10),
 ('name', -7.021721782926934e-08),
 ('used_time', -4.126534951683297e-06),
 ('city', -0.0022172516746003144),
 ('v_14', -0.00428557939949293),
 ('kilometer', -0.013835904291027086),
 ('notRepairedDamage', -0.27029439979161973),
 ('v_4', -0.832075967445632),
 ('v_2', -0.9504887847009229),
 ('v_10', -1.627162801392907),
 ('v_8', -40.35060721926321),
 ('v_6', -238.78517488927326)]
plt.scatter(train_x['v_9'][subsample_index], train_y[subsample_index], color='black')
plt.scatter(train_x['v_9'][subsample_index], np.exp(model.predict(train_x.loc[subsample_index])), color='blue')
plt.xlabel('v_9')
plt.ylabel('price')
plt.legend(['True Price','Predicted Price'],loc='upper right')
print('The predictions look reasonable after the np.log transform')
plt.show()
The predictions look reasonable after the np.log transform

(Figure: true vs. predicted price against v_9 after the log transform)

Visualizing again, the predictions are now close to the true values and no anomalies appear.

2) Five-fold cross-validation

When training model parameters on the training data, you will often see the full dataset split into three parts (the MNIST handwriting dataset is a typical example): a training set (train_set), a validation set (valid_set), and a test set (test_set). This split is made deliberately to safeguard the quality of training. The test set is easy to understand: it never participates in training and is used only to measure final performance. The training and validation sets relate to the idea explained next.

In practice, a trained model usually fits the training data quite well, but its fit on data outside the training set is often much less satisfactory. We therefore do not train on all of the data; instead we hold out a portion (which takes no part in training) and use it to test the parameters learned from the training set, giving a relatively objective measure of how well those parameters generalize to unseen data. This idea is called cross-validation.
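
cross_val_score, used below, wraps exactly this idea. A minimal hand-rolled sketch of 5-fold validation (assuming train_x and train_y_ln from above):

from sklearn.model_selection import KFold
from sklearn.metrics import mean_absolute_error

# Split the data into 5 folds; each fold is used once as the validation set
kf = KFold(n_splits=5, shuffle=True, random_state=0)
fold_maes = []
for train_idx, valid_idx in kf.split(train_x):
    m = LinearRegression().fit(train_x.iloc[train_idx], train_y_ln.iloc[train_idx])
    fold_maes.append(mean_absolute_error(train_y_ln.iloc[valid_idx], m.predict(train_x.iloc[valid_idx])))
print('AVG:', np.mean(fold_maes))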

from sklearn.model_selection import cross_val_score
from sklearn.metrics import mean_absolute_error,make_scorer

def log_transfer(func):
    def wrapper(y,yhat):
        result = func(np.log(y),np.nan_to_num(np.log(yhat)))
        return result
    return wrapper

scores = cross_val_score(model, X=train_x, y=train_y, verbose=1, cv = 5, scoring=make_scorer(log_transfer(mean_absolute_error)))
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done   5 out of   5 | elapsed:    0.7s finished
print('AVG:', np.mean(scores))
AVG: 1.365429605643482

Five-fold cross-validation of the linear regression model on the raw (untransformed) target gives an MAE of about 1.36.

scores = cross_val_score(model, X=train_x, y=train_y_ln, verbose=1, cv = 5, scoring=make_scorer(mean_absolute_error))
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done   5 out of   5 | elapsed:    0.7s finished
print('AVG:', np.mean(scores))
AVG: 0.19323301498528495

Five-fold cross-validation on the log-transformed target gives an MAE of about 0.19.

scores = pd.DataFrame(scores.reshape(1,-1))
scores.columns = ['cv' + str(x) for x in range(1, 6)]
scores.index = ['MAE']
scores
          cv1       cv2       cv3       cv4       cv5
MAE  0.190800  0.193762  0.194131  0.191823  0.195650

3) Simulating the real business setting
In reality, however, we cannot see into the future, so on time-related datasets five-fold cross-validation can give a misleadingly optimistic picture: predicting 2017 used-car prices from 2018 data is clearly unreasonable. We can therefore also split the dataset in time order. Here we use the earliest 4/5 of the samples as the training set and the latest 1/5 as the validation set; the final result is close to that of five-fold cross-validation.

import datetime
sample_feature=sample_feature.reset_index(drop=True)
split_point=len(sample_feature)//5*4

train=sample_feature.loc[:split_point].dropna()
val = sample_feature.loc[split_point:].dropna()

train_x = train[continuous_feature_names]
train_y_ln=np.log(train['price']+1)
val_x=val[continuous_feature_names]
val_y_ln=np.log(val['price']+1)


model=model.fit(train_x,train_y_ln)

mean_absolute_error(val_y_ln, model.predict(val_x))
0.19566623018078097
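
scikit-learn's TimeSeriesSplit offers a rolling version of the same idea, where every fold trains on the past and validates on the future; a minimal sketch (assuming the rows of train_x are already in time order, as above):

from sklearn.model_selection import TimeSeriesSplit

tscv = TimeSeriesSplit(n_splits=5)   # each fold trains on earlier samples, validates on later ones
for fold, (tr_idx, va_idx) in enumerate(tscv.split(train_x)):
    m = LinearRegression().fit(train_x.iloc[tr_idx], train_y_ln.iloc[tr_idx])
    mae = mean_absolute_error(train_y_ln.iloc[va_idx], m.predict(train_x.iloc[va_idx]))
    print('fold {}: MAE = {:.4f}'.format(fold, mae))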

4) Plotting learning curves and validation curves

from sklearn.model_selection import learning_curve, validation_curve
? learning_curve
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,n_jobs=1, train_size=np.linspace(.1, 1.0, 5 )):  
    plt.figure()  
    plt.title(title)  
    if ylim is not None:  
        plt.ylim(*ylim)  
    plt.xlabel('Training example')  
    plt.ylabel('score')  
    train_sizes, train_scores, test_scores = learning_curve(estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_size, scoring = make_scorer(mean_absolute_error))  
    train_scores_mean = np.mean(train_scores, axis=1)  
    train_scores_std = np.std(train_scores, axis=1)  
    test_scores_mean = np.mean(test_scores, axis=1)  
    test_scores_std = np.std(test_scores, axis=1)  
    plt.grid()  # draw the background grid
    plt.fill_between(train_sizes, train_scores_mean - train_scores_std,  
                     train_scores_mean + train_scores_std, alpha=0.1,  
                     color="r")  
    plt.fill_between(train_sizes, test_scores_mean - test_scores_std,  
                     test_scores_mean + test_scores_std, alpha=0.1,  
                     color="g")  
    plt.plot(train_sizes, train_scores_mean, 'o-', color='r',  
             label="Training score")  
    plt.plot(train_sizes, test_scores_mean,'o-',color="g",  
             label="Cross-validation score")  
    plt.legend(loc="best")  
    return plt  
plot_learning_curve(LinearRegression(), 'Linear_model', train_x[:1000], train_y_ln[:1000], ylim=(0.0, 0.5), cv=5, n_jobs=1)

<module 'matplotlib.pyplot' from 'E:\\Program Files\\Anaconda3\\envs\\tfc\\lib\\site-packages\\matplotlib\\pyplot.py'>

(Figure: learning curve of the linear model, training score vs. cross-validation score)

4.3 Comparing multiple models

train=sample_feature[continuous_feature_names+['price']].dropna()

train_x=train[continuous_feature_names]
train_y=train['price']
train_y_ln=np.log(train_y+1)

4.3.1 Linear models & embedded feature selection
This subsection assumes the reader is already familiar with overfitting, model complexity and regularization; otherwise please consult other materials or the links below:

  • An easy-to-understand explanation of overfitting: https://www.zhihu.com/question/32246256/answer/55320482
  • Model complexity and generalization ability: http://yangyingming.com/article/434/
  • An intuitive view of regularization: https://blog.csdn.net/jinping_shi/article/details/52433975

In filter and wrapper feature selection, the selection step is clearly separate from model training, whereas embedded feature selection happens automatically during training. The most common embedded approaches are L1 and L2 regularization: adding L1 regularization to linear regression gives Lasso regression, and adding L2 regularization gives ridge regression.

from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Ridge
from sklearn.linear_model import Lasso

models = [LinearRegression(),
          Ridge(),
          Lasso()]

result = dict()
for model in models:
    model_name = str(model).split('(')[0]
    scores = cross_val_score(model, X=train_x, y=train_y_ln, verbose=0, cv = 5, scoring=make_scorer(mean_absolute_error))
    result[model_name] = scores
    print(model_name + ' is finished')
LinearRegression is finished
Ridge is finished
Lasso is finished
result = pd.DataFrame(result)
result.index = ['cv' + str(x) for x in range(1, 6)]
result
     LinearRegression     Ridge     Lasso
cv1          0.190800  0.194842  0.383742
cv2          0.193762  0.197644  0.381951
cv3          0.194131  0.198123  0.384090
cv4          0.191823  0.195665  0.380416
cv5          0.195650  0.199548  0.383910
model = LinearRegression().fit(train_x, train_y_ln)
print('intercept:'+ str(model.intercept_))
sns.barplot(abs(model.coef_), continuous_feature_names)
intercept:18.748799413355343
<AxesSubplot:>

(Figure: absolute linear regression coefficients per feature)

model = Ridge().fit(train_x, train_y_ln)
print('intercept:'+ str(model.intercept_))
sns.barplot(abs(model.coef_), continuous_feature_names)
intercept:4.667963020579042
<AxesSubplot:>

(Figure: absolute ridge regression coefficients per feature)

L2 regularization tends to shrink the weights during fitting, producing a model whose parameters are all fairly small. A model with small parameters is generally considered simpler, adapts better to different datasets, and is somewhat less prone to overfitting. Intuitively, for a linear regression equation with very large coefficients, a tiny shift in the data causes a large change in the output; with small enough coefficients, even a larger shift in the data barely affects the result — in more formal terms, the model is robust to perturbations.
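
The shrinkage is easy to see by comparing coefficient norms at different regularization strengths; a minimal sketch (the alpha values are illustrative, reusing train_x and train_y_ln from above):

# Stronger L2 regularization (larger alpha) should shrink the coefficient norm
for alpha in [0.1, 1.0, 10.0, 100.0]:
    ridge = Ridge(alpha=alpha).fit(train_x, train_y_ln)
    print('alpha={:>6}: ||coef||_2 = {:.4f}'.format(alpha, np.linalg.norm(ridge.coef_)))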

model = Lasso().fit(train_x, train_y_ln)
print('intercept:'+ str(model.intercept_))
sns.barplot(abs(model.coef_), continuous_feature_names)
intercept:8.67204299573008
<AxesSubplot:>

(Figure: absolute Lasso coefficients per feature)

L1 regularization encourages a sparse weight vector, which makes it useful for feature selection. As the figure above shows, the power and used_time features turn out to be very important.

Beyond that, when a decision tree selects split nodes by information entropy or the Gini index, the features it prefers to split on are also the more important ones, which is another form of feature selection. The feature importance reported by XGBoost and LightGBM is computed on exactly this basis.
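
For example, a fitted LGBMRegressor exposes feature_importances_; a minimal sketch (LGBMRegressor is imported again in the next subsection; assuming train_x, train_y_ln and continuous_feature_names from above):

from lightgbm.sklearn import LGBMRegressor

# Fit a small gradient-boosting model and list the ten most important features
lgb_model = LGBMRegressor(n_estimators=100).fit(train_x, train_y_ln)
importance = pd.Series(lgb_model.feature_importances_, index=continuous_feature_names)
print(importance.sort_values(ascending=False).head(10))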

4.3.2 Non-linear models
Besides linear models there are many commonly used non-linear models; space does not allow covering the theory of each one here, so we pick several common ones and compare their performance against the linear model.

from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor
from xgboost.sklearn import XGBRegressor
from lightgbm.sklearn import LGBMRegressor
models = [LinearRegression(),
          DecisionTreeRegressor(),
          RandomForestRegressor(),
          GradientBoostingRegressor(),
          MLPRegressor(solver='lbfgs', max_iter=100), 
          XGBRegressor(n_estimators = 100, objective='reg:squarederror'), 
          LGBMRegressor(n_estimators = 100)]
result = dict()
for model in models:
    model_name = str(model).split('(')[0]
    scores = cross_val_score(model, X=train_x, y=train_y_ln, verbose=0, cv = 5, scoring=make_scorer(mean_absolute_error))
    result[model_name] = scores
    print(model_name + ' is finished')
LinearRegression is finished
DecisionTreeRegressor is finished
RandomForestRegressor is finished
GradientBoostingRegressor is finished
MLPRegressor is finished
XGBRegressor is finished
LGBMRegressor is finished
result = pd.DataFrame(result)
result.index = ['cv' + str(x) for x in range(1, 6)]
result
     LinearRegression  DecisionTreeRegressor  RandomForestRegressor  GradientBoostingRegressor  MLPRegressor  XGBRegressor  LGBMRegressor
cv1          0.190800               0.189433               0.131378                   0.168813    128.840662      0.136822       0.140941
cv2          0.193762               0.192434               0.134371                   0.171846    124.051361      0.140435       0.144713
cv3          0.194131               0.189509               0.134091                   0.170886    136.727832      0.139488       0.144099
cv4          0.191823               0.186970               0.132096                   0.169083    558.351346      0.136929       0.142516
cv5          0.195650               0.190171               0.134755                   0.174072    221.655683      0.139843       0.144853

The random forest model achieves the best score in every fold.
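
The huge MLPRegressor errors are mostly a scaling issue: neural networks are sensitive to feature scale, so standardizing the inputs usually brings the error back into a comparable range. A minimal sketch (the pipeline below is an assumption, not part of the original run):

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Standardize the features before feeding them to the MLP
scaled_mlp = make_pipeline(StandardScaler(), MLPRegressor(solver='lbfgs', max_iter=100))
mlp_scores = cross_val_score(scaled_mlp, X=train_x, y=train_y_ln, cv=5,
                             scoring=make_scorer(mean_absolute_error))
print('AVG:', np.mean(mlp_scores))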

4.4 Model tuning
Here we introduce three commonly used tuning approaches:

  • Greedy search https://www.jianshu.com/p/ab89df9759c8
    A greedy algorithm always makes the choice that looks best at the moment: instead of optimizing globally, it settles for a locally optimal decision at each step.
  • Grid search https://blog.csdn.net/weixin_43172660/article/details/83032029
    A tuning method to use when the model is not performing well: it loops over every combination of candidate parameter values and returns the combination with the best score.
  • Bayesian optimization https://blog.csdn.net/linxid/article/details/81189154
    Manual tuning is very time-consuming, and grid or random search needs no human effort but takes a long time to run, so many automatic hyperparameter tuning methods have appeared. Bayesian optimization uses a surrogate model to find the minimum of a function; applied to hyperparameter search in machine learning, it performs well and is more time-efficient than random search.
## The LightGBM parameter search space:

objective = ['regression', 'regression_l1', 'mape', 'huber', 'fair']

num_leaves = [3,5,10,15,20,40, 55]
max_depth = [3,5,10,15,20,40, 55]
bagging_fraction = []  # left empty: not tuned in this walkthrough
feature_fraction = []  # left empty: not tuned in this walkthrough
drop_rate = []         # left empty: not tuned in this walkthrough
# 4.4.1 Greedy tuning
best_obj = dict()
for obj in objective:
    model = LGBMRegressor(objective=obj)
    score = np.mean(cross_val_score(model, X=train_x, y=train_y_ln, verbose=0, cv = 5, scoring=make_scorer(mean_absolute_error)))
    best_obj[obj] = score
    
best_leaves = dict()
for leaves in num_leaves:
    model = LGBMRegressor(objective=min(best_obj.items(), key=lambda x:x[1])[0], num_leaves=leaves)
    score = np.mean(cross_val_score(model, X=train_x, y=train_y_ln, verbose=0, cv = 5, scoring=make_scorer(mean_absolute_error)))
    best_leaves[leaves] = score
    
best_depth = dict()
for depth in max_depth:
    model = LGBMRegressor(objective=min(best_obj.items(), key=lambda x:x[1])[0],
                          num_leaves=min(best_leaves.items(), key=lambda x:x[1])[0],
                          max_depth=depth)
    score = np.mean(cross_val_score(model, X=train_x, y=train_y_ln, verbose=0, cv = 5, scoring=make_scorer(mean_absolute_error)))
    best_depth[depth] = score
sns.lineplot(x=['0_initial','1_tuning_obj','2_tuning_leaves','3_tuning_depth'], y=[0.143 ,min(best_obj.values()), min(best_leaves.values()), min(best_depth.values())])
<AxesSubplot:>

(Figure: MAE after each greedy tuning step)

# 4.4.2 Grid search
from sklearn.model_selection import GridSearchCV

parameters = {'objective': objective , 'num_leaves': num_leaves, 'max_depth': max_depth}
model = LGBMRegressor()
clf = GridSearchCV(model, parameters, cv=5)
clf = clf.fit(train_x, train_y_ln)  # fit on the log-transformed target, consistent with the rest of the section


clf.best_params_

model = LGBMRegressor(objective='regression',
                          num_leaves=55,
                          max_depth=15)

np.mean(cross_val_score(model, X=train_x, y=train_y_ln, verbose=0, cv = 5, scoring=make_scorer(mean_absolute_error)))

(Traceback truncated: the GridSearchCV fit raised a KeyboardInterrupt because the run was stopped manually; exhaustively evaluating every parameter combination on the full training set takes a very long time.)
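
Because the exhaustive grid is slow, the random search mentioned above is a common compromise: it samples a fixed number of parameter combinations instead of trying all of them. A minimal sketch (n_iter is an illustrative choice, reusing the candidate lists defined above):

from sklearn.model_selection import RandomizedSearchCV

param_dist = {'objective': objective, 'num_leaves': num_leaves, 'max_depth': max_depth}
# Sample 20 random combinations instead of the full grid
rs = RandomizedSearchCV(LGBMRegressor(), param_dist, n_iter=20, cv=5,
                        scoring=make_scorer(mean_absolute_error, greater_is_better=False),
                        random_state=0)
rs = rs.fit(train_x, train_y_ln)
print(rs.best_params_)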
# 4.4.3 Bayesian optimization
from bayes_opt import BayesianOptimization

def rf_cv(num_leaves, max_depth, subsample, min_child_samples):
    val = cross_val_score(
        LGBMRegressor(objective = 'regression_l1',
            num_leaves=int(num_leaves),
            max_depth=int(max_depth),
            subsample = subsample,
            min_child_samples = int(min_child_samples)
        ),
        X=train_x, y=train_y_ln, verbose=0, cv = 5, scoring=make_scorer(mean_absolute_error)
    ).mean()
    return 1 - val  # BayesianOptimization maximizes the target, so report 1 - MAE
rf_bo = BayesianOptimization(
    rf_cv,
    {
    'num_leaves': (2, 100),
    'max_depth': (2, 100),
    'subsample': (0.1, 1),
    'min_child_samples' : (2, 100)
    }
)


rf_bo.maximize()
|   iter    |  target   | max_depth | min_ch... | num_le... | subsample |
-------------------------------------------------------------------------
|  1        |  0.8204   |  62.42    |  39.67    |  5.592    |  0.7953   |
|  2        |  0.8659   |  83.13    |  89.65    |  66.31    |  0.4288   |
|  3        |  0.8689   |  82.52    |  23.73    |  97.69    |  0.6415   |
|  4        |  0.8662   |  17.5     |  65.45    |  67.33    |  0.1217   |
|  5        |  0.867    |  92.53    |  62.52    |  73.34    |  0.467    |
|  6        |  0.8063   |  2.0      |  2.0      |  100.0    |  0.1      |
|  7        |  0.8693   |  54.06    |  77.84    |  100.0    |  1.0      |
|  8        |  0.8062   |  2.0      |  100.0    |  100.0    |  0.1      |
|  9        |  0.8673   |  55.52    |  51.45    |  75.59    |  0.1      |
|  10       |  0.8691   |  82.59    |  22.88    |  96.42    |  0.1579   |
|  11       |  0.8695   |  100.0    |  100.0    |  100.0    |  0.1      |
|  12       |  0.7719   |  2.0      |  100.0    |  2.108    |  0.1      |
|  13       |  0.8629   |  100.0    |  2.0      |  47.37    |  1.0      |
|  14       |  0.7719   |  100.0    |  100.0    |  2.0      |  0.1      |
|  15       |  0.7719   |  2.0      |  2.0      |  2.0      |  1.0      |
|  16       |  0.8691   |  100.0    |  2.0      |  100.0    |  1.0      |
|  17       |  0.8693   |  100.0    |  60.66    |  100.0    |  0.1      |
|  18       |  0.8664   |  46.5     |  88.58    |  68.37    |  1.0      |
|  19       |  0.7719   |  100.0    |  2.0      |  2.0      |  0.1      |
|  20       |  0.8654   |  66.89    |  2.0      |  63.27    |  1.0      |
|  21       |  0.8665   |  100.0    |  21.98    |  70.46    |  1.0      |
|  22       |  0.8695   |  71.77    |  100.0    |  100.0    |  1.0      |
|  23       |  0.8627   |  41.17    |  57.59    |  46.87    |  1.0      |
|  24       |  0.8687   |  78.51    |  77.03    |  92.37    |  1.0      |
|  25       |  0.8688   |  59.26    |  2.095    |  95.75    |  0.6038   |
|  26       |  0.8668   |  91.0     |  2.102    |  72.42    |  0.367    |
|  27       |  0.8693   |  36.4     |  44.85    |  100.0    |  0.1      |
|  28       |  0.8664   |  31.37    |  35.16    |  68.76    |  1.0      |
|  29       |  0.8678   |  34.35    |  62.78    |  80.72    |  1.0      |
|  30       |  0.8639   |  74.21    |  31.78    |  51.66    |  0.1      |
=========================================================================
1 - rf_bo.max['target']
0.13049715979707488
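
The winning parameter set is stored in rf_bo.max['params'] and can be used to refit a final model; a minimal sketch (note that num_leaves, max_depth and min_child_samples must be cast back to int):

best = rf_bo.max['params']
final_model = LGBMRegressor(objective='regression_l1',
                            num_leaves=int(best['num_leaves']),
                            max_depth=int(best['max_depth']),
                            subsample=best['subsample'],
                            min_child_samples=int(best['min_child_samples']))
final_model = final_model.fit(train_x, train_y_ln)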

5. Summary
In this chapter we built and tuned models and validated them, and we applied several basic techniques to improve prediction accuracy; the improvement is shown in the figure below.

plt.figure(figsize=(13,5))
sns.lineplot(x=['0_origin','1_log_transfer','2_L1_&_L2','3_change_model','4_parameter_tuning'], y=[1.36 ,0.19, 0.19, 0.14, 0.13])
<AxesSubplot:>

(Figure: MAE at each stage: origin, log transform, L1 & L2 regularization, model change, parameter tuning)

Thanks to Datawhale; I will keep learning more here. I ran all of this code myself, and although I do not fully understand some of the concepts yet, it gave me a rough overall picture. I hope a multi-label prediction project comes later. Thanks again!
