Tianchi Used Car Transaction Price Prediction, Task 04 (Modeling and Parameter Tuning)

This chapter walks through the modeling and parameter-tuning workflow for a machine learning model. The feature engineering and data cleaning of the previous tasks all serve the final model; building and tuning the model determines the final result.

First, a reference to Miracle8070's mind map of this chapter's content.

1 Code Examples

1.1 Reading the Data

import pandas as pd
import numpy as np
import warnings
warnings.filterwarnings('ignore')

The reduce_mem_usage function reduces a DataFrame's memory footprint by downcasting each column to the smallest data type that can hold its values. (Note that downcasting floats to float16 loses precision, so apply it with care to features where precision matters.)

def reduce_mem_usage(df):
    """ iterate through all the columns of a dataframe and modify the data type
        to reduce memory usage.        
    """
    start_mem = df.memory_usage().sum() / 1024**2
    print('Memory usage of dataframe is {:.2f} MB'.format(start_mem))
    
    for col in df.columns:
        col_type = df[col].dtype
        
        if col_type != object:
            c_min = df[col].min()
            c_max = df[col].max()
            if str(col_type)[:3] == 'int':
                if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
                    df[col] = df[col].astype(np.int8)
                elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
                    df[col] = df[col].astype(np.int16)
                elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
                    df[col] = df[col].astype(np.int32)
                elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
                    df[col] = df[col].astype(np.int64)  
            else:
                if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
                    df[col] = df[col].astype(np.float16)
                elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
                    df[col] = df[col].astype(np.float32)
                else:
                    df[col] = df[col].astype(np.float64)
        else:
            df[col] = df[col].astype('category')

    end_mem = df.memory_usage().sum() / 1024**2
    print('Memory usage after optimization is: {:.2f} MB'.format(end_mem))
    print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem) / start_mem))
    return df
sample_feature = reduce_mem_usage(pd.read_csv('data_for_tree.csv'))
Memory usage of dataframe is 57.70 MB
Memory usage after optimization is: 15.00 MB
Decreased by 74.0%

Get the names of the continuous feature columns:

continuous_feature_names = [x for x in sample_feature.columns if x not in ['price', 'brand', 'model']]

2 Linear Regression Model

Linear Regression is the simplest regression model, so we first use it as the model for used car price prediction.

Linear regression is a regression analysis that models the relationship between one or more independent variables and a dependent variable with a least-squares linear function, the linear regression equation. In this article, let Y denote the used car price we want to predict and x_{i} denote the features we constructed; the relationship between them is:

    Y = w_{1}x_{1} + w_{2}x_{2} + \cdots + w_{n}x_{n} + a

The essence of model training is to find suitable weights w_{i} from the training set and then use them to predict Y_{test} for the test set X.
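As a toy illustration only (the numbers below are made up and are not competition data), the linear form above is just a weighted sum of the features plus an intercept:

import numpy as np

w = np.array([0.5, -1.2, 3.0])          # weights w_i learned from the training set
a = 10.0                                 # intercept a
X_test = np.array([[1.0, 2.0, 0.5],      # each row holds one sample's features x_i
                   [0.0, 1.5, 2.0]])
Y_test = X_test @ w + a                  # Y = w_1*x_1 + ... + w_n*x_n + a for every row
print(Y_test)                            # [ 9.6 14.2]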

In this article, the linear regression model is always the LinearRegression implementation from the sklearn package.

# drop missing rows, replace the '-' placeholder with 0, and build the training matrix
sample_feature = sample_feature.dropna().replace('-', 0).reset_index(drop=True)
sample_feature['notRepairedDamage'] = sample_feature['notRepairedDamage'].astype(np.float32)
train = sample_feature[continuous_feature_names + ['price']]
train_X = train[continuous_feature_names]
train_y = train['price']


from sklearn.linear_model import LinearRegression

# note: normalize=True is deprecated and removed in newer scikit-learn versions (see the note below)
model = LinearRegression(normalize=True)
model = model.fit(train_X, train_y)
"""View the intercept and weights (coef) of the trained linear regression model"""
print('intercept: ' + str(model.intercept_))
sorted(dict(zip(continuous_feature_names, model.coef_)).items(), key=lambda x: x[1], reverse=True)


## Result:
intercept: -178881.74591832393
[('v_6', 3342612.384537345),
 ('v_8', 684205.534533214),
 ('v_9', 178967.94192530424),
 ('v_7', 35223.07319016895),
 ('v_5', 21917.550249749802),
 ('v_3', 12782.03250792227),
 ('v_12', 11654.925634146672),
 ('v_13', 9884.194615297649),
 ('v_11', 5519.182176035517),
 ('v_10', 3765.6101415594258),
 ('gearbox', 900.3205339198406),
 ('fuelType', 353.5206495542567),
 ('bodyType', 186.51797317460046),
 ('city', 45.17354204168846),
 ('power', 31.163045441455335),
 ('brand_price_median', 0.535967111869784),
 ('brand_price_std', 0.4346788365040235),
 ('brand_amount', 0.15308295553300566),
 ('brand_price_max', 0.003891831020467389),
 ('seller', -1.2684613466262817e-06),
 ('offerType', -4.759058356285095e-06),
 ('brand_price_sum', -2.2430642281682917e-05),
 ('name', -0.00042591632723759166),
 ('used_time', -0.012574429533889028),
 ('brand_price_average', -0.414105722833381),
 ('brand_price_min', -2.3163823428971835),
 ('train', -5.392535065078232),
 ('power_bin', -59.24591853031839),
 ('v_14', -233.1604256172217),
 ('kilometer', -372.96600915402496),
 ('notRepairedDamage', -449.29703564695365),
 ('v_0', -1490.6790578168238),
 ('v_4', -14219.648899108111),
 ('v_2', -16528.55239086934),
 ('v_1', -42869.43976200439)]

P.S.: in the linear regression model, the weights are the w_{i} and the intercept is the a in the formula above.
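A side note on normalize=True: this argument was deprecated and later removed from LinearRegression in recent scikit-learn releases, so the call above may fail on a new version. Below is a minimal sketch of the usual replacement, scaling the features explicitly in a Pipeline (lr_scaled is just an illustrative name; StandardScaler is not an exact equivalent of the old normalize behaviour, which divided by the L2 norm):

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression

# sketch: standardize the features, then fit an ordinary linear regression
lr_scaled = make_pipeline(StandardScaler(), LinearRegression())
lr_scaled.fit(train_X, train_y)
# the fitted coefficients live on the last pipeline step:
# lr_scaled[-1].coef_, lr_scaled[-1].intercept_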

Reveal the relationship between the used car price and the features graphically:

from matplotlib import pyplot as plt

subsample_index = np.random.randint(low=0, high=len(train_y), size=50)  # randomly sample 50 rows to plot
plt.scatter(train_X['v_9'][subsample_index], train_y[subsample_index], color='black')
plt.scatter(train_X['v_9'][subsample_index], model.predict(train_X.loc[subsample_index]), color='blue')
plt.xlabel('v_9')
plt.ylabel('price')
plt.legend(['True Price','Predicted Price'],loc='upper right')
print('The predicted price is obviously different from the true price')
plt.show()

The figure above is a scatter plot of feature v_9 against price. The model's predictions (blue points) differ substantially from the true labels (black points), and some predictions are even negative, which shows that the model has problems and needs further adjustment.

Distribution plot of price

import seaborn as sns

print('It is clear to see that the price shows a typical exponential (long-tailed) distribution')
plt.figure(figsize=(15,5))
plt.subplot(1,2,1)
sns.distplot(train_y)  # note: distplot is deprecated in newer seaborn; histplot is the modern replacement
plt.subplot(1,2,2)
sns.distplot(train_y[train_y < np.quantile(train_y, 0.9)])

As seen in the feature engineering of the previous task, price has a long-tailed distribution, which is bad for modeling: many models assume the error term is normally distributed, and long-tailed data violates this assumption. Reference: https://blog.csdn.net/Noob_daniel/article/details/76087829

Below we apply a log(x+1) transform to the label to bring it closer to a normal distribution.

import seaborn as sns

train_y_ln = np.log(train_y + 1)  # equivalently np.log1p(train_y)
print('The transformed price now looks close to a normal distribution')
plt.figure(figsize=(15,5))
plt.subplot(1,2,1)
sns.distplot(train_y_ln)
plt.subplot(1,2,2)
sns.distplot(train_y_ln[train_y_ln < np.quantile(train_y_ln, 0.9)])

Distribution plot of price after the transform

Train the model again after this adjustment:

model = model.fit(train_X, train_y_ln)

print('intercept:'+ str(model.intercept_))
sorted(dict(zip(continuous_feature_names, model.coef_)).items(), key=lambda x:x[1], reverse=True)

# Weights and results
intercept:23.515920686637713
[('v_9', 6.043993029165403),
 ('v_12', 2.0357439855551394),
 ('v_11', 1.3607608712255672),
 ('v_1', 1.3079816298861897),
 ('v_13', 1.0788833838535354),
 ('v_3', 0.9895814429387444),
 ('gearbox', 0.009170812023421397),
 ('fuelType', 0.006447089787635784),
 ('bodyType', 0.004815242907679581),
 ('power_bin', 0.003151801949447194),
 ('power', 0.0012550361843629999),
 ('train', 0.0001429273782925814),
 ('brand_price_min', 2.0721302299502698e-05),
 ('brand_price_average', 5.308179717783439e-06),
 ('brand_amount', 2.8308531339942507e-06),
 ('brand_price_max', 6.764442596115763e-07),
 ('offerType', 1.6765966392995324e-10),
 ('seller', 9.308109838457312e-12),
 ('brand_price_sum', -1.3473184925468486e-10),
 ('name', -7.11403461065247e-08),
 ('brand_price_median', -1.7608143661053008e-06),
 ('brand_price_std', -2.7899058266986454e-06),
 ('used_time', -5.6142735899344175e-06),
 ('city', -0.0024992974087053223),
 ('v_14', -0.012754139659375262),
 ('kilometer', -0.013999175312751872),
 ('v_0', -0.04553774829634237),
 ('notRepairedDamage', -0.273686961116076),
 ('v_7', -0.7455902679730504),
 ('v_4', -0.9281349233755761),
 ('v_2', -1.2781892166433606),
 ('v_5', -1.5458846136756323),
 ('v_10', -1.8059217242413748),
 ('v_8', -42.611729973490604),
 ('v_6', -241.30992120503035)]

 

Visualizing price again shows that the predictions are now close to the true values, with no anomalies.
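The figure itself is not reproduced here; a minimal sketch that recreates the comparison is to predict in log space and convert back with np.expm1 (the inverse of log(x+1)) before plotting:

plt.scatter(train_X['v_9'][subsample_index], train_y[subsample_index], color='black')
plt.scatter(train_X['v_9'][subsample_index],
            np.expm1(model.predict(train_X.loc[subsample_index])),   # back-transform log predictions to prices
            color='blue')
plt.xlabel('v_9')
plt.ylabel('price')
plt.legend(['True Price', 'Predicted Price'], loc='upper right')
plt.show()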

Quoting Miracle8070's summary once more:

  • The linear model is a very simple model. Although we will not use it later, the steps for building, training, and predicting with the later models are basically the same: .fit(X, Y) and .predict(X_test). So this is a chance to get a feel for how to build a model, train it, and run predictions.
  • Some of the techniques used with the linear model also apply to other models. For example, after training we can inspect in some way which features matter most to the model, which is extremely useful during feature selection (recall embedded and wrapper feature-selection methods), so this serves as a light refresher.
  • Inspecting how well the model trains can also bring unexpected insights; the distribution of price here is an example whose problems can already be spotted from the training results.

3 Cross Validation

When training parameters on a training set, people usually split the whole dataset into three parts (as with the MNIST handwriting dataset): a training set (train_set), a validation set (valid_set), and a test set (test_set). This is done deliberately to safeguard the quality of training. The test set is easy to understand: it is data that never takes part in training and is used only to measure final performance. The roles of the training and validation sets involve the ideas below.

In practice, a model usually fits the training set quite well, but its fit on data outside the training set is often much less satisfactory. We therefore do not train on all the data; instead we hold out a portion (which does not participate in training) to evaluate the parameters learned from the training set, giving a relatively objective judgment of how well they fit data outside the training set. This idea is called cross validation (Cross Validation).
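To make this concrete, here is a minimal sketch of how 5-fold cross-validation rotates the held-out fold, using sklearn's KFold directly (the random_state is arbitrary):

from sklearn.model_selection import KFold

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kf.split(train_X)):
    # each fold trains on 4/5 of the rows and validates on the remaining 1/5
    print('fold', fold, 'train size:', len(train_idx), 'val size:', len(val_idx))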

Using the linear regression model, run 5-fold cross-validation on the features with the untransformed label and compute the average score:

from sklearn.model_selection import cross_val_score
from sklearn.metrics import mean_absolute_error,  make_scorer

def log_transfer(func):
    """Wrap a metric so it is computed on the log of the labels, making the score comparable with the log-label setup."""
    def wrapper(y, yhat):
        result = func(np.log(y), np.nan_to_num(np.log(yhat)))
        return result
    return wrapper

scores = cross_val_score(model, X=train_X, y=train_y, verbose=1, cv = 5, scoring=make_scorer(log_transfer(mean_absolute_error)))
print('AVG:', np.mean(scores))

# Result
AVG: 1.3641908155886227

Using the linear regression model, run 5-fold cross-validation on the features with the log-transformed label and compute the average score:

scores = cross_val_score(model, X=train_X, y=train_y_ln, verbose=1, cv = 5, scoring=make_scorer(mean_absolute_error))
print('AVG:', np.mean(scores))


# Result
AVG: 0.19382863663604424

Show the MAE score of each of the five folds:

scores = pd.DataFrame(scores.reshape(1,-1))
scores.columns = ['cv' + str(x) for x in range(1, 6)]
scores.index = ['MAE']
scores

 

 

4 Simulating the Real Business Scenario

In reality, however, we cannot see the future, so on time-dependent datasets 5-fold cross-validation can paint an unrealistic picture: predicting 2017 used-car prices from 2018 prices is clearly unreasonable. We can therefore split the dataset by time order instead. In this example we take the earlier 4/5 of the samples as the training set and the later 1/5 as the validation set; the final result is close to that of 5-fold cross-validation.

import datetime

sample_feature = sample_feature.reset_index(drop=True)
split_point = len(sample_feature) // 5 * 4   # earlier 4/5 of the rows for training, later 1/5 for validation
train = sample_feature.loc[:split_point].dropna()
val = sample_feature.loc[split_point:].dropna()

train_X = train[continuous_feature_names]
train_y_ln = np.log(train['price'] + 1)
val_X = val[continuous_feature_names]
val_y_ln = np.log(val['price'] + 1)

model = model.fit(train_X, train_y_ln)
mean_absolute_error(val_y_ln, model.predict(val_X))

# Result
0.19443858353490887
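scikit-learn also ships TimeSeriesSplit for this kind of time-ordered validation. A rough sketch, assuming the rows are already sorted by time as in the split above:

from sklearn.model_selection import TimeSeriesSplit

tscv = TimeSeriesSplit(n_splits=5)
for train_idx, val_idx in tscv.split(train_X):
    # each split trains only on earlier rows and validates on later ones
    m = LinearRegression().fit(train_X.iloc[train_idx], train_y_ln.iloc[train_idx])
    print(mean_absolute_error(train_y_ln.iloc[val_idx], m.predict(train_X.iloc[val_idx])))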

Plot the learning curve and the validation curve:

from sklearn.model_selection import learning_curve, validation_curve

# ? learning_curve   (run in IPython/Jupyter to view the docstring)
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None, n_jobs=1, train_size=np.linspace(.1, 1.0, 5)):
    plt.figure()  
    plt.title(title)  
    if ylim is not None:  
        plt.ylim(*ylim)  
    plt.xlabel('Training example')  
    plt.ylabel('score')  
    train_sizes, train_scores, test_scores = learning_curve(estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_size, scoring = make_scorer(mean_absolute_error))  
    train_scores_mean = np.mean(train_scores, axis=1)  
    train_scores_std = np.std(train_scores, axis=1)  
    test_scores_mean = np.mean(test_scores, axis=1)  
    test_scores_std = np.std(test_scores, axis=1)  
    plt.grid()#区域  
    plt.fill_between(train_sizes, train_scores_mean - train_scores_std,  
                     train_scores_mean + train_scores_std, alpha=0.1,  
                     color="r")  
    plt.fill_between(train_sizes, test_scores_mean - test_scores_std,  
                     test_scores_mean + test_scores_std, alpha=0.1,  
                     color="g")  
    plt.plot(train_sizes, train_scores_mean, 'o-', color='r',  
             label="Training score")  
    plt.plot(train_sizes, test_scores_mean,'o-',color="g",  
             label="Cross-validation score")  
    plt.legend(loc="best")  
    return plt  

plot_learning_curve(LinearRegression(), 'Liner_model', train_X[:1000], train_y_ln[:1000], ylim=(0.0, 0.5), cv=5, n_jobs=1)  
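The heading above also mentions the validation curve, but only the learning curve is plotted. For completeness, here is a minimal sketch of validation_curve, sweeping Ridge's alpha purely as an example parameter (Ridge and the alpha range are assumptions, not part of the original code):

from sklearn.linear_model import Ridge
from sklearn.model_selection import validation_curve

param_range = [0.01, 0.1, 1.0, 10.0, 100.0]
train_scores, test_scores = validation_curve(
    Ridge(), train_X[:1000], train_y_ln[:1000],
    param_name='alpha', param_range=param_range,
    cv=5, scoring=make_scorer(mean_absolute_error))

# plot mean MAE on the training folds and the validation folds for each alpha
plt.plot(param_range, train_scores.mean(axis=1), 'o-', color='r', label='Training score')
plt.plot(param_range, test_scores.mean(axis=1), 'o-', color='g', label='Cross-validation score')
plt.xscale('log')
plt.xlabel('alpha')
plt.ylabel('MAE')
plt.legend(loc='best')
plt.show()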

5 Comparing Multiple Models (focusing on non-linear models)

The code below compares linear regression with the decision tree, random forest, gradient boosting tree, multi-layer perceptron (MLP), XGBoost, and LightGBM models.

from sklearn.linear_model import LinearRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor
from xgboost.sklearn import XGBRegressor
from lightgbm.sklearn import LGBMRegressor

models = [LinearRegression(),
          DecisionTreeRegressor(),
          RandomForestRegressor(),
          GradientBoostingRegressor(),
          MLPRegressor(solver='lbfgs', max_iter=100), 
          XGBRegressor(n_estimators = 100, objective='reg:squarederror'), 
          LGBMRegressor(n_estimators = 100)]

result = dict()
for model in models:
    model_name = str(model).split('(')[0]
    scores = cross_val_score(model, X=train_X, y=train_y_ln, verbose=0, cv = 5, scoring=make_scorer(mean_absolute_error))
    result[model_name] = scores
    print(model_name + ' is finished')

result = pd.DataFrame(result)
result.index = ['cv' + str(x) for x in range(1, 6)]
result

 

You can see that the random forest model achieved better results in every fold.

6 Model Tuning

Here we introduce three commonly used tuning methods:

## LGB parameter search space:

objective = ['regression', 'regression_l1', 'mape', 'huber', 'fair']

num_leaves = [3,5,10,15,20,40, 55]
max_depth = [3,5,10,15,20,40, 55]
bagging_fraction = []
feature_fraction = []
drop_rate = []

6.1 Greedy Tuning

Greedy tuning means tuning the parameter that affects the model the most until it is optimal, then moving on to the next most influential parameter, and so on, until all parameters have been tuned.

best_obj = dict()
for obj in objective:
    model = LGBMRegressor(objective=obj)
    score = np.mean(cross_val_score(model, X=train_X, y=train_y_ln, verbose=0, cv = 5, scoring=make_scorer(mean_absolute_error)))
    best_obj[obj] = score
    
best_leaves = dict()
for leaves in num_leaves:
    model = LGBMRegressor(objective=min(best_obj.items(), key=lambda x:x[1])[0], num_leaves=leaves)
    score = np.mean(cross_val_score(model, X=train_X, y=train_y_ln, verbose=0, cv = 5, scoring=make_scorer(mean_absolute_error)))
    best_leaves[leaves] = score
    
best_depth = dict()
for depth in max_depth:
    model = LGBMRegressor(objective=min(best_obj.items(), key=lambda x:x[1])[0],
                          num_leaves=min(best_leaves.items(), key=lambda x:x[1])[0],
                          max_depth=depth)
    score = np.mean(cross_val_score(model, X=train_X, y=train_y_ln, verbose=0, cv = 5, scoring=make_scorer(mean_absolute_error)))
    best_depth[depth] = score

# 0.143 is the score of the untuned model, used as the starting point of the curve
sns.lineplot(x=['0_initial','1_turning_obj','2_turning_leaves','3_turning_depth'], y=[0.143 ,min(best_obj.values()), min(best_leaves.values()), min(best_depth.values())])

6.2 Grid Search Tuning

GridSearchCV is grid search, an automated tuning method: feed in the candidate parameters and it returns the optimal result and parameter combination. It suits small datasets; once the data volume grows, it becomes hard to finish. It is not a great fit here because our dataset is large, but it is worth noting since it is often very handy.

from sklearn.model_selection import GridSearchCV

parameters = {'objective': objective , 'num_leaves': num_leaves, 'max_depth': max_depth}
model = LGBMRegressor()
clf = GridSearchCV(model, parameters, cv=5)
clf = clf.fit(train_X, train_y)  # note: the search here fits on the raw label; fitting on train_y_ln would match the evaluation below
clf.best_params_

# Result
{'max_depth': 15, 'num_leaves': 55, 'objective': 'regression'}
model = LGBMRegressor(objective='regression',
                          num_leaves=55,
                          max_depth=15)
np.mean(cross_val_score(model, X=train_X, y=train_y_ln, verbose=0, cv = 5, scoring=make_scorer(mean_absolute_error)))

# Result
0.13626164479243302
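When the full grid is too expensive to enumerate, RandomizedSearchCV samples a fixed number of parameter combinations from the same search space instead; a minimal sketch (n_iter=20 and random_state=0 are arbitrary choices) is:

from sklearn.model_selection import RandomizedSearchCV

rs = RandomizedSearchCV(LGBMRegressor(), parameters, n_iter=20, cv=5,
                        scoring=make_scorer(mean_absolute_error, greater_is_better=False),
                        random_state=0)
rs = rs.fit(train_X, train_y_ln)
rs.best_params_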

6.3 Bayesian Tuning

A package needs to be installed here: pip install bayesian-optimization

The main idea of Bayesian optimization for hyperparameter tuning is: given an objective function to optimize (in the broad sense, only its inputs and outputs need to be specified, with no knowledge of its internal structure or mathematical properties), keep adding sample points to update the posterior distribution of the objective function (a Gaussian process) until the posterior essentially matches the true distribution. Simply put, it takes the information from previous parameter evaluations into account to choose the next parameters more wisely.

from bayes_opt import BayesianOptimization

def rf_cv(num_leaves, max_depth, subsample, min_child_samples):
    val = cross_val_score(
        LGBMRegressor(objective = 'regression_l1',
            num_leaves=int(num_leaves),
            max_depth=int(max_depth),
            subsample = subsample,
            min_child_samples = int(min_child_samples)
        ),
        X=train_X, y=train_y_ln, verbose=0, cv = 5, scoring=make_scorer(mean_absolute_error)
    ).mean()
    # BayesianOptimization maximizes the target, so return 1 - MAE (smaller MAE => larger target)
    return 1 - val

rf_bo = BayesianOptimization(
    rf_cv,
    {
    'num_leaves': (2, 100),
    'max_depth': (2, 100),
    'subsample': (0.1, 1),
    'min_child_samples' : (2, 100)
    }
)
rf_bo.maximize()



# Result
|   iter    |  target   | max_depth | min_ch... | num_le... | subsample |
-------------------------------------------------------------------------
|  1        |  0.8649   |  89.57    |  47.3     |  55.13    |  0.1792   |
|  2        |  0.8477   |  99.86    |  60.91    |  15.35    |  0.4716   |
|  3        |  0.8698   |  81.74    |  83.32    |  92.59    |  0.9559   |
|  4        |  0.8627   |  90.2     |  8.754    |  43.34    |  0.7772   |
|  5        |  0.8115   |  10.07    |  86.15    |  4.109    |  0.3416   |
|  6        |  0.8701   |  99.15    |  9.158    |  99.47    |  0.494    |
|  7        |  0.806    |  2.166    |  2.416    |  97.7     |  0.224    |
|  8        |  0.8701   |  98.57    |  97.67    |  99.87    |  0.3703   |
|  9        |  0.8703   |  99.87    |  43.03    |  99.72    |  0.9749   |
|  10       |  0.869    |  10.31    |  99.63    |  99.34    |  0.2517   |
|  11       |  0.8703   |  52.27    |  99.56    |  98.97    |  0.9641   |
|  12       |  0.8669   |  99.89    |  8.846    |  66.49    |  0.1437   |
|  13       |  0.8702   |  68.13    |  75.28    |  98.71    |  0.153    |
|  14       |  0.8695   |  84.13    |  86.48    |  91.9     |  0.7949   |
|  15       |  0.8702   |  98.09    |  59.2     |  99.65    |  0.3275   |
|  16       |  0.87     |  68.97    |  98.62    |  98.93    |  0.2221   |
|  17       |  0.8702   |  99.85    |  63.74    |  99.63    |  0.4137   |
|  18       |  0.8703   |  45.87    |  99.05    |  99.89    |  0.3238   |
|  19       |  0.8702   |  79.65    |  46.91    |  98.61    |  0.8999   |
|  20       |  0.8702   |  99.25    |  36.73    |  99.05    |  0.1262   |
|  21       |  0.8702   |  85.51    |  85.34    |  99.77    |  0.8917   |
|  22       |  0.8696   |  99.99    |  38.51    |  89.13    |  0.9884   |
|  23       |  0.8701   |  63.29    |  97.93    |  99.94    |  0.9585   |
|  24       |  0.8702   |  93.04    |  71.42    |  99.94    |  0.9646   |
|  25       |  0.8701   |  99.73    |  16.21    |  99.38    |  0.9778   |
|  26       |  0.87     |  86.28    |  58.1     |  99.47    |  0.107    |
|  27       |  0.8703   |  47.28    |  99.83    |  99.65    |  0.4674   |
|  28       |  0.8703   |  68.29    |  99.51    |  99.4     |  0.2757   |
|  29       |  0.8701   |  76.49    |  73.41    |  99.86    |  0.9394   |
|  30       |  0.8695   |  37.27    |  99.87    |  89.87    |  0.7588   |
=========================================================================

Result:

1 - rf_bo.max['target']

# Result
0.1296693644053145
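To actually use the tuned parameters, they can be read back from rf_bo.max['params']; the integer-valued ones need to be cast back to int. A sketch:

best = rf_bo.max['params']
model = LGBMRegressor(objective='regression_l1',
                      num_leaves=int(best['num_leaves']),
                      max_depth=int(best['max_depth']),
                      subsample=best['subsample'],
                      min_child_samples=int(best['min_child_samples']))
np.mean(cross_val_score(model, X=train_X, y=train_y_ln, cv=5,
                        scoring=make_scorer(mean_absolute_error)))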

 

References:

1. Datawhale, Introduction to Data Mining for Beginners, Task 3: Feature Engineering
2. Introduction to Data Mining for Beginners Series (5): Model Building and Tuning (Miracle8070)
