Tianchi Used-Car Price Prediction - 4 Modeling and Hyperparameter Tuning

4.1 Learning Goals

Get familiar with commonly used machine learning models and master the workflow for building and tuning them.

4.2 Topics in Modeling and Tuning

  • Common algorithms and models
    • Linear regression
    • Tree models
    • GBDT
    • XGBoost
    • LightGBM
  • Model validation
    • Evaluation metrics and objective functions
    • Cross-validation
    • Leave-one-out validation
    • Validation for time series data
    • Plotting learning curves
    • Plotting validation curves
  • Embedded feature selection
    • Lasso regression
    • Ridge regression
    • Decision trees
  • Model comparison
    • Common linear models
    • Common nonlinear models
  • Hyperparameter tuning
    • Greedy search
    • Grid search
    • Bayesian optimization

Steps:

  1. Read the data
  2. Build simple baseline models with common algorithms
  3. Cross-validate
  4. Simulate the real business scenario
  5. Plot learning and validation curves
  6. Compare models
  7. Tune hyperparameters

4.3 Data Analysis

4.3.1 Reading the Data

import pandas as pd
import numpy as np
import warnings
warnings.filterwarnings('ignore')

The reduce_mem_usage function reduces a DataFrame's memory footprint by downcasting each column to the smallest data type that can hold its values.

def reduce_mem_usage(df):
    """ Iterate through all the columns of a dataframe and modify the data type
        to reduce memory usage.
    """
    start_mem = df.memory_usage().sum() / 1024**2  # bytes -> MB
    print('Memory usage of dataframe is {:.2f} MB'.format(start_mem))
    
    for col in df.columns:
        col_type = df[col].dtype
        
        if col_type != object:
            c_min = df[col].min()
            c_max = df[col].max()
            if str(col_type)[:3] == 'int':
                # downcast integers to the narrowest type that holds the value range
                if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
                    df[col] = df[col].astype(np.int8)
                elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
                    df[col] = df[col].astype(np.int16)
                elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
                    df[col] = df[col].astype(np.int32)
                elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
                    df[col] = df[col].astype(np.int64)
            else:
                # downcast floats the same way
                if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
                    df[col] = df[col].astype(np.float16)
                elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
                    df[col] = df[col].astype(np.float32)
                else:
                    df[col] = df[col].astype(np.float64)
        else:
            # object columns become pandas categoricals
            df[col] = df[col].astype('category')
 
    end_mem = df.memory_usage().sum() / 1024**2  # bytes -> MB
    print('Memory usage after optimization is: {:.2f} MB'.format(end_mem))
    print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem) / start_mem))
    return df

sample_feature = reduce_mem_usage(pd.read_csv('data_for_tree.csv'))

Get the names of the continuous feature columns:

continuous_feature_names = [x for x in sample_feature.columns if x not in ['price', 'brand', 'model']]

4.3.2 Linear Regression

Linear regression (Linear Regression) is the simplest regression model: it uses a linear equation, fitted by least squares, to model the relationship between one or more independent variables and the dependent variable.

Training the model amounts to finding suitable weights $w_i$ from the training set, which are then used to produce predictions $Y_{test}$ for the test set $X$.
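Concretely, ordinary least squares fits a weight vector and intercept by minimizing the squared error over the training set (the standard formulation, stated here for reference):

$$
\hat{y} = w^{\top} x + b, \qquad \min_{w,\, b} \sum_{i=1}^{n} \left( y_i - w^{\top} x_i - b \right)^2
$$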

We start by using LinearRegression from the sklearn package as the model for used-car price prediction.

sample_feature = sample_feature.dropna().replace('-', 0).reset_index(drop=True)
sample_feature['notRepairedDamage'] = sample_feature['notRepairedDamage'].astype(np.float32)
train = sample_feature[continuous_feature_names + ['price']]
train_X = train[continuous_feature_names]
train_y = train['price']
 
 
from sklearn.linear_model import LinearRegression
# note: `normalize` was removed in scikit-learn 1.2; on newer versions,
# standardize the features separately (e.g. with StandardScaler) instead
model = LinearRegression(normalize=True)
model = model.fit(train_X, train_y)
"""Inspect the intercept and coefficients (coef) of the trained linear model"""
print('intercept: ' + str(model.intercept_))
sorted(dict(zip(continuous_feature_names, model.coef_)).items(), key=lambda x: x[1], reverse=True)

intercept: -178881.74591832393
[('v_6', 3342612.384537345),
('v_8', 684205.534533214),
('v_9', 178967.94192530424),
('v_7', 35223.07319016895),
('v_5', 21917.550249749802),
('v_3', 12782.03250792227),
('v_12', 11654.925634146672),
('v_13', 9884.194615297649),
('v_11', 5519.182176035517),
('v_10', 3765.6101415594258),
('gearbox', 900.3205339198406),
('fuelType', 353.5206495542567),
('bodyType', 186.51797317460046),
('city', 45.17354204168846),
('power', 31.163045441455335),
('brand_price_median', 0.535967111869784),
('brand_price_std', 0.4346788365040235),
('brand_amount', 0.15308295553300566),
('brand_price_max', 0.003891831020467389),
('seller', -1.2684613466262817e-06),
('offerType', -4.759058356285095e-06),
('brand_price_sum', -2.2430642281682917e-05),
('name', -0.00042591632723759166),
('used_time', -0.012574429533889028),
('brand_price_average', -0.414105722833381),
('brand_price_min', -2.3163823428971835),
('train', -5.392535065078232),
('power_bin', -59.24591853031839),
('v_14', -233.1604256172217),
('kilometer', -372.96600915402496),
('notRepairedDamage', -449.29703564695365),
('v_0', -1490.6790578168238),
('v_4', -14219.648899108111),
('v_2', -16528.55239086934),
('v_1', -42869.43976200439)]

Visualize the relationship between used-car price and a relevant feature:

from matplotlib import pyplot as plt
 
# scatter 50 random samples: true price (black) vs. model prediction (blue)
subsample_index = np.random.randint(low=0, high=len(train_y), size=50)
plt.scatter(train_X['v_9'][subsample_index], train_y[subsample_index], color='black')
plt.scatter(train_X['v_9'][subsample_index], model.predict(train_X.loc[subsample_index]), color='blue')
plt.xlabel('v_9')
plt.ylabel('price')
plt.legend(['True Price', 'Predicted Price'], loc='upper right')
print('The predicted price is obviously different from the true price')
plt.show()

The figure above is a scatter plot of feature v_9 against price. The model's predictions (blue points) differ markedly in distribution from the true labels (black points), and some predicted values fall below 0, which shows the model has problems and needs further tuning.

Distribution of price:

import seaborn as sns
 
# distplot is deprecated in seaborn >= 0.11; histplot/displot are its replacements
print('It is clear to see the price shows a typical exponential distribution')
plt.figure(figsize=(15,5))
plt.subplot(1,2,1)
sns.distplot(train_y)
plt.subplot(1,2,2)
sns.distplot(train_y[train_y < np.quantile(train_y, 0.9)])

As the feature-engineering section showed, price has a long-tailed distribution, which is unfavorable for modeling: many models assume normally distributed error terms, and a long-tailed target violates that assumption.

Apply a log(x+1) transform to the label to bring it closer to a normal distribution.

train_y_ln = np.log(train_y + 1)
print('The transformed price seems like normal distribution')
plt.figure(figsize=(15,5))
plt.subplot(1,2,1)
sns.distplot(train_y_ln)
plt.subplot(1,2,2)
sns.distplot(train_y_ln[train_y_ln < np.quantile(train_y_ln, 0.9)])
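One detail worth flagging: with the label log-transformed, the model's outputs live on the log scale and must be mapped back before they can be read as prices. A minimal sketch using numpy's numerically stable pair log1p/expm1, which are exactly log(x+1) and its inverse:

train_y_ln = np.log1p(train_y)    # equivalent to np.log(train_y + 1)
recovered = np.expm1(train_y_ln)  # inverse transform, back to the price scale
assert np.allclose(recovered, train_y)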

Retrain the model on the transformed label:

model = model.fit(train_X, train_y_ln)
 
print('intercept:'+ str(model.intercept_))
sorted(dict(zip(continuous_feature_names, model.coef_)).items(), key=lambda x:x[1], reverse=True)

intercept:23.515920686637713
[('v_9', 6.043993029165403),
('v_12', 2.0357439855551394),
('v_11', 1.3607608712255672),
('v_1', 1.3079816298861897),
('v_13', 1.0788833838535354),
('v_3', 0.9895814429387444),
('gearbox', 0.009170812023421397),
('fuelType', 0.006447089787635784),
('bodyType', 0.004815242907679581),
('power_bin', 0.003151801949447194),
('power', 0.0012550361843629999),
('train', 0.0001429273782925814),
('brand_price_min', 2.0721302299502698e-05),
('brand_price_average', 5.308179717783439e-06),
('brand_amount', 2.8308531339942507e-06),
('brand_price_max', 6.764442596115763e-07),
('offerType', 1.6765966392995324e-10),
('seller', 9.308109838457312e-12),
('brand_price_sum', -1.3473184925468486e-10),
('name', -7.11403461065247e-08),
('brand_price_median', -1.7608143661053008e-06),
('brand_price_std', -2.7899058266986454e-06),
('used_time', -5.6142735899344175e-06),
('city', -0.0024992974087053223),
('v_14', -0.012754139659375262),
('kilometer', -0.013999175312751872),
('v_0', -0.04553774829634237),
('notRepairedDamage', -0.273686961116076),
('v_7', -0.7455902679730504),
('v_4', -0.9281349233755761),
('v_2', -1.2781892166433606),
('v_5', -1.5458846136756323),
('v_10', -1.8059217242413748),
('v_8', -42.611729973490604),
('v_6', -241.30992120503035)]

Plotting price again shows that the predictions are now close to the true values, with no anomalous results.

4.3.3 Cross-Validation

When training model parameters, people usually divide the full training data into three parts (as with the MNIST handwriting dataset): a training set (train_set), a validation set (valid_set), and a test set (test_set). This split is made deliberately to safeguard training quality. The test set is easy to understand: it is data that never takes part in training and is used only to measure final performance. The roles of the training and validation sets are what the following concerns.

In actual training, a model usually fits its training set quite well (it is sensitive to the initial conditions), but its fit on data outside the training set is often much less satisfactory. We therefore do not train on all of the data; instead we hold out a portion (which takes no part in training) and use it to test the parameters learned from the training set, giving a relatively objective measure of how well those parameters generalize beyond the training data. This idea is called cross-validation (Cross Validation).
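To make the mechanics concrete, here is a minimal sketch of how a 5-fold split rotates the held-out fold, using sklearn's KFold (the evaluation below uses the higher-level cross_val_score instead):

from sklearn.model_selection import KFold

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kf.split(train_X)):
    # each fold trains on 4/5 of the rows and validates on the held-out 1/5
    X_tr, X_val = train_X.iloc[train_idx], train_X.iloc[val_idx]
    y_tr, y_val = train_y.iloc[train_idx], train_y.iloc[val_idx]
    print(f'fold {fold}: {len(train_idx)} train rows, {len(val_idx)} validation rows')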

This article uses 5-fold cross-validation:

from sklearn.model_selection import cross_val_score
from sklearn.metrics import mean_absolute_error, make_scorer
 
def log_transfer(func):
    # wrap a metric so it compares log(y) with log(yhat);
    # nan_to_num guards against taking the log of non-positive predictions
    def wrapper(y, yhat):
        result = func(np.log(y), np.nan_to_num(np.log(yhat)))
        return result
    return wrapper
 
scores = cross_val_score(model, X=train_X, y=train_y, verbose=1, cv=5, scoring=make_scorer(log_transfer(mean_absolute_error)))
print('AVG:', np.mean(scores))

AVG: 1.3641908155886227

scores = cross_val_score(model, X=train_X, y=train_y_ln, verbose=1, cv = 5, scoring=make_scorer(mean_absolute_error))
print('AVG:', np.mean(scores))

AVG: 0.19382863663604424

MAE score for each fold:

scores = pd.DataFrame(scores.reshape(1,-1))
scores.columns = ['cv' + str(x) for x in range(1, 6)]
scores.index = ['MAE']
scores


4.3.4 Simulating the Real Business Scenario

In reality we cannot see the future, so on time-dependent data, 5-fold cross-validation can actually paint an unrealistic picture: predicting 2017 used-car prices from 2018 prices is clearly unreasonable. We can therefore also split the dataset in time order. In this example we take the earliest 4/5 of the samples as the training set and the latest 1/5 as the validation set; the final result differs little from 5-fold cross-validation.

sample_feature = sample_feature.reset_index(drop=True)
split_point = len(sample_feature) // 5 * 4
# split by position so the two sets do not share the boundary row
train = sample_feature.iloc[:split_point].dropna()
val = sample_feature.iloc[split_point:].dropna()
 
train_X = train[continuous_feature_names]
train_y_ln = np.log(train['price'] + 1)
val_X = val[continuous_feature_names]
val_y_ln = np.log(val['price'] + 1)
 
model = model.fit(train_X, train_y_ln)
mean_absolute_error(val_y_ln, model.predict(val_X))

0.19443858353490887
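sklearn also ships a TimeSeriesSplit helper that generalizes this idea into several forward-in-time folds. A minimal sketch, assuming as above that the rows are already ordered by time:

from sklearn.model_selection import TimeSeriesSplit

tscv = TimeSeriesSplit(n_splits=5)
for train_idx, val_idx in tscv.split(train_X):
    # each fold trains strictly on earlier rows and validates on later ones
    m = LinearRegression().fit(train_X.iloc[train_idx], train_y_ln.iloc[train_idx])
    print('val MAE:', mean_absolute_error(train_y_ln.iloc[val_idx], m.predict(train_X.iloc[val_idx])))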

Plot the learning curve and validation curve:

from sklearn.model_selection import learning_curve, validation_curve
 
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None, n_jobs=1, train_size=np.linspace(.1, 1.0, 5)):
    plt.figure()
    plt.title(title)
    if ylim is not None:
        plt.ylim(*ylim)
    plt.xlabel('Training examples')
    plt.ylabel('score')
    train_sizes, train_scores, test_scores = learning_curve(estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_size, scoring=make_scorer(mean_absolute_error))
    train_scores_mean = np.mean(train_scores, axis=1)
    train_scores_std = np.std(train_scores, axis=1)
    test_scores_mean = np.mean(test_scores, axis=1)
    test_scores_std = np.std(test_scores, axis=1)
    plt.grid()  # background grid
    # shade one standard deviation around each mean curve
    plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
                     train_scores_mean + train_scores_std, alpha=0.1,
                     color="r")
    plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
                     test_scores_mean + test_scores_std, alpha=0.1,
                     color="g")
    plt.plot(train_sizes, train_scores_mean, 'o-', color='r',
             label="Training score")
    plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
             label="Cross-validation score")
    plt.legend(loc="best")
    return plt
 
plot_learning_curve(LinearRegression(), 'Linear_model', train_X[:1000], train_y_ln[:1000], ylim=(0.0, 0.5), cv=5, n_jobs=1)

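Note that validation_curve is imported above but never used. As a companion sketch, it plots score against a single hyperparameter rather than against training-set size; the Ridge model and alpha grid here are illustrative choices, not part of the original pipeline:

from sklearn.linear_model import Ridge

param_range = np.logspace(-3, 3, 7)
train_scores, test_scores = validation_curve(
    Ridge(), train_X[:1000], train_y_ln[:1000],
    param_name='alpha', param_range=param_range,
    cv=5, scoring=make_scorer(mean_absolute_error))
plt.semilogx(param_range, test_scores.mean(axis=1), 'o-', label='Cross-validation score')
plt.xlabel('alpha')
plt.ylabel('MAE')
plt.legend(loc='best')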

4.3.5 Comparing Multiple Models

Compare a set of models: sklearn's decision tree, random forest, gradient boosting tree, and multi-layer perceptron (MLP) regressors, plus XGBoost and LightGBM.

from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor
from xgboost.sklearn import XGBRegressor
from lightgbm.sklearn import LGBMRegressor
 
models = [LinearRegression(),
          DecisionTreeRegressor(),
          RandomForestRegressor(),
          GradientBoostingRegressor(),
          MLPRegressor(solver='lbfgs', max_iter=100),
          XGBRegressor(n_estimators=100, objective='reg:squarederror'),
          LGBMRegressor(n_estimators=100)]
 
result = dict()
for model in models:
    model_name = str(model).split('(')[0]
    scores = cross_val_score(model, X=train_X, y=train_y_ln, verbose=0, cv = 5, scoring=make_scorer(mean_absolute_error))
    result[model_name] = scores
    print(model_name + ' is finished')
 
result = pd.DataFrame(result)
result.index = ['cv' + str(x) for x in range(1, 6)]
result

The random forest model performs comparatively well.

4.3.6 Hyperparameter Tuning

Here we introduce three commonly used tuning methods:

## Candidate LightGBM parameter values:
 
objective = ['regression', 'regression_l1', 'mape', 'huber', 'fair']
 
num_leaves = [3, 5, 10, 15, 20, 40, 55]
max_depth = [3, 5, 10, 15, 20, 40, 55]
bagging_fraction = []  # left empty in the original; add candidate values as needed
feature_fraction = []
drop_rate = []

Greedy tuning: tune the parameter with the greatest impact on the model until it is optimal, then move on to the next most influential parameter, and so on until every parameter has been tuned.

best_obj = dict()
for obj in objective:
    model = LGBMRegressor(objective=obj)
    score = np.mean(cross_val_score(model, X=train_X, y=train_y_ln, verbose=0, cv=5, scoring=make_scorer(mean_absolute_error)))
    best_obj[obj] = score

# fix the best objective found above, then tune num_leaves
best_leaves = dict()
for leaves in num_leaves:
    model = LGBMRegressor(objective=min(best_obj.items(), key=lambda x: x[1])[0], num_leaves=leaves)
    score = np.mean(cross_val_score(model, X=train_X, y=train_y_ln, verbose=0, cv=5, scoring=make_scorer(mean_absolute_error)))
    best_leaves[leaves] = score

# fix objective and num_leaves, then tune max_depth
best_depth = dict()
for depth in max_depth:
    model = LGBMRegressor(objective=min(best_obj.items(), key=lambda x: x[1])[0],
                          num_leaves=min(best_leaves.items(), key=lambda x: x[1])[0],
                          max_depth=depth)
    score = np.mean(cross_val_score(model, X=train_X, y=train_y_ln, verbose=0, cv=5, scoring=make_scorer(mean_absolute_error)))
    best_depth[depth] = score
 
# MAE after each greedy step (0.143 is the untuned baseline)
sns.lineplot(x=['0_initial', '1_turning_obj', '2_turning_leaves', '3_turning_depth'],
             y=[0.143, min(best_obj.values()), min(best_leaves.values()), min(best_depth.values())])

GridSearchCV performs grid search, an automated tuning method: feed in the parameter grid and it returns the best score and parameter combination. The method suits small datasets; once the data volume scales up, it struggles to produce a result. It offers little advantage here, since our dataset is large and the search can barely finish, but it is worth writing up because it is often very handy.

from sklearn.model_selection import GridSearchCV
 
parameters = {'objective': objective, 'num_leaves': num_leaves, 'max_depth': max_depth}
model = LGBMRegressor()
clf = GridSearchCV(model, parameters, cv=5)
# note: the search here fits on the raw target; the check below scores on the log target
clf = clf.fit(train_X, train_y)
clf.best_params_

{'max_depth': 15, 'num_leaves': 55, 'objective': 'regression'}

model = LGBMRegressor(objective='regression',
                      num_leaves=55,
                      max_depth=15)
np.mean(cross_val_score(model, X=train_X, y=train_y_ln, verbose=0, cv=5, scoring=make_scorer(mean_absolute_error)))

0.13626164479243302

Bayesian optimization, applied to hyperparameter tuning, works as follows: given an objective function to optimize (a function in the broad sense, where only the inputs and outputs need to be specified, with no knowledge of its internal structure or mathematical properties), it keeps adding sample points to update the posterior distribution of the objective (a Gaussian process) until the posterior closely fits the true function. Simply put, it takes the information from previous parameter evaluations into account when choosing the next parameters to try.

from bayes_opt import BayesianOptimization
 
def rf_cv(num_leaves, max_depth, subsample, min_child_samples):
    # 5-fold MAE for one parameter combination; bayes_opt passes floats,
    # so integer-valued parameters are cast back to int
    val = cross_val_score(
        LGBMRegressor(objective='regression_l1',
            num_leaves=int(num_leaves),
            max_depth=int(max_depth),
            subsample=subsample,
            min_child_samples=int(min_child_samples)
        ),
        X=train_X, y=train_y_ln, verbose=0, cv=5, scoring=make_scorer(mean_absolute_error)
    ).mean()
    # BayesianOptimization maximizes its target, so return 1 - MAE
    return 1 - val
 
rf_bo = BayesianOptimization(
    rf_cv,
    {
        'num_leaves': (2, 100),
        'max_depth': (2, 100),
        'subsample': (0.1, 1),
        'min_child_samples': (2, 100)
    }
)
rf_bo.maximize()

| iter | target | max_depth | min_child_samples | num_leaves | subsample |
|------|--------|-----------|-------------------|------------|-----------|
| 1  | 0.8649 | 89.57 | 47.3  | 55.13 | 0.1792 |
| 2  | 0.8477 | 99.86 | 60.91 | 15.35 | 0.4716 |
| 3  | 0.8698 | 81.74 | 83.32 | 92.59 | 0.9559 |
| 4  | 0.8627 | 90.2  | 8.754 | 43.34 | 0.7772 |
| 5  | 0.8115 | 10.07 | 86.15 | 4.109 | 0.3416 |
| 6  | 0.8701 | 99.15 | 9.158 | 99.47 | 0.494  |
| 7  | 0.806  | 2.166 | 2.416 | 97.7  | 0.224  |
| 8  | 0.8701 | 98.57 | 97.67 | 99.87 | 0.3703 |
| 9  | 0.8703 | 99.87 | 43.03 | 99.72 | 0.9749 |
| 10 | 0.869  | 10.31 | 99.63 | 99.34 | 0.2517 |
| 11 | 0.8703 | 52.27 | 99.56 | 98.97 | 0.9641 |
| 12 | 0.8669 | 99.89 | 8.846 | 66.49 | 0.1437 |
| 13 | 0.8702 | 68.13 | 75.28 | 98.71 | 0.153  |
| 14 | 0.8695 | 84.13 | 86.48 | 91.9  | 0.7949 |
| 15 | 0.8702 | 98.09 | 59.2  | 99.65 | 0.3275 |
| 16 | 0.87   | 68.97 | 98.62 | 98.93 | 0.2221 |
| 17 | 0.8702 | 99.85 | 63.74 | 99.63 | 0.4137 |
| 18 | 0.8703 | 45.87 | 99.05 | 99.89 | 0.3238 |
| 19 | 0.8702 | 79.65 | 46.91 | 98.61 | 0.8999 |
| 20 | 0.8702 | 99.25 | 36.73 | 99.05 | 0.1262 |
| 21 | 0.8702 | 85.51 | 85.34 | 99.77 | 0.8917 |
| 22 | 0.8696 | 99.99 | 38.51 | 89.13 | 0.9884 |
| 23 | 0.8701 | 63.29 | 97.93 | 99.94 | 0.9585 |
| 24 | 0.8702 | 93.04 | 71.42 | 99.94 | 0.9646 |
| 25 | 0.8701 | 99.73 | 16.21 | 99.38 | 0.9778 |
| 26 | 0.87   | 86.28 | 58.1  | 99.47 | 0.107  |
| 27 | 0.8703 | 47.28 | 99.83 | 99.65 | 0.4674 |
| 28 | 0.8703 | 68.29 | 99.51 | 99.4  | 0.2757 |
| 29 | 0.8701 | 76.49 | 73.41 | 99.86 | 0.9394 |
| 30 | 0.8695 | 37.27 | 99.87 | 89.87 | 0.7588 |

1 - rf_bo.max['target']

0.1296693644053145
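To reuse the winning combination, the best parameters can be read back from the optimizer. A minimal sketch (bayes_opt reports every parameter as a float, so the integer-valued ones must be cast back):

best = rf_bo.max['params']
final_model = LGBMRegressor(objective='regression_l1',
                            num_leaves=int(best['num_leaves']),
                            max_depth=int(best['max_depth']),
                            subsample=best['subsample'],
                            min_child_samples=int(best['min_child_samples']))
final_model.fit(train_X, train_y_ln)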
