Competition background: Hands-on Data Mining for Beginners — Used Car Transaction Price Prediction
This article focuses on model selection and hyperparameter tuning. We fit a linear model alongside nonlinear models such as random forest, XGBoost and LightGBM, and then tune the LGB model further with greedy tuning, grid search and Bayesian optimization so that it reflects the data more faithfully.
1. Loading Data and Reducing Memory
Load the dataset for tree models generated in the previous feature-engineering step.
# Read the data
import pandas as pd
import numpy as np
import warnings
warnings.filterwarnings('ignore')
Using the custom reduce_mem_usage function shared by 阿泽, we downcast each column to a narrower dtype, trading precision for memory. After running it, memory usage drops by roughly 74%, a substantial saving.
def reduce_mem_usage(df):
    """ iterate through all the columns of a dataframe and modify the data type
    to reduce memory usage.
    """
    start_mem = df.memory_usage().sum() / 1024**2  # bytes -> MB
    print('Memory usage of dataframe is {:.2f} MB'.format(start_mem))
    for col in df.columns:
        col_type = df[col].dtype
        if col_type != object:
            c_min = df[col].min()
            c_max = df[col].max()
            if str(col_type)[:3] == 'int':
                # downcast integers to the narrowest type that holds the range
                if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
                    df[col] = df[col].astype(np.int8)
                elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
                    df[col] = df[col].astype(np.int16)
                elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
                    df[col] = df[col].astype(np.int32)
                elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
                    df[col] = df[col].astype(np.int64)
            else:
                # downcast floats similarly
                if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
                    df[col] = df[col].astype(np.float16)
                elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
                    df[col] = df[col].astype(np.float32)
                else:
                    df[col] = df[col].astype(np.float64)
        else:
            df[col] = df[col].astype('category')
    end_mem = df.memory_usage().sum() / 1024**2
    print('Memory usage after optimization is: {:.2f} MB'.format(end_mem))
    print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem) / start_mem))
    return df
path = 'C:/Users/Kingfish/Desktop/TianChi/'
sample_feature = reduce_mem_usage(pd.read_csv(path+'newdata_for_tree.csv'))
2. A Simple Linear Regression Model
The article "7 Regression Analysis Methods" covers regression analysis thoroughly, so we skip the background and go straight to the modeling.
For the 'notRepairedDamage' feature, the earlier EDA and feature-engineering steps showed that besides 0 and 1 it also contains '-' as a missing-value marker, which we handle first. We then construct the training features and the label for the model.
continuous_feature_names = [x for x in sample_feature.columns if x not in ['price','model','brand']]
sample_feature = sample_feature.dropna().replace('-', 0).reset_index(drop=True)
sample_feature['notRepairedDamage'] = sample_feature['notRepairedDamage'].astype(np.float32)
train = sample_feature[continuous_feature_names + ['price']]
train_X = train[continuous_feature_names]
train_y = train['price']
# Simple model: linear regression with standardized features; the intercept and
# coefficients can be inspected directly, so the model is highly interpretable.
from sklearn.linear_model import LinearRegression
model = LinearRegression(normalize=True)
model = model.fit(train_X, train_y)
# Inspect the fitted model's intercept and weights (coef)
'intercept:'+ str(model.intercept_)
sorted(dict(zip(continuous_feature_names, model.coef_)).items(), key=lambda x:x[1], reverse=True)
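A caveat on the code above: the normalize argument was deprecated in scikit-learn 1.0 and removed in 1.2. A minimal sketch of the replacement scikit-learn itself recommends, standardizing via a Pipeline (not numerically identical to the old behaviour, which divided by the L2 norm):
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Scale the features, then fit the linear model, as a single estimator
pipe = make_pipeline(StandardScaler(), LinearRegression())
pipe = pipe.fit(train_X, train_y)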
Next, we visualize the predictions to gauge the model's fit.
from matplotlib import pyplot as plt
subsample_index = np.random.randint(low=0, high=len(train_y), size=50)
plt.scatter(train_X['v_9'][subsample_index], train_y[subsample_index], color='black')
plt.scatter(train_X['v_9'][subsample_index], model.predict(train_X.loc[subsample_index]), color='blue')
plt.xlabel('v_9')
plt.ylabel('price')
plt.legend(['True Price','Predicted Price'],loc='upper right')
print('The predicted price is obvious different from true price')
plt.show()
Scatter plot of feature v_9 against the label:
The predictions (blue points) diverge noticeably from the true labels (black points), and some predicted prices are even negative, so the model needs further adjustment. From the EDA we also know that 'price' is better described by an unbounded Johnson distribution or an exponential distribution, with a pronounced long tail.
import seaborn as sns
print('It is clear to see the price shows a typical exponential distribution')
plt.figure(figsize=(15,5))
plt.subplot(1,2,1)
sns.distplot(train_y)
plt.subplot(1,2,2)
sns.distplot(train_y[train_y < np.quantile(train_y, 0.9)])
Here we apply a log(x+1) transform to the label to bring it closer to a normal distribution.
train_y_ln = np.log(train_y + 1)
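For reference, NumPy ships built-ins for exactly this transform and its inverse, which read a little more clearly (and are more accurate for values near zero):
train_y_ln = np.log1p(train_y)   # identical to np.log(train_y + 1)
# price_pred = np.expm1(pred)    # exact inverse, mapping predictions back to prices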
print('The transformed price seems like normal distribution')
plt.figure(figsize=(15,5))
plt.subplot(1,2,1)
sns.distplot(train_y_ln)
plt.subplot(1,2,2)
sns.distplot(train_y_ln[train_y_ln < np.quantile(train_y_ln, 0.9)])
model = model.fit(train_X, train_y_ln)
print('intercept:'+ str(model.intercept_))
sorted(dict(zip(continuous_feature_names, model.coef_)).items(), key=lambda x:x[1], reverse=True)
After the log transform, the distribution of price is much closer to normal, which also matches the model's assumption of normally distributed errors.
# Visualize again: the predictions now track the true values closely, with no anomalies
plt.scatter(train_X['v_9'][subsample_index], train_y[subsample_index], color='black')
plt.scatter(train_X['v_9'][subsample_index], np.exp(model.predict(train_X.loc[subsample_index])) - 1, color='blue')  # exp(x) - 1 inverts log(x + 1)
plt.xlabel('v_9')
plt.ylabel('price')
plt.legend(['True Price','Predicted Price'],loc='upper right')
print('The predicted price seems normal after np.log transforming')
plt.show()
3. Model Performance
3.1 Five-Fold Cross-Validation
# Load the cross-validation helper and the MAE regression metric
from sklearn.model_selection import cross_val_score
from sklearn.metrics import mean_absolute_error, make_scorer
# A decorator that log-transforms both y and yhat before handing them to the
# wrapped metric, so scores on the raw label are comparable to the log-label runs
# (nan_to_num guards against log of non-positive predictions)
def log_transfer(func):
    def wrapper(y, yhat):
        result = func(np.log(y), np.nan_to_num(np.log(yhat)))
        return result
    return wrapper
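A quick sanity check of the decorator on toy values (the numbers here are made up purely for illustration):
log_mae = log_transfer(mean_absolute_error)
print(log_mae(np.array([100., 200.]), np.array([110., 190.])))  # MAE computed in log space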
# Five-fold cross-validation of the linear model on the raw (untransformed) label (MAE ≈ 1.36)
scores = cross_val_score(model, X=train_X, y=train_y, verbose=1, cv=5, scoring=make_scorer(log_transfer(mean_absolute_error)))
print('AVG:', np.mean(scores))
# Five-fold cross-validation on the log-transformed label (MAE ≈ 0.19)
scores = cross_val_score(model, X=train_X, y=train_y_ln, verbose=1, cv = 5, scoring=make_scorer(mean_absolute_error))
print('AVG:', np.mean(scores))
scores = pd.DataFrame(scores.reshape(1,-1))
scores.columns = ['cv' + str(x) for x in range(1, 6)]
scores.index = ['MAE']
scores
Alternatively, split the dataset in time order: the earliest 4/5 of the samples form the training set and the most recent 1/5 the validation set.
sample_feature = sample_feature.reset_index(drop=True)
split_point = len(sample_feature) // 5 * 4
train = sample_feature.iloc[:split_point].dropna()  # iloc excludes row split_point, avoiding overlap with val
val = sample_feature.iloc[split_point:].dropna()
train_X = train[continuous_feature_names]
train_y_ln = np.log(train['price'] + 1)
val_X = val[continuous_feature_names]
val_y_ln = np.log(val['price'] + 1)
model = model.fit(train_X, train_y_ln)
mean_absolute_error(val_y_ln, model.predict(val_X))
#0.19443858353490887
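As an alternative sketch of the same idea, scikit-learn's TimeSeriesSplit evaluates over several expanding "train on the past, validate on the future" folds (this assumes the rows are already sorted by time):
from sklearn.model_selection import TimeSeriesSplit

tscv = TimeSeriesSplit(n_splits=5)
for tr_idx, va_idx in tscv.split(train_X):
    m = LinearRegression().fit(train_X.iloc[tr_idx], train_y_ln.iloc[tr_idx])
    print(mean_absolute_error(train_y_ln.iloc[va_idx], m.predict(train_X.iloc[va_idx])))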
3.2 Learning Curves and Validation Curves
from sklearn.model_selection import learning_curve, validation_curve

def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None, n_jobs=1, train_size=np.linspace(.1, 1.0, 5)):
    plt.figure()
    plt.title(title)
    if ylim is not None:
        plt.ylim(*ylim)
    plt.xlabel('Training examples')
    plt.ylabel('score')
    train_sizes, train_scores, test_scores = learning_curve(
        estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_size,
        scoring=make_scorer(mean_absolute_error))
    train_scores_mean = np.mean(train_scores, axis=1)
    train_scores_std = np.std(train_scores, axis=1)
    test_scores_mean = np.mean(test_scores, axis=1)
    test_scores_std = np.std(test_scores, axis=1)
    plt.grid()  # background grid
    # shade one standard deviation around each mean curve
    plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
                     train_scores_mean + train_scores_std, alpha=0.1,
                     color="r")
    plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
                     test_scores_mean + test_scores_std, alpha=0.1,
                     color="g")
    plt.plot(train_sizes, train_scores_mean, 'o-', color='r',
             label="Training score")
    plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
             label="Cross-validation score")
    plt.legend(loc="best")
    return plt
plot_learning_curve(LinearRegression(), 'Linear_model', train_X[:1000], train_y_ln[:1000], ylim=(0.0, 0.5), cv=5, n_jobs=1)
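Note that validation_curve is imported above but never used in the original snippet. As a hedged sketch of how it complements the learning curve, here it sweeps a single hyperparameter (Ridge's alpha, an illustrative choice) and plots training versus cross-validation MAE:
from sklearn.linear_model import Ridge

param_range = np.logspace(-3, 3, 7)
train_scores, test_scores = validation_curve(
    Ridge(), train_X[:1000], train_y_ln[:1000],
    param_name='alpha', param_range=param_range,
    cv=5, scoring=make_scorer(mean_absolute_error))
plt.semilogx(param_range, np.mean(train_scores, axis=1), 'o-', label='Training MAE')
plt.semilogx(param_range, np.mean(test_scores, axis=1), 'o-', label='Cross-validation MAE')
plt.xlabel('alpha')
plt.ylabel('MAE')
plt.legend(loc='best')
plt.show()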
4. Comparing Multiple Models
train = sample_feature[continuous_feature_names + ['price']].dropna()
train_X = train[continuous_feature_names]
train_y = train['price']
train_y_ln = np.log(train_y + 1)
4.1 Embedded Feature Selection
Following the principle that "entities should not be multiplied without necessity", feature selection is called for. Ridge and Lasso embed it directly into training via their regularization penalties.
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Ridge
from sklearn.linear_model import Lasso
models = [LinearRegression(),
          Ridge(),
          Lasso()]
result = dict()
for model in models:
    model_name = str(model).split('(')[0]
    scores = cross_val_score(model, X=train_X, y=train_y_ln, verbose=0, cv=5, scoring=make_scorer(mean_absolute_error))
    result[model_name] = scores
    print(model_name + ' is finished')
result = pd.DataFrame(result)
result.index = ['cv' + str(x) for x in range(1, 6)]
result
Comparing the fitted coefficients:
model = LinearRegression().fit(train_X, train_y_ln)
print('intercept:'+ str(model.intercept_))
sns.barplot(abs(model.coef_), continuous_feature_names)
model = Ridge().fit(train_X, train_y_ln)
print('intercept:'+ str(model.intercept_))
sns.barplot(abs(model.coef_), continuous_feature_names)
model = Lasso().fit(train_X, train_y_ln)
print('intercept:'+ str(model.intercept_))
sns.barplot(abs(model.coef_), continuous_feature_names)
In the figure, the top-left panel is plain linear regression, the middle is L2 ridge regression, and the bottom-right is Lasso. As regularization theory predicts, adding a parameter penalty lowers model complexity: ridge shrinks the coefficients toward zero, while Lasso can set them exactly to zero, which makes it effective against overfitting.
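A quick check makes the Lasso sparsity concrete by counting the coefficients it shrinks exactly to zero (a small illustrative snippet, not from the original):
lasso = Lasso().fit(train_X, train_y_ln)
print('{} of {} coefficients are exactly zero'.format(
    int(np.sum(lasso.coef_ == 0)), len(lasso.coef_)))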
4.2 Comparing Nonlinear Models
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor
from xgboost.sklearn import XGBRegressor
from lightgbm.sklearn import LGBMRegressor
models = [LinearRegression(),
DecisionTreeRegressor(),
RandomForestRegressor(),
GradientBoostingRegressor(),
MLPRegressor(solver='lbfgs', max_iter=100),
XGBRegressor(n_estimators = 100, objective='reg:squarederror'),
LGBMRegressor(n_estimators = 100)]
result = dict()
for model in models:
    model_name = str(model).split('(')[0]
    scores = cross_val_score(model, X=train_X, y=train_y_ln, verbose=0, cv=5, scoring=make_scorer(mean_absolute_error))
    result[model_name] = scores
    print(model_name + ' is finished')
result = pd.DataFrame(result)
result.index = ['cv' + str(x) for x in range(1, 6)]
result
5. Tuning LightGBM
The main tunable LightGBM parameters fall into several groups: core parameters, learning-control parameters, IO parameters, objective parameters, and metric parameters.
# Candidate values for the LGB parameters
objective = ['regression', 'regression_l1', 'mape', 'huber', 'fair']
num_leaves = [3,5,10,15,20,40, 55]
max_depth = [3,5,10,15,20,40, 55]
bagging_fraction = []
feature_fraction = []
drop_rate = []
Parameter 1: objective [default=regression], the learning task.
'regression': regression with L2 loss.
'regression_l1': regression with L1 loss.
'huber': regression with Huber loss.
'fair': regression with Fair loss.
'binary': binary classification, with log loss as the objective.
'multiclass': multi-class classification, with a softmax objective; num_class must be set.
'multiclassova': multi-class classification using one-vs-all binary objectives; num_class must be set.
'xentropy': a cross-entropy objective (with optional linear weights); labels must be values in [0, 1].
'xentlambda': an alternative parameterization of cross-entropy; labels must be values in [0, 1].
Parameter 2: num_leaves [default=31], the number of leaves per tree.
Parameter 3: max_depth [default=-1], the maximum tree depth; a value below 0 means no limit.
Parameter 4: feature_fraction [default=1.0]; if below 1.0, LightGBM randomly selects that fraction of features on each iteration, e.g. 0.8 means 80% of the features are sampled before training each tree.
Parameter 5: bagging_fraction [default=1.0]; analogous to feature_fraction, but it randomly selects a fraction of the data rows, without resampling.
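Putting these together, a minimal sketch of an LGBMRegressor configured with the parameters above (the values are illustrative, not tuned results; note that bagging_fraction only takes effect when bagging_freq is set to a positive value):
model = LGBMRegressor(objective='regression',
                      num_leaves=31,         # default
                      max_depth=-1,          # no depth limit
                      feature_fraction=0.8,  # sample 80% of the features per tree
                      bagging_fraction=0.8,  # sample 80% of the rows...
                      bagging_freq=1)        # ...re-bagged at every iteration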
5.1 Greedy Tuning
best_obj = dict()
for obj in objective:
    model = LGBMRegressor(objective=obj)
    score = np.mean(cross_val_score(model, X=train_X, y=train_y_ln, verbose=0, cv=5, scoring=make_scorer(mean_absolute_error)))
    best_obj[obj] = score

best_leaves = dict()
for leaves in num_leaves:
    model = LGBMRegressor(objective=min(best_obj.items(), key=lambda x: x[1])[0], num_leaves=leaves)
    score = np.mean(cross_val_score(model, X=train_X, y=train_y_ln, verbose=0, cv=5, scoring=make_scorer(mean_absolute_error)))
    best_leaves[leaves] = score

best_depth = dict()
for depth in max_depth:
    model = LGBMRegressor(objective=min(best_obj.items(), key=lambda x: x[1])[0],
                          num_leaves=min(best_leaves.items(), key=lambda x: x[1])[0],
                          max_depth=depth)
    score = np.mean(cross_val_score(model, X=train_X, y=train_y_ln, verbose=0, cv=5, scoring=make_scorer(mean_absolute_error)))
    best_depth[depth] = score
sns.lineplot(x=['0_initial','1_turning_obj','2_turning_leaves','3_turning_depth'], y=[0.143 ,min(best_obj.values()), min(best_leaves.values()), min(best_depth.values())])
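To reuse the greedy winners downstream: each dictionary maps a candidate value to its CV MAE, so the key with the smallest value is the winner (a small sketch):
best = {'objective': min(best_obj, key=best_obj.get),
        'num_leaves': min(best_leaves, key=best_leaves.get),
        'max_depth': min(best_depth, key=best_depth.get)}
print(best)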
5.2 Grid Search
from sklearn.model_selection import GridSearchCV
parameters = {'objective': objective , 'num_leaves': num_leaves, 'max_depth': max_depth}
model = LGBMRegressor()
clf = GridSearchCV(model, parameters, cv=5)
clf = clf.fit(train_X, train_y)
clf.best_params_
#{'max_depth': 15, 'num_leaves': 55, 'objective': 'regression'}
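One caveat: without a scoring argument, GridSearchCV ranks parameter sets by the estimator's default score (R² for regressors) rather than the MAE used elsewhere in this article. A hedged variant that tunes on MAE directly:
# greater_is_better=False flips the sign so GridSearchCV can maximize it
clf = GridSearchCV(model, parameters, cv=5,
                   scoring=make_scorer(mean_absolute_error, greater_is_better=False))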
5.3 Bayesian Optimization
#pip install bayesian-optimization
from bayes_opt import BayesianOptimization
def rf_cv(num_leaves, max_depth, subsample, min_child_samples):
    val = cross_val_score(
        LGBMRegressor(objective='regression_l1',
                      num_leaves=int(num_leaves),
                      max_depth=int(max_depth),
                      subsample=subsample,
                      min_child_samples=int(min_child_samples)),
        X=train_X, y=train_y_ln, verbose=0, cv=5, scoring=make_scorer(mean_absolute_error)
    ).mean()
    # BayesianOptimization maximizes its target, so return 1 - MAE
    return 1 - val
rf_bo = BayesianOptimization(
    rf_cv,
    {
        'num_leaves': (2, 100),
        'max_depth': (2, 100),
        'subsample': (0.1, 1),
        'min_child_samples': (2, 100)
    }
)
rf_bo.maximize()  # run the search; without this call rf_bo.max holds no result
1 - rf_bo.max['target']  # convert the maximized target back to CV MAE
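The winning parameter set itself is stored alongside the target value:
print(rf_bo.max['params'])  # e.g. {'max_depth': ..., 'min_child_samples': ..., ...}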
Tuning methods, summarized
Link: XGBoost & LightGBM tuning guide.
6. Summary
Model training should put faithfulness first: the goal is to reflect the relationship between the training data and the output as objectively as possible.