Used-Car Price Prediction in Practice (Part 2): Feature Engineering

Feature Engineering Methods

Feature: in machine learning, a feature is one of a set of attributes that describe a phenomenon; once such an attribute is expressed in a measurable form, it is called a feature.
Feature engineering: the process of creating new features from the existing features of a dataset.

Numerical Features

Numerical data carries real measurement meaning and comes in continuous (height, weight, etc.) and discrete (counts, etc.) forms. Eight common processing methods:
1. Truncation: values outside a reasonable range are most likely noise and should be truncated (for long-tailed data, apply a log transform first, then truncate).
2. Binarization.
3. Binning: fixed-width binning, quantile binning, model-based binning.
4. Scaling: standardization, min-max, max-absolute, L1/L2-norm, square root, log, Box-Cox.
5. Missing-value handling: fill with the mean or the median, or predict the missing values with a model.
6. Feature crosses: add/subtract/multiply/divide pairs of numerical variables; FM and FFM models.
7. Nonlinear encoding: polynomial and Gaussian kernels, encoding the leaf nodes of a random forest and feeding them to a linear model, genetic algorithms, locally linear embedding, spectral embedding, t-SNE.
8. Row statistics: e.g., counts of NaNs, zeros, positive and negative values, plus mean, variance, min, max, skewness, and kurtosis.
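A minimal sketch of two of the methods above, quantile binning and standardization, on a toy "power" column (synthetic data, not the competition dataset):

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
df = pd.DataFrame({'power': rng.integers(40, 400, size=100)})

# Quantile binning: 4 buckets with (roughly) equal counts per bucket
df['power_bin'] = pd.qcut(df['power'], q=4, labels=False)

# Standardization: zero mean, unit variance
df['power_scaled'] = StandardScaler().fit_transform(df[['power']])

print(df['power_bin'].value_counts().sort_index())
print(df['power_scaled'].mean(), df['power_scaled'].std(ddof=0))
```

Quantile binning is usually preferable to fixed-width binning on skewed columns, since every bucket ends up with comparable support.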
Box-Cox transform
The ordinary linear model assumes Y = Xβ + ε with ε normally distributed, but when fitting on real data some coefficients fail their significance tests: the unobserved error ε can be correlated with the predictors and non-normal, which biases the least-squares coefficient estimates. To make the model satisfy linearity, independence, homoscedasticity, and normality, the data are reshaped with the Box-Cox transform.
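A small sketch of the Box-Cox transform on a right-skewed (log-normal) sample; `scipy.stats.boxcox` fits the lambda that brings the result closest to normal. Synthetic data; note Box-Cox requires strictly positive inputs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.lognormal(mean=8, sigma=1.0, size=1000)   # heavily right-skewed, like prices

x_bc, lam = stats.boxcox(x)                        # x_bc: transformed data, lam: fitted lambda
print('fitted lambda:', round(lam, 3))
print('skew before:', round(stats.skew(x), 2), 'after:', round(stats.skew(x_bc), 2))
```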

Categorical Features

Categorical values may be stored as numbers (e.g., 1 for male, 0 for female), but those numbers carry no numerical meaning.
1. Natural-number (label) encoding: assign each category an integer. Shuffling the category-to-integer mapping and ensembling several models trained on different mappings can further improve results. (Downside: simple models tend to underfit, complex models tend to overfit.)
2. One-hot encoding: one dimension per category value, yielding a sparse feature matrix. (Downside: sparsity.)
3. Hierarchical encoding: for values like postcodes or ID numbers, slice different digit positions into levels and label-encode each level.
4. Crosses (category x category): Cartesian products, grouped statistics.
5. Crosses (category x numeric): e.g., a product's sales volume, price, or average price gap within a region.
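A minimal sketch of methods 1, 2, and 4 above on a toy frame (the column names are illustrative, not the competition columns):

```python
import pandas as pd

df = pd.DataFrame({'fuelType': ['gas', 'diesel', 'gas', 'electric'],
                   'gearbox':  ['manual', 'auto', 'auto', 'auto']})

# 1. Natural-number (label) encoding
df['fuelType_le'] = df['fuelType'].astype('category').cat.codes

# 2. One-hot encoding: one 0/1 column per category value
onehot = pd.get_dummies(df['fuelType'], prefix='fuelType')

# 4. Cartesian-product cross feature (category x category)
df['fuel_gear'] = df['fuelType'] + '_' + df['gearbox']

print(pd.concat([df, onehot], axis=1))
```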

Feature Selection

The constructions above can easily yield hundreds of features, many of them redundant; moreover, with limited time and compute it is impossible to train on all of them. We therefore want a smaller feature set that captures roughly the same information, which also improves generalization and lowers the risk of overfitting. The most common approaches are correlation coefficients and model-reported feature importances.
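A hedged sketch of the correlation-coefficient filter just mentioned: rank features by absolute correlation with the target and keep the top k. Synthetic data here; in practice this would run on the engineered frame.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({'f1': rng.normal(size=n),
                  'f2': rng.normal(size=n),
                  'noise': rng.normal(size=n)})   # 'noise' is unrelated to the target
y = 3 * X['f1'] - 2 * X['f2'] + rng.normal(scale=0.1, size=n)

# |Pearson corr| with the target, sorted descending
corr = X.apply(lambda col: col.corr(y)).abs().sort_values(ascending=False)
top_features = corr.head(2).index.tolist()
print(corr)
print('selected:', top_features)
```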

Code in Practice

Imports

# -*- coding: utf-8 -*-
# common imports
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import missingno as msno
from operator import itemgetter
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
import warnings
warnings.filterwarnings('ignore')
#from sklearn.ensemble import RandomForestClassifier,RandomForestRegressor
pd.set_option('display.max_rows', None)    # show all rows
pd.set_option('display.max_columns', None) # show all columns

Read the data

train_data=pd.read_csv('./used_car_data/used_car_train_20200313.csv',sep=' ')
test_data=pd.read_csv('./used_car_data/used_car_testA_20200313.csv',sep=' ')
print('train_data.shape:',train_data.shape)
print('test_data.shape:',test_data.shape)

Drop useless features

Based on the conclusions from the earlier data exploration, handle the corresponding features first.

train_data.drop(['seller','offerType'],axis=1,inplace=True)
test_data.drop(['seller','offerType'],axis=1,inplace=True)
train_data['notRepairedDamage'].replace('-', np.nan, inplace=True)
test_data['notRepairedDamage'].replace('-', np.nan, inplace=True)
train_data=train_data[train_data['model'].notna()]  # drop rows with missing model

Missing-value imputation

In a competition it is best to impute missing values or treat them separately. Dropping them shrinks the training set, and it cannot handle missing values in the test set at all. Here, dropping every row with a missing value would leave only 21% of the data.

train_data_droped=train_data.dropna()
num_rows_lost=round(100*(train_data.shape[0]-train_data_droped.shape[0])/float(train_data.shape[0]))
print('percent of rows lost: {}%'.format(num_rows_lost))

Fill categorical columns with the mode

from sklearn.base import TransformerMixin
class CustomCategoryImputer(TransformerMixin):
    def __init__(self, cols=None):
        self.cols=cols      
    def transform(self, df):
        X=df.copy()
        for col in self.cols:
            X[col].fillna(X[col].value_counts().index[0],inplace=True)           
        return X  
    def fit(self, *_):
        return self   
cci=CustomCategoryImputer(cols=['bodyType','fuelType','gearbox','notRepairedDamage'])
train_data_cci_imputed=cci.fit_transform(train_data)
cols=['bodyType','fuelType','gearbox','notRepairedDamage']
for col in cols:
    test_data[col].fillna(train_data[col].value_counts().index[0],inplace=True)

Fix anomalous values

Power values of 0 or above 600 are set to NaN and then imputed with IterativeImputer.

train_data_cci_imputed['power'].replace(0,np.nan,inplace=True)
train_data_cci_imputed.loc[train_data_cci_imputed['power']>600,'power']=np.nan
# training set
train_data_cut=train_data_cci_imputed.loc[:,['brand','bodyType','fuelType','gearbox','power']]
it_imputer=IterativeImputer(max_iter=10, random_state=0)
train_data_it_imputed=pd.DataFrame(it_imputer.fit_transform(train_data_cut),columns=train_data_cut.columns)    
train_data_cci_imputed['power']=train_data_it_imputed['power']   
# test set
test_data['power'].replace(0,np.nan,inplace=True)
test_data.loc[test_data['power']>600,'power']=np.nan
test_data['power']=test_data['power'].fillna(-1)
test_data_fillna=test_data[test_data['power'].isin([-1])]
# combine train and test power info for joint imputation
train_data_power_info=train_data_cci_imputed.loc[:,['SaleID','brand','bodyType','fuelType','gearbox','power']]
test_data_power_info=test_data.loc[:,['SaleID','brand','bodyType','fuelType','gearbox','power']]
power_info=pd.concat([train_data_power_info,test_data_power_info],axis=0)
power_info.reset_index(drop=True,inplace=True)
power_info_cut=power_info.loc[:,['brand','bodyType','fuelType','gearbox','power']]
power_info_cut['power'].replace(-1,np.nan,inplace=True)
it_imputer=IterativeImputer(max_iter=10, random_state=0)
power_info_it_imputed=pd.DataFrame(it_imputer.fit_transform(power_info_cut),columns=power_info_cut.columns)
power_info_it_imputed['SaleID']=power_info['SaleID']
power_info_it_imputed.rename(columns={'power':'fixpower'},inplace=True)
power_info_it_imputed=power_info_it_imputed.loc[:,['SaleID','fixpower']]
test_data=pd.merge(test_data,power_info_it_imputed,on='SaleID',how='left')
test_data['power']=test_data['power'].mask(test_data['power']==-1,test_data['fixpower'])    
del test_data['fixpower']

Feature transforms

Apply a log transform to price and power so their distributions are more concentrated. From the baseline we know regionCode is a German postcode whose last 3 digits encode the city. Registration months recorded as 00 are filled with the modal month of the same registration year, and the difference between the registration date and the listing date gives the vehicle's usage duration (in days, months, and years).

train_data_cci_imputed['log_price']=np.log(train_data_cci_imputed['price'])
train_data_cci_imputed['log_power']=np.log(train_data_cci_imputed['power'])
df_train=train_data_cci_imputed.copy()
df_test=test_data.copy()
# German postcode: last 3 digits encode the city
df_train['city']=df_train['regionCode'].apply(lambda x : str(x)[-3:])
df_test['city']=df_test['regionCode'].apply(lambda x : str(x)[-3:])
# registration year / month / day
df_train['regyear']=df_train['regDate'].apply(lambda x : str(x)[:4])
df_test['regyear']=df_test['regDate'].apply(lambda x : str(x)[:4])
df_train['regmonth']=df_train['regDate'].apply(lambda x : str(x)[4:6])
df_test['regmonth']=df_test['regDate'].apply(lambda x : str(x)[4:6])
df_train['regday']=df_train['regDate'].apply(lambda x : str(x)[6:8])
df_test['regday']=df_test['regDate'].apply(lambda x : str(x)[6:8])
df_train['regmonth'].replace('00',np.nan,inplace=True)
df_test['regmonth'].replace('00',np.nan,inplace=True)
# fix registration month == 00 with the year's modal month
year_month_info=pd.DataFrame()
year_month_info=df_train.groupby('regyear')['regmonth'].agg(lambda x: x.value_counts().index[0]).reset_index()
year_month_info.rename(columns={'regmonth':'fixregmonth'},inplace=True)
df_train['regmonth'].replace(np.nan,'00',inplace=True)
df_test['regmonth'].replace(np.nan,'00',inplace=True)
df_train=pd.merge(df_train,year_month_info,on='regyear',how='left')
df_train['regmonth']=df_train['regmonth'].mask(df_train['regmonth']=='00',df_train['fixregmonth'])
del df_train['fixregmonth']  
df_test=pd.merge(df_test,year_month_info,on='regyear',how='left')
df_test['regmonth']=df_test['regmonth'].mask(df_test['regmonth']=='00',df_test['fixregmonth'])
del df_test['fixregmonth']  
# usage duration (in days, months, years)
df_train['fix_regdate']=df_train['regyear']+df_train['regmonth']+df_train['regday']
df_test['fix_regdate']=df_test['regyear']+df_test['regmonth']+df_test['regday']
df_train['used_day']=(pd.to_datetime(df_train['creatDate'], format='%Y%m%d', errors='coerce') - 
                            pd.to_datetime(df_train['fix_regdate'], format='%Y%m%d', errors='coerce')).dt.days
df_test['used_day']=(pd.to_datetime(df_test['creatDate'], format='%Y%m%d', errors='coerce') - 
                            pd.to_datetime(df_test['fix_regdate'], format='%Y%m%d', errors='coerce')).dt.days
train_creat=pd.to_datetime(df_train['creatDate'], format='%Y%m%d', errors='coerce')
train_reg=pd.to_datetime(df_train['fix_regdate'], format='%Y%m%d', errors='coerce')
test_creat=pd.to_datetime(df_test['creatDate'], format='%Y%m%d', errors='coerce')
test_reg=pd.to_datetime(df_test['fix_regdate'], format='%Y%m%d', errors='coerce')
df_train['used_year']=train_creat.dt.year-train_reg.dt.year
df_test['used_year']=test_creat.dt.year-test_reg.dt.year
df_train['used_month']=df_train['used_year']*12+(train_creat.dt.month-train_reg.dt.month)
df_test['used_month']=df_test['used_year']*12+(test_creat.dt.month-test_reg.dt.month)
df_train.drop(['regDate','regionCode','creatDate','regyear','regmonth','regday','fix_regdate'],axis=1,inplace=True)
df_test.drop(['regDate','regionCode','creatDate','regyear','regmonth','regday','fix_regdate'],axis=1,inplace=True)

Feature expansion

Expand the feature set with per-group statistics.

# price statistics grouped by several categorical keys;
# the seven near-identical blocks collapse into one loop
df_train2 = df_train.copy()
df_test2 = df_test.copy()
stat_keys = ['brand', 'city', 'used_month', 'model', 'bodyType', 'fuelType', 'gearbox']
for key in stat_keys:
    gb = df_train[df_train['price'] > 0].groupby(key)['price']
    stats = pd.DataFrame({
        '%s_amount' % key: gb.size(),
        '%s_price_max' % key: gb.max(),
        '%s_price_median' % key: gb.median(),
        '%s_price_min' % key: gb.min(),
        '%s_price_sum' % key: gb.sum(),
        '%s_price_std' % key: gb.std(),
    })
    # +1 in the denominator gives light smoothing
    stats['%s_price_average' % key] = (stats['%s_price_sum' % key]
                                       / (stats['%s_amount' % key] + 1)).round(2)
    stats = stats.reset_index()
    df_train2 = df_train2.merge(stats, how='left', on=key)
    df_test2 = df_test2.merge(stats, how='left', on=key)
# nunique cross-counts between brand and the other categorical features
brand_nuq=['model','bodyType','fuelType','gearbox']
for fea in brand_nuq:
    gp1 = df_train.groupby('brand')[fea].nunique().reset_index().rename(columns={fea: "brand_%s_nuq_num" % fea})
    gp2 = df_train.groupby(fea)['brand'].nunique().reset_index().rename(columns={'brand': "%s_brand_nuq_num" % fea})
    df_train2 = pd.merge(df_train2, gp1, how='left', on=['brand'])
    df_train2 = pd.merge(df_train2, gp2, how='left', on=[fea])    

Feature selection

Filter method:

def corr_heatmap(data):
    correlations = data.corr()
    # diverging color map centered at 0
    cmap = sns.diverging_palette(220, 10, as_cmap=True)
    fig, ax = plt.subplots(figsize=(10, 10))
    sns.heatmap(correlations, cmap=cmap, vmax=1.0, center=0, fmt='.2f',
                square=True, linewidths=.5, annot=True, cbar_kws={"shrink": .75})
    plt.show()

df_train2_corr2=df_train2.loc[:,['log_price','used_month_amount','used_month_price_max','used_month_price_median','used_month_price_min',
                                 'used_month_price_sum','used_month_price_std','used_month_price_average','model_amount','model_price_max',
                                 'model_price_median','model_price_min','model_price_sum','model_price_std','model_price_average']]
corr_heatmap(df_train2_corr2)

Embedded method:

import xgboost as xgb
from xgboost import plot_importance
from sklearn.feature_selection import SelectFromModel
# train_X / train_y: the prepared feature matrix and target (e.g., log_price)
model=xgb.XGBRegressor(max_depth=5, learning_rate=0.1, n_estimators=160, objective='reg:gamma')
model.fit(train_X, train_y)
plot_importance(model)
plt.show()
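SelectFromModel is imported above but never used; a hedged sketch of how it could be plugged in, shown with the RandomForestRegressor from the commented-out import and synthetic data standing in for train_X / train_y:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectFromModel

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
# only the first two columns are informative
y = 4 * X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.1, size=300)

# keep features whose importance is at or above the median importance
selector = SelectFromModel(RandomForestRegressor(n_estimators=50, random_state=0),
                           threshold='median')
X_selected = selector.fit_transform(X, y)
print('kept mask:', selector.get_support())
print('shape:', X.shape, '->', X_selected.shape)
```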


Tool recommendation

While working on the feature engineering I came across Featuretools, an open-source library for automated feature engineering, and learned its basics. It aims to speed up the feature-generation process so more time can go into the other parts of building a machine learning model.
Featuretools has three main components:

  • Entities
  • Deep Feature Synthesis
  • Feature primitives

Summary

This pass gave me a systematic look at feature engineering theory and a chance to put it into practice. Along the way I picked up IterativeImputer and the mask function (both new to me), as well as PolynomialFeatures and the automated feature-engineering tool Featuretools. On the Python side there is still room to grow: I am not yet comfortable with pipelines, and my code is not very efficient.

