Data Mining Training Camp - Data Analysis Study Notes

These notes are study material from the Alibaba Cloud Tianchi "Longzhu Plan" Data Mining Training Camp. Link:

AI Training Camp: Data Mining - Alibaba Cloud Tianchi

2. EDA - Exploratory Data Analysis

Tip: This is the Task 2 "EDA - Exploratory Data Analysis" part of the beginner-level data mining course. It helps you get to know the data, become familiar with it, and "make friends" with it. Further discussion is welcome.

Competition: Data Mining for Beginners - Used Car Transaction Price Prediction

Link: Data Mining for Beginners - Used Car Transaction Price Prediction (learning competition, Tianchi, Alibaba Cloud)

2.1 EDA Goals

  • The main value of EDA is to become familiar with the dataset, understand it, and verify that it can actually be used for the machine learning or deep learning work that follows.

  • Once the dataset is understood, the next step is to explore the relationships among the variables and between the variables and the target value.

  • EDA guides the data processing and feature engineering that follow, so that the structure of the dataset and the feature set make the prediction problem more reliable.

  • Finish the exploratory analysis, summarize it in charts or text, and submit it as the check-in for this task.

2.2 Content Overview

  1. Load the data science and visualization libraries:
    • Data science libraries: pandas, numpy, scipy;
    • Visualization libraries: matplotlib, seaborn;
    • Others;
  2. Load the data:
    • Load the training set and the test set;
    • Take a quick look at the data (head() + shape);
  3. Data overview:
    • Use describe() to get the summary statistics
    • Use info() to check the data types
  4. Check for missing values and anomalies
    • Count the NaN values in each column
    • Outlier detection
  5. Understand the distribution of the target value
    • Overall distribution (unbounded Johnson distribution, etc.)
    • Check skewness and kurtosis
    • Check the frequency counts of the target value
  6. Split the features into categorical and numeric features, and check the unique-value distribution of the categorical features
  7. Numeric feature analysis
    • Correlation analysis
    • Check the skewness and kurtosis of several features
    • Visualize the distribution of each numeric feature
    • Visualize the pairwise relationships between numeric features
    • Visualize multivariate regression relationships
  8. Categorical feature analysis
    • Unique-value distribution
    • Box plots of the categorical features
    • Violin plots of the categorical features
    • Bar plots of the categorical features
    • Frequency counts of each category (count_plot)
  9. Generate a data report with ydata_profiling

2.3 Code Examples

2.3.1 Load the data science and visualization libraries

All of the libraries below are installed with pip install; any special cases will be noted separately, e.g. pip install pandas -i https://mirrors.aliyun.com/pypi/simple (the Aliyun mirror index shown in the output below).

!pip install missingno

(pip output: Looking in indexes: https://mirrors.aliyun.com/pypi/simple — missingno 0.4.2 and all of its dependencies, scipy, matplotlib, numpy, seaborn, pandas, etc., were already satisfied)

#coding:utf-8
# Import the warnings package and use a filter to ignore warning messages.
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import missingno as msno

2.3.2 Load the Data

path = './data/'
## 1) Load the training set and the test set
Train_data = pd.read_csv(path+'train.csv', sep=' ')
Test_data = pd.read_csv(path+'testA.csv', sep=' ')

All features have been anonymized (for easier viewing):

  • name - vehicle code
  • regDate - vehicle registration date
  • model - model code
  • brand - brand
  • bodyType - body type
  • fuelType - fuel type
  • gearbox - gearbox
  • power - engine power
  • kilometer - kilometers driven
  • notRepairedDamage - whether the car has unrepaired damage
  • regionCode - region code for viewing the car
  • seller - seller type
  • offerType - offer type
  • creatDate - date the ad was posted
  • price - car price
  • 'v_0', 'v_1', 'v_2', 'v_3', 'v_4', 'v_5', 'v_6', 'v_7', 'v_8', 'v_9', 'v_10', 'v_11', 'v_12', 'v_13', 'v_14' (embedding vectors derived from large amounts of information such as car reviews and tags) [hand-crafted anonymous features]

## 2) Take a quick look at the data (head() + tail() + shape)
Train_data.head().append(Train_data.tail())

[4]:

(output: the first 5 and last 5 rows of Train_data — columns SaleID, name, regDate, model, brand, bodyType, fuelType, gearbox, power, kilometer, ..., v_5 to v_14)

10 rows × 31 columns

Train_data.shape

[5]:

(150000, 31)
Test_data.head().append(Test_data.tail())

[6]:

(output: the first 5 and last 5 rows of Test_data — the same columns as the training set, without price)

10 rows × 30 columns

Test_data.shape

[7]:

(50000, 30)

Get into the habit of checking head() and shape of the dataset at every step; it gives you confidence in each operation and helps avoid a chain of later errors. If you are not sure about a pandas operation, run it one step at a time and inspect the result — it is an effective way to understand what each function does.
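Note: newer pandas releases (2.0 and later) have removed DataFrame.append, so the head()+tail() preview above will raise an error there. A minimal equivalent sketch using pd.concat (assuming only the imports above):

# head()+tail() preview that also works on pandas >= 2.0, where DataFrame.append was removed
preview = pd.concat([Train_data.head(), Train_data.tail()])
print(preview.shape)      # (10, 31)
print(Train_data.shape)   # (150000, 31)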

2.3.3 Data Overview

  1. describe() gives the summary statistics of every column: count, mean, std, min, the 25%/50%/75% quantiles, and max. This is mainly to get a quick sense of the range of the data and to spot suspicious values in each column — values such as 9999999 or -1 sometimes turn out to be just another way of encoding NaN, which is worth watching for (see the sketch after this list).
  2. info() shows the dtype of every column, which helps find columns that contain special symbols other than NaN, i.e. anomalous values hiding inside object-typed features.
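As mentioned in point 1, values such as 9999999 or -1 are sometimes NaN in disguise. A minimal sketch of turning such sentinel values into real NaN; the concrete values below are only an illustration and must be taken from what describe() actually shows in your data:

# hypothetical sentinel values; choose them from what describe() reveals for your columns
sentinel_values = [-1, 9999999]
Train_data_clean = Train_data.replace(sentinel_values, np.nan)
print(Train_data_clean.isnull().sum().sort_values(ascending=False).head())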
## 1) Use describe() to get familiar with the summary statistics
Train_data.describe()

[8]:

(output: describe() of Train_data — count, mean, std, min, 25%/50%/75% quantiles and max for each of its 30 numeric columns)

8 rows × 30 columns

Test_data.describe()

[9]:

(output: describe() of Test_data — the same summary statistics for its 29 numeric columns)

8 rows × 29 columns

## 2) Use info() to get familiar with the data types
Train_data.info()
<class 'pandas.core.frame.DataFrame'>

RangeIndex: 150000 entries, 0 to 149999

Data columns (total 31 columns):

SaleID               150000 non-null int64

name                 150000 non-null int64

regDate              150000 non-null int64

model                149999 non-null float64

brand                150000 non-null int64

bodyType             145494 non-null float64

fuelType             141320 non-null float64

gearbox              144019 non-null float64

power                150000 non-null int64

kilometer            150000 non-null float64

notRepairedDamage    150000 non-null object

regionCode           150000 non-null int64

seller               150000 non-null int64

offerType            150000 non-null int64

creatDate            150000 non-null int64

price                150000 non-null int64

v_0                  150000 non-null float64

v_1                  150000 non-null float64

v_2                  150000 non-null float64

v_3                  150000 non-null float64

v_4                  150000 non-null float64

v_5                  150000 non-null float64

v_6                  150000 non-null float64

v_7                  150000 non-null float64

v_8                  150000 non-null float64

v_9                  150000 non-null float64

v_10                 150000 non-null float64

v_11                 150000 non-null float64

v_12                 150000 non-null float64

v_13                 150000 non-null float64

v_14                 150000 non-null float64

dtypes: float64(20), int64(10), object(1)

memory usage: 35.5+ MB
Test_data.info()
<class 'pandas.core.frame.DataFrame'>

RangeIndex: 50000 entries, 0 to 49999

Data columns (total 30 columns):

SaleID               50000 non-null int64

name                 50000 non-null int64

regDate              50000 non-null int64

model                50000 non-null float64

brand                50000 non-null int64

bodyType             48587 non-null float64

fuelType             47107 non-null float64

gearbox              48090 non-null float64

power                50000 non-null int64

kilometer            50000 non-null float64

notRepairedDamage    50000 non-null object

regionCode           50000 non-null int64

seller               50000 non-null int64

offerType            50000 non-null int64

creatDate            50000 non-null int64

v_0                  50000 non-null float64

v_1                  50000 non-null float64

v_2                  50000 non-null float64

v_3                  50000 non-null float64

v_4                  50000 non-null float64

v_5                  50000 non-null float64

v_6                  50000 non-null float64

v_7                  50000 non-null float64

v_8                  50000 non-null float64

v_9                  50000 non-null float64

v_10                 50000 non-null float64

v_11                 50000 non-null float64

v_12                 50000 non-null float64

v_13                 50000 non-null float64

v_14                 50000 non-null float64

dtypes: float64(20), int64(9), object(1)

memory usage: 11.4+ MB

2.3.4 Check for missing values and anomalies

## 1) Check the NaN count of every column. Note: isnull().sum() only counts real NaN values; placeholder strings such as '-' hidden in object columns are not counted here
Train_data.isnull().sum()

[12]:

SaleID                  0
,name                    0
,regDate                 0
,model                   1
,brand                   0
,bodyType             4506
,fuelType             8680
,gearbox              5981
,power                   0
,kilometer               0
,notRepairedDamage       0
,regionCode              0
,seller                  0
,offerType               0
,creatDate               0
,price                   0
,v_0                     0
,v_1                     0
,v_2                     0
,v_3                     0
,v_4                     0
,v_5                     0
,v_6                     0
,v_7                     0
,v_8                     0
,v_9                     0
,v_10                    0
,v_11                    0
,v_12                    0
,v_13                    0
,v_14                    0
,dtype: int64
Test_data.isnull().sum()

[13]:

SaleID                  0
,name                    0
,regDate                 0
,model                   0
,brand                   0
,bodyType             1413
,fuelType             2893
,gearbox              1910
,power                   0
,kilometer               0
,notRepairedDamage       0
,regionCode              0
,seller                  0
,offerType               0
,creatDate               0
,v_0                     0
,v_1                     0
,v_2                     0
,v_3                     0
,v_4                     0
,v_5                     0
,v_6                     0
,v_7                     0
,v_8                     0
,v_9                     0
,v_10                    0
,v_11                    0
,v_12                    0
,v_13                    0
,v_14                    0
,dtype: int64
# Visualize the NaN counts
missing = Train_data.isnull().sum()
missing = missing[missing > 0]  # keep only the columns that actually have missing values
missing.sort_values(inplace=True)  # ascending by default
missing.plot.bar()  # both a DataFrame and a Series can be plotted directly as a bar chart; the y-axis is the count

[14]:

<AxesSubplot:>

The two snippets above give a very direct view of which columns contain NaN and how many. The main question is whether the number of NaNs is really large: if it is small, filling is usually fine; if you use a tree model such as LightGBM you can simply leave the NaNs and let the tree handle them; if a column has far too many NaNs you may consider dropping it. In the figure above, model is missing only one value, so filling it is a reasonable choice (a small sketch follows below).
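A minimal sketch of the fill-or-leave decision described above, assuming the mode is an acceptable filler for a column like model that is missing a single value (whether to fill at all should be validated against model performance):

# model is missing only one value, so filling it with the most frequent model is a cheap, low-risk choice
Train_data_filled = Train_data.copy()
Train_data_filled['model'] = Train_data_filled['model'].fillna(Train_data_filled['model'].mode()[0])
print(Train_data_filled['model'].isnull().sum())   # 0
# bodyType / fuelType / gearbox have many more NaNs; with tree models such as LightGBM they can simply be left as NaN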

# Visualize the missing values and their distribution on a random sample of 250 rows. In the matrix plot, the number on the left of the sparkline on the right-hand side corresponds to the row with the most missing features, and the number on the right to a row with no missing features; the view therefore shows both the per-feature and the per-row distribution of missing values.
msno.matrix(Train_data.sample(250))

[15]:

<AxesSubplot:>

msno.bar(Train_data.sample(1000))

[16]:

<AxesSubplot:>

# Visualize the missing values
msno.matrix(Test_data.sample(250))

[17]:

<AxesSubplot:>

msno.bar(Test_data.sample(1000))

[18]:

<AxesSubplot:>

The missing values in the test set look much like the training set; the visualization shows four columns with missing values.

## 2) Outlier detection
Train_data.info()
<class 'pandas.core.frame.DataFrame'>

RangeIndex: 150000 entries, 0 to 149999

Data columns (total 31 columns):

SaleID               150000 non-null int64

name                 150000 non-null int64

regDate              150000 non-null int64

model                149999 non-null float64

brand                150000 non-null int64

bodyType             145494 non-null float64

fuelType             141320 non-null float64

gearbox              144019 non-null float64

power                150000 non-null int64

kilometer            150000 non-null float64

notRepairedDamage    150000 non-null object

regionCode           150000 non-null int64

seller               150000 non-null int64

offerType            150000 non-null int64

creatDate            150000 non-null int64

price                150000 non-null int64

v_0                  150000 non-null float64

v_1                  150000 non-null float64

v_2                  150000 non-null float64

v_3                  150000 non-null float64

v_4                  150000 non-null float64

v_5                  150000 non-null float64

v_6                  150000 non-null float64

v_7                  150000 non-null float64

v_8                  150000 non-null float64

v_9                  150000 non-null float64

v_10                 150000 non-null float64

v_11                 150000 non-null float64

v_12                 150000 non-null float64

v_13                 150000 non-null float64

v_14                 150000 non-null float64

dtypes: float64(20), int64(10), object(1)

memory usage: 35.5+ MB

Except for notRepairedDamage, which is of type object, every column is numeric. Listing its distinct values makes the problem obvious.

Train_data['notRepairedDamage'].value_counts()  # inspect the object-typed feature separately; value_counts() exposes placeholder values that isnull() misses

[21]:

0.0    111361
-       24324
1.0     14315
Name: notRepairedDamage, dtype: int64

We can see that '-' is also a missing value. Since many models handle NaN directly, we do not impute it for now and simply replace '-' with NaN.

Train_data['notRepairedDamage'].replace('-', np.nan, inplace=True)
Train_data['notRepairedDamage'].value_counts()

[23]:

0.0    111361
1.0     14315
Name: notRepairedDamage, dtype: int64
Train_data.isnull().sum()

[24]:

SaleID                   0
,name                     0
,regDate                  0
,model                    1
,brand                    0
,bodyType              4506
,fuelType              8680
,gearbox               5981
,power                    0
,kilometer                0
,notRepairedDamage    24324
,regionCode               0
,seller                   0
,offerType                0
,creatDate                0
,price                    0
,v_0                      0
,v_1                      0
,v_2                      0
,v_3                      0
,v_4                      0
,v_5                      0
,v_6                      0
,v_7                      0
,v_8                      0
,v_9                      0
,v_10                     0
,v_11                     0
,v_12                     0
,v_13                     0
,v_14                     0
,dtype: int64
Test_data['notRepairedDamage'].value_counts()

[25]:

0.0    37249
-       8031
1.0     4720
Name: notRepairedDamage, dtype: int64
Test_data['notRepairedDamage'].replace('-', np.nan, inplace=True)

The following two categorical features are severely skewed and generally will not help the prediction, so we simply drop them here. You could of course dig further, but it is usually not worth it (an automated check for such near-constant columns is sketched after the deletion code below).

Train_data["seller"].value_counts()  # 可逐个查看所有特征,判断哪些特征严重倾斜

[27]:

0    149999
1         1
Name: seller, dtype: int64
Train_data["offerType"].value_counts()

[28]:

0    150000
Name: offerType, dtype: int64
del Train_data["seller"]
del Train_data["offerType"]
del Test_data["seller"]
del Test_data["offerType"]
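Such near-constant columns can also be found automatically, as mentioned above. A small sketch, to be run before the deletions; the 0.999 threshold is an arbitrary choice for illustration:

def find_near_constant(df, threshold=0.999):
    """Return the columns whose most frequent value covers at least `threshold` of the rows."""
    result = {}
    for col in df.columns:
        top_ratio = df[col].value_counts(normalize=True, dropna=False).iloc[0]
        if top_ratio >= threshold:
            result[col] = top_ratio
    return result

# running this before the del statements above would flag seller and offerType
# print(find_near_constant(Train_data))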

2.3.5 Understand the distribution of the target value

Train_data['price']

[30]:

0          1850
,1          3600
,2          6222
,3          2400
,4          5200
,5          8000
,6          3500
,7          1000
,8          2850
,9           650
,10         3100
,11         5450
,12         1600
,13         3100
,14         6900
,15         3200
,16        10500
,17         3700
,18          790
,19         1450
,20          990
,21         2800
,22          350
,23          599
,24         9250
,25         3650
,26         2800
,27         2399
,28         4900
,29         2999
,          ...  
,149970      900
,149971     3400
,149972      999
,149973     3500
,149974     4500
,149975     3990
,149976     1200
,149977      330
,149978     3350
,149979     5000
,149980     4350
,149981     9000
,149982     2000
,149983    12000
,149984     6700
,149985     4200
,149986     2800
,149987     3000
,149988     7500
,149989     1150
,149990      450
,149991    24950
,149992      950
,149993     4399
,149994    14780
,149995     5900
,149996     9500
,149997     7500
,149998     4999
,149999     4700
,Name: price, Length: 150000, dtype: int64
Train_data['price'].value_counts()

[31]:

500      2337
,1500     2158
,1200     1922
,1000     1850
,2500     1821
,600      1535
,3500     1533
,800      1513
,2000     1378
,999      1356
,750      1279
,4500     1271
,650      1257
,1800     1223
,2200     1201
,850      1198
,700      1174
,900      1107
,1300     1105
,950      1104
,3000     1098
,1100     1079
,5500     1079
,1600     1074
,300      1071
,550      1042
,350      1005
,1250     1003
,6500      973
,1999      929
,         ... 
,21560       1
,7859        1
,3120        1
,2279        1
,6066        1
,6322        1
,4275        1
,10420       1
,43300       1
,305         1
,1765        1
,15970       1
,44400       1
,8885        1
,2992        1
,31850       1
,15413       1
,13495       1
,9525        1
,7270        1
,13879       1
,3760        1
,24250       1
,11360       1
,10295       1
,25321       1
,8886        1
,8801        1
,37920       1
,8188        1
,Name: price, Length: 3763, dtype: int64
## 1) Overall distribution (unbounded Johnson distribution, etc.)
import scipy.stats as st
y = Train_data['price']
plt.figure(1); plt.title('Johnson SU')
sns.distplot(y, kde=False, fit=st.johnsonsu)  # overlay a fitted Johnson SU distribution on the price histogram
plt.figure(2); plt.title('Normal')
sns.distplot(y, kde=False, fit=st.norm)  # fit a normal distribution
plt.figure(3); plt.title('Log Normal')
sns.distplot(y, kde=False, fit=st.lognorm)  # fit a log-normal distribution

[32]:

<AxesSubplot:title={'center':'Log Normal'}, xlabel='price'>

The price does not follow a normal distribution, so it has to be transformed before regression (models tend to perform better on normally distributed data). Although the log transform works quite well, the best fit is the unbounded Johnson (Johnson SU) distribution.

## 2) Check skewness and kurtosis
sns.distplot(Train_data['price'])
print("Skewness: %f" % Train_data['price'].skew())
print("Kurtosis: %f" % Train_data['price'].kurt())
Skewness: 3.346487

Kurtosis: 18.995183

Train_data.skew(), Train_data.kurt()  # v_4 is the closest to a normal distribution

[34]:

(SaleID               6.017846e-17
, name                 5.576058e-01
, regDate              2.849508e-02
, model                1.484388e+00
, brand                1.150760e+00
, bodyType             9.915299e-01
, fuelType             1.595486e+00
, gearbox              1.317514e+00
, power                6.586318e+01
, kilometer           -1.525921e+00
, notRepairedDamage    2.430640e+00
, regionCode           6.888812e-01
, creatDate           -7.901331e+01
, price                3.346487e+00
, v_0                 -1.316712e+00
, v_1                  3.594543e-01
, v_2                  4.842556e+00
, v_3                  1.062920e-01
, v_4                  3.679890e-01
, v_5                 -4.737094e+00
, v_6                  3.680730e-01
, v_7                  5.130233e+00
, v_8                  2.046133e-01
, v_9                  4.195007e-01
, v_10                 2.522046e-02
, v_11                 3.029146e+00
, v_12                 3.653576e-01
, v_13                 2.679152e-01
, v_14                -1.186355e+00
, dtype: float64, SaleID                 -1.200000
, name                   -1.039945
, regDate                -0.697308
, model                   1.740483
, brand                   1.076201
, bodyType                0.206937
, fuelType                5.880049
, gearbox                -0.264161
, power                5733.451054
, kilometer               1.141934
, notRepairedDamage       3.908072
, regionCode             -0.340832
, creatDate            6881.080328
, price                  18.995183
, v_0                     3.993841
, v_1                    -1.753017
, v_2                    23.860591
, v_3                    -0.418006
, v_4                    -0.197295
, v_5                    22.934081
, v_6                    -1.742567
, v_7                    25.845489
, v_8                    -0.636225
, v_9                    -0.321491
, v_10                   -0.577935
, v_11                   12.568731
, v_12                    0.268937
, v_13                   -0.438274
, v_14                    2.393526
, dtype: float64)
sns.distplot(Train_data.skew(),color='blue',axlabel ='Skewness')

[35]:

<AxesSubplot:xlabel='Skewness'>

sns.distplot(Train_data.kurt(),color='orange',axlabel ='Kurtness')

[36]:

<AxesSubplot:xlabel='Kurtness'>

For an explanation of skew and kurt, see https://www.cnblogs.com/wyy1480/p/10474046.html
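For reference, a small sketch of what skew() and kurt() measure. The raw moment-based definitions are computed below; pandas additionally applies a bias correction and, like the formula below, reports excess kurtosis (0 for a normal distribution):

# population (biased) skewness and kurtosis computed directly from the definition
x = Train_data['price']
mu, sigma = x.mean(), x.std(ddof=0)
skew_raw = ((x - mu) ** 3).mean() / sigma ** 3
kurt_raw = ((x - mu) ** 4).mean() / sigma ** 4 - 3   # subtracting 3 gives the "excess" kurtosis
print(skew_raw, kurt_raw)
# pandas' skew()/kurt() are bias-corrected versions of these quantities, so the numbers differ slightly,
# but the reading is the same: skewness > 0 means right-skewed, excess kurtosis > 0 means heavier tails than a normal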

# Judging by skewness and kurtosis, most features are fairly close to a normal distribution

## 3) Check the frequency counts of the target value; it contains some large outliers
plt.hist(Train_data['price'], orientation = 'vertical',histtype = 'bar', color ='red')
plt.show()

Looking at the frequency counts, values above 20,000 are very rare. They could be treated as special values (outliers) and either replaced or dropped before moving on.

# After the log transform the distribution is much more even, so predicting log(price) is a common trick for this kind of regression problem

  # Although the Johnson SU fit is better, the log transform is used far more often

plt.hist(np.log(Train_data['price']), orientation = 'vertical',histtype = 'bar', color ='red') 
plt.show()
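A minimal sketch of the log-transform trick mentioned above, using log1p/expm1 so it also stays safe if a price of 0 ever appears; the inverse transform is applied to the model's predictions:

# train on log(price), predict on the log scale, then transform back
y_log = np.log1p(Train_data['price'])
print(y_log.skew())          # noticeably smaller than the skewness of the raw price
# after the model produces pred_log on the log scale:
# pred_price = np.expm1(pred_log)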

2.3.6 Split the features into categorical and numeric features, and check the unique-value distribution of the categorical features

Feature overview

  • name - vehicle code
  • regDate - vehicle registration date
  • model - model code
  • brand - brand
  • bodyType - body type
  • fuelType - fuel type
  • gearbox - gearbox
  • power - engine power
  • kilometer - kilometers driven
  • notRepairedDamage - whether the car has unrepaired damage
  • regionCode - region code for viewing the car
  • seller - seller type [deleted above]
  • offerType - offer type [deleted above]
  • creatDate - date the ad was posted
  • price - car price
  • 'v_0', 'v_1', 'v_2', 'v_3', 'v_4', 'v_5', 'v_6', 'v_7', 'v_8', 'v_9', 'v_10', 'v_11', 'v_12', 'v_13', 'v_14' (embedding vectors derived from large amounts of information such as car reviews and tags) [hand-crafted anonymous features]
# separate out the label, i.e. the target value
Y_train = Train_data['price']
# this automatic split only works for data without any label encoding already applied
# it does not apply here (some features, e.g. brand, bodyType, regionCode, are already label encoded), so the split has to be done manually based on the actual meaning
# numeric features
# numeric_features = Train_data.select_dtypes(include=[np.number])
# numeric_features.columns
# # categorical features
# categorical_features = Train_data.select_dtypes(include=[np.object])
# categorical_features.columns
numeric_features = ['power', 'kilometer', 'v_0', 'v_1', 'v_2', 'v_3', 'v_4', 'v_5', 'v_6', 'v_7', 'v_8', 'v_9', 'v_10', 'v_11', 'v_12', 'v_13','v_14' ]
categorical_features = ['name', 'model', 'brand', 'bodyType', 'fuelType', 'gearbox', 'notRepairedDamage', 'regionCode',]
# nunique distribution of each categorical feature (training set)
for cat_fea in categorical_features:
    print(cat_fea + "的特征分布如下:")
    print("{}特征有{}个不同的值".format(cat_fea, Train_data[cat_fea].nunique()))
    print(Train_data[cat_fea].value_counts())
name的特征分布如下:

name特征有99662个不同的值

708       282

387       282

55        280

1541      263

203       233

53        221

713       217

290       197

1186      184

911       182

2044      176

1513      160

1180      158

631       157

893       153

2765      147

473       141

1139      137

1108      132

444       129

306       127

2866      123

2402      116

533       114

1479      113

422       113

4635      110

725       110

964       109

1373      104

         ... 

89083       1

95230       1

164864      1

173060      1

179207      1

181256      1

185354      1

25564       1

19417       1

189324      1

162719      1

191373      1

193422      1

136082      1

140180      1

144278      1

146327      1

148376      1

158621      1

1404        1

15319       1

46022       1

64463       1

976         1

3025        1

5074        1

7123        1

11221       1

13270       1

174485      1

Name: name, Length: 99662, dtype: int64

model的特征分布如下:

model特征有248个不同的值

0.0      11762

19.0      9573

4.0       8445

1.0       6038

29.0      5186

48.0      5052

40.0      4502

26.0      4496

8.0       4391

31.0      3827

13.0      3762

17.0      3121

65.0      2730

49.0      2608

46.0      2454

30.0      2342

44.0      2195

5.0       2063

10.0      2004

21.0      1872

73.0      1789

11.0      1775

23.0      1696

22.0      1524

69.0      1522

63.0      1469

7.0       1460

16.0      1349

88.0      1309

66.0      1250

         ...  

141.0       37

133.0       35

216.0       30

202.0       28

151.0       26

226.0       26

231.0       23

234.0       23

233.0       20

198.0       18

224.0       18

227.0       17

237.0       17

220.0       16

230.0       16

239.0       14

223.0       13

236.0       11

241.0       10

232.0       10

229.0       10

235.0        7

246.0        7

243.0        4

244.0        3

245.0        2

209.0        2

240.0        2

242.0        2

247.0        1

Name: model, Length: 248, dtype: int64

brand的特征分布如下:

brand特征有40个不同的值

0     31480

4     16737

14    16089

10    14249

1     13794

6     10217

9      7306

5      4665

13     3817

11     2945

3      2461

7      2361

16     2223

8      2077

25     2064

27     2053

21     1547

15     1458

19     1388

20     1236

12     1109

22     1085

26      966

30      940

17      913

24      772

28      649

32      592

29      406

37      333

2       321

31      318

18      316

36      228

34      227

33      218

23      186

35      180

38       65

39        9

Name: brand, dtype: int64

bodyType的特征分布如下:

bodyType特征有8个不同的值

0.0    41420

1.0    35272

2.0    30324

3.0    13491

4.0     9609

5.0     7607

6.0     6482

7.0     1289

Name: bodyType, dtype: int64

fuelType的特征分布如下:

fuelType特征有7个不同的值

0.0    91656

1.0    46991

2.0     2212

3.0      262

4.0      118

5.0       45

6.0       36

Name: fuelType, dtype: int64

gearbox的特征分布如下:

gearbox特征有2个不同的值

0.0    111623

1.0     32396

Name: gearbox, dtype: int64

notRepairedDamage的特征分布如下:

notRepairedDamage特征有2个不同的值

0.0    111361

1.0     14315

Name: notRepairedDamage, dtype: int64

regionCode的特征分布如下:

regionCode特征有7905个不同的值

419     369

764     258

125     137

176     136

462     134

428     132

24      130

1184    130

122     129

828     126

70      125

827     120

207     118

1222    117

2418    117

85      116

2615    115

2222    113

759     112

188     111

1757    110

1157    109

2401    107

1069    107

3545    107

424     107

272     107

451     106

450     105

129     105

       ... 

6324      1

7372      1

7500      1

8107      1

2453      1

7942      1

5135      1

6760      1

8070      1

7220      1

8041      1

8012      1

5965      1

823       1

7401      1

8106      1

5224      1

8117      1

7507      1

7989      1

6505      1

6377      1

8042      1

7763      1

7786      1

6414      1

7063      1

4239      1

5931      1

7267      1

Name: regionCode, Length: 7905, dtype: int64
# nunique distribution of each categorical feature (test set)
for cat_fea in categorical_features:
    print(cat_fea + "的特征分布如下:")
    print("{}特征有{}个不同的值".format(cat_fea, Test_data[cat_fea].nunique()))
    print(Test_data[cat_fea].value_counts())
name的特征分布如下:

name特征有37453个不同的值

55        97

708       96

387       95

1541      88

713       74

53        72

1186      67

203       67

631       65

911       64

2044      62

2866      60

1139      57

893       54

1180      52

2765      50

1108      50

290       48

1513      47

691       45

473       44

299       43

444       41

422       39

964       39

1479      38

1273      38

306       36

725       35

4635      35

          ..

46786      1

48835      1

165572     1

68204      1

171719     1

59080      1

186062     1

11985      1

147155     1

134869     1

138967     1

173792     1

114403     1

59098      1

59144      1

40679      1

61161      1

128746     1

55022      1

143089     1

14066      1

147187     1

112892     1

46598      1

159481     1

22270      1

89855      1

42752      1

48899      1

11808      1

Name: name, Length: 37453, dtype: int64

model的特征分布如下:

model特征有247个不同的值

0.0      3896

19.0     3245

4.0      3007

1.0      1981

29.0     1742

48.0     1685

26.0     1525

40.0     1409

8.0      1397

31.0     1292

13.0     1210

17.0     1087

65.0      915

49.0      866

46.0      831

30.0      803

10.0      709

5.0       696

44.0      676

21.0      659

11.0      603

23.0      591

73.0      561

69.0      555

7.0       526

63.0      493

22.0      443

16.0      412

66.0      411

88.0      391

         ... 

124.0       9

193.0       9

151.0       8

198.0       8

181.0       8

239.0       7

233.0       7

216.0       7

231.0       6

133.0       6

236.0       6

227.0       6

220.0       5

230.0       5

234.0       4

224.0       4

241.0       4

223.0       4

229.0       3

189.0       3

232.0       3

237.0       3

235.0       2

245.0       2

209.0       2

242.0       1

240.0       1

244.0       1

243.0       1

246.0       1

Name: model, Length: 247, dtype: int64

brand的特征分布如下:

brand特征有40个不同的值

0     10348

4      5763

14     5314

10     4766

1      4532

6      3502

9      2423

5      1569

13     1245

11      919

7       795

3       773

16      771

8       704

25      695

27      650

21      544

15      511

20      450

19      450

12      389

22      363

30      324

17      317

26      303

24      268

28      225

32      193

29      117

31      115

18      106

2       104

37       92

34       77

33       76

36       67

23       62

35       53

38       23

39        2

Name: brand, dtype: int64

bodyType的特征分布如下:

bodyType特征有8个不同的值

0.0    13985

1.0    11882

2.0     9900

3.0     4433

4.0     3303

5.0     2537

6.0     2116

7.0      431

Name: bodyType, dtype: int64

fuelType的特征分布如下:

fuelType特征有7个不同的值

0.0    30656

1.0    15544

2.0      774

3.0       72

4.0       37

6.0       14

5.0       10

Name: fuelType, dtype: int64

gearbox的特征分布如下:

gearbox特征有2个不同的值

0.0    37301

1.0    10789

Name: gearbox, dtype: int64

notRepairedDamage的特征分布如下:

notRepairedDamage特征有2个不同的值

0.0    37249

1.0     4720

Name: notRepairedDamage, dtype: int64

regionCode的特征分布如下:

regionCode特征有6971个不同的值

419     146

764      78

188      52

125      51

759      51

2615     50

462      49

542      44

85       44

1069     43

451      41

828      40

757      39

1688     39

2154     39

1947     39

24       39

2690     38

238      38

2418     38

827      38

1184     38

272      38

233      38

70       37

703      37

2067     37

509      37

360      37

176      37

       ... 

5512      1

7465      1

1290      1

3717      1

1258      1

7401      1

7920      1

7925      1

5151      1

7527      1

7689      1

8114      1

3237      1

6003      1

7335      1

3984      1

7367      1

6001      1

8021      1

3691      1

4920      1

6035      1

3333      1

5382      1

6969      1

7753      1

7463      1

7230      1

826       1

112       1

Name: regionCode, Length: 6971, dtype: int64

2.3.7 Numeric feature analysis

numeric_features.append('price')
numeric_features

[45]:

['power',
 'kilometer',
 'v_0',
 'v_1',
 'v_2',
 'v_3',
 'v_4',
 'v_5',
 'v_6',
 'v_7',
 'v_8',
 'v_9',
 'v_10',
 'v_11',
 'v_12',
 'v_13',
 'v_14',
 'price']
Train_data.head()

[46]:

(output: the first 5 rows of Train_data, now 29 columns after seller and offerType were dropped)

5 rows × 29 columns

## 1) Correlation analysis (the correlation between the categorical features and price was also tried, but it turned out to be low)
price_numeric = Train_data[numeric_features]
correlation = price_numeric.corr()  # pairwise correlations between the features
print(correlation['price'].sort_values(ascending = False),'\n')  # print the correlation between price and every other feature
price        1.000000

v_12         0.692823

v_8          0.685798

v_0          0.628397

power        0.219834

v_5          0.164317

v_2          0.085322

v_6          0.068970

v_1          0.060914

v_14         0.035911

v_13        -0.013993

v_7         -0.053024

v_4         -0.147085

v_9         -0.206205

v_10        -0.246175

v_11        -0.275320

kilometer   -0.440519

v_3         -0.730946

Name: price, dtype: float64 
f , ax = plt.subplots(figsize = (7, 7))   # create a figure with a single subplot, 7 x 7 in size
plt.title('Correlation of Numeric Features with Price',y=1,size=16)
sns.heatmap(correlation,square = True,  vmax=0.8)  # show the feature correlations as a heatmap; the bar on the right is the correlation scale

  # a similar heatmap was drawn for the categorical features versus price; it is dominated by dark brown, while the numeric-feature heatmap is dominated by red, so the correlations among the numeric features and between them and price are clearly stronger

[48]:

<AxesSubplot:title={'center':'Correlation of Numeric Features with Price'}>

del price_numeric['price']  # the distribution of price itself has already been analyzed above
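The comment above mentions that the categorical features correlate only weakly with price. A rough, hypothetical sketch of that check: temporarily encode the categories as integer codes and compute the linear correlation (the codes are arbitrary, so this is only a coarse sanity check, not a proper measure of association):

# coarse check only: category codes are arbitrary integers, so a low value here is not conclusive
cat_cols = ['model', 'brand', 'bodyType', 'fuelType', 'gearbox', 'notRepairedDamage', 'regionCode']
cat_encoded = Train_data[cat_cols].apply(lambda s: s.astype('category').cat.codes)
cat_encoded['price'] = Train_data['price']
print(cat_encoded.corr()['price'].sort_values(ascending=False))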
## 2) Check the skewness and kurtosis of several features
for col in numeric_features:
    print('{:15}'.format(col), 
          'Skewness: {:05.2f}'.format(Train_data[col].skew()) ,  
          'Kurtosis: {:06.2f}'.format(Train_data[col].kurt())  
         )

  # from the output, kilometer, v_0, v_1, v_3, v_4, v_6, v_8, v_9, v_10, v_12, v_13 and v_14 are closer to a normal distribution

power           Skewness: 65.86     Kurtosis: 5733.45

kilometer       Skewness: -1.53     Kurtosis: 001.14

v_0             Skewness: -1.32     Kurtosis: 003.99

v_1             Skewness: 00.36     Kurtosis: -01.75

v_2             Skewness: 04.84     Kurtosis: 023.86

v_3             Skewness: 00.11     Kurtosis: -00.42

v_4             Skewness: 00.37     Kurtosis: -00.20

v_5             Skewness: -4.74     Kurtosis: 022.93

v_6             Skewness: 00.37     Kurtosis: -01.74

v_7             Skewness: 05.13     Kurtosis: 025.85

v_8             Skewness: 00.20     Kurtosis: -00.64

v_9             Skewness: 00.42     Kurtosis: -00.32

v_10            Skewness: 00.03     Kurtosis: -00.58

v_11            Skewness: 03.03     Kurtosis: 012.57

v_12            Skewness: 00.37     Kurtosis: 000.27

v_13            Skewness: 00.27     Kurtosis: -00.44

v_14            Skewness: -1.19     Kurtosis: 002.39

price           Skewness: 03.35     Kurtosis: 019.00

## 3) Visualize the distribution of each numeric feature
f = pd.melt(Train_data, value_vars=numeric_features)  # melt the numeric_features columns of Train_data into long format, i.e. (variable, value) pairs

  # use a FacetGrid to show the distribution of every feature in f within one figure, two columns per row

g = sns.FacetGrid(f, col="variable",  col_wrap=2, sharex=False, sharey=False) # layout
g = g.map(sns.distplot, "value")  # draw the plots

The anonymous features are distributed fairly evenly and are close to normal (a few of them are not).
## 4) Visualize the pairwise relationships between the numeric features; apart from a clear linear relationship between v_1 and v_6, the other pairs show no obvious linear relationship
sns.set()  # seaborn default settings
columns = ['price', 'v_12', 'v_8' , 'v_0', 'power', 'v_5',  'v_2', 'v_6', 'v_1', 'v_14']
sns.pairplot(Train_data[columns],size = 2 ,kind ='scatter',diag_kind='kde')
plt.show()

Train_data.columns

[53]:

Index(['SaleID', 'name', 'regDate', 'model', 'brand', 'bodyType', 'fuelType',
       'gearbox', 'power', 'kilometer', 'notRepairedDamage', 'regionCode',
       'creatDate', 'price', 'v_0', 'v_1', 'v_2', 'v_3', 'v_4', 'v_5', 'v_6',
       'v_7', 'v_8', 'v_9', 'v_10', 'v_11', 'v_12', 'v_13', 'v_14'],
      dtype='object')
Y_train

[54]:

(output: Y_train, identical to the Train_data['price'] series shown above — Length: 150000, dtype: int64)
This visualizes the relationships between multiple variables; for more on this kind of visualization see the article "Seaborn-05-Pairplot多变量图" on Jianshu.
## 5) Visualize the multivariate regression relationships, i.e. fit a linear regression of price on each feature
fig, ((ax1, ax2), (ax3, ax4), (ax5, ax6), (ax7, ax8), (ax9, ax10)) = plt.subplots(nrows=5, ncols=2, figsize=(24, 20))
# ['v_12', 'v_8' , 'v_0', 'power', 'v_5',  'v_2', 'v_6', 'v_1', 'v_14']
# this could also be written as v_12_scatter_plot = Train_data[['price', 'v_12']]
v_12_scatter_plot = pd.concat([Y_train,Train_data['v_12']],axis = 1)

  # this could be shortened to sns.regplot(x='v_12', y='price', data=v_12_scatter_plot, ax=ax1), since scatter and fit_reg default to True: the data is shown as a scatter plot and a linear regression line is fitted

sns.regplot(x='v_12',y = 'price', data = v_12_scatter_plot,scatter= True, fit_reg=True, ax=ax1)
v_8_scatter_plot = pd.concat([Y_train,Train_data['v_8']],axis = 1)
sns.regplot(x='v_8',y = 'price',data = v_8_scatter_plot,scatter= True, fit_reg=True, ax=ax2)
v_0_scatter_plot = pd.concat([Y_train,Train_data['v_0']],axis = 1)
sns.regplot(x='v_0',y = 'price',data = v_0_scatter_plot,scatter= True, fit_reg=True, ax=ax3)
power_scatter_plot = pd.concat([Y_train,Train_data['power']],axis = 1)
sns.regplot(x='power',y = 'price',data = power_scatter_plot,scatter= True, fit_reg=True, ax=ax4)
v_5_scatter_plot = pd.concat([Y_train,Train_data['v_5']],axis = 1)
sns.regplot(x='v_5',y = 'price',data = v_5_scatter_plot,scatter= True, fit_reg=True, ax=ax5)
v_2_scatter_plot = pd.concat([Y_train,Train_data['v_2']],axis = 1)
sns.regplot(x='v_2',y = 'price',data = v_2_scatter_plot,scatter= True, fit_reg=True, ax=ax6)
v_6_scatter_plot = pd.concat([Y_train,Train_data['v_6']],axis = 1)
sns.regplot(x='v_6',y = 'price',data = v_6_scatter_plot,scatter= True, fit_reg=True, ax=ax7)
v_1_scatter_plot = pd.concat([Y_train,Train_data['v_1']],axis = 1)
sns.regplot(x='v_1',y = 'price',data = v_1_scatter_plot,scatter= True, fit_reg=True, ax=ax8)
v_14_scatter_plot = pd.concat([Y_train,Train_data['v_14']],axis = 1)
sns.regplot(x='v_14',y = 'price',data = v_14_scatter_plot,scatter= True, fit_reg=True, ax=ax9)
v_13_scatter_plot = pd.concat([Y_train,Train_data['v_13']],axis = 1)
sns.regplot(x='v_13',y = 'price',data = v_13_scatter_plot,scatter= True, fit_reg=True, ax=ax10)

[55]:

<AxesSubplot:xlabel='v_13', ylabel='price'>

2.3.8 Categorical feature analysis

## 1) Unique-value distribution: check how sparse each categorical feature is
for fea in categorical_features:
    print('{}:{}'.format(fea, Train_data[fea].nunique()))
name:99662
model:248
brand:40
bodyType:8
fuelType:7
gearbox:2
notRepairedDamage:2
regionCode:7905
## 2) Box plots of the categorical features
# name and regionCode have far too many distinct categories (too sparse), so here we only plot the non-sparse ones
categorical_features = ['model',
 'brand',
 'bodyType',
 'fuelType',
 'gearbox',
 'notRepairedDamage']
for c in categorical_features:
    Train_data[c] = Train_data[c].astype('category')  # explicitly convert the categorical feature to the category dtype
    if Train_data[c].isnull().any():  # True if column c contains any missing values
        Train_data[c] = Train_data[c].cat.add_categories(['MISSING'])  # column c is categorical now, so 'MISSING' has to be registered as a category first, otherwise the fillna below would raise an error
        Train_data[c] = Train_data[c].fillna('MISSING')  # fill the missing values in column c with 'MISSING'
def boxplot(x, y, **kwargs):
    sns.boxplot(x=x, y=y)
    x=plt.xticks(rotation=90)  # rotate the x-axis tick labels by 90 degrees
f = pd.melt(Train_data, id_vars=['price'], value_vars=categorical_features)
g = sns.FacetGrid(f, col="variable",  col_wrap=2, sharex=False, sharey=False)
g = g.map(boxplot, "value", "price")  # map the boxplot function over every facet of g

Train_data.columns

[59]:

Index(['SaleID', 'name', 'regDate', 'model', 'brand', 'bodyType', 'fuelType',
       'gearbox', 'power', 'kilometer', 'notRepairedDamage', 'regionCode',
       'creatDate', 'price', 'v_0', 'v_1', 'v_2', 'v_3', 'v_4', 'v_5', 'v_6',
       'v_7', 'v_8', 'v_9', 'v_10', 'v_11', 'v_12', 'v_13', 'v_14'],
      dtype='object')
## 3) Violin plots of the categorical features (possibly useful for judging how the missing-value category is distributed?)
catg_list = categorical_features
target = 'price'
for catg in catg_list :
    sns.violinplot(x=catg, y=target, data=Train_data)
    plt.show()

categorical_features = ['model',
 'brand',
 'bodyType',
 'fuelType',
 'gearbox',
 'notRepairedDamage']
## 4) Bar plots of the categorical features
def bar_plot(x, y, **kwargs):
    sns.barplot(x=x, y=y)
    x=plt.xticks(rotation=90)

  # reshape the data into 3 columns: target value, feature name, feature value

f = pd.melt(Train_data, id_vars=['price'], value_vars=categorical_features)

  # build a grid of plots from the reshaped data, one facet per feature name

g = sns.FacetGrid(f, col="variable",  col_wrap=2, sharex=False, sharey=False)

  # on each facet draw a bar plot of feature value vs. target: the x-axis is the feature value and the y-axis is the mean price for that value

g = g.map(bar_plot, "value", "price")

 # from the bar plots above, the MISSING category of bodyType behaves like category 1, while for fuelType, gearbox and notRepairedDamage the MISSING category differs in price from every other category, so it is probably a mixture of them

##  5) Frequency counts of each category of the categorical features (count_plot)
def count_plot(x,  **kwargs):
    sns.countplot(x=x)
    x=plt.xticks(rotation=90)
f = pd.melt(Train_data,  value_vars=categorical_features)
g = sns.FacetGrid(f, col="variable",  col_wrap=2, sharex=False, sharey=False)

   # on each facet draw a count plot of the feature values: the x-axis is the feature value, the y-axis is the count; compared with the bar plots above, the price dimension is gone

g = g.map(count_plot, "value")

# based on the count plots above, the missing values of fuelType, gearbox and notRepairedDamage could be set to the most frequent category (a small sketch follows below)
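A minimal sketch of that idea. It assumes it is applied to columns whose missing values have not already been replaced with 'MISSING' (the box-plot step above already filled them in this notebook, in which case the loop is a no-op); whether mode-filling actually helps must be checked against model performance:

# fill fuelType / gearbox / notRepairedDamage with their most frequent category, on a copy of the columns
fill_cols = ['fuelType', 'gearbox', 'notRepairedDamage']
df_fill = Train_data[fill_cols].copy()
for col in fill_cols:
    df_fill[col] = df_fill[col].fillna(df_fill[col].value_counts().idxmax())
print(df_fill.isnull().sum())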

2.3.9 Generate a data report with ydata_profiling

ydata_profiling generates a fairly comprehensive visual data report in one call (simple and convenient); just open the resulting HTML file at the end.

!pip install ydata_profiling
(pip output: downloaded and installed ydata-profiling 4.6.4 together with pydantic 2.6.1, pydantic-core 2.16.2 and annotated-types 0.6.0; all other dependencies were already satisfied)
import ydata_profiling
pfr = ydata_profiling.ProfileReport(Train_data)
pfr.to_file("./example.html")
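On 150,000 rows the full report can take a while. ydata_profiling also offers a minimal mode that keeps only the basic per-column statistics, which is a quick first pass (a small sketch using the same Train_data as above):

# faster, slimmed-down report with only the basic statistics
pfr_min = ydata_profiling.ProfileReport(Train_data, minimal=True)
pfr_min.to_file("./example_minimal.html")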

2.4 Summary of Experience

The EDA steps given here are the common, general-purpose ones. In practice, whether in engineering work or in competitions, this is only the first and most basic step.

Next, you usually combine the model's performance with feature engineering to analyze how the data actually behaves when modeled, and use your own understanding together with the literature to judge and understand the real problem more deeply.

Finally, keep iterating between EDA, data processing and mining to arrive at a better data structure and distribution and at features that are more strongly related to the target.


In machine learning, data exploration is usually called EDA (Exploratory Data Analysis):

It means exploring the available data (especially raw data from surveys or observations) under as few prior assumptions as possible, and uncovering its structure and regularities through plotting, tabulating, equation fitting, computing summary statistics, and so on.

Data exploration helps us discover properties of the data and relationships within it, which is very helpful for the subsequent feature construction.

  1. An initial look at the data (inspecting it directly, or with statistical functions such as .sum(), .mean(), .describe()) can cover: the number of samples and the size of the training set, whether there are time features and whether it is a time-series problem, the meaning of the (non-anonymous) features, the feature types (string, int, float, time), the missing-value situation (note how missingness is represented in the data — sometimes a cell is empty, sometimes it holds a symbol such as "NAN"), and the mean and variance of the features.

  2. Record and analyze samples where certain features are more than 30% missing; this helps the later model validation and tuning. Decide whether such features should be filled (and how: mean, zero, mode, ...), dropped, or whether the samples should first be split into groups and predicted with different feature sets and models (see the missing-ratio sketch after this list).

  3. Analyze outliers specifically: check whether the label of a record with an anomalous feature value is itself anomalous (far from the mean, or a special symbol), whether outliers should be removed or replaced with normal values, and whether they come from faulty recording or from a genuinely abnormal machine.

  4. Analyze the label specifically, e.g. its distribution.

  5. Go further by plotting individual features and features together with the label (statistical plots, scatter plots) to get a direct sense of their distributions; this also helps to spot outliers. Use box plots to examine how feature values deviate, and plot feature against feature and feature against label to analyze the relationships between them.
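A small sketch matching point 2 above: compute the missing ratio of every feature and flag anything above a chosen threshold (30% here, purely for illustration):

# per-feature missing ratio; decide fill vs. drop vs. per-group modelling for the flagged columns
missing_ratio = Train_data.isnull().mean().sort_values(ascending=False)
print(missing_ratio[missing_ratio > 0])
high_missing = missing_ratio[missing_ratio > 0.3].index.tolist()
print("features with more than 30% missing:", high_missing)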
