DataWhale Data Analysis Training Camp, Task 2 (Chapter 2 Part 1: Data Cleaning and Feature Processing)


[Review & Introduction] In the previous chapter we mainly went over the basics so that you could get familiar with common data analysis operations, looking at the data from various angles. Here we move on to the data analysis workflow itself, covering data cleaning, feature processing, data reconstruction and data visualization. This lays the groundwork for the final modeling and model evaluation stages of data analysis.

Chapter 2 Part 1: Data Cleaning and Feature Processing

The data we get is usually not clean: it contains missing values, outliers and so on, and needs some processing before we can continue with analysis or modeling. So the first step after getting the data is data cleaning. In this chapter we will learn how to handle missing values, duplicate values, strings and data conversion, cleaning the data into a form that can be analyzed or modeled.

Before starting, import the numpy and pandas packages and load the data.
#Load the required libraries
import numpy as np 
import pandas as pd
#Load the data train.csv
df = pd.read_csv('./train.csv')
df.head(3)
  | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S

2.1 Observing and Handling Missing Values

The data we get often has many missing values. For example, we can see NaN in the Cabin column. Do the other columns have missing values too, and how should these missing values be handled?

2.1.1 Task 1: Observing missing values

(1) Check the number of missing values in each feature
(2) Look at the data in the Age, Cabin and Embarked columns
Each of these can be done in more than one way, so the more approaches you try, the better.

#Write your code here
df.info()


<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 12 columns):
PassengerId    891 non-null int64
Survived       891 non-null int64
Pclass         891 non-null int64
Name           891 non-null object
Sex            891 non-null object
Age            714 non-null float64
SibSp          891 non-null int64
Parch          891 non-null int64
Ticket         891 non-null object
Fare           891 non-null float64
Cabin          204 non-null object
Embarked       889 non-null object
dtypes: float64(2), int64(5), object(5)
memory usage: 83.6+ KB
#Write your code here
df.isnull().sum()


PassengerId      0
Survived         0
Pclass           0
Name             0
Sex              0
Age            177
SibSp            0
Parch            0
Ticket           0
Fare             0
Cabin          687
Embarked         2
dtype: int64
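Task (1) can also be answered as a proportion rather than a count; a minimal extra sketch on the same df:
#Proportion of missing values per column, sorted from most to least missing
df.isnull().mean().sort_values(ascending=False)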
#Write your code here
df[['Age','Cabin','Embarked']].head(3)


  | Age | Cabin | Embarked
0 | 22.0 | NaN | S
1 | 38.0 | C85 | C
2 | 26.0 | NaN | S
2.1.2 Task 2: Handling missing values (as long as inplace=False, the operations on df below do not change df)

(1) What are the general approaches to handling missing values?

(2) Try to handle the missing values in the Age column

(3) Try different methods to handle the missing values of the whole table at once

#General approaches to missing values: drop the rows/columns that contain them, or fill them
#with a constant, a statistic (mean/median/mode) or a neighbouring value
#Hint: the functions to use are ---> dropna and fillna
df.dropna().head(3)


  | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S
6 | 7 | 0 | 1 | McCarthy, Mr. Timothy J | male | 54.0 | 0 | 0 | 17463 | 51.8625 | E46 | S
#Write your code here
df.fillna(0).head(3)


  | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | 0 | S
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | 0 | S
#Write your code here
# Note: == None does not match NaN (see Thought 2 below), so this assignment changes nothing;
# a working fix would use the isnull() mask instead, e.g. df.loc[df['Age'].isnull(), 'Age'] = 0
df[df['Age']==None]=0
df.head(3)

  | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S
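For sub-task (2), a common option not shown in the cell above is to fill the missing ages with a statistic of the Age column. A minimal sketch, assuming the median as the fill value (the mean or a constant would work the same way); nothing is assigned back, so df itself is unchanged:
#Fill the 177 missing Age values with the column median
age_filled = df['Age'].fillna(df['Age'].median())
age_filled.isnull().sum()   # 0 -- every missing age is filled in the returned Series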

[Thought 1] What parameters do dropna and fillna take, and how are they used?
DataFrame.dropna(axis=0, how='any', thresh=None, subset=None, inplace=False) returns a DataFrame

  • axis: the axis to drop along; axis=0 drops rows that contain NaN, axis=1 drops columns that contain NaN
  • how: the dropping rule, either 'any' or 'all'; 'any' drops a row/column if it contains any NaN, 'all' drops it only when every value is NaN
  • thresh: a threshold; a row/column is kept only if it has at least thresh non-NaN values
  • subset: restrict the search for missing values to the given columns/rows
  • inplace: whether to modify the original df in place

DataFrame.fillna(value=None, method=None, axis=None, inplace=False, limit=None, downcast=None)

  • value: the value used to replace NaN
  • method: the fill strategy, one of {'backfill', 'bfill', 'pad', 'ffill', None}; e.g. ffill propagates the last non-NaN value forward, so the NaNs that follow it along the row/column are filled with that value
  • axis: the axis along which to fill
  • limit: when a method is given, the maximum number of NaNs to fill along the axis
  • downcast: allows the result to be downcast to a smaller dtype
  • inplace: whether to modify the original df in place
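A few hedged examples of these parameters in action (inplace defaults to False, so none of them modify df; the fill values in the last line are just illustrative choices):
df.dropna(subset=['Age']).shape              # drop only the rows where Age is NaN -> (714, 12)
df.dropna(thresh=11).shape                   # keep rows that have at least 11 non-NaN values
df['Age'].fillna(method='ffill').head()      # propagate the previous non-NaN age forward
df.fillna({'Age': df['Age'].mean(), 'Embarked': 'S'}).isnull().sum()   # a different fill value per column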

[Thought 2] Why is it better to use np.nan rather than None when checking for missing values?

#Answer

pandas is built on top of NumPy, which stores missing values in numeric arrays as np.nan (a float), so many operations support NaN but not the Python object None.
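A quick check of this point on the Age column (a small sketch; the == comparison silently matches nothing):
(df['Age'] == None).sum()    # 0 -- the comparison never matches the NaN entries
df['Age'].isnull().sum()     # 177 -- isnull()/isna() is the reliable check
np.nan == np.nan             # False: NaN is not even equal to itself, so == is the wrong tool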

[Reference] https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.dropna.html

[Reference] https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html

2.2 Observing and Handling Duplicate Values

For one reason or another, the data may contain duplicate values. Are there any here, and if so, how should they be handled?

2.2.1 Task 1: Check the data for duplicate values
#Write your code here
df[df.duplicated()]


  | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked
(0 rows)
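The result above is empty, i.e. the data contains no fully duplicated rows. Two other quick checks (a sketch; the subset choice is only an example):
df.duplicated().sum()                    # 0 -- number of fully identical rows
df.duplicated(subset=['Name']).sum()     # duplicates judged on the Name column only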
2.2.2 Task 2: Handle the duplicate values

(1) What are the ways to handle duplicate values?

(2) Handle the duplicates in our data

The more methods the better

#Ways to handle duplicates: drop them with drop_duplicates(), which by default keeps the
#first occurrence; keep='last'/keep=False and subset=[...] give finer control
df.drop_duplicates().head()


  | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S
4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S
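drop_duplicates takes options similar to duplicated; a short sketch of the common ones (this data has no duplicate rows, so the full-row call changes nothing):
df.drop_duplicates(keep='last').shape           # keep the last occurrence instead of the first -> (891, 12) here
df.drop_duplicates(subset=['Ticket']).shape     # treat rows sharing a Ticket as duplicates
df.drop_duplicates(inplace=False).equals(df)    # True: without inplace=True the original df is untouched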
2.2.3 Task 3: Save the cleaned data in csv format
#Write your code here
df.to_csv('./test_clear.csv')


2.3 Observing and Processing Features

Looking at the features, we can roughly split them into two groups:
Numerical features: Survived, Pclass, Age, SibSp, Parch, Fare. Of these, Survived and Pclass are discrete numerical features, while Age, SibSp, Parch and Fare are continuous numerical features.
Text features: Name, Sex, Cabin, Embarked, Ticket. Of these, Sex, Cabin, Embarked and Ticket are categorical text features.
Numerical features can usually be fed to a model directly, but continuous variables are sometimes discretized for model stability and robustness. Text features generally need to be converted into numerical features before they can be used for modeling.
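This split can be checked programmatically from the column dtypes; a minimal sketch:
df.select_dtypes(include='number').columns.tolist()   # PassengerId, Survived, Pclass, Age, SibSp, Parch, Fare
df.select_dtypes(include='object').columns.tolist()   # Name, Sex, Ticket, Cabin, Embarked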

2.3.1 Task 1: Bin (discretize) the Age feature

(1) What is binning?

(2) Split the continuous variable Age evenly into 5 bins, represented by the categorical values 1, 2, 3, 4, 5

(3) Split the continuous variable Age into the five age bands [0,5) [5,15) [15,30) [30,50) [50,80), represented by the categorical values 1, 2, 3, 4, 5

(4) Split the continuous variable Age at the 10%, 30%, 50%, 70% and 90% quantiles into five age bands, represented by the categorical values 1, 2, 3, 4, 5

(5) Save each of the resulting datasets in csv format

#What is binning:
Binning splits a continuous feature into n intervals according to its value, turning it into a discrete (categorical) feature.
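The two binning functions used below differ in where they place the cut points: pd.cut splits the value range into equal-width (or user-given) intervals, while pd.qcut cuts at quantiles so that each bin holds roughly the same number of samples. A tiny sketch on made-up numbers:
s = pd.Series([1, 2, 3, 4, 100])
pd.cut(s, 2).value_counts()    # equal-width bins: four values fall below 50.5, only one above
pd.qcut(s, 2).value_counts()   # equal-frequency bins: about half of the values in each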

#Write your code here
# Split Age evenly into 5 bins
df['AgeBand'] = pd.cut(df['Age'], 5, labels=['1','2','3','4','5'])
df.head(5)

  | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked | AgeBand
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S | 2
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C | 3
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S | 2
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S | 3
4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S | 3
#Write your code here
df.to_csv('./test_ave.csv')


# Write your code here
# Split the continuous variable Age into the five age bands [0,5) [5,15) [15,30) [30,50) [50,80), labelled 1-5
# (note: with these edges pd.cut builds right-closed intervals (0,5], (5,15], ...; pass right=False for [0,5)-style bins)
df['AgeBand'] = pd.cut(df['Age'],[0,5,15,30,50,80],labels=['1','2','3','4','5'])
df.head()


  | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked | AgeBand
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S | 3
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C | 4
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S | 3
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S | 4
4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S | 4
df.to_csv('./test_cut.csv')
# Divide Age at the 10%, 30%, 50%, 70% and 90% quantiles and label the bands 1-5
# (ages above the 90% quantile fall outside these edges and come out as NaN)
df['AgeBand'] = pd.qcut(df['Age'],[0,0.1,0.3,0.5,0.7,0.9],labels=['1','2','3','4','5'])
df.head()
  | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked | AgeBand
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S | 2
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C | 5
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S | 3
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S | 4
4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S | 4
df.to_csv('test_pr.csv')

[Reference] https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.cut.html

[Reference] https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.qcut.html

2.3.2 Task 2: Convert the text variables

(1) Check the names and categories of the text variables
(2) Represent the text variables Sex, Cabin and Embarked with numeric codes such as 1, 2, 3, 4, 5
(3) Represent the text variables Sex, Cabin and Embarked with one-hot encoding

(1) Check the names and categories of the text variables


Method 1: value_counts

#Write your code here
df['Sex'].value_counts()


male      577
female    314
Name: Sex, dtype: int64
#Write your code here
df['Cabin'].value_counts()


B96 B98            4
G6                 4
C23 C25 C27        4
D                  3
F33                3
C22 C26            3
F2                 3
E101               3
B18                2
D20                2
C68                2
C126               2
F4                 2
B49                2
B57 B59 B63 B66    2
C52                2
C123               2
E67                2
E44                2
E8                 2
E33                2
B35                2
C125               2
D26                2
D17                2
C124               2
E121               2
D33                2
C93                2
C2                 2
                  ..
B4                 1
E34                1
C49                1
A31                1
C99                1
B82 B84            1
E58                1
D10 D12            1
D30                1
E63                1
D47                1
C106               1
A16                1
T                  1
B71                1
C90                1
B69                1
D56                1
D46                1
A7                 1
D7                 1
C118               1
B80                1
B42                1
C47                1
C110               1
C82                1
A34                1
A5                 1
E36                1
Name: Cabin, Length: 147, dtype: int64
#Write your code here
df['Embarked'].value_counts()


S    644
C    168
Q     77
Name: Embarked, dtype: int64

Method 2: unique

df['Sex'].unique()
array(['male', 'female'], dtype=object)
df['Sex'].nunique()
2
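To survey every text variable at once, the two methods can be combined in a loop (a small sketch):
for col in ['Name', 'Sex', 'Ticket', 'Cabin', 'Embarked']:
    print(col, df[col].nunique())   # number of distinct categories in each text feature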

(2) Represent the text variables Sex, Cabin and Embarked with numeric codes such as 1, 2, 3, 4, 5


Method 1: replace

df['Sex_num']=df['Sex'].replace(['male','female'],['1','2'])
df.head()
  | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked | AgeBand | Sex_num
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S | 2 | 1
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C | 5 | 2
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S | 3 | 2
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S | 4 | 2
4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S | 4 | 1

Method 2: map

df['Sex_num']=df['Sex'].map({'male':1,'female':2})
df.head()
  | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked | AgeBand | Sex_num
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S | 2 | 1
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C | 5 | 2
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S | 3 | 2
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S | 4 | 2
4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S | 4 | 1

Method 3: LabelEncoder from sklearn.preprocessing

from sklearn.preprocessing import LabelEncoder
for feat in ['Cabin','Ticket']:
    lb1 = LabelEncoder()
    # build a mapping from each distinct value to an integer label, in order of first appearance
    label_dict = dict(zip(df[feat].unique(),range(df[feat].nunique())))
    df[feat+'_labelencode'] = df[feat].map(label_dict)
    # re-encode with sklearn's LabelEncoder; this overwrites the column created by the manual
    # mapping above (the integer codes can differ, since LabelEncoder sorts its classes)
    df[feat+'_labelencode'] = lb1.fit_transform(df[feat].astype(str))
df.head()
  | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked | AgeBand | Sex_num | Cabin_labelencode | Ticket_labelencode
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S | 2 | 1 | 147 | 523
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C | 5 | 2 | 81 | 596
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S | 3 | 2 | 147 | 669
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S | 4 | 2 | 55 | 49
4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S | 4 | 1 | 147 | 472
(3) Represent the text variables Sex, Cabin and Embarked with one-hot encoding
for feat in ['Age','Cabin','Embarked']:
    # get_dummies builds one indicator column per category; note that this loop encodes Age, Cabin
    # and Embarked rather than the Sex, Cabin, Embarked named in the task (see the variant after the output below)
    x = pd.get_dummies(df[feat],prefix=feat)
    df = pd.concat([df,x],axis=1)
df.head()
  | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | ... | Cabin_F G73 | Cabin_F2 | Cabin_F33 | Cabin_F38 | Cabin_F4 | Cabin_G6 | Cabin_T | Embarked_C | Embarked_Q | Embarked_S
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1
4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1

5 rows × 254 columns
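As noted in the comment above, the loop encodes Age, Cabin and Embarked. A sketch that follows the task statement literally (Sex, Cabin, Embarked) could look like this; dummies is just a hypothetical name and is not used later:
#hypothetical variant limited to the three text variables named in the task
dummies = pd.get_dummies(df[['Sex', 'Cabin', 'Embarked']], prefix=['Sex', 'Cabin', 'Embarked'])
dummies.head()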

2.3.3 Task 3: Extract a Title feature from the plain-text Name feature (the titles are Mr, Miss, Mrs, etc.)
#Write your code here

df['title']=df.Name.str.extract(r'([A-Za-z]+)\.', expand=False)   # the title is the word immediately before the "."
df.head()

  | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | ... | Cabin_F2 | Cabin_F33 | Cabin_F38 | Cabin_F4 | Cabin_G6 | Cabin_T | Embarked_C | Embarked_Q | Embarked_S | title
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | Mr
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | Mrs
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | Miss
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | Mrs
4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | Mr

5 rows × 255 columns
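To verify what the regular expression extracted, counting the titles is usually enough (a quick sketch):
df['title'].value_counts().head()   # Mr, Miss, Mrs, Master, ... are the most frequent titles
df['title'].isnull().sum()          # 0 if every Name matched the pattern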

#Save the final cleaned data you have produced
df.to_csv('test_fin.csv')