First, load the libraries and import the data.
# load the required libraries
import numpy as np
import pandas as pd
# load the data from train.csv
df = pd.read_csv('train.csv')
print(df.head(3))
# PassengerId Survived Pclass ... Fare Cabin Embarked
# 0 1 0 3 ... 7.2500 NaN S
# 1 2 1 1 ... 71.2833 C85 C
# 2 3 1 3 ... 7.9250 NaN S
#
# [3 rows x 12 columns]
You can see 3 rows of data here (the header row is not counted).
Raw data is usually not clean: it contains missing values, outliers, and so on, and needs some processing before any analysis or modeling can continue. So the first step after obtaining data is data cleaning.
Next we will work through missing values, duplicate values, strings, and data-type conversion, cleaning the data into a form that can be analyzed or modeled.
2.1 Observing Missing Values
As seen above, the Cabin column contains NaN. Do the other columns have missing values too, and how should those missing values be handled?
First, let's observe the missing values.
(1) Check the number of missing values in each feature
Method 1: df.info()
# observe missing values
print(df.info())
# <class 'pandas.core.frame.DataFrame'>
# RangeIndex: 891 entries, 0 to 890
# Data columns (total 12 columns):
# PassengerId 891 non-null int64
# Survived 891 non-null int64
# Pclass 891 non-null int64
# Name 891 non-null object
# Sex 891 non-null object
# Age 714 non-null float64
# SibSp 891 non-null int64
# Parch 891 non-null int64
# Ticket 891 non-null object
# Fare 891 non-null float64
# Cabin 204 non-null object
# Embarked 889 non-null object
# dtypes: float64(2), int64(5), object(5)
# memory usage: 83.7+ KB
# None
Method 2: df.isnull().sum()
print(df.isnull().sum())
# PassengerId 0
# Survived 0
# Pclass 0
# Name 0
# Sex 0
# Age 177
# SibSp 0
# Parch 0
# Ticket 0
# Fare 0
# Cabin 687
# Embarked 2
# dtype: int64
Method 2 shows the per-column missing counts more directly.
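Beyond raw counts, the per-column missing ratio is often easier to compare across columns. A minimal sketch, run on a small synthetic frame here so it stands alone (on the real data you would call df.isnull().mean() directly):

```python
import numpy as np
import pandas as pd

# small synthetic frame standing in for train.csv (hypothetical values)
demo = pd.DataFrame({
    'Age': [22.0, np.nan, 26.0, np.nan],
    'Cabin': [np.nan, 'C85', np.nan, np.nan],
    'Embarked': ['S', 'C', 'S', np.nan],
})

# isnull() gives a boolean frame; mean() turns each column into a fraction missing
missing_ratio = demo.isnull().mean().sort_values(ascending=False)
print(missing_ratio)
# Cabin       0.75
# Age         0.50
# Embarked    0.25
```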
(2) Inspect the data in the Age, Cabin, and Embarked columns (as above, there are several ways to do this)
# inspect the `Age`, `Cabin`, `Embarked` columns (several approaches work)
print(df[['Age','Cabin','Embarked']].head(3))
# Age Cabin Embarked
# 0 22.0 NaN S
# 1 38.0 C85 C
# 2 26.0 NaN S
The pattern is df[['feature1', 'feature2', 'feature3', ...]]: passing a list of column names selects multiple columns at once.
2.2 Handling Missing Values
(1) What are the general approaches to handling missing values?
'''handling missing values'''
df[df['Age'] == None] = 0  # matches nothing: pandas stores missing numeric values as NaN, not None
print(df.head())
# PassengerId Survived Pclass ... Fare Cabin Embarked
# 0 1 0 3 ... 7.2500 NaN S
# 1 2 1 1 ... 71.2833 C85 C
# 2 3 1 3 ... 7.9250 NaN S
# 3 4 1 1 ... 53.1000 C123 S
# 4 5 0 3 ... 8.0500 NaN S
#
# [5 rows x 12 columns]
df[df['Age'].isnull()] = 0  # caution: this zeroes EVERY column of the rows where Age is missing
print(df.head())
# PassengerId Survived Pclass ... Fare Cabin Embarked
# 0 1 0 3 ... 7.2500 NaN S
# 1 2 1 1 ... 71.2833 C85 C
# 2 3 1 3 ... 7.9250 NaN S
# 3 4 1 1 ... 53.1000 C123 S
# 4 5 0 3 ... 8.0500 NaN S
#
# [5 rows x 12 columns]
df[df['Age'] == np.nan] = 0  # matches nothing: NaN never compares equal to anything, including itself
print(df.head())
# PassengerId Survived Pclass ... Fare Cabin Embarked
# 0 1 0 3 ... 7.2500 NaN S
# 1 2 1 1 ... 71.2833 C85 C
# 2 3 1 3 ... 7.9250 NaN S
# 3 4 1 1 ... 53.1000 C123 S
# 4 5 0 3 ... 8.0500 NaN S
#
# [5 rows x 12 columns]
The effect is not visible in head() here, because the first five rows have no missing Age values; the isnull() assignment above did modify rows further down (zeroing them entirely).
【Think】Why is np.nan better than None for finding missing values?
【Answer】After pandas reads a numeric column, its missing entries are stored as np.nan and the column dtype becomes float64, so comparing against None matches nothing. However, `df['Age'] == np.nan` also matches nothing, because NaN never compares equal to anything, including itself. The reliable approach is the second one, based on isnull()/isna(); to change only the Age column rather than the whole row, use df.loc[df['Age'].isnull(), 'Age'] = 0.
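The NaN comparison semantics above can be verified directly; a short sketch on a throwaway Series:

```python
import numpy as np
import pandas as pd

s = pd.Series([22.0, np.nan, 26.0])

# IEEE 754: NaN != NaN, so equality comparisons never find missing values
print((s == np.nan).sum())  # 0
print(np.nan == np.nan)     # False

# isnull()/isna() is the reliable detector
print(s.isnull().sum())     # 1

# fill only the missing entries in place, without touching anything else
s.loc[s.isnull()] = 0
print(s.tolist())           # [22.0, 0.0, 26.0]
```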
(2) Try handling the missing values in the Age column
(3) Try different methods to handle the missing values of the whole table at once
【Think】What parameters do dropna and fillna take, and how is each used?
【Reference】https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.dropna.html
【Reference】https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html
2.3 Observing Duplicate Values
'''check the duplicate rows in the data'''
print(df[df.duplicated()])
# PassengerId Survived Pclass Name Sex ... Parch Ticket Fare Cabin Embarked
# 17 0 0 0 0 0 ... 0 0 0.0 0 0
# 19 0 0 0 0 0 ... 0 0 0.0 0 0
# 26 0 0 0 0 0 ... 0 0 0.0 0 0
# 28 0 0 0 0 0 ... 0 0 0.0 0 0
# 29 0 0 0 0 0 ... 0 0 0.0 0 0
# .. ... ... ... ... .. ... ... ... ... ... ...
# 859 0 0 0 0 0 ... 0 0 0.0 0 0
# 863 0 0 0 0 0 ... 0 0 0.0 0 0
# 868 0 0 0 0 0 ... 0 0 0.0 0 0
# 878 0 0 0 0 0 ... 0 0 0.0 0 0
# 888 0 0 0 0 0 ... 0 0 0.0 0 0
#
# [176 rows x 12 columns]
Note that these 176 duplicate rows are exactly the all-zero rows produced by the earlier df[df['Age'].isnull()] = 0 assignment: 177 rows were zeroed, and duplicated() marks every copy except the first.
2.4 Handling Duplicate Values
(1) What are the ways to handle duplicate values?
(2) Handle the duplicates in our data
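For (1) and (2), drop_duplicates is the usual tool; a minimal sketch on a hypothetical frame:

```python
import pandas as pd

demo = pd.DataFrame({'a': [1, 1, 2], 'b': ['x', 'x', 'y']})

# keep='first' (default) retains the first copy; keep='last' keeps the last;
# keep=False removes every copy; subset=[...] compares only some columns
deduped = demo.drop_duplicates()
print(deduped)
#    a  b
# 0  1  x
# 2  2  y
```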
Save the cleaned data in csv format
'''save the cleaned data in csv format'''
df.to_csv('test_clear.csv')
About feature observation:
Looking at the features, we can roughly divide them into two groups:
Numeric features: Survived, Pclass, Age, SibSp, Parch, Fare. Among these, Survived and Pclass are discrete numeric features, while Age, SibSp, Parch, Fare are continuous numeric features.
Text features: Name, Sex, Cabin, Embarked, Ticket. Among these, Sex, Cabin, Embarked, Ticket are categorical text features.
Numeric features can usually be used for model training directly, although continuous variables are sometimes discretized for model stability and robustness. Text features generally need to be converted into numeric features before they can be used for modeling and analysis.
Binning (discretizing) the Age variable
(1) What is binning?
(2) Split the continuous variable Age evenly into 5 age bands, represented by the categorical labels 1-5
# split the continuous variable Age evenly into 5 age bands, labeled 1-5
df['AgeBand'] = pd.cut(df['Age'], 5, labels=['1', '2', '3', '4', '5'])
print(df.head())
# PassengerId Survived Pclass ... Cabin Embarked AgeBand
# 0 1 0 3 ... NaN S 2
# 1 2 1 1 ... C85 C 3
# 2 3 1 3 ... NaN S 2
# 3 4 1 1 ... C123 S 3
# 4 5 0 3 ... NaN S 3
#
# [5 rows x 13 columns]
df.to_csv('test_ave.csv')
(3) Split the continuous variable Age into the five age bands [0,5) [5,15) [15,30) [30,50) [50,80), represented by the categorical labels 1-5
# split Age into the bands [0,5) [5,15) [15,30) [30,50) [50,80), labeled 1-5
df['AgeBand'] = pd.cut(df['Age'], [0, 5, 15, 30, 50, 80], labels=['1', '2', '3', '4', '5'])  # note: pd.cut bins are right-closed by default, i.e. (0,5]; pass right=False for [0,5)
print(df.head(3))
# PassengerId Survived Pclass ... Cabin Embarked AgeBand
# 0 1 0 3 ... NaN S 3
# 1 2 1 1 ... C85 C 4
# 2 3 1 3 ... NaN S 3
#
# [3 rows x 13 columns]
df.to_csv('test_cut.csv')
(4) Split the continuous variable Age at the 10% 30% 50% 70% 90% quantiles into five age bands, represented by the categorical labels 1-5
(5) Save each of the datasets obtained above in csv format
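For task (4), pd.qcut bins by quantiles instead of fixed widths; on the real data the call would be pd.qcut(df['Age'], [0, 0.1, 0.3, 0.5, 0.7, 0.9], labels=['1','2','3','4','5']). A sketch on a synthetic age series; note that with these edges, values above the 90th percentile fall outside every bin and become NaN:

```python
import pandas as pd

ages = pd.Series(range(1, 101), dtype=float)  # stand-in for df['Age']

# cut points at the 10/30/50/70/90 percent quantiles
band = pd.qcut(ages, [0, 0.1, 0.3, 0.5, 0.7, 0.9],
               labels=['1', '2', '3', '4', '5'])
print(band.value_counts().sort_index())
print(band.isnull().sum())  # values above the 90th percentile are unbinned
```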
2.5 Converting Text Variables
(1) View the names and categories of the text variables
'''converting text variables'''
# view the names and categories of the categorical text variables
# Method 1: value_counts
print(df['Sex'].value_counts())
# male 453
# female 261
# 0 177
# Name: Sex, dtype: int64
print(df['Cabin'].value_counts())
# 0 177
# B96 B98 4
# C23 C25 C27 4
# G6 4
# C22 C26 3
# ...
# D19 1
# E40 1
# B4 1
# E49 1
# F G63 1
# Name: Cabin, Length: 135, dtype: int64
print(df['Embarked'].value_counts())
# S 554
# 0 177
# C 130
# Q 28
# Name: Embarked, dtype: int64
# Method 2: unique / nunique
print(df['Sex'].unique())
# ['male' 'female' 0]
print(df['Sex'].nunique())
# 3
(2) Represent the text variables Sex, Cabin, Embarked with the numeric labels 1-5
'''convert categorical text to numeric labels'''
# Method 1: replace
df['Sex_num'] = df['Sex'].replace(['male','female'],[1,2])
print(df.head())
# PassengerId Survived Pclass ... Embarked AgeBand Sex_num
# 0 1 0 3 ... S 3 1
# 1 2 1 1 ... C 4 2
# 2 3 1 3 ... S 3 2
# 3 4 1 1 ... S 4 2
# 4 5 0 3 ... S 4 1
#
# [5 rows x 14 columns]
# Method 2: map
df['Sex_num'] = df['Sex'].map({'male': 1, 'female': 2})
print(df.head())
# PassengerId Survived Pclass ... Embarked AgeBand Sex_num
# 0 1 0 3 ... S 3 1.0
# 1 2 1 1 ... C 4 2.0
# 2 3 1 3 ... S 3 2.0
# 3 4 1 1 ... S 4 2.0
# 4 5 0 3 ... S 4 1.0
#
# [5 rows x 14 columns]
# Method 3: LabelEncoder from sklearn.preprocessing
from sklearn.preprocessing import LabelEncoder
for feat in ['Cabin', 'Ticket']:
    lbl = LabelEncoder()
    label_dict = dict(zip(df[feat].unique(), range(df[feat].nunique())))
    df[feat + "_labelEncode"] = df[feat].map(label_dict)  # manual mapping via a dict
    df[feat + "_labelEncode"] = lbl.fit_transform(df[feat].astype(str))  # overwrites the line above with sklearn's encoding
print(df.head())
# PassengerId Survived ... Cabin_labelEncode Ticket_labelEncode
# 0 1 0 ... 135 409
# 1 2 1 ... 74 472
# 2 3 1 ... 135 533
# 3 4 1 ... 50 41
# 4 5 0 ... 135 374
#
# [5 rows x 16 columns]
(3) Represent the text variables Sex, Cabin, Embarked with one-hot encoding
# Method 1: pd.get_dummies
for feat in ["Age", "Embarked"]:
    # x = pd.get_dummies(df["Age"] // 6)
    # x = pd.get_dummies(pd.cut(df['Age'], 5))
    x = pd.get_dummies(df[feat], prefix=feat)
    df = pd.concat([df, x], axis=1)
    # df[feat] = pd.get_dummies(df[feat], prefix=feat)
print(df.head())
# PassengerId Survived Pclass ... Embarked_C Embarked_Q Embarked_S
# 0 1 0 3 ... 0 0 1
# 1 2 1 1 ... 1 0 0
# 2 3 1 3 ... 0 0 1
# 3 4 1 1 ... 0 0 1
# 4 5 0 3 ... 0 0 1
#
# [5 rows x 109 columns]
2.6 Extracting a Title feature (Mr, Miss, Mrs, etc.) from the raw-text Name feature
'''extract the Title feature (Mr, Miss, Mrs, etc.) from the raw-text Name feature'''
df['Title'] = df.Name.str.extract(r'([A-Za-z]+)\.', expand=False)  # raw string avoids an invalid-escape warning
print(df.head())
# PassengerId Survived Pclass ... Embarked_Q Embarked_S Title
# 0 1 0 3 ... 0 1 Mr
# 1 2 1 1 ... 0 0 Mrs
# 2 3 1 3 ... 0 1 Miss
# 3 4 1 1 ... 0 1 Mrs
# 4 5 0 3 ... 0 1 Mr
#
# [5 rows x 110 columns]
# save the result above as the final output
df.to_csv('test_fin.csv')