Task03: Data Restructuring

2 Data Restructuring

2.1 Combining the Data

2.1.1 Load all the files in the data folder and observe how they relate to one another

# Import the basic libraries
import numpy as np
import pandas as pd
# Load train-left-up.csv from the data folder
df1=pd.read_csv('./data/train-left-up.csv')
df1.head()
   PassengerId  Survived  Pclass                                                Name
0            1         0       3                             Braund, Mr. Owen Harris
1            2         1       1  Cumings, Mrs. John Bradley (Florence Briggs Th...
2            3         1       3                              Heikkinen, Miss. Laina
3            4         1       1        Futrelle, Mrs. Jacques Heath (Lily May Peel)
4            5         0       3                            Allen, Mr. William Henry
df2=pd.read_csv('./data/train-left-down.csv')
df2.head()
   PassengerId  Survived  Pclass                                         Name
0          440         0       2       Kvillner, Mr. Johan Henrik Johannesson
1          441         1       2  Hart, Mrs. Benjamin (Esther Ada Bloomfield)
2          442         0       3                              Hampe, Mr. Leon
3          443         0       3                    Petterson, Mr. Johan Emil
4          444         1       2                    Reynaldo, Ms. Encarnacion
df3=pd.read_csv('./data/train-right-up.csv')
df3.head()
      Sex   Age  SibSp  Parch            Ticket     Fare Cabin Embarked
0    male  22.0      1      0         A/5 21171   7.2500   NaN        S
1  female  38.0      1      0          PC 17599  71.2833   C85        C
2  female  26.0      0      0  STON/O2. 3101282   7.9250   NaN        S
3  female  35.0      1      0            113803  53.1000  C123        S
4    male  35.0      0      0            373450   8.0500   NaN        S
df4=pd.read_csv('./data/train-right-down.csv')
df4.head()
      Sex   Age  SibSp  Parch        Ticket    Fare Cabin Embarked
0    male  31.0      0      0    C.A. 18723  10.500   NaN        S
1  female  45.0      1      1  F.C.C. 13529  26.250   NaN        S
2    male  20.0      0      0        345769   9.500   NaN        S
3    male  25.0      1      0        347076   7.775   NaN        S
4  female  28.0      0      0        230433  13.000   NaN        S

2.1.2 Use the concat method to join train-left-up.csv and train-right-up.csv horizontally into one table, and save it as result_up

result_up=pd.concat([df1,df3],axis=1)
result_up.head()
   PassengerId  Survived  Pclass  Name  Sex  Age  SibSp  Parch  Ticket  Fare  Cabin  Embarked
0  1  0  3  Braund, Mr. Owen Harris  male  22.0  1  0  A/5 21171  7.2500  NaN  S
1  2  1  1  Cumings, Mrs. John Bradley (Florence Briggs Th...  female  38.0  1  0  PC 17599  71.2833  C85  C
2  3  1  3  Heikkinen, Miss. Laina  female  26.0  0  0  STON/O2. 3101282  7.9250  NaN  S
3  4  1  1  Futrelle, Mrs. Jacques Heath (Lily May Peel)  female  35.0  1  0  113803  53.1000  C123  S
4  5  0  3  Allen, Mr. William Henry  male  35.0  0  0  373450  8.0500  NaN  S

2.1.3 Use the concat method to join train-left-down and train-right-down horizontally into one table and save it as result_down, then stack result_up and result_down vertically into result

result_down=pd.concat([df2,df4],axis=1)
result_down.head()
   PassengerId  Survived  Pclass  Name  Sex  Age  SibSp  Parch  Ticket  Fare  Cabin  Embarked
0  440  0  2  Kvillner, Mr. Johan Henrik Johannesson  male  31.0  0  0  C.A. 18723  10.500  NaN  S
1  441  1  2  Hart, Mrs. Benjamin (Esther Ada Bloomfield)  female  45.0  1  1  F.C.C. 13529  26.250  NaN  S
2  442  0  3  Hampe, Mr. Leon  male  20.0  0  0  345769  9.500  NaN  S
3  443  0  3  Petterson, Mr. Johan Emil  male  25.0  1  0  347076  7.775  NaN  S
4  444  1  2  Reynaldo, Ms. Encarnacion  female  28.0  0  0  230433  13.000  NaN  S
result=pd.concat([result_up,result_down],axis=0)
result.head()
   PassengerId  Survived  Pclass  Name  Sex  Age  SibSp  Parch  Ticket  Fare  Cabin  Embarked
0  1  0  3  Braund, Mr. Owen Harris  male  22.0  1  0  A/5 21171  7.2500  NaN  S
1  2  1  1  Cumings, Mrs. John Bradley (Florence Briggs Th...  female  38.0  1  0  PC 17599  71.2833  C85  C
2  3  1  3  Heikkinen, Miss. Laina  female  26.0  0  0  STON/O2. 3101282  7.9250  NaN  S
3  4  1  1  Futrelle, Mrs. Jacques Heath (Lily May Peel)  female  35.0  1  0  113803  53.1000  C123  S
4  5  0  3  Allen, Mr. William Henry  male  35.0  0  0  373450  8.0500  NaN  S

2.1.4 Use the DataFrame methods join and append to implement 2.1.2 and 2.1.3

result_up=df1.join(df3)
result_up.head()
   PassengerId  Survived  Pclass  Name  Sex  Age  SibSp  Parch  Ticket  Fare  Cabin  Embarked
0  1  0  3  Braund, Mr. Owen Harris  male  22.0  1  0  A/5 21171  7.2500  NaN  S
1  2  1  1  Cumings, Mrs. John Bradley (Florence Briggs Th...  female  38.0  1  0  PC 17599  71.2833  C85  C
2  3  1  3  Heikkinen, Miss. Laina  female  26.0  0  0  STON/O2. 3101282  7.9250  NaN  S
3  4  1  1  Futrelle, Mrs. Jacques Heath (Lily May Peel)  female  35.0  1  0  113803  53.1000  C123  S
4  5  0  3  Allen, Mr. William Henry  male  35.0  0  0  373450  8.0500  NaN  S
result_down=df2.join(df4)
result_down.head()
   PassengerId  Survived  Pclass  Name  Sex  Age  SibSp  Parch  Ticket  Fare  Cabin  Embarked
0  440  0  2  Kvillner, Mr. Johan Henrik Johannesson  male  31.0  0  0  C.A. 18723  10.500  NaN  S
1  441  1  2  Hart, Mrs. Benjamin (Esther Ada Bloomfield)  female  45.0  1  1  F.C.C. 13529  26.250  NaN  S
2  442  0  3  Hampe, Mr. Leon  male  20.0  0  0  345769  9.500  NaN  S
3  443  0  3  Petterson, Mr. Johan Emil  male  25.0  1  0  347076  7.775  NaN  S
4  444  1  2  Reynaldo, Ms. Encarnacion  female  28.0  0  0  230433  13.000  NaN  S
# Note: DataFrame.append was removed in pandas 2.0; pd.concat([result_up, result_down]) is the modern equivalent
result=result_up.append(result_down)
result.head()
   PassengerId  Survived  Pclass  Name  Sex  Age  SibSp  Parch  Ticket  Fare  Cabin  Embarked
0  1  0  3  Braund, Mr. Owen Harris  male  22.0  1  0  A/5 21171  7.2500  NaN  S
1  2  1  1  Cumings, Mrs. John Bradley (Florence Briggs Th...  female  38.0  1  0  PC 17599  71.2833  C85  C
2  3  1  3  Heikkinen, Miss. Laina  female  26.0  0  0  STON/O2. 3101282  7.9250  NaN  S
3  4  1  1  Futrelle, Mrs. Jacques Heath (Lily May Peel)  female  35.0  1  0  113803  53.1000  C123  S
4  5  0  3  Allen, Mr. William Henry  male  35.0  0  0  373450  8.0500  NaN  S

2.1.5 Use pandas' merge method together with the DataFrame append method to implement 2.1.2 and 2.1.3

Additional merge parameters (a small sketch follows this list):
how: the type of join — inner (inner join), left (left outer join), right (right outer join), or outer (full outer join); the default is inner.
left_index: use the row index of the left DataFrame as the join key.
right_index: use the row index of the right DataFrame as the join key.
suffixes: a tuple of strings appended to overlapping column names in the left and right DataFrames; the default is ('_x', '_y').
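A minimal sketch of how and suffixes in action; the two small frames a and b below are made up purely for illustration:

# Toy frames to illustrate merge's how and suffixes parameters
a = pd.DataFrame({'key': [1, 2, 3], 'val': ['a1', 'a2', 'a3']})
b = pd.DataFrame({'key': [2, 3, 4], 'val': ['b2', 'b3', 'b4']})
pd.merge(a, b, on='key', how='inner')                   # only keys 2 and 3 (the default)
pd.merge(a, b, on='key', how='outer')                   # keys 1-4, NaN where one side is missing
pd.merge(a, b, on='key', suffixes=('_left', '_right'))  # rename the overlapping 'val' columns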

result_up=pd.merge(df1,df3,left_index=True,right_index=True)
result_up.head()
   PassengerId  Survived  Pclass  Name  Sex  Age  SibSp  Parch  Ticket  Fare  Cabin  Embarked
0  1  0  3  Braund, Mr. Owen Harris  male  22.0  1  0  A/5 21171  7.2500  NaN  S
1  2  1  1  Cumings, Mrs. John Bradley (Florence Briggs Th...  female  38.0  1  0  PC 17599  71.2833  C85  C
2  3  1  3  Heikkinen, Miss. Laina  female  26.0  0  0  STON/O2. 3101282  7.9250  NaN  S
3  4  1  1  Futrelle, Mrs. Jacques Heath (Lily May Peel)  female  35.0  1  0  113803  53.1000  C123  S
4  5  0  3  Allen, Mr. William Henry  male  35.0  0  0  373450  8.0500  NaN  S
result_down=pd.merge(df2,df4,left_index=True,right_index=True)
result_down.head()
   PassengerId  Survived  Pclass  Name  Sex  Age  SibSp  Parch  Ticket  Fare  Cabin  Embarked
0  440  0  2  Kvillner, Mr. Johan Henrik Johannesson  male  31.0  0  0  C.A. 18723  10.500  NaN  S
1  441  1  2  Hart, Mrs. Benjamin (Esther Ada Bloomfield)  female  45.0  1  1  F.C.C. 13529  26.250  NaN  S
2  442  0  3  Hampe, Mr. Leon  male  20.0  0  0  345769  9.500  NaN  S
3  443  0  3  Petterson, Mr. Johan Emil  male  25.0  1  0  347076  7.775  NaN  S
4  444  1  2  Reynaldo, Ms. Encarnacion  female  28.0  0  0  230433  13.000  NaN  S
result=result_up.append(result_down)
result.head()
   PassengerId  Survived  Pclass  Name  Sex  Age  SibSp  Parch  Ticket  Fare  Cabin  Embarked
0  1  0  3  Braund, Mr. Owen Harris  male  22.0  1  0  A/5 21171  7.2500  NaN  S
1  2  1  1  Cumings, Mrs. John Bradley (Florence Briggs Th...  female  38.0  1  0  PC 17599  71.2833  C85  C
2  3  1  3  Heikkinen, Miss. Laina  female  26.0  0  0  STON/O2. 3101282  7.9250  NaN  S
3  4  1  1  Futrelle, Mrs. Jacques Heath (Lily May Peel)  female  35.0  1  0  113803  53.1000  C123  S
4  5  0  3  Allen, Mr. William Henry  male  35.0  0  0  373450  8.0500  NaN  S

[Think] Compare how merge, join, and concat differ and what they have in common. In 2.1.4 and 2.1.5, why is the DataFrame append method required for the vertical step? Could 2.1.4 and 2.1.5 be completed using only merge or only join?

#Completing the vertical stack of 2.1.4 with merge: how='outer' keeps the rows of both tables (how='left' would keep only result_up's rows)
result_merge=pd.merge(result_up,result_down,how='outer')
result_merge.head()
   PassengerId  Survived  Pclass  Name  Sex  Age  SibSp  Parch  Ticket  Fare  Cabin  Embarked
0  1  0  3  Braund, Mr. Owen Harris  male  22.0  1  0  A/5 21171  7.2500  NaN  S
1  2  1  1  Cumings, Mrs. John Bradley (Florence Briggs Th...  female  38.0  1  0  PC 17599  71.2833  C85  C
2  3  1  3  Heikkinen, Miss. Laina  female  26.0  0  0  STON/O2. 3101282  7.9250  NaN  S
3  4  1  1  Futrelle, Mrs. Jacques Heath (Lily May Peel)  female  35.0  1  0  113803  53.1000  C123  S
4  5  0  3  Allen, Mr. William Henry  male  35.0  0  0  373450  8.0500  NaN  S
#Trying the same with join: join aligns on the index, so it places result_down beside result_up rather than underneath it
#As noted above, the overlapping column names need a suffix
result_join=result_up.join(result_down,rsuffix='_2')
result_join.head()
   PassengerId  Survived  Pclass  Name  Sex  Age  SibSp  Parch  Ticket  Fare  ...  Pclass_2  Name_2  Sex_2  Age_2  SibSp_2  Parch_2  Ticket_2  Fare_2  Cabin_2  Embarked_2
0  1  0  3  Braund, Mr. Owen Harris  male  22.0  1  0  A/5 21171  7.2500  ...  2  Kvillner, Mr. Johan Henrik Johannesson  male  31.0  0  0  C.A. 18723  10.500  NaN  S
1  2  1  1  Cumings, Mrs. John Bradley (Florence Briggs Th...  female  38.0  1  0  PC 17599  71.2833  ...  2  Hart, Mrs. Benjamin (Esther Ada Bloomfield)  female  45.0  1  1  F.C.C. 13529  26.250  NaN  S
2  3  1  3  Heikkinen, Miss. Laina  female  26.0  0  0  STON/O2. 3101282  7.9250  ...  3  Hampe, Mr. Leon  male  20.0  0  0  345769  9.500  NaN  S
3  4  1  1  Futrelle, Mrs. Jacques Heath (Lily May Peel)  female  35.0  1  0  113803  53.1000  ...  3  Petterson, Mr. Johan Emil  male  25.0  1  0  347076  7.775  NaN  S
4  5  0  3  Allen, Mr. William Henry  male  35.0  0  0  373450  8.0500  ...  2  Reynaldo, Ms. Encarnacion  female  28.0  0  0  230433  13.000  NaN  S

5 rows × 24 columns

Answer:
merge joins on overlapping column names by default; df1 and df3 above share no columns, so left_index and right_index must be specified to join on the row index.
join can be used directly without arguments when the two tables have no column names in common; when names overlap, lsuffix/rsuffix must be supplied.
concat concatenates along an axis; the key parameter is axis.
append conveniently stacks two DataFrames with identical columns without any extra arguments (it has since been removed in pandas 2.0; a pd.concat equivalent is sketched below).
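Since DataFrame.append was deprecated in pandas 1.4 and removed in 2.0, the same vertical stack is now normally written with pd.concat; a minimal sketch, assuming result_up and result_down from above:

# pd.concat along axis=0 is the modern replacement for DataFrame.append
result_alt = pd.concat([result_up, result_down], axis=0, ignore_index=True)  # ignore_index rebuilds a 0..n-1 index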

2.1.6 Save the combined data as result.csv

result.to_csv('result.csv')

2.2 Looking at the Data from Another Angle

2.2.1 Convert our data to Series-type data

df = pd.read_csv('result.csv')
df.head()
   Unnamed: 0  PassengerId  Survived  Pclass  Name  Sex  Age  SibSp  Parch  Ticket  Fare  Cabin  Embarked
0  0  1  0  3  Braund, Mr. Owen Harris  male  22.0  1  0  A/5 21171  7.2500  NaN  S
1  1  2  1  1  Cumings, Mrs. John Bradley (Florence Briggs Th...  female  38.0  1  0  PC 17599  71.2833  C85  C
2  2  3  1  3  Heikkinen, Miss. Laina  female  26.0  0  0  STON/O2. 3101282  7.9250  NaN  S
3  3  4  1  1  Futrelle, Mrs. Jacques Heath (Lily May Peel)  female  35.0  1  0  113803  53.1000  C123  S
4  4  5  0  3  Allen, Mr. William Henry  male  35.0  0  0  373450  8.0500  NaN  S

out=df.stack()
out.head()
0  Unnamed: 0                           0
   PassengerId                          1
   Survived                             0
   Pclass                               3
   Name           Braund, Mr. Owen Harris
dtype: object
out.to_csv('unit_result.csv')
test=pd.read_csv('unit_result.csv')
test.head()
   Unnamed: 0   Unnamed: 1                        0
0           0   Unnamed: 0                        0
1           0  PassengerId                        1
2           0     Survived                        0
3           0       Pclass                        3
4           0         Name  Braund, Mr. Owen Harris
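Reading unit_result.csv back with a plain read_csv, as above, turns both index levels into ordinary columns. A minimal sketch of a round trip that keeps the structure, assuming the unit_result.csv written above (column order and dtypes may differ from the original table):

# Read the two index levels back, squeeze the single data column into a Series, then unstack to a wide table
restored = pd.read_csv('unit_result.csv', index_col=[0, 1]).squeeze('columns')
restored_wide = restored.unstack()  # moves the column-name level back into columns
restored_wide.head()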
# Import the basic libraries
import numpy as np
import pandas as pd
# Load result.csv saved in the previous task and inspect it
df=pd.read_csv('result.csv')
df.head()
   Unnamed: 0  PassengerId  Survived  Pclass  Name  Sex  Age  SibSp  Parch  Ticket  Fare  Cabin  Embarked
0  0  1  0  3  Braund, Mr. Owen Harris  male  22.0  1  0  A/5 21171  7.2500  NaN  S
1  1  2  1  1  Cumings, Mrs. John Bradley (Florence Briggs Th...  female  38.0  1  0  PC 17599  71.2833  C85  C
2  2  3  1  3  Heikkinen, Miss. Laina  female  26.0  0  0  STON/O2. 3101282  7.9250  NaN  S
3  3  4  1  1  Futrelle, Mrs. Jacques Heath (Lily May Peel)  female  35.0  1  0  113803  53.1000  C123  S
4  4  5  0  3  Allen, Mr. William Henry  male  35.0  0  0  373450  8.0500  NaN  S

2.3 Data Aggregation and Computation

2.3.1 Learn about the GroupBy mechanism from the textbook Python for Data Analysis (p. 303), Google, or any other source

Rows are split into groups by a grouping key, a function is then applied to a chosen column within each group, and the per-group results are combined into a new Series (the split-apply-combine pattern); a small sketch follows.
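A minimal toy sketch of the three steps; the small frame below is made up purely for illustration:

# Toy frame made up for illustration of split-apply-combine
toy = pd.DataFrame({'Sex': ['male', 'female', 'male', 'female'],
                    'Fare': [7.25, 71.28, 8.05, 53.10]})
grouped = toy.groupby('Sex')      # split: rows are bucketed by the grouping key (lazy, nothing computed yet)
for name, part in grouped:        # each part is the sub-DataFrame for one key
    print(name, len(part))
grouped['Fare'].mean()            # apply + combine: one mean per group, returned as a new Series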

2.3.2 Compute the average fare paid by male and female passengers on the Titanic

df1=df['Fare'].groupby(df['Sex'])
means=df1.mean()
means
Sex
female    44.479818
male      25.523893
Name: Fare, dtype: float64

2.3.3 Count the male and female survivors on the Titanic

# Survivors are coded as 1 and deaths as 0, so summing Survived counts the survivors
df2=df['Survived'].groupby(df['Sex'])
sums=df2.sum()
sums
Sex
female    233
male      109
Name: Survived, dtype: int64

2.3.4 Compute the number of survivors in each cabin class

df3=df['Survived'].groupby(df['Pclass'])
sums=df3.sum()
sums
Pclass
1    136
2     87
3    119
Name: Survived, dtype: int64

[Think] From a data-analysis perspective, what conclusions can be drawn from the statistics above?

Thoughts:
The average fare paid by women is higher than that paid by men, which suggests women more often held higher-class tickets, and the number of female survivors (233) is roughly twice the number of male survivors (109). The per-class survivor counts alone (136/87/119) do not yet show that higher classes had higher survival rates, since the classes differ in size; dividing survivors by passengers within each class would confirm it (see the sketch below).
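A minimal sketch for backing up the class claim with rates rather than counts, assuming the df loaded from result.csv above:

# Survival rate per class: the mean of the 0/1 Survived flag within each Pclass group
df.groupby('Pclass')['Survived'].mean()
# Survivors and passenger counts side by side make the comparison explicit
df.groupby('Pclass')['Survived'].agg(['sum', 'count'])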

[Think] The computations in 2.3.2 and 2.3.3 can be done in a single call with the agg() function, and the column names can be changed with rename. Can you write this out following that hint?

# Computing both aggregates in one agg() call, then renaming the columns
df.groupby('Sex').agg({'Fare': 'mean', 'Survived': 'sum'}).rename(columns=
                            {'Fare': 'mean_fare', 'Survived': 'sum_pclass'})
        mean_fare  sum_pclass
Sex                          
female  44.479818         233
male    25.523893         109

2.3.5 Compute the average fare for each age within each ticket class

df.groupby(['Pclass','Age'])['Fare'].mean().head()
Pclass  Age  
1       0.92     151.5500
        2.00     151.5500
        4.00      81.8583
        11.00    120.0000
        14.00    120.0000
Name: Fare, dtype: float64

2.3.6 Combine the data from 2.3.2 and 2.3.3 and save it to sex_fare_survived.csv

# Use a new name here so the original df (needed again in 2.3.7) is not overwritten
sex_fare_survived=pd.merge(means,sums,left_index=True,right_index=True)
sex_fare_survived.head()
             Fare  Survived
Sex                        
female  44.479818       233
male    25.523893       109
sex_fare_survived.to_csv('sex_fare_survived.csv')

2.3.7 Compute the number of survivors at each age, find the age with the most survivors, and then compute the corresponding survival rate (number of survivors / total)

a=df['Survived'].groupby(df['Age'])
b=a.sum()
b.head()
Age
0.42    1
0.67    1
0.75    2
0.83    2
0.92    1
Name: Survived, dtype: int64
b[b.values==b.max()]
Age
24.0    15
Name: Survived, dtype: int64
sums=df['Survived'].sum()   # total number of survivors
sums
342
survival_rate=b.max()/sums  # the age with the most survivors, as a share of all survivors
survival_rate
0.043859649122807015
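The ratio above divides the 15 survivors aged 24 by the 342 survivors overall. If the survival rate in the task is instead read as survivors of that age divided by all passengers of that age, a minimal sketch, assuming the same df:

# Survivors and passenger counts per age, then the ratio for the age with the most survivors
by_age = df.groupby('Age')['Survived'].agg(['sum', 'count'])
best_age = by_age['sum'].idxmax()                       # age with the most survivors
by_age.loc[best_age, 'sum'] / by_age.loc[best_age, 'count']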