动手学数据分析 (Hands-on Data Analysis), Chapter 2, Section 2: Data Restructuring

Review: earlier we learned the Pandas basics, and in Chapter 2 we move into the business side of data analysis. In Chapter 2, Section 1 we learned data cleaning, which matters a great deal: only once the data is reasonably clean can the analysis that follows be convincing. In this section we turn to data restructuring, which still belongs to the data-understanding (preparation) stage.

Before starting, import the numpy and pandas packages and load the data.
# Import the basic libraries
import pandas as pd
import numpy as np
# Load train-left-up.csv from the data folder
df = pd.read_csv('./data/train-left-up.csv')
df.head()
   PassengerId  Survived  Pclass  Name
0            1         0       3  Braund, Mr. Owen Harris
1            2         1       1  Cumings, Mrs. John Bradley (Florence Briggs Th...
2            3         1       3  Heikkinen, Miss. Laina
3            4         1       1  Futrelle, Mrs. Jacques Heath (Lily May Peel)
4            5         0       3  Allen, Mr. William Henry

Chapter 2: Data Restructuring (Part 1)

2.4 Combining data

2.4.1 Task 1: Load all of the data files in the data folder and observe how they relate to one another
# Write your code here
text_left_up = pd.read_csv("data/train-left-up.csv")
text_left_down = pd.read_csv("data/train-left-down.csv")
text_right_up = pd.read_csv("data/train-right-up.csv")
text_right_down = pd.read_csv("data/train-right-down.csv")
text_left_up
     PassengerId  Survived  Pclass  Name
0              1         0       3  Braund, Mr. Owen Harris
1              2         1       1  Cumings, Mrs. John Bradley (Florence Briggs Th...
2              3         1       3  Heikkinen, Miss. Laina
3              4         1       1  Futrelle, Mrs. Jacques Heath (Lily May Peel)
4              5         0       3  Allen, Mr. William Henry
..           ...       ...     ...  ...
434          435         0       1  Silvey, Mr. William Baird
435          436         1       1  Carter, Miss. Lucile Polk
436          437         0       3  Ford, Miss. Doolina Margaret "Daisy"
437          438         1       2  Richards, Mrs. Sidney (Emily Hocking)
438          439         0       1  Fortune, Mr. Mark

439 rows × 4 columns

text_left_down
     PassengerId  Survived  Pclass  Name
0            440         0       2  Kvillner, Mr. Johan Henrik Johannesson
1            441         1       2  Hart, Mrs. Benjamin (Esther Ada Bloomfield)
2            442         0       3  Hampe, Mr. Leon
3            443         0       3  Petterson, Mr. Johan Emil
4            444         1       2  Reynaldo, Ms. Encarnacion
..           ...       ...     ...  ...
447          887         0       2  Montvila, Rev. Juozas
448          888         1       1  Graham, Miss. Margaret Edith
449          889         0       3  Johnston, Miss. Catherine Helen "Carrie"
450          890         1       1  Behr, Mr. Karl Howell
451          891         0       3  Dooley, Mr. Patrick

452 rows × 4 columns

text_right_down
        Sex   Age  SibSp  Parch        Ticket    Fare Cabin Embarked
0      male  31.0      0      0    C.A. 18723  10.500   NaN        S
1    female  45.0      1      1  F.C.C. 13529  26.250   NaN        S
2      male  20.0      0      0        345769   9.500   NaN        S
3      male  25.0      1      0        347076   7.775   NaN        S
4    female  28.0      0      0        230434  13.000   NaN        S
..      ...   ...    ...    ...           ...     ...   ...      ...
447    male  27.0      0      0        211536  13.000   NaN        S
448  female  19.0      0      0        112053  30.000   B42        S
449  female   NaN      1      2    W./C. 6607  23.450   NaN        S
450    male  26.0      0      0        111369  30.000  C148        C
451    male  32.0      0      0        370376   7.750   NaN        Q

452 rows × 8 columns

text_right_up
        Sex   Age  SibSp  Parch            Ticket      Fare        Cabin Embarked
0      male  22.0      1      0         A/5 21171    7.2500          NaN        S
1    female  38.0      1      0          PC 17599   71.2833          C85        C
2    female  26.0      0      0  STON/O2. 3101282    7.9250          NaN        S
3    female  35.0      1      0            113803   53.1000         C123        S
4      male  35.0      0      0            373450    8.0500          NaN        S
..      ...   ...    ...    ...               ...       ...          ...      ...
434    male  50.0      1      0             13507   55.9000          E44        S
435  female  14.0      1      2            113760  120.0000      B96 B98        S
436  female  21.0      2      2        W./C. 6608   34.3750          NaN        S
437  female  24.0      2      3             29106   18.7500          NaN        S
438    male  64.0      1      4             19950  263.0000  C23 C25 C27        S

439 rows × 8 columns

[Hint] Combine this with the train.csv data we loaded earlier and make a rough guess at what the data above is.

Together, the four tables make up train.csv.
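Before merging, it helps to sanity-check that the shapes line up; a minimal check using the four DataFrames loaded above:

# 439 + 452 = 891 rows and 4 + 8 = 12 columns, which matches train.csv
print(text_left_up.shape, text_right_up.shape)      # (439, 4) (439, 8)
print(text_left_down.shape, text_right_down.shape)  # (452, 4) (452, 8)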

2.4.2 Task 2: Use the concat method to merge train-left-up.csv and train-right-up.csv horizontally into one table, and save it as result_up
# Write your code here

# axis=1 concatenates along columns, so the number of rows stays the same (the default is axis=0)
result_up = pd.concat([text_left_up,text_right_up],axis=1)
result_up
     PassengerId  Survived  Pclass  Name                                                Sex     Age   SibSp  Parch  Ticket            Fare      Cabin        Embarked
0    1            0         3       Braund, Mr. Owen Harris                             male    22.0  1      0      A/5 21171         7.2500    NaN          S
1    2            1         1       Cumings, Mrs. John Bradley (Florence Briggs Th...  female  38.0  1      0      PC 17599          71.2833   C85          C
2    3            1         3       Heikkinen, Miss. Laina                              female  26.0  0      0      STON/O2. 3101282  7.9250    NaN          S
3    4            1         1       Futrelle, Mrs. Jacques Heath (Lily May Peel)        female  35.0  1      0      113803            53.1000   C123         S
4    5            0         3       Allen, Mr. William Henry                            male    35.0  0      0      373450            8.0500    NaN          S
..   ...          ...       ...     ...                                                 ...     ...   ...    ...    ...               ...       ...          ...
434  435          0         1       Silvey, Mr. William Baird                           male    50.0  1      0      13507             55.9000   E44          S
435  436          1         1       Carter, Miss. Lucile Polk                           female  14.0  1      2      113760            120.0000  B96 B98      S
436  437          0         3       Ford, Miss. Doolina Margaret "Daisy"                female  21.0  2      2      W./C. 6608        34.3750   NaN          S
437  438          1         2       Richards, Mrs. Sidney (Emily Hocking)               female  24.0  2      3      29106             18.7500   NaN          S
438  439          0         1       Fortune, Mr. Mark                                   male    64.0  1      4      19950             263.0000  C23 C25 C27  S

439 rows × 12 columns

2.4.3 Task 3: Use the concat method to merge train-left-down and train-right-down horizontally into one table and save it as result_down, then merge result_up and result_down vertically into result
# Write your code here
list_down = [text_left_down, text_right_down]
result_down = pd.concat(list_down,axis=1)

result = pd.concat([result_up,result_down])
result.shape
(891, 12)
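Note that a vertical concat keeps the original row labels, so the index of result repeats (0..438 followed by 0..451). If a clean 0..890 index is preferred, ignore_index=True (or reset_index) handles it; a small optional sketch (result_reindexed is just an illustrative name):

# Same stacking as above, but with the rows renumbered 0..890
result_reindexed = pd.concat([result_up, result_down], ignore_index=True)
result_reindexed.shape  # still (891, 12)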
2.4.4 Task 4: Use the DataFrame methods join and append to accomplish Tasks 2 and 3
# Write your code here

result_up = text_left_up.join(text_right_up)
result_down = text_left_down.join(text_right_down)
result = result_up.append(result_down)
result
     PassengerId  Survived  Pclass  Name                                                Sex     Age   SibSp  Parch  Ticket            Fare     Cabin  Embarked
0    1            0         3       Braund, Mr. Owen Harris                             male    22.0  1      0      A/5 21171         7.2500   NaN    S
1    2            1         1       Cumings, Mrs. John Bradley (Florence Briggs Th...  female  38.0  1      0      PC 17599          71.2833  C85    C
2    3            1         3       Heikkinen, Miss. Laina                              female  26.0  0      0      STON/O2. 3101282  7.9250   NaN    S
3    4            1         1       Futrelle, Mrs. Jacques Heath (Lily May Peel)        female  35.0  1      0      113803            53.1000  C123   S
4    5            0         3       Allen, Mr. William Henry                            male    35.0  0      0      373450            8.0500   NaN    S
..   ...          ...       ...     ...                                                 ...     ...   ...    ...    ...               ...      ...    ...
447  887          0         2       Montvila, Rev. Juozas                               male    27.0  0      0      211536            13.0000  NaN    S
448  888          1         1       Graham, Miss. Margaret Edith                        female  19.0  0      0      112053            30.0000  B42    S
449  889          0         3       Johnston, Miss. Catherine Helen "Carrie"            female  NaN   1      2      W./C. 6607        23.4500  NaN    S
450  890          1         1       Behr, Mr. Karl Howell                               male    26.0  0      0      111369            30.0000  C148   C
451  891          0         3       Dooley, Mr. Patrick                                 male    32.0  0      0      370376            7.7500   NaN    Q

891 rows × 12 columns
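A side note: DataFrame.append was deprecated in pandas 1.4 and removed in pandas 2.0, so on recent versions the row-wise step can be done with pd.concat instead; a minimal sketch:

# Equivalent to result_up.append(result_down) on newer pandas releases
result = pd.concat([result_up, result_down])
result.shape  # (891, 12)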

2.4.5 Task 5: Use the Pandas merge method and the DataFrame append method to accomplish Tasks 2 and 3
# Write your code here

result_up = pd.merge(text_left_up,text_right_up,left_index=True,right_index=True)
result_down = pd.merge(text_left_down,text_right_down,left_index=True,right_index=True)
result = result_up.append(result_down)
result
     PassengerId  Survived  Pclass  Name                                                Sex     Age   SibSp  Parch  Ticket            Fare     Cabin  Embarked
0    1            0         3       Braund, Mr. Owen Harris                             male    22.0  1      0      A/5 21171         7.2500   NaN    S
1    2            1         1       Cumings, Mrs. John Bradley (Florence Briggs Th...  female  38.0  1      0      PC 17599          71.2833  C85    C
2    3            1         3       Heikkinen, Miss. Laina                              female  26.0  0      0      STON/O2. 3101282  7.9250   NaN    S
3    4            1         1       Futrelle, Mrs. Jacques Heath (Lily May Peel)        female  35.0  1      0      113803            53.1000  C123   S
4    5            0         3       Allen, Mr. William Henry                            male    35.0  0      0      373450            8.0500   NaN    S
..   ...          ...       ...     ...                                                 ...     ...   ...    ...    ...               ...      ...    ...
447  887          0         2       Montvila, Rev. Juozas                               male    27.0  0      0      211536            13.0000  NaN    S
448  888          1         1       Graham, Miss. Margaret Edith                        female  19.0  0      0      112053            30.0000  B42    S
449  889          0         3       Johnston, Miss. Catherine Helen "Carrie"            female  NaN   1      2      W./C. 6607        23.4500  NaN    S
450  890          1         1       Behr, Mr. Karl Howell                               male    26.0  0      0      111369            30.0000  C148   C
451  891          0         3       Dooley, Mr. Patrick                                 male    32.0  0      0      370376            7.7500   NaN    Q

891 rows × 12 columns

[Think] Compare the similarities and differences between the merge, join and concat methods. Also think about why Tasks 4 and 5 both require the DataFrame append method: if only merge or join were allowed, could Tasks 4 and 5 still be completed?
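A rough sketch of the differences (using the DataFrames loaded above): concat simply stacks objects along an axis, join aligns the caller with another DataFrame on the index by default, and merge performs database-style joins on columns and/or index levels. Both join and merge only combine tables sideways, which is why Tasks 4 and 5 still need append (or concat) for the final row-wise step.

# concat: stack along an axis (axis=0 adds rows, axis=1 adds columns)
pd.concat([text_left_up, text_right_up], axis=1)
# join: align the calling DataFrame with another one on the index
text_left_up.join(text_right_up)
# merge: database-style join, here explicitly on the row index
pd.merge(text_left_up, text_right_up, left_index=True, right_index=True)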

2.4.6 Task 6: Save the completed data as result.csv
# Write your code here

result.to_csv('result.csv')
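If the extra Unnamed: 0 column that appears when the file is read back (see the next section) is not wanted, the row index can be dropped when saving; a small optional sketch (the notebook below keeps the default, which is why that column shows up):

# Saving without the row index avoids the "Unnamed: 0" column on re-read
result.to_csv('result.csv', index=False)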

2.5 Looking at the data from a different angle

2.5.1 Task 1: Turn our data into Series-type data
# Write your code here
text = pd.read_csv('result.csv')
text

     Unnamed: 0  PassengerId  Survived  Pclass  Name                                                Sex     Age   SibSp  Parch  Ticket            Fare     Cabin  Embarked
0    0           1            0         3       Braund, Mr. Owen Harris                             male    22.0  1.0    0.0    A/5 21171         7.2500   NaN    S
1    1           2            1         1       Cumings, Mrs. John Bradley (Florence Briggs Th...  female  38.0  1.0    0.0    PC 17599          71.2833  C85    C
2    2           3            1         3       Heikkinen, Miss. Laina                              female  26.0  0.0    0.0    STON/O2. 3101282  7.9250   NaN    S
3    3           4            1         1       Futrelle, Mrs. Jacques Heath (Lily May Peel)        female  35.0  1.0    0.0    113803            53.1000  C123   S
4    4           5            0         3       Allen, Mr. William Henry                            male    35.0  0.0    0.0    373450            8.0500   NaN    S
..   ...         ...          ...       ...     ...                                                 ...     ...   ...    ...    ...               ...      ...    ...
886  447         887          0         2       Montvila, Rev. Juozas                               male    27.0  0.0    0.0    211536            13.0000  NaN    S
887  448         888          1         1       Graham, Miss. Margaret Edith                        female  19.0  0.0    0.0    112053            30.0000  B42    S
888  449         889          0         3       Johnston, Miss. Catherine Helen "Carrie"            female  NaN   1.0    2.0    W./C. 6607        23.4500  NaN    S
889  450         890          1         1       Behr, Mr. Karl Howell                               male    26.0  0.0    0.0    111369            30.0000  C148   C
890  451         891          0         3       Dooley, Mr. Patrick                                 male    32.0  0.0    0.0    370376            7.7500   NaN    Q

891 rows × 13 columns

# Write your code here

# stack() pivots the columns into an inner index level, turning the DataFrame into a Series
unit_result = text.stack().head(20)
unit_result.head()
0  Unnamed: 0                           0
   PassengerId                          1
   Survived                             0
   Pclass                               3
   Name           Braund, Mr. Owen Harris
dtype: object
# Save the stacked Series and read it back to check the result
unit_result.to_csv('unit_result.csv')

test = pd.read_csv('unit_result.csv')
test.head()
   Unnamed: 0   Unnamed: 1                        0
0           0   Unnamed: 0                        0
1           0  PassengerId                        1
2           0     Survived                        0
3           0       Pclass                        3
4           0         Name  Braund, Mr. Owen Harris
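stack() can be reversed with unstack(), which pivots the innermost index level back into columns; a minimal sketch:

# Turn the stacked Series back into a DataFrame (columns come from the inner index level)
unit_result.unstack().head()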

Summary

  • merge / join / concat

Review: we have already covered the Pandas basics, data cleaning, and (above) data merging. In this part we continue with data restructuring, which still belongs to the data-understanding (preparation) stage.

Before starting, import the numpy and pandas packages and load the data.
# Import the basic libraries
import pandas as pd
import numpy as np
# Load result.csv, the file saved in the previous task, and inspect it
df = pd.read_csv("result.csv")
print(df.shape)
df.head()
(891, 13)
   Unnamed: 0  PassengerId  Survived  Pclass  Name                                                Sex     Age   SibSp  Parch  Ticket            Fare     Cabin  Embarked
0  0           1            0         3       Braund, Mr. Owen Harris                             male    22.0  1.0    0.0    A/5 21171         7.2500   NaN    S
1  1           2            1         1       Cumings, Mrs. John Bradley (Florence Briggs Th...  female  38.0  1.0    0.0    PC 17599          71.2833  C85    C
2  2           3            1         3       Heikkinen, Miss. Laina                              female  26.0  0.0    0.0    STON/O2. 3101282  7.9250   NaN    S
3  3           4            1         1       Futrelle, Mrs. Jacques Heath (Lily May Peel)        female  35.0  1.0    0.0    113803            53.1000  C123   S
4  4           5            0         3       Allen, Mr. William Henry                            male    35.0  0.0    0.0    373450            8.0500   NaN    S


Chapter 2: Data Restructuring (Part 2)

Part 1: Data aggregation and computation

2.6 Applying the data

2.6.1 Task 1: Learn about the GroupBy mechanism through the textbook Python for Data Analysis (p. 303), Google, or any other resource

# Write your notes here
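A brief note on the mechanism (my own sketch rather than a quote from the book): groupby follows a split-apply-combine pattern. The data is first split into groups by one or more keys, a function is applied to each group, and the per-group results are combined into a Series or DataFrame.

# split: build a GroupBy object; nothing is computed yet
grouped = df.groupby('Sex')
# apply + combine: aggregate each group and combine the results into one Series
grouped['Fare'].mean()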

2.6.2 Task 2: Compute the average fare for male and female passengers on the Titanic

# Write your code here

sex_Fare = df['Fare'].groupby(df['Sex'])  # group the Fare column by sex
means = sex_Fare.mean()
print(type(sex_Fare))
print(type(means))
means
<class 'pandas.core.groupby.generic.SeriesGroupBy'>
<class 'pandas.core.series.Series'>
Sex
female    44.479818
male      25.523893
Name: Fare, dtype: float64
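An equivalent, slightly more compact form groups the whole DataFrame and then selects the column; a small sketch:

# Equivalent to df['Fare'].groupby(df['Sex']).mean()
df.groupby('Sex')['Fare'].mean()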

Now that the GroupBy mechanism is clear, we can apply it to a series of operations to reach our goals.

The following tasks help build familiarity with GroupBy.

2.6.3 Task 3: Count the numbers of male and female survivors on the Titanic

# Write your code here

# Group by sex and look at the survivor counts; the survivor count is just the sum of the Survived column
Survived_count_sex = df['Survived'].groupby(df['Sex']).sum()
Survived_count_sex
Sex
female    233
male      109
Name: Survived, dtype: int64
df.groupby(df['Sex']).sum()
        Unnamed: 0  PassengerId  Survived  Pclass       Age  SibSp  Parch        Fare
Sex
female       71374       135343       233     678   7286.00  218.0  204.0  13966.6628
male        126693       262043       109    1379  13919.17  248.0  136.0  14727.2865
# Sex distribution of the passengers
df.groupby(df['Sex']).size()  # size() and count() count entries; sum() adds values up
Sex
female    314
male      577
dtype: int64
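For a single column, value_counts gives the same head count without an explicit groupby; a small equivalent sketch:

# Same counts as df.groupby(df['Sex']).size(), sorted in descending order
df['Sex'].value_counts()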

2.6.4 Task 4: Compute the number of survivors in each cabin class

# Write your code here

# Group by cabin class (Pclass) and sum the Survived column
df['Survived'].groupby(df['Pclass']).sum()
Pclass
1    136
2     87
3    119
Name: Survived, dtype: int64

[Hint] In the Survived column, a passenger who survived is recorded as 1 and a passenger who died as 0.

[Think] From a data-analysis point of view, what conclusions can be drawn from the statistics above?

# Number of passengers in each cabin class
df['Survived'].groupby(df['Pclass']).size()
Pclass
1    216
2    184
3    491
Name: Survived, dtype: int64
# Thoughts

# From the size() passenger counts and the sum() survivor counts we can infer that:
# - the survival rate for women is higher than for men
# - among the Pclass levels, class 3 has a very high death rate
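Since Survived is coded as 0/1, the per-class survival rate can also be read directly off the group mean; a small sketch (the values follow from the counts above, e.g. 136/216 ≈ 0.63 for class 1):

# Survival rate per class = survivors / passengers in that class
df.groupby('Pclass')['Survived'].mean()
# roughly 0.63 for class 1, 0.47 for class 2, 0.24 for class 3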

[Think] The calculations from Task 2 and Task 3 can be carried out at the same time with the agg() function, and the rename function can be used to change the column names. Can you write this out following the hint?

# Thoughts

# Mean fare and survivor count by sex in a single agg() call, with rename() fixing the column names
df.groupby('Sex').agg({'Fare': 'mean', 'Survived': 'sum'}).rename(
    columns={'Fare': 'mean_fare', 'Survived': 'survived_count'})

        mean_fare  survived_count
Sex
female  44.479818             233
male    25.523893             109
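On pandas 0.25 and later, named aggregation expresses the same thing without a separate rename call; a sketch:

# Named aggregation: new_column_name=(input_column, aggregation_function)
df.groupby('Sex').agg(mean_fare=('Fare', 'mean'), survived_count=('Survived', 'sum'))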

2.6.5 Task 5: Compute the average fare paid at different ages within each ticket class

# Write your code here

df.groupby(['Pclass','Age'])['Fare'].mean()
Pclass  Age  
1       0.92     151.5500
        2.00     151.5500
        4.00      81.8583
        11.00    120.0000
        14.00    120.0000
                   ...   
3       61.00      6.2375
        63.00      9.5875
        65.00      7.7500
        70.50      7.7500
        74.00      7.7750
Name: Fare, Length: 182, dtype: float64
# Same result as above
df['Fare'].groupby([df['Pclass'],df['Age']]).mean()
Pclass  Age  
1       0.92     151.5500
        2.00     151.5500
        4.00      81.8583
        11.00    120.0000
        14.00    120.0000
                   ...   
3       61.00      6.2375
        63.00      9.5875
        65.00      7.7500
        70.50      7.7500
        74.00      7.7750
Name: Fare, Length: 182, dtype: float64
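Grouping on the raw Age column produces 182 very small groups. If coarser age bands are more useful, the ages can be binned first with pd.cut; a sketch (the bin edges here are just an illustrative choice, not part of the task):

# Bin ages into bands, then compute the mean fare per class and age band
age_band = pd.cut(df['Age'], bins=[0, 18, 40, 60, 100])
df.groupby(['Pclass', age_band])['Fare'].mean()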

2.6.6 Task 6: Merge the data from Tasks 2 and 3 and save the result to sex_fare_survived.csv

# Write your code here
result = pd.merge(means, Survived_count_sex, on='Sex')  # merge on the shared Sex index
result.to_csv('sex_fare_survived.csv')
result

             Fare  Survived
Sex
female  44.479818       233
male    25.523893       109
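Because both Series share the same Sex index, pd.concat along axis=1 would produce the same two-column table; a small equivalent sketch:

# Align the two Series on their common Sex index
pd.concat([means, Survived_count_sex], axis=1)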

2.6.7 Task 7: Work out the total number of survivors at each age, then find the age with the most survivors, and finally compute the rate for that highest survivor count (number of survivors / total number of people)

# Write your code here

# Total number of survivors at each age: group Survived by Age and sum
survived_age = df['Survived'].groupby(df['Age']).sum()
survived_age
Age
0.42     1
0.67     1
0.75     2
0.83     2
0.92     1
        ..
70.00    0
70.50    0
71.00    0
74.00    0
80.00    1
Name: Survived, Length: 88, dtype: int64
# Write your code here

# Find the age with the highest number of survivors
# survived_age.values holds the values of the Series
survived_age[survived_age.values==survived_age.max()]
Age
24.0    15
Name: Survived, dtype: int64
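idxmax() returns the index label of the maximum directly, which is a more compact way to get this age; an equivalent sketch:

# Age with the largest number of survivors (24.0 here)
survived_age.idxmax()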
# Write your code here

# Share of all survivors accounted for by the 24-year-olds
print(survived_age.max() / df['Survived'].sum())
0.043859649122807015

# Proportion of survivors among all passengers
df['Survived'].sum() / df['Survived'].count()
0.3838383838383838

Summary of this section

    1. How to use groupby()
    • df.groupby()
    • df[''].groupby()
