pandas provides two kinds of joining:
- Concatenation: pd.concat, pd.append
- Merging: pd.merge, pd.join
In [1]:
import numpy as np
import pandas as pd
from pandas import Series,DataFrame
0. Review: concatenation in NumPy
============================================
Exercise 12:
- Generate two 3x3 matrices and concatenate them along each of the two axes
============================================
In [2]:
n1 = np.random.randint(0,9,size = (3,3))
n2 = np.random.randint(-9,0,size = (3,3))
In [3]:
display(n1,n2)
np.concatenate([n1,n2])
array([[7, 4, 8],
[3, 2, 2],
[1, 4, 7]])
array([[-3, -5, -3],
[-4, -2, -3],
[-7, -2, -5]])
Out[3]:
array([[ 7, 4, 8],
[ 3, 2, 2],
[ 1, 4, 7],
[-3, -5, -3],
[-4, -2, -3],
[-7, -2, -5]])
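Exercise 12 also asks for the second dimension; the axis=1 (column-wise) case is a one-line variation on the cell above. A minimal sketch:

```python
import numpy as np

n1 = np.random.randint(0, 9, size=(3, 3))
n2 = np.random.randint(-9, 0, size=(3, 3))

# axis=1 stacks column-wise: two (3,3) arrays become one (3,6) array
h = np.concatenate([n1, n2], axis=1)
print(h.shape)  # (3, 6)
```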
To make the examples easier to follow, we first define a function that generates a DataFrame:
In [4]:
def make_df(inds,cols):
    #each dict key becomes a column name
    dic = {key:[key+str(i) for i in inds] for key in cols}
    df = DataFrame(dic,index = inds)
    return df
In [5]:
make_df([1,2],list('AB'))
Out[5]:
| A | B |
1 | A1 | B1 |
2 | A2 | B2 |
1. Concatenation with pd.concat()
pandas provides the pd.concat function, which is similar to np.concatenate but takes more parameters (this is an older signature; join_axes has since been removed):
pd.concat(objs, axis=0, join='outer', join_axes=None, ignore_index=False,
keys=None, levels=None, names=None, verify_integrity=False,
copy=True)
1) Simple concatenation
As with np.concatenate, rows are stacked by default (axis=0)
In [6]:
df1 = make_df([1,2],list('AB'))
df2 = make_df([3,4],list('AB'))
In [7]:
display(df1,df2)
pd.concat([df1,df2])
| A | B |
1 | A1 | B1 |
2 | A2 | B2 |
| A | B |
3 | A3 | B3 |
4 | A4 | B4 |
Out[7]:
| A | B |
1 | A1 | B1 |
2 | A2 | B2 |
3 | A3 | B3 |
4 | A4 | B4 |
The concatenation direction can be changed by setting axis
In [8]:
pd.concat((df1,df2),axis = 1)
Out[8]:
| A | B | A | B |
1 | A1 | B1 | NaN | NaN |
2 | A2 | B2 | NaN | NaN |
3 | NaN | NaN | A3 | B3 |
4 | NaN | NaN | A4 | B4 |
Note that index values may repeat after concatenation
In [9]:
df3 = make_df([0,1,2],list('AB'))
df4 = make_df([1,2,3],list('AB'))
In [10]:
df5 = pd.concat([df3,df4])
df5
Out[10]:
| A | B |
0 | A0 | B0 |
1 | A1 | B1 |
2 | A2 | B2 |
1 | A1 | B1 |
2 | A2 | B2 |
3 | A3 | B3 |
In [11]:
df5.loc[[1,2]]
Out[11]:
| A | B |
1 | A1 | B1 |
1 | A1 | B1 |
2 | A2 | B2 |
2 | A2 | B2 |
In [12]:
#duplicate column names (attributes) are discouraged; duplicate row labels are fine (e.g. people's names)
df = DataFrame(np.random.randint(0,10,size = (2,2)),columns = ['张三','张三'],index = ['A','B'])
df
Out[12]:
| 张三 | 张三 |
A | 8 | 2 |
B | 9 | 8 |
You can also choose to discard the original index with ignore_index and re-index the result
In [13]:
#ignore_index rebuilds a fresh integer index starting from 0 (incrementing by 1)
pd.concat([df3,df4],ignore_index=True)
Out[13]:
| A | B |
0 | A0 | B0 |
1 | A1 | B1 |
2 | A2 | B2 |
3 | A1 | B1 |
4 | A2 | B2 |
5 | A3 | B3 |
Or build a hierarchical index with keys:
concat([x,y],keys=['x','y'])
In [14]:
#keys=['x','y'] creates the outer index level; keys make the combined data easier to read
x = make_df(['A','B'],list('XY'))
y = make_df(['a','b'],list('XY'))
pd.concat([x,y],keys = ['x','y'])
Out[14]:
|   | X | Y |
x | A | XA | YA |
  | B | XB | YB |
y | a | Xa | Ya |
  | b | Xb | Yb |
In [15]:
#the same works with the df1 and df2 created earlier
pd.concat([df1,df2],keys=['x','y'])
Out[15]:
|   | A | B |
x | 1 | A1 | B1 |
  | 2 | A2 | B2 |
y | 3 | A3 | B3 |
  | 4 | A4 | B4 |
In [ ]:
#pd is the module: import pandas as pd
#df1, df2 are concrete instances
#concat is a module-level function; DataFrame also comes from pandas
============================================
Exercise 13:
Think about use cases for concatenation.
Using yesterday's material, build a midterm grade table ddd for Zhang San and Li Si
Suppose a new exam subject "Computer Science" is added; how would you implement that?
A new student Wang Laowu joins; how would you implement that?
============================================
2) Mismatched concatenation
"Mismatched" means the indices along the concatenation dimension differ: the column indices differ for vertical concatenation, or the row indices differ for horizontal concatenation
In [16]:
df7 = make_df([1,2,3],list('AB'))
df8 = make_df([3,4],list('BCD'))
display(df7,df8)
| A | B |
1 | A1 | B1 |
2 | A2 | B2 |
3 | A3 | B3 |
| B | C | D |
3 | B3 | C3 | D3 |
4 | B4 | C4 | D4 |
In [17]:
df9 = make_df([1,2,3],list('ABC'))
df10 = make_df([4,5,6],list('BCD'))
display(df9,df10)
| A | B | C |
1 | A1 | B1 | C1 |
2 | A2 | B2 | C2 |
3 | A3 | B3 | C3 |
| B | C | D |
4 | B4 | C4 | D4 |
5 | B5 | C5 | D5 |
6 | B6 | C6 | D6 |
There are 3 join modes:
- Outer join: fill with NaN (the default)
- Inner join: keep only matching labels
- Join along a specified axis with join_axes (removed in newer pandas)
In [18]:
#outer join: fill with NaN (the default)
pd.concat([df7,df8])
Out[18]:
| A | B | C | D |
1 | A1 | B1 | NaN | NaN |
2 | A2 | B2 | NaN | NaN |
3 | A3 | B3 | NaN | NaN |
3 | NaN | B3 | C3 | D3 |
4 | NaN | B4 | C4 | D4 |
In [19]:
#inner join: keep only matching labels
pd.concat((df7,df8),join = 'inner',axis = 1)
Out[19]:
| A | B | B | C | D |
3 | A3 | B3 | B3 | C3 | D3 |
In [20]:
#join_axes uses one DataFrame's column index as the result's column index
pd.concat((df9,df10),join_axes = [df10.columns])
#newer pandas versions have removed join_axes
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-20-d4b5758fa6e1> in <module>
      1 #join_axes uses one DataFrame's column index as the result's column index
----> 2 pd.concat((df9,df10),join_axes = [df10.columns])
      3 #newer pandas versions have removed join_axes
TypeError: concat() got an unexpected keyword argument 'join_axes'
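Since join_axes is gone, the documented replacement is to concatenate and then reindex the columns. A minimal sketch, rebuilding df9 and df10 as make_df would produce them:

```python
import pandas as pd
from pandas import DataFrame

# rebuild df9/df10 as produced by make_df above
df9 = DataFrame({'A': ['A1', 'A2', 'A3'], 'B': ['B1', 'B2', 'B3'],
                 'C': ['C1', 'C2', 'C3']}, index=[1, 2, 3])
df10 = DataFrame({'B': ['B4', 'B5', 'B6'], 'C': ['C4', 'C5', 'C6'],
                  'D': ['D4', 'D5', 'D6']}, index=[4, 5, 6])

# equivalent of the removed join_axes=[df10.columns]: keep only df10's columns
result = pd.concat([df9, df10]).reindex(columns=df10.columns)
print(result.columns.tolist())  # ['B', 'C', 'D']
```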
============================================
Exercise 14:
Suppose the final-exam table ddd2 has no scores for Zhang San, only for Li Si, Wang Laowu, and Zhao Xiaoliu; concatenate it in several ways
============================================
3) Appending with the append() function
Appending at the end is so common that there is a dedicated function, append (note: deprecated in pandas 1.4 and removed in 2.0; prefer pd.concat in new code)
In [21]:
df11 = make_df([0,1,2,3,4],['大众','福克斯'])
df12 = make_df([5,6,7,8,9],['大众','福克斯'])
display(df11,df12)
| 大众 | 福克斯 |
0 | 大众0 | 福克斯0 |
1 | 大众1 | 福克斯1 |
2 | 大众2 | 福克斯2 |
3 | 大众3 | 福克斯3 |
4 | 大众4 | 福克斯4 |
| 大众 | 福克斯 |
5 | 大众5 | 福克斯5 |
6 | 大众6 | 福克斯6 |
7 | 大众7 | 福克斯7 |
8 | 大众8 | 福克斯8 |
9 | 大众9 | 福克斯9 |
In [22]:
#append() is called directly on the object, which is more flexible and convenient
df11.append(df12)
Out[22]:
| 大众 | 福克斯 |
0 | 大众0 | 福克斯0 |
1 | 大众1 | 福克斯1 |
2 | 大众2 | 福克斯2 |
3 | 大众3 | 福克斯3 |
4 | 大众4 | 福克斯4 |
5 | 大众5 | 福克斯5 |
6 | 大众6 | 福克斯6 |
7 | 大众7 | 福克斯7 |
8 | 大众8 | 福克斯8 |
9 | 大众9 | 福克斯9 |
In [23]:
#append is a DataFrame method; concat is a function of the pandas module
pd.concat((df11,df12))
Out[23]:
| 大众 | 福克斯 |
0 | 大众0 | 福克斯0 |
1 | 大众1 | 福克斯1 |
2 | 大众2 | 福克斯2 |
3 | 大众3 | 福克斯3 |
4 | 大众4 | 福克斯4 |
5 | 大众5 | 福克斯5 |
6 | 大众6 | 福克斯6 |
7 | 大众7 | 福克斯7 |
8 | 大众8 | 福克斯8 |
9 | 大众9 | 福克斯9 |
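Because DataFrame.append was removed in pandas 2.0, only the pd.concat form works in current versions. A minimal sketch with toy frames:

```python
import pandas as pd
from pandas import DataFrame

df11 = DataFrame({'A': [1, 2]})
df12 = DataFrame({'A': [3, 4]})

# in pandas >= 2.0, pd.concat replaces the removed df11.append(df12)
out = pd.concat([df11, df12], ignore_index=True)
print(out['A'].tolist())  # [1, 2, 3, 4]
```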
============================================
Exercise 15:
Build a final-exam grade table ddd3 containing only Zhang San, Li Si, and Wang Laowu, then use append() to concatenate it with the midterm table ddd
============================================
2. Merging with pd.merge()
merge differs from concat in that merge joins rows based on a shared column (or row index)
When merging with pd.merge(), the columns that share the same name in both frames are automatically used as the key.
Note that the order of elements within each column does not need to match
1) One-to-one merge
In [27]:
#merge joins rows on equal key values
df21 = DataFrame({'id':[1,2,3],'name':['张三','李四','Chales'],'age':[22,21,25]})
df22 = DataFrame({'id':[2,3,4],'sex':['男','男','女'],'group':['sale','search','service']})
display(df21,df22)
| id | name | age |
0 | 1 | 张三 | 22 |
1 | 2 | 李四 | 21 |
2 | 3 | Chales | 25 |
| id | sex | group |
0 | 2 | 男 | sale |
1 | 3 | 男 | search |
2 | 4 | 女 | service |
In [28]:
df21.merge(df22)
Out[28]:
| id | name | age | sex | group |
0 | 2 | 李四 | 21 | 男 | sale |
1 | 3 | Chales | 25 | 男 | search |
In [30]:
#compare with joining via concat
pd.concat([df21,df22],axis = 1)
Out[30]:
| id | name | age | id | sex | group |
0 | 1 | 张三 | 22 | 2 | 男 | sale |
1 | 2 | 李四 | 21 | 3 | 男 | search |
2 | 3 | Chales | 25 | 4 | 女 | service |
2) Many-to-one merge
In [31]:
#many-to-one merge
df23 = DataFrame({'id':[1,2,2],'name':['张三','李四','Chales'],'age':[22,21,25]})
df24 = DataFrame({'id':[2,3,4],'sex':['男','男','女'],'group':['sale','search','service']})
display(df23,df24)
| id | name | age |
0 | 1 | 张三 | 22 |
1 | 2 | 李四 | 21 |
2 | 2 | Chales | 25 |
| id | sex | group |
0 | 2 | 男 | sale |
1 | 3 | 男 | search |
2 | 4 | 女 | service |
In [32]:
#note the rows with id 2
df23.merge(df24)
Out[32]:
| id | name | age | sex | group |
0 | 2 | 李四 | 21 | 男 | sale |
1 | 2 | Chales | 25 | 男 | sale |
3) Many-to-many merge
In [36]:
df25 = DataFrame({'name':['张三','李四','张三'],'age':[22,21,25],'salary':[10000,12000,20000]})
df26 = DataFrame({'name':['张三','张三','凡凡'],'sex':['男','男','女'],'group':['sale','search','service']})
display(df25,df26)
| name | age | salary |
0 | 张三 | 22 | 10000 |
1 | 李四 | 21 | 12000 |
2 | 张三 | 25 | 20000 |
| name | sex | group |
0 | 张三 | 男 | sale |
1 | 张三 | 男 | search |
2 | 凡凡 | 女 | service |
In [37]:
#a many-to-many merge keeps every pairing: a Cartesian product within each key
df25.merge(df26)
Out[37]:
| name | age | salary | sex | group |
0 | 张三 | 22 | 10000 | 男 | sale |
1 | 张三 | 22 | 10000 | 男 | search |
2 | 张三 | 25 | 20000 | 男 | sale |
3 | 张三 | 25 | 20000 | 男 | search |
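If such a Cartesian blow-up would be a bug rather than the intent, merge's validate= parameter ('1:1', '1:m', 'm:1', 'm:m') raises MergeError when the data violates the declared relationship. A minimal sketch with hypothetical toy frames:

```python
import pandas as pd
from pandas import DataFrame

left = DataFrame({'name': ['a', 'b', 'a'], 'v': [1, 2, 3]})
right = DataFrame({'name': ['a', 'a'], 'w': [10, 20]})

# 'm:m' permits duplicate keys on both sides
ok = left.merge(right, validate='m:m')
print(len(ok))  # 'a' appears 2x on the left and 2x on the right -> 4 rows

# 'm:1' requires unique keys on the right; the duplicated 'a' raises MergeError
try:
    left.merge(right, validate='m:1')
except pd.errors.MergeError:
    print('m:1 violated')
```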
4) Normalizing the key
- Use on= to specify explicitly which column is the key; used when the frames share more than one column name
In [38]:
df27 = DataFrame({'name':['张三','李四','张三'],'age':[22,21,25],'salary':[10000,12000,20000]})
df28 = DataFrame({'name':['张三','张三','凡凡'],'age':[21,18,29],'group':['sale','search','service']})
display(df27,df28)
| name | age | salary |
0 | 张三 | 22 | 10000 |
1 | 李四 | 21 | 12000 |
2 | 张三 | 25 | 20000 |
| name | age | group |
0 | 张三 | 21 | sale |
1 | 张三 | 18 | search |
2 | 凡凡 | 29 | service |
In [39]:
#suffixes defaults to _x and _y
df27.merge(df28,on = 'name',suffixes=['_a','_b'])
Out[39]:
| name | age_a | salary | age_b | group |
0 | 张三 | 22 | 10000 | 21 | sale |
1 | 张三 | 22 | 10000 | 18 | search |
2 | 张三 | 25 | 20000 | 21 | sale |
3 | 张三 | 25 | 20000 | 18 | search |
- Use left_on and right_on to specify the key column on each side; used when the two sides have no common column name
In [41]:
df29 = DataFrame({'name':['张三','李四','张三'],'age':[22,21,25],'salary':[10000,12000,20000]})
df30 = DataFrame({'名字':['张三','张三','凡凡'],'年龄':[21,18,29],'group':['sale','search','service']})
display(df29,df30)
| name | age | salary |
0 | 张三 | 22 | 10000 |
1 | 李四 | 21 | 12000 |
2 | 张三 | 25 | 20000 |
| 名字 | 年龄 | group |
0 | 张三 | 21 | sale |
1 | 张三 | 18 | search |
2 | 凡凡 | 29 | service |
In [42]:
#No common columns to perform merge on
df29.merge(df30,left_on='name',right_on='名字')
Out[42]:
| name | age | salary | 名字 | 年龄 | group |
0 | 张三 | 22 | 10000 | 张三 | 21 | sale |
1 | 张三 | 22 | 10000 | 张三 | 18 | search |
2 | 张三 | 25 | 20000 | 张三 | 21 | sale |
3 | 张三 | 25 | 20000 | 张三 | 18 | search |
In [43]:
df31 = DataFrame({'name':['张三','李四','张三'],'age':[22,21,25],'salary':[10000,12000,20000]})
df32 = DataFrame({'名字':['张三','张三','凡凡'],'年龄':[21,18,29],'group':['sale','search','service']},
index = [22,21,25])
display(df31,df32)
| name | age | salary |
0 | 张三 | 22 | 10000 |
1 | 李四 | 21 | 12000 |
2 | 张三 | 25 | 20000 |
| 名字 | 年龄 | group |
22 | 张三 | 21 | sale |
21 | 张三 | 18 | search |
25 | 凡凡 | 29 | service |
In [44]:
df31.merge(df32,left_on='age',right_index=True)
Out[44]:
| name | age | salary | 名字 | 年龄 | group |
0 | 张三 | 22 | 10000 | 张三 | 21 | sale |
1 | 李四 | 21 | 12000 | 张三 | 18 | search |
2 | 张三 | 25 | 20000 | 凡凡 | 29 | service |
============================================
Exercise 16:
Suppose there are two grade tables: ddd has Zhang San, Li Si, and Wang Laowu, while ddd4 has Zhang San and Zhao Xiaoliu. How do you merge them?
What if Zhang San's name in ddd4 was mistyped as Zhang Shisan?
Practice the many-to-one and many-to-many cases on your own
Study left_index and right_index on your own
============================================
5) Inner merge and outer merge
- Inner merge: keep only keys present on both sides (the default)
In [51]:
display(df27,df28)
| name | age | salary |
0 | 张三 | 22 | 10000 |
1 | 李四 | 21 | 12000 |
2 | 张三 | 25 | 20000 |
| name | age | group |
0 | 张三 | 21 | sale |
1 | 张三 | 18 | search |
2 | 凡凡 | 29 | service |
In [52]:
#merging on both shared columns (name and age) matches no rows, so the result is empty
df27.merge(df28)
Out[52]:
| name | age | salary | group |
In [57]:
#'inner' is the default mode
df27.merge(df28,how = 'inner')
Out[57]:
| name | age | salary | group |
- Outer merge how='outer': fill with NaN
In [58]:
df27.merge(df28,how = 'outer')
Out[58]:
| name | age | salary | group |
0 | 张三 | 22 | 10000.0 | NaN |
1 | 李四 | 21 | 12000.0 | NaN |
2 | 张三 | 25 | 20000.0 | NaN |
3 | 张三 | 21 | NaN | sale |
4 | 张三 | 18 | NaN | search |
5 | 凡凡 | 29 | NaN | service |
- Left and right merges: how='left', how='right'
In [59]:
df27.merge(df28,how = 'left')
Out[59]:
| name | age | salary | group |
0 | 张三 | 22 | 10000 | NaN |
1 | 李四 | 21 | 12000 | NaN |
2 | 张三 | 25 | 20000 | NaN |
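The symmetric case, how='right', keeps every row of the right table instead. A minimal sketch with hypothetical toy frames:

```python
import pandas as pd
from pandas import DataFrame

left = DataFrame({'name': ['x', 'y'], 'salary': [1, 2]})
right = DataFrame({'name': ['y', 'z'], 'group': ['g1', 'g2']})

# how='right': all right-hand rows survive; unmatched left columns become NaN
r = left.merge(right, how='right')
print(r['name'].tolist())  # ['y', 'z']
```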
============================================
Exercise 17:
If there are only Zhang San's and Zhao Xiaoliu's scores for Chinese, math, and English, how do you merge?
Consider the use cases and merge ddd with ddd4 in several ways
============================================
6) Resolving column conflicts
When columns conflict, i.e. more than one column name is shared, use on= to specify which column is the key, together with suffixes for the conflicting column names
You can supply your own suffixes via suffixes=
In [60]:
#midterm
df41 = DataFrame({'name':['张三','李四','张三'],'degree':[120,118,149],'age':[22,21,25]})
#final exam
df42 = DataFrame({'name':['张三','张三','凡凡'],'degree':[99,97,129],'group':['sale','search','service']})
display(df41,df42)
| name | degree | age |
0 | 张三 | 120 | 22 |
1 | 李四 | 118 | 21 |
2 | 张三 | 149 | 25 |
| name | degree | group |
0 | 张三 | 99 | sale |
1 | 张三 | 97 | search |
2 | 凡凡 | 129 | service |
In [61]:
df41.merge(df42,on = 'name',suffixes=['_期中','_期末'])
Out[61]:
| name | degree_期中 | age | degree_期末 | group |
0 | 张三 | 120 | 22 | 99 | sale |
1 | 张三 | 120 | 22 | 97 | search |
2 | 张三 | 149 | 25 | 99 | sale |
3 | 张三 | 149 | 25 | 97 | search |
============================================
Exercise 18:
Suppose two different students are both named Li Si, and ddd5 and ddd6 are both grade tables for Zhang San and Li Si. How do you merge them?
============================================
Homework
3. Case study: US state population analysis
First load the files and inspect a sample of the data
In [62]:
import numpy as np
import pandas as pd
from pandas import Series,DataFrame
In [63]:
#read the data with pandas
pop = pd.read_csv(r'file:///home/ccoy/Documents/Ipython_Numpy/state_csv/state-population.csv')
areas = pd.read_csv(r'file:///home/ccoy/Documents/Ipython_Numpy/state_csv/state-areas.csv')
abb = pd.read_csv(r'file:///home/ccoy/Documents/Ipython_Numpy/state_csv/state-abbrevs.csv')
In [64]:
pop.shape
Out[64]:
(2544, 4)
In [65]:
pop.head()
Out[65]:
| state/region | ages | year | population |
0 | AL | under18 | 2012 | 1117489.0 |
1 | AL | total | 2012 | 4817528.0 |
2 | AL | under18 | 2010 | 1130966.0 |
3 | AL | total | 2010 | 4785570.0 |
4 | AL | under18 | 2011 | 1125763.0 |
In [66]:
areas.shape
Out[66]:
(52, 2)
In [67]:
abb.shape
Out[67]:
(51, 2)
Merge the pop and abbrevs DataFrames, joining on the state/region column and the abbreviation column respectively.
Use an outer merge so that no information is lost.
In [68]:
pop.head()
Out[68]:
| state/region | ages | year | population |
0 | AL | under18 | 2012 | 1117489.0 |
1 | AL | total | 2012 | 4817528.0 |
2 | AL | under18 | 2010 | 1130966.0 |
3 | AL | total | 2010 | 4785570.0 |
4 | AL | under18 | 2011 | 1125763.0 |
In [69]:
abb.head()
Out[69]:
| state | abbreviation |
0 | Alabama | AL |
1 | Alaska | AK |
2 | Arizona | AZ |
3 | Arkansas | AR |
4 | California | CA |
In [70]:
display(pop.shape,abb.shape)
(2544, 4)
(51, 2)
In [71]:
#in this scenario left == outer, because pop covers more keys than abb
#a left merge is slightly less safe than outer in general
pop_m = pop.merge(abb,left_on='state/region',right_on='abbreviation',how = 'outer')
pop_m.shape
Out[71]:
(2544, 6)
Drop the abbreviation column (axis=1)
In [72]:
pop_m.head()
Out[72]:
| state/region | ages | year | population | state | abbreviation |
0 | AL | under18 | 2012 | 1117489.0 | Alabama | AL |
1 | AL | total | 2012 | 4817528.0 | Alabama | AL |
2 | AL | under18 | 2010 | 1130966.0 | Alabama | AL |
3 | AL | total | 2010 | 4785570.0 | Alabama | AL |
4 | AL | under18 | 2011 | 1125763.0 | Alabama | AL |
In [73]:
pop_m.drop('abbreviation',axis = 1,inplace=True)
In [74]:
pop_m.head()
Out[74]:
| state/region | ages | year | population | state |
0 | AL | under18 | 2012 | 1117489.0 | Alabama |
1 | AL | total | 2012 | 4817528.0 | Alabama |
2 | AL | under18 | 2010 | 1130966.0 | Alabama |
3 | AL | total | 2010 | 4785570.0 | Alabama |
4 | AL | under18 | 2011 | 1125763.0 | Alabama |
Check which columns contain missing data.
With .isnull().any(), a column shows True as soon as it contains even one missing value.
In [75]:
pop_m.isnull().any()
Out[75]:
state/region False
ages False
year False
population True
state True
dtype: bool
#the population and state columns contain missing values
查看缺失数据
In [76]:
#row labels where any value is null
pop_m.loc[pop_m.isnull().any(axis = 1)]
Out[76]:
| state/region | ages | year | population | state |
2448 | PR | under18 | 1990 | NaN | NaN |
2449 | PR | total | 1990 | NaN | NaN |
2450 | PR | total | 1991 | NaN | NaN |
2451 | PR | under18 | 1991 | NaN | NaN |
2452 | PR | total | 1993 | NaN | NaN |
... | ... | ... | ... | ... | ... |
2539 | USA | total | 2010 | 309326295.0 | NaN |
2540 | USA | under18 | 2011 | 73902222.0 | NaN |
2541 | USA | total | 2011 | 311582564.0 | NaN |
2542 | USA | under18 | 2012 | 73708179.0 | NaN |
2543 | USA | total | 2012 | 313873685.0 | NaN |
96 rows × 5 columns
Display rows based on missingness: a row is shown when its mask value is True.
Find which state/region values leave state as NaN; unique() shows the distinct values
In [77]:
condition = pop_m['state'].isnull()
pop_m['state/region'][condition].unique()
Out[77]:
array(['PR', 'USA'], dtype=object)
In [78]:
areas
Out[78]:
| state | area (sq. mi) |
0 | Alabama | 52423 |
1 | Alaska | 656425 |
2 | Arizona | 114006 |
3 | Arkansas | 53182 |
4 | California | 163707 |
5 | Colorado | 104100 |
6 | Connecticut | 5544 |
7 | Delaware | 1954 |
8 | Florida | 65758 |
9 | Georgia | 59441 |
10 | Hawaii | 10932 |
11 | Idaho | 83574 |
12 | Illinois | 57918 |
13 | Indiana | 36420 |
14 | Iowa | 56276 |
15 | Kansas | 82282 |
16 | Kentucky | 40411 |
17 | Louisiana | 51843 |
18 | Maine | 35387 |
19 | Maryland | 12407 |
20 | Massachusetts | 10555 |
21 | Michigan | 96810 |
22 | Minnesota | 86943 |
23 | Mississippi | 48434 |
24 | Missouri | 69709 |
25 | Montana | 147046 |
26 | Nebraska | 77358 |
27 | Nevada | 110567 |
28 | New Hampshire | 9351 |
29 | New Jersey | 8722 |
30 | New Mexico | 121593 |
31 | New York | 54475 |
32 | North Carolina | 53821 |
33 | North Dakota | 70704 |
34 | Ohio | 44828 |
35 | Oklahoma | 69903 |
36 | Oregon | 98386 |
37 | Pennsylvania | 46058 |
38 | Rhode Island | 1545 |
39 | South Carolina | 32007 |
40 | South Dakota | 77121 |
41 | Tennessee | 42146 |
42 | Texas | 268601 |
43 | Utah | 84904 |
44 | Vermont | 9615 |
45 | Virginia | 42769 |
46 | Washington | 71303 |
47 | West Virginia | 24231 |
48 | Wisconsin | 65503 |
49 | Wyoming | 97818 |
50 | District of Columbia | 68 |
51 | Puerto Rico | 3515 |
Only two region codes are missing their state name.
Fill in the correct state values for these state/region codes, removing every NaN from the state column!
Remember this approach to cleaning missing (NaN) data!
In [79]:
#Puerto Rico
condition = pop_m['state/region'] == 'PR'
condition
Out[79]:
0 False
1 False
2 False
3 False
4 False
...
2539    False
2540    False
2541    False
2542    False
2543    False
Name: state/region, Length: 2544, dtype: bool
In [80]:
pop_m['state'][condition] = 'Puerto Rico'
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
"""Entry point for launching an IPython kernel.
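As the warning says, chained indexing like pop_m['state'][condition] = ... may write to a copy; the fix recommended by the pandas docs is a single .loc assignment. A minimal sketch on a toy stand-in for pop_m:

```python
import pandas as pd
from pandas import DataFrame

# toy stand-in for pop_m; .loc writes in place without SettingWithCopyWarning
pop_m = DataFrame({'state/region': ['PR', 'USA', 'AL'],
                   'state': [None, None, 'Alabama']})

condition = pop_m['state/region'] == 'PR'
pop_m.loc[condition, 'state'] = 'Puerto Rico'
print(pop_m['state'].tolist())  # ['Puerto Rico', None, 'Alabama']
```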
In [81]:
condition = pop_m['state/region'] == 'USA'
pop_m['state'][condition] = 'United States'
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
In [82]:
#the fill operation above worked
pop_m.isnull().any()
Out[82]:
state/region False
ages False
year False
population True
state False
dtype: bool
Merge in the state area data areas, using an outer merge.
Think about why an outer merge is used here.
In [83]:
pop.head()
#after merging the population DataFrame with abb, we have the full state names
#so we can now merge with the areas DataFrame
Out[83]:
| state/region | ages | year | population |
0 | AL | under18 | 2012 | 1117489.0 |
1 | AL | total | 2012 | 4817528.0 |
2 | AL | under18 | 2010 | 1130966.0 |
3 | AL | total | 2010 | 4785570.0 |
4 | AL | under18 | 2011 | 1125763.0 |
In [84]:
pop_areas_m = pop_m.merge(areas,how = 'outer')
Keep looking for columns with missing data
In [85]:
pop_areas_m.shape
Out[85]:
(2544, 6)
In [86]:
areas
Out[86]:
| state | area (sq. mi) |
0 | Alabama | 52423 |
1 | Alaska | 656425 |
2 | Arizona | 114006 |
3 | Arkansas | 53182 |
4 | California | 163707 |
5 | Colorado | 104100 |
6 | Connecticut | 5544 |
7 | Delaware | 1954 |
8 | Florida | 65758 |
9 | Georgia | 59441 |
10 | Hawaii | 10932 |
11 | Idaho | 83574 |
12 | Illinois | 57918 |
13 | Indiana | 36420 |
14 | Iowa | 56276 |
15 | Kansas | 82282 |
16 | Kentucky | 40411 |
17 | Louisiana | 51843 |
18 | Maine | 35387 |
19 | Maryland | 12407 |
20 | Massachusetts | 10555 |
21 | Michigan | 96810 |
22 | Minnesota | 86943 |
23 | Mississippi | 48434 |
24 | Missouri | 69709 |
25 | Montana | 147046 |
26 | Nebraska | 77358 |
27 | Nevada | 110567 |
28 | New Hampshire | 9351 |
29 | New Jersey | 8722 |
30 | New Mexico | 121593 |
31 | New York | 54475 |
32 | North Carolina | 53821 |
33 | North Dakota | 70704 |
34 | Ohio | 44828 |
35 | Oklahoma | 69903 |
36 | Oregon | 98386 |
37 | Pennsylvania | 46058 |
38 | Rhode Island | 1545 |
39 | South Carolina | 32007 |
40 | South Dakota | 77121 |
41 | Tennessee | 42146 |
42 | Texas | 268601 |
43 | Utah | 84904 |
44 | Vermont | 9615 |
45 | Virginia | 42769 |
46 | Washington | 71303 |
47 | West Virginia | 24231 |
48 | Wisconsin | 65503 |
49 | Wyoming | 97818 |
50 | District of Columbia | 68 |
51 | Puerto Rico | 3515 |
In [87]:
pop_areas_m.isnull().any()
Out[87]:
state/region False
ages False
year False
population True
state False
area (sq. mi) True
dtype: bool
The area (sq. mi) column has missing data; to locate the affected rows, find which state has no area value
In [88]:
cond = pop_areas_m['area (sq. mi)'].isnull()
cond
Out[88]:
0 False
1 False
2 False
3 False
4 False
...
2539 True
2540 True
2541 True
2542 True
2543 True
Name: area (sq. mi), Length: 2544, dtype: bool
In [89]:
pop_areas_m['state/region'][cond]
Out[89]:
2496 USA
2497 USA
2498 USA
2499 USA
2500 USA
2501 USA
2502 USA
2503 USA
2504 USA
2505 USA
2506 USA
2507 USA
2508 USA
2509 USA
2510 USA
2511 USA
2512 USA
2513 USA
2514 USA
2515 USA
2516 USA
2517 USA
2518 USA
2519 USA
2520 USA
2521 USA
2522 USA
2523 USA
2524 USA
2525 USA
2526 USA
2527 USA
2528 USA
2529 USA
2530 USA
2531 USA
2532 USA
2533 USA
2534 USA
2535 USA
2536 USA
2537 USA
2538 USA
2539 USA
2540 USA
2541 USA
2542 USA
2543 USA
Name: state/region, dtype: object
Drop the rows containing missing data
In [90]:
pop_areas_m.shape
Out[90]:
(2544, 6)
In [91]:
#drop rows containing missing data
pop_areas_r = pop_areas_m.dropna()
In [92]:
pop_areas_r.shape
Out[92]:
(2476, 6)
Verify that no data is missing
In [93]:
pop_areas_r.isnull().any()
Out[93]:
state/region False
ages False
year False
population False
state False
area (sq. mi) False
dtype: bool
Select the total population data for 2010 with df.query(query string)
In [94]:
pop_areas_r.head()
Out[94]:
| state/region | ages | year | population | state | area (sq. mi) |
0 | AL | under18 | 2012 | 1117489.0 | Alabama | 52423.0 |
1 | AL | total | 2012 | 4817528.0 | Alabama | 52423.0 |
2 | AL | under18 | 2010 | 1130966.0 | Alabama | 52423.0 |
3 | AL | total | 2010 | 4785570.0 | Alabama | 52423.0 |
4 | AL | under18 | 2011 | 1125763.0 | Alabama | 52423.0 |
In [95]:
t_2010 = pop_areas_r.query("ages == 'total' and year == 2010")
In [96]:
t_2010.shape
Out[96]:
(52, 6)
In [97]:
t_2010
Out[97]:
| state/region | ages | year | population | state | area (sq. mi) |
3 | AL | total | 2010 | 4785570.0 | Alabama | 52423.0 |
91 | AK | total | 2010 | 713868.0 | Alaska | 656425.0 |
101 | AZ | total | 2010 | 6408790.0 | Arizona | 114006.0 |
189 | AR | total | 2010 | 2922280.0 | Arkansas | 53182.0 |
197 | CA | total | 2010 | 37333601.0 | California | 163707.0 |
283 | CO | total | 2010 | 5048196.0 | Colorado | 104100.0 |
293 | CT | total | 2010 | 3579210.0 | Connecticut | 5544.0 |
379 | DE | total | 2010 | 899711.0 | Delaware | 1954.0 |
389 | DC | total | 2010 | 605125.0 | District of Columbia | 68.0 |
475 | FL | total | 2010 | 18846054.0 | Florida | 65758.0 |
485 | GA | total | 2010 | 9713248.0 | Georgia | 59441.0 |
570 | HI | total | 2010 | 1363731.0 | Hawaii | 10932.0 |
581 | ID | total | 2010 | 1570718.0 | Idaho | 83574.0 |
666 | IL | total | 2010 | 12839695.0 | Illinois | 57918.0 |
677 | IN | total | 2010 | 6489965.0 | Indiana | 36420.0 |
762 | IA | total | 2010 | 3050314.0 | Iowa | 56276.0 |
773 | KS | total | 2010 | 2858910.0 | Kansas | 82282.0 |
858 | KY | total | 2010 | 4347698.0 | Kentucky | 40411.0 |
869 | LA | total | 2010 | 4545392.0 | Louisiana | 51843.0 |
954 | ME | total | 2010 | 1327366.0 | Maine | 35387.0 |
965 | MD | total | 2010 | 5787193.0 | Maryland | 12407.0 |
1050 | MA | total | 2010 | 6563263.0 | Massachusetts | 10555.0 |
1061 | MI | total | 2010 | 9876149.0 | Michigan | 96810.0 |
1146 | MN | total | 2010 | 5310337.0 | Minnesota | 86943.0 |
1157 | MS | total | 2010 | 2970047.0 | Mississippi | 48434.0 |
1242 | MO | total | 2010 | 5996063.0 | Missouri | 69709.0 |
1253 | MT | total | 2010 | 990527.0 | Montana | 147046.0 |
1338 | NE | total | 2010 | 1829838.0 | Nebraska | 77358.0 |
1349 | NV | total | 2010 | 2703230.0 | Nevada | 110567.0 |
1434 | NH | total | 2010 | 1316614.0 | New Hampshire | 9351.0 |
1445 | NJ | total | 2010 | 8802707.0 | New Jersey | 8722.0 |
1530 | NM | total | 2010 | 2064982.0 | New Mexico | 121593.0 |
1541 | NY | total | 2010 | 19398228.0 | New York | 54475.0 |
1626 | NC | total | 2010 | 9559533.0 | North Carolina | 53821.0 |
1637 | ND | total | 2010 | 674344.0 | North Dakota | 70704.0 |
1722 | OH | total | 2010 | 11545435.0 | Ohio | 44828.0 |
1733 | OK | total | 2010 | 3759263.0 | Oklahoma | 69903.0 |
1818 | OR | total | 2010 | 3837208.0 | Oregon | 98386.0 |
1829 | PA | total | 2010 | 12710472.0 | Pennsylvania | 46058.0 |
1914 | RI | total | 2010 | 1052669.0 | Rhode Island | 1545.0 |
1925 | SC | total | 2010 | 4636361.0 | South Carolina | 32007.0 |
2010 | SD | total | 2010 | 816211.0 | South Dakota | 77121.0 |
2021 | TN | total | 2010 | 6356683.0 | Tennessee | 42146.0 |
2106 | TX | total | 2010 | 25245178.0 | Texas | 268601.0 |
2117 | UT | total | 2010 | 2774424.0 | Utah | 84904.0 |
2202 | VT | total | 2010 | 625793.0 | Vermont | 9615.0 |
2213 | VA | total | 2010 | 8024417.0 | Virginia | 42769.0 |
2298 | WA | total | 2010 | 6742256.0 | Washington | 71303.0 |
2309 | WV | total | 2010 | 1854146.0 | West Virginia | 24231.0 |
2394 | WI | total | 2010 | 5689060.0 | Wisconsin | 65503.0 |
2405 | WY | total | 2010 | 564222.0 | Wyoming | 97818.0 |
2490 | PR | total | 2010 | 3721208.0 | Puerto Rico | 3515.0 |
Post-process the query result by making the state column the new row index: set_index
In [98]:
t_2010.set_index('state',inplace=True)
In [99]:
t_2010
Out[99]:
| state/region | ages | year | population | area (sq. mi) |
state |
Alabama | AL | total | 2010 | 4785570.0 | 52423.0 |
Alaska | AK | total | 2010 | 713868.0 | 656425.0 |
Arizona | AZ | total | 2010 | 6408790.0 | 114006.0 |
Arkansas | AR | total | 2010 | 2922280.0 | 53182.0 |
California | CA | total | 2010 | 37333601.0 | 163707.0 |
Colorado | CO | total | 2010 | 5048196.0 | 104100.0 |
Connecticut | CT | total | 2010 | 3579210.0 | 5544.0 |
Delaware | DE | total | 2010 | 899711.0 | 1954.0 |
District of Columbia | DC | total | 2010 | 605125.0 | 68.0 |
Florida | FL | total | 2010 | 18846054.0 | 65758.0 |
Georgia | GA | total | 2010 | 9713248.0 | 59441.0 |
Hawaii | HI | total | 2010 | 1363731.0 | 10932.0 |
Idaho | ID | total | 2010 | 1570718.0 | 83574.0 |
Illinois | IL | total | 2010 | 12839695.0 | 57918.0 |
Indiana | IN | total | 2010 | 6489965.0 | 36420.0 |
Iowa | IA | total | 2010 | 3050314.0 | 56276.0 |
Kansas | KS | total | 2010 | 2858910.0 | 82282.0 |
Kentucky | KY | total | 2010 | 4347698.0 | 40411.0 |
Louisiana | LA | total | 2010 | 4545392.0 | 51843.0 |
Maine | ME | total | 2010 | 1327366.0 | 35387.0 |
Maryland | MD | total | 2010 | 5787193.0 | 12407.0 |
Massachusetts | MA | total | 2010 | 6563263.0 | 10555.0 |
Michigan | MI | total | 2010 | 9876149.0 | 96810.0 |
Minnesota | MN | total | 2010 | 5310337.0 | 86943.0 |
Mississippi | MS | total | 2010 | 2970047.0 | 48434.0 |
Missouri | MO | total | 2010 | 5996063.0 | 69709.0 |
Montana | MT | total | 2010 | 990527.0 | 147046.0 |
Nebraska | NE | total | 2010 | 1829838.0 | 77358.0 |
Nevada | NV | total | 2010 | 2703230.0 | 110567.0 |
New Hampshire | NH | total | 2010 | 1316614.0 | 9351.0 |
New Jersey | NJ | total | 2010 | 8802707.0 | 8722.0 |
New Mexico | NM | total | 2010 | 2064982.0 | 121593.0 |
New York | NY | total | 2010 | 19398228.0 | 54475.0 |
North Carolina | NC | total | 2010 | 9559533.0 | 53821.0 |
North Dakota | ND | total | 2010 | 674344.0 | 70704.0 |
Ohio | OH | total | 2010 | 11545435.0 | 44828.0 |
Oklahoma | OK | total | 2010 | 3759263.0 | 69903.0 |
Oregon | OR | total | 2010 | 3837208.0 | 98386.0 |
Pennsylvania | PA | total | 2010 | 12710472.0 | 46058.0 |
Rhode Island | RI | total | 2010 | 1052669.0 | 1545.0 |
South Carolina | SC | total | 2010 | 4636361.0 | 32007.0 |
South Dakota | SD | total | 2010 | 816211.0 | 77121.0 |
Tennessee | TN | total | 2010 | 6356683.0 | 42146.0 |
Texas | TX | total | 2010 | 25245178.0 | 268601.0 |
Utah | UT | total | 2010 | 2774424.0 | 84904.0 |
Vermont | VT | total | 2010 | 625793.0 | 9615.0 |
Virginia | VA | total | 2010 | 8024417.0 | 42769.0 |
Washington | WA | total | 2010 | 6742256.0 | 71303.0 |
West Virginia | WV | total | 2010 | 1854146.0 | 24231.0 |
Wisconsin | WI | total | 2010 | 5689060.0 | 65503.0 |
Wyoming | WY | total | 2010 | 564222.0 | 97818.0 |
Puerto Rico | PR | total | 2010 | 3721208.0 | 3515.0 |
Compute the population density. Note that Series/Series division yields another Series.
In [100]:
pop_density = t_2010['population']/t_2010["area (sq. mi)"]
pop_density
Out[100]:
state
Alabama 91.287603
Alaska 1.087509
Arizona 56.214497
Arkansas 54.948667
California 228.051342
Colorado 48.493718
Connecticut 645.600649
Delaware 460.445752
District of Columbia 8898.897059
Florida 286.597129
Georgia 163.409902
Hawaii 124.746707
Idaho 18.794338
Illinois 221.687472
Indiana 178.197831
Iowa 54.202751
Kansas 34.745266
Kentucky 107.586994
Louisiana 87.676099
Maine 37.509990
Maryland 466.445797
Massachusetts 621.815538
Michigan 102.015794
Minnesota 61.078373
Mississippi 61.321530
Missouri 86.015622
Montana 6.736171
Nebraska 23.654153
Nevada 24.448796
New Hampshire 140.799273
New Jersey 1009.253268
New Mexico 16.982737
New York 356.094135
North Carolina 177.617157
North Dakota 9.537565
Ohio 257.549634
Oklahoma 53.778278
Oregon 39.001565
Pennsylvania 275.966651
Rhode Island 681.339159
South Carolina 144.854594
South Dakota 10.583512
Tennessee 150.825298
Texas 93.987655
Utah 32.677188
Vermont 65.085075
Virginia 187.622273
Washington 94.557817
West Virginia 76.519582
Wisconsin 86.851900
Wyoming 5.768079
Puerto Rico 1058.665149
dtype: float64
Sort with sort_values() and find the five most densely populated states
In [101]:
type(pop_density)
Out[101]:
pandas.core.series.Series
In [102]:
pop_density.sort_values(inplace=True)
Find the five least densely populated states
In [103]:
#the five lowest-density states
pop_density[:5]
Out[103]:
state
Alaska 1.087509
Wyoming 5.768079
Montana 6.736171
North Dakota 9.537565
South Dakota 10.583512
dtype: float64
In [104]:
#the five highest-density states
pop_density.tail()
Out[104]:
state
Connecticut 645.600649
Rhode Island 681.339159
New Jersey 1009.253268
Puerto Rico 1058.665149
District of Columbia 8898.897059
dtype: float64
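Series.nsmallest and Series.nlargest give the same answers without an explicit sort. A minimal sketch on a hypothetical toy density Series:

```python
import pandas as pd

# hypothetical density values (people per sq. mi), keyed by state
density = pd.Series({'Alaska': 1.09, 'Wyoming': 5.77, 'Montana': 6.74,
                     'NJ': 1009.25, 'DC': 8898.90})

print(density.nsmallest(2).index.tolist())  # ['Alaska', 'Wyoming']
print(density.nlargest(2).index.tolist())   # ['DC', 'NJ']
```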
Key takeaways:
- Index consistently with .loc
- Use .isnull().any() to find columns that contain NaN
- Use .unique() to determine which keys in a column we need
- Generally prefer outer or left merges, for one reason: better a NaN in that column than losing information from the other columns
Review: how Series/DataFrame arithmetic differs from ndarray arithmetic
- Series and DataFrame do not broadcast; a label missing on either side yields NaN, or use add with fill_value to fill the gaps
- ndarray does broadcast, repeating existing values to compute
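The recap above can be sketched side by side: Series arithmetic aligns on labels (NaN where a label is missing, unless fill_value is supplied), while ndarrays broadcast positionally:

```python
import numpy as np
import pandas as pd

s1 = pd.Series([1, 2], index=['a', 'b'])
s2 = pd.Series([10, 20], index=['b', 'c'])

# alignment: 'a' and 'c' have no partner -> NaN; fill_value treats missing as 0
print((s1 + s2).isna().tolist())          # [True, False, True]
print(s1.add(s2, fill_value=0).tolist())  # [1.0, 12.0, 20.0]

# ndarray broadcasting repeats values positionally instead
print((np.array([[1], [2]]) + np.array([10, 20])).tolist())  # [[11, 21], [12, 22]]
```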
For study and reference only; do not repost!