Data Aggregation and Group Operations — notes on Chapter 9 of "Python for Data Analysis"

pandas's groupby facility lets you compute group statistics and build pivot tables, and supports flexible slicing, dicing, and summarizing of datasets.

GroupBy Mechanics

"split-apply-combine"
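As a warm-up, here is a minimal sketch of the three stages using plain Python only (the keys and values are made up for illustration):

```python
# Split-apply-combine by hand:
# split rows into buckets by key, apply a reduction to each
# bucket, then combine the per-group results into one mapping.
rows = [('a', 1.0), ('a', 3.0), ('b', 2.0), ('b', 4.0)]

groups = {}                                   # split
for key, value in rows:
    groups.setdefault(key, []).append(value)

# apply + combine: reduce each bucket to its mean
means = {k: sum(v) / len(v) for k, v in groups.items()}
print(means)  # {'a': 2.0, 'b': 3.0}
```

pandas's groupby performs exactly these steps, but vectorized and with many built-in reductions.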

import numpy as np
from pandas import DataFrame,Series

df=DataFrame({'key1':['a','a','b','b','a'],
              'key2':['one','two','one','two','one'],
              'data1':np.random.randn(5),
              'data2':np.random.randn(5)})
df
      data1     data2 key1 key2
0  1.160760  0.360555    a  one
1 -0.992606 -0.120562    a  two
2 -0.616727  0.856179    b  one
3 -1.921879 -0.690846    b  two
4 -0.458540 -0.093610    a  one
grouped=df['data1'].groupby(df['key1'])
grouped
grouped.mean()
key1
a   -0.096796
b   -1.269303
Name: data1, dtype: float64
#Passing several arrays at once produces a different result
means=df['data1'].groupby([df['key1'],df['key2']]).mean()
means#the resulting data has a hierarchical index
key1  key2
a     one     0.351110
      two    -0.992606
b     one    -0.616727
      two    -1.921879
Name: data1, dtype: float64
means.unstack()#unstack the hierarchical index
key2       one       two
key1
a     0.351110 -0.992606
b    -0.616727 -1.921879
#Group keys can be any arrays of the right length
states=np.array(['Ohio','California','California','Ohio','Ohio'])
years=np.array([2005,2005,2006,2005,2006])
df['data1'].groupby([states,years]).mean()
California  2005   -0.992606
            2006   -0.616727
Ohio        2005   -0.380560
            2006   -0.458540
Name: data1, dtype: float64
#You can also use column names (strings, numbers, or other Python objects) as group keys
df.groupby(['key1']).mean()
         data1     data2
key1
a    -0.096796  0.048794
b    -1.269303  0.082666
df.groupby(['key1','key2']).mean()
              data1     data2
key1 key2
a    one   0.351110  0.133473
     two  -0.992606 -0.120562
b    one  -0.616727  0.856179
     two  -1.921879 -0.690846
#Because df['key2'] is not numeric data, it is excluded from the result; by default, all of the numeric columns are aggregated
#GroupBy's size method returns a Series containing the group sizes
df.groupby(['key1','key2']).size()
key1  key2
a     one     2
      two     1
b     one     1
      two     1
dtype: int64

Iterating over groups

for name,group in df.groupby('key1'):
    print(name)
    print(group)
a
      data1     data2 key1 key2
0  1.160760  0.360555    a  one
1 -0.992606 -0.120562    a  two
4 -0.458540 -0.093610    a  one
b
      data1     data2 key1 key2
2 -0.616727  0.856179    b  one
3 -1.921879 -0.690846    b  two
#In the case of multiple keys, the first element of the tuple is itself a tuple of key values
for (k1,k2),group in df.groupby(['key1','key2']):
    print(k1,k2)
    print(group)
a one
     data1     data2 key1 key2
0  1.16076  0.360555    a  one
4 -0.45854 -0.093610    a  one
a two
      data1     data2 key1 key2
1 -0.992606 -0.120562    a  two
b one
      data1     data2 key1 key2
2 -0.616727  0.856179    b  one
b two
      data1     data2 key1 key2
3 -1.921879 -0.690846    b  two
#You can do whatever you want with the pieces of data; a common recipe is to compute a dict of the pieces
pieces=dict(list(df.groupby('key1')))
pieces['b']
      data1     data2 key1 key2
2 -0.616727  0.856179    b  one
3 -1.921879 -0.690846    b  two
#groupby groups on axis=0 by default, but you can group on any of the other axes
#For instance, we can group the columns by dtype
df.dtypes
data1    float64
data2    float64
key1      object
key2      object
dtype: object
grouped=df.groupby(df.dtypes,axis=1)
dict(list(grouped))
{dtype('float64'):       data1     data2
 0  1.160760  0.360555
 1 -0.992606 -0.120562
 2 -0.616727  0.856179
 3 -1.921879 -0.690846
 4 -0.458540 -0.093610,
 dtype('O'):   key1 key2
 0    a  one
 1    a  two
 2    b  one
 3    b  two
 4    a  one}

Selecting a column or subset of columns

#Indexing a GroupBy object created from a DataFrame with a column name (string) or array of column names
#has the effect of selecting those columns for aggregation.
df.groupby('key1')['data1']
df.groupby('key1')[['data2']]
#The two lines above are syntactic sugar for the following:
df['data1'].groupby(df['key1'])
df[['data2']].groupby(df['key1'])
df.groupby(['key1','key2'])[['data2']].mean()
               data2
key1 key2
a    one    0.133473
     two   -0.120562
b    one    0.856179
     two   -0.690846
s_grouped=df.groupby(['key1','key2'])['data2']
s_grouped
s_grouped.mean()
key1  key2
a     one     0.133473
      two    -0.120562
b     one     0.856179
      two    -0.690846
Name: data2, dtype: float64

Grouping with dicts and Series

Grouping information can also exist in forms other than arrays.

people=DataFrame(np.random.randn(5,5),
                 columns=['a','b','c','d','e'],
                 index=['Joe','Steve','Wes','Jim','Travis'])
people.iloc[2:3,[1,2]]=np.nan#add a few NA values in columns 'b' and 'c' (.ix is deprecated)
people
               a         b         c         d         e
Joe     0.246182  0.556642  0.530663  0.072457  0.769930
Steve  -0.735543 -0.046147  0.092191  0.659066  0.563112
Wes    -0.671631       NaN       NaN  0.351555  0.320022
Jim     0.730654 -0.554864 -0.013574 -0.238270 -1.276084
Travis -0.246124  0.494404  0.782177 -1.856125  0.838289
#Suppose we have a known group correspondence for the columns and want to sum the columns by group
mapping={'a':'red','b':'red','c':'blue','d':'blue','e':'red','f':'orange'}
#Just pass the dict above to groupby
by_column=people.groupby(mapping,axis=1)
by_column.sum()
            blue       red
Joe     0.603120  1.572753
Steve   0.751258 -0.218579
Wes     0.351555 -0.351610
Jim    -0.251844 -1.100294
Travis -1.073948  1.086570
#The same functionality holds for Series, which can be viewed as a fixed-size mapping. For the example above,
#if you use a Series as the group key, pandas checks the Series to ensure its index is aligned with the axis being grouped
map_series=Series(mapping)
map_series
a       red
b       red
c      blue
d      blue
e       red
f    orange
dtype: object
people.groupby(map_series,axis=1).count()
        blue  red
Joe        2    3
Steve      2    3
Wes        1    2
Jim        2    3
Travis     2    3

Grouping with functions

#Any function passed as a group key will be called once per index value, with the return values used as the group names.
#Using the example from the previous section, whose index consists of people's names, suppose we want to group by name length
people.groupby(len).sum()
          a         b         c         d         e
3  0.305204  0.001778  0.517089  0.185742 -0.186132
5 -0.735543 -0.046147  0.092191  0.659066  0.563112
6 -0.246124  0.494404  0.782177 -1.856125  0.838289
#Mixing functions with arrays, lists, dicts, or Series is not a problem,
# as everything gets converted to arrays internally
key_list=['one','one','one','two','two']
people.groupby([len,key_list]).min()
              a         b         c         d         e
3 one -0.671631  0.556642  0.530663  0.072457  0.320022
  two  0.730654 -0.554864 -0.013574 -0.238270 -1.276084
5 one -0.735543 -0.046147  0.092191  0.659066  0.563112
6 two -0.246124  0.494404  0.782177 -1.856125  0.838289

Grouping by index levels

#A convenience for hierarchically indexed datasets is the ability to aggregate using one of the levels of an axis index.
#To do this, pass the level number or name via the level keyword
import pandas as pd
columns=pd.MultiIndex.from_arrays([['US','US','US','JP','JP'],
                                  [1,3,5,1,3]],names=['city','tenor'])
hier_df=DataFrame(np.random.randn(4,5),columns=columns)
hier_df
city          US                            JP
tenor          1         3         5         1         3
0      -0.729876 -0.490356  1.200420 -1.594183 -0.571277
1      -1.336457 -2.033271 -0.356616  0.915616 -0.234895
2      -0.065620 -0.102485  0.605027 -0.518972  1.190415
3       0.985298  0.923531  1.784194  1.815795 -1.261107
hier_df.groupby(level='city',axis=1).count()
city  JP  US
0      2   3
1      2   3
2      2   3
3      2   3

Data Aggregation

#Aggregation here means any data transformation that produces scalar values from arrays, such as mean, count, min, and sum.
#You can also define your own aggregation functions
df
      data1     data2 key1 key2
0  1.160760  0.360555    a  one
1 -0.992606 -0.120562    a  two
2 -0.616727  0.856179    b  one
3 -1.921879 -0.690846    b  two
4 -0.458540 -0.093610    a  one
grouped=df.groupby('key1')
grouped['data1'].quantile(0.9)#quantile is a Series method
key1
a    0.836900
b   -0.747242
Name: data1, dtype: float64
#To use your own aggregation functions, pass them to the aggregate or agg method
def peak_to_peak(arr):
    return arr.max()-arr.min()
grouped.agg(peak_to_peak)
         data1     data2
key1
a     2.153366  0.481117
b     1.305152  1.547025
#Note that some methods like describe also work here, even though they are not aggregations, strictly speaking
grouped.describe()
               data1     data2
key1
a    count  3.000000  3.000000
     mean  -0.096796  0.048794
     std    1.121334  0.270329
     min   -0.992606 -0.120562
     25%   -0.725573 -0.107086
     50%   -0.458540 -0.093610
     75%    0.351110  0.133473
     max    1.160760  0.360555
b    count  2.000000  2.000000
     mean  -1.269303  0.082666
     std    0.922882  1.093912
     min   -1.921879 -0.690846
     25%   -1.595591 -0.304090
     50%   -1.269303  0.082666
     75%   -0.943015  0.469422
     max   -0.616727  0.856179
#In general, custom aggregation functions are much slower than the optimized GroupBy methods: count, sum, mean, median, std, var
# (unbiased, n-1 denominator), min, max, prod (product of non-NA values), and first and last (first and last non-NA values)
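As a quick check (a sketch with made-up numbers), the string name and an equivalent custom function give the same answer; the string form simply dispatches to the optimized implementation:

```python
from pandas import DataFrame

d = DataFrame({'key': ['a', 'a', 'b'], 'val': [1.0, 3.0, 5.0]})
g = d.groupby('key')['val']

fast = g.agg('mean')                  # optimized Cython code path
slow = g.agg(lambda arr: arr.mean())  # generic Python code path
print((fast == slow).all())  # True
```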
#To illustrate some more advanced aggregation features, we will use a dataset on restaurant tipping
tips=pd.read_csv('ch08/tips.csv')
tips.head()
   total_bill   tip     sex smoker  day    time  size
0       16.99  1.01  Female     No  Sun  Dinner     2
1       10.34  1.66    Male     No  Sun  Dinner     3
2       21.01  3.50    Male     No  Sun  Dinner     3
3       23.68  3.31    Male     No  Sun  Dinner     2
4       24.59  3.61  Female     No  Sun  Dinner     4
#Add a 'tip percentage of total bill' column
tips['tip_pct']=tips['tip']/tips['total_bill']
tips[:6]
   total_bill   tip     sex smoker  day    time  size   tip_pct
0       16.99  1.01  Female     No  Sun  Dinner     2  0.059447
1       10.34  1.66    Male     No  Sun  Dinner     3  0.160542
2       21.01  3.50    Male     No  Sun  Dinner     3  0.166587
3       23.68  3.31    Male     No  Sun  Dinner     2  0.139780
4       24.59  3.61  Female     No  Sun  Dinner     4  0.146808
5       25.29  4.71    Male     No  Sun  Dinner     4  0.186240

Column-wise and multiple function application

#Group the tips by sex and smoker
grouped=tips.groupby(['sex','smoker'])
#You can pass the name of a function as a string to agg
grouped_pct=grouped['tip_pct']
grouped_pct.agg('mean')
sex     smoker
Female  No        0.156921
        Yes       0.182150
Male    No        0.160669
        Yes       0.152771
Name: tip_pct, dtype: float64
#If you pass a list of functions or function names instead, you get back a DataFrame with column names taken from the functions
grouped_pct.agg(['mean','std',peak_to_peak])
                   mean       std  peak_to_peak
sex    smoker
Female No      0.156921  0.036421      0.195876
       Yes     0.182150  0.071595      0.360233
Male   No      0.160669  0.041849      0.220186
       Yes     0.152771  0.090588      0.674707
#If you pass a list of (name, function) tuples, the first element of each tuple is used as
#the DataFrame column name (you can think of a list of 2-tuples as an ordered mapping)
grouped_pct.agg([('foo','mean'),('bar',np.std)])
                    foo       bar
sex    smoker
Female No      0.156921  0.036421
       Yes     0.182150  0.071595
Male   No      0.160669  0.041849
       Yes     0.152771  0.090588
#For a DataFrame, you can specify a list of functions to apply to all of the columns, or different functions per column.
#Suppose we wanted to compute three statistics for the tip_pct and total_bill columns
functions=['count','mean','max']
result=grouped[['tip_pct','total_bill']].agg(functions)
result
              tip_pct                     total_bill
                count      mean       max      count       mean    max
sex    smoker
Female No          54  0.156921  0.252672         54  18.105185  35.83
       Yes         33  0.182150  0.416667         33  17.977879  44.30
Male   No          97  0.160669  0.291990         97  19.791237  48.33
       Yes         60  0.152771  0.710345         60  22.284500  50.81
#The resulting DataFrame has hierarchical columns, the same as you would get aggregating each column separately
#and using concat to glue the results together (with the column names as the keys argument)
result['tip_pct']
               count      mean       max
sex    smoker
Female No         54  0.156921  0.252672
       Yes        33  0.182150  0.416667
Male   No         97  0.160669  0.291990
       Yes        60  0.152771  0.710345
#As above, a list of tuples with custom names can be passed
ftuples=[('Durchschnitt','mean'),('Abweichung',np.var)]
grouped[['tip_pct','total_bill']].agg(ftuples)
                    tip_pct              total_bill
               Durchschnitt Abweichung Durchschnitt  Abweichung
sex    smoker
Female No          0.156921   0.001327    18.105185   53.092422
       Yes         0.182150   0.005126    17.977879   84.451517
Male   No          0.160669   0.001751    19.791237   76.152961
       Yes         0.152771   0.008206    22.284500   98.244673
#Now suppose you want to apply different functions to different columns. The trick is to pass a dict to agg that maps column names to functions
grouped.agg({'tip':np.max,'size':'sum'})
               size   tip
sex    smoker
Female No       140   5.2
       Yes       74   6.5
Male   No       263   9.0
       Yes      150  10.0
grouped.agg({'tip_pct':['min','max','mean','std'],'size':'sum'})
               size   tip_pct
                sum       min       max      mean       std
sex    smoker
Female No       140  0.056797  0.252672  0.156921  0.036421
       Yes       74  0.056433  0.416667  0.182150  0.071595
Male   No       263  0.071804  0.291990  0.160669  0.041849
       Yes      150  0.035638  0.710345  0.152771  0.090588

Returning aggregated data without row indexes

#You can disable the hierarchical index formed from the group keys by passing as_index=False to groupby
tips.groupby(['sex','smoker'],as_index=False).mean()
      sex smoker  total_bill       tip      size   tip_pct
0  Female     No   18.105185  2.773519  2.592593  0.156921
1  Female    Yes   17.977879  2.931515  2.242424  0.182150
2    Male     No   19.791237  3.113402  2.711340  0.160669
3    Male    Yes   22.284500  3.051167  2.500000  0.152771
#Of course, calling reset_index on the result achieves the same thing
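A small sketch (with made-up data) confirming the equivalence between as_index=False and reset_index:

```python
from pandas import DataFrame

d = DataFrame({'k': ['a', 'a', 'b'], 'v': [1.0, 3.0, 5.0]})

via_flag = d.groupby('k', as_index=False).mean()
via_reset = d.groupby('k').mean().reset_index()
print(via_flag.equals(via_reset))  # True
```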

Group-wise operations and transformations

Aggregation is only one kind of group operation; it accepts functions that reduce a one-dimensional array to a scalar. This section introduces the transform and apply methods, which can perform many other kinds of group operations.

#Task: add a column holding the group means for each index. One way is to aggregate, then merge
df
      data1     data2 key1 key2
0  1.160760  0.360555    a  one
1 -0.992606 -0.120562    a  two
2 -0.616727  0.856179    b  one
3 -1.921879 -0.690846    b  two
4 -0.458540 -0.093610    a  one
k1_means=df.groupby('key1').mean().add_prefix('mean_')
k1_means
      mean_data1  mean_data2
key1
a      -0.096796    0.048794
b      -1.269303    0.082666
pd.merge(df,k1_means,left_on='key1',right_index=True)
      data1     data2 key1 key2  mean_data1  mean_data2
0  1.160760  0.360555    a  one   -0.096796    0.048794
1 -0.992606 -0.120562    a  two   -0.096796    0.048794
4 -0.458540 -0.093610    a  one   -0.096796    0.048794
2 -0.616727  0.856179    b  one   -1.269303    0.082666
3 -1.921879 -0.690846    b  two   -1.269303    0.082666
#This works, but is somewhat inflexible. You can think of the process as transforming the two data columns
#with the np.mean function. Let's use the people DataFrame from earlier
key=['one','two','one','two','one']
people.groupby(key).mean()
            a         b         c         d         e
one -0.223858  0.525523  0.656420 -0.477371  0.642747
two -0.002445 -0.300505  0.039309  0.210398 -0.356486
people.groupby(key).transform(np.mean)
               a         b         c         d         e
Joe    -0.223858  0.525523  0.656420 -0.477371  0.642747
Steve  -0.002445 -0.300505  0.039309  0.210398 -0.356486
Wes    -0.223858  0.525523  0.656420 -0.477371  0.642747
Jim    -0.002445 -0.300505  0.039309  0.210398 -0.356486
Travis -0.223858  0.525523  0.656420 -0.477371  0.642747
#transform applies a function to each group, then places the results in the appropriate locations. If each group
#produces a scalar value, that value will be broadcast.
#Suppose instead the task is to subtract the mean from each group. To do this, create a demeaning function
#and pass it to transform
def demean(arr):
    return arr-arr.mean()
demeaned=people.groupby(key).transform(demean)
demeaned
               a         b         c         d         e
Joe     0.470039  0.031119 -0.125757  0.549828  0.127183
Steve  -0.733099  0.254358  0.052883  0.448668  0.919598
Wes    -0.447773       NaN       NaN  0.828926 -0.322725
Jim     0.733099 -0.254358 -0.052883 -0.448668 -0.919598
Travis -0.022266 -0.031119  0.125757 -1.378754  0.195542
#Check that demeaned now has zero group means
demeaned.groupby(key).mean()
                a             b             c             d             e
one  1.850372e-17 -5.551115e-17  0.000000e+00  0.000000e+00  1.110223e-16
two  0.000000e+00  2.775558e-17 -3.469447e-18 -2.775558e-17  0.000000e+00
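Since transform broadcasts a scalar group result back onto the group's rows, passing a ready-made reduction works as well; a sketch with made-up data:

```python
import numpy as np
from pandas import DataFrame

d = DataFrame({'k': ['a', 'a', 'b'], 'v': [1.0, 3.0, 5.0]})
# np.mean yields one scalar per group; transform broadcasts it
# back onto each row of that group
out = d.groupby('k')['v'].transform(np.mean)
print(list(out))  # [2.0, 2.0, 5.0]
```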

apply: general split-apply-combine

Like aggregate, transform is a specialized function with strict requirements: the passed function can produce only two kinds of result, either a scalar value that can be broadcast (like np.mean) or a result array of the same size. The most general-purpose GroupBy method is apply, which is the focus of this section.
apply splits the object being manipulated into pieces, invokes the passed function on each piece, and then attempts to concatenate the pieces together.

#Returning to the tips dataset from before, suppose you wanted to select the top five tip_pct values by group.
#First, write a function that selects the rows with the largest values in a particular column
def top(df,n=5,column='tip_pct'):
    return df.sort_values(by=column)[-n:]
top(tips,n=6)
     total_bill   tip     sex smoker  day    time  size   tip_pct
109       14.31  4.00  Female    Yes  Sat  Dinner     2  0.279525
183       23.17  6.50    Male    Yes  Sun  Dinner     4  0.280535
232       11.61  3.39    Male     No  Sat  Dinner     2  0.291990
67         3.07  1.00  Female    Yes  Sat  Dinner     1  0.325733
178        9.60  4.00  Female    Yes  Sun  Dinner     2  0.416667
172        7.25  5.15    Male    Yes  Sun  Dinner     2  0.710345
#Now, if we group by smoker and call apply with this function, we get:
tips.groupby('smoker').apply(top)
            total_bill    tip     sex smoker   day    time  size   tip_pct
smoker
No     88        24.71   5.85    Male     No  Thur   Lunch     2  0.236746
       185       20.69   5.00    Male     No   Sun  Dinner     5  0.241663
       51        10.29   2.60  Female     No   Sun  Dinner     2  0.252672
       149        7.51   2.00    Male     No  Thur   Lunch     2  0.266312
       232       11.61   3.39    Male     No   Sat  Dinner     2  0.291990
Yes    109       14.31   4.00  Female    Yes   Sat  Dinner     2  0.279525
       183       23.17   6.50    Male    Yes   Sun  Dinner     4  0.280535
       67         3.07   1.00  Female    Yes   Sat  Dinner     1  0.325733
       178        9.60   4.00  Female    Yes   Sun  Dinner     2  0.416667
       172        7.25   5.15    Male    Yes   Sun  Dinner     2  0.710345
#The top function is called on each piece of the DataFrame, and the results are glued together with pandas.concat,
#labeled with the group names. The result therefore has a hierarchical index whose inner level contains index values from the original DataFrame.
#If the function passed to apply takes other arguments or keywords, you can pass these after the function
tips.groupby(['smoker','day']).apply(top,n=1,column='total_bill')
                 total_bill    tip     sex smoker   day    time  size   tip_pct
smoker day
No     Fri  94        22.75   3.25  Female     No   Fri  Dinner     2  0.142857
       Sat  212       48.33   9.00    Male     No   Sat  Dinner     4  0.186220
       Sun  156       48.17   5.00    Male     No   Sun  Dinner     6  0.103799
       Thur 142       41.19   5.00    Male     No  Thur   Lunch     5  0.121389
Yes    Fri  95        40.17   4.73    Male    Yes   Fri  Dinner     4  0.117750
       Sat  170       50.81  10.00    Male    Yes   Sat  Dinner     3  0.196812
       Sun  182       45.35   3.50    Male    Yes   Sun  Dinner     3  0.077178
       Thur 197       43.11   5.00  Female    Yes  Thur   Lunch     4  0.115982

Beyond these basic usage mechanics, getting the most out of apply is largely a matter of creativity; what the passed function does is up to you, as long as it returns a pandas object or a scalar value.

#I called describe on a GroupBy object earlier:
result=tips.groupby('smoker')['tip_pct'].describe()
result
smoker
No      count    151.000000
        mean       0.159328
        std        0.039910
        min        0.056797
        25%        0.136906
        50%        0.155625
        75%        0.185014
        max        0.291990
Yes     count     93.000000
        mean       0.163196
        std        0.085119
        min        0.035638
        25%        0.106771
        50%        0.153846
        75%        0.195059
        max        0.710345
Name: tip_pct, dtype: float64
result.unstack('smoker')
smoker          No        Yes
count   151.000000  93.000000
mean      0.159328   0.163196
std       0.039910   0.085119
min       0.056797   0.035638
25%       0.136906   0.106771
50%       0.155625   0.153846
75%       0.185014   0.195059
max       0.291990   0.710345
#Inside GroupBy, invoking a method like describe is really just a shortcut for these two lines:
f=lambda x:x.describe()
grouped.apply(f)

Suppressing the group keys

As you can see from the examples above, the group keys are combined with the original object's index to form a hierarchical index in the result. Passing group_keys=False to groupby disables this behavior.

tips.groupby('smoker',group_keys=False).apply(top)
     total_bill   tip     sex smoker   day    time  size   tip_pct
88        24.71  5.85    Male     No  Thur   Lunch     2  0.236746
185       20.69  5.00    Male     No   Sun  Dinner     5  0.241663
51        10.29  2.60  Female     No   Sun  Dinner     2  0.252672
149        7.51  2.00    Male     No  Thur   Lunch     2  0.266312
232       11.61  3.39    Male     No   Sat  Dinner     2  0.291990
109       14.31  4.00  Female    Yes   Sat  Dinner     2  0.279525
183       23.17  6.50    Male    Yes   Sun  Dinner     4  0.280535
67         3.07  1.00  Female    Yes   Sat  Dinner     1  0.325733
178        9.60  4.00  Female    Yes   Sun  Dinner     2  0.416667
172        7.25  5.15    Male    Yes   Sun  Dinner     2  0.710345

Quantile and bucket analysis

pandas has some tools, notably cut and qcut, for slicing data up into buckets based on bins of your choosing or on sample quantiles. Combining these functions with groupby makes it easy to perform bucket or quantile analysis on a dataset.

#As a simple random dataset example, use cut to bucket the data into equal-length bins
frame=DataFrame({'data1':np.random.randn(100),
                 'data2':np.random.randn(100)})
frame.head()
      data1     data2
0  1.421652 -0.133642
1  1.663593  1.570306
2  0.072588  1.445291
3 -1.117481  0.485219
4  0.673224 -0.565916
factor=pd.cut(frame.data1,4)
factor[:10]
0      (0.592, 1.913]
1      (0.592, 1.913]
2      (-0.73, 0.592]
3    (-2.0564, -0.73]
4      (0.592, 1.913]
5      (0.592, 1.913]
6      (0.592, 1.913]
7      (-0.73, 0.592]
8      (-0.73, 0.592]
9    (-2.0564, -0.73]
Name: data1, dtype: category
Categories (4, object): [(-2.0564, -0.73] < (-0.73, 0.592] < (0.592, 1.913] < (1.913, 3.234]]
#The Factor object returned by cut can be passed directly to groupby
def get_stats(group):
    return{'min':group.min(),'max':group.max(),'count':group.count()
           ,'mean':group.mean()}
grouped=frame.data2.groupby(factor)
grouped.apply(get_stats).unstack()
                  count       max      mean       min
data1
(-2.0564, -0.73]   22.0  1.656349 -0.048650 -2.129592
(-0.73, 0.592]     50.0  2.118117  0.006637 -2.178494
(0.592, 1.913]     25.0  2.893815  0.525179 -2.531124
(1.913, 3.234]      3.0  1.423038 -0.643547 -2.888465
#These were equal-length buckets. To compute equal-size buckets based on sample quantiles, use qcut.
#Passing labels=False gets just the quantile numbers
grouping=pd.qcut(frame.data1,10,labels=False)
grouped=frame.data2.groupby(grouping)
grouped.apply(get_stats).unstack()
       count       max      mean       min
data1
0       10.0  1.656349  0.200332 -2.129592
1       10.0  0.892938 -0.161470 -1.448465
2       10.0  0.936595 -0.502936 -1.486019
3       10.0  1.225042 -0.136442 -1.471110
4       10.0  2.118117  0.223633 -2.178494
5       10.0  1.445291 -0.010989 -1.145924
6       10.0  1.166204  0.230245 -0.578312
7       10.0  1.582353  0.491112 -0.565916
8       10.0  2.893815  0.183646 -2.531124
9       10.0  2.215117  0.528908 -2.888465

Example: filling missing values with group-specific values

s=Series(np.random.randn(6))
s[::2]=np.nan
s
0         NaN
1    0.480519
2         NaN
3    0.994221
4         NaN
5    0.324907
dtype: float64
s.fillna(s.mean())#fill NA values with the mean
0    0.599882
1    0.480519
2    0.599882
3    0.994221
4    0.599882
5    0.324907
dtype: float64
#Suppose you want the fill value to vary by group. Just group the data and use apply with a function that calls fillna on each data chunk.
#Here is some sample data on a few US states, divided into eastern and western regions
states=['Ohio','New York','Vermont','Florida','Oregon','Nevada','California','Idaho']
group_key=['East']*4+['West']*4
data=Series(np.random.randn(8),index=states)
data[['Vermont','Nevada','Idaho']]=np.nan
data
Ohio         -0.714495
New York     -0.484234
Vermont            NaN
Florida      -0.485962
Oregon        0.399898
Nevada             NaN
California   -0.956605
Idaho              NaN
dtype: float64
data.groupby(group_key).mean()
East   -0.561564
West   -0.278353
dtype: float64
fill_mean=lambda g:g.fillna(g.mean())
data.groupby(group_key).apply(fill_mean)
Ohio         -0.714495
New York     -0.484234
Vermont      -0.561564
Florida      -0.485962
Oregon        0.399898
Nevada       -0.278353
California   -0.956605
Idaho        -0.278353
dtype: float64
#Alternatively, you can predefine a fill value for each group in code. Since the groups have a name attribute, we can put it to use
fill_values={'East':0.5,'West':-1}
fill_func=lambda g:g.fillna(fill_values[g.name])
data.groupby(group_key).apply(fill_func)
Ohio         -0.714495
New York     -0.484234
Vermont       0.500000
Florida      -0.485962
Oregon        0.399898
Nevada       -1.000000
California   -0.956605
Idaho        -1.000000
dtype: float64

Example: random sampling and permutation

#There are many ways to draw a random sample; an efficient one is to take the first k elements of np.random.permutation(N),
#where N is the size of the full dataset and k the desired sample size. Here is one way to construct a deck of English-style playing cards
#Hearts (H), Spades (S), Clubs (C), Diamonds (D)
suits=['H','S','C','D']
card_val=( list(range(1,11))+ [10]*3)*4
base_names=['A']+list(range(2,11))+['J','Q','K']
cards=[]
for suit in ['H','S','C','D']: 
    cards.extend(str(num)+suit for num in base_names)
deck=Series(card_val,index=cards)
#We now have a Series of length 52 whose index contains card names and whose values are the point values used in blackjack and other games (for simplicity, aces count as 1)
deck[:13]
AH      1
2H      2
3H      3
4H      4
5H      5
6H      6
7H      7
8H      8
9H      9
10H    10
JH     10
QH     10
KH     10
dtype: int64
#Now, to draw 5 cards from the deck:
def draw(deck,n=5):
    return deck.take(np.random.permutation(len(deck))[:n])
draw(deck)
10D    10
5D      5
10H    10
3S      3
8C      8
dtype: int64
#Suppose you wanted two random cards from each suit. Because the suit is the last character of each card name, we can group by that and use apply:
get_suit=lambda card:card[-1]#the last letter is the suit
deck.groupby(get_suit).apply(draw,n=2)
C  7C      7
   3C      3
D  10D    10
   KD     10
H  5H      5
   10H    10
S  10S    10
   4S      4
dtype: int64
#another way, without the hierarchical group keys
deck.groupby(get_suit,group_keys=False).apply(draw,n=2)
7C      7
10C    10
KD     10
QD     10
QH     10
JH     10
3S      3
KS     10
dtype: int64

Example: group weighted average and correlation

Under groupby's split-apply-combine paradigm, operations between columns of a DataFrame or between two Series, such as a group weighted average, become routine. As an example, take this dataset containing group keys, values, and some weights:

df=DataFrame({'category':['a','a','a','a','b','b','b','b'],'data':np.random.randn(8),
              'weights':np.random.rand(8)})
df
  category      data   weights
0        a -1.279366  0.262668
1        a -0.993197  0.124788
2        a -0.092631  0.644840
3        a  0.216670  0.413393
4        b -0.697899  0.621993
5        b -0.568083  0.190767
6        b -0.963962  0.587816
7        b  0.637361  0.522886
#The group weighted average by category is then
grouped=df.groupby('category')
get_wavg=lambda g:np.average(g['data'],weights=g['weights'])
grouped.apply(get_wavg)
category
a   -0.297540
b   -0.403348
dtype: float64
#For a slightly more practical example, consider a dataset from Yahoo! Finance containing end-of-day prices for a few stocks and the S&P 500 index (the SPX field)
close_px=pd.read_csv('ch09/stock_px.csv',parse_dates=True,index_col=0)
close_px[-4:]
              AAPL   MSFT    XOM      SPX
2011-10-11  400.29  27.00  76.27  1195.54
2011-10-12  402.19  26.96  77.16  1207.25
2011-10-13  408.43  27.18  76.37  1203.66
2011-10-14  422.00  27.27  78.11  1224.58
#Here is an interesting task: compute a DataFrame of the yearly correlations of daily returns (computed from percent changes) with SPX
rets=close_px.pct_change().dropna()
spx_corr=lambda x:x.corrwith(x['SPX'])
by_year=rets.groupby(lambda x:x.year)
by_year.apply(spx_corr)
          AAPL      MSFT       XOM  SPX
2003  0.541124  0.745174  0.661265  1.0
2004  0.374283  0.588531  0.557742  1.0
2005  0.467540  0.562374  0.631010  1.0
2006  0.428267  0.406126  0.518514  1.0
2007  0.508118  0.658770  0.786264  1.0
2008  0.681434  0.804626  0.828303  1.0
2009  0.707103  0.654902  0.797921  1.0
2010  0.710105  0.730118  0.839057  1.0
2011  0.691931  0.800996  0.859975  1.0
#You can also compute inter-column correlations
by_year.apply(lambda g:g['AAPL'].corr(g['MSFT']))
2003    0.480868
2004    0.259024
2005    0.300093
2006    0.161735
2007    0.417738
2008    0.611901
2009    0.432738
2010    0.571946
2011    0.581987
dtype: float64

Example: group-wise linear regression

Continuing from the previous example, you can use groupby for more complex group-wise statistical analyses, as long as the function returns a pandas object or a scalar value. For example, we can define the following regress function, which uses the statsmodels library to run an ordinary least squares (OLS) regression on each chunk of data:

import statsmodels.api as sm
def regress(data,yvar,xvars):
    Y=data[yvar]
    X=data[xvars]
    X['intercept']=1.
    result=sm.OLS(Y,X).fit()
    return result.params
#Now, to run a year-by-year linear regression of AAPL on SPX returns, execute:
by_year.apply(regress,'AAPL',['SPX'])
           SPX  intercept
2003  1.195406   0.000710
2004  1.363463   0.004201
2005  1.766415   0.003246
2006  1.645496   0.000080
2007  1.198761   0.003438
2008  0.968016  -0.001110
2009  0.879103   0.002954
2010  1.052608   0.001261
2011  0.806605   0.001514
Pivot Tables and Cross-Tabulation

A pivot table is a data summarization tool common in spreadsheet programs and other data analysis software. It aggregates data by one or more keys, arranging the data in a rectangle with some of the group keys along the rows and some along the columns. In Python with pandas, pivot tables can be built with the groupby facility described in this chapter combined with reshape operations. DataFrame has a pivot_table method, and there is also a top-level pandas.pivot_table function. In addition to providing a convenience interface to groupby, pivot_table can add partial totals, also known as margins.

Back to the tips dataset: suppose we want to compute a table of group means (the default pivot_table aggregation type) arranged by sex and smoker on the rows.
tips.pivot_table(index=['sex','smoker'])
                   size       tip   tip_pct  total_bill
sex    smoker
Female No      2.592593  2.773519  0.156921   18.105185
       Yes     2.242424  2.931515  0.182150   17.977879
Male   No      2.711340  3.113402  0.160669   19.791237
       Yes     2.500000  3.051167  0.152771   22.284500
tips.pivot_table(['tip_pct','size'],index=['sex','day'],columns=['smoker'])
                tip_pct                size
smoker               No       Yes        No       Yes
sex    day
Female Fri     0.165296  0.209129  2.500000  2.000000
       Sat     0.147993  0.163817  2.307692  2.200000
       Sun     0.165710  0.237075  3.071429  2.500000
       Thur    0.155971  0.163073  2.480000  2.428571
Male   Fri     0.138005  0.144730  2.000000  2.125000
       Sat     0.162132  0.139067  2.656250  2.629630
       Sun     0.158291  0.173964  2.883721  2.600000
       Thur    0.165706  0.164417  2.500000  2.300000
#Passing margins=True adds partial totals. This adds All row and column labels, whose values are the group statistics for all the data within a single tier
tips.pivot_table(['tip_pct','size'],index=['sex','day'],columns=['smoker'],margins=True)
                tip_pct                          size
smoker               No       Yes       All        No       Yes       All
sex    day
Female Fri     0.165296  0.209129  0.199388  2.500000  2.000000  2.111111
       Sat     0.147993  0.163817  0.156470  2.307692  2.200000  2.250000
       Sun     0.165710  0.237075  0.181569  3.071429  2.500000  2.944444
       Thur    0.155971  0.163073  0.157525  2.480000  2.428571  2.468750
Male   Fri     0.138005  0.144730  0.143385  2.000000  2.125000  2.100000
       Sat     0.162132  0.139067  0.151577  2.656250  2.629630  2.644068
       Sun     0.158291  0.173964  0.162344  2.883721  2.600000  2.810345
       Thur    0.165706  0.164417  0.165276  2.500000  2.300000  2.433333
All            0.159328  0.163196  0.160803  2.668874  2.408602  2.569672
#To use a different aggregation function, pass it to aggfunc. For example, count or len gives a cross-tabulation of group sizes
tips.pivot_table('tip_pct',index=['sex','smoker'],columns=['day'],aggfunc=len,margins=True)
day             Fri   Sat   Sun  Thur    All
sex    smoker
Female No       2.0  13.0  14.0  25.0   54.0
       Yes      7.0  15.0   4.0   7.0   33.0
Male   No       2.0  32.0  43.0  20.0   97.0
       Yes      8.0  27.0  15.0  10.0   60.0
All            19.0  87.0  76.0  62.0  244.0
#If some combinations are empty (NA), you may want to pass a fill_value:
tips.pivot_table('size',index=['time','sex','smoker'],columns=['day'],aggfunc='sum',fill_value=0)
day                   Fri  Sat  Sun  Thur
time   sex    smoker
Dinner Female No        2   30   43     2
              Yes       8   33   10     0
       Male   No        4   85  124     0
              Yes      12   71   39     0
Lunch  Female No        3    0    0    60
              Yes       6    0    0    17
       Male   No        0    0    0    50
              Yes       5    0    0    23

Cross-tabulations: crosstab

A cross-tabulation (crosstab for short) is a special kind of pivot table that computes group frequencies.

#The first two arguments to crosstab can be arrays, Series, or lists of arrays. As with the tips data:
pd.crosstab([tips.time,tips.day],tips.smoker,margins=True)
smoker        No  Yes  All
time   day
Dinner Fri     3    9   12
       Sat    45   42   87
       Sun    57   19   76
       Thur    1    0    1
Lunch  Fri     1    6    7
       Thur   44   17   61
All          151   93  244

Example: 2012 Federal Election Commission Database

fec=pd.read_csv('ch09/P00000001-ALL.csv')
fec.head()
DtypeWarning: Columns (6) have mixed types. Specify dtype option on import or set low_memory=False.
     cmte_id    cand_id             cand_nm           contbr_nm         contbr_city  \
0  C00410118  P20002978  Bachmann, Michelle     HARVEY, WILLIAM              MOBILE
1  C00410118  P20002978  Bachmann, Michelle     HARVEY, WILLIAM              MOBILE
2  C00410118  P20002978  Bachmann, Michelle       SMITH, LANIER              LANETT
3  C00410118  P20002978  Bachmann, Michelle    BLEVINS, DARONDA             PIGGOTT
4  C00410118  P20002978  Bachmann, Michelle  WARDENBURG, HAROLD  HOT SPRINGS NATION

  contbr_st   contbr_zip        contbr_employer      contbr_occupation  \
0        AL   3.6601e+08                RETIRED                RETIRED
1        AL   3.6601e+08                RETIRED                RETIRED
2        AL  3.68633e+08  INFORMATION REQUESTED  INFORMATION REQUESTED
3        AR  7.24548e+08                   NONE                RETIRED
4        AR  7.19016e+08                   NONE                RETIRED

   contb_receipt_amt contb_receipt_dt receipt_desc memo_cd memo_text form_tp  file_num
0              250.0        20-JUN-11          NaN     NaN       NaN   SA17A    736166
1               50.0        23-JUN-11          NaN     NaN       NaN   SA17A    736166
2              250.0        05-JUL-11          NaN     NaN       NaN   SA17A    749073
3              250.0        01-AUG-11          NaN     NaN       NaN   SA17A    749073
4              300.0        20-JUN-11          NaN     NaN       NaN   SA17A    736166
fec.loc[123456]
cmte_id                             C00431445
cand_id                             P80003338
cand_nm                         Obama, Barack
contbr_nm                         ELLMAN, IRA
contbr_city                             TEMPE
contbr_st                                  AZ
contbr_zip                          852816719
contbr_employer      ARIZONA STATE UNIVERSITY
contbr_occupation                   PROFESSOR
contb_receipt_amt                          50
contb_receipt_dt                    01-DEC-11
receipt_desc                              NaN
memo_cd                                   NaN
memo_text                                 NaN
form_tp                                 SA17A
file_num                               772372
Name: 123456, dtype: object
#With unique, you can get the full list of candidate names (note that NumPy does not print the quotes around the strings)
unique_cands=fec.cand_nm.unique()
unique_cands
array(['Bachmann, Michelle', 'Romney, Mitt', 'Obama, Barack',
       "Roemer, Charles E. 'Buddy' III", 'Pawlenty, Timothy',
       'Johnson, Gary Earl', 'Paul, Ron', 'Santorum, Rick', 'Cain, Herman',
       'Gingrich, Newt', 'McCotter, Thaddeus G', 'Huntsman, Jon',
       'Perry, Rick'], dtype=object)
unique_cands[2]
'Obama, Barack'
#The simplest way to indicate party affiliation is with a dict:
parties={'Bachmann, Michelle':'Republican',
         'Cain, Herman':'Republican',
         'Gingrich, Newt':'Republican',
         'Johnson, Gary Earl':'Republican',
         'McCotter, Thaddeus G':'Republican',
         'Obama, Barack':'Democrat',
         'Paul, Ron':'Republican',
         'Pawlenty, Timothy':'Republican',
         'Perry, Rick':'Republican',
         "Roemer, Charles E. 'Buddy' III":'Republican',
         'Romney, Mitt':'Republican',
'Huntsman, Jon':'Republican',
         'Santorum, Rick':'Republican'
         }
fec.cand_nm[123456:123461]
123456    Obama, Barack
123457    Obama, Barack
123458    Obama, Barack
123459    Obama, Barack
123460    Obama, Barack
Name: cand_nm, dtype: object
fec.cand_nm[123456:123461].map(parties)
123456    Democrat
123457    Democrat
123458    Democrat
123459    Democrat
123460    Democrat
Name: cand_nm, dtype: object
#add it as a new column
fec['party']=fec.cand_nm.map(parties)
fec['party'].value_counts()
Democrat      593746
Republican    403829
Name: party, dtype: int64
#The dataset includes both contributions and refunds (negative contribution amounts)
(fec.contb_receipt_amt>0).value_counts()
True     991475
False     10256
Name: contb_receipt_amt, dtype: int64
#To simplify the analysis, we restrict the dataset to positive contributions
fec=fec[fec.contb_receipt_amt>0]
#Since Barack Obama and Mitt Romney were the two main candidates, we also prepare a subset containing only contributions to their campaigns
fec_mrbo=fec[fec.cand_nm.isin(['Obama, Barack','Romney, Mitt'])]
fec_mrbo.head()
       cmte_id    cand_id       cand_nm           contbr_nm contbr_city  \
411  C00431171  P80003353  Romney, Mitt  ELDERBAUM, WILLIAM         DPO
412  C00431171  P80003353  Romney, Mitt  ELDERBAUM, WILLIAM         DPO
413  C00431171  P80003353  Romney, Mitt    CARLSEN, RICHARD         APO
414  C00431171  P80003353  Romney, Mitt      DELUCA, PIERRE         APO
415  C00431171  P80003353  Romney, Mitt    SARGENT, MICHAEL         APO

    contbr_st   contbr_zip                   contbr_employer  \
411        AA   3.4023e+08                     US GOVERNMENT
412        AA   3.4023e+08                     US GOVERNMENT
413        AE    9.128e+07       DEFENSE INTELLIGENCE AGENCY
414        AE    9.128e+07                             CISCO
415        AE  9.01201e+07  RAYTHEON TECHNICAL SERVICES CORP

             contbr_occupation  contb_receipt_amt contb_receipt_dt  \
411    FOREIGN SERVICE OFFICER               25.0        01-FEB-12
412    FOREIGN SERVICE OFFICER              110.0        01-FEB-12
413       INTELLIGENCE ANALYST              250.0        13-APR-12
414                   ENGINEER               30.0        21-AUG-11
415  COMPUTER SYSTEMS ENGINEER              100.0        07-MAR-12

    receipt_desc memo_cd memo_text form_tp  file_num       party
411          NaN     NaN       NaN   SA17A    780124  Republican
412          NaN     NaN       NaN   SA17A    780124  Republican
413          NaN     NaN       NaN   SA17A    785689  Republican
414          NaN     NaN       NaN   SA17A    760261  Republican
415          NaN     NaN       NaN   SA17A    780128  Republican

Contribution statistics by occupation and employer

#First, get the total number of contributions by occupation
fec.contbr_occupation.value_counts()[:10]
RETIRED                                   233990
INFORMATION REQUESTED                      35107
ATTORNEY                                   34286
HOMEMAKER                                  29931
PHYSICIAN                                  23432
INFORMATION REQUESTED PER BEST EFFORTS     21138
ENGINEER                                   14334
TEACHER                                    13990
CONSULTANT                                 13273
PROFESSOR                                  12555
Name: contbr_occupation, dtype: int64
#Many occupations refer to the same basic job type, or there are several variants of the same thing; these need to be cleaned up
occ_mapping={'INFORMATION REQUESTED PER BEST EFFORTS':'NOT PROVIDED',
             'INFORMATION REQUESTED':'NOT PROVIDED',
             'INFORMATION REQUESTED (BEST EFFORTS)':'NOT PROVIDED',
             'C.E.O':'CEO'}
#If no mapping is provided, return x
f=lambda x:occ_mapping.get(x,x)
fec.contbr_occupation=fec.contbr_occupation.map(f)
#Do the same thing for employers
emp_mapping={'INFORMATION REQUESTED PER BEST EFFORTS':'NOT PROVIDED',
             'INFORMATION REQUESTED':'NOT PROVIDED',
             'SELF':'SELF-EMPLOYED',}
#If no mapping is provided, return x
f=lambda x:emp_mapping.get(x,x)
fec.contbr_employer=fec.contbr_employer.map(f)
#Now you can use pivot_table to aggregate the data by party and occupation, then filter down to occupations that donated at least $2 million overall
by_occupation=fec.pivot_table('contb_receipt_amt',index='contbr_occupation',columns='party',aggfunc='sum')
over_2mm=by_occupation[by_occupation.sum(1)>2000000]
over_2mm
party                 Democrat    Republican
contbr_occupation
ATTORNEY           11141982.97  7.333637e+06
C.E.O.                 1690.00  2.543933e+06
CEO                 2074284.79  1.598978e+06
CONSULTANT          2459912.71  2.502470e+06
ENGINEER             951525.55  1.812534e+06
EXECUTIVE           1355161.05  4.057645e+06
HOMEMAKER           4248875.80  1.338025e+07
INVESTOR             884133.00  2.356671e+06
LAWYER              3160478.87  3.394023e+05
MANAGER              762883.22  1.430643e+06
NOT PROVIDED        4866973.96  2.018997e+07
OWNER               1001567.36  2.376829e+06
PHYSICIAN           3735124.94  3.555067e+06
PRESIDENT           1878509.95  4.622173e+06
PROFESSOR           2165071.08  2.839557e+05
REAL ESTATE          528902.09  1.567552e+06
RETIRED            25305116.38  2.320195e+07
SELF-EMPLOYED        672393.40  1.621010e+06
import matplotlib.pyplot as plt
over_2mm.plot(kind='barh')
plt.show()

[figure: horizontal bar chart of total donations by party for each occupation]

#Next, tally the occupations and employers with the highest total contributions to Obama and Romney
def get_top_amounts(group,key,n=5):
    totals=group.groupby(key)['contb_receipt_amt'].sum()
    #sort totals in descending order and take the top n
    return totals.sort_values(ascending=False)[:n]
#aggregate by candidate, then rank by occupation and employer
grouped=fec_mrbo.groupby('cand_nm')
grouped.apply(get_top_amounts,'contbr_occupation',n=7)
cand_nm        contbr_occupation
Obama, Barack  RETIRED                                   25305116.38
               ATTORNEY                                  11141982.97
               INFORMATION REQUESTED                      4866973.96
               HOMEMAKER                                  4248875.80
               PHYSICIAN                                  3735124.94
               LAWYER                                     3160478.87
               CONSULTANT                                 2459912.71
Romney, Mitt   RETIRED                                   11508473.59
               INFORMATION REQUESTED PER BEST EFFORTS    11396894.84
               HOMEMAKER                                  8147446.22
               ATTORNEY                                   5364718.82
               PRESIDENT                                  2491244.89
               EXECUTIVE                                  2300947.03
               C.E.O.                                     1968386.11
Name: contb_receipt_amt, dtype: float64
grouped.apply(get_top_amounts,'contbr_employer',n=10)
cand_nm        contbr_employer
Obama, Barack  RETIRED                                   22694358.85
               SELF-EMPLOYED                             17080985.96
               NOT EMPLOYED                               8586308.70
               INFORMATION REQUESTED                      5053480.37
               HOMEMAKER                                  2605408.54
               SELF                                       1076531.20
               SELF EMPLOYED                               469290.00
               STUDENT                                     318831.45
               VOLUNTEER                                   257104.00
               MICROSOFT                                   215585.36
Romney, Mitt   INFORMATION REQUESTED PER BEST EFFORTS    12059527.24
               RETIRED                                   11506225.71
               HOMEMAKER                                  8147196.22
               SELF-EMPLOYED                              7409860.98
               STUDENT                                     496490.94
               CREDIT SUISSE                               281150.00
               MORGAN STANLEY                              267266.00
               GOLDMAN SACH & CO.                          238250.00
               BARCLAYS CAPITAL                            162750.00
               H.I.G. CAPITAL                              139500.00
Name: contb_receipt_amt, dtype: float64

Bucketing contribution amounts

Use the cut function to discretize the contribution amounts into buckets by size:

bins=np.array([0,1,10,100,1000,10000,100000,1000000,10000000])
labels=pd.cut(fec_mrbo.contb_receipt_amt,bins)
labels.head()
411      (10, 100]
412    (100, 1000]
413    (100, 1000]
414      (10, 100]
415      (10, 100]
Name: contb_receipt_amt, dtype: category
Categories (8, object): [(0, 1] < (1, 10] < (10, 100] < (100, 1000] < (1000, 10000] < (10000, 100000] < (100000, 1000000] < (1000000, 10000000]]
#Then group the data by candidate name and bin label:
grouped=fec_mrbo.groupby(['cand_nm',labels])
grouped.size().unstack(0)
cand_nm              Obama, Barack  Romney, Mitt
contb_receipt_amt
(0, 1]                       493.0          77.0
(1, 10]                    40070.0        3681.0
(10, 100]                 372280.0       31853.0
(100, 1000]               153991.0       43357.0
(1000, 10000]              22284.0       26186.0
(10000, 100000]                2.0           1.0
(100000, 1000000]              3.0           NaN
(1000000, 10000000]            4.0           NaN
#You can see that Obama received far more small contributions than Romney. Next, visualize each bucket's share of total donations
bucket_sums=grouped.contb_receipt_amt.sum().unstack(0)
bucket_sums
cand_nm              Obama, Barack  Romney, Mitt
contb_receipt_amt
(0, 1]                      318.24         77.00
(1, 10]                  337267.62      29819.66
(10, 100]              20288981.41    1987783.76
(100, 1000]            54798531.46   22363381.69
(1000, 10000]          51753705.67   63942145.42
(10000, 100000]           59100.00      12700.00
(100000, 1000000]       1490683.08           NaN
(1000000, 10000000]     7148839.76           NaN
normed_sums=bucket_sums.div(bucket_sums.sum(axis=1),axis=0)
normed_sums
cand_nm              Obama, Barack  Romney, Mitt
contb_receipt_amt
(0, 1]                    0.805182      0.194818
(1, 10]                   0.918767      0.081233
(10, 100]                 0.910769      0.089231
(100, 1000]               0.710176      0.289824
(1000, 10000]             0.447326      0.552674
(10000, 100000]           0.823120      0.176880
(100000, 1000000]         1.000000           NaN
(1000000, 10000000]       1.000000           NaN
normed_sums[:-2].plot(kind='barh',stacked=True)#exclude the two largest buckets, since these are not donations from individuals
plt.show()

[figure: stacked horizontal bar chart of each candidate's share of donations per amount bucket]

Donation statistics by state

#First aggregate the data by candidate and state
grouped=fec_mrbo.groupby(['cand_nm','contbr_st'])
totals=grouped.contb_receipt_amt.sum().unstack(0).fillna(0)
totals=totals[totals.sum(1)>100000]
totals[:10]
cand_nm    Obama, Barack  Romney, Mitt
contbr_st
AK             281840.15      86204.24
AL             543123.48     527303.51
AR             359247.28     105556.00
AZ            1506476.98    1888436.23
CA           23824984.24   11237636.60
CO            2132429.49    1506714.12
CT            2068291.26    3499475.45
DC            4373538.80    1025137.50
DE             336669.14      82712.00
FL            7318178.58    8338458.81
#Dividing each row by the total contribution amount gives each candidate's share of total donations by state
percent=totals.div(totals.sum(1),axis=0) 
percent[:10]
cand_nm    Obama, Barack  Romney, Mitt
contbr_st
AK              0.765778      0.234222
AL              0.507390      0.492610
AR              0.772902      0.227098
AZ              0.443745      0.556255
CA              0.679498      0.320502
CO              0.585970      0.414030
CT              0.371476      0.628524
DC              0.810113      0.189887
DE              0.802776      0.197224
FL              0.467417      0.532583
#To display the results on a map; the following code is for reference only, as the required shapefiles were not downloaded
from mpl_toolkits.basemap import Basemap,cm
from matplotlib import rcParams
from matplotlib.collections import LineCollection
from shapelib import ShapeFile
import dbflib
obama=percent['Obama, Barack']
fig=plt.figure(figsize=(12,12))
ax=fig.add_axes([0.1,0.1,0.8,0.8])
lllat=21; urlat=53; lllon=-118; urlon=-62
m=Basemap(ax=ax,projection='stere',lon_0=(urlon+lllon)/2,lat_0=(urlat+lllat)/2,
          llcrnrlat=lllat,urcrnrlat=urlat,llcrnrlon=lllon,
          urcrnrlon=urlon,resolution='l')
m.drawcoastlines()
m.drawcountries()

shp=ShapeFile('../states/statesp020')
dbf=dbflib.open('../states/statesp020')

for npoly in range(shp.info()[0]):
    #draw colored polygons on the map
    shpsegs=[]
    shp_object=shp.read_object(npoly)
    verts=shp_object.vertices()
    rings=len(verts)
    for ring in range(rings):
        lons,lats=zip(*verts[ring])
        x,y=m(lons,lats)
        shpsegs.append(zip(x,y))
        if ring==0:
            shapedict=dbf.read_record(npoly)
        name=shapedict['STATE']
    lines=LineCollection(shpsegs,antialiaseds=(1,))
    #state_to_code dict, e.g. 'ALASKA'->'AK' (defined elsewhere, omitted here)
    try:
        per=obama[state_to_code[name.upper()]]
    except KeyError:
        continue
    lines.set_facecolors('k')
    lines.set_alpha(0.75*per)#dial the percentage down a bit
    lines.set_edgecolors('k')
    lines.set_linewidth(0.3)
    ax.add_collection(lines)
plt.show()