Grouping Techniques in pandas


Here we cover a very common technique: grouping. Computing statistics over groups is a staple of database work, so we need to know how these grouping techniques are implemented in pandas.

Grouping operations

In pandas, grouping can be done by column, by dict or Series, by function, or by index level. We will walk through each in turn.

Grouping by column
import pandas as pd
from pandas import DataFrame, Series
import numpy as np

sep = "---------------------------------------------------------------------------"
data = DataFrame({"key1": ['a', 'a', 'b', 'b', 'a'],
                  "key2": ['one', 'two', 'one', 'two', 'one'],
                  'data1': np.random.randn(5),
                  'data2': np.random.randn(5)})
print(data)
      data1     data2 key1 key2
0  0.733951  0.000379    a  one
1  1.039029  0.852930    a  two
2  0.921413 -1.644942    b  one
3  0.294560  0.521525    b  two
4  0.286072 -0.074574    a  one

Grouping data1 by key1:

groups = data['data1'].groupby(data['key1'])

The result is a SeriesGroupBy object. Let's iterate over it:

for name, group in groups:
    print(name)
    print(sep)
    print(group)
    print(sep)
a
---------------------------------------------------------------------------
0    0.733951
1    1.039029
4    0.286072
Name: data1, dtype: float64
---------------------------------------------------------------------------
b
---------------------------------------------------------------------------
2    0.921413
3    0.294560
Name: data1, dtype: float64
---------------------------------------------------------------------------

We see that groups consists of (key, group) pairs: the key is the value we grouped by, and the group is the slice of the Series (or DataFrame) belonging to that key; a Series group keeps the original column name as its name.
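Beyond iteration, the GroupBy object can also be inspected directly. A minimal sketch (the sample frame is rebuilt here so the snippet is self-contained; the exact random values will differ):

```python
import pandas as pd
from pandas import DataFrame
import numpy as np

data = DataFrame({"key1": ['a', 'a', 'b', 'b', 'a'],
                  "key2": ['one', 'two', 'one', 'two', 'one'],
                  'data1': np.random.randn(5),
                  'data2': np.random.randn(5)})
groups = data['data1'].groupby(data['key1'])

# Number of groups, and the row labels belonging to each group
print(groups.ngroups)         # 2
print(groups.groups)          # {'a': [0, 1, 4], 'b': [2, 3]}

# Pull out a single group as a Series without a loop
print(groups.get_group('a'))  # rows 0, 1, 4 of data1
```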

To iterate over the groups, use:

for name, group in groups:

groups = data.groupby(data['key1'])
for name, group in groups:
    print(name)
    print(sep)
    print(group)
    print(sep)
a
---------------------------------------------------------------------------
      data1     data2 key1 key2
0  0.733951  0.000379    a  one
1  1.039029  0.852930    a  two
4  0.286072 -0.074574    a  one
---------------------------------------------------------------------------
b
---------------------------------------------------------------------------
      data1     data2 key1 key2
2  0.921413 -1.644942    b  one
3  0.294560  0.521525    b  two
---------------------------------------------------------------------------

groupby splits the data by some value, and this works the same whether you group a Series or a DataFrame.

We can call statistical functions on the grouped object.

data.groupby(data['key1']).mean()
         data1     data2
key1                    
a     0.686351  0.259578
b     0.607986 -0.561709

The mean is computed column by column within each group; non-numeric columns are dropped by default.

Exercise 1: compute the mean of each row within each group.

Hint: data.groupby(data['key1']).mean(axis=1) will not work.

When grouping by several columns, each group key is a tuple of the corresponding group values.

Exercise 2: group a DataFrame by several columns.

Next, a bit of syntactic sugar:

data.groupby([data['key1'], data['key2']])
<pandas.core.groupby.DataFrameGroupBy object at 0x000001D080230278>

It is equivalent to:

data.groupby(['key1', 'key2'])
<pandas.core.groupby.DataFrameGroupBy object at 0x000001D080230630>

Let's verify:

groups =data.groupby([data['key1'], data['key2']])
for name, group in groups:
    print(name)
    print(sep)
    print(group)
    print(sep)
('a', 'one')
---------------------------------------------------------------------------
      data1     data2 key1 key2
0  0.733951  0.000379    a  one
4  0.286072 -0.074574    a  one
---------------------------------------------------------------------------
('a', 'two')
---------------------------------------------------------------------------
      data1    data2 key1 key2
1  1.039029  0.85293    a  two
---------------------------------------------------------------------------
('b', 'one')
---------------------------------------------------------------------------
      data1     data2 key1 key2
2  0.921413 -1.644942    b  one
---------------------------------------------------------------------------
('b', 'two')
---------------------------------------------------------------------------
     data1     data2 key1 key2
3  0.29456  0.521525    b  two
---------------------------------------------------------------------------
groups = data.groupby(['key1', 'key2'])
for name, group in groups:
    print(name)
    print(sep)
    print(group)
    print(sep)
('a', 'one')
---------------------------------------------------------------------------
      data1     data2 key1 key2
0  0.733951  0.000379    a  one
4  0.286072 -0.074574    a  one
---------------------------------------------------------------------------
('a', 'two')
---------------------------------------------------------------------------
      data1    data2 key1 key2
1  1.039029  0.85293    a  two
---------------------------------------------------------------------------
('b', 'one')
---------------------------------------------------------------------------
      data1     data2 key1 key2
2  0.921413 -1.644942    b  one
---------------------------------------------------------------------------
('b', 'two')
---------------------------------------------------------------------------
     data1     data2 key1 key2
3  0.29456  0.521525    b  two
---------------------------------------------------------------------------

The output is identical. To summarize:

data.groupby([data['key1'], data['key2']]) is equivalent to data.groupby(['key1', 'key2'])

Going further:

data['data1'].groupby([data['key1'], data['key2']]) is equivalent to data.groupby(['key1', 'key2'])['data1']

Exercise 3: verify that data['data1'].groupby([data['key1'], data['key2']]) is equivalent to data.groupby(['key1', 'key2'])['data1'].

data.groupby(['key1', 'key2'])['data1']
<pandas.core.groupby.SeriesGroupBy object at 0x000001D0FCD95D68>
data.groupby(['key1', 'key2'])[['data1']]
<pandas.core.groupby.DataFrameGroupBy object at 0x000001D080232898>

Notice that these two expressions return different types. Let's look closer:

data[['data1']]  # this is a DataFrame
      data1
0  0.733951
1  1.039029
2  0.921413
3  0.294560
4  0.286072
data['data1']  # this is a Series
0    0.733951
1    1.039029
2    0.921413
3    0.294560
4    0.286072
Name: data1, dtype: float64

The difference should be clear by now.

groups = data.groupby(['key1', 'key2'])[['data1']]

for name, group in groups:
    print(name)
    print(sep)
    print(type(group))
    print(sep)
('a', 'one')
---------------------------------------------------------------------------
<class 'pandas.core.frame.DataFrame'>
---------------------------------------------------------------------------
('a', 'two')
---------------------------------------------------------------------------
<class 'pandas.core.frame.DataFrame'>
---------------------------------------------------------------------------
('b', 'one')
---------------------------------------------------------------------------
<class 'pandas.core.frame.DataFrame'>
---------------------------------------------------------------------------
('b', 'two')
---------------------------------------------------------------------------
<class 'pandas.core.frame.DataFrame'>
---------------------------------------------------------------------------

The grouping itself is the same; only the type of each slice differs.

data.groupby(['key1', 'key2'])[['data1']].mean()
              data1
key1 key2          
a    one   0.510012
     two   1.039029
b    one   0.921413
     two   0.294560
data.groupby(['key1', 'key2'])['data1'].mean()
key1  key2
a     one     0.510012
      two     1.039029
b     one     0.921413
      two     0.294560
Name: data1, dtype: float64

A difference appears when aggregating:

[['data1']] gives a DataFrame, while ['data1'] gives a Series.

Grouping by dict

Here is an example of grouping by a dict:

data = DataFrame(np.random.randn(5, 5), columns=['a', 'b', 'c', 'd', 'e'], index=['joe', 'steve', 'wes', 'jim', 'Travis'])
data
               a         b         c         d         e
joe    -0.089597  1.239307  2.173063 -0.519295 -1.783812
steve   0.539109  0.724553 -0.041899  0.787494  0.394633
wes    -0.055417  0.384068 -0.594006 -0.451587  0.722761
jim    -0.056767  0.398863  2.140669 -1.060791 -0.953756
Travis  0.245142 -0.468819 -0.863372 -0.151966  1.185567
# Define a grouping dict: a, b, c --> red; d, e --> blue
mapping = {'a': 'red', 'b': 'red', 'c': 'red', 'd': 'blue', 'e': 'blue'}
data.groupby(mapping, axis=1).mean()   # mean of each group of columns
            blue       red
joe    -1.151554  1.107591
steve   0.591063  0.407255
wes     0.135587 -0.088452
jim    -1.007273  0.827589
Travis  0.516800 -0.362350

Exercise 4: design a mapping on the index and group along axis=0.

Grouping by function

Straight to an example:

data.groupby(len).mean()
          a         b         c         d         e
3 -0.067260  0.674079  1.239909 -0.677224 -0.671602
5  0.539109  0.724553 -0.041899  0.787494  0.394633
6  0.245142 -0.468819 -0.863372 -0.151966  1.185567

Note that dicts and functions are both applied to the index (here, the len of each index label).
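To make that point concrete, grouping by a function is the same as first applying the function to each index label and grouping by the resulting list. A small sketch (variable names are my own):

```python
import pandas as pd
from pandas import DataFrame
import numpy as np

np.random.seed(0)
data = DataFrame(np.random.randn(5, 5), columns=['a', 'b', 'c', 'd', 'e'],
                 index=['joe', 'steve', 'wes', 'jim', 'Travis'])

# Function applied to index labels...
by_func = data.groupby(len).mean()
# ...versus the same mapping precomputed as a list
by_list = data.groupby([len(i) for i in data.index]).mean()

print(by_func.equals(by_list))  # True
```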

Grouping by list

This example is very simple:

data.groupby(['1', '1', '1', '2', '2']).mean()
          a         b         c         d         e
1  0.131365  0.782643  0.512386 -0.061130 -0.222139
2  0.094188 -0.034978  0.638649 -0.606378  0.115905

pandas decides automatically whether the argument names a column or is a list of group labels.

Grouping by index level

Exercise 5: study grouping by index level on your own.
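As a head start on the exercise: grouping by an index level uses the level= argument on a frame with a MultiIndex. A minimal sketch (the level names and data here are made up):

```python
import pandas as pd
from pandas import DataFrame
import numpy as np

np.random.seed(0)
index = pd.MultiIndex.from_arrays(
    [['a', 'a', 'b', 'b'], ['one', 'two', 'one', 'two']],
    names=['key1', 'key2'])
hier = DataFrame(np.random.randn(4, 2), columns=['data1', 'data2'], index=index)

# Group rows by the outer level of the MultiIndex
out = hier.groupby(level='key1').mean()
print(out)
```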

Grouped computations

Grouped computations mainly involve three functions: agg, transform, and apply.

Let's take them one at a time.

agg
data = DataFrame({"key1": ['a', 'a', 'b', 'b', 'a'],
                  "key2": ['one', 'two', 'one', 'two', 'one'],
                  'data1': np.random.randn(5),
                  'data2': np.random.randn(5)})
data
      data1     data2 key1 key2
0  0.441278 -0.848457    a  one
1  1.843375 -0.522482    a  two
2 -1.435176 -0.191682    b  one
3 -2.700772 -0.832993    b  two
4 -1.430386 -1.910834    a  one
data.groupby("key2").agg(np.mean)
         data1     data2
key2                    
one  -0.808095 -0.983658
two  -0.428699 -0.677738

This is, of course, equivalent to:

data.groupby("key2").mean()
         data1     data2
key2                    
one  -0.808095 -0.983658
two  -0.428699 -0.677738

How it works: the aggregation function is evaluated on each slice of each group (each column, or each row, of the group) to produce a single value.

We can also pass a custom function:

data.groupby("key2").agg(lambda x: x.max() - x.min())
         data1     data2
key2                    
one   1.876454  1.719153
two   4.544147  0.310511

The function is called on each column of every group.

data.groupby("key2").agg([np.mean, np.max,np.min])
         data1                         data2                    
          mean      amax      amin      mean      amax      amin
key2                                                            
one  -0.808095  0.441278 -1.435176 -0.983658 -0.191682 -1.910834
two  -0.428699  1.843375 -2.700772 -0.677738 -0.522482 -0.832993
data.groupby("key2").agg([("平均值:", np.mean), ("最大值", np.max), ("最小值", np.min)]).rename({"one": "第一组", "two": "第二组"})
         data1                         data2                    
          平均值:      最大值       最小值      平均值:      最大值       最小值
key2                                                            
第一组  -0.808095  0.441278 -1.435176 -0.983658 -0.191682 -1.910834
第二组  -0.428699  1.843375 -2.700772 -0.677738 -0.522482 -0.832993
# Use a different aggregation for each column
data.groupby("key2").agg({"data1": [("平均值:", np.mean), ("最大值", np.max)], "data2": [("最小值", np.min)]}).rename({"one": "第一组", "two": "第二组"})
         data2      data1          
           最小值       平均值:      最大值
key2                              
第一组 -1.910834  -0.808095  0.441278
第二组 -0.832993  -0.428699  1.843375
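The list-of-tuples renaming style above still works, but newer pandas versions also offer named aggregation, where each keyword names an output column. A sketch (the output column names here are my own):

```python
import pandas as pd
from pandas import DataFrame
import numpy as np

np.random.seed(0)
data = DataFrame({"key1": ['a', 'a', 'b', 'b', 'a'],
                  "key2": ['one', 'two', 'one', 'two', 'one'],
                  'data1': np.random.randn(5),
                  'data2': np.random.randn(5)})

# Each keyword is an output column: (input column, aggregation function)
result = data.groupby("key2").agg(
    data1_mean=("data1", "mean"),
    data1_max=("data1", "max"),
    data2_min=("data2", "min"),
)
print(result)
```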
transform

transform is vectorized: if the function returns something smaller than the group slice (e.g. a scalar), the result is broadcast back to the slice's shape:

data
      data1     data2 key1 key2
0  0.441278 -0.848457    a  one
1  1.843375 -0.522482    a  two
2 -1.435176 -0.191682    b  one
3 -2.700772 -0.832993    b  two
4 -1.430386 -1.910834    a  one
data.groupby("key1").transform(np.mean)
      data1     data2
0  0.284756 -1.093924
1  0.284756 -1.093924
2 -2.067974 -0.512338
3 -2.067974 -0.512338
4  0.284756 -1.093924

Look closely: rows 0, 1, 4 form one group and rows 2, 3 the other; each group's mean was broadcast across its rows.

Now a common task: subtract the group mean within each group.

data.groupby("key1").transform(lambda x: x - x.mean())
      data1     data2
0  0.156523  0.245468
1  1.558619  0.571442
2  0.632798  0.320656
3 -0.632798 -0.320656
4 -1.715142 -0.816910

Every column within groups a and b now has its group mean subtracted. Don't believe it? Check:

data.groupby("key1").transform(lambda x: x - x.mean()).groupby([1, 1, 0, 0, 1]).mean()
          data1         data2
0  1.110223e-16 -5.551115e-17
1  7.401487e-17 -1.110223e-16
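A classic use of transform is within-group standardization: subtract the group mean and divide by the group standard deviation. A minimal sketch, rebuilt on a fresh sample frame:

```python
import pandas as pd
from pandas import DataFrame
import numpy as np

np.random.seed(0)
data = DataFrame({"key1": ['a', 'a', 'b', 'b', 'a'],
                  'data1': np.random.randn(5),
                  'data2': np.random.randn(5)})

# z-score each numeric column within its key1 group
zscores = data.groupby("key1")[['data1', 'data2']].transform(
    lambda x: (x - x.mean()) / x.std())

# Within each group, the standardized columns have mean ~0 and std ~1
print(zscores.groupby(data['key1']).mean())
print(zscores.groupby(data['key1']).std())
```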
apply

This function is a more powerful version of transform: transform must return a result the same size as its input slice, while apply may return anything. We have actually used apply before; recall that apply works on columns (or rows), while applymap works element-wise.

data = DataFrame({"key1": ['a', 'a', 'b', 'b', 'a'],
                  "key2": ['one', 'two', 'one', 'two', 'one'],
                  'data1': np.random.randn(5),
                  'data2': np.random.randn(5)})
data
      data1     data2 key1 key2
0 -0.312694  0.073574    a  one
1 -0.902065 -0.854249    a  two
2 -0.440915  0.228551    b  one
3 -0.406243 -0.878505    b  two
4  1.812926 -0.114598    a  one

What if we want the two rows with the largest data2 within each of the one and two groups?

data.groupby('key2').apply(lambda x: x.sort_values(by='data2')[-2:])
           data1     data2 key1 key2
key2                                
one  0 -0.312694  0.073574    a  one
     2 -0.440915  0.228551    b  one
two  3 -0.406243 -0.878505    b  two
     1 -0.902065 -0.854249    a  two

To drop the group keys from the index:

data.groupby('key2', group_keys=False).apply(lambda x: x.sort_values(by='data2')[-2:])
      data1     data2 key1 key2
0 -0.312694  0.073574    a  one
2 -0.440915  0.228551    b  one
3 -0.406243 -0.878505    b  two
1 -0.902065 -0.854249    a  two

To summarize: apply calls our function on each group slice (by row, by column, or on the slice as a whole) and then glues the results back together.

Using groupby to process a DataFrame with multiple processes

Here we show a groupby trick for processing a DataFrame in parallel.

pip install joblib

Because of limitations on Windows, the code below was run on Linux:


import math
from joblib import Parallel, delayed
from pandas import DataFrame
import pandas as pd
import numpy as np
import time

begin = time.time()
test = DataFrame(np.random.randn(10000000, 10))
test_other = test.copy()
# Split the rows into 8 chunks by taking the row index modulo 8
groups = test.groupby(lambda x: x % 8)

def func(x):
    return x.applymap(lambda y: math.pow(y, 4))

# Process the 8 chunks in parallel, then reassemble the result
pd.concat(Parallel(n_jobs=8)(delayed(func)(group) for name, group in groups))
print(time.time() - begin)


begin = time.time()
# Same computation in a single process, for comparison
test_other.applymap(lambda x: math.pow(x, 4))
print(time.time() - begin)

The timings were:
23.35878014564514
62.76386260986328

Roughly a 2.5x speedup; not bad at all.
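The groupby(lambda x: x % 8) trick is just one way to cut the frame into chunks; slicing with iloc achieves the same partitioning more directly, and the chunks can then be handed to any worker pool. A sketch (run sequentially here so it works anywhere; in practice, replace the list comprehension with the Parallel call above):

```python
import numpy as np
import pandas as pd
from pandas import DataFrame

test = DataFrame(np.random.randn(1000, 10))

def func(chunk):
    # Same per-chunk work as above: elementwise 4th power
    return chunk ** 4

# Split into 8 roughly equal row-wise chunks, process each, then reassemble
n_chunks = 8
bounds = np.linspace(0, len(test), n_chunks + 1, dtype=int)
chunks = [test.iloc[bounds[i]:bounds[i + 1]] for i in range(n_chunks)]
result = pd.concat([func(c) for c in chunks])

# The chunked result matches processing the whole frame at once
print(result.equals(test ** 4))  # True
```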

Reposted from: https://www.cnblogs.com/songfy/p/8449920.html
