Cohort Analysis Using Python

Cohort analysis groups the users acquired in a given time period into a cohort and then compares some attribute across multiple cohorts over time. It is very useful in certain scenarios. For example, suppose a website or app rolls out an update or a new feature in each of four consecutive weeks, and you want to know how those changes affected users. You can treat each week's new registrations as a cohort and track the behavior of these four cohorts over the following period; the impact of each of the four changes then becomes clearly visible.
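
As a toy illustration of that weekly setup (not from the original article; the events table and the column names signup_date and active_date are made up for this sketch), assigning users to weekly signup cohorts and counting how many of each cohort are still active n weeks later could look roughly like this:

import pandas as pd

# Hypothetical event log: one row per user activity, with each user's signup date
events = pd.DataFrame({
    'user_id':     [1, 1, 2, 2, 3],
    'signup_date': pd.to_datetime(['2023-01-02', '2023-01-02', '2023-01-09',
                                   '2023-01-09', '2023-01-16']),
    'active_date': pd.to_datetime(['2023-01-02', '2023-01-12', '2023-01-09',
                                   '2023-01-23', '2023-01-16']),
})

# Cohort = the week a user signed up; week_n = whole weeks since signup
events['cohort'] = events.signup_date.dt.to_period('W')
events['week_n'] = (events.active_date - events.signup_date).dt.days // 7

# Distinct active users per weekly cohort, by weeks since signup
weekly = events.groupby(['cohort', 'week_n']).user_id.nunique().unstack('week_n')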

I recently needed to do a cohort analysis. Since all the data sits in a database, my first thought was to connect to the database from Python and run the analysis there. A Google search turned up a very clearly written article, which I followed to implement the analysis in Python. The sample data can be downloaded here.

1. Read the data

It is said that the from ... import ... style has drawbacks in terms of performance, so plain import is generally recommended instead. The sample data is a typical purchase log; customers are split into cohorts by the date of their first purchase.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

%matplotlib inline
df = pd.read_excel('relay-foods.xlsx', sheet_name='Purchase Data')
df.head()
   OrderId  OrderDate  UserId  TotalCharges CommonId  PupId PickupDate
0      262 2009-01-11      47         50.67    TRQKD      2 2009-01-12
1      278 2009-01-20      47         26.60    4HH2S      3 2009-01-20
2      294 2009-02-03      47         38.71    3TRDC      2 2009-02-04
3      301 2009-02-06      47         53.38    NGAZJ      2 2009-02-09
4      302 2009-02-06      47         14.28    FFYHD      2 2009-02-09

2. Determine the month of each OrderDate and assign cohorts

Create two new columns, OrderPeriod and CohortGroup. OrderPeriod is the year-month of the purchase, and CohortGroup assigns each UserId to a cohort based on the month of that user's first purchase.

# OrderPeriod: the year-month of each order
# (chained assign; equivalent to df['OrderPeriod'] = df.OrderDate.map(lambda x: x.strftime('%Y-%m')))
df = df.assign(OrderPeriod = df.OrderDate.map(lambda x: x.strftime('%Y-%m'))) \
       .set_index('UserId')

# groupby(level=0): level refers to an index level; with a MultiIndex, level=0 or 1
# picks which index to group by. Here the index is UserId, so this groups orders by user.
df = df.assign(CohortGroup = df.groupby(level=0).OrderDate.min().apply(lambda x: x.strftime('%Y-%m'))) \
       .reset_index()

df.head()
   UserId  OrderId  OrderDate  TotalCharges CommonId  PupId PickupDate OrderPeriod CohortGroup
0      47      262 2009-01-11         50.67    TRQKD      2 2009-01-12     2009-01     2009-01
1      47      278 2009-01-20         26.60    4HH2S      3 2009-01-20     2009-01     2009-01
2      47      294 2009-02-03         38.71    3TRDC      2 2009-02-04     2009-02     2009-01
3      47      301 2009-02-06         53.38    NGAZJ      2 2009-02-09     2009-02     2009-01
4      47      302 2009-02-06         14.28    FFYHD      2 2009-02-09     2009-02     2009-01
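
As a side note, the set_index / groupby(level=0) / reset_index round trip above can be avoided: groupby('UserId') together with transform('min') returns each user's earliest OrderDate aligned row by row, so CohortGroup can be computed in one step. A minimal sketch of that equivalent alternative:

# Each row gets its user's earliest order date, formatted as a year-month string
df['CohortGroup'] = df.groupby('UserId').OrderDate.transform('min').dt.strftime('%Y-%m')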

3. Count the users of each CohortGroup in each OrderPeriod

# pd.Series.nunique --> Return number of unique elements in the object.
cohorts = df.groupby(['CohortGroup', 'OrderPeriod']) \
            .agg({'UserId': pd.Series.nunique,
                 'OrderId': pd.Series.nunique,
                 'TotalCharges': 'sum'})
cohorts.rename(columns={'UserId': 'TotalUsers', 'OrderId': 'TotalOrders'}, inplace=True)
cohorts.head()
                         TotalCharges  TotalOrders  TotalUsers
CohortGroup OrderPeriod
2009-01     2009-01          1850.255           30          22
            2009-02          1351.065           25           8
            2009-03          1357.360           26          10
            2009-04          1604.500           28           9
            2009-05          1575.625           26          10
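
On newer pandas versions (0.25 and later), the same aggregation can be written with named aggregation, which also makes the separate rename call unnecessary. A minimal sketch of that variant (column order aside, the result is the same):

cohorts = df.groupby(['CohortGroup', 'OrderPeriod']).agg(
    TotalUsers=('UserId', 'nunique'),      # distinct users per cohort and period
    TotalOrders=('OrderId', 'nunique'),    # distinct orders per cohort and period
    TotalCharges=('TotalCharges', 'sum'),  # revenue per cohort and period
)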

4. Label each CohortGroup's cohort periods

For example, for the 2009-01 cohort, the first period is 2009-01 and the second through fifth periods are 2009-02, ..., 2009-05. We need to map each CohortGroup's OrderPeriods onto period numbers 1, 2, ...

def cohort_period(df):
    # number the rows of each cohort 1, 2, 3, ... (they are already sorted by OrderPeriod)
    df['CohortPeriod'] = np.arange(len(df)) + 1
    return df

cohorts = cohorts.groupby(level=0).apply(cohort_period)
cohorts.head()
                         TotalCharges  TotalOrders  TotalUsers  CohortPeriod
CohortGroup OrderPeriod
2009-01     2009-01          1850.255           30          22             1
            2009-02          1351.065           25           8             2
            2009-03          1357.360           26          10             3
            2009-04          1604.500           28           9             4
            2009-05          1575.625           26          10             5

The level=0 above effectively means group by CohortGroup, after which cohort_period is applied to each group. The result of the groupby looks like this:

[(k, v) for k, v in cohorts.head(5).groupby(level=0)]
[('2009-01',
                           TotalCharges  TotalOrders  TotalUsers  CohortPeriod
  CohortGroup OrderPeriod                                                     
  2009-01     2009-01          1850.255           30          22             1
              2009-02          1351.065           25           8             2
              2009-03          1357.360           26          10             3
              2009-04          1604.500           28           9             4
              2009-05          1575.625           26          10             5)]
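
The same numbering can also be obtained without apply: within each CohortGroup the rows are already ordered by OrderPeriod, so groupby(level=0).cumcount() counts them 0, 1, 2, ... and adding 1 gives the CohortPeriod. A minimal equivalent sketch:

# cumcount() numbers the rows within each CohortGroup starting from 0
cohorts['CohortPeriod'] = cohorts.groupby(level=0).cumcount() + 1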

5. Verify that the cohort assignment is correct

x = df[(df.CohortGroup=='2009-01') & (df.OrderPeriod=='2009-01')]
y = cohorts.loc[('2009-01', '2009-01')]  # .ix is deprecated; use label-based .loc

assert(x.UserId.nunique()==y.TotalUsers)
assert(x.OrderId.nunique()==y.TotalOrders)
assert(x.TotalCharges.sum()==y.TotalCharges)

x = df[(df.CohortGroup=='2009-03') & (df.OrderPeriod=='2009-05')]
y = cohorts.loc[('2009-03', '2009-05')]

assert(x.UserId.nunique()==y.TotalUsers)
assert(x.OrderId.nunique()==y.TotalOrders)
assert(x.TotalCharges.sum()==y.TotalCharges)
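
The same three checks can be wrapped in a small helper so that any (CohortGroup, OrderPeriod) cell is easy to verify. A minimal sketch (check_cell is a hypothetical helper name, not from the original article):

def check_cell(group, period):
    # Recompute the counts from the raw data and compare with the aggregated table
    x = df[(df.CohortGroup == group) & (df.OrderPeriod == period)]
    y = cohorts.loc[(group, period)]
    assert x.UserId.nunique() == y.TotalUsers
    assert x.OrderId.nunique() == y.TotalOrders
    assert x.TotalCharges.sum() == y.TotalCharges

check_cell('2009-01', '2009-01')
check_cell('2009-03', '2009-05')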

Retention for each CohortGroup

6. Compute the number of users in each CohortGroup's first CohortPeriod

cohorts = cohorts.reset_index() \
                 .set_index(['CohortGroup', 'CohortPeriod'])

cohort_group_size = cohorts.TotalUsers.groupby(level=0).first()
cohort_group_size
CohortGroup
2009-01     22
2009-02     15
2009-03     13
2009-04     39
2009-05     50
2009-06     32
2009-07     50
2009-08     31
2009-09     37
2009-10     54
2009-11    130
2009-12     65
2010-01     95
2010-02    100
2010-03     24
Name: TotalUsers, dtype: int64

7. Compute the retention rate for each CohortPeriod

user_retention = cohorts.TotalUsers.unstack(0).divide(cohort_group_size, axis=1)
user_retention.head()
CohortGroup    2009-01   2009-02   2009-03   2009-04  2009-05  2009-06  2009-07   2009-08   2009-09   2009-10   2009-11   2009-12   2010-01  2010-02  2010-03
CohortPeriod
1             1.000000  1.000000  1.000000  1.000000     1.00  1.00000     1.00  1.000000  1.000000  1.000000  1.000000  1.000000  1.000000     1.00      1.0
2             0.363636  0.200000  0.307692  0.333333     0.26  0.46875     0.46  0.354839  0.405405  0.314815  0.246154  0.261538  0.526316     0.19      NaN
3             0.454545  0.333333  0.384615  0.256410     0.24  0.28125     0.26  0.290323  0.378378  0.222222  0.200000  0.276923  0.273684      NaN      NaN
4             0.409091  0.066667  0.307692  0.333333     0.10  0.18750     0.20  0.225806  0.216216  0.240741  0.223077  0.107692       NaN      NaN      NaN
5             0.454545  0.266667  0.076923  0.153846     0.08  0.21875     0.22  0.193548  0.351351  0.240741  0.100000       NaN       NaN      NaN      NaN
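
To unpack that one-liner: unstack(0) pivots CohortGroup out of the row index into the columns, so each column holds one cohort's TotalUsers by CohortPeriod, and divide(cohort_group_size, axis=1) matches cohort_group_size against those columns and divides each cohort's counts by its first-period size. Because aligning a Series against the columns is the default for DataFrame arithmetic, the same thing can be written as a plain division; a minimal equivalent sketch:

# Plain division aligns cohort_group_size with the columns (the cohorts) by default
user_retention = cohorts['TotalUsers'].unstack(0) / cohort_group_size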

Retention curves

user_retention[['2009-01', '2009-05', '2009-08']] \
  .plot(figsize=(11, 6), color=['#4285f4', '#EA4335', '#A60628'])
plt.title("Cohorts: User Retention")
plt.xticks(np.arange(1, len(user_retention)+1, 1))
plt.xlim(1, len(user_retention))
plt.ylabel('% of Cohort Purchasing', fontsize=16)

[Figure: Cohorts: User Retention — retention curves for the 2009-01, 2009-05, and 2009-08 cohorts]

Retention heat map

import seaborn as sns
sns.set(style='white')

plt.figure(figsize=(16, 8))
plt.title('Cohorts: User Retention', fontsize=14)
sns.heatmap(user_retention.T, 
            mask=user_retention.T.isnull(), 
            annot=True, fmt='.0%')

[Figure: Cohorts: User Retention — heat map of retention rates by cohort and cohort period]

Reposted from: https://www.cnblogs.com/zzbyy/p/5759136.html
