A/B Test Data Analysis in Python: A Worked Example

Analyze A/B Test Results

This project will help ensure you have mastered the subjects covered in the statistics lessons. The hope is for this project to cover these topics as comprehensively as possible. Good luck!

Table of Contents

  • Introduction
  • Part I - Probability
  • Part II - A/B Test
  • Part III - Regression

 

Introduction

A/B tests are very commonly performed by data analysts and data scientists. It is important that you get some practice working with the difficulties of these tests.

For this project, you will be working to understand the results of an A/B test run by an e-commerce website. Your goal is to work through this notebook to help the company understand if they should implement the new page, keep the old page, or perhaps run the experiment longer to make their decision.

As you work through this notebook, follow along in the classroom and answer the corresponding quiz questions associated with each question. The labels for each classroom concept are provided for each question. This will help ensure you are on the right track as you work through the project, and you can feel more confident that your final submission meets the criteria. As a final check, make sure you meet all the criteria in the RUBRIC.

 

Part I - Probability

To get started, let's import our libraries.

In [1]:
import pandas as pd
import numpy as np
import random
import matplotlib.pyplot as plt
%matplotlib inline

# We are setting the seed to assure you get the same answers on quizzes as we set up
np.random.seed(42)
 

1. Now, read in the ab_data.csv data. Store it in df. Use your dataframe to answer the questions in Quiz 1 of the classroom.

a. Read in the dataset and take a look at the top few rows here:

In [2]:
df = pd.read_csv('ab_data.csv')
df.head()
Out[2]:
   user_id                   timestamp      group landing_page  converted
0   851104  2017-01-21 22:11:48.556739    control     old_page          0
1   804228  2017-01-12 08:01:45.159739    control     old_page          0
2   661590  2017-01-11 16:55:06.154213  treatment     new_page          0
3   853541  2017-01-08 18:28:03.143765  treatment     new_page          0
4   864975  2017-01-21 01:52:26.210827    control     old_page          1
 

b. Use the below cell to find the number of rows in the dataset.

In [3]:
df.shape
Out[3]:
(294478, 5)
 

c. The number of unique users in the dataset.

In [4]:
df['user_id'].nunique() 
Out[4]:
290584
 

d. The proportion of users converted.

In [5]:
df['converted'].mean() 
Out[5]:
0.11965919355605512
 

e. The number of times the new_page and treatment don't line up.

In [6]:
treatment_df = df[df['group'] == 'treatment']
a = sum(treatment_df['landing_page'] != 'new_page')
new_page_df = df[df['landing_page'] == 'new_page']
b = sum(new_page_df['group'] != 'treatment')
a + b
Out[6]:
3893
 

f. Do any of the rows have missing values?

In [7]:
df.info()
 
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 294478 entries, 0 to 294477
Data columns (total 5 columns):
user_id         294478 non-null int64
timestamp       294478 non-null object
group           294478 non-null object
landing_page    294478 non-null object
converted       294478 non-null int64
dtypes: int64(2), object(3)
memory usage: 11.2+ MB
 

No rows have missing values.

 

2. For the rows where treatment is not aligned with new_page or control is not aligned with old_page, we cannot be sure if this row truly received the new or old page. Use Quiz 2 in the classroom to provide how we should handle these rows.

a. Now use the answer to the quiz to create a new dataset that meets the specifications from the quiz. Store your new dataframe in df2.

 

It's reasonable to use only rows where we feel confident about the data, so we will remove the conflicting rows.

In [8]:
# Conflicting rows to drop
rows_to_drop1 = df[(df['group'] == 'treatment') & (df['landing_page'] != 'new_page')]
rows_to_drop2 = df[(df['group'] != 'treatment') & (df['landing_page'] == 'new_page')]
len(rows_to_drop1) + len(rows_to_drop2), len(df)
df2 = df.drop(rows_to_drop1.index)
df2 = df2.drop(rows_to_drop2.index)
len(df2)
Out[8]:
290585
In [9]:
# Double Check all of the correct rows were removed - this should be 0
df2[((df2['group'] == 'treatment') == (df2['landing_page'] == 'new_page')) == False].shape[0] 
Out[9]:
0
 

3. Use df2 and the cells below to answer questions for Quiz3 in the classroom.

 

a. How many unique user_ids are in df2?

In [10]:
df2['user_id'].nunique() 
Out[10]:
290584
 

b. There is one user_id repeated in df2. What is it?

In [11]:
sum(df2.duplicated(subset='user_id'))
list(df2[df2.duplicated(subset='user_id')]['user_id'])
Out[11]:
[773192]
 

c. What is the row information for the repeat user_id?

In [12]:
df2[df2.duplicated(subset='user_id')] 
Out[12]:
      user_id                   timestamp      group landing_page  converted
2893   773192  2017-01-14 02:55:59.590927  treatment     new_page          0
 

d. Remove one of the rows with a duplicate user_id, but keep your dataframe as df2.

In [13]:
# Since there is consistency for this user_id,
# we can choose either row and remove it;
# we shouldn't count the same user more than once.

df2.drop_duplicates(subset='user_id', inplace=True)
len(df2)
Out[13]:
290584
 

4. Use df2 in the below cells to answer the quiz questions related to Quiz 4 in the classroom.

a. What is the probability of an individual converting regardless of the page they receive?

In [41]:
# No.of conversions / total no. of individuals
df2['converted'].mean() 
Out[41]:
0.11959708724499628
 

b. Given that an individual was in the control group, what is the probability they converted?

In [42]:
control_gp = df2[df2['group'] == 'control']
control_gp['converted'].mean()
Out[42]:
0.1203863045004612
 

c. Given that an individual was in the treatment group, what is the probability they converted?

In [43]:
treatment_gp = df2[df2['group'] == 'treatment']
treatment_gp['converted'].mean()
Out[43]:
0.11880806551510564
 

d. What is the probability that an individual received the new page?

In [44]:
new_pg = df2[df2['landing_page'] == 'new_page']
len(new_pg) / len(df2)
Out[44]:
0.5000619442226688
 

e. Consider your results from a. through d. above, and explain below whether you think there is sufficient evidence to say that the new treatment page leads to more conversions.

 

The probability of converting in the control group is about 0.1204 and in the treatment group about 0.1188, which are nearly equal. The old page's conversion rate is higher than the new page's by only a very slight margin, so these descriptive results alone are not sufficient evidence that the new page leads to more conversions.
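A quick sketch of the gap described above, reusing df2 from the cells above (values are approximate):

# Observed difference in conversion rates between treatment and control (sketch)
control_rate = df2[df2['group'] == 'control']['converted'].mean()
treatment_rate = df2[df2['group'] == 'treatment']['converted'].mean()
treatment_rate - control_rate   # about -0.0016: a very small gap in favor of the old page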

 

 

Part II - A/B Test

Notice that because of the time stamp associated with each event, you could technically run a hypothesis test continuously as each observation was observed.

However, then the hard question is do you stop as soon as one page is considered significantly better than another or does it need to happen consistently for a certain amount of time? How long do you run to render a decision that neither page is better than another?

These questions are the difficult parts associated with A/B tests in general.

1. For now, consider you need to make the decision just based on all the data provided. If you want to assume that the old page is better unless the new page proves to be definitely better at a Type I error rate of 5%, what should your null and alternative hypotheses be? You can state your hypothesis in terms of words or in terms of p_old and p_new, which are the converted rates for the old and new pages.

 

H0: p_new ≤ p_old
H1: p_new > p_old
α = 0.05

 

 

2. Assume under the null hypothesis, p_new and p_old both have "true" success rates equal to the converted success rate regardless of page - that is, p_new and p_old are equal. Furthermore, assume they are equal to the converted rate in ab_data.csv regardless of the page.

Use a sample size for each page equal to the ones in ab_data.csv. 

Perform the sampling distribution for the difference in converted between the two pages over 10,000 iterations of calculating an estimate from the null. 

Use the cells below to provide the necessary parts of this simulation. If this doesn't make complete sense right now, don't worry - you are going to work through the problems below to complete this problem. You can use Quiz 5 in the classroom to make sure you are on the right track.

 

a. What is the convert rate for p_new under the null?

In [18]:
P_new = df2['converted'].mean()
P_new
Out[18]:
0.11959708724499628
 

b. What is the convert rate for p_old under the null?

In [19]:
P_old = df2['converted'].mean()
P_old
Out[19]:
0.11959708724499628
 

c. What is n_new?

In [20]:
new_df = df2[df2['landing_page'] == 'new_page']
n_new = len(new_df)
n_new
Out[20]:
145310
 

d. What is n_old?

In [21]:
old_df = df2[df2['landing_page'] == 'old_page']
n_old = len(old_df)
n_old
Out[21]:
145274
 

e. Simulate n_new transactions with a convert rate of p_new under the null. Store these n_new 1's and 0's in new_page_converted.

In [22]:
# Simulating n_new 1's and 0's with P_new probability of success
np.random.seed(42)
new_page_converted = np.random.binomial(n=1, p=P_new, size=n_new)
new_page_converted
Out[22]:
array([0, 1, 0, ..., 0, 0, 0])
 

f. Simulate n_old transactions with a convert rate of p_old under the null. Store these n_old 1's and 0's in old_page_converted.

In [23]:
# Simulating n_old 1's and 0's with P_old probability of success
old_page_converted = np.random.binomial(n=1, p=P_old, size=n_old)
old_page_converted
Out[23]:
array([0, 0, 0, ..., 0, 0, 1])
 

g. Find p_new - p_old for your simulated values from parts (e) and (f).

In [24]:
p_diff_obs = new_page_converted.mean() - old_page_converted.mean()
p_diff_obs
Out[24]:
-0.0023973022979572739
 

h. Simulate 10,000 p_new - p_old values using the same process you used in parts a. through g. above. Store all 10,000 values in a NumPy array called p_diffs.

In [25]:
p_diffs=[]
for i in range(10000):
    new_page_converted = np.random.binomial(n=1, p=P_new, size=n_new)
    old_page_converted = np.random.binomial(n=1, p=P_old, size=n_old)
    diff = new_page_converted.mean() - old_page_converted.mean()
    p_diffs.append(diff)
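As an aside, the loop above can be replaced by a vectorized draw of binomial counts; a minimal sketch, assuming P_new, P_old, n_new, and n_old as defined above (p_diffs_fast is an illustrative name, not from the original notebook):

# Draw 10,000 conversion counts per page at once and convert them to rate differences
new_counts = np.random.binomial(n_new, P_new, 10000)
old_counts = np.random.binomial(n_old, P_old, 10000)
p_diffs_fast = new_counts / n_new - old_counts / n_old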
 

i. Plot a histogram of the p_diffs. Does this plot look like what you expected? Use the matching problem in the classroom to assure you fully understand what was computed here.

In [26]:
plt.hist(p_diffs); 
 
 

j. What proportion of the p_diffs are greater than the actual difference observed in ab_data.csv?

In [27]:
# Calculation of p value
new_conv = df2[df2['group'] == 'treatment']['converted'].mean()
old_conv = df2[df2['group'] == 'control']['converted'].mean()
act_diff = new_conv - old_conv
p_diffs = np.array(p_diffs)
pval = (p_diffs > act_diff).mean()
pval
Out[27]:
0.90100000000000002
 

k. In words, explain what you just computed in part j. What is this value called in scientific studies? What does this value mean in terms of whether or not there is a difference between the new and old pages?

 

The value calculated is the p-value: the probability, assuming the null hypothesis is true, of observing our statistic or one more extreme in the direction of the alternative.
In this case, the large p-value means we fail to reject the null; the old page's conversion rate appears to be at least as good as the new page's.
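One way to make part j concrete is to mark the observed difference on the simulated null distribution; a sketch, assuming p_diffs and act_diff from the cell above:

# The area of the histogram to the right of the dashed line corresponds to the p-value
plt.hist(p_diffs)
plt.axvline(act_diff, color='red', linestyle='--')
plt.xlabel('simulated p_new - p_old under the null')
plt.ylabel('count');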

 

l. We could also use a built-in to achieve similar results. Though using the built-in might be easier to code, the above portions are a walkthrough of the ideas that are critical to correctly thinking about statistical significance. Fill in the cell below to calculate the number of conversions for each page, as well as the number of individuals who received each page. Let n_old and n_new refer to the number of rows associated with the old and new pages, respectively.

In [28]:
import statsmodels.api as sm

convert_old = sum(old_df.converted)   # count of 1's for the old page
convert_new = sum(new_df.converted)   # count of 1's for the new page
n_old = len(old_df)
n_new = len(new_df)
 
C:\Users\cdivy\Anaconda3\lib\site-packages\statsmodels\compat\pandas.py:56: FutureWarning: The pandas.core.datetools module is deprecated and will be removed in a future version. Please use the pandas.tseries module instead.
  from pandas.core import datetools
 

m. Now use stats.proportions_ztest to compute your test statistic and p-value. Here is a helpful link on using the built in.

In [29]:
z_score, p_value = sm.stats.proportions_ztest([convert_new, convert_old], [n_new, n_old], alternative='larger')
(z_score, p_value)
Out[29]:
(-1.3109241984234394, 0.90505831275902449)
In [30]:
# Critical Z score value for a one tailed test at confidence level of 95%
from scipy.stats import norm
print(norm.ppf(1-0.05)) 
 
1.64485362695
In [31]:
# Tells how significant z_score is:
print(norm.cdf(z_score)) 
 
0.094941687241
 

n. What do the z-score and p-value you computed in the previous question mean for the conversion rates of the old and new pages? Do they agree with the findings in parts j. and k.?

 

The p-value agrees with the finding in part j, suggesting that we fail to reject the null; the old page's conversion rate appears to be at least as good as the new page's.
The absolute value of the z-score tells us how many standard deviations the observed statistic lies from the value expected under the null. The z-score obtained here is less than the critical z-score (about 1.645), so again we fail to reject the null.
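As a sanity check (a sketch, using z_score from the cell above): for the one-sided 'larger' alternative, the p-value from proportions_ztest is the upper-tail area beyond the z-score under the standard normal.

from scipy.stats import norm

1 - norm.cdf(z_score)   # about 0.905, matching p_value above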

 

 

Part III - A regression approach

1. In this final part, you will see that the result you achieved in the previous A/B test can also be achieved by performing regression.

a. Since each row is either a conversion or no conversion, what type of regression should you be performing in this case?

 

Logistic Regression

 

b. The goal is to use statsmodels to fit the regression model you specified in part a. to see if there is a significant difference in conversion based on which page a customer receives. However, you first need to create a column for the intercept, and create a dummy variable column for which page each user received. Add an intercept column, as well as an ab_page column, which is 1 when an individual receives the treatment and 0 if control.

In [32]:
df2['intercept'] = 1
df2[['control', 'treatment']] = pd.get_dummies(df2['group'])
df2.head()
df2.rename(columns={'treatment': 'ab_page'}, inplace=True)
 

c. Use statsmodels to import your regression model. Instantiate the model, and fit the model using the two columns you created in part b. to predict whether or not an individual converts.

In [33]:
from scipy import stats
stats.chisqprob = lambda chisq, df: stats.chi2.sf(chisq, df)  # re-add chisqprob for compatibility with older statsmodels summaries
import statsmodels.api as sm

predictors = ['intercept', 'ab_page']
logit_mod = sm.Logit(df2['converted'], df2[predictors])
results = logit_mod.fit()
 
Optimization terminated successfully.
         Current function value: 0.366118
         Iterations 6
 

d. Provide the summary of your model below, and use it as necessary to answer the following questions.

In [34]:
results.summary()
Out[34]:
Logit Regression Results
Dep. Variable:    converted          No. Observations:   290584
Model:            Logit              Df Residuals:       290582
Method:           MLE                Df Model:           1
Date:             Sun, 31 Dec 2017   Pseudo R-squ.:      8.077e-06
Time:             19:55:40           Log-Likelihood:     -1.0639e+05
converged:        True               LL-Null:            -1.0639e+05
                                     LLR p-value:        0.1899

               coef   std err         z      P>|z|    [0.025    0.975]
intercept   -1.9888     0.008  -246.669      0.000    -2.005    -1.973
ab_page     -0.0150     0.011    -1.311      0.190    -0.037     0.007
 

e. What is the p-value associated with ab_page? Why does it differ from the value you found in Part II?

Hint: What are the null and alternative hypotheses associated with your regression model, and how do they compare to the null and alternative hypotheses in the Part II?

 

The p-value associated with ab_page is 0.190, which differs from the value found in Part II (about 0.905). The large p-value still leads to the same conclusion: we fail to reject the null, so there is no evidence that the new page performs better.

The p-value computed earlier was for a one-sided test, whereas the regression p-value is for a two-sided test; hence the difference. The null and alternative hypotheses in the regression case are:

H0: p_new = p_old
H1: p_new ≠ p_old

Here the alternative is "not equal", a two-sided test, while the hypotheses in the Part II A/B test were one-sided. A rough arithmetic check of how the two p-values relate follows below.
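Because the estimated ab_page coefficient is negative (the observed effect goes against the alternative that the new page is better), the one-sided p-value is roughly 1 minus half the two-sided p-value; a rough check using the values above:

# Two-sided p-value from the regression summary vs. one-sided p-value from Part II
two_sided_p = 0.190
1 - two_sided_p / 2   # about 0.905, close to the one-sided p-value computed earlier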

 

 

f. Now, you are considering other things that might influence whether or not an individual converts. Discuss why it is a good idea to consider other factors to add into your regression model. Are there any disadvantages to adding additional terms into your regression model?

 

It is a good idea to consider other factors because they may help avoid Simpson's paradox: taking them into account can bring to light patterns we missed, offer another way of looking at the data, and even change the inference. Change aversion and novelty effects may influence conversion rates, so it would be useful to create features that capture them, and the duration of the experiment is also a major factor; a sketch of one such extra feature follows below. Potential disadvantages: if too many terms are added with too little data, the model may not converge (less of a concern here given the large dataset), and additional terms also make the coefficients harder to interpret.
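As an illustration of adding such a factor, here is a minimal sketch that derives a time-based feature from the timestamp and includes it in the logistic regression; the 'weekend' column name is illustrative and not part of the original notebook, and df2, sm, and the intercept/ab_page columns are assumed from the cells above.

# Flag observations that fall on a Saturday or Sunday and add the flag as a predictor
ts = pd.to_datetime(df2['timestamp'])
df2['weekend'] = (ts.dt.dayofweek >= 5).astype(int)

Xvars = ['intercept', 'ab_page', 'weekend']
sm.Logit(df2['converted'], df2[Xvars]).fit().summary()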

 

g. Now, along with testing if the conversion rate changes for different pages, also add an effect based on the country a user lives in. You will need to read in the countries.csv dataset and merge your datasets on the appropriate rows. Here are the docs for joining tables.

Does it appear that country had an impact on conversion? Don't forget to create dummy variables for these country columns - Hint: You will need two columns for the three dummy variables. Provide the statistical output as well as a written response to answer this question.

In [35]:
countries_df = pd.read_csv('./countries.csv')
df_new = countries_df.set_index('user_id').join(df2.set_index('user_id'), how='inner')
df_new.head()
Out[35]:
         country                   timestamp      group landing_page  converted  intercept  control  ab_page
user_id
834778        UK  2017-01-14 23:08:43.304998    control     old_page          0          1        1        0
928468        US  2017-01-23 14:44:16.387854  treatment     new_page          0          1        0        1
822059        UK  2017-01-16 14:04:14.719771  treatment     new_page          1          1        0        1
711597        UK  2017-01-22 03:14:24.763511    control     old_page          0          1        1        0
710616        UK  2017-01-16 13:14:44.000513  treatment     new_page          0          1        0        1
In [36]:
### Effect of country and page on conversion
import statsmodels.api as sm
df_new[['CA', 'US']] = pd.get_dummies(df_new['country'])[['CA', 'US']]
df_new['intercept'] = 1
Xvars = ['intercept', 'CA', 'US', 'ab_page']
lm = sm.Logit(df_new['converted'], df_new[Xvars])
results = lm.fit()
results.summary()
 
Optimization terminated successfully.
         Current function value: 0.366113
         Iterations 6
Out[36]:
Logit Regression Results
Dep. Variable:    converted          No. Observations:   290584
Model:            Logit              Df Residuals:       290580
Method:           MLE                Df Model:           3
Date:             Sun, 31 Dec 2017   Pseudo R-squ.:      2.323e-05
Time:             19:55:41           Log-Likelihood:     -1.0639e+05
converged:        True               LL-Null:            -1.0639e+05
                                     LLR p-value:        0.1760

               coef   std err         z      P>|z|    [0.025    0.975]
intercept   -1.9794     0.013  -155.415      0.000    -2.004    -1.954
CA          -0.0506     0.028    -1.784      0.074    -0.106     0.005
US          -0.0099     0.013    -0.743      0.457    -0.036     0.016
ab_page     -0.0149     0.011    -1.307      0.191    -0.037     0.007
 

The baseline is the UK. The p-values for CA and US suggest that country is not statistically significant in determining conversions at an alpha of 0.05; a sketch for reading the coefficients as odds ratios follows below.
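To read the coefficients above on a more interpretable scale, one could exponentiate them to get odds multipliers relative to the UK baseline; a small sketch, assuming results is the fitted model from the previous cell:

# exp(coef) gives the multiplicative change in conversion odds vs. the UK baseline
np.exp(results.params)
1 / np.exp(results.params)   # reciprocal view, easier to read for negative coefficients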

 

h. Though you have now looked at the individual factors of country and page on conversion, we would now like to look at an interaction between page and country to see if there are significant effects on conversion. Create the necessary additional columns, and fit the new model.

Provide the summary results, and your conclusions based on the results.

In [37]:
### Model to look at interaction between page and country
# Note: this cell regresses ab_page on country, i.e. it checks whether page assignment
# is related to country; see the sketch further below for explicit interaction terms.
df_new['intercept'] = 1
Xvars = ['intercept', 'CA', 'US']
lm = sm.Logit(df_new['ab_page'], df_new[Xvars])
results = lm.fit()
results.summary()
 
Optimization terminated successfully.
         Current function value: 0.760413
         Iterations 3
Out[37]:
Logit Regression Results
Dep. Variable:    ab_page            No. Observations:   290584
Model:            Logit              Df Residuals:       290581
Method:           MLE                Df Model:           2
Date:             Sun, 31 Dec 2017   Pseudo R-squ.:      -0.1223
Time:             19:55:41           Log-Likelihood:     -2.2096e+05
converged:        True               LL-Null:            -1.9688e+05
                                     LLR p-value:        1.000

               coef   std err         z      P>|z|    [0.025    0.975]
intercept   -0.0070     0.007    -0.944      0.345    -0.022     0.008
CA           0.0212     0.018     1.166      0.244    -0.014     0.057
US           0.0088     0.009     1.023      0.306    -0.008     0.026
 

The p-values for the country terms are large, indicating that they are not statistically significant here. A sketch of a model with explicit page-by-country interaction terms follows below.
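For completeness, a page-by-country interaction model could be fit by multiplying the page dummy with the country dummies; a minimal sketch, assuming df_new from above (the CA_page and US_page column names are illustrative, not from the original notebook):

# Interaction terms: does the effect of the new page differ by country?
df_new['CA_page'] = df_new['CA'] * df_new['ab_page']
df_new['US_page'] = df_new['US'] * df_new['ab_page']

Xvars = ['intercept', 'ab_page', 'CA', 'US', 'CA_page', 'US_page']
sm.Logit(df_new['converted'], df_new[Xvars]).fit().summary()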

 

Optional work

In [39]:
# Convert timestamp to datetime object
df2['timestamp'] = pd.to_datetime(df['timestamp'], format='%Y-%m-%d %H:%M:%S')
df2.dtypes
Out[39]:
user_id                  int64
timestamp       datetime64[ns]
group                   object
landing_page            object
converted                int64
intercept                int64
control                  uint8
ab_page                  uint8
dtype: object
In [40]:
# Duration of experiment
sorted_time = df2['timestamp'].sort_values()
sorted_time[0] - sorted_time[len(sorted_time)-1]
Out[40]:
Timedelta('14 days 07:41:40.548145')
 

 

Conclusions


  • Based on the one-tailed hypothesis test, we fail to reject the null hypothesis (the old page is better than or equal to the new page). There is no significant difference in the conversion rates between the old and new pages, so the recommendation is to retain the old version of the page.
  • The analysis and results are limited to the available data; change aversion and novelty effects may influence the results, and practical considerations may also need to be factored into the decision.
  • Logistic regression gives results that agree with the A/B test. Country does not appear to be a useful factor in the regression model, and the low pseudo R-squared values suggest the fit is weak; the model could be revised for better predictions.
  • The computed duration of the experiment is 14 days. Time may be a major factor in the conclusions, so it would be good to analyze the data for different subsets of time and compare how conversion rates differ between the control and treatment groups (see the sketch below).
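A sketch of that follow-up, assuming df2['timestamp'] has already been converted to datetime as in the optional cell above (the 'date' column name is illustrative):

# Daily conversion rate for control vs. treatment over the experiment window
df2['date'] = df2['timestamp'].dt.date
daily = df2.groupby(['date', 'group'])['converted'].mean().unstack()
daily.plot(figsize=(10, 4));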

References

  • Project walkthrough, Stack Overflow posts, Udacity course content, and Slack forums

 

Reposted from: https://www.cnblogs.com/peterhuang1977/p/9862429.html
