A/B Testing the Udacity Website

In these exercises, we’ll be analyzing data on user behavior from an experiment run by Udacity, the online education company. More specifically, we’ll be looking at a test Udacity ran to improve the onboarding process on their site.

Udacity’s test is an example of an “A/B” test, in which some portion of users visiting a website (or using an app) are randomly selected to see a new version of the site. An analyst can then compare the behavior of users who see the new website design to that of users who see the normal website to estimate the effect of rolling out the proposed changes to all users. While this kind of experiment has its own name in industry (A/B testing), it is, to be clear, just a randomized experiment, so everything we’ve learned about potential outcomes and randomized experiments applies here.

(Udacity has generously provided the data from this test under an Apache open-source license, and you can find their original writeup here. If you’re interested in learning more about A/B testing in particular, it seems only fair, while we use their data, to flag that they have a full course on the subject here.)

Udacity’s Test

The test is described by Udacity as follows:

At the time of this experiment, Udacity courses currently have two options on the course overview page: “start free trial”, and “access course materials”.

Current Conditions Before Change

  • If the student clicks “start free trial”, they will be asked to enter their credit card information, and then they will be enrolled in a free trial for the paid version of the course. After 14 days, they will automatically be charged unless they cancel first.

  • If the student clicks “access course materials”, they will be able to view the videos and take the quizzes for free, but they will not receive coaching support or a verified certificate, and they will not submit their final project for feedback.

Description of Experimented Change

  • In the experiment, Udacity tested a change where if the student clicked “start free trial”, they were asked how much time they had available to devote to the course.

  • If the student indicated 5 or more hours per week, they would be taken through the checkout process as usual. If they indicated fewer than 5 hours per week, a message would appear indicating that Udacity courses usually require a greater time commitment for successful completion, and suggesting that the student might like to access the course materials for free.

  • At this point, the student would have the option to continue enrolling in the free trial, or access the course materials for free instead. This screenshot shows what the experiment looks like.

Udacity’s hope is that:

this might set clearer expectations for students upfront, thus reducing the number of frustrated students who left the free trial because they didn’t have enough time – without significantly reducing the number of students to continue past the free trial and eventually complete the course. If this hypothesis held true, Udacity could improve the overall student experience and improve coaches’ capacity to support students who are likely to complete the course.

Gradescope Autograding

Please follow all standard guidance for submitting this assignment to the Gradescope autograder, including storing your solutions in a dictionary called results and ensuring your notebook runs from the start to completion without any errors.

For this assignment, please name your file exercise_abtesting.ipynb before uploading.

You can check that you have answers for all questions in your results dictionary with this code:

assert set(results.keys()) == {
    "ex4_avg_oec",
    "ex5_avg_guardrail",
    "ex7_ttest_pvalue",
    "ex9_ttest_pvalue_clicks",
    "ex10_num_obs",
    "ex11_guard_ate",
    "ex11_guard_pvalue",
    "ex11_oec_ate",
    "ex11_oec_pvalue",
    "ex14_se_treatment",
}

Submission Limits

Please remember that you are only allowed FOUR submissions to the autograder. Your last submission (if you submit 4 or fewer times), or your fourth submission (if you submit more than 4 times), will determine your grade. Submissions that error out will not count against this total.

That’s one more than usual in case there are issues with exercise clarity.

Import the Data

Exercise 1

Begin by importing Udacity’s data on user behavior here.

There are TWO datasets for this test — one for the control data (users who saw the original design), and one for the treatment data (users who saw the experimental design). Udacity decided to show their test site to half of visitors, so there are roughly the same number of users in each dataset (though this is not a requirement of A/B tests).

Please remember to load the data directly from GitHub to assist the autograder.

[1]:
import pandas as pd
import numpy as np

pd.set_option("mode.copy_on_write", True)
[2]:
control = pd.read_csv(
    "https://media.githubusercontent.com/media/nickeubank/"
    "MIDS_Data/master/udacity_AB_testing/control_data.csv"
)
treat = pd.read_csv(
    "https://media.githubusercontent.com/media/nickeubank/"
    "MIDS_Data/master/udacity_AB_testing/experiment_data.csv"
)

Exercise 2

Explore the data. Can you identify the unit of observation of the data (e.g. what is represented by each row)?

To be clear, the columns represent stages in a user funnel:

  • Some number of users arrive at the website and are counted as Pageviews,

  • Some portion of those users then click to enroll (and are counted as clicks),

  • Some portion of those users then actually enroll in the free trial (after seeing an informational popup, in the case of treatment individuals),

  • Finally some portion of those users end up paying at the end of the free trial period.

(Note this is not the only way that A/B test data can be collected and/or reported — this is just what Udacity provided, presumably to help address privacy concerns.)

Each row is the behavior of all of the users who arrived at the website on a given day and were assigned to a given treatment arm.

Pick your measures

Exercise 3

The easiest way to analyze this data is to stack it into a single dataset where each observation is a day-treatment-arm (so you should end up with two rows per day, one for users in the treatment group and one for users in the control group). Note that currently nothing in the data identifies whether a given observation belongs to the treatment group or the control group, so you’ll want to make sure to add a “treatment” indicator variable.

The variables in the data are:

  • Pageviews: number of unique users visiting homepage

  • Clicks: number of those users clicking “Start Free Trial”

  • Enrollments: Number of people enrolling in trial

  • Payments: Number of people who eventually pay for the service. Note the payment column reports payments for the users who first visited the site on the reported date, not payments occurring on the reported date.

[3]:
control["treatment"] = 0
treat["treatment"] = 1
users = pd.concat([control, treat])
users.head()
[3]:
Date Pageviews Clicks Enrollments Payments treatment
0 Sat, Oct 11 7723 687 134.0 70.0 0
1 Sun, Oct 12 9102 779 147.0 70.0 0
2 Mon, Oct 13 10511 909 167.0 95.0 0
3 Tue, Oct 14 9871 836 156.0 105.0 0
4 Wed, Oct 15 10014 837 163.0 64.0 0
[4]:
users.tail()
[4]:
Date Pageviews Clicks Enrollments Payments treatment
32 Wed, Nov 12 10042 802 NaN NaN 1
33 Thu, Nov 13 9721 829 NaN NaN 1
34 Fri, Nov 14 9304 770 NaN NaN 1
35 Sat, Nov 15 8668 724 NaN NaN 1
36 Sun, Nov 16 8988 710 NaN NaN 1
[5]:
# Make sure it worked well!
assert len(users) == len(control) + len(treat)

Exercise 4

Given Udacity’s goals, what outcome are they hoping will be impacted by their manipulation?

Or, to ask the same question in the language of the Potential Outcomes Framework, what is their \(Y\)?

Or to ask the same question in the language of Kohavi, Tang and Xu, what is their Overall Evaluation Criterion (OEC)?

(I’m only asking one question; I’m just phrasing it using the different terminologies we’ve encountered to help you see how they all fit together.)

When you feel like you have your answer, please compute it. Store the average value of the variable in results under the key ex4_avg_oec. Please round your answer to 4 decimal places.

NOTE: You’ll probably notice you have two choices to make when it comes to actually computing the OEC.

  • You could probably imagine either computing a ratio or a difference of two things — please calculate the difference.

  • You may also be unsure whether to normalize by Clicks. Normalizing by clicks will help account for variation that comes from day-to-day variation in users, so it’s a good thing to do. With infinite data, you’d expect to get the same results without normalizing by Clicks (since on average the same share of users are in each arm of the experiment), but for finite data it’s a good strategy. Note that this is only ok because users make the choice to click or not before they see different versions of the website (it is “pre-treatment”).

Just to make sure you’re on track, your measure should have an average value of about 9%.

Udacity wants a decline in the number of people who enroll in a trial but don’t end up continuing (paying) at the end of the trial. So first we want the number of enrollments that end in non-payment per person who clicks “Start Free Trial”.

Note that because “clicks” is pre-treatment and we have the same number of people in both treatment arms, on average we have the same number of clicks in both groups, so you don’t technically have to normalize.

It’s also OK to just look at the share of Enrollments that end up as payments.
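For reference, here is a minimal sketch of that alternative measure (the name share_paying_among_enrolled is just illustrative, and this is not the value the autograder expects for this exercise):

# Alternative (illustrative) metric: of the users who enroll in the free trial,
# what share ends up paying? NaN rows (later dates) are skipped by .mean().
share_paying_among_enrolled = users["Payments"] / users["Enrollments"]
print(
    f"Average share of enrollments converting to payment: "
    f"{share_paying_among_enrolled.mean():.4f}"
)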

[6]:
users["enroll_but_nopayment_per_click"] = (
    users["Enrollments"] - users["Payments"]
) / users["Clicks"]

results = {}
results["ex4_avg_oec"] = np.round(users["enroll_but_nopayment_per_click"].mean(), 4)
print(f"Mean value of our OEC is {results['ex4_avg_oec']:.4f}")
Mean value of our OEC is 0.0941

Exercise 5

Given Udacity’s goals, what outcome are they hoping will not be impacted by their manipulation? In other words, what do they want to measure to ensure their treatment doesn’t have unintended negative consequences that might be really costly to their operation?

Note that this isn’t quite how Kohavi, Tang, and Xu use the term “guardrail metrics” (they usually reserve it for things we measure to ensure the experiment itself is working the way it should). Still, some people also use “guardrail metrics” for outcomes that could be affected even when the experiment is working correctly, but which the organization wants to track to make sure they aren’t harmed, because they are deemed really important.

Again, please normalize by Clicks. Store the average value of this guardrail metric as ex5_avg_guardrail and round your answer to 4 decimal places.

[7]:
# Udacity doesn't want to see a decline in payments, so we also want to measure
# the conversion rate from clicks to payments. Ideally, this won't change.

# Again, dividing by clicks is optional.

users["payments_per_click"] = users["Payments"] / users["Clicks"]

results["ex5_avg_guardrail"] = np.round(users["payments_per_click"].mean(), 4)
print(
    f"Mean value of our guardrail metric (payments per click) is {results['ex5_avg_guardrail']:.4f}"
)
Mean value of our guardrail metric (payments per click) is 0.1158

Validating The Data

Exercise 6

Whenever you are working with experimental data, the first thing you want to do is verify that users actually were randomly sorted into the two arms of the experiment. In this data, half of users were supposed to be shown the old version of the site and half were supposed to see the new version.

Pageviews tells you how many unique users visited the welcome site we are experimenting on. Pageviews is what is sometimes called an “invariant” or “guardrail” variable, meaning that it shouldn’t vary across treatment arms—after all, people have to visit the site before they get a chance to see the treatment, so there’s no way that being assigned to treatment or control should affect the number of pageviews assigned to each group.

“Invariant” variables are also an example of what are known as “pre-treatment” variables, because pageviews are determined before users are manipulated in any way. That makes them analogous to gender or age in experiments where you have demographic data—a person’s age and gender are determined before they experience any manipulations, so the value of any pre-treatment attributes should be the same across the two arms of our experiment. This is what we’ve previously called “checking for balance.” If pre-treatment attributes aren’t balanced, then we may worry our attempt to randomly assign people to different groups failed. Kohavi, Tang and Xu call this a “trust-based guardrail metric” because it helps us determine whether we should trust our data.

To test the quality of the randomization, calculate the average number of pageviews for the treated group and for the control group. Do they look similar?

[8]:
print(
    f"Avg num pageviews for treated users per day was: "
    f"{users.loc[users['treatment'] == 1, 'Pageviews'].mean():.1f}"
)
Avg num pageviews for treated users per day was: 9315.1
[9]:
print(
    f"Avg num pageviews for untreated users per day was:"
    f" {users.loc[users['treatment'] == 0, 'Pageviews'].mean():.1f}"
)
Avg num pageviews for untreated users per day was: 9339.0

Exercise 7

“Similar” is a tricky concept – obviously, we expect some differences across groups since users were randomly divided across treatment arms. The question is whether the differences between groups are larger than we’d expect to emerge given our random assignment process. To evaluate this, let’s use a t-test to test the statistical significance of the differences we see.

Note: Remember that scipy functions don’t always accept pandas objects, so if you use a scipy function, you may need to pass the numpy arrays underlying your data using the .values attribute (e.g., df.my_column.values).

Does the difference in pageviews look statistically significant?

Store the resulting p-value in ex7_ttest_pvalue rounded to four decimal places.

[10]:
from scipy.stats import ttest_ind

ttest_result = ttest_ind(
    users.loc[users["treatment"] == 1, "Pageviews"].values,
    users.loc[users["treatment"] == 0, "Pageviews"].values,
)

results["ex7_ttest_pvalue"] = np.round(ttest_result.pvalue, 4)
print(
    f"the p-value of the difference is {results['ex7_ttest_pvalue']:.4f},"
    " so not at all significant"
)
the p-value of the difference is 0.8877, so not at all significant
[11]:
# p-value of 0.8877 -- nope! Not statistically different at all.

Exercise 8

Pageviews is not the only “pre-treatment” variable in this data we can use to evaluate balance/use as a guardrail metric. What other measure is pre-treatment? Review the description of the experiment if you’re not sure.

[12]:
# Clicks. The experiment only changes what happens
# AFTER people click "Start Free Trial", so clicks are "pre-treatment".

Exercise 9

Check if the other pre-treatment variable is also balanced. Store the p-value of your test of difference in results under the key "ex9_ttest_pvalue_clicks" rounded to four decimal places.

[13]:
ttest_result_clicks = ttest_ind(
    users.loc[users["treatment"] == 1, "Clicks"].values,
    users.loc[users["treatment"] == 0, "Clicks"].values,
)

results["ex9_ttest_pvalue_clicks"] = np.round(ttest_result_clicks.pvalue, 4)
print(
    f"the p-value of the difference in clicks is {results['ex9_ttest_pvalue_clicks']:.4f},"
    " so not at all significant"
)
the p-value of the difference in clicks is 0.9264, so not at all significant

Estimating the Effect of Experiment

Exercise 10

Now that we’ve validated our randomization, our next task is to estimate our treatment effect. First, though, there’s an issue with your data you’ve been able to largely ignore until now, but which you should get a grip on before estimating your treatment effect — can you tell what it is and what you should do about it?

Store the number of observations in your data after you’ve addressed this in ex10_num_obs (this is mostly meant as a way to sanity check your answer with the autograder).

Because free trials last 14 days, payment outcomes aren’t yet available for users who arrived during the last two weeks of data collection; those rows have missing (NaN) Enrollments and Payments. So we drop them.

[14]:
users = users[pd.notnull(users.Enrollments)]
results["ex10_num_obs"] = len(users)
print(f"There are {results['ex10_num_obs']:,.0f} observations.")
There are 46 observations.

Exercise 11

Now that we’ve established we have good balance (meaning we think randomization was likely successful), we can evaluate the effects of the experiment. Test whether the OEC and the metric you don’t want affected have different average values in the control group and treatment group.

Because we’ve randomized, this is a consistent estimate of the Average Treatment Effect of Udacity’s website change.

Calculate the difference in means in your OEC and guardrail metrics using a simple t-test. Store the resulting effect estimates in ex11_oec_ate and ex11_guard_ate and p-values in ex11_oec_pvalue and ex11_guard_pvalue. Please round all answers to 4 decimal places. Report your ATE in percentage points, where 1 denotes 1 percentage point.

[15]:
keys = {
    "enroll_but_nopayment_per_click": "ex11_oec_",
    "payments_per_click": "ex11_guard_",
}

for i in ["enroll_but_nopayment_per_click", "payments_per_click"]:
    wo_change = users.loc[users["treatment"] == 0, i].mean()
    w_change = users.loc[users["treatment"] == 1, i].mean()
    pvalue = ttest_ind(
        users.loc[users["treatment"] == 1, i].values,
        users.loc[users["treatment"] == 0, i].values,
    ).pvalue

    results[keys[i] + "pvalue"] = np.round(pvalue, 4)

    results[keys[i] + "ate"] = np.round((wo_change - w_change) * 100, 4)

    print(
        f"decline from original to experiment in {i} is "
        f"{results[keys[i] + 'ate']:.4f} percentage points"
    )
    print(f"Two-tailed p-value (default) of difference is {pvalue:.3f}")
    print(f"So one-tailed p-value (appropriate here) of difference is {pvalue / 2:.3f}")
    print("\n")
decline from original to experiment in enroll_but_nopayment_per_click is 1.5888 percentage points
Two-tailed p-value (default) of difference is 0.132
So one-tailed p-value (appropriate here) of difference is 0.066


decline from original to experiment in payments_per_click is 0.4897 percentage points
Two-tailed p-value (default) of difference is 0.593
So one-tailed p-value (appropriate here) of difference is 0.296


Exercise 12

Do you feel that Udacity achieved their goal? Did their intervention cause them any problems? If they asked you “What would happen if we rolled this out to everyone?” what would you say?

As you answer this question, a small additional question: up until this point you’ve (presumably) been reporting the default p-values from the tools you are using. These, as you may recall from stats 101, are two-tailed p-values. Do those seem appropriate for your OEC?

[16]:
# So we do see a decrease in the share of people who click "Start Free Trial",
# enroll, but then don't pay, and it's borderline significant
# (p-value < 0.1 if we use a one-tailed test).

# But we don't see a significant decline in the share of people who
# click who end up paying at the end of the trial,
# so the change was basically successful: it mostly filtered out
# people who weren't really interested.

# It seems like everything is going in the expected direction. However
# before rolling out something that affects something as important
# as on-boarding, I would run a longer trial to get a little greater
# statistical significance...
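If you want to report those one-tailed p-values explicitly, here is a minimal sketch that reuses the p-values already stored in results (halving the two-tailed p-value is only appropriate because the estimated effects go in the hypothesized direction):

# One-tailed p-values, computed from the stored two-tailed values.
# Valid here only because the estimated effects are in the hypothesized direction.
one_tailed_oec = results["ex11_oec_pvalue"] / 2
one_tailed_guard = results["ex11_guard_pvalue"] / 2
print(f"One-tailed p-value for the OEC: {one_tailed_oec:.3f}")
print(f"One-tailed p-value for the guardrail metric: {one_tailed_guard:.3f}")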

Exercise 13

One of the magic things about experiments is that all you have to do is compare averages to get an average treatment effect. However, you can do other things to try and increase the statistical power of your experiments, like add controls in a linear regression model.

As you likely know, a bivariate regression is exactly equivalent to a t-test, so let’s start by re-estimating the effect of treatment on your OEC using a linear regression. Can you replicate the results from your t-test? They shouldn’t just be close—they should be numerically equivalent (i.e. exactly the same to the limits of floating point number precision).

[17]:
import statsmodels.formula.api as smf

smf.ols("enroll_but_nopayment_per_click ~ treatment", users).fit().summary()
[17]:
OLS Regression Results
Dep. Variable: enroll_but_nopayment_per_click R-squared: 0.051
Model: OLS Adj. R-squared: 0.029
Method: Least Squares F-statistic: 2.356
Date: Sat, 02 Mar 2024 Prob (F-statistic): 0.132
Time: 19:10:08 Log-Likelihood: 89.832
No. Observations: 46 AIC: -175.7
Df Residuals: 44 BIC: -172.0
Df Model: 1
Covariance Type: nonrobust
coef std err t P>|t| [0.025 0.975]
Intercept 0.1021 0.007 13.948 0.000 0.087 0.117
treatment -0.0159 0.010 -1.535 0.132 -0.037 0.005
Omnibus: 14.160 Durbin-Watson: 1.908
Prob(Omnibus): 0.001 Jarque-Bera (JB): 15.205
Skew: 1.227 Prob(JB): 0.000499
Kurtosis: 4.383 Cond. No. 2.62


Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.

Exercise 14

Now add indicator variables for the date of each observation. Do the standard errors on your treatment variable change? If so, in what direction?

Store your new standard error in ex14_se_treatment. Round your answer to 4 decimal places.

[18]:
import statsmodels.formula.api as smf

reg_fit = smf.ols("enroll_but_nopayment_per_click ~ treatment + C(Date)", users).fit()
reg_fit.summary()
[18]:
OLS Regression Results
Dep. Variable: enroll_but_nopayment_per_click R-squared: 0.806
Model: OLS Adj. R-squared: 0.602
Method: Least Squares F-statistic: 3.962
Date: Sat, 02 Mar 2024 Prob (F-statistic): 0.000978
Time: 19:10:08 Log-Likelihood: 126.29
No. Observations: 46 AIC: -204.6
Df Residuals: 22 BIC: -160.7
Df Model: 23
Covariance Type: nonrobust
coef std err t P>|t| [0.025 0.975]
Intercept 0.1079 0.016 6.651 0.000 0.074 0.142
C(Date)[T.Fri, Oct 24] 0.0445 0.022 1.983 0.060 -0.002 0.091
C(Date)[T.Fri, Oct 31] -0.0074 0.022 -0.331 0.744 -0.054 0.039
C(Date)[T.Mon, Oct 13] -0.0231 0.022 -1.026 0.316 -0.070 0.024
C(Date)[T.Mon, Oct 20] -0.0285 0.022 -1.270 0.217 -0.075 0.018
C(Date)[T.Mon, Oct 27] 0.0328 0.022 1.458 0.159 -0.014 0.079
C(Date)[T.Sat, Nov 1] -0.0235 0.022 -1.047 0.306 -0.070 0.023
C(Date)[T.Sat, Oct 11] -0.0017 0.022 -0.074 0.941 -0.048 0.045
C(Date)[T.Sat, Oct 18] -0.0438 0.022 -1.950 0.064 -0.090 0.003
C(Date)[T.Sat, Oct 25] -0.0309 0.022 -1.375 0.183 -0.077 0.016
C(Date)[T.Sun, Nov 2] 0.0549 0.022 2.441 0.023 0.008 0.101
C(Date)[T.Sun, Oct 12] -0.0347 0.022 -1.542 0.137 -0.081 0.012
C(Date)[T.Sun, Oct 19] -0.0178 0.022 -0.791 0.437 -0.064 0.029
C(Date)[T.Sun, Oct 26] -0.0222 0.022 -0.989 0.333 -0.069 0.024
C(Date)[T.Thu, Oct 16] -0.0228 0.022 -1.016 0.321 -0.069 0.024
C(Date)[T.Thu, Oct 23] -0.0046 0.022 -0.203 0.841 -0.051 0.042
C(Date)[T.Thu, Oct 30] 0.0588 0.022 2.618 0.016 0.012 0.105
C(Date)[T.Tue, Oct 14] -0.0417 0.022 -1.855 0.077 -0.088 0.005
C(Date)[T.Tue, Oct 21] -0.0059 0.022 -0.260 0.797 -0.052 0.041
C(Date)[T.Tue, Oct 28] -0.0287 0.022 -1.276 0.215 -0.075 0.018
C(Date)[T.Wed, Oct 15] -0.0132 0.022 -0.588 0.562 -0.060 0.033
C(Date)[T.Wed, Oct 22] -0.0220 0.022 -0.980 0.338 -0.069 0.025
C(Date)[T.Wed, Oct 29] 0.0466 0.022 2.076 0.050 4.77e-05 0.093
treatment -0.0159 0.007 -2.398 0.025 -0.030 -0.002
Omnibus: 3.871 Durbin-Watson: 1.863
Prob(Omnibus): 0.144 Jarque-Bera (JB): 3.826
Skew: -0.000 Prob(JB): 0.148
Kurtosis: 4.413 Cond. No. 27.3


Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[19]:
results["ex14_se_treatment"] = np.round(reg_fit.bse["treatment"], 4)
print(f"The standard error is {results['ex14_se_treatment']:.4f}")
The standard error is 0.0066

You should have found that your standard errors decreased by about 30%. This is why, although just comparing means works, adding additional variables to your analysis can be helpful when they are available (all the usual rules for model specification still apply; for example, you still want to be careful about overfitting, which one could argue is part of what’s happening here).

In many other cases, the effect of adding controls is likely to be larger. The date indicators we added here are perfectly balanced between treatment and control, so we aren’t adding much information to the model by including them. They account for some day-to-day variation (presumably in the types of people coming to the site), but they aren’t controlling for any residual baseline differences the way a control like gender or age might (since those kinds of individual-level attributes will never be perfectly balanced across treatment and control).
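If you want to check that balance claim directly, a quick sketch (using the users dataframe from above; date_counts is just an illustrative name) is to cross-tabulate dates against the treatment indicator; every date should appear exactly once in each arm:

# Each date should appear exactly once in the control column and once in the
# treatment column, i.e., the date indicators are perfectly balanced by design.
date_counts = pd.crosstab(users["Date"], users["treatment"])
assert (date_counts == 1).all().all()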

Exercise 15

Does this result have any impact on the recommendations you would offer Udacity?

With the date controls added, the estimated effect on the OEC is statistically significant at conventional levels, which makes for a stronger case that it’s OK to roll this out!

[20]:
sorted(set(results.keys()))
[20]:
['ex10_num_obs',
 'ex11_guard_ate',
 'ex11_guard_pvalue',
 'ex11_oec_ate',
 'ex11_oec_pvalue',
 'ex14_se_treatment',
 'ex4_avg_oec',
 'ex5_avg_guardrail',
 'ex7_ttest_pvalue',
 'ex9_ttest_pvalue_clicks']
[21]:
len(results.keys())
[21]:
10
[22]:
assert set(results.keys()) == {
    "ex4_avg_oec",
    "ex5_avg_guardrail",
    "ex7_ttest_pvalue",
    "ex9_ttest_pvalue_clicks",
    "ex10_num_obs",
    "ex11_guard_ate",
    "ex11_guard_pvalue",
    "ex11_oec_ate",
    "ex11_oec_pvalue",
    "ex14_se_treatment",
}