{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# A/B Testing the Udacity Website\n", "\n", "In these exercises, we'll be analyzing data on user behavior from an experiment run by Udacity, the online education company. More specifically, we'll be looking at a test Udacity ran to improve the onboarding process on their site.\n", "\n", "Udacity's test is an example of an \"A/B\" test, in which some portion of users visiting a website (or using an app) are randomly selected to see a new version of the site. An analyst can then compare the behavior of users who see a new website design to users seeing their normal website to estimate the effect of rolling out the proposed changes to all users. While this kind of experiment has it's own name in industry (A/B testing), to be clear it's just a randomized experiment, and so everything we've learned about potential outcomes and randomized experiments apply here. \n", "\n", "(Udacity has generously provides the data from this test under an Apache open-source license, and you can find their [original writeup here](https://www.kaggle.com/tammyrotem/ab-tests-with-python/notebook). If you're interested in learning more on A/B testing in particular, it seems only fair while we use their data to flag they have a full course on the subject [here](https://www.udacity.com/course/ab-testing--ud257).)\n", "\n", "## Udacity's Test\n", "\n", "The test [is described by Udacity as follows](https://www.kaggle.com/tammyrotem/ab-tests-with-python/notebook): \n", "\n", "At the time of this experiment, Udacity courses currently have two options on the course overview page: \"start free trial\", and \"access course materials\".\n", "\n", "**Current Conditions Before Change**\n", "\n", "- If the student clicks \"start free trial\", they will be asked to enter their credit card information, and then they will be enrolled in a free trial for the paid version of the course. After 14 days, they will automatically be charged unless they cancel first.\n", "- If the student clicks \"access course materials\", they will be able to view the videos and take the quizzes for free, but they will not receive coaching support or a verified certificate, and they will not submit their final project for feedback.\n", "\n", "**Description of Experimented Change**\n", "\n", "- In the experiment, Udacity tested a change where if the student clicked \"start free trial\", they were asked how much time they had available to devote to the course.\n", "- If the student indicated 5 or more hours per week, they would be taken through the checkout process as usual. If they indicated fewer than 5 hours per week, a message would appear indicating that Udacity courses usually require a greater time commitment for successful completion, and suggesting that the student might like to access the course materials for free.\n", "- At this point, the student would have the option to continue enrolling in the free trial, or access the course materials for free instead. This [screenshot](images/udacity_checkyoureready.png) shows what the experiment looks like.\n", "\n", "**Udacity's Hope is that...**:\n", "\n", "> this might set clearer expectations for students upfront, thus reducing the number of frustrated students who left the free trial because they didn't have enough time -- without significantly reducing the number of students to continue past the free trial and eventually complete the course. 
If this hypothesis held true, Udacity could improve the overall student experience and improve coaches' capacity to support students who are likely to complete the course.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "## Gradescope Autograding\n", "\n", "Please follow [all standard guidance](https://www.practicaldatascience.org/html/autograder_guidelines.html) for submitting this assignment to the Gradescope autograder, including storing your solutions in a dictionary called `results` and ensuring your notebook runs from start to completion without any errors.\n", "\n", "For this assignment, please name your file `exercise_abtesting.ipynb` before uploading.\n", "\n", "You can check that you have answers for all questions in your `results` dictionary with this code:\n", "\n", "```python\n", "assert set(results.keys()) == {\n", " \"ex4_avg_oec\",\n", " \"ex5_avg_guardrail\",\n", " \"ex7_ttest_pvalue\",\n", " \"ex9_ttest_pvalue_clicks\",\n", " \"ex10_num_obs\",\n", " \"ex11_guard_ate\",\n", " \"ex11_guard_pvalue\",\n", " \"ex11_oec_ate\",\n", " \"ex11_oec_pvalue\",\n", " \"ex14_se_treatment\",\n", "}\n", "```\n", "\n", "\n", "### Submission Limits\n", "\n", "Please remember that you are **only allowed FOUR submissions to the autograder.** Your last submission (if you submit 4 or fewer times), or your fourth submission (if you submit more than 4 times), will determine your grade. Submissions that error out will **not** count against this total.\n", "\n", "That's one more than usual in case there are issues with exercise clarity." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Import the Data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 1\n", "\n", "Begin by importing Udacity's data on user behavior, available [here](https://github.com/nickeubank/MIDS_Data/tree/master/udacity_AB_testing). \n", "\n", "There are TWO datasets for this test — one for the control data (users who saw the original design), and one for the treatment data (users who saw the experimental design). Udacity decided to show their test site to 1/2 of visitors, so there are roughly the same number of users appearing in each dataset (though this is not a requirement of A/B tests).\n", "\n", "Please remember to load the data directly from GitHub to assist the autograder." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 2\n", "\n", "Explore the data. Can you identify the unit of observation of the data (e.g., what is represented by each row)?\n", "\n", "(Note this is not the only way that A/B test data can be collected and/or reported — this is just what Udacity provided, presumably to help address privacy concerns.)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Pick your measures" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 3\n", "\n", "The easiest way to analyze this data is to stack it into a single dataset where each observation is a day-treatment-arm (so you should end up with two rows per day, one for those who were in the treatment group, and one for those who were in the control group).\n",
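"\n", "For example, a minimal sketch of the loading-and-stacking step might look like the following (the raw-file URL and file names are assumptions; check the repository for the actual names before relying on them):\n", "\n", "```python\n", "import pandas as pd\n", "\n", "# Assumed raw-file locations on GitHub; verify against the repository\n", "BASE = \"https://raw.githubusercontent.com/nickeubank/MIDS_Data/master/udacity_AB_testing/\"\n", "control = pd.read_csv(BASE + \"control_data.csv\")\n", "treatment = pd.read_csv(BASE + \"experiment_data.csv\")\n", "\n", "# Tag each arm, then stack into one dataset with two rows per day\n", "control[\"treatment\"] = 0\n", "treatment[\"treatment\"] = 1\n", "df = pd.concat([control, treatment], ignore_index=True)\n", "```\n", "\n",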
"Note that nothing in the raw data identifies whether a given observation comes from the treatment group or the control group, so you'll want to make sure to add a \"treatment\" indicator variable, as the sketch above does.\n", "\n", "The variables in the data are:\n", "\n", "- Pageviews: Number of unique users visiting the homepage\n", "- Clicks: Number of those users clicking \"Start Free Trial\"\n", "- Enrollments: Number of people enrolling in the free trial\n", "- Payments: Number of people who eventually pay for the service. Note the `Payments` column reports payments for the users who first visited the site on the reported date, not payments occurring on the reported date." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 4\n", "\n", "Given Udacity's goals, what outcome are they hoping will be impacted by their manipulation?\n", "\n", "Or, to ask the same question in the language of the Potential Outcomes Framework, what is their $Y$?\n", "\n", "Or, to ask the same question in the language of Kohavi, Tang and Xu, what is their *Overall Evaluation Criterion (OEC)*?\n", "\n", "(I'm only asking one question; I'm just trying to phrase it using the different terminologies we've encountered to help you see how they all fit together.)\n", "\n", "When you feel like you have your answer, please compute it. Store the average value of the variable in `results` under the key `ex4_avg_oec`. **Please round your answer to 4 decimal places.**\n", "\n", "NOTE: You'll probably notice you have two choices to make when it comes to actually computing the OEC.\n", "\n", "- You could probably imagine either computing a ratio or a difference of two things — please calculate the difference.\n", "- You may also be unsure whether to normalize by `Clicks`. Normalizing by `Clicks` will help account for day-to-day variation in the number of users visiting the site, so it's a good thing to do. With infinite data, you'd expect to get the same results without normalizing by `Clicks` (since on average the same share of users is in each arm of the experiment), but for finite data it's a good strategy. Note that this is only OK because users make the choice to click or not *before* they see different versions of the website (it is \"pre-treatment\").\n", "\n", "Just to make sure you're on track, your measure should have an average value of *about* 9%." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 5\n", "\n", "Given Udacity's goals, what outcome are they hoping will *not* be impacted by their manipulation? In other words, what do they want to measure to ensure their treatment doesn't have unintended negative consequences that might be really costly to their operation?\n", "\n", "Note that this isn't quite how Kohavi, Tang, and Xu use the term \"guardrail metrics\" — they usually use it to refer to things we measure to ensure the experiment is working the way it should — but some people also use the term for something that could be impacted even when the experiment is working correctly, and which the organization wants to track because it is deemed really important.\n", "\n", "Again, please normalize by `Clicks`.\n",
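"\n", "The mechanics here are the same as for the OEC: a per-click rate, averaged over days. A rough sketch (assuming the stacked dataset from the earlier sketch is named `df`, and using a hypothetical column name to stand in for whichever column you decide belongs in this metric) might be:\n", "\n", "```python\n", "# 'SomeOutcome' is a hypothetical placeholder; substitute the column you chose\n", "df[\"guardrail\"] = df[\"SomeOutcome\"] / df[\"Clicks\"]\n", "guardrail_avg = round(df[\"guardrail\"].mean(), 4)\n", "```\n", "\n",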
"Store the average value of this guardrail metric as `ex5_avg_guardrail` and **round your answer to 4 decimal places.**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Validating The Data" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 6\n", "\n", "Whenever you are working with experimental data, the first thing you want to do is verify that users actually were randomly sorted into the two arms of the experiment. In this data, half of users were supposed to be shown the old version of the site and half were supposed to see the new version.\n", "\n", "`Pageviews` tells you how many unique users visited the welcome site we are experimenting on. `Pageviews` is what is sometimes called an \"invariant\" or \"guardrail\" variable, meaning that it shouldn't vary across treatment arms—after all, people have to visit the site before they get a chance to see the treatment, so there's no way that being assigned to treatment or control should affect the number of pageviews assigned to each group.\n", "\n", "\"Invariant\" variables are also examples of what are known as \"pre-treatment\" variables, because pageviews are determined before users are manipulated in any way. That makes `Pageviews` analogous to gender or age in experiments where you have demographic data—a person's age and gender are determined before they experience any manipulations, so the value of any pre-treatment attributes should be the same across the two arms of our experiment. This is what we've previously called \"checking for balance.\" If pre-treatment attributes aren't balanced, then we may worry that our attempt to randomly assign people to different groups failed. Kohavi, Tang and Xu call this a \"trust-based guardrail metric\" because it helps us determine if we should trust our data.\n", "\n", "To test the quality of the randomization, calculate the average number of pageviews for the treated group and for the control group. Do they look similar?\n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 7\n", "\n", "\"Similar\" is a tricky concept -- obviously, we expect *some* differences across groups since users were *randomly* divided across treatment arms. The question is whether the differences between groups are larger than we'd expect to emerge given our random assignment process. To evaluate this, let's use a `ttest` to test the statistical significance of the differences we see. \n", "\n", "**Note**: Remember that scipy functions don't always accept `pandas` objects, so if you use a scipy function, you may have to pass the numpy vectors underlying your data with the `.values` attribute (e.g. `df.my_column.values`). \n", "\n", "Does the difference in `Pageviews` look statistically significant?\n", "\n", "Store the resulting p-value in `ex7_ttest_pvalue` **rounded to four decimal places.**" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 8\n", "\n", "`Pageviews` is not the only \"pre-treatment\" variable in this data we can use to evaluate balance or use as a guardrail metric. What other measure is pre-treatment? Review the description of the experiment if you're not sure." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 9\n", "\n", "Check if the other pre-treatment variable is also balanced.\n",
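"\n", "The pattern is the same as in Exercise 7. As a minimal sketch (assuming the stacked dataset from the earlier sketch is named `df`), the `Pageviews` test looks like this; adapt it to the other pre-treatment variable:\n", "\n", "```python\n", "from scipy import stats\n", "\n", "# Split Pageviews by arm; .values hands scipy plain numpy arrays\n", "treated_views = df.loc[df[\"treatment\"] == 1, \"Pageviews\"].values\n", "control_views = df.loc[df[\"treatment\"] == 0, \"Pageviews\"].values\n", "t_stat, p_value = stats.ttest_ind(treated_views, control_views)\n", "round(p_value, 4)\n", "```\n", "\n",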
"Store the p-value of your test of difference in `results` under the key `\"ex9_ttest_pvalue_clicks\"` **rounded to four decimal places.**" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Estimating the Effect of the Experiment\n", "\n", "### Exercise 10\n", "\n", "Now that we've validated our randomization, our next task is to estimate our treatment effect. First, though, there's an issue with your data that you've largely been able to ignore until now, but which you should address before estimating your treatment effect — can you tell what it is and what you should do about it?\n", "\n", "Store the number of observations in your data *after* you've addressed this in `ex10_num_obs` (this is mostly meant as a way to sanity check your answer with the autograder)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### Exercise 11\n", "\n", "Now that we've established we have good balance (meaning we think randomization was likely successful), we can evaluate the effects of the experiment. Test whether the OEC and the metric you *don't* want affected have different average values in the control group and treatment group. \n", "\n", "Because we've randomized, this is a consistent estimate of the Average Treatment Effect of Udacity's website change.\n", "\n", "Calculate the difference in means in your OEC and guardrail metrics using a simple t-test. Store the resulting effect estimates in `ex11_oec_ate` and `ex11_guard_ate` and the p-values in `ex11_oec_pvalue` and `ex11_guard_pvalue`. **Please round all answers to 4 decimal places.** Report your ATE in *percentage points*, where `1` denotes 1 percentage point.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 12\n", "\n", "Do you feel that Udacity achieved their goal? Did their intervention cause them any problems? If they asked you \"What would happen if we rolled this out to everyone?\", what would you say?\n", "\n", "As you answer this question, a small additional question: up until this point you've (presumably) been reporting the default p-values from the tools you are using. These, as you may recall from stats 101, are two-tailed p-values. Do those seem appropriate for your OEC?" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 13\n", "\n", "One of the magic things about experiments is that all you have to do is compare averages to get an average treatment effect. However, you *can* do other things to try to increase the statistical power of your experiments, like adding controls in a linear regression model. \n", "\n", "As you likely know, a bivariate regression is exactly equivalent to a t-test, so let's start by re-estimating the effect of treatment on your OEC using a linear regression. Can you replicate the results from your t-test? They shouldn't just be close—they should be numerically equivalent (i.e. exactly the same to the limits of floating point precision). " ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 14\n", "\n", "Now add indicator variables for the date of each observation. Do the standard errors on your `treatment` variable change? If so, in what direction?\n", "\n", "Store your new standard error in `ex14_se_treatment`. Round your answer to 4 decimal places."
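, "\n", "A minimal sketch of this step, using a `statsmodels` formula with the date entered as a categorical variable (the column names `oec` and `Date` are assumptions; substitute whatever your OEC and date columns are actually called), might look like:\n", "\n", "```python\n", "import statsmodels.formula.api as smf\n", "\n", "# 'oec' and 'Date' are hypothetical column names; adjust them to your data\n", "model = smf.ols(\"oec ~ treatment + C(Date)\", data=df).fit()\n", "model.bse[\"treatment\"]  # standard error on the treatment indicator\n", "```\n"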
] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "You should have found that your standard errors decreased by about 30\\%—this is why, although just comparing means *works*, if you have additional variables adding them to your analysis can be helpful (all the usual rules for model specification apply — for example, you still want to be careful about overfitting, which one could argue is maybe part of what's happening here). \n", "\n", "In many other cases, the effect of adding controls is likely to be larger — the date indicators we added to our data are perfectly balanced between treatment and control, so we aren't adding a lot of data to the model by adding them as variables. They're accounting for some day-to-day variation (presumably in the types of people coming to the site), but they aren't controlling for any residual baseline differences the way a control like \"gender\" or \"age\" might (since those kind of individual-level attributes will never be perfectly balanced across treatment and control). " ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 15\n", "\n", "Does this result have any impact on the recommendations you would offer Udacity?" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.8" } }, "nbformat": 4, "nbformat_minor": 4 }