Validating A/B Test Results: Answers

In this lesson we'll cover:

- Preparation and prioritizing
- Validating the results

Preparation and prioritizing

A/B tests can alter user behavior in many ways, and sometimes these changes are unexpected. Before digging into the test data, it's important to hypothesize how a feature might change user behavior, and why. If you spot changes in the data first, it's easy to rationalize why those changes should have been obvious, even if you never would have thought of them before the experiment.

It's similarly important to develop hypotheses for explaining test results before looking further into the data. These hypotheses focus your thinking, give you specific conclusions to validate, and keep you from concluding that the first plausible answer you find is the right one.

For this problem, a number of factors could explain the anomalous test. Here are a few examples:

Validating the results

1. The number of messages sent shouldn't be the only measure of this test's success, so dig into a few other metrics to make sure their outcomes were also positive. In particular, we're interested in metrics that show whether a user is getting value out of Yammer. (Yammer typically uses login frequency as a core value metric; a query sketch follows below.)
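One way to survey those other metrics is to break out every engagement event by experiment group. This is a minimal sketch, assuming the Mode Yammer dataset tables tutorial.yammer_experiments and tutorial.yammer_events and an experiment named 'publisher_update'; those table, column, and experiment names are assumptions to adjust for your own data.

```sql
-- Sketch: count each engagement event per experiment group.
-- Table, column, and experiment names are assumptions based on
-- the Mode Yammer dataset; substitute your own.
SELECT ex.experiment_group,
       ev.event_name,
       COUNT(*) AS events,
       COUNT(DISTINCT ev.user_id) AS users_with_event
  FROM tutorial.yammer_experiments ex
  JOIN tutorial.yammer_events ev
    ON ev.user_id = ex.user_id
   AND ev.occurred_at >= ex.occurred_at  -- only activity after treatment began
 WHERE ex.experiment = 'publisher_update'
   AND ev.event_type = 'engagement'
 GROUP BY 1, 2
 ORDER BY 1, 2;
```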

First, the average number of logins per user is up in the test group. This suggests that users aren't just sending more messages; they're also signing in to Yammer more often.
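To check the login metric directly, a sketch like the following compares logins per user across groups, under the same table-name assumptions as above:

```sql
-- Sketch: average logins per user, by experiment group.
SELECT ex.experiment_group,
       COUNT(DISTINCT ex.user_id) AS users,
       COUNT(ev.user_id) AS logins,
       COUNT(ev.user_id)::FLOAT
         / COUNT(DISTINCT ex.user_id) AS avg_logins_per_user
  FROM tutorial.yammer_experiments ex
  LEFT JOIN tutorial.yammer_events ev
    ON ev.user_id = ex.user_id
   AND ev.event_name = 'login'
   AND ev.occurred_at >= ex.occurred_at
 WHERE ex.experiment = 'publisher_update'
 GROUP BY 1
 ORDER BY 1;
```

The LEFT JOIN keeps users with zero logins in the denominator, so groups are compared on the same base of users.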
