What Happened at Corvus CRO Last Week

Running Google Analytics reports can be time-consuming.

Projecting The Yearly Impact of a Successful Split Test

Estimating the yearly impact of a successful test is important for monitoring the success of a testing program. A very basic way to calculate yearly impact is to figure out the gain per day and multiply by 365. Gain per day can be determined by taking a goal conversion total from the strongest-performing variation, subtracting the baseline conversion total, and dividing by the number of days the test ran. Expressed as a formula:

Daily Gain = (Strongest Variation - Baseline) ÷ Days Run

Here is an example using the total revenue goal:

$611.04 = ($323,515.27 - $303,350.90) ÷ 33

The strongest variation generated an additional $611.04 per day. There are 365 days in a year, so multiply (using the unrounded daily gain to avoid rounding drift):

$223,030.15 = $611.04 × 365

A yearly impact of $223,030.15. Sounds great, right? Not quite. This approach is too basic; it does not account for seasonality. All sites have peaks and valleys throughout the year. The above calculations assume day-to-day consistency. In the real world, it’s uncommon to have consistent daily behavior. A more accurate method for estimating yearly impact will account for seasonality.
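Before getting to that, here is the basic version as a minimal JavaScript sketch (the function and variable names are mine, not from any particular tool):

// Naive yearly impact: average daily gain times 365.
// Assumes consistent day-to-day behavior, which is rarely true.
function naiveYearlyImpact(strongestVariation, baseline, daysRun) {
  const dailyGain = (strongestVariation - baseline) / daysRun;
  return dailyGain * 365;
}

naiveYearlyImpact(323515.27, 303350.90, 33); // ≈ 223030.15, matching the example above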

Accounting for Seasonality

One way to address seasonality is to ask: how much of our traffic for the year did we get during this time period? For purposes of calculation, this can be expressed as a percentage. Here's an example:

410,304 ÷ 4,234,722 = 9.69%

Looking back a year, 9.69% of the traffic came through while the test was running. That percentage can be used to calculate an estimated yearly impact using the difference between the strongest variation and baseline, like so:

Yearly Impact = (Strongest Variation - Baseline) ÷ Seasonality Percentage

Using our previous data:

$208,115.21 = ($323,515.27 - $303,350.90) ÷ (410,304 ÷ 4,234,722)

An estimated yearly impact of $208,115.21 when using a traffic seasonality percentage (note the example divides by the unrounded fraction rather than the rounded 9.69%). Now, traffic seasonality doesn't always cleanly line up with order conversion or revenue seasonality, so calculate percentages for those as well, using the same period-total ÷ yearly-total approach.

With the two additional percentages, we now have four potential estimates for yearly impact (the naive 365-day projection plus the three seasonality-adjusted figures), which you can interpret separately, average together, or otherwise manipulate as strategy dictates.
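Here's how the seasonality-adjusted estimate might look in JavaScript; a minimal sketch that assumes you already have the period and yearly totals for each metric:

// Seasonality-adjusted yearly impact: divide the test-period lift by the
// fraction of the year's activity that the test period represents.
function seasonalYearlyImpact(strongestVariation, baseline, periodTotal, yearlyTotal) {
  const seasonalityFraction = periodTotal / yearlyTotal; // e.g. ~0.0969 for traffic
  return (strongestVariation - baseline) / seasonalityFraction;
}

// Traffic-based estimate from the example above:
seasonalYearlyImpact(323515.27, 303350.90, 410304, 4234722); // ≈ 208115.21

The same function covers the orders-based and revenue-based estimates; just swap in the period and yearly totals for those metrics.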

Automating The Data Collection Process

There are a lot of data points that need to be collected to take advantage of the above process. It could take an hour or more to gather everything between the testing tool and analytics: variation and baseline totals for each goal, days run, and the period and yearly totals for traffic, orders, and revenue.

That’s a lot of time devoted to data collection and entry, for every single split test. It’s unsatisfying busywork and prone to transposition errors. All of the data points are known and consistent. This is a prime area for automation.

Which is exactly what I did last week using a combination of JavaScript, Airtable, and Zapier. I wrote a JavaScript snippet that scrapes the HTML of a split test results page for the relevant test data and uploads it to the split test database I set up in Airtable. I also set up a Zapier automation that runs a report in Google Analytics to grab the data needed for the seasonality percentage calculations and uploads that to Airtable as well. The calculations themselves are done in Airtable with formulas.
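To give a flavor of the upload step, here is a rough sketch of pushing one scraped result into Airtable's REST API. The API key, base ID, table name, and field names below are placeholders I made up, not the actual setup:

const AIRTABLE_API_KEY = 'YOUR_API_KEY'; // placeholder
const BASE_ID = 'YOUR_BASE_ID';          // placeholder

// Create one record in a hypothetical "Split Tests" table.
async function uploadTestResult(result) {
  const response = await fetch('https://api.airtable.com/v0/' + BASE_ID + '/Split%20Tests', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer ' + AIRTABLE_API_KEY,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      fields: {
        'Test Name': result.testName,
        'Baseline Total': result.baseline,
        'Strongest Variation Total': result.strongestVariation,
        'Days Run': result.daysRun,
      },
    }),
  });
  return response.json();
}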

This is a very bootstrapped solution, but it condenses an hour of work down into about a minute and completely eliminates potential transposition errors. There's a roadmap of improvements ahead.

Help Wanted

This is a great first step, but to take the tool to the next level, I'm going to need some help.

If you know someone who would be interested in helping and has an interest in conversion optimization, split testing, and automation, please ask them to email me (matt@corvuscro.com) or connect with me on LinkedIn.