The observed allocation of users between the test groups statistically significantly differs from the expected allocation under the specified allocation proportions. – Analytics Toolkit
Relevance in CRO
A sample ratio mismatch (SRM) occurs when the actual distribution of users across the variations of an A/B test does not match the intended distribution. Say you have a test designed with an even 50/50 user split. After running for 2 weeks, variation A has 3,147 users assigned to it, while variation B has only 732. The actual distribution is closer to an 80/20 split than the intended 50/50. This is a sample ratio mismatch, and it is an indicator of a malformed experiment.
SRMs can have numerous causes. A well-designed A/B testing tool randomly assigns users to variations in order to avoid unintentional distribution bias. Random allocation can still produce a mismatch by chance, though a large, persistent one is unlikely. More often, a technical issue on the site causes the experiment to assign more visitors to one variation than another. Tracking down the exact cause of an SRM can be difficult, but its presence is a red flag that your experiment data is invalid.
Most of the popular A/B testing tools on the market today do not have SRM checks built in; it is something you have to monitor yourself. The standard approach is a chi-squared goodness-of-fit test comparing the observed user counts against the counts expected under the intended split. Several formulas, tools, and guides are readily available to help you check your experiments for SRMs, some linked below.
- What is Sample Ratio Mismatch? – Analytics Toolkit
- The Essential Guide to Sample Ratio Mismatch for Your A/B Tests – Towards Data Science
- SRM Checker Chrome extension – Lukas Vermeer
- Diagnosing Sample Ratio Mismatch in Online Controlled Experiments – ResearchGate
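The chi-squared check described above can be run in a few lines of stdlib Python. This is a minimal sketch for a two-variation test; the function name and signature are illustrative, not from any of the tools linked above. For one degree of freedom the chi-squared survival function has a closed form via `erfc`, so no statistics library is needed.

```python
from math import erfc, sqrt

def srm_check(observed_a, observed_b, expected_ratio=0.5):
    """Chi-squared goodness-of-fit test for a two-arm split (1 degree of freedom).

    Returns (chi2 statistic, p-value). A very small p-value means the
    observed split is unlikely under the intended allocation, i.e. an SRM.
    """
    total = observed_a + observed_b
    expected_a = total * expected_ratio
    expected_b = total * (1 - expected_ratio)
    chi2 = ((observed_a - expected_a) ** 2 / expected_a
            + (observed_b - expected_b) ** 2 / expected_b)
    # For df = 1, P(X > chi2) = erfc(sqrt(chi2 / 2)) exactly.
    p_value = erfc(sqrt(chi2 / 2))
    return chi2, p_value

# The example from the text: 3,147 vs 732 users under an intended 50/50 split.
chi2, p = srm_check(3147, 732)
print(f"chi2 = {chi2:.1f}, p = {p:.3g}")
```

For the 3,147 / 732 split in the example, the p-value is astronomically small, confirming the mismatch; a common practice is to flag any test with p below 0.01 or 0.001 rather than the usual 0.05, since SRM checks are typically run repeatedly over a test's lifetime.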