In recent years, A/B testing has become one of the most important skills and tools for digital marketers. In A/B testing, a marketer tests which of two versions performs better on a selected key performance indicator (KPI). In practice, however, the term is often used more broadly, because most tests today compare more than two versions.
The popularity of A/B testing is based on its usefulness and ease of use. Its biggest weakness is that a tester without a solid understanding of testing can easily draw conclusions that lead to costly mistakes. To help you avoid this, below are the four most common mistakes in A/B testing:
1. Assumption that the test results will remain the same
The world is changing, and so are people’s tastes and decisions. Just because a particular version performed best a year ago does not mean it still does. Testing should also take into account when the test was performed and how long it ran. The result may depend, for example, on whether the test was conducted in the morning or evening, on a weekday or at the weekend, in summer or winter, and in Finland or Japan.
2. Lack of data and statistical testing
The winner of the test may be declared too early, when it merely looks as if you already know which version is best. The amount of data required depends on how confident you want to be when selecting the best version and how big the differences between the versions are. To predict the required amount of data, it is a good idea to use the Test Significance website.
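If you prefer to estimate the required amount of data yourself, here is a minimal Python sketch using the standard two-proportion power approximation. Note that this is a generic textbook formula, not the calculation performed by the Test Significance site, and the conversion rates in the example are made up for illustration:

```python
import math

def sample_size_per_variant(p_base, p_variant, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant to detect the difference
    between two conversion rates with a two-sided two-proportion z-test."""
    # Standard-normal quantiles for common significance / power levels.
    z_alpha = {0.10: 1.645, 0.05: 1.96, 0.01: 2.576}[alpha]
    z_beta = {0.80: 0.842, 0.90: 1.282, 0.95: 1.645}[power]
    variance = p_base * (1 - p_base) + p_variant * (1 - p_variant)
    n = (z_alpha + z_beta) ** 2 * variance / (p_base - p_variant) ** 2
    return math.ceil(n)

# Example: to detect a lift from a 5% to a 6% conversion rate at 95%
# confidence and 80% power, you need roughly 8,000+ visitors per variant.
print(sample_size_per_variant(0.05, 0.06))
```

The key intuition the formula captures: the smaller the difference between the versions, the more data you need, since the required sample size grows with the inverse square of the difference.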
Statistical testing will nearly always allow you to resolve whether you have enough data to declare a winner. A statistical test answers the question: is the leading version the winner at the desired confidence level, or is it ahead only because of random fluctuation between the versions?
You may remember from your student days that statistical tests are complicated, but in fact they can be quite simple. A quick Google search turns up a number of tools for this purpose, which makes it even easier. For example, you can download an Excel template or use the tool on the House of Kaizen website.
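For those who prefer a script to a spreadsheet, here is a minimal Python sketch of the kind of calculation such tools typically run: a two-sided two-proportion z-test. The visitor and conversion numbers are invented for the example:

```python
import math

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test: p-value for the null hypothesis
    that versions A and B have the same underlying conversion rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Standard normal CDF via the error function; two-sided p-value.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Version A: 200 conversions out of 5,000 visitors (4.0%)
# Version B: 250 conversions out of 5,000 visitors (5.0%)
p = ab_test_p_value(200, 5000, 250, 5000)
print(f"p-value: {p:.4f}")  # below 0.05, so significant at the 95% level
```

A p-value below 0.05 means you can call the winner at the 95% confidence level; above that threshold, the sensible move is to keep collecting data rather than end the test.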
3. Testing without a plan
There should always be a clear plan of what will be tested and why. Without one, you can easily end up testing trivial matters of opinion, such as whether the call-to-action button should be blue or light blue. In such cases the versions tested are almost identical, so the differences in results will inevitably be small.
At the beginning, you should test the most important things, and the versions should differ clearly from each other. In later tests, you can try smaller changes to the version that won the first test.
The plan helps you to ensure that the most important things will be tested and to decide when you should move on to testing something different. When drawing up a testing plan, it is also good to create a hypothesis – that is, why you would expect one particular version to perform better than the others. This will enable you to understand more easily why one version wins the test.
4. Failure to start a new test after completion of the previous one
There will always be something to test. Testing should be continuous, not a one-off activity. When a test is complete, you should have a clear understanding of what to test next. If you feel that you have squeezed enough improvement out of the thing currently being tested, or that the remaining improvements are very minor, move on to testing the next most important thing.
When you avoid these errors, you’ll go far with A/B testing and your colleagues and your boss will be dazzled by your talent. And – perhaps most importantly – you’ll avoid making expensive mistakes.
What are your best tips for A/B testing?
31 Oct 2016