What is A/B testing?
A/B testing, sometimes called split testing, is a method of comparing two versions of a webpage, app screen, or digital experience to see which one performs better against a defined goal. Visitors are randomly shown one of the two versions, and the performance of each variation is measured against a success metric such as conversion rate, click-through rate, form completions, or revenue.
In conversion rate optimisation, A/B testing helps businesses move beyond assumptions and make decisions based on real user behaviour. Rather than relying on opinion or guesswork, teams can test a proposed change against the current experience and measure whether it improves results in a meaningful way.
Silence the HiPPO
One of the biggest benefits of A/B testing is that it helps reduce the influence of personal opinion in digital decision-making. This is often referred to as the HiPPO problem, where the “Highest Paid Person’s Opinion” shapes the direction of design, UX, or marketing decisions without enough evidence to support it.
By testing ideas against real user behaviour, A/B testing creates a more objective way to evaluate changes. Instead of debating which idea feels right, teams can look at the data and understand which version actually performs better for users and the business.
How A/B testing works
A/B testing begins by identifying a page, screen, or step in a journey that could be improved. A variation is then created that changes one or more elements of the original experience. This might include a headline, button label, page layout, form design, image, offer, or call to action.
Traffic is then split between the original version, often called the control, and the new version, often called the variation or challenger. As users interact with each version, testing software records the results and compares performance against the selected goal.
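To make the mechanics concrete, here is a minimal sketch in Python of how a testing tool might split traffic. The function name, variant labels, and 50/50 split are illustrative assumptions rather than the behaviour of any particular platform; the key idea is that hashing a visitor ID together with the experiment name gives each visitor a stable, effectively random assignment.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically bucket a visitor into 'control' or 'variation'.

    Hashing the visitor ID with the experiment name means each visitor
    gets the same assignment on every visit, while assignments across
    visitors remain effectively random.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return "control" if bucket < split else "variation"

print(assign_variant("visitor-123", "cta-button-test"))  # same answer on every call
```

Stable assignment matters: if a returning visitor could see a different version on each visit, their behaviour could not be attributed to either variant.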
Over time, the test accumulates enough data to show whether the variation produced a statistically meaningful improvement, no significant difference, or a worse outcome. This allows businesses to make more confident decisions about which changes to keep, refine, or reject.
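As a sketch of how that judgement can be made, the example below applies a standard two-proportion z-test to invented results. Real platforms may use different or more sophisticated statistical methods; the principle is simply to ask whether the observed difference is larger than chance alone would explain.

```python
from statistics import NormalDist

def ab_test_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided z-test comparing two conversion rates (pooled variance)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Invented figures: 500 of 10,000 control visitors converted (5.0%)
# versus 570 of 10,000 variation visitors (5.7%).
p = ab_test_p_value(500, 10_000, 570, 10_000)
print(f"p-value: {p:.3f}")  # ~0.028: below 0.05, so unlikely to be chance alone
```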
Why A/B testing matters
Many businesses invest heavily in bringing traffic to their website or app, but traffic alone does not guarantee results. A/B testing helps make better use of that traffic by improving the experience visitors have once they arrive. Even small improvements to key pages or journeys can lead to stronger conversion performance, increased revenue, and better customer engagement.
This is one of the reasons A/B testing is such an important part of conversion rate optimisation. It gives businesses a structured way to validate ideas, reduce risk, and improve performance over time through testing and learning rather than one-off redesigns based on instinct.
What can you A/B test?
There is very little you cannot A/B test in a digital experience. Common examples include headlines, call-to-action buttons, imagery, trust messaging, testimonials, form layouts, page copy, navigation elements, product page content, pricing presentation, and checkout flows.
The most effective tests usually start with insight. By analysing user behaviour, reviewing analytics, and identifying points of friction, businesses can focus on testing the elements most likely to have a meaningful impact on conversion performance.
Examples of A/B testing
A simple example of A/B testing is comparing two versions of a call-to-action button to see which one generates more clicks. Another common example is testing a shorter form against a longer form to understand whether reducing friction improves completion rates.
Businesses also use A/B testing to compare different headlines, promotional messages, page layouts, product recommendations, offers, and trust signals. The goal is not just to change something, but to learn whether that change improves the experience and leads to a better outcome.
A/B testing and CRO
A/B testing is one of the core tools used in conversion rate optimisation because it allows teams to validate ideas before rolling them out more broadly. Instead of redesigning a page based on internal preference, CRO teams use testing to understand how real users respond to changes and whether those changes improve commercial performance.
Over time, a structured A/B testing programme can help businesses build a stronger understanding of user behaviour, reduce friction across key journeys, and create a more reliable process for improving digital performance.
Common A/B testing mistakes
One common mistake is testing too many major changes at once, which makes it difficult to understand what actually influenced the result. Another is ending a test too early, before enough data has been gathered to support a reliable decision.
Teams can also run into trouble when they test without a clear hypothesis, focus on low-impact elements, or judge success using the wrong metric. Strong A/B testing depends on good experimental design, clear goals, and thoughtful interpretation of results.
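One way to guard against ending a test too early is to estimate the required sample size before the test begins. The sketch below uses a standard two-proportion power calculation; the 0.05 significance level and 0.8 power are common conventions rather than universal rules, and the figures in the example are invented.

```python
from statistics import NormalDist

def visitors_per_variant(base_rate: float, relative_uplift: float,
                         alpha: float = 0.05, power: float = 0.8) -> int:
    """Rough visitors needed per variant to detect a relative uplift."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_uplift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int(variance * (z_alpha + z_beta) ** 2 / (p2 - p1) ** 2) + 1

# Detecting a 10% relative lift on a 5% baseline conversion rate:
print(visitors_per_variant(0.05, 0.10))  # roughly 31,000 visitors per variant
```

Running the numbers up front also shows why chasing small uplifts on low-traffic pages can require tests that run for months.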
Testing platforms and tools
Kraken Data is software agnostic and can work with your experimentation platform of choice. The right solution depends on your business needs, technical setup, and testing maturity. What matters most is not the tool alone, but how well the testing programme is designed, implemented, and analysed.
A/B testing vs multivariate testing
A/B testing and multivariate testing are both used to improve digital experiences, but they work in different ways. A/B testing compares one version of a page or experience against another to see which performs better against a defined goal. This makes it a practical and widely used approach when teams want to test a specific change, such as a new headline, layout, call to action, or offer.
Multivariate testing is more complex because it tests multiple elements and combinations at the same time. Instead of comparing one complete version against another, it looks at how different variations of several page elements perform together. For example, a team might test different headlines, images, and button labels in the same experiment to understand which combination produces the best result.
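A small worked example shows why the number of combinations grows so quickly. The element options below are invented purely for illustration:

```python
from itertools import product

headlines = ["Save today", "Free delivery", "New arrivals"]
images = ["hero_a.jpg", "hero_b.jpg"]
buttons = ["Buy now", "Add to basket"]

combinations = list(product(headlines, images, buttons))
print(len(combinations))  # 12 variants, each receiving only a share of traffic
```

With twelve variants instead of two, each combination receives far less traffic, which is why multivariate tests take longer to reach a reliable conclusion.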
In most cases, A/B testing is easier to set up, easier to analyse, and better suited to businesses with lower traffic volumes or clearer test priorities. Multivariate testing can be useful when there is enough traffic to support a more complex experiment and when the goal is to understand how multiple elements interact with each other.
For many businesses, A/B testing is the better starting point because it provides a simpler and more controlled way to validate ideas. Multivariate testing can be valuable in more advanced optimisation programmes, but it usually requires more traffic, more time, and a more careful approach to analysis.