Many years ago (too many to mention), one of my earlier IT roles was working on the trading floor of a very large oil company. I was working with oil traders who used amazingly complicated and large Excel spreadsheets to make multi-million-dollar decisions on future oil prices.
These traders had the classic high-pressure, high-reward jobs, and every Friday was a big pressure-release day when we would all go out for a massively long lunch (mainly delicious curries). It was great to hear them talk about the pressures of the role, their motivations and their trading tactics.
One big thing has stuck with me from that time, and it really applies to conversion rate optimisation: what we are looking for is change. Oil traders take a bet on the price going up or down; the highs and lows are where they make their money. The worst-case scenario for them is a stable oil price, because then there is no money to be made.
A loss is still a gain
It is the same for CRO: we test hypotheses to measure the impact of changes. If we see a significant lift, we celebrate and announce it to the world, complete with extrapolations of the likely revenue gains. Tests that show no significant shift at least demonstrate that the changes are unlikely to damage conversion rates.
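To make "significant" concrete: the lift or dip in an A/B test is commonly judged with a two-proportion z-test on the conversion rates of control and variant. Below is a minimal sketch in Python using only the standard library; the visitor and conversion counts are hypothetical, purely for illustration.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing conversion rates of two variants.

    conv_* = number of conversions, n_* = number of visitors.
    Returns (z statistic, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical numbers: control converts 500 of 10,000 visitors,
# the variant converts 440 of 10,000 - a drop in conversion rate.
z, p = two_proportion_z_test(500, 10_000, 440, 10_000)
```

With these made-up numbers the p-value lands below 0.05: the variant shows a statistically significant *loss*, which is exactly the kind of result this article argues deserves the same analysis and audience as a win.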
What I have noticed with some companies is that when a test shows a significant loss, there is a tendency to accept it as a learning and move on to the next step of the CRO program. These tests should be celebrated as much as the ones that show a lift: a $1 million loss is as important as a $1 million gain. The analysis and the distribution audience should be the same. It demonstrates the value of the testing program and highlights the need to test as many proposed changes to the website as possible.
Even the smallest, most innocuous change can have a significant impact on conversion rates and revenue, yet these effects are difficult to spot using traditional analytics reports alone. A test that shows a loss should be analysed to identify the likely causes, which will lead to hypotheses for future tests and a richer CRO roadmap.
Of course we love to celebrate successes, but don't put your CRO losses in the corner! Running a test lets you attribute lifts or dips to a specific site change; that is data-driven decision making.