Elements Of A Successful Long-Term Optimization Effort
When you decide to play the long game of website optimization, there are a few things to keep in mind. We have identified a number of qualities that we believe make long-term optimization more sustainable for companies and individual consultants. Sure, scoring a conversion win here or there is great, but the real rewards lie in continuous improvements that compound over time. Here are some elements we identified that should help you along the way.
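The compounding claim is easy to illustrate with arithmetic. A minimal sketch, where the baseline rate and the per-test relative lifts are purely hypothetical numbers chosen for illustration:

```python
# Hypothetical illustration: modest wins compound multiplicatively.
baseline = 0.040  # assumed 4% baseline conversion rate
lifts = [0.05, 0.08, 0.03, 0.06, 0.04]  # made-up relative lifts from 5 winning tests

rate = baseline
for lift in lifts:
    rate *= 1 + lift  # each win lifts the rate achieved so far

total_lift = rate / baseline - 1
print(f"final rate: {rate:.4f} (+{total_lift:.1%} over baseline)")
```

Five wins in the 3–8% range add up to a lift near 29% overall, which is why a steady series of small wins can outperform the hunt for one big one.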
Having a primary metric that the business backs and the whole team agrees on is essential. Without agreement on improving a clearly defined metric, the whole optimization project can fall apart at any time. Without focus, analysis of completed tests also becomes fuzzy, as different people may judge the results by different measures of success. This is why, for each test that we run, we always establish one primary measure before the test is started. In situations where there are two or more key business metrics, we will still focus on one but may account for not hurting the others. For example, we might try to increase signups without hurting revenue, or optimize product X without hurting sales of product Y. Set the focus for your series of tests.
It’s important to assess the current state of conversion and define some goals to reach for. Perhaps the business has a 5% lead gen rate and would like to reach 15%. Because optimization work deals with ranges and probabilities, in the past we have also set low- and high-end targets on our projects, giving us a bit more room to maneuver. Without targets, no one knows what success looks like. Without targets, people may feel like they need to keep on optimizing until they reach 100% conversion rates. Whoever aims for 100% is not only setting unprofessional, unrealistic expectations, but is also going to become extremely inefficient and may in fact hurt the business (as optimization has a cost assigned to it).
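Translating targets into required relative lifts makes the gap concrete. A small sketch using the 5% baseline and 15% high-end target from the example above; the 8% low-end target is an assumption added for illustration:

```python
baseline = 0.05                        # current 5% lead-gen rate (from the example)
target_low, target_high = 0.08, 0.15   # hypothetical low- and high-end targets

for label, target in [("low", target_low), ("high", target_high)]:
    required_lift = target / baseline - 1  # relative lift needed to hit the target
    print(f"{label}-end target {target:.0%} needs a {required_lift:+.0%} relative lift")
```

Seeing that the high-end target demands a +200% relative lift is a useful sanity check: it tells you up front whether a goal is a few tests away or a multi-year program.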
Tests will fail, and the team must make room for this. Retests, iterations, and continuous testing need to be baked into the process in order for you to generate insights and have the time to find the wins. When tests fail, they don’t just fail; they also leave a residue of insights that can lead to future wins. Be sure to pay attention to significant wins and losses that can be reused or turned into ideas to avoid. Time and continuity are needed.
Speed and quantity of tests are also important to long-term success. Many tests will turn out either losing or insignificant, and it's important to know when to cut your losses to make room for winning tests. Experimental Engine, an a/b testing platform, claims that only 40% of a/b tests generate wins, most of which range in the 1% to 10% improvement rate. The more good tests are run, the higher the chances of detecting winners.
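Taking the cited 40% win rate at face value, a quick back-of-the-envelope calculation shows why volume matters and why losing streaks are normal. The yearly test count below is an assumed figure:

```python
# Sketch: given a 40% win rate (the figure cited above), what should a
# testing program expect? Treats each test as an independent coin flip.
win_rate = 0.40
n_tests = 12  # hypothetical test volume for a year

expected_wins = n_tests * win_rate
# probability that 4 consecutive tests all fail to win
p_dry_streak = (1 - win_rate) ** 4

print(f"expected winners: {expected_wins:.1f}")
print(f"chance 4 straight tests all lose: {p_dry_streak:.1%}")
```

Even at a healthy 40% win rate, roughly one program in eight will see four losers in a row at some point, so a dry spell alone is weak evidence that the process is broken.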
Even though an optimization project needs a strictly defined metric to focus on, the scope of where to test (your playing field) needs to be more forgiving and flexible. Optimization projects cannot be constrained to a single screen. Typically, users will perform actions across a number of screens, and all of them should be open to improvement. We have had the most success on projects where we could move across various screens. This is especially important as you generate insights from tests and wish to reuse them on other sections (on other products, at different stages of the funnel, etc.).
It’s good for the person or team doing the optimization work to share the risk and reward. We strongly believe that an optimizer must feel the pain of a losing test and think twice about whether the work is worth the effort. This is why, when we optimize, we negotiate elements of performance-based pricing into our projects. Working within a partnership model is healthy for the long term of the project because the optimizer becomes incentivized to walk away on their own when the work becomes too difficult (as an optimum is being approached). At the same time, a performance-based compensation model also incentivizes seeking out the highest and easiest gains first, which businesses like.
It’s good practice on a long-term optimization project to validate the fruits of your work with another tool that the rest of the team already trusts (e.g. GA or any other core metrics service). When a test comes to an end and a +21% claim to fame is made, that’s one thing. When, on the other hand, a rising trend in the primary metric is observed in a second tool over a few months, that’s another.
A sustainable optimization project needs winning tests, no doubt. When the red begins to show over and over, it might mean that the given metric and testing scope combination needs a rest, at least for now. Before switching gears, however, it’s worth giving it a series of your best attempts first. Typically, we find that the strongest optimization results come from a healthy combination of three things: solid analytics (to see the bottlenecks), customer insights (to understand pain and selling points), and past successes (to build certainty). Making sure that your ideas are grounded in all three of the above increases your chances of finding wins.
I do honestly believe that there is room for random exploration in optimization. Exploratory playfulness may lead to beautiful discoveries which we lacked the imagination to look for in the first place. At the same time, however, long-term optimization programs need strong elements of prioritization to let the winning tests surface to the top (or at least improve their chances of doing so). How each company prioritizes is a personal choice. The criteria we have been using to filter our own test ideas forward favor ideas with the highest potential effect, highest certainty, lowest effort, and shortest testing duration.
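One way to operationalize those four criteria is a simple scoring function. A minimal sketch, where the idea names, ratings, and the scoring formula itself are all illustrative assumptions rather than our actual process:

```python
# Score test ideas on the four criteria above: reward potential effect and
# certainty, penalize effort and test duration. Ratings are made-up examples.
ideas = [
    # (name, potential_effect 1-5, certainty 1-5, effort 1-5, duration in weeks)
    ("rewrite pricing page headline", 4, 3, 1, 2),
    ("redesign checkout flow",        5, 2, 5, 6),
    ("add testimonials to signup",    3, 4, 2, 2),
]

def score(effect, certainty, effort, duration):
    # high effect * certainty in the numerator; costs in the denominator
    return (effect * certainty) / (effort + duration)

ranked = sorted(ideas, key=lambda idea: score(*idea[1:]), reverse=True)
for name, *params in ranked:
    print(f"{score(*params):.2f}  {name}")
```

Under this scheme a cheap, quick headline test outranks an expensive checkout redesign even though the redesign has a higher potential effect, which is exactly the "highest and easiest gains first" behavior described earlier.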
I think it’s completely valid to optimize a screen with a test or two, reach an optimum, and stop there. Perhaps due to traffic (sample size) constraints, you may only have the sensitivity to detect a +20% gain and call it quits. When you do set your targets high and have the traffic, resources, and patience to aim for compounding wins, keep the above elements in mind; they should sustain your efforts for a few extra tests.
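The sensitivity point can be made concrete with the standard normal-approximation sample size formula for comparing two proportions. A rough sketch, assuming a 5% baseline rate, two-sided 95% confidence, and 80% power (these assumptions, not the formula, drive the exact numbers):

```python
# Approximate visitors needed per variant to detect a given relative lift,
# using the normal-approximation formula for two proportions.
from math import ceil

def sample_size_per_arm(baseline, rel_lift, z_alpha=1.96, z_beta=0.84):
    p1 = baseline
    p2 = baseline * (1 + rel_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

for lift in (0.05, 0.10, 0.20):
    print(f"+{lift:.0%} lift at 5% baseline: "
          f"{sample_size_per_arm(0.05, lift):,} visitors per arm")
```

Halving the lift you want to detect roughly quadruples the traffic required, which is why low-traffic sites can realistically chase only large wins, while detecting the 1–10% improvements mentioned earlier takes serious volume.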
Did I miss anything? Please share ...