We have previously talked about growth levers, or growth drivers. The next step after setting up growth drivers is to validate them through experimentation.
Throughout the process we will focus on validating growth drivers and, once they are validated, on their continuous
optimization (this is when a theory becomes a strategy and when we
actually implement drivers at a strategic, scalable level).
Specifically, every experiment we do will be an attempt to grow our business.
Below we specify what these experiments are and how they should be executed to guarantee the maximum benefit from the methodology:
Whenever we experiment, we will do so seeking business growth; therefore, the first thing we have to ask ourselves is: how are we going to grow? What do we want to optimize?
To answer that question we will have to go back to the growth funnel we designed for our business. In which part of the funnel
do we want to experiment? Do we want to reach more people? Do we want people who already know us,
or who already buy from us, to buy more? Do we want to increase the average ticket? Do we want to recover churned clients?
Some of this initial work should have been done in the growth driver setup phase.
Suppose we want to optimize the activation phase and that one of our drivers for achieving that is email marketing. If we are going to act on that lever, the first thing
we have to do is tie it to the metric we will use to measure its
performance, so that we know whether we have managed to optimize it.
For example, if we want to optimize the activation phase using the email
marketing driver, we may want to optimize the click rate.
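Tying the driver to its metric can be as simple as an explicit computation. A minimal sketch, with hypothetical campaign numbers (the function name and figures are illustrative, not from the source):

```python
def click_rate(clicks: int, delivered: int) -> float:
    """Click rate: share of delivered emails whose recipient clicked a link."""
    if delivered == 0:
        return 0.0  # avoid division by zero before any emails go out
    return clicks / delivered

# Hypothetical figures: 84 clicks out of 1200 delivered emails.
print(f"{click_rate(84, 1200):.1%}")  # → 7.0%
```

Whatever the metric, the point is that it is computed the same way before and after the experiment, so the comparison is meaningful.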
The next step, as in any experimentation setting, is to formulate
a hypothesis. That hypothesis must follow the SMART philosophy: specific, measurable,
achievable, relevant and time-bound.
A possible hypothesis: “I believe that if we personalize the emails by including the user's name in multiple places, users
will connect more with us and, over the next month, the click rate will increase by 10%.”
A relevant point: when we establish which driver we want
to prioritize, we will probably come up with multiple ways to act on it; in that case, we must execute the
experiments one after another, sequentially, never at the same time.
If we have several possible experiments to run, there is a formula to
decide which one is worth choosing: the ICE Score, a measure that takes into account:
- The impact it can have if it works.
- The confidence you have that it works.
- Its ease of implementation.
To choose which experiment to run, score the three parameters from 1 to 10 for each option
under consideration and multiply the three values (impact × confidence × ease).
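The prioritization above can be sketched in a few lines. The experiment names and scores below are hypothetical examples, only there to show the mechanics:

```python
# ICE Score: rate each candidate experiment 1-10 on impact, confidence
# and ease, then multiply the three values; run the top scorer first.
experiments = {
    # Hypothetical candidates and scores, for illustration only.
    "personalize subject line":   {"impact": 7, "confidence": 8, "ease": 9},
    "redesign email template":    {"impact": 8, "confidence": 5, "ease": 3},
    "add send-time optimization": {"impact": 6, "confidence": 6, "ease": 5},
}

def ice_score(scores: dict) -> int:
    return scores["impact"] * scores["confidence"] * scores["ease"]

best = max(experiments, key=lambda name: ice_score(experiments[name]))
print(best, ice_score(experiments[best]))  # → personalize subject line 504
```

Because the three factors are multiplied rather than added, a very low score on any single factor (for example, an experiment that is nearly impossible to implement) drags the whole option down, which is exactly the behavior we want when prioritizing.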
Once we have measured which experiment is most worth executing, we move on to
action, but the process does not end there.
The most important thing, during the entire time the experiment is running, is to keep a record of
it, both through the metrics we have assigned and through tangible learnings. Throughout the experiments
we must document as much as possible about the impact of the action taken: how the audience behaved, whether they reacted,
whether they interacted, whether the hypothesis we formulated was right or wrong, etc.
Finally, every experiment involves learning, and this must always be recorded in the same place.
Having a record of all the experiments we carry out and the results that
we get from them is what will make us truly capable of seeing the big picture of our growth process.
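One way to keep that single shared record is a simple structured log. A minimal sketch, where the field names and example entry are assumptions of ours, not prescribed by the methodology:

```python
from dataclasses import dataclass

@dataclass
class ExperimentRecord:
    """One entry in a shared experiment log (fields are illustrative)."""
    driver: str          # growth driver acted on, e.g. "email marketing"
    hypothesis: str      # SMART hypothesis that was tested
    metric: str          # metric used to measure performance
    baseline: float      # metric value before the experiment
    result: float        # metric value after the experiment
    learnings: str = ""  # tangible learnings, whatever the outcome

    @property
    def lift(self) -> float:
        """Relative change of the metric over its baseline."""
        return (self.result - self.baseline) / self.baseline

log: list[ExperimentRecord] = []
log.append(ExperimentRecord(
    driver="email marketing",
    hypothesis="Personalized emails raise the click rate by 10% in a month",
    metric="click rate",
    baseline=0.070,   # hypothetical before/after values
    result=0.078,
    learnings="Personalized subject lines drove most of the lift.",
))
print(f"lift: {log[0].lift:.1%}")
```

Keeping every experiment in one structure like this is what later lets us query for patterns across drivers, channels and metrics instead of relying on memory.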
We can begin to detect patterns: Why did the experiments with great results work so well? Are there techniques that work better than others?
Do people respond better through certain channels or media?
This extra information will allow you to gain more insights in the
future, feeding back into the process and spinning the growth wheel faster.