Escape Experimentation Limbo: Prioritise Growth Experiments with the LEAP Framework
- Swati Rai
- Jun 27, 2024
- 4 min read
The biggest pitfall in growth isn't ignorance of experimentation; it's getting stuck in a cycle of random A/B testing and vanity metrics. This leads to wasted resources and lost time when you are already strapped for cash (and sanity) in the startup world.
Working with Product & Growth teams across both small and mid-size companies, I've seen one consistent struggle: prioritising experiments for short-term wins versus long-term insights. This focus on immediate results leaves a gap in understanding user needs and sustainable growth strategies.
In an attempt to resolve this struggle, I created the LEAP Framework, your key to escaping experimentation limbo.
This framework, built from my experience, prioritises experiments based on four key criteria:
Learning Value (L): How much will this experiment teach us about our users and their needs? Will it validate core assumptions or uncover new opportunities?
Ease of Implementation (E): How quickly and easily can we implement this experiment? Faster implementation allows for faster learning cycles.
Alignment with User Needs (A): Does this experiment directly address a pain point or desired feature for our users? Prioritise experiments that improve user experience.
Potential Impact (P): While ROI is important, consider the broader potential impact. Can this experiment unlock a new revenue stream or significantly improve user engagement?
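To make the four criteria concrete, here is a minimal scoring sketch. The 1–5 scale, the equal weighting, and the example backlog entries are my own illustrative assumptions; the framework itself doesn't prescribe a scoring scale.

```python
def leap_score(learning, ease, alignment, impact):
    """Sum the four LEAP criteria, each scored 1 (low) to 5 (high)."""
    for score in (learning, ease, alignment, impact):
        if not 1 <= score <= 5:
            raise ValueError("each LEAP criterion is scored from 1 to 5")
    return learning + ease + alignment + impact

# Hypothetical backlog with (L, E, A, P) scores for each experiment.
backlog = {
    "short sign-up form": (5, 5, 5, 4),
    "photo-based recommendations": (2, 1, 3, 2),
}

# Rank experiments from highest to lowest total LEAP score.
ranked = sorted(backlog.items(), key=lambda kv: leap_score(*kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: {leap_score(*scores)}")
```

In a prioritisation discussion, the totals matter less than the conversation each criterion forces; the sketch simply makes the ranking explicit.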
Now let's look at the LEAP Framework in action. The growth team at a streaming service wants to run an experiment for their free trials. They plan on A/B testing two different free trial sign-up experiences (one with a shorter form, one with a longer form capturing more user data). Let's leap into the analysis! (Sorry, couldn't help it.)
To prioritise this experiment, they look at how it scores against LEAP:
L - Learning Value:
High: This experiment can reveal user preferences for data collection. Do users abandon forms asking for too much information?
Uncover Opportunities: This might lead to new strategies for collecting user data while minimising friction.
E - Ease of Implementation:
High: A/B testing is a relatively quick and easy way to implement this experiment.
Fast Learning Cycles: Results can be obtained quickly, allowing for rapid adjustments to the signup process.
A - Alignment with User Needs:
High: A shorter form addresses the user need for a quick and easy sign-up, potentially leading to a smoother user experience.
P - Potential Impact:
Broader Impact: While conversion rate (ROI) is important, understanding user preferences for data collection can have a broader impact.
Improved User Experience: A faster, less cumbersome signup can benefit user experience and potentially increase conversion to paid subscriptions.
Data Strategy: Insights can inform future data collection practices, potentially leading to better personalisation efforts.
Prioritisation Result:
This experiment scores high across all factors. While the short form captures less user data, the learning value, ease of implementation, strong alignment with user needs, and potential for broader growth impact make it a high-priority experiment. By prioritising this experiment, the team can reduce sign-up friction, optimise data collection, and unlock growth.
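Once the experiment runs, the team needs to judge whether the two forms actually convert differently. One common approach (not specific to this article) is a two-proportion z-test on sign-up conversion; the visitor and conversion counts below are made up for illustration.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical results: short form vs long form sign-up conversions.
z = two_proportion_z(conv_a=480, n_a=4000,   # short form: 12% conversion
                     conv_b=400, n_b=4000)   # long form: 10% conversion
print(f"z = {z:.2f}")  # |z| > 1.96 would be significant at the 5% level
```

With these made-up numbers the short form wins significantly, which is exactly the kind of fast, clean read-out that makes this experiment score high on Ease of Implementation.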
Let's look at another example, from an e-commerce company, of how they used LEAP to deprioritise experiments that may sound exciting but don't drive value. The company is an e-commerce retailer specialising in outdoor apparel and equipment.
Experiment: A/B testing two product recommendation algorithms:
Algorithm A (Current): Recommends similar products based on purchase history and browsing behaviour. This is the industry standard and considered a reliable approach.
Algorithm B (Experimental): Recommends products based on a combination of purchase history, browsing behaviour, and user-uploaded photos containing the purchased item. (This requires a new feature allowing photo uploads.)
LEAP Evaluation & Analysis
L - Learning Value (Low)
While Algorithm B might reveal some user preferences based on uploaded photos, the learning value is limited. Existing industry standards for product recommendations are well-established.
E - Ease of Implementation (Low)
Implementing Algorithm B is complex. It requires significant development effort to build the new photo upload feature and integrate it with the recommendation engine.
A - Alignment with User Needs (Medium)
This experiment might indirectly benefit users by potentially offering more relevant recommendations. However, the value proposition for users to upload photos is unclear.
P - Potential Impact (Low)
Even if Algorithm B performs slightly better, the learning value is limited and the development effort is significant. There's a high risk of investing resources in a feature with potentially minimal user adoption and impact on conversion rates.
Prioritisation Result
Based on the LEAP analysis, this experiment ranks low in priority. While the potential for slightly better recommendations exists, the learning value is limited, the implementation is complex, and user adoption of the photo upload feature is uncertain. The team should focus on other areas with the potential for higher learning value and impact.
I've adopted this framework with many of my clients and shared it with the companies I advise. They've found it helps them prioritise better for short-term gains while also treating qualitative insights as valuable contributors to long-term growth.
If you'd like a copy of the simple Excel template I created that teams can use during prioritisation discussions, you can download it by clicking the button below. Happy experimenting!