In 2021, if you’re not building MVPs you’re probably doing something wrong. We’re all building MVPs – big MVPs, small MVPs, fat MVPs, short MVPs, cute MVPs, and ugly MVPs.
MVPs are the way to turn ideas into execution, eliminate risk, and increase business efficiency.
I’m an innovation leader in IBM Corporate Strategy. Before joining corporate strategy, I was part of the IBM Garage, helping businesses solve their thorniest problems. Over the past seven years, I’ve built a lot of MVPs with the Garage.
The IBM Garage combines design thinking, lean startup, and eXtreme programming (XP) to deliver real value, fast. That means MVPs. I’ve explained lean startup and the minimum viable product concept more times than I can count.
At first, the ideas were unfamiliar, but at some point I realised everyone I was talking to already knew all about MVPs. There was a problem, though. Although we were using the same term, their idea of MVP was totally different from mine.
I doubled down. I added more slides to my standard deck, carefully explaining what Eric Ries meant by ‘MVP’ when he popularised the term in Lean Startup. An MVP, I explained, was the smallest thing you could build to test your riskiest assumption.
It was an experiment, not just a staged delivery. It might not even involve code at all. I added more examples, like Zappos. (Have you heard the origin story of Zappos?)
When Zappos was started, the conventional wisdom was that you could sell books online, but you couldn’t sell anything which had to be tried on, like clothes. And everyone knew you definitely couldn’t sell shoes online.
Nick Swinmurn decided to challenge that assumption. Because no one knew if his market actually existed, he needed to minimise his investment and reduce his risks. He set up an online storefront, but he didn’t build any back-office function, and he didn’t buy any expensive inventory.
When someone ordered shoes from his website, Nick would run out to his local shoe shop, buy the shoes at full retail price, and post them out. He lost money on every pair of shoes he sold.
Eventually, the transaction volume was so high that Nick was losing a lot of money, and he knew he was on to a winner. He’d proved people would buy shoes online.
Unfortunately, when I said ‘MVP’, my audience didn’t always hear ‘carefully designed experiment’. Often, they heard ‘smallish first release’ or sometimes even just ‘first release’. Sometimes they heard ‘first release but we will be cavalier about the quality’.
Once or twice, to my dismay, they heard ‘enormous first release’. The more I explained, the less useful the term became. It was a distraction. We were spending our time debating the definition of ‘MVP’ rather than collaborating, inventing amazing new things, and figuring out clever techniques for de-risking our projects.
That doesn’t mean I don’t think MVPs have value. They do! In fact, both kinds do! There’s so much value in the idea(s) of MVP, I prefer to be explicit about what value we’re hoping for. Is this an experiment? A first release? Are we planning to do continuous deployment after the first release or do fewer, larger, releases with more ceremony?
The drift in meaning of MVP from ‘experiment’ to ‘smallish first release’ is an example of what Martin Fowler calls ‘semantic diffusion’. New terms gradually lose their original meaning as they are adopted more widely.
The irony is that terms which were specifically coined to be the opposite of “the usual way” get diluted back to “the usual”. One MVP I heard about passed through five architecture review boards before any code was written. Another was scoped to include every idea the team had, to try to ensure a £12 million ROI on the first release.
Our industry has lost some clarity about the MVP concept because the original formulation is wonderful, but it’s challenging, counter-intuitive, and hard to implement. It up-ends conventional business – and software – wisdom.
An MVP is a castle in the air, or a man behind a curtain in Oz, by design. Professional software engineers have been trained to avoid suspicious curtains and to build solid foundations well before we even think about building any castles.
‘Pure’ Ries MVPs aren’t especially palatable to the business, either. Consider the case of Zappos; Swinmurn created a business model where success meant losing money (in the short term). Funnily enough, most business leaders prefer scenarios where success means making money.
So if we ban the phrase ‘MVP’ from our vocabulary, what should we be building? That depends how confident we are that we’re building the right thing, with technologies that do what we expect. In high-risk contexts, we should focus on developing a series of hypotheses and small experiments. If we’re more certain we’re on the right track, we should focus on incremental delivery of value.
Here’s how these two approaches look in practice.
The idea of being hypothesis-driven in our development is appealing, but it’s tricky to get right. Framing a good hypothesis is hard, and so is figuring out the right experiment. What assumption are we trying to validate? How do we know if we’ve got it right? Critically, are we willing to stop or re-plan partway through if the experimental results don’t go our way?
If not, perhaps we should be considering an incremental delivery instead. A good experiment should be designed as something which can disprove a hypothesis, not as a way of gathering vanity metrics to present to the board. (If you haven’t got a healthy level of psychological safety in your organisation, experiments are probably not going to be a good approach to product development.)
Experiments should also be small. I’ve seen many teams formulate experiments which were along the lines of “if we build the whole product, our clients will be satisfied and our revenue will increase.”
It takes discipline and skill to break things down – are there indicators we can measure more cheaply, and earlier? ‘Revenue increasing’ is a lagging indicator. Can we identify relevant leading indicators which will help guide our development investment?
Even if we decide we need to build most of the product, are there parts we can leave out?
The key to a good lean experiment is that no money is wasted on things which don’t contribute to the outcome. We’re not aiming for completeness (yet), we’re aiming for validated learning. One of my favourite examples of this in the IBM Garage was a project we did for a steel firm.
Their catalogue was complex, and the ordering process was high-touch. Contractors would ring up their rep and talk through their configuration requirements. Would they even want this white-glove experience to be digitised?
Would it be possible to represent the many subtle product variations on a small phone display? The first experiment was a UI on paper, tested with real customers. Once user research validated that the paper designs made customers happy, the next experiment was to build an application that could be deployed to the field.
The app was fully functional, but it was missing something most e-commerce applications have – an order processing facility. When a customer clicked ‘buy’, the system would send an email with the order details to the sales rep, who would process it manually using the old systems. As a techie, I itch at a clunky manual gap in the middle of an otherwise slick app … but it’s smart business.
Deferring the ‘well-understood but time-consuming’ parts of the build allowed the organisation to measure app usage before investing in a fully self-service sales platform.
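The article doesn’t show the steel firm’s code, but the ‘manual gap’ pattern can be sketched in a few lines. Everything here – the order shape, field names, and the email body – is a hypothetical illustration of deferring the back-office build, not the actual system:

```python
# Hypothetical sketch: instead of integrating with an order-processing
# backend, the 'buy' handler just formats an email for the sales rep,
# who keys the order into the existing systems by hand.
def order_email(order: dict) -> str:
    """Turn a submitted order into the body of an email for the rep."""
    lines = [f"New order from {order['customer']}:"]
    for item in order["items"]:
        lines.append(f"  {item['qty']} x {item['sku']} ({item['spec']})")
    lines.append("Please process manually in the existing system.")
    return "\n".join(lines)

order = {
    "customer": "Acme Construction",
    "items": [{"qty": 40, "sku": "BEAM-203", "spec": "galvanised, 6m"}],
}
print(order_email(order))
```

The point of the sketch is what it leaves out: no inventory checks, no payment integration, no fulfilment workflow – just enough to measure whether customers will actually press ‘buy’.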
Even if we aren’t formulating explicit hypotheses, designing experiments, and descoping wherever we can, we should still think about risk in our prioritisation. Use a ranked backlog, and put the riskiest items at the top. By riskiest I don’t just mean ‘most likely to fail’; it’s about front-loading the things which will torpedo the whole project if they don’t work.
If we’re using AI to write scenarios for role-playing exercises, do the training and check whether the AI can create a convincing story before building a beautiful UI. If we’re making a new way for people to order construction materials, build a UI on paper and do the user-testing before writing any code. If we need legal clearance, get that clearance before investing in architecture work.
Another important way to manage risk is through continuous delivery techniques and automation. I encourage teams to adopt trunk-based development and TDD, and to deploy as often as possible. Although it seems scary at first, more regular deployments reduce the risk of each deployment and also reduce overall risk.
Deployments don’t have to mean releases. Use feature toggles to ship code without enabling it, or limit new functionality to friends and family. Randomised A/B testing is another excellent technique for gathering feedback while limiting the blast radius of changes.
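To make the toggle and A/B ideas concrete, here is a minimal sketch. The flag store, flag names, and allowlist are illustrative assumptions rather than any specific feature-flag library’s API:

```python
# Minimal sketch of shipping code without releasing it:
# a flag is dark for everyone except a friends-and-family allowlist,
# and A/B assignment is deterministic per user.
import hashlib

FLAGS = {
    "new_checkout": {"enabled": False, "allowlist": {"alice", "bob"}},
}

def is_enabled(flag: str, user: str) -> bool:
    """A flag is on for everyone once 'enabled' is true; until then,
    only the allowlist (friends and family) sees the new feature."""
    cfg = FLAGS.get(flag)
    if cfg is None:
        return False  # deployed code paths default to off
    return cfg["enabled"] or user in cfg["allowlist"]

def ab_bucket(experiment: str, user: str) -> str:
    """Hash the user into 'A' or 'B' so the same user always sees the
    same variant, without storing any assignment state."""
    digest = hashlib.sha256(f"{experiment}:{user}".encode()).digest()
    return "A" if digest[0] % 2 == 0 else "B"
```

Because the new code path is deployed but dark, each deployment stays small and low-risk, and turning the feature on later becomes a configuration change rather than a release.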
Small MVPs often turn into large MVPs, and many organisations find releasing hard! We need to unlearn the habit of releasing infrequently and move to a more continuous stream of value. Otherwise, we’re leaving inventory (written code) on the shelf. Early release of value is an important lean principle.
So will I continue building MVPs? Definitely. Innovation is the lifeblood of a business, and being lean allows more innovation. I’ll always aim to continuously deliver value, and if the context is right, I’ll be coming up with experiments and nifty ways of measuring anything. But I won’t be calling it an MVP.
DIGIT Expo 2021 | Join the Conversation
Holly Cummins will explore the theme of effective innovation at DIGIT Expo 2021 on 23rd November.
DIGIT Expo is Scotland’s largest gathering of senior IT and Digital personnel. With 1000+ delegates, 40+ speakers and 50+ exhibitors, the event is an unmissable opportunity for knowledge exchange, networking and business opportunity.
Register your FREE place now at www.digit-expo.com/register