A framework for deciding what not to build
The existing prioritisation frameworks all help you rank features. RICE tells you which features to build first. Kano tells you which features delight. MoSCoW tells you which features are essential. None of them tells you when to build zero features. That option is not in the framework. It is not in the vocabulary. This page adds it.
We review RICE, Kano, MoSCoW, and Shape Up honestly: what each does well, where each misses, and how to combine them with a bloat-specific lens that adds the question the others omit. At the end is a 10-question self-diagnostic that returns a bloat score and a recommendation.
01. RICE
Reach, Impact, Confidence, Effort
RICE was developed by Intercom to rank feature requests in a consistent, repeatable way. The formula is: (Reach x Impact x Confidence) / Effort. Reach is how many users the feature touches per quarter. Impact is the estimated impact on each user, on Intercom's scale of 0.25 (minimal), 0.5 (low), 1 (medium), 2 (high), and 3 (massive). Confidence is how confident you are in your estimates (100%, 80%, or 50%). Effort is person-months. The output is a score. Features with higher scores are built first.
RICE is useful because it forces quantification. The feature that someone is most excited about is not necessarily the one with the highest RICE score. It surfaces assumptions. A high-Reach, low-Impact, high-Confidence feature with low Effort has a decent score. A low-Reach, high-Impact, low-Confidence feature with high Effort scores poorly and should be deprioritised or researched further.
Where RICE misses: it presumes a feature is going to be built. The scoring system is a ranking mechanism, not a gating mechanism. A RICE score of 3.2 does not tell you whether 3.2 is good enough to justify the maintenance commitment. Nothing in RICE asks "should we build zero?" Nothing asks "does this feature already exist somewhere in the product?" Nothing asks "who will maintain this in 18 months?"
Applied
Example: a team is deciding between a bulk-export feature (Reach 200/quarter, Impact 2, Confidence 80%, Effort 2 months) and an in-app tutorial (Reach 800/quarter, Impact 3, Confidence 80%, Effort 1 month). RICE: bulk-export = (200x2x0.8)/2 = 160. Tutorial = (800x3x0.8)/1 = 1920. Tutorial wins clearly. This is a correct and useful output. RICE does not, however, ask whether both features should be deprioritised in favour of fixing a critical bug that is causing 10% churn.
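The arithmetic above is mechanical, which is the point. A minimal sketch in Python (the function and variable names are illustrative, not from Intercom's tooling):

```python
# Illustrative RICE calculator. Confidence is expressed as a fraction
# (0.8 for 80%), Effort in person-months, Reach in users per quarter.

def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """(Reach x Impact x Confidence) / Effort, as defined above."""
    return (reach * impact * confidence) / effort

# The two features from the example:
bulk_export = rice_score(reach=200, impact=2, confidence=0.8, effort=2)  # 160.0
tutorial = rice_score(reach=800, impact=3, confidence=0.8, effort=1)     # 1920.0
```

Note that the function ranks whatever it is given; nothing in it asks whether either feature should exist. That gating question has to come from outside the formula.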
02. Kano model
Basic, Performance, Excitement
The Kano model was developed by Noriaki Kano in 1984. It categorises features into three main types. Basic features (also called Must-be): their absence causes dissatisfaction, but their presence does not cause satisfaction. They are expected. Performance features: their presence increases satisfaction linearly. More is better. Excitement features (also called Delighters): unexpected features that cause significant satisfaction when present but no dissatisfaction when absent.
The model is useful for understanding why feature investment does not deliver proportional returns: once Basic expectations are met, further investment there adds no satisfaction, and only Performance features pay back roughly in proportion to effort. And it is particularly useful for understanding why Excitement features become Basic features over time. Touch ID was an Excitement feature in 2013. It is a Basic feature in 2026. Features degrade through the Kano categories as expectations rise.
Where Kano misses the bloat question: the Excitement category is a bloat engine. Every team wants to ship Delighters. Delighters are memorable, shareable, praised in product reviews. But today's Delighter is tomorrow's Basic feature that you are now committed to maintaining forever. The Kano model does not ask whether shipping a Delighter is a maintenance commitment you can afford. It does not ask whether the Delighter belongs in your product or in a different product.
Applied
A practical application: before adding an Excitement feature, ask whether you can sunset a Basic feature whose maintenance burden has grown disproportionate to its value. The Kano model does not ask this. You have to add the question.
03. MoSCoW
Must / Should / Could / Won't
MoSCoW prioritisation categorises requirements into four buckets: Must have (the release is not viable without this), Should have (important but not vital for the current release), Could have (nice to have, included if time allows), and Won't have (explicitly excluded from this release).
MoSCoW is widely used in project management and requirements definition. It is simple, communicable, and stakeholder-friendly. It has one quality that is genuinely rare in prioritisation frameworks: the Won't column.
The Won't column is the most important column in MoSCoW and the least used. In practice, the Won't column is treated as "not yet" rather than "never." Items move from Won't to Could to Should to Must across releases, which means the Won't column is a deferral mechanism, not a rejection mechanism. The discipline to keep things in Won't permanently, to say "this is not in this product now or ever," is the anti-bloat application of MoSCoW.
Where MoSCoW misses: it does not ask whether Must-have features from two years ago are still must-have today. Features that were Must for v1 are maintained indefinitely without re-evaluation. MoSCoW is a requirements framework, not a lifecycle framework.
Applied
Applied to sunsetting: run MoSCoW on your existing feature set, not just on new requests. Which features are genuinely Must today? Which have drifted to Could or Should? Which belong in Won't? This is the feature audit that MoSCoW was not designed for but can be adapted to serve.
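One way to make that audit concrete is to record each feature's bucket at launch alongside its bucket today, then list the features whose importance has fallen. A sketch, assuming a simple data model of my own invention (the `Feature` fields and `drifted` helper are illustrative, not part of MoSCoW):

```python
# Sketch of a MoSCoW audit over the existing feature set, not just new
# requests. A feature that has drifted down the buckets since launch is
# a sunset candidate.
from dataclasses import dataclass
from enum import Enum

class Bucket(Enum):
    MUST = 0
    SHOULD = 1
    COULD = 2
    WONT = 3

@dataclass
class Feature:
    name: str
    bucket_at_launch: Bucket
    bucket_today: Bucket  # re-assessed in the audit, never inherited

def drifted(features: list[Feature]) -> list[Feature]:
    """Features whose importance has fallen since launch."""
    return [f for f in features
            if f.bucket_today.value > f.bucket_at_launch.value]
```

The discipline is in `bucket_today`: it must be a fresh judgment, not a copy of the launch value, or the audit reproduces the drift it is meant to catch.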
04. Shape Up
Basecamp's fixed-time, variable-scope methodology
Shape Up is Ryan Singer's methodology for product development, published in 2019 as a free book by Basecamp. It is the most complete structural response to feature bloat available in the product management literature.
The core insight is appetite: before a feature is worked on, the team defines how much time they are willing to spend on it. This is the appetite, typically 2 weeks (small batch) or 6 weeks (large batch). The scope is then shaped to fit the appetite, not the other way around. If the full feature cannot be done in 6 weeks, you find the version of the feature that can be done in 6 weeks, or you don't do it at all.
This is the fundamental inversion of how most teams work. Most teams scope the feature, then estimate how long it will take, then argue about whether they have time. Shape Up starts with time and makes scope variable. This structural change forces cuts at the shaping stage, before any code is written.
The betting table is where work gets chosen. Every six-week cycle, the team places bets: here is the work we will do this cycle. What doesn't get bet on doesn't get worked on. There is no backlog in the conventional sense. Things that do not get bet on can be pitched again next cycle or dropped. This forces a continuous re-evaluation of what is worth doing.
Applied
Quote from Ryan Singer on appetite: 'The appetite is completely different from an estimate. An estimate starts with a design and asks how long it will take. The appetite starts with how much time we want to spend and asks what we can do in that time.' Source: Shape Up, Chapter 5, basecamp.com/shapeup.
05. The bloat-specific checklist
Ten questions before any feature enters a sprint
This checklist supplements RICE, Kano, MoSCoW, and Shape Up. It is not a replacement. It adds the questions the other frameworks omit. Every feature should pass this checklist before it is bet on.
What metric will this move?
If you cannot name a specific metric with a specific directional expectation, you are not ready to build this.
Is there already a feature that addresses this need?
Users often request features that already exist but are undiscoverable. Before building, audit the existing feature set.
If we said no, what happens?
If the answer is 'nothing,' stop now. The value of a feature is what breaks when it is absent.
What feature would we remove to make room?
Every feature has a maintenance cost. Name the feature you would deprioritise to fund this one.
What does success look like in 6 months?
Name the specific metric, the specific threshold, and the specific time horizon.
What does failure look like?
Name the conditions under which this feature is removed. If you cannot describe failure, you cannot manage it.
Who will maintain this in 2 years?
Name the person or team. If the answer is 'whoever is available,' that is a risk signal.
Is this a platform decision or a customer request?
Platform decisions require different governance than customer requests.
Have we tested it qualitatively with users?
A five-user prototype test costs less than two days of engineering time. Do it before building.
If we had to ship it in a week, what would we cut?
The version you can ship in a week is often the right version. Build that, then evaluate.
06. The self-diagnostic
Does your product have feature bloat?
Ten yes/no questions. Takes 3 minutes. Returns a bloat score (0-100) and a recommendation.
Feature bloat self-diagnostic
1. Does your product have more features than your team can describe in a 5-minute demo?
2. Are more than 20% of your features used by fewer than 5% of users each month?
3. Is your onboarding longer than 3 steps?
4. Do you have feature flags that have been in the codebase for more than 6 months without a sunset plan?
5. Does your roadmap contain items that have been there for more than 2 quarters?
6. Can your team articulate the user problem each of your last 5 shipped features solved?
7. Do you have a documented feature sunset process?
8. Does your team measure outcomes (behaviour change) rather than output (features shipped)?
9. Can a new user reach their first success in your product in under 10 minutes without help?
10. Do you run a quarterly feature usage audit?
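The exact weighting behind the 0-100 score is not published on this page. As a sketch, assume each answer contributes 10 points, with questions 1-5 counting a "yes" toward bloat and questions 6-10 counting a "no" (those five are phrased so that "yes" is the healthy answer). The thresholds in `recommendation` are likewise illustrative:

```python
# Sketch of the diagnostic scorer. Weighting and thresholds are assumed,
# not taken from the page.
BLOAT_ON_YES = {1, 2, 3, 4, 5}   # answering "yes" adds 10 points
BLOAT_ON_NO = {6, 7, 8, 9, 10}   # answering "no" adds 10 points

def bloat_score(answers: dict[int, bool]) -> int:
    """answers maps question number (1-10) to True (yes) / False (no)."""
    score = 0
    for q, yes in answers.items():
        if (q in BLOAT_ON_YES and yes) or (q in BLOAT_ON_NO and not yes):
            score += 10
    return score

def recommendation(score: int) -> str:
    if score <= 30:
        return "Healthy: keep running the quarterly audit."
    if score <= 60:
        return "Warning: run the 10-question checklist on your next sprint."
    return "Bloated: schedule a feature audit and a sunset plan."
```

Answering "yes" to all of questions 1-5 and "no" to all of 6-10 scores 100; the inverse scores 0.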
Digital Signet
Need help running the checklist across a real product?
Digital Signet facilitates two-week product audits that apply this framework to your actual feature set, with your actual usage data.
Email Oliver