featurebloat.com is an independent editorial site. Some links are affiliate links. All company names are trademarks of their respective owners. Opinions are the author's. Company references cite public sources. This is editorial content, not endorsement or advice.

Feature bloat FAQ

Twenty questions about feature bloat, answered properly. Grouped into definitions, causes, measurement, and fixes. Every answer links to the deep-dive page for more.

01. Definitions

What is feature bloat?

Feature bloat is the state a software product reaches when it has accumulated so many features that the complexity imposed on users and the engineering team outweighs the value those features deliver. It is the noun. The process that creates it is feature creep. Research by the Standish Group (2002) found that 64% of enterprise application features are rarely or never used. Pendo's 2019 Feature Adoption Report found only 12% of features are used often.

The manifesto
What is the difference between feature bloat and feature creep?

Feature creep is the process: the gradual, often individually defensible addition of features without sufficient deliberation. Feature bloat is the result: the product state where that accumulation has exceeded the value it delivers. You can stop feature creep before it produces bloat. You cannot stop bloat without actively removing what has accumulated. Feature creep is the verb; feature bloat is the noun.

The 10 anti-patterns
What is bloatware and how does it relate?

Bloatware traditionally refers to pre-installed software on hardware (phones, laptops) that the user did not ask for and cannot easily remove. The term has broadened to mean any software with more features than necessary for its purpose. Feature bloat is the most precise term for the specific problem of too many features in an application that started with a clear purpose. Bloatware and feature bloat share the same cause: features accumulate faster than they are removed.

The no-sunsetting anti-pattern
Is feature bloat the same as scope creep?

They are related but distinct. Scope creep is a project-management concept: the tendency for project requirements to expand beyond their original definition during a project. Feature bloat is a product-management concept: the accumulation of features across a product's life cycle, beyond what serves the core use case. Scope creep happens during a project. Feature bloat happens across many projects over many years. Both share the root cause identified by Leidy Klotz: humans default to adding.

Scope creep anti-pattern
What is the kitchen sink anti-pattern?

The kitchen sink anti-pattern is when every possible control, option, and feature is visible on every screen simultaneously, without prioritisation or progressive disclosure. The user is presented with the full capability of the product at all times, which creates cognitive overload. The name comes from the idiom 'everything but the kitchen sink.' LinkedIn's original desktop feed, early Salesforce UI, and classic Microsoft Word toolbars are canonical examples. The fix is progressive disclosure and task-oriented design.

Kitchen sink anti-pattern in full

02. Causes

What causes feature bloat?

Ten structural causes, which we call anti-patterns: scope creep (projects expanding beyond their definition), the feature factory (organisations measuring output rather than outcomes), kitchen sink UX (every control visible at once), competitor parity paranoia (copying competitors without a strategic filter), treating the roadmap as a list of requests, no sunsetting discipline, configuration overload, feature flags kept indefinitely, CEO pet features, and enterprise exceptions that become defaults. See the full taxonomy on the anti-patterns page.

All 10 anti-patterns
Why do companies keep adding features?

Because adding is visible and subtracting is invisible. Leidy Klotz's research in Subtract (2021) shows that humans systematically overlook the subtractive option even when it is the better one. In organisations, this cognitive bias is amplified by structural incentives: engineers are rewarded for shipping, product managers have stories to tell about new features, and roadmaps have no 'sunset' column. The organisation is structurally optimised for addition and structurally resistant to subtraction.

Subtract by Leidy Klotz
What is a feature factory?

A feature factory is Marty Cagan's term for a product organisation that measures itself by the number of features shipped (output) rather than the outcomes those features produce. Symptoms: the roadmap is a feature list, PMs take requirements rather than discover problems, there is no user-research budget, OKRs measure features shipped. Cagan argues this is the norm, not the exception, in most product organisations. The fix is outcome-based OKRs and empowered product teams. See Inspired (SVPG, 2017).

Feature factory anti-pattern
Is feature bloat a management problem or an engineering problem?

It is primarily a management problem with engineering consequences. The decisions to add features are made by product managers, founders, and leaders who are responding to stakeholder requests, competitive pressure, and growth narratives. Engineers are the ones who inherit the maintenance debt. The engineering team's leverage is in making the maintenance cost visible (see the metrics page and the for-engineers page) and in insisting on flag-sunset policies and feature-removal ceremonies.

For engineers
Do users ask for feature bloat?

Sometimes. Users ask for specific features; they do not ask for feature bloat. But the aggregate of user requests, if implemented without a strategic filter, produces bloat. Microsoft's own research showed that the top user requests were for features already in Word, just undiscoverable. Implementing discoverability improvements rather than more features would have served users better. Users describe their problems; the solution is the product team's responsibility.

Microsoft Word case study

03. Measurement

How do you measure feature bloat?

Four metrics: (1) feature usage rate per user per week - what percentage of active users touch each feature. (2) Active feature count per user - how many of your features does the median user actually use per month. (3) Time-to-first-value - how long it takes a new user to reach their first success. (4) Retention cohorts by feature adoption - do users who adopt specific features retain better or worse. Tools: Pendo, Amplitude, Mixpanel, PostHog, Heap.
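The first two metrics fall out of a raw event log directly, without any of the tools above. A minimal sketch; the `events` rows, user IDs, and feature names are hypothetical:

```python
from collections import defaultdict
from statistics import median

# Hypothetical one-week event log: (user_id, feature) pairs.
events = [
    ("u1", "editor"), ("u1", "export"), ("u2", "editor"),
    ("u3", "editor"), ("u3", "comments"), ("u1", "editor"),
]

active_users = {user for user, _ in events}

# Metric 1: weekly usage rate per feature =
# distinct users who touched it / all active users.
users_per_feature = defaultdict(set)
for user, feature in events:
    users_per_feature[feature].add(user)
usage_rate = {f: len(us) / len(active_users)
              for f, us in users_per_feature.items()}

# Metric 2: active feature count = features the median user touched.
features_per_user = defaultdict(set)
for user, feature in events:
    features_per_user[user].add(feature)
median_feature_count = median(len(fs) for fs in features_per_user.values())
```

In this toy log every user touches "editor" (usage rate 1.0) while "export" and "comments" sit at one user in three, and the median user touches two features, which is the shape of distribution the 80/20 question below describes.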

Full metrics guide
What is the 80/20 rule for software features?

The Pareto principle applied to software: approximately 80% of users use approximately 20% of features. The Standish Group (2002) found 64% of enterprise features are rarely or never used, consistent with an 80/20 distribution. Pendo (2019) found 12% used often, 15% sometimes, 73% rarely or never. In practice, the ratio varies by product, but the distribution is always power-law: a small set of features drives most usage.
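You can check the distribution in your own data by asking what share of total usage the top 20% of features capture. A sketch with invented per-feature event counts:

```python
# Hypothetical per-feature event counts for a 10-feature product.
usage = {"f1": 900, "f2": 500, "f3": 60, "f4": 40, "f5": 30,
         "f6": 20, "f7": 15, "f8": 10, "f9": 5, "f10": 2}

counts = sorted(usage.values(), reverse=True)
top_n = max(1, len(counts) // 5)           # the top 20% of features
share = sum(counts[:top_n]) / sum(counts)  # share of all usage they capture
# Here the top 2 of 10 features carry ~88% of events: a power-law shape,
# steeper than the nominal 80/20.
```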

Metrics page
Is it true that 64% of features are never used?

The Standish Group CHAOS report (2002) found that, across enterprise applications, 45% of features are 'never' used and 19% are 'rarely' used, giving the often-cited 64% figure. The research is from 2002 and has been criticised for its methodology, but no subsequent large-scale study has produced a substantially different number. Pendo's 2019 study of 180 million users across 35,000 applications found 73% of features rarely or never used, consistent with the Standish finding.

Homepage with data
What is feature adoption rate?

Feature adoption rate is the percentage of active users who interact with a specific feature in a given time period (typically weekly or monthly). It is measured per feature: 'Feature X has a 23% weekly adoption rate' means 23% of active users interacted with Feature X at least once in the past week. It is distinct from aggregate usage count, which can be misleading. Pendo defines 'feature adoption' as the feature being used by at least one new user who had not previously used it.
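The contrast with aggregate usage count fits in a few lines. The users, features, and event counts below are invented to show how a single power user can make raw counts misleading:

```python
# Hypothetical weekly events: (user_id, feature).
# One power user hammers "macros"; three others use "search" once each.
events = [("u1", "macros")] * 50 + [("u2", "search"),
                                    ("u3", "search"),
                                    ("u4", "search")]
active_users = {user for user, _ in events}  # 4 active users

def adoption_rate(feature):
    """Share of active users who touched `feature` at least once this week."""
    users = {u for u, f in events if f == feature}
    return len(users) / len(active_users)

# Aggregate counts say "macros" dominates (50 events vs 3),
# but adoption tells the opposite story:
adoption_rate("macros")  # 1 of 4 users -> 0.25
adoption_rate("search")  # 3 of 4 users -> 0.75
```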

Full metrics guide
What tools measure feature usage?

The major product analytics tools: Pendo (best for out-of-the-box feature adoption reports), Amplitude (best for cohort analysis and funnel work), Mixpanel (event-based, flexible querying), PostHog (open-source, self-hostable), Heap (retroactive analysis without pre-instrumentation). Each has a different strength. Pendo requires the least engineering work for basic feature usage reports. Amplitude requires more instrumentation but gives more flexibility.

Metrics: tools section

04. Fixes

How do you remove features from a product?

Five steps: (1) Pre-flight checks: is the feature genuinely dead or dormant, who uses it and how much do they pay, is it load-bearing for any other feature, are there contractual commitments. (2) Choose the cut strategy: delete entirely, hide for new users, deprecate with a sunset date, or spin off. (3) Communication: in-app notice 30-60 days before, email sequence at 30 days / 7 days / day-of, changelog entry, support macro. (4) Revenue-risk modelling. (5) Avoid the four failure modes.
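The 'deprecate with a sunset date' strategy in step 2 pairs naturally with the flag-sunset policies the for-engineers page argues for. One possible enforcement mechanism, sketched rather than prescribed (the registry shape and flag name are hypothetical): require every flag to carry a sunset date at creation, and fail loudly once that date passes instead of letting the flag linger indefinitely.

```python
from datetime import date

# Hypothetical flag registry: every flag carries a sunset date at creation,
# so the deprecation deadline lives in code, not in a wiki page.
FLAGS = {
    "legacy_export": {"enabled": True, "sunset": date(2024, 6, 1)},
}

def flag_enabled(name, today=None):
    today = today or date.today()
    flag = FLAGS[name]
    if today >= flag["sunset"]:
        # Fail loudly after the sunset date: the noisy error forces the
        # removal work that a silently-ignored flag never triggers.
        raise RuntimeError(f"Flag '{name}' passed its sunset date; remove it.")
    return flag["enabled"]
```

A CI check that scans the registry and flags anything past (or near) its sunset achieves the same effect without runtime failures, if that trade-off suits the team better.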

Full surgical playbook
How do you say no to a feature request from the CEO?

Reframe the conversation from feature to problem. 'What outcome are you hoping this moves? I want to make sure I'm solving the right problem, not just shipping a feature.' Give yourself a week to gather evidence rather than refusing on the spot. Present the evidence. If the data supports the feature, build it. If it does not, present the data and explain why the problem is better solved another way. See the full script with dialogue on the team conversations page.

Full CEO script
What is Shape Up methodology?

Shape Up is Ryan Singer's product development methodology published by Basecamp in 2019 (free online at basecamp.com/shapeup). The core idea is appetite: before any work begins, the team commits to how much time it is willing to spend on the project. The scope is then shaped to fit the appetite, not the other way around. Six-week cycles, a betting table to choose what gets worked on, and no formal backlog. It is the most complete structural response to feature bloat in the product management literature.

Shape Up in the decision framework
What is the RICE prioritisation framework?

RICE stands for Reach, Impact, Confidence, Effort. It is a scoring framework developed by Intercom for ranking feature requests. Score = (Reach x Impact x Confidence) / Effort. It forces quantification of assumptions and prevents the loudest advocate from winning prioritisation arguments. Its limitation: it ranks features against each other but does not include a 'build nothing' option. The featurebloat.com checklist adds ten questions that RICE omits.
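The formula itself is one line; the two example requests below are hypothetical, and the unit conventions in the docstring are those commonly described for RICE rather than anything this site prescribes:

```python
def rice_score(reach, impact, confidence, effort):
    """RICE = (Reach x Impact x Confidence) / Effort.

    Conventions commonly used with RICE: reach = users affected per
    period, impact = a scored scale (e.g. 0.25 to 3), confidence = 0-1,
    effort = person-months. Any consistent units work.
    """
    return (reach * impact * confidence) / effort

# Two hypothetical requests: the loudly-advocated one loses once
# its assumptions are quantified.
rice_score(reach=500, impact=2, confidence=0.8, effort=4)   # 200.0
rice_score(reach=5000, impact=1, confidence=0.5, effort=2)  # 1250.0
```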

Full decision framework
When should you add features?

Four situations where adding is genuinely right: (1) you are building a platform, not a product, and your users are building on top of you (platform completeness is a legitimate constraint). (2) You are closing an enterprise contract with specific security or compliance requirements (SAML, SCIM, audit logs belong in the enterprise tier). (3) You have a network-effect product and the feature strengthens the core network. (4) You are late in a category with a specific differentiation gap that all leading competitors share. See the honest counterpoint page.

When features are OK

All pages

Home / Manifesto · Anti-patterns · Case Studies · How to Cut · When Features Are OK · Decision Framework · Team Conversations · Metrics · Roadmap Templates · Books and References · For Founders · For PMs · For Engineers · For Designers