Methodology
How featurebloat.com sources its anti-patterns, case studies, frameworks, and metric ranges. Every claim on the site should be re-checkable against a named, dated source: a book, a peer-reviewed paper, a vendor research report with disclosed methodology, or a public record (Hacker News, Reddit, Daring Fireball, earnings call, product changelog).
Sources reviewed May 2026
01. Primary sources
The named books, research bodies, and benchmark reports behind every claim
Anti-patterns, case studies, framework synthesis, and metric ranges all trace to a named primary source. Vendor glossaries are used as cross-references, not as primary sources. Where a vendor glossary is the only available source for a claim, the page flags this and treats the claim as directional.
| Source | Cadence | What we take from it |
|---|---|---|
| Inside Intercom (essay series on feature bloat) | On publication | Inside Intercom long-form essays (Des Traynor and team) on product simplicity and the long-run cost of saying yes are a primary editorial reference. Specifically: the ‘saying no’ essay series and the product-management essays that name lazy product decisions. Cited inline where featurebloat.com’s editorial position aligns or diverges. |
| Marty Cagan, Inspired (SVPG / Wiley) | On revision | INSPIRED (Wiley, 2nd ed. 2017) is the canonical source for the ‘feature factory’ term and for the product-discovery vs product-delivery distinction that underlies the anti-pattern taxonomy on /anti-patterns and the framework synthesis on /decision-framework. |
| Marty Cagan, Empowered (SVPG / Wiley) | On revision | EMPOWERED (Wiley, 2020) covers the org-design half of the feature-factory anti-pattern. Used as the primary source for the /for-founders four-stages taxonomy and the PM-role distinction on /for-pms. |
| Teresa Torres, Continuous Discovery Habits (Product Talk) | On revision | Continuous Discovery Habits (Product Talk LLC, 2021) is the canonical source for opportunity solution trees, the discovery cadence framing, and the customer-interview practice that underpins the /decision-framework pre-flight checks. |
| Leidy Klotz, Subtract (Flatiron Books) | On revision | Subtract (Flatiron, 2021) is the primary scientific source for the addition-bias claim that frames the entire site. The behavioural-science research base (Klotz et al., Nature 2021) is cited specifically in the home page manifesto and in /faq. |
| Don Norman, The Design of Everyday Things (Basic Books) | Reference | The Design of Everyday Things (Basic Books, revised 2013 edition) is the foundational reference for affordances, signifiers, perceived complexity, and the cognitive cost of choice. Cited on /for-designers and in the kitchen-sink-UX anti-pattern. |
| Steve Krug, Don’t Make Me Think (New Riders) | Reference | Don’t Make Me Think (New Riders, 3rd ed. 2014) is the working reference for usability heuristics and for the ‘obvious to a first-time user’ standard underlying the progressive-disclosure guidance on /for-designers and /anti-patterns. |
| Nielsen Norman Group (NN/g) UX research | Quarterly review | NN Group’s ongoing UX research library is the primary source for progressive-disclosure technique guidance, complexity-cost studies, defaults research, and usability-heuristic articulation. Specific NN/g articles are cited inline on /for-designers and the kitchen-sink-UX anti-pattern. |
| Pendo Product Benchmarks | On publication | Pendo Product Benchmarks reports (annual) are cited for feature-adoption rate ranges, the ‘most features are rarely used’ baseline claim, and the rough feature-adoption distribution that anchors /metrics ranges. Vendor research, so vendor-incentive critique is flagged where the report itself acknowledges methodology constraints. |
| Mixpanel Product Benchmarks | On publication | Mixpanel Product Benchmarks (annual; varied product categories) cover activation, retention, and feature-engagement medians. Cited on /metrics where the median is reported with methodology and on /case-studies where Mixpanel publishes a category-specific data point. |
| ProductPlan glossary (cross-reference) | Reference | ProductPlan’s product-management glossary is used as a cross-reference for vendor-glossary definitions. Where featurebloat.com’s editorial position diverges from a glossary entry, the divergence is noted on the page in question. |
| Ryan Singer, Shape Up (Basecamp) | Reference | Shape Up (Basecamp, 2019, free online) is the canonical source for fixed-time / variable-scope decision-making cited on /decision-framework and /roadmap-templates. The betting-table pattern on /roadmap-templates is a direct adaptation. |
| Jason Fried and DHH, Rework + Getting Real (37signals) | Reference | Rework (Crown Business, 2010) and Getting Real (37signals, 2006, free online) are cited for the ‘say no’ editorial discipline and for the strategic-clarity counter-argument to competitor parity paranoia in /anti-patterns. The Jason Fried hero quote on the home page sources to this body of work. |
| Peer-reviewed HCI research (ACM CHI / CSCW) | Per study | ACM CHI and CSCW conference papers and the HCI journal literature are cited per claim where featurebloat.com makes a specific empirical statement (e.g. choice-overload effects, defaults research, progressive-disclosure efficacy). Citations link to the specific paper or DOI rather than to the conference page. |
02. In scope
What this site is willing to publish
- Editorial synthesis of named-source positions on feature bloat (Cagan, Torres, Klotz, Norman, Krug, Fried/DHH, Singer).
- 10-pattern taxonomy of feature-bloat causes with structural cause and named fix per pattern.
- Case studies of publicly visible feature-bloat episodes in named products (Microsoft Word, Evernote, Notion, Slack, Zoom, iOS Settings) triangulated from public sources.
- Framework synthesis: RICE / Kano / MoSCoW / Shape Up reviewed honestly with the bloat-specific question each one omits, plus a 10-question diagnostic.
- Role-specific guidance for founders, PMs, engineers, and designers, derived from the named-author positions cited above.
- Feature-adoption metric framing using Pendo and Mixpanel Product Benchmarks where the source publishes its own methodology.
- Annotated reading list with capsule reviews of eight books in the simplicity / product-discovery / design literature.
03. Out of scope
What this site deliberately does not publish
- Proprietary product analytics from any individual product. Where a case study references specific feature usage data, the source is named and the figure is treated as the source's claim, not as a featurebloat.com original measurement.
- Individual product audits or consulting deliverables. Digital Signet does not run a paid feature-bloat audit service tied to this site; the editorial framing is not a sales motion.
- Vendor product-management software comparisons (Aha! vs ProductPlan vs Productboard vs Pendo). Featurebloat.com cites these tools where relevant but does not rank or recommend.
- Country-specific or industry-specific compliance considerations for feature deprecation (data-protection notification rules, healthcare-software regulatory paths, financial-services audit trails). Where these matter, the site flags the consideration without giving advice.
- Specific user research data attributed to any individual product team. The site does not republish unattributed user-research findings.
- Roadmap or feature-decision advice for any individual product or company. The /decision-framework diagnostic returns a score, not a buy-or-build recommendation.
04. Editorial framework
How claims get from source to page
Each of the 10 named anti-patterns has: a definition; a structural cause (the org-design or incentive condition that produces it); a named author or research base; and a named fix that is itself sourced. Patterns are not invented for this site; they are surfaced from the literature and given a single canonical phrasing.
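The per-pattern contract above can be sketched as a record shape. This is a minimal sketch under stated assumptions: the field names and the example values are illustrative, not featurebloat.com's actual data model.

```typescript
// Hypothetical shape of one anti-pattern entry. Field names are
// illustrative; only the four required parts (definition, structural
// cause, named source, sourced fix) come from the editorial rule.
interface AntiPattern {
  slug: string;            // the single canonical phrasing
  definition: string;
  structuralCause: string; // the org-design or incentive condition
  sourceAuthor: string;    // named author or research base
  fix: { name: string; source: string }; // the fix is itself sourced
}

// Example entry, paraphrasing the Cagan-derived pattern named above.
const featureFactory: AntiPattern = {
  slug: "feature-factory",
  definition: "Shipping output is rewarded; outcomes are not measured.",
  structuralCause: "Roadmaps are committed feature lists, not problems.",
  sourceAuthor: "Marty Cagan, INSPIRED (Wiley, 2nd ed. 2017)",
  fix: { name: "Discovery cadence before delivery", source: "Cagan 2017; Torres 2021" },
};

// Mirrors the editorial rule: a pattern without a named source or a
// sourced fix does not ship.
function isSourced(p: AntiPattern): boolean {
  return p.sourceAuthor.length > 0 && p.fix.source.length > 0;
}
```

The point of the shape is the two source fields: both the pattern and its fix must trace to a named source, which is the rule the surrounding text states.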
Each case study cites at minimum: a public quote from Hacker News, Reddit, Daring Fireball, Thurrott, a public earnings call, or a product changelog. Press summaries alone are not sufficient; the underlying public record is what gets cited. Where the public record is contested, both sides are surfaced rather than the convenient one picked.
RICE (developed by Sean McBride at Intercom, popularised in the product-management literature), Kano (Noriaki Kano), MoSCoW (Dai Clegg), and Shape Up (Ryan Singer / Basecamp) are reviewed for what each captures and what each misses. The bloat-specific question each one omits is named explicitly. The diagnostic quiz on /decision-framework returns a 0 to 100 bloat score with bands tied to specific recommended interventions, not a generic colour code.
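The score-to-band mapping can be sketched as follows. The thresholds and intervention labels here are assumptions for illustration; the text only specifies a 0 to 100 score with intervention-tied bands.

```typescript
// Hypothetical banding for the 0-100 bloat score. Cut-offs and
// intervention wording are illustrative, not the live quiz's values.
type Band = { label: string; intervention: string };

function bandFor(score: number): Band {
  if (score < 0 || score > 100) throw new RangeError("score must be 0-100");
  if (score <= 25) return { label: "lean", intervention: "Keep the current review cadence." };
  if (score <= 50) return { label: "drifting", intervention: "Run a feature-usage audit." };
  if (score <= 75) return { label: "bloating", intervention: "Freeze additions; start deprecation triage." };
  return { label: "bloated", intervention: "Subtract before shipping anything new." };
}
```

The design choice the text implies is that each band carries a concrete intervention string rather than a colour, so the quiz output is actionable on its own.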
Feature usage rate, active feature count per user, time-to-interactive, and retention cohorts by adoption are presented with Pendo and Mixpanel Product Benchmarks as the published-baseline anchors. Where the vendor research lacks methodology disclosure, the page flags the gap and treats the figure as directional, not a point estimate.
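The directional-versus-point-estimate rule above reduces to one flag on each cited figure. A minimal sketch, assuming a hypothetical record shape (none of these names come from the site):

```typescript
// A cited metric range carries whether its source disclosed methodology.
interface CitedRange {
  metric: string;                // e.g. "feature adoption rate"
  low: number;
  high: number;
  methodologyDisclosed: boolean; // did the vendor report publish its method?
}

// Mirrors the editorial rule: undisclosed methodology means the figure
// is presented as directional, never as a point estimate.
function presentation(r: CitedRange): "published-baseline" | "directional" {
  return r.methodologyDisclosed ? "published-baseline" : "directional";
}
```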
The counter-position names four conditions under which adding features is legitimate: building a platform (not a product); closing an enterprise contract with specific written requirements; running a genuine network-effect product (the value scales with feature surface for established users); and late-category entry with a clear, narrowly bounded differentiation gap. Each condition has a litmus test on /when-features-are-ok to keep the counter-position from becoming a get-out-of-jail card.
The source hierarchy is fixed: peer-reviewed research beats vendor research; a named-author primary source beats a vendor glossary summary; a public-record primary source beats a secondhand press summary. Where a vendor glossary is the only available source, the page says so and treats the claim as directional.
05. Refresh cadence
Monthly first-business-week pass + out-of-cycle triggers
The site is reviewed against primary sources on the first business week of each month. The visible “Last verified” label, the dateModified field in every page’s Article JSON-LD, and the on-page review-date badges on /about and /methodology all read from a single constant (LAST_VERIFIED_DATE) so on-page text and schema are structurally in lockstep. Cosmetic date refreshes are not possible without changing the constant, which updates every page at once.
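The single-constant mechanism can be sketched as below. The constant name LAST_VERIFIED_DATE comes from the text; the date value, function names, and rendering details are illustrative assumptions.

```typescript
// One constant feeds every surface that shows a verification date.
// The value here is a placeholder, not the site's real date.
const LAST_VERIFIED_DATE = "2026-05-04"; // updated once per monthly pass

// Visible on-page label reads the constant...
function lastVerifiedLabel(): string {
  return `Last verified: ${LAST_VERIFIED_DATE}`;
}

// ...and so does the dateModified field in each page's Article JSON-LD,
// so schema and on-page text cannot drift apart.
function articleJsonLd(headline: string): string {
  return JSON.stringify({
    "@context": "https://schema.org",
    "@type": "Article",
    headline,
    dateModified: LAST_VERIFIED_DATE,
  });
}
```

Because both surfaces read one value, a "cosmetic" date bump is impossible without editing the constant, which by construction updates every page at once.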
Out-of-cycle refreshes trigger on:
- New edition or significant revision of a primary book (INSPIRED, EMPOWERED, Continuous Discovery Habits, Shape Up, Don't Make Me Think, The Design of Everyday Things, Subtract).
- New annual edition of Pendo Product Benchmarks or Mixpanel Product Benchmarks affecting cited metric ranges.
- New NN Group research bulletin or peer-reviewed HCI study with material impact on a named anti-pattern or framework section.
- Public feature-bloat incident in a named product covered on /case-studies (e.g. a Slack vs Teams feature-set change, a Notion architecture revision, a Microsoft Word command-count update).
- Flagged correction from a reader, author, or named-product team that materially affects a claim on a page.
Refreshes that affect a metric range or a case-study fact ship as soon as the change is confirmed against the primary source. Refreshes that affect only editorial framing batch into the next monthly pass.
06. Limitations
What this site cannot tell you
This is opinion content with citations, not consulting advice. The decision framework and the diagnostic quiz return a directional signal, not a buy-or-build verdict for any individual product. Substitute the specific facts of your product, your customers, and your contract obligations before acting on any recommendation here.
Case studies cite public records and triangulate from public quotes. Where a product is mid-pivot during a monthly verification pass (a feature is being deprecated, an interface is in transition, a public feature list is in flux) the site flags the in-flight status rather than picking a single point-in-time framing.
Peer-reviewed HCI research is cited per study. A claim that depends on one study is presented as that study’s claim with citation; a claim that depends on multiple studies is presented as the synthesised position with each contributing citation.
Vendor research (Pendo Product Benchmarks, Mixpanel Product Benchmarks) is cited where the source publishes its own methodology. Where the methodology is opaque, the figure is treated as directional and the page says so.
07. Corrections and read next
Corrections process
Spotted a misattributed quote, a case-study fact that has changed, a named-author position misstatement, or a missing source? Email [email protected] with the page URL and the source you would like cited. Substantive corrections are typically actioned within five business days. Non-substantive corrections (typos, link rot, structural edits) batch into the next monthly verification pass.
Read next
/about — why this site exists, who builds it, editorial position, full coverage map.
/anti-patterns — the 10 named structural causes of feature bloat with the fix per pattern.
/case-studies — six named products triangulated from public sources: Word, Evernote, Notion, Slack, Zoom, iOS Settings.
/decision-framework — RICE, Kano, MoSCoW, Shape Up reviewed honestly plus a 10-question diagnostic.