March 25, 2026

How Can I Analyze NPS Verbatims at Scale?

NPS scores tell you something is wrong. Verbatims tell you what. The open-text response — the field where customers explain their score in their own words — is where most of the diagnostic value in an NPS survey actually lives. The problem is that manual analysis of those responses breaks down fast. Here is how to analyze NPS verbatims systematically, across thousands of responses, in a way that actually informs decisions.

What Are NPS Verbatims and Why Are They Hard to Analyze?

When a customer fills out an NPS survey, they answer two things: a score from 0 to 10, and a follow-up question — usually "What's the main reason for your score?" The answer to that second question is the verbatim. It's unstructured text written in the customer's own words, and it carries more signal than the number next to it.

The score tells you sentiment. The verbatim tells you cause.

The difficulty: most companies collect verbatims at volume. A hundred responses per quarter is readable. Ten thousand is not. Manual review by an analyst covers a fraction of what came in, misses patterns that only appear in aggregate, and produces results that vary depending on who did the tagging. Teams end up treating NPS verbatims as qualitative color — something to quote in a presentation — rather than a systematic input to product and business decisions.

Why Manual NPS Analysis Doesn't Scale

Three things consistently break down when teams try to analyze verbatims by hand.

The first is coverage. An analyst reading 300 verbatims out of 5,000 is drawing conclusions from 6% of the data. The analysis reflects the portion the team had time for, not the full picture. High-signal responses from smaller customer segments — often the ones that matter most for retention — get skipped entirely.

The second is consistency. One analyst tags "slow load times" as a performance issue. Another calls it a product bug. A third files it under UX. Over time, categories drift and you lose the ability to track trends. When you can't trust your taxonomy, you can't trust your trend lines.

The third is depth. The most obvious themes surface in any analysis — they're loud enough that even a 10% sample catches them. The insight usually lives in the second tier: themes affecting 8 to 12 percent of respondents, often concentrated in specific cohorts like enterprise accounts or customers past their first renewal. Manual analysis almost always misses these.

How to Analyze NPS Verbatims at Scale: Step by Step

Step 1: Separate analysis by score segment before you do anything else.

Promoters, passives, and detractors are answering the same question for completely different reasons. Analyze each group independently first, then compare. What your detractors are describing is a different problem from what your promoters are celebrating. Running a combined analysis blurs both signals.
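A minimal sketch of this first step, using the standard NPS score bands and a few hypothetical responses. The segment cutoffs (9–10 promoter, 7–8 passive, 0–6 detractor) are the conventional NPS definitions; the response data is invented for illustration.

```python
def nps_segment(score: int) -> str:
    """Map a 0-10 NPS score to its standard segment."""
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

# Hypothetical responses: (score, verbatim)
responses = [
    (10, "Love the new dashboard"),
    (7, "Fine, but reporting is slow"),
    (3, "Support never replied"),
]

# Bucket verbatims by segment so each group is analyzed independently
by_segment = {"promoter": [], "passive": [], "detractor": []}
for score, verbatim in responses:
    by_segment[nps_segment(score)].append(verbatim)
```

Everything downstream (taxonomy tagging, theme extraction, trending) then runs per bucket, and only afterwards across buckets.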

Step 2: Define your taxonomy before you touch the data.

Whether you're tagging manually or building a categorization model, establish your category framework first. Start with the product areas, experience moments, and outcomes that matter to your business — even a rough set of 10 to 15 top-level categories is enough to begin. Starting without a taxonomy is how inconsistency enters your data. You can refine categories as patterns emerge, but the framework needs to exist before tagging starts.
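As a sketch of what "taxonomy first" means in practice, here is a deliberately simple keyword-rule tagger. The categories and keywords are hypothetical, and real categorization is usually done with a trained model rather than keyword matching; the point is that the category set exists before any verbatim is tagged.

```python
# Illustrative top-level taxonomy with keyword rules (hypothetical categories)
TAXONOMY = {
    "performance": ["slow", "lag", "load time"],
    "billing": ["invoice", "charge", "pricing"],
    "onboarding": ["setup", "getting started", "first week"],
}

def tag(verbatim: str) -> list[str]:
    """Return every taxonomy category whose keywords appear in the verbatim."""
    text = verbatim.lower()
    return [cat for cat, kws in TAXONOMY.items()
            if any(kw in text for kw in kws)]
```

Because every tagger (human or model) works from the same fixed category set, two analysts tagging the same response can no longer file it under three different labels.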

Step 3: Enrich verbatims with customer context.

A verbatim alone is interesting. A verbatim linked to a customer's plan type, account size, tenure, and product usage is actionable. Before analysis, make sure your NPS data is enriched with CRM and product attributes. This lets you ask the right segmentation questions: do enterprise customers complain about different things than SMB? Do long-tenured users mention different friction than customers in their first 90 days? Segmentation is what turns observations into decisions.
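The enrichment step is essentially a join between survey responses and account attributes. A minimal sketch with hypothetical CRM fields (plan type and tenure) keyed by account ID:

```python
# Hypothetical CRM attributes keyed by account id
crm = {
    "acct-1": {"plan": "enterprise", "tenure_days": 420},
    "acct-2": {"plan": "smb", "tenure_days": 35},
}

verbatims = [
    {"account": "acct-1", "score": 4, "text": "Can't share reports externally"},
    {"account": "acct-2", "score": 9, "text": "Setup was painless"},
]

def enrich(rows, crm_attrs):
    """Attach CRM attributes to each verbatim so it can be segmented later."""
    return [{**row, **crm_attrs.get(row["account"], {})} for row in rows]

enriched = enrich(verbatims, crm)
```

Once plan type and tenure travel with every verbatim, the segmentation questions above become simple filters rather than separate research projects.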

Step 4: Look for themes, not just topics.

There is a difference between a topic ("reporting") and a theme ("enterprise customers can't share reports with external stakeholders without a paid seat"). Topics tell you which area of your product is mentioned. Themes tell you the specific experience being described. Push past the first level of categorization. The diagnostic value — and the action — lives at the theme level.

Step 5: Track volume and trend over time, not just current state.

A theme mentioned by 3% of detractors this quarter is a data point. A theme that was 1% a year ago and is now at 3% is a warning. Verbatim analysis without trending misses movement entirely. Build your analysis so you can track how theme prevalence changes across quarters, and watch specifically for themes growing faster than your overall detractor rate.
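The trending logic can be sketched in a few lines: store theme mentions per period as a share of total responses, then flag any theme whose share grew past a chosen multiple. The quarterly counts and the 2x growth threshold below are illustrative assumptions.

```python
# Hypothetical per-quarter counts: theme mentions and total detractor responses
history = {
    "2025-Q1": {"total": 1000, "themes": {"report sharing": 10, "pricing": 50}},
    "2026-Q1": {"total": 1200, "themes": {"report sharing": 36, "pricing": 55}},
}

def prevalence(quarter: str, theme: str) -> float:
    """Share of that quarter's detractor responses mentioning the theme."""
    snap = history[quarter]
    return snap["themes"].get(theme, 0) / snap["total"]

def growing(theme: str, old_q: str, new_q: str, factor: float = 2.0) -> bool:
    """Flag a theme whose share of detractors grew by at least `factor`x."""
    old = prevalence(old_q, theme)
    return old > 0 and prevalence(new_q, theme) / old >= factor
```

In this invented data, "report sharing" went from 1% to 3% of detractors, so it trips the flag even though its absolute share is still small; "pricing" is larger in volume but flat, so it does not.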

Four Questions Worth Asking Once You Have the Data

The categories alone don't tell you what to do. These four questions help you extract the right decisions:

What do detractors say that promoters never mention? These are your actual failure modes — the experiences bad enough to drag a score into the 0 to 6 range.

What do promoters say that detractors never mention? These are your real differentiators — the moments of value that are working and need to be protected, and the proof points that belong in your marketing.

What theme is growing fastest among detractors? This is your most urgent signal. It's getting worse, and your current review cadence probably hasn't caught it yet.

Are any themes concentrated in a specific cohort? A 4% detractor rate across all customers is manageable. A 4% rate that's actually 18% among your enterprise accounts on a specific plan is a retention problem hiding inside an acceptable average.
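The cohort-concentration check is a comparison between the overall detractor rate and each cohort's own rate. A sketch with invented counts that mirror the 4%-overall, 18%-in-one-cohort situation described above:

```python
# Hypothetical counts: cohort -> (detractor responses, total responses)
counts = {
    "enterprise-pro": (18, 100),
    "smb": (22, 900),
}

# Overall detractor rate across all cohorts
overall = sum(d for d, _ in counts.values()) / sum(t for _, t in counts.values())

# Per-cohort detractor rate; compare each against the overall average
per_cohort = {cohort: d / t for cohort, (d, t) in counts.items()}
```

Here the overall rate works out to 4%, which looks acceptable, while the enterprise cohort sits at 18%. The same comparison applies per theme once verbatims are tagged and enriched.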

How to Make Sure the Analysis Changes Something

Analysis that doesn't drive action is a research exercise. A few things that help close the gap.

Share findings in a format that travels. A one-page summary with the top themes, their volume, their trend direction, and one representative verbatim each is more likely to change decisions than a 40-slide deck. Make it easy for someone who wasn't in the analysis to understand what matters.

Route themes to the right owner. A verbatim about billing belongs with Finance. A verbatim about onboarding belongs with CS. A verbatim about a specific feature belongs with Product. If insights land in a marketing report and go nowhere, build a routing layer into the process.

Set a threshold before you start. Decide in advance what share of detractors mentioning a theme constitutes a signal worth acting on. This prevents over-reacting to noise and under-reacting to the themes that actually matter.

Where Enterpret Fits In

Enterpret automatically categorizes NPS verbatims against an adaptive taxonomy that learns your specific business language — your product areas, feature names, and customer vocabulary. When a theme starts growing in your detractor verbatims, you see it before it shows up in churn. And because Enterpret unifies verbatims with the rest of your feedback (support tickets, sales call transcripts, review sites), the patterns you find in NPS are connected to everything else customers are telling you — not siloed in a separate report.

See how Enterpret analyzes NPS verbatims.


Frequently Asked Questions

01
What are NPS verbatims and why do they matter?

NPS verbatims are the open-text responses customers write when explaining their score — the "What's the main reason for your score?" field. They matter because the score tells you sentiment, but the verbatim tells you cause. A promoter score without the verbatim tells you a customer is happy; with it, you know exactly what made them happy and why that experience is worth protecting.

02
Why does manual NPS verbatim analysis fail at scale?

Manual analysis fails in three ways: coverage (analysts can only read a fraction of responses, so conclusions reflect whatever the team had time to read), consistency (different analysts categorize the same response differently, causing taxonomy drift), and depth (second-tier themes affecting 8–12% of respondents — often the most actionable ones — get missed entirely when sampling). Together these make manual analysis unreliable as a systematic input to decisions.

03
Should I analyze promoters, passives, and detractors together?

No. Analyze each group independently first, then compare. Promoters, passives, and detractors are answering the same question for completely different reasons: what detractors describe is a different problem from what promoters celebrate. Running a combined analysis blurs both signals, so segment by score before any tagging or theme extraction begins.

04
What customer context should I attach to NPS verbatims before analysis?

At minimum: plan type, ARR, customer tenure, and product usage level. This enrichment turns observations into decisions. Without it, you know 200 customers mentioned slow reporting. With it, you know whether those 200 are concentrated in enterprise accounts at renewal risk or distributed across free-tier users unlikely to churn. The business context changes what the theme is worth and how urgently to act on it.

05
What four questions should I ask when analyzing NPS verbatim data?

What do detractors say that promoters never mention? (Your actual failure modes.) What do promoters say that detractors never mention? (Your real differentiators.) What theme is growing fastest among detractors? (Your most urgent signal.) Are any themes concentrated in a specific cohort? (A 4% detractor rate that's actually 18% among enterprise accounts is a retention problem hiding inside an acceptable average.)

06
How do I make sure NPS verbatim analysis leads to action?

Three practices close the gap between analysis and action. Share findings as a one-page summary — top themes, volume, trend direction, one representative verbatim each — rather than a full data report. Route themes to the team that owns them: billing verbatims go to Finance, onboarding verbatims go to CS, feature verbatims go to Product. Set a threshold before you start for what percentage of detractors constitutes a signal worth acting on, so the decision framework is in place before the data arrives.
