
Utility customer satisfaction surveys work when they are triggered at the right moment in the customer journey, ask the right question type for that interaction, and connect responses to the account data that explains why a score came in low. Generic annual surveys miss the billing dispute that happened in March, the service request that took four calls to resolve in July, and the outage in October where customers got no status update for six hours. This guide covers the survey questions that matter at each utility touchpoint, how to choose between CSAT, CES, and NPS, and how to connect your survey program to your CIS so every low score has account context behind it.
Most CSAT templates are written for retail, hospitality, or software support. They ask about "the experience" as a single event. Utility customer interactions do not work that way.
A billing interaction is not one event. It starts when the customer receives the bill, continues when they log into the portal to check their usage, escalates when they call to dispute an estimated read, and ends when the adjustment posts. A generic "how satisfied were you?" after the call captures only the last 15 minutes of a process that may have taken three days.
Utility satisfaction surveys need to be designed around the nature of the touchpoint, not a generic template. The questions, scale, and timing differ by stage because the customer's context differs by stage. A customer who just moved into a new service address is evaluating your enrollment process. A customer who just had service restored after a 14-hour outage is evaluating your communication and response speed. Asking both the same question produces data that helps neither.
Your customer information system is the system of record that makes survey data actionable. Without it, a low score is just a number. With it, you can see that the dissatisfied customer is on a tiered rate, had an estimated read in the billing period they are complaining about, and called in twice in the last 90 days. That context turns a score into a correctable problem.
Are you measuring satisfaction with a specific interaction, the effort required to resolve an issue, or your customers' overall willingness to stay with your utility?
These are three different questions. Mixing the metrics produces data you cannot act on.
| Metric | What It Measures | Best Utility Use Case | Typical Question |
|---|---|---|---|
| CSAT | Satisfaction with a specific interaction | Post-billing, post-service request, post-outage resolution | "How satisfied were you with how your billing issue was handled? (1-5)" |
| CES | Effort required to resolve an issue | Service requests, billing disputes, payment portal usability | "How easy was it to resolve your issue with us today? (1-7)" |
| NPS | Overall loyalty and likelihood to recommend | Annual relationship survey, post-major-upgrade | "How likely are you to recommend our utility to a neighbor? (0-10)" |
For most US utilities, CSAT and CES are the workhorses. They fire after specific events and tell you what broke in the process. NPS is an annual temperature check. Asking the NPS question after every billing cycle dilutes the metric and makes it impossible to benchmark against industry data, which is calibrated for annual measurement.
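The three metrics are also computed differently from raw responses. A minimal sketch of the standard formulas (this assumes the common top-two-box convention for CSAT, where a 4 or 5 counts as satisfied, and the standard promoter/detractor split for NPS):

```python
def csat(scores):
    """CSAT on a 1-5 scale: percent of responses scoring 4 or 5 (top-two-box)."""
    return 100 * sum(s >= 4 for s in scores) / len(scores)

def ces(scores):
    """CES on a 1-7 scale: mean effort score (higher = easier)."""
    return sum(scores) / len(scores)

def nps(scores):
    """NPS on a 0-10 scale: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)
```

Because CSAT and NPS are percentages of respondents while CES is a mean, the three numbers are not interchangeable, which is one more reason not to mix them across triggers.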
Track what each score means for operations. Utility customer experience metrics: a measurement guide covers the full set of KPIs, including how CSAT, CES, and NPS each connect to operational outcomes like first-contact resolution rate and call handle time.
At which stage are your lowest satisfaction scores concentrated, and does your current survey design let you see that by stage, or only in aggregate?
Every stage of the utility customer journey generates different satisfaction drivers. Grouping all interactions into a single survey metric hides which stages are failing.
The five stages requiring distinct survey designs are service enrollment, billing and payment, service requests and field work, usage monitoring and alerts, and outage response. For a full breakdown of what happens at each stage, the utility customer journey: a digital guide maps each touchpoint and identifies the digital gaps that most commonly drive low scores.
Below are the specific questions that perform best at each utility touchpoint. Limit each trigger to two or three questions. More than three questions at a post-interaction trigger reduces completion rates by roughly 40 percent.
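The two-to-three-question limit is easiest to hold when it is enforced at the point of survey definition rather than by editorial discipline. A sketch of what a trigger-scoped survey definition might look like (the event names, field names, and scale labels here are hypothetical, not tied to any specific survey platform):

```python
from dataclasses import dataclass, field

@dataclass
class SurveyDefinition:
    trigger_event: str                      # e.g. "invoice.generated", "service_order.closed"
    questions: list = field(default_factory=list)

    MAX_QUESTIONS = 3                       # post-interaction cap from the guidance above

    def add_question(self, text, scale):
        if len(self.questions) >= self.MAX_QUESTIONS:
            raise ValueError("post-interaction surveys are capped at 3 questions")
        self.questions.append({"text": text, "scale": scale})

# A post-billing trigger with one scored question and one open-text follow-up
billing_survey = SurveyDefinition(trigger_event="invoice.generated")
billing_survey.add_question("How satisfied were you with your bill this month?", "csat_1_5")
billing_survey.add_question("What would have made it better?", "open_text")
```

A hard cap like this keeps a well-meaning stakeholder from quietly growing a two-question trigger into a ten-question questionnaire.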
Before designing stage-specific questions, every utility survey program should include these baseline design rules:
Service Enrollment
Billing and Payment
Service Requests and Field Work
Usage Monitoring and Alerts
Outage and Service Disruption
What is your current process when a customer gives you a score of 1 or 2, and does anyone contact that customer within 48 hours?
A survey without a closed-loop response process produces reports, not improvements. Build the program in this sequence:
For platform selection guidance, customer information systems for utilities: a complete guide covers the CIS capabilities that determine how well survey data integrates with account history.
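The closed-loop step itself can be sketched as a simple rule: any score of 1 or 2 becomes a follow-up task with a 48-hour due date attached to the account. The task structure below is illustrative, not a specific platform's API:

```python
from datetime import datetime, timedelta, timezone

FOLLOW_UP_WINDOW = timedelta(hours=48)

def route_response(score, account_id, now=None):
    """Return a follow-up task for a 1 or 2 score (5-point CSAT), else None."""
    now = now or datetime.now(timezone.utc)
    if score <= 2:
        return {
            "account_id": account_id,
            "type": "low_score_follow_up",
            "due_by": now + FOLLOW_UP_WINDOW,   # the 48-hour contact window
        }
    return None
```

Routing the task to the account record, rather than to a shared inbox, is what makes the 48-hour contact measurable.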
Customer satisfaction data is most valuable when it is not isolated in a survey tool. A score of 2 on a post-billing survey has different implications depending on whether that customer is on a tiered rate, whether they have an outstanding dispute, whether they called in the same billing cycle, and how long they have been a customer.
Your CIS platform is the source of all of that context. When your survey platform pulls account-level tags from the CIS at the time of survey trigger, every response arrives with the data you need to understand why the score is what it is. Low scores from customers with recent estimated reads point to a meter data problem. Low scores from new customers who just completed enrollment point to an onboarding gap. Low scores clustered in a specific service zone during an outage event point to a field operations response issue.
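Mechanically, pulling CIS context at trigger time is a join between the survey event and the account record. A minimal sketch (the account fields and their names are assumptions for illustration, not a specific CIS schema):

```python
def enrich_response(response, account):
    """Attach account-level tags from the CIS so a score arrives with context."""
    response["tags"] = {
        "rate_type": account["rate_type"],                              # e.g. tiered vs flat
        "had_estimated_read": account["last_read_type"] == "estimated", # meter data flag
        "calls_last_90_days": account["calls_last_90_days"],            # contact history
        "tenure_months": account["tenure_months"],                      # new vs long-standing
    }
    return response

account = {"rate_type": "tiered", "last_read_type": "estimated",
           "calls_last_90_days": 2, "tenure_months": 14}
tagged = enrich_response({"score": 2, "account_id": "A-1042"}, account)
```

The key design point is that enrichment happens at trigger time, so the tags reflect the account's state when the customer formed the opinion, not its state weeks later when someone runs a report.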
SMART360 connects to survey and notification platforms through 25+ pre-built integrations, so survey triggers fire from CIS events rather than from scheduled email batches. The result is higher completion rates, contextually tagged responses, and a closed-loop workflow that feeds low scores back to the account record for staff follow-up. Utilities on SMART360 report a 95% customer retention rate, driven in part by the closed-loop CX process that flags at-risk accounts before they escalate to formal complaints.
If you are reviewing whether your current CIS can support trigger-based survey workflows, how to evaluate and choose a CIS system for your utility covers the integration and reporting capabilities to require from any platform.
Trigger-based surveys should fire after specific events: billing cycles, service request completion, and outage restoration. Annual relationship NPS surveys should be sent once per year. Sending NPS quarterly or after every interaction dilutes the metric and makes benchmarking against industry data unreliable.
Industry benchmarks vary by utility type, but US utilities typically target a CSAT score of 75 to 80 out of 100 (on a 5-point scale converted to 100). Scores below 70 indicate systemic issues in the measured touchpoint. Scores above 85 are achievable with digital self-service at billing and service request stages where most low scores originate.
Two to three questions per trigger event is the standard for post-interaction utility surveys. One scored question (CSAT 1-5 or CES 1-7), one follow-up open text for low scores, and optionally one binary question (Yes/No) about a specific process. Surveys exceeding five questions at transaction-level triggers see completion rates drop below 15 percent.
Yes, when your survey platform integrates with your CIS. Survey triggers fire from CIS events (invoice generation, service order close), and each response is tagged with account metadata. This lets you segment low scores by rate type, service zone, account tenure, and interaction history, turning aggregate satisfaction data into actionable operational intelligence.
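Once responses carry those tags, segmentation is a group-by over the account metadata. A sketch in plain Python (field names are illustrative):

```python
from collections import defaultdict

def low_score_rate_by(responses, tag, threshold=2):
    """Share of responses at or below `threshold`, grouped by one CIS tag."""
    counts = defaultdict(lambda: [0, 0])        # tag value -> [low count, total count]
    for r in responses:
        bucket = counts[r["tags"][tag]]
        bucket[1] += 1
        if r["score"] <= threshold:
            bucket[0] += 1
    return {value: low / total for value, (low, total) in counts.items()}

responses = [
    {"score": 1, "tags": {"rate_type": "tiered"}},
    {"score": 5, "tags": {"rate_type": "tiered"}},
    {"score": 2, "tags": {"rate_type": "flat"}},
]
by_rate = low_score_rate_by(responses, "rate_type")
```

Running the same group-by over `service_zone` or `tenure_months` is what turns "our CSAT dropped" into "new customers in zone 4 are driving the drop."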
Once your survey program is running, improving the self-service touchpoints that generate the most low scores is the next step. CIS billing software features: a utility checklist covers the billing portal and payment capabilities that drive satisfaction scores at the highest-volume touchpoint.