
Utility Customer Satisfaction Survey Questions: A Practical Guide

Learn which CSAT survey questions to ask at each utility touchpoint, when to use CSAT vs. CES vs. NPS, and how to connect survey data to your CIS.
Written by Sewanti Lahiri
Published on May 14, 2026

Utility customer satisfaction surveys work when they are triggered at the right moment in the customer journey, ask the right question type for that interaction, and connect responses to the account data that explains why a score came in low. Generic annual surveys miss the billing dispute that happened in March, the service request that took four calls to resolve in July, and the outage in October where customers got no status update for six hours. This guide covers the survey questions that matter at each utility touchpoint, how to choose between CSAT, CES, and NPS, and how to connect your survey program to your CIS so every low score has account context behind it.

Why Utility CSAT Surveys Require Different Questions Than Generic Templates

Most CSAT templates are written for retail, hospitality, or software support. They ask about "the experience" as a single event. Utility customer interactions do not work that way.

A billing interaction is not one event. It starts when the customer receives the bill, continues when they log into the portal to check their usage, escalates when they call to dispute an estimated read, and ends when the adjustment posts. A generic "how satisfied were you?" after the call captures only the last 15 minutes of a process that may have taken three days.

Utility satisfaction surveys need to be designed around the nature of the touchpoint, not a generic template. The questions, scale, and timing differ by stage because the customer's context differs by stage. A customer who just moved into a new service address is evaluating your enrollment process. A customer who just had service restored after a 14-hour outage is evaluating your communication and response speed. Asking both the same question produces data that helps neither.

Your customer information system is the system of record that makes survey data actionable. Without it, a low score is just a number. With it, you can see that the dissatisfied customer is on a tiered rate, had an estimated read in the billing period they are complaining about, and called in twice in the last 90 days. That context turns a score into a correctable problem.

CSAT vs. CES vs. NPS: Which Score Matters Most for Utilities

Are you measuring satisfaction with a specific interaction, the effort required to resolve an issue, or your customers' overall willingness to stay with your utility?

These are three different questions. Mixing the metrics produces data you cannot act on.

| Metric | What It Measures | Best Utility Use Case | Typical Question |
|--------|------------------|-----------------------|------------------|
| CSAT | Satisfaction with a specific interaction | Post-billing, post-service request, post-outage resolution | "How satisfied were you with how your billing issue was handled? (1-5)" |
| CES | Effort required to resolve an issue | Service requests, billing disputes, payment portal usability | "How easy was it to resolve your issue with us today? (1-7)" |
| NPS | Overall loyalty and likelihood to recommend | Annual relationship survey, post-major-upgrade | "How likely are you to recommend our utility to a neighbor? (0-10)" |

For most US utilities, CSAT and CES are the workhorses. They fire after specific events and tell you what broke in the process. NPS is an annual temperature check. Using NPS after every billing cycle inflates the question's meaning and makes it impossible to benchmark against industry data, which is calibrated for annual measurement.

Track what each score means for operations. Utility customer experience metrics: a measurement guide covers the full set of KPIs, including how CSAT, CES, and NPS each connect to operational outcomes like first-contact resolution rate and call handle time.

The 5 Journey Stages That Need Their Own Survey Questions

At which stage are your lowest satisfaction scores concentrated, and does your current survey design let you see that by stage, or only in aggregate?

Every stage of the utility customer journey generates different satisfaction drivers. Grouping all interactions into a single survey metric hides which stages are failing.

The five stages requiring distinct survey designs are service enrollment, billing and payment, service requests and field work, usage monitoring and alerts, and outage response. For a full breakdown of what happens at each stage, the utility customer journey: a digital guide maps each touchpoint and identifies the digital gaps that most commonly drive low scores.

Survey Questions by Journey Stage

Below are the specific questions that perform best at each utility touchpoint. Limit each trigger to two or three questions. More than three questions at a post-interaction trigger reduces completion rates by roughly 40 percent.

Before designing stage-specific questions, every utility survey program should include these baseline design rules:

  • One scored question per trigger (CSAT 1-5, CES 1-7, or NPS 0-10 depending on survey type)
  • One open-text follow-up field for scores below the midpoint, required to understand root cause
  • No leading questions that assume satisfaction ("How great was your experience?" fails this test)
  • A consistent scale across all triggers so scores can be compared across journey stages
  • Survey delivery via the same channel as the interaction (email for billing, SMS for field work)
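These baseline rules are easiest to enforce when each trigger is defined as data that the survey platform validates before sending. A minimal sketch in Python, not tied to any real survey product; every trigger name and field here is hypothetical:

```python
# Hypothetical trigger definitions encoding the baseline design rules.
# All names are illustrative, not from any specific survey platform.
SURVEY_TRIGGERS = {
    "billing_cycle_close": {
        "scale": ("CSAT", 1, 5),   # one scored question per trigger
        "channel": "email",         # deliver via the interaction's channel
        "open_text_below": 3,       # open-text follow-up below the midpoint
        "max_questions": 3,
    },
    "service_order_close": {
        "scale": ("CES", 1, 7),
        "channel": "sms",           # field work -> SMS delivery
        "open_text_below": 4,
        "max_questions": 3,
    },
}

def validate_trigger(cfg: dict) -> bool:
    """Check one trigger definition against the baseline design rules."""
    metric, lo, hi = cfg["scale"]
    midpoint = (lo + hi) // 2
    return (
        metric in {"CSAT", "CES", "NPS"}
        and cfg["max_questions"] <= 3           # 2-3 questions per trigger
        and cfg["open_text_below"] == midpoint  # follow-up below midpoint
    )
```

Keeping the scale inside each definition also makes the consistent-scale rule checkable: any two triggers of the same metric should declare the same range.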

Service Enrollment

  • "How easy was it to start service with us today? (1-5)"
  • "Was the information you needed (billing dates, deposit requirements, portal access) clearly explained? (Yes/No + comment)"

Billing and Payment

  • "How clearly did your bill explain your charges this month? (1-5)"
  • "How easy was it to pay your bill online? (1-7)"
  • "If you contacted us about your bill, was your question resolved in a single interaction? (Yes/No)"

Service Requests and Field Work

  • "How satisfied were you with the response time for your service request? (1-5)"
  • "Was the work completed as described during your initial request? (Yes/No + comment)"
  • "How easy was it to track the status of your request online? (1-7)"

Usage Monitoring and Alerts

  • "Did you receive a notification before an unexpectedly high bill arrived this period? (Yes/No)"
  • "How useful is the usage history in your online account for understanding your charges? (1-5)"

Outage and Service Disruption

  • "How satisfied were you with the communication you received during the recent outage? (1-5)"
  • "Did you have access to a status update without needing to call us? (Yes/No)"
  • "How satisfied were you with the time it took to restore your service? (1-5)"

How to Build a Utility Customer Satisfaction Survey Program

What is your current process when a customer gives you a score of 1 or 2, and does anyone contact that customer within 48 hours?

A survey without a closed-loop response process produces reports, not improvements. Build the program in this sequence:

  1. Define your measurement objective for each trigger: decide whether you are measuring satisfaction (CSAT), effort (CES), or loyalty (NPS) for each event type. Billing disputes call for CES. Post-restoration surveys call for CSAT. Do not use NPS at transaction-level triggers.
  2. Map your trigger events to your CIS workflow: identify the system event that fires the survey, such as invoice generation, service order close, or outage restoration confirmation. Surveys triggered by CIS events have 35 to 45 percent higher completion rates than email blasts sent on a schedule.
  3. Build a short question set per trigger: two to three questions maximum per event. Include one scored question (CSAT or CES scale) and one open-text follow-up for scores below 3. Do not include satisfaction questions unrelated to the trigger event.
  4. Tag every response with account context: connect your survey platform to your CIS so each response carries the customer's service type, billing zone, last interaction type, and account tenure. A 1-star score from a customer with three consecutive estimated reads is a different problem than a 1-star score from a new account.
  5. Establish a 48-hour closed-loop protocol: assign low scores (1-2 on CSAT or 1-3 on CES) to a designated team member for outbound contact within 48 hours. Track resolution rate separately. This is the single metric that predicts whether your survey program improves satisfaction or just measures it.
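Steps 4 and 5 above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's API: the cutoffs follow the text (1-2 on CSAT, 1-3 on CES), and `cis_lookup` stands in for whatever call your CIS exposes for account data.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Low-score cutoffs from the text: 1-2 on CSAT (1-5), 1-3 on CES (1-7).
LOW_SCORE_CUTOFF = {"CSAT": 2, "CES": 3}

@dataclass
class SurveyResponse:
    account_id: str
    metric: str          # "CSAT" or "CES"
    score: int
    received_at: datetime

def tag_response(resp: SurveyResponse, cis_lookup) -> dict:
    """Step 4: attach account context pulled from the CIS at response time."""
    account = cis_lookup(resp.account_id)  # stand-in for a CIS API call
    return {
        "account_id": resp.account_id,
        "metric": resp.metric,
        "score": resp.score,
        "service_type": account["service_type"],
        "billing_zone": account["billing_zone"],
        "tenure_months": account["tenure_months"],
        "last_interaction": account["last_interaction"],
    }

def needs_follow_up(resp: SurveyResponse) -> bool:
    """Step 5: does this score enter the closed-loop queue?"""
    return resp.score <= LOW_SCORE_CUTOFF[resp.metric]

def follow_up_deadline(resp: SurveyResponse) -> datetime:
    """Outbound contact is due within 48 hours of the low score."""
    return resp.received_at + timedelta(hours=48)
```

Tagging at response time, rather than joining later in a reporting tool, is what guarantees the context reflects the account as it was when the customer scored you.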

For platform selection guidance, customer information systems for utilities: a complete guide covers the CIS capabilities that determine how well survey data integrates with account history.

How Your CIS Platform Connects to Survey Data

Customer satisfaction data is most valuable when it is not isolated in a survey tool. A score of 2 on a post-billing survey has different implications depending on whether that customer is on a tiered rate, whether they have an outstanding dispute, whether they called in the same billing cycle, and how long they have been a customer.

Your CIS platform is the source of all of that context. When your survey platform pulls account-level tags from the CIS at the time of survey trigger, every response arrives with the data you need to understand why the score is what it is. Low scores from customers with recent estimated reads point to a meter data problem. Low scores from new customers who just completed enrollment point to an onboarding gap. Low scores clustered in a specific service zone during an outage event point to a field operations response issue.
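Once responses carry CIS tags, clusters like the ones described above fall out of a simple group-by. A minimal sketch with hypothetical field names:

```python
from collections import defaultdict

def low_score_clusters(tagged_responses, key, cutoff=2):
    """Count low scores grouped by one CIS tag (e.g. billing_zone).

    tagged_responses: dicts carrying a numeric "score" plus CIS tags.
    key: which tag to cluster on; cutoff: highest score counted as low.
    """
    buckets = defaultdict(int)
    for resp in tagged_responses:
        if resp["score"] <= cutoff:
            buckets[resp[key]] += 1
    return dict(buckets)
```

Running the same function with `key="billing_zone"`, `key="service_type"`, or `key="last_interaction"` gives you the three diagnostic views in the paragraph above from one dataset.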

SMART360 connects to survey and notification platforms through 25+ pre-built integrations, so survey triggers fire from CIS events rather than from scheduled email batches. The result is higher completion rates, contextually tagged responses, and a closed-loop workflow that feeds low scores back to the account record for staff follow-up. Utilities on SMART360 report a 95% customer retention rate, driven in part by the closed-loop CX process that flags at-risk accounts before they escalate to formal complaints.

If you are reviewing whether your current CIS can support trigger-based survey workflows, how to evaluate and choose a CIS system for your utility covers the integration and reporting capabilities to require from any platform.

Frequently Asked Questions

How often should a utility send customer satisfaction surveys?

Trigger-based surveys should fire after specific events: billing cycles, service request completion, and outage restoration. Annual relationship NPS surveys should be sent once per year. Sending NPS quarterly or after every interaction dilutes the metric and makes benchmarking against industry data unreliable.

What is a good CSAT score for a US utility?

Industry benchmarks vary by utility type, but US utilities typically target a CSAT score of 75 to 80 out of 100 (on a 5-point scale converted to 100). Scores below 70 indicate systemic issues in the measured touchpoint. Scores above 85 are achievable with digital self-service at billing and service request stages where most low scores originate.
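Conversion conventions vary by utility; one common approach is the top-two-box method, where the 100-point CSAT is the share of responses scoring 4 or 5. A quick illustration:

```python
def csat_top_two_box(scores):
    """Percent of responses scoring 4 or 5 on a 1-5 CSAT scale."""
    satisfied = sum(1 for s in scores if s >= 4)
    return 100 * satisfied / len(scores)

# Example: 6 of 8 responses are a 4 or 5, giving a CSAT of 75,
# the lower end of the typical 75-80 target band.
print(csat_top_two_box([5, 4, 4, 3, 5, 2, 4, 5]))  # → 75.0
```

Whichever convention you pick, apply it consistently across journey stages so scores remain comparable.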

How many questions should a utility customer survey contain?

Two to three questions per trigger event is the standard for post-interaction utility surveys. One scored question (CSAT 1-5 or CES 1-7), one follow-up open text for low scores, and optionally one binary question (Yes/No) about a specific process. Surveys exceeding five questions at transaction-level triggers see completion rates drop below 15 percent.

Can utility CSAT data connect to billing and service records?

Yes, when your survey platform integrates with your CIS. Survey triggers fire from CIS events (invoice generation, service order close), and each response is tagged with account metadata. This lets you segment low scores by rate type, service zone, account tenure, and interaction history, turning aggregate satisfaction data into actionable operational intelligence.

Once your survey program is running, improving the self-service touchpoints that generate the most low scores is the next step. CIS billing software features: a utility checklist covers the billing portal and payment capabilities that drive satisfaction scores at the highest-volume touchpoint.


Ready to see how SMART360 fits your utility?

Book a personalized demo with the SMART360 team.

Key Takeaways
  • Trigger surveys at journey stages: after billing, service requests, and outage restoration.
  • CES measures interaction effort, NPS measures loyalty, CSAT measures specific satisfaction.
  • Limit to 2-3 questions per trigger for best completion rates.
  • CIS tagging links responses to account type, zone, and interaction history.
  • Acting on low scores within 48 hours drives more improvement than frequency.
