Measuring Customer Satisfaction – Creating a Reliable Survey
Abstract
Every for-profit, for-cause and even governmental organization cares about customer satisfaction. When someone wants to measure customer satisfaction, the natural instrument is a survey, but we all know that surveys can be problematic (more about this later). This case study documents a process for measuring customer satisfaction, based on an implementation project with a financial-services Client. The survey we created was robust, had a high response rate, and provided valuable insight to the management team. Best of all, the CEO is the first person to receive the data and is responsible for follow-up; he did not delegate this to the manager of client service. This sent a powerful message to clients and employees that client satisfaction starts at the top.
Background
Our Client is a mid-size financial institution that prides itself on client service. They feel they are differentiated in this aspect and charge a premium for this service. However, this high opinion of their service was based entirely on anecdotal feedback from clients. The CEO wondered whether he heard only the good news and whether they were as good as they thought.
They originally contacted Supply Velocity to create an annual survey that would go out to all clients. We discussed what they wanted to measure and realized that this instrument would not be effective.
What is wrong with surveys
The “annual survey” has many problems; some are rather obvious, while others require an understanding of survey theory. The obvious problems include measurement that reflects only a relatively short period prior to the survey, typically low response rates, and mindless or blank responses caused by the number or type of questions. Most importantly, the survey results are usually looked at, discussed in management meetings and then set aside, with no definable actions coming from the data. (We would argue that this is probably for the best anyway, because the data is often unreliable!)
The less obvious problems relate to two concepts that are critical when designing surveys: measurement error and non-response bias. Measurement error means that different people with the same opinion could answer differently. This can occur because the question is vague or the scale is not clear. (Have you ever been confused by what “above average” means?) Non-response bias is the possibility that the people who don’t respond feel differently (are biased negatively or positively) than the people who did respond. With response rates of less than 10%, the company sending out a survey should probably worry about how the other 90% feel.
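To make the non-response concern concrete, here is a minimal sketch (with hypothetical numbers) of how far the true average could drift from the reported one when only 10% of customers reply:

```python
# Illustration (hypothetical numbers) of non-response bias:
# if only 10% of customers reply, the other 90% could hold
# opinions anywhere on the scale.
def possible_true_means(responder_mean, response_rate, scale_min, scale_max):
    """Bounds on the true mean if non-responders could fall
    anywhere between scale_min and scale_max."""
    non_response = 1 - response_rate
    low = responder_mean * response_rate + scale_min * non_response
    high = responder_mean * response_rate + scale_max * non_response
    return low, high

# Responders average 4.5 on a 1-5 scale, but only 10% responded
low, high = possible_true_means(4.5, 0.10, 1, 5)
print(round(low, 2), round(high, 2))  # 1.35 4.95
```

With a 70% response rate (which this process achieved), the same arithmetic pins the true mean in a far narrower band, which is the whole point of chasing responses.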
We attempted to solve all of these problems using the process described in the next section.
Creating a robust and easy-to-complete survey
- Surveying should be a continuous process
- Determine the high level question you are trying to answer
- Less is more when it comes to surveys
- Carefully word the questions
- Define the scale in very specific terms
- Act on the data
Monthly phone surveys
When implementing a customer satisfaction survey, we feel the best method is to survey continuously. This requires surveys to go out to customers every month. You can differentiate customers based on size and survey your “A” customers quarterly, “B” customers twice a year and “C” customers once a year. Or you can divide your entire customer base into 12 equal groups so that every customer is surveyed once a year. We recommend using a phone-survey service. The internet is revolutionary, but for surveys it is too easy to ignore. Our phone-survey partner has representatives who call everyone on the customer list, usually leaving one voicemail and then trying a defined number of times to make contact. (Another benefit of this process is that you may discover a key contact has left the company, or that you don’t have their correct phone number. It also requires you to think carefully about who at your customer should answer the survey: the owner, CEO, CFO, et cetera.) Our financial-institution Client decided to survey all of their clients once a year.
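The 12-group rotation described above can be sketched as follows (hypothetical client names; a minimal illustration, not our phone-survey partner’s actual system):

```python
# Split the full client list into 12 cohorts so each client is
# surveyed once a year, one cohort per month.
def monthly_cohorts(clients, months=12):
    """Assign clients round-robin into `months` survey groups."""
    cohorts = [[] for _ in range(months)]
    for i, client in enumerate(clients):
        cohorts[i % months].append(client)
    return cohorts

clients = [f"Client {n}" for n in range(1, 25)]  # 24 hypothetical clients
cohorts = monthly_cohorts(clients)
print(len(cohorts), len(cohorts[0]))  # 12 2
```

The same structure handles the A/B/C variant: run “A” clients through a 3-month rotation, “B” clients through 6, and “C” clients through 12.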
What question are you trying to answer?
So often companies start the survey process by throwing out lots of questions they want answered. It is important to first think about the top-level question. If there was a single question you wanted answered by your customers, what would it be?
Our Client brainstormed the following 10 questions:
- Do clients think we are a great financial institution?
- Are clients thrilled with our service?
- Are we better than competitors?
- Are we as good as we think we are?
- What can we do better (blind spots)?
- Can we elevate the client experience?
- Are we like Nordstrom?
- Are we worth a premium?
- Is it the people or the institution?
- Do we have a high level of quality in our service?
To simplify the next step the team voted this list down to two questions:
- What can we do better (blind spots)?
- Are we worth a premium?
We felt that the “blind spots” would come out in the process, so we focused on “are we worth a premium?” (Remember that this may not be applicable to your firm. Our Client prided itself on client service and being a “high touch” service provider.)
Carefully word the questions (and less is more)
If your customers are going to be contacted by phone and asked survey questions, the phone call had better be fast. Therefore, we set the maximum number of questions at five.
The first step was to create questions or statements that helped us define what a premium is worth. The team came up with the following definitions of premium financial services:
- A trusted advisor
- High quality service
- Few errors
- Timely response for info/service, availability/access to people/team, proactive
- Resolve problems quickly and painlessly
- Clients feel special
- Total experience is wonderful
These statements turned into the five questions on the survey:
- Rate your primary contact
- Rate your experience with our firm
- How many errors have you experienced over the last year?
- If we made an error, how have we done resolving the problem?
- What can we do better?
Define the scale in very specific terms
One reason that firms get inconclusive results from surveys is the vague definition of the scale (the words associated with 1 through 5). Often 1 = poor, 2 = below average, 3 = average, 4 = above average and 5 = outstanding. This type of scale creates tremendous opportunities for measurement error. As asked earlier, “what exactly does above average mean?”
The team took two days to determine a very specific scale for our four questions. Below is the result. As you observe the scales, you will quickly notice the care we took to be as specific as possible. We used “below average” as our worst response, but made the others real words that people use to describe relationships with their financial-services provider. Instead of “outstanding” we used the words “trusted advisor”, “strategic partnership”, “zero errors”, and “fast and painless”. Note that we ended up using a four-point scale (versus the usual five-point scale). This was because we felt the words that would describe a 3 and a 4 were too similar and could confuse clients. Keeping the words clear and specific was more important than using the symmetrical five-point scale.
What describes your primary contact at our firm?
- Below average
- No different than others
- Helpful
- A trusted advisor
How would you describe your client experience?
- Below average
- No different than others
- Helpful
- Strategic partnership
How many errors have you experienced over the last year?
- Three or more
- Two
- One
- None
If you had an error, how have we done resolving it?
- It is painful
- Some hassles, could do better
- Acceptable
- Fast and painless
What can we do better? (This question was open-ended and didn’t have a scale.)
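For illustration, the four scaled questions and their wordings can be encoded as a simple lookup that converts a worded answer into a 1–4 score (a hypothetical sketch; the question keys are our own shorthand, not part of the survey):

```python
# The four-point scales from the survey, worst (score 1) to best (score 4).
scales = {
    "primary_contact": ["Below average", "No different than others",
                        "Helpful", "A trusted advisor"],
    "experience":      ["Below average", "No different than others",
                        "Helpful", "Strategic partnership"],
    "errors":          ["Three or more", "Two", "One", "None"],
    "resolution":      ["It is painful", "Some hassles, could do better",
                        "Acceptable", "Fast and painless"],
}

def score(question, answer):
    """Convert a worded answer to its 1-4 numeric score."""
    return scales[question].index(answer) + 1

print(score("errors", "None"))  # 4
```

Storing the exact wordings alongside the scores keeps later reporting honest: a “4” on the experience question means the client said “strategic partnership,” not a vague “outstanding.”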
Act on the data
Each month 1/12 of this firm’s clients receive a call from the phone-survey provider. Approximately 70% of their clients respond to the survey. The results go directly to the CEO. In addition, if any client answers any question with a 1 or 2, that client goes on a “priority” list for the CEO to contact immediately and help resolve a problem or prevent that client from going to a competitor. The results are graphed and shared with all employees, so negative trends, if they occur, can be acted on quickly.
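The follow-up rule above (any answer of 1 or 2 puts a client on the CEO’s priority list) can be sketched as follows, using hypothetical client names and response data:

```python
# Flag any client who answered 1 or 2 on any scaled question
# for immediate CEO follow-up.
def priority_clients(responses, threshold=2):
    """responses: {client: {question: score}} on the 1-4 scales."""
    flagged = []
    for client, answers in responses.items():
        if any(s <= threshold for s in answers.values()):
            flagged.append(client)
    return flagged

responses = {
    "Acme Corp": {"primary_contact": 4, "experience": 4,
                  "errors": 4, "resolution": 4},
    "Beta LLC":  {"primary_contact": 2, "experience": 3,
                  "errors": 3, "resolution": 3},
}
print(priority_clients(responses))  # ['Beta LLC']
```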

Learn more about our Business Process Improvement Services.