I learned an important lesson about transaction surveys (that is, surveys linked to tickets) when I was a director of customer service. We had restructured the service desk, implemented incident management and service level management, and were scoring 4.8 out of 5.0 on the customer satisfaction surveys we sent out after we closed a ticket. We looked at this score as a barometer of our success. However, although we had improved our service immensely, we found we were not as good as we thought.
All escalations were split between the director of operations and me. We were being called out of meetings at least twice a week by the CIO’s administrative assistant. One time, the CIO called us to his office to discuss the latest escalation – a business executive called to complain about support. Each time, we assured the CIO that this was an exception, that we were doing a great job, and that we were scoring 4.8 out of 5.0 on our customer satisfaction surveys! I guess we said that one time too many, because he finally told us not to mention those surveys again. He said that they were worthless and out of touch with reality. If we were doing so well, we wouldn’t be experiencing multiple “exceptions” a week, would we?
We left the CIO’s office determined to prove him wrong. We were going to show him that our teams really were doing a great job. We decided to start with our customers. We hired a temp to come in and call about half of our internal customers and document their answers to this one question: What do you expect when you call the service desk?
The answers were consistent. Our customers (users) expected our teams to:
- Understand the business issue, beyond the technology side;
- Have a proper sense of urgency;
- Keep them informed when things changed; if we told them it would be resolved by a certain time and that changed, they wanted to know; and
- Keep them abreast of the status and let them know if it changed hands (i.e., was escalated).
When we saw these answers, we immediately recognized what was wrong with our surveys and why they weren’t reflecting reality. We were not evaluating any of these expectations and we certainly weren’t asking our customers how we were doing in these areas. We were asking questions about things that we (IT) thought were important, things like:
- Was the customer service representative knowledgeable?
- Was the customer service representative professional?
As a result of what we learned, we not only changed our surveys, we also changed how we trained and prepared our teams. We focused on business knowledge and appropriate prioritization based on business impact. We gave customers the ability to check their status online and built updates into our processes. If a ticket was escalated to another team, an email went out to the customer. If the resolution time changed, we called them. Initially, our survey scores dipped; however, the feedback was much more useful and helped us to continually improve. From this, I learned a few important lessons.
- Customer satisfaction surveys should be used for continual service improvement. If the questions are not in alignment with expectations, they are not giving us information that will help us to improve.
- Surveys are only one aspect of customer perception. We need to look at the whole picture.
  - Do customers bypass the process and call someone directly? If so, why?
  - Do we know what their expectations are? If not, ask!
  - What trends are we seeing in our reports (contact volume, reopened tickets, abandoned rate, how-to calls, top ten call types, etc.)?
  - What comments are customers making? If they are saying things to the analysts on the phone or at meetings, we need to document it.
  - Are we giving our customers feedback mechanisms outside of the customer satisfaction survey?
- We need to talk to our potential customer base, not just those who are currently using our services. What if someone had a bad experience and won’t call anymore? Are they going to their peers?
- We in IT should never assume that we know what our customers want. We need to find out from them. One quick way to determine whether your current questions are the right questions is to let customers rate how important each one is to them. If customers scored a question high for satisfaction but it isn't very important to them, then maybe it doesn't need to be on the survey (base your decision on the average level of importance, not just a few customers' answers). Here's an example:
| Question | Level of Satisfaction | How Important Is This to You? |
|---|---|---|
| Did the support representative resolve your issue on first contact? | 5 | 4 |
| Was the support representative knowledgeable? | 5 | 3 |
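To make the "average importance" decision rule concrete, here is a minimal sketch in Python. The response data and the importance threshold are made up for illustration; in practice you would pull real survey responses and choose a cutoff that fits your organization.

```python
# Hypothetical sketch: deciding which survey questions to keep based on
# average importance ratings. All data below is invented for illustration.

# Each question maps to a list of (satisfaction, importance) responses,
# both on a 1-5 scale.
responses = {
    "Resolved on first contact?": [(5, 4), (4, 5), (5, 4)],
    "Was the representative knowledgeable?": [(5, 3), (5, 2), (4, 3)],
}

IMPORTANCE_THRESHOLD = 3.5  # assumed cutoff; tune to your organization

for question, answers in responses.items():
    # Average only the importance scores; satisfaction alone does not
    # tell us whether the question belongs on the survey.
    avg_importance = sum(imp for _, imp in answers) / len(answers)
    verdict = "keep" if avg_importance >= IMPORTANCE_THRESHOLD else "consider dropping"
    print(f"{question}: avg importance {avg_importance:.1f} -> {verdict}")
```

The point of the sketch is the decision rule itself: a question can score a perfect 5 for satisfaction and still be a candidate for removal if customers, on average, say it matters little to them.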
Customer satisfaction surveys are always a “hot topic” in the classes I teach. Please share your challenges, or even your triumphs. What works for you? What lessons have you learned?