I don't want to keep talking about AI. It's everywhere and everyone has an opinion. But I have some strong feelings about support metrics and this is starting to cross a line.
A lot of companies are now measuring the rate of AI implementation in their organizations. With mid-year performance review season upon us, many of them are using AI usage rates as individual performance metrics.
Please picture me grimacing as I wrote that.
Can you imagine? Using the amount of time spent using a tool as a gauge of how well a person is doing their job? When their job title is not "Tool User"?
You don't have to imagine it, because you've seen it or know someone who has been graded that way. Or your performance metrics now include "AI usage" as a factor. Or you’re using that metric to review your people.
Look, I can understand using AI implementation as a company-wide, or even team-wide, metric… for some things. When the CEO or CFO or team lead wants to see if they're getting their money's worth out of a software investment they made, it's good to see whether the team is using that software. Low usage numbers usually mean that the software isn't that useful, or it's a niche tool.
But at an individual level, why does that matter? It tells me that some of these teams don't know what success actually looks like, so they're looking for proxies. That's fine; we all do that. It's hard to know whether you've delighted your customers; there’s no objective Delight Scale to measure against, so we’re all looking for the next-best thing.
In Support, we're no strangers to using proxies for success metrics. Customer satisfaction scores and net promoter scores are the closest things we get to "customer happiness" ratings, and even those are deeply flawed. We have to be careful about what insights we're taking away from our proxy data, though. (We've written about this before, and I recommend catching up on the backstory.) Some proxies are more proximate than others.
Take first reply times: We assume they're a good indicator of how well the team is doing, or how well each support agent is doing. But if you tell the team that they're judged on first reply times, they'll find ways to make first reply times go down, and those ways may not actually support customers. They may even come at the expense of the things you should care about, like customer satisfaction or issues resolved.
Knowing that, why are we looking at AI implementation as a personal success metric in Customer Support? What do we assume when we make it a proxy for the things that we care about? For one, we assume that using AI tools correlates perfectly with customer satisfaction. Are we all comfortable with that? Really?
I'm picking on AI implementation because it seems absurd to use it as a factor in performance reviews for support teams, but I want you to scrutinize all your team's metrics. Which ones are measuring things you care about? Which ones are proxies? And which ones are proxies of proxies that aren't telling you anything at all?
I don't have the answers for you. That's an exercise left for the reader, as they say. So think about it.

Brian Levine