Data is a curse

Brian Levine

Co-Founder, CEO

Nick Cannariato

Co-Founder, COO

Expected vs Actual

Customer support teams, like most teams now, rely on data for decision making. We move forward based on what we think the data shows us, and our understanding of our business and our customers is driven by what it tells us. We believe data can tell us an unbiased story about our products, our businesses, our teams, our employees, and our customers.

We look at first reply times, ticket resolution times, handle times, and wait times to see how well we are helping people. We analyze inbound volumes and outbound volumes to measure the health of a support queue. And with great reverence we collect customer satisfaction scores and net promoter scores to measure the quality of the support we provide. All this data will surely tell us what we need to do to make the customers happy.

It's true that data can be useful for many things. We can use inbound volume and outbound volume to know when we need to hire more staff. It can help us schedule teams to be available when customers are most often opening support requests.

And yet! And yet we view support data in isolation. On its own, it gives us a hazy view of reality at best and a misleading funhouse mirror view of it at worst. The data cannot tell us how well we are doing at the overall job of customer support. Ultimately, it is not support's responsibility to make the customers happy: it is the company's responsibility.

Customers email a company to solve a problem they are having. It might be a bug in an app, a package lost in transit, or a refund for a product that wasn't what they expected. They aren't emailing support so that support can solve their problem; they want the company to solve it. By measuring customer satisfaction with the support experience and fawning over charts of reply times and handle times, we ultimately focus on the wrong things. The data leads us astray. Those reply times and handle times are a small portion of a larger lifecycle. Other teams are involved in the issue - product teams who design the app, engineering teams who develop it, logistics teams who ship physical products around the world, finance teams who forecast and track revenue, marketing and sales teams who get the product into people's hands.

It is said that you can only change what you can measure. Or, perhaps more memorably, "If you can't measure it, you can't manage it." (That quote is ironically misattributed to W. Edwards Deming, who actually said the opposite.) People take this clever-sounding advice and measure the number of hours their employees work and the speed with which they answer emails, phone calls, or Twitter threads. But if we only measure support teams in isolation and fixate on driving those numbers toward our weekly/monthly/quarterly/annual goals, we can only get faster at the things support teams do in isolation. These decisions don't necessarily make the product better or improve the customer's experience. They might, though. We don't know! We don't know because the data we most often look at cannot tell us. Some of those decisions, however, definitely make people's lives worse. People are burned out by metrics. People are fired for not meeting a threshold for calls answered or tickets resolved, or they work an extra 10, 15, even 20 hours a week to make up those numbers.

This all sounds like I am against collecting data. Like I'm suggesting that we stop looking at numbers and let people do whatever they want. And while I (Brian) think that might be a good idea and a worthwhile experiment, it is not what I'm suggesting. I am instead suggesting that we look at data within a larger context. We need to look at data as an organization and as a company. Support ticket volume goes down when products have fewer issues that need fixing. Reducing the number of those issues happens fastest when teams work together to find problems, build a roadmap for addressing them, and communicate that to customers. It's a team effort and our data should reflect that.

I say that data is a curse because we look at segments of data and make decisions that "move the needle" but don't make a meaningful impact on the business - revenue, retention, expansion, etc. Instead, we should look at support data (what friction customers are encountering) alongside other data from across the company - product data (what are we building and why), engineering data (how are we building it and how quickly), marketing and sales data (who is interested and what they want). It's only by looking at a holistic set of data in context that we can see what makes customers happy or frustrated and what leads them to continue using a product, buying from a vendor, or recommending products or services to other people. Support data in isolation (and really any team's data in isolation) can only give a sliver of insight, and the likelihood of making good decisions based on it is no better than chance.

I hear you all nodding your heads while saying, "Yes but..."

  • But how can we measure the performance of the support team?
  • But how can we know who is doing a great job and who needs coaching?
  • But how do we know who would make a good coach?
  • But how do we know when to promote people?
  • But how do we know when to fire people?

The answers to these aren't easy. If you are looking for data to tell you how your team is doing in isolation, you need to talk to your team more. Maybe the support agent answering 50% as many tickets per week as the team average is handling more difficult tickets. Or maybe they're just more thorough than other people. And maybe that's totally fine, but you'll only know by talking to the people on the team and working with them, rather than looking at a ream of numbers every day or week.

Maybe the bottleneck to happier customers isn't the speed with which each individual team member answers tickets but how well the teams work together to solve customer problems. If these are your "Yes but..." issues, then you aren't measuring how well your teams are working together, and you don't know with certainty where the bottleneck is. Because you're collecting data on issue handle times and ticket responses per week, that's where you feel you can effect change. The answer is to dig deeper: collect more data from more teams across the company and look at it all together. And acknowledge that some decisions won't become any less arbitrary just because you use whatever data you happen to have available.

To summarize all of this: data is only as useful as the story it tells us. When we look at small bits of it at a time, we only get part of that story. If we want to shift our thinking from "measurable" to "impactful", we have to rethink our data strategy and stop looking at each team's performance in tightly bounded boxes. We also have to acknowledge that the data we collect can only ever tell part of a story. When it comes to people - and especially how we manage people and teams - we need to understand the work and the individuals doing that work. People are not machines and should not be expected to perform within precise tolerances. Every time I see a dashboard showing ticket resolution times for the support team, or KPIs and OKRs asking support to reduce handle times or response times, I think of all the support professionals struggling to meet those demands without knowing how, or whether, they are making a positive impact on the business or the customer.