AI can't do your job

Brian Levine

Co-Founder, CEO

Expected vs Actual

A few months ago, I heard a fellow Support leader tell a crowd of Support professionals that the new breed of LLM-based chatbots could vastly increase the number of customer requests handled per day without any assistance from Support team members. In the next breath they assured us that Support team members' jobs were not at risk. That contradiction has been echoing in my head ever since. My co-founder sometimes wisely says that every advancement in technology reveals who we think doesn't matter, or what work we think doesn't need to be done.

The truth is that we don't have all of the data available to say whether or not this is a good decision for a company. Customer satisfaction and churn haven't been weighed against budgets and profit margins. Not publicly, at least. But none of that is stopping everyone from plunging full steam ahead into AI, if only not to be left behind.

But... the AI era is just getting started. We're only about a year into this. Nothing is calcified yet. What if we took the time to think creatively, work with Support teams, and integrate AI with care and intent? What if that took us somewhere that's better for our staff and our customers?

The race is on

It's been a little over a year since the public release of ChatGPT and everyone is looking for ways to use this profoundly new and exciting technology. This is especially true in the world of Customer Support, where text is often the only visible output of the team: we write emails and chat responses to customers, post replies on X and Threads and LinkedIn and Facebook, and write user guides and instructions for our products and services. When trained on enough of the right data, AI (and large language models, LLMs, specifically) seems like it can do all of that work for us.

Along with this exciting story of wonder and magic, though, is a tinge of fear: You don't want to be left behind, do you?

Over the past year we've all seen these words somewhere, either implicitly or explicitly. You either implement these tools or you watch your competitors implement them and run past you. It's an all-out race just to keep up. But at the heart of it, nobody can say exactly what they're keeping up with.

When I heard that contradictory statement about AI's benefits months ago, the technology hadn't been around long enough to be tested at more than a few companies. It still hasn't, so there's no way to know how many tickets a bot could handle. There's also no way to know whether the bot could successfully handle those requests without human assistance. But most worrying, and most obvious, of all: it was clear that companies would use this technology specifically to reduce the number of Support team members they needed to keep on staff. And sure enough, that is exactly what started to happen a few months later, as wave after wave of people were laid off and LLM-based bots started to take over the job.

We don't yet know the impact of these changes on the bottom line or on customer satisfaction (though in a couple of cases we can see that this isn't always going well for the bots, or for the companies that have used them in place of people). However, we don't need data to know that we need to hurry up and get on board before this train leaves the station without us. We'll figure out where we're going once we get on board. It'll all work out in the end, right?

Right?

Technology keeps advancing

When we evaluate new tools and new technology for almost any part of our business, we usually think about pros and cons and we do a cost-benefit analysis (no matter how rudimentary) and we make the best decisions we can with the information we have available. But some innovations are so new and different that we short-circuit our decision analysis and rush into the void out of excitement and fear and wonder and dread. It's most often the people whose jobs we least value or least understand who have to pay for the damages.

The train is definitely leaving the station. It always does. We can't turn the clock back and stop AI advancement any more than we could any other advancement - the silicon chip, transistors, electricity, the printing press. Progress and innovation are inevitable, and I'm not here to argue that they should be stopped. I'm here to argue that we should make decisions with intent. We should know what outcomes we're trying to achieve and what outcomes we're trying to avoid. We should also be aware of how our decisions affect the people around us, throughout our industry - our customers, our co-workers, our communities, and yeah, even our shareholders.

I've talked at length recently about how the work of Support is misunderstood and devalued. When we misunderstand the work of Support to be primarily the writing of text, it's easy to see how an algorithm that writes text could be substituted for a person with little downside. Software, especially off-the-shelf software, is usually cheaper to maintain and has less downtime than a person. If, on top of that misconception, we believe the algorithm is more reliable in its output, then the decision becomes even easier. And if I then tell you that all of your competitors are using this technology, then you'd be foolish not to replace at least some of your team with this software.

But that isn't the whole story. Support teams do more than regurgitate known facts in the form of written text in response to customer inquiries. And like the textile workers of 19th century England, I'm begging people to see the harm that this misunderstanding is causing. Real people, many of whom we know personally, are affected by their company's rushed decisions. And many more real people are affected by those decisions when they contact support for help with a problem and are given useless, if not incorrect and harmful, answers to their questions.

We can't stop the train from moving. But we also don't need to rush to be the first ones on the train before we know where it's going. We can look at a schedule, make a plan, and get on the next train, or get on at another stop.

AI gives us new options

There are a lot of clever uses for this new technology right in front of us. A bunch of new projects and companies are coming out that use LLMs in innovative ways to help Support professionals do their jobs better: summarizing a long email thread, automatically tagging conversations based on their content, routing tickets to the best teams, reviewing responses for team improvement, and more. These are just a sample of things I've seen over the past year that have impressed me. The thing they all have in common is that they add to the Support team's capabilities rather than replacing the team.
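To make the "augment, don't replace" idea concrete, here's a minimal, hypothetical sketch of the tag-and-route pattern described above. In a real system the `tag_ticket` step would call an LLM to classify the conversation; here a trivial keyword stub stands in so the routing logic is runnable. The tag names and team names are illustrative assumptions, not anyone's actual taxonomy — and the routed ticket still lands with a human.

```python
# Sketch of LLM-assisted ticket routing. The tagging step is where an
# LLM would plug in; a keyword stub stands in for it here.

# Illustrative mapping from conversation tag to the team best suited
# to handle it. A human on that team still owns the response.
TEAM_FOR_TAG = {
    "billing": "Payments",
    "bug": "Product Support",
    "how-to": "Customer Success",
}

def tag_ticket(text: str) -> str:
    """Stand-in for an LLM classifier: tag a ticket by simple keywords."""
    lowered = text.lower()
    if "invoice" in lowered or "charged" in lowered:
        return "billing"
    if "error" in lowered or "crash" in lowered:
        return "bug"
    return "how-to"

def route(text: str) -> str:
    """Route a ticket to the team responsible for its tag."""
    return TEAM_FOR_TAG[tag_ticket(text)]
```

The point of the pattern is the shape, not the stub: the model handles the high-volume classification work, while the team keeps the judgment calls.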

There are a lot of options for adding AI, and LLMs specifically, to your Support workflow. But not all options are equal. Some are more useful than others. Some are more harmful than others. In general, if you want to know what helps and what hurts you can ask the people doing the work day in and day out. They have the best sense of what the team needs, what the customer needs, and what the company needs.

Similarly, there are tons of new ideas that we, as a community, have not thought of yet. The best ideas will likely come from the people doing the work, looking for ways to improve. Unfortunately, they aren't typically the ones we look to for business advice when new technology comes to market.

To decide how you want to use new or existing technology on your team, start with the problem you're trying to solve. Never start with the tech you want to use or the product you want to buy. You will only solve your problems well if you go through the process of defining them for yourself from the perspective of your product or service, your team, and your customers. And if you decide that the problem is that you pay humans to do human relationship work with other humans, then yes, you'll probably replace those humans with whatever technology is available.

The future of Support

Technological advancements can be wonderful. I'm not suggesting we should give up our easy access to books in order to keep our scribes employed. Like the Luddites of 19th century England, I am instead suggesting that we not use new technology to create a worse product simply because it's cheaper and allows us to lower payroll costs. That's certainly a choice a company can make, but every company should make that choice with full knowledge of the outcomes, and not as an industry-wide race to the bottom.

When you hear someone tell you that you need to see how AI can be integrated into your team's workflow, listen carefully. Are they telling you that the problems you're having are now easily solved by this new tool? Or are they telling you that this new tool is going to change the way business is done and you don't want to be left in the dust? If it's the second one, take a step back. That's hype talking. And we don't have time for hype; we have too much work to do.

I don't know what specific problems you have on your team (though I can probably take a decent guess). Whatever they are, the solution is not to remove Support teams from the work of customer support and replace them with AI, but to let them do deeper work with the help of new technology. Support teams don't need to be retrained as engineers in the face of a disappearing Support industry. That team's skills are still needed, and they go well beyond email writing. The Support team maintains a holistic knowledge of your product and how it is used in the larger ecosystem by real people. They research and report on changes in the product and changes in the world in which that product operates. They are the glue connecting the company to the real world. By understanding the full spectrum of Support's work, we can find ways to augment and empower that work with the aid of new technology, AI and otherwise.