This is a story about a man who wanted to get an oil change at his Subaru dealership. Really, though, it’s a story about what happens when companies think that AI is a better way to interact with customers than simply having real humans do things like send emails and text messages.
We’ll call the man Nick, which is not his real name, but that part isn’t important. What is important is that he scheduled an appointment for an oil change with his local dealership.
As the appointment approached, Nick received a perfectly normal reminder from someone named Cameron Rowe. The messages were friendly and helpful. They even included the dealership’s full name, a link to the address, their hours, and the details of the service.
But then Nick got another message confirming his appointment, even though he’d already been to the dealership and had the oil change. The message seemed weird, so Nick asked a basic question: “Is Cameron Rowe a person on the team?” Then the responses got… well, keep reading.
The “assistant” thanked him for asking. Then it assured him someone would “look into this and get back to him with the necessary details.” Then it suggested scheduling a call. And then it repeated itself. Word-for-word. Multiple times.

Just to be sure we’re clear, the text message, which previously had been coming from Cameron Rowe, said that the dealership was looking into the question of whether Cameron Rowe was a real person. It’s like some weird AI software loop, but with robots that don’t know they’re robots.
Eventually, after asking more than once, Nick tried the obvious question:
“Are you a chatbot?”
The assistant replied:
“I am the dealership’s virtual assistant…”
That’s technically honest. But here’s where it gets ridiculous: the dealership didn’t just give its virtual assistant a first name. They gave it a last name. And a business title. And an email signature. And—if the messages are to be believed—a persona committed enough to text him more than a dozen times.
The dealership quite literally built an AI bot to pose as a person.
The thing is, AI chatbots may be many things, but they are not people. And they should not have two names.
Nick eventually connected with a real person—a consultant named Antonio. He was, thankfully, an actual human being. He confirmed it when Nick asked. Twice.
And then Antonio admitted what was already obvious to anyone who gave it more than a moment’s thought: Cameron Rowe was not real. He was an “artificial assistant designed to help set appointments and generate customer incentives.”
To Antonio’s credit, he didn’t hide from it. But he also revealed the underlying problem in one short sentence:
“Almost all major dealerships use some sort of AI to conduct business.”
That might be true. But the problem isn’t that dealerships are using AI. It’s that they’re using it without telling people they’re using it—while also designing it to feel as human as possible.
Maybe it’s just me, but it seems incredibly strange and dishonest that this AI chatbot was given a name, a personality, and a fake identity, without ever disclosing that none of it was real. I get that companies aren’t using AI because it delights customers. They are doing it because it allows them to handle more conversations, more cheaply, without hiring more people. There’s nothing inherently wrong with efficiency. But somewhere along the line, a lot of businesses seem to have learned the wrong lesson.
It seems like companies think that if people don’t want to talk to robots, the solution is just to make people think they’re talking to a human. Give the robots last names and job titles, and make them very friendly.
Except, no one wants that. They just want to know who—or what—they’re talking to. If you’re going to make me talk to a robot, it should be absolutely clear that I’m talking to a robot. Otherwise, you’re not being honest.
And here’s the part companies seem to forget: the moment customers catch you not being honest, they’ll assume you’re not being honest somewhere else—somewhere that matters.
That’s the part of this story that should make every business reconsider how they’re rolling out AI to customers. Trust, it turns out, is your most valuable asset.
If the dealership’s first message had simply said:
“This is our automated assistant. I can help schedule appointments or pass basic information to our team,” none of this would have happened. Nick wouldn’t have been annoyed. He wouldn’t have felt misled. He wouldn’t have spent days trying to figure out whether Cameron with a last name was a human being.
Instead, he would have gotten his oil change, the dealership would have saved time, and everyone would have moved on with their day. But because the AI attempted to pass as human, it created the exact opposite outcome: confusion and broken trust.
Here’s the simple lesson: If your customer asks whether they’re talking to a human, your AI strategy has already failed. Just tell people the truth—that’s what they really want. What they don’t want is a chatbot with a last name.
The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.
Jason Aten