ReportWire

Tag: Customer Support

  • A Customer Just Wanted an Oil Change. Then an AI Bot Made Everything Weird

    This is a story about a man who wanted to get an oil change at his Subaru dealership. Really, though, it’s a story about what happens when companies think that AI is a better way to interact with customers than simply having real humans do things like send emails and text messages.

    We’ll call the man Nick, which is not his real name, but that part isn’t important. What is important is that he scheduled an appointment for an oil change with his local dealership.

    As the appointment approached, Nick received perfectly normal reminders from someone named Cameron Rowe. The messages were friendly and helpful. They even included the dealership’s full name, a link to the address, its hours, and the details of the service.

    But then Nick got another message confirming his appointment, even though he’d already been to the dealership and had the oil change. The message seemed weird, so Nick asked a basic question: “Is Cameron Rowe a person on the team?” Then the responses got… well, keep reading.

    The “assistant” thanked him for asking. Then it assured him someone would “look into this and get back to him with the necessary details.” Then it suggested scheduling a call. And then it repeated itself. Word-for-word. Multiple times.

    Just so we’re clear: the text messages, which had previously been coming from Cameron Rowe, now said that the dealership was looking into whether Cameron Rowe was a real person. It’s like some weird AI software loop, but with robots that don’t know they’re robots.

    Eventually, after asking more than once, Nick tried the obvious question:

    “Are you a chatbot?”

    The assistant replied:

    “I am the dealership’s virtual assistant…”

    That’s technically honest. But here’s where it gets ridiculous: the dealership didn’t just give its virtual assistant a first name. They gave it a last name. And a business title. And an email signature. And—if the messages are to be believed—a backstory compelling enough to text him more than a dozen times.

    The dealership literally created an AI bot to pretend to be a person.

    The thing is, AI chatbots may be many things, but they are not people. And they should not have two names.

    Nick eventually connected with a real person—a consultant named Antonio. He was, thankfully, an actual human being. He confirmed it when Nick asked. Twice.

    And then Antonio admitted what was already obvious to anyone who gave it more than a moment’s thought: Cameron Rowe was not real. He was an “artificial assistant designed to help set appointments and generate customer incentives.”

    To Antonio’s credit, he didn’t hide from it. But he also revealed the underlying problem in one short sentence:

    “Almost all major dealerships use some sort of AI to conduct business.”

    That might be true. But the problem isn’t that dealerships are using AI. It’s that they’re using it without telling people they’re using it—while also designing it to feel as human as possible.

    Maybe it’s just me, but it seems incredibly strange and dishonest that this AI chatbot was given a name, a personality, and a fake identity, without ever disclosing that none of it was real. I get that companies aren’t using AI because it delights customers. They are doing it because it allows them to handle more conversations, more cheaply, without hiring more people. There’s nothing inherently wrong with efficiency. But somewhere along the line, a lot of businesses seem to have learned the wrong lesson.

    It seems like companies think that if people don’t want to talk to robots, the solution is just to make people think they’re talking to a human. Give the robots last names and job titles, and make them very friendly.

    Except no one wants that. They just want to know who—or what—they’re talking to. If you’re going to make me talk to a robot, it should be absolutely clear that I’m talking to a robot. Otherwise, you’re not being honest.

    And here’s the part companies seem to forget: the moment customers catch you not being honest, they’ll assume you’re not being honest somewhere else—somewhere that matters.

    That’s the part of this story that should make every business reconsider how they’re rolling out AI to customers. Trust, it turns out, is your most valuable asset.

    If the dealership’s first message had simply said:

    “This is our automated assistant. I can help schedule appointments or get basic information to our team,” none of this would have happened. Nick wouldn’t have been annoyed. He wouldn’t have felt misled. He wouldn’t have spent days trying to figure out whether Cameron with a last name was a human being.

    Instead, he would have gotten his oil change, the dealership would have saved time, and everyone would have moved on with their day. But because the AI attempted to pass as human, it created the exact opposite outcome: confusion and broken trust.

    Here’s the simple lesson: If your customer asks whether they’re talking to a human, your AI strategy has already failed. Just tell people the truth—that’s what they really want. What they don’t want is a chatbot with a last name.
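
    To make that lesson concrete, here is a minimal sketch of what an honesty-first assistant could look like. Everything in it is hypothetical: the disclosure text, the pattern list, and function names like is_identity_question are illustrative stand-ins, not any dealership’s actual system.

    ```python
    import re

    # Hypothetical disclosure, sent as the very first message and repeated
    # whenever the customer asks who (or what) they are talking to.
    DISCLOSURE = (
        "This is the dealership's automated assistant. I can schedule "
        "appointments or pass questions to our human team."
    )

    # Simple patterns suggesting the customer is asking about identity.
    IDENTITY_PATTERNS = [
        r"\bare you (a |an )?(bot|chatbot|robot|human|real person|person)\b",
        r"\bis \w[\w ]* a (real )?person\b",
    ]

    def is_identity_question(message: str) -> bool:
        """Return True if the customer seems to be asking who they're talking to."""
        text = message.lower()
        return any(re.search(pattern, text) for pattern in IDENTITY_PATTERNS)

    def reply(message: str) -> str:
        """Answer identity questions honestly; never loop a canned deflection."""
        if is_identity_question(message):
            return DISCLOSURE + " Would you like me to connect you with a person?"
        return handle_routine_request(message)

    def handle_routine_request(message: str) -> str:
        # Placeholder for the bot's real scheduling and FAQ logic.
        return "I can help with appointments. What day works for you?"
    ```

    Handled this way, the very first “Is Cameron Rowe a person on the team?” gets a straight answer and an offer of a human, instead of a word-for-word repeated deflection.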

    Jason Aten

  • FedEx’s Use of AI Chatbots Is the Worst Thing a Company Could Do to Its Customers

    If you’ve ever had a package delayed by FedEx, you already know the feeling of frustration. You refresh the tracking page for the tenth time, watching as the promised “by noon” delivery window ticks by. Only it’s not coming.

    So, you do what people do and try to figure out what’s going on. One of the most amazing technological advances of the last 50 years is that you can watch a package go from Seattle or Los Angeles, travel across the country, and arrive at your home in Miami or New York. The amount of coordination and logistics that go into making that happen is not something I can comprehend. The problem is, sometimes it seems like it’s just theater.

    For example, I was recently waiting for a package promised to be delivered by noon, even though the tracking information showed it had never even left FedEx’s hub in Memphis. Even so, the tracker insisted the package would arrive at my door on time, despite it being 700 miles away.

    That doesn’t really make sense, but it’s not nearly as bad as trying to actually contact FedEx’s customer support, which is now an artificial intelligence mess. You’d think that technology would mean faster responses or better answers. What it actually means is that the company has built a wall between itself and its customers—and then put a talking robot in front to tell you to go away.

    AI virtual assistants

    When you open the chat window on FedEx’s website, you’re greeted by an AI “virtual assistant” that offers to help. It can tell you what you already know: your package hasn’t moved. It can read tracking data, copy-paste policy lines, and assure you that it has the most up-to-date information. What it can’t do is anything remotely useful.

    I kept asking the chatbot why my package wasn’t delivered, but it just kept insisting that it was scheduled for delivery that day, even though it was already 10 p.m. Telling me that a package still sitting in an airport in Memphis will be delivered in Michigan is neither up-to-date nor useful.

    Of course, if you ask to talk to a person, it’ll tell you to call. That seems reasonable, but if you do, you’ll hear that “our agents have the same information you can find online.” In other words, the company doesn’t even want you to try. It’s a remarkable statement: not only is the bot incapable of helping you, but FedEx seems proud of the fact that its human employees wouldn’t be able to either.

    Humans want to talk to humans

    The thing is, I know for sure that the humans can help you. At a minimum they can try to explain what went wrong. In some cases, those humans will go out of their way to try to solve whatever happened to your package.

    I mean that sincerely. FedEx has a long tradition of employees going out of their way to help customers get their deliveries, sometimes taking extraordinary measures to deliver a passport or business contract.

    And—to be very clear—the problem here is not with the planes and delivery trucks and people who deliver FedEx packages. Sure, my package was delayed, but I fully understand that things happen. I wasn’t mad about it; I just wanted to know what happened so I could plan.

    The problem isn’t even with the people you might talk to on the phone, if you’re able to figure out the secret pathway through to an actual human. The problem is with the people who make decisions about how to do things like “streamline operations” and “increase efficiency” by inserting technology in places where humans would rather interact with other humans.

    The wrong incentives

    Companies like FedEx know exactly what they’re doing. They don’t deploy AI chat systems because customers love them. Companies do it because it’s cheaper than hiring enough people to handle the number of inquiries and complaints they get. They do it because they know most people will give up before ever talking to someone who might actually solve their problem.

    And, to be fair to FedEx, it is definitely not the only company that is doing this. I wrote previously about how UPS and Taco Bell are inserting robots where people would prefer to interact with a human.

    If you think that you can use AI to save a bunch of money by letting your customers talk to robots instead of humans, I promise you, you’re doing it wrong. Your customers do not want to talk to robots; they want to talk to a person.

    I’m sure that there are times when the robots will provide a better answer, but it is not a better experience. And anyone who tries to justify it as being a better experience is thinking about the wrong incentives. It probably seems less expensive, except that it really isn’t when you make enough of your customers mad that they decide they don’t want to be your customers anymore.

    The illusion of a better experience

    Also, just because your support team ends up dealing with fewer customers doesn’t mean there are fewer problems. It just means that the customers who have those problems gave up before getting them solved. It just means they’re out there getting mad, and that chips away at your brand promise in ways you don’t even realize because you decided you didn’t want to hear from them.

    The irony is that FedEx’s business is built entirely on reliability and communication. The company wants you to trust it to deliver something important—something valuable—on time. But when that trust breaks down, the least it can do is acknowledge you as a person. It’s hard to overstate how damaging it is to a brand when it doesn’t.

    In theory, AI could make customer service better. A well-trained model could predict issues before they happen, proactively communicate delays, and make sure humans step in when empathy or judgment is required. In practice, FedEx has done the opposite. It built a system that’s designed to minimize the number of human interactions precisely when customers need them most.
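
    As a sketch of what that could look like (a hypothetical design, not FedEx’s actual system), a proactive delay check only needs the promised window, the last scan, and a rule for when a human should take over:

    ```python
    from datetime import datetime, timedelta

    # Hypothetical tracking record; real carrier APIs differ.
    class Shipment:
        def __init__(self, promised_by: datetime, last_scan_location: str,
                     destination: str):
            self.promised_by = promised_by
            self.last_scan_location = last_scan_location
            self.destination = destination

    def likely_delayed(shipment: Shipment, now: datetime,
                       min_transit: timedelta) -> bool:
        """Flag a shipment whose promised window can no longer be met.

        If the package is still at a hub and less time remains than a
        minimal transit estimate, the "on time" promise is already broken.
        """
        remaining = shipment.promised_by - now
        still_at_hub = shipment.last_scan_location != shipment.destination
        return still_at_hub and remaining < min_transit

    def handle_inquiry(shipment: Shipment, now: datetime) -> str:
        """Admit the delay and escalate, rather than repeating stale data."""
        if likely_delayed(shipment, now, min_transit=timedelta(hours=4)):
            notify_human_agent(shipment)  # hypothetical escalation hook
            return ("Your package is delayed past its promised window. "
                    "A human agent has been notified and will follow up.")
        return "Your package is on schedule."

    def notify_human_agent(shipment: Shipment) -> None:
        # Placeholder: push the case into a human agent's queue.
        print(f"Escalating shipment stuck at {shipment.last_scan_location}")
    ```

    The threshold itself is arbitrary; the point is that the system admits what it already knows, that a package still sitting in Memphis at 10 p.m. is not arriving on time, and hands the conversation to a person instead of repeating the original promise.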

    Minimizing human contact might save money in the short term. But in the long term, it teaches customers not to trust you. It teaches them that if something goes wrong, they’re on their own.

    I reached out to FedEx, but the company did not immediately respond to my request for comment.

    Jason Aten

  • Discord users’ IDs and data compromised in customer service provider hack

    One of Discord’s third-party customer service providers has been infiltrated by an unauthorized party who was able to gain access to users’ information. Discord said it recently discovered the incident, which took place on September 20. The compromised data includes a “small number” of government IDs, like driver’s licenses and passports, which some users may have submitted to verify their ages. To be clear, Discord itself wasn’t hacked, and you would only be affected by the data breach if you’ve ever communicated with the messaging service’s Customer Support or Trust & Safety teams. That also means the bad actors didn’t get access to your messages within the service, just whatever you may have shared with customer support.

    Discord has been sending out emails to people affected by the breach, even those who have no accounts but have contacted its support teams for any reason. In the email, the service said that the compromised information may include your real name, your username if you have one, your email and other contact details, the last four digits of any credit card associated with your account, and your IP addresses. The service will also specify in the email whether any ID you’d submitted has been compromised, which would put you at higher risk of identity theft than other users. Discord clarified that the breach would not have compromised your full credit card number, your physical address, or your password.

    The service said it quickly revoked the provider’s access to its system after learning about the breach and notified law enforcement of the incident. It also said that it will “frequently audit [its] third-party systems” to ensure they meet Discord’s standards.

    Mariella Moon
