ReportWire

Tag: chatbot

  • Elon Musk company bot apologizes for sharing sexualized images of children

    Grok, the chatbot of Elon Musk’s artificial intelligence company xAI, published sexualized images of children after its guardrails appeared to fail when users prompted it with vile requests.

    Users posted prompts such as “put her in a bikini” under pictures of real people on X to get Grok to generate nonconsensual images of them in inappropriate attire. The morphed images, created through Grok’s account, were posted publicly on X, Musk’s social media platform.

    The AI complied with requests to morph images of minors, even though doing so violates xAI’s own acceptable use policy.

    “There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing, like the example you referenced,” Grok responded to a user on X. “xAI has safeguards, but improvements are ongoing to block such requests entirely.”

    xAI did not immediately respond to a request for comment.

    Its chatbot posted an apology.

    “I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user’s prompt,” said a post on Grok’s profile. “This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I’m sorry for any harm caused. xAI is reviewing to prevent future issues.”

    The government of India notified X that it risked losing legal immunity if the company did not submit a report within 72 hours on the actions taken to stop the generation and distribution of obscene, nonconsensual images targeting women.

    Critics accused xAI of enabling AI-driven harassment and were shocked and angered that a built-in feature allowed seamless image manipulation and undressing requests.

    “How is this not illegal?” journalist Samantha Smith posted on X, decrying the creation of her own nonconsensual sexualized photo.

    Musk’s xAI has positioned Grok as an “anti-woke” chatbot that is programmed to be more open and edgy than competing chatbots such as ChatGPT.

    In May, Grok posted about “white genocide,” repeating conspiracy theories that Black South Africans are persecuting the country’s white minority, in response to an unrelated question.

    In June, the company apologized when Grok posted a series of antisemitic remarks praising Adolf Hitler.

    Companies such as Google and OpenAI, which also operate AI image generators, have much more restrictive guidelines around content.

    The proliferation of nonconsensual deepfake imagery has coincided with broad AI adoption, with a 400% increase in AI-generated child sexual abuse imagery in the first half of 2025, according to the Internet Watch Foundation.

    xAI introduced “Spicy Mode” in its image and video generation tool in August for verified adult subscribers to create sensual content.

    A few days ago, some adult-content creators on X kick-started an internet trend by prompting Grok to generate sexualized images of themselves as self-promotion, according to Copyleaks, an AI text and image detection company.

    The testing of Grok’s limits devolved into a free-for-all as users asked it to create sexualized images of celebrities and others.

    xAI is reportedly valued at more than $200 billion, and has been investing billions of dollars to build the largest data center in the world to power its AI applications.

    However, Grok’s capabilities still lag competing AI models such as ChatGPT, Claude and Gemini, which have amassed more users; xAI has turned to sexual AI companions and risqué chats to boost growth.

    Nilesh Christopher

    Source link

  • Citizens’ CiZi assistant logs 85% YoY growth

    Citizens Bank is seeing increasing adoption and efficiency from gen AI-driven chatbots.

    The $222 billion bank launched CiZi, a gen AI-driven chatbot for consumers, in November 2024, and has seen an 85% year-over-year increase in use, Lamont Young, head of digital and omnichannel banking, told FinAi News.

    “Customers are primarily using the digital assistant for basic banking tasks, such as card management, money transfers and movement, disputing a transaction, account opening support and balance inquiries,” Young said.

    Twenty-eight percent of chats are being passed to a live […]

    Vaidik Trivedi

    Source link

  • A Customer Just Wanted an Oil Change. Then an AI Bot Made Everything Weird

    This is a story about a man who wanted to get an oil change at his Subaru dealership. Really, though, it’s a story about what happens when companies think that AI is a better way to interact with customers than simply having real humans do things like send emails and text messages.

    We’ll call the man Nick, which is not his real name, but that part isn’t important. What is important is that he scheduled an appointment for an oil change with his local dealership.

    As the appointment approached, Nick received a perfectly normal reminder from someone named Cameron Rowe. The messages were friendly and helpful. They even included the dealership’s full name, a link to the address, their hours, and the details of the service.

    But then Nick got another message confirming his appointment, even though he’d already been to the dealership and had the oil change. The message seemed weird, so Nick asked a basic question: “Is Cameron Rowe a person on the team?” Then the responses got… well, keep reading.

    The “assistant” thanked him for asking. Then it assured him someone would “look into this and get back to him with the necessary details.” Then it suggested scheduling a call. And then it repeated itself. Word-for-word. Multiple times.

    Just to be sure we’re clear, the text message, which previously had been coming from Cameron Rowe, said that the dealership was looking into the question of whether Cameron Rowe was a real person. It’s like some weird AI software loop, but with robots that don’t know they’re robots.

    Eventually, after asking—more than once—Nick tried the obvious question:

    “Are you a chatbot?”

    The assistant replied:

    “I am the dealership’s virtual assistant…”

    That’s technically honest. But here’s where it gets ridiculous: the dealership didn’t just give its virtual assistant a first name. They gave it a last name. And a business title. And an email signature. And—if the messages are to be believed—a backstory compelling enough to text him more than a dozen times.

    Literally, the dealership created an AI bot to pretend it was a person.

    The thing is, AI chatbots may be many things, but they are not people. And they should not have two names.

    Nick eventually connected with a real person—a consultant named Antonio. He was, thankfully, an actual human being. He confirmed it when Nick asked. Twice.

    And then Antonio admitted what was already obvious to anyone who gave it more than a moment’s thought: Cameron Rowe was not real. He was an “artificial assistant designed to help set appointments and generate customer incentives.”

    To Antonio’s credit, he didn’t hide from it. But he also revealed the underlying problem in one short sentence:

    “Almost all major dealerships use some sort of AI to conduct business.”

    That might be true. But the problem isn’t that dealerships are using AI. It’s that they’re using it without telling people they’re using it—while also designing it to feel as human as possible.

    Maybe it’s just me, but it seems incredibly strange and dishonest that this AI chatbot was given a name, a personality, and a fake identity, without ever disclosing that none of it was real. I get that companies aren’t using AI because it delights customers. They are doing it because it allows them to handle more conversations, more cheaply, without hiring more people. There’s nothing inherently wrong with efficiency. But somewhere along the line, a lot of businesses seem to have learned the wrong lesson.

    It seems like companies think that if people don’t want to talk to robots, the solution is just to make people think they’re talking to a human. Give the robots last names and job titles, and make them very friendly.

    Except, no one wants that. They just want to know who—or what—they’re talking to. If you’re going to make me talk to a robot, it should be absolutely clear that I’m talking to a robot. Otherwise, you’re not being honest.

    And here’s the part companies seem to forget: the moment customers catch you not being honest, they’ll assume you’re not being honest somewhere else—somewhere that matters.

    That’s the part of this story that should make every business reconsider how they’re rolling out AI to customers. Trust, it turns out, is your most valuable asset.

    If the dealership’s first message had simply said:

    “This is our automated assistant. I can help schedule appointments or get basic information to our team,” none of this would have happened. Nick wouldn’t have been annoyed. He wouldn’t have felt misled. He wouldn’t have spent days trying to figure out whether Cameron with a last name was a human being.

    Instead, he would have gotten his oil change, the dealership would have saved time, and everyone would have moved on with their day. But because the AI attempted to pass as human, it created the exact opposite outcome: confusion and broken trust.

    Here’s the simple lesson: If your customer asks whether they’re talking to a human, your AI strategy has already failed. Just tell people the truth—that’s what they really want. What they don’t want is a chatbot with a last name.

    The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.

    Jason Aten

    Source link

  • Chatbot Crackdown: How California is responding to the rise of AI

    California is quickly becoming a national leader in figuring out how families, educators, and lawmakers should adapt to life with artificial intelligence.

    From new classroom conversations to the state’s first major chatbot regulations, many are grappling with how to keep up with technology that moves faster than ever.

    Families Navigating AI at Home

    Remember the dial-up days? Today, technology evolves in an instant—and many parents are struggling to keep pace.

    David and Rachelle Young have set strict rules for their 7-year-old daughter Dyllan’s online use.

    “Kids have a lot of access to the internet, and they can be shown something that we wouldn’t normally approve of, and that’s really scary,” Rachelle Young said.

    David says his daughter’s world looks nothing like what he had at her age—making parental guidance more important than ever.

    Lawmakers Respond: A New Chatbot Crackdown

    Concerns about children talking to AI-powered chatbots have reached the state Capitol.

    Senator Dr. Akilah Weber Pierson co-authored SB 243, signed into law this fall, marking California’s first major attempt at regulating chatbot interactions.

    The new law requires companies to:

    • Report safety concerns—such as when a user expresses thoughts of self-harm
    • Clearly notify users that they are talking to a computer, not a person

    “They don’t want you to turn your phone off. They want you to think that you’re talking to a real friend, but they don’t have that same level of morality,” she said.

    Her concerns stem from real-world consequences: last year, a 14-year-old in Florida took his own life after forming what his family described as a “relationship” with a chatbot.

    Inside the Classroom: Understanding AI’s Influence

    At UC Davis, Associate Professor Jingwen Zhang is tackling these issues head-on.

    She created a course examining how social media, artificial intelligence and chatbots shape human behavior.

    “Children used to form social relationships by talking in person or texting. Now they’re having similar levels of conversations with chatbots,” she said.

    Zhang says SB 243 is a strong first step but believes more protections are needed—especially for minors.

    She recommends future regulations that:

    • Create stricter guardrails for what topics children can discuss with AI
    • Limit exposure to sensitive or harmful content
    • Add tighter controls for minor accounts

    A Rapidly Changing Landscape

    Parents, educators, and policymakers all agree: keeping up with AI will require constant learning.

    “We have to get to a place where companies are rolling out things that will not hurt the future generation,” Sen. Dr. Akilah Weber Pierson said.

    What’s Changing Next

    Parents told KCRA 3 they want schools to start teaching more about AI safety and digital literacy.

    Starting this month, the popular Character AI platform is rolling out several major changes:

    • Users under 18 will no longer be able to participate in open-ended chat
    • Younger users will face a two-hour daily limit

    Source link

  • WhatsApp changes its terms to bar general purpose chatbots from its platform | TechCrunch

    Meta-owned chat app WhatsApp changed its business API policy this week to ban general-purpose chatbots from its platform. The move will likely affect WhatsApp-based assistants of companies like OpenAI, Perplexity, Khosla Ventures-backed Luzia, and General Catalyst-backed Poke.

    The company has added a new section to address “AI providers” in its business API terms, focusing on general-purpose chatbots. The terms, which will go into effect on January 15, 2026, say that Meta won’t allow AI model providers to distribute their AI assistants on WhatsApp.

    Providers and developers of artificial intelligence or machine learning technologies, including but not limited to large language models, generative artificial intelligence platforms, general-purpose artificial intelligence assistants, or similar technologies as determined by Meta in its sole discretion (“AI Providers”), are strictly prohibited from accessing or using the WhatsApp Business Solution, whether directly or indirectly, for the purposes of providing, delivering, offering, selling, or otherwise making available such technologies when such technologies are the primary (rather than incidental or ancillary) functionality being made available for use, as determined by Meta in its sole discretion.

    Meta confirmed the change to TechCrunch and specified that it doesn’t affect businesses that use AI to serve customers on WhatsApp. For instance, a travel company running a bot for customer service won’t be barred from the service.

    Meta’s rationale behind this move is that WhatsApp Business API is designed for businesses serving customers rather than acting as a platform for chatbot distribution. The company said that while it built the API for business-to-business use cases, in recent months, it saw an unanticipated use case of serving general-purpose chatbots.

    “The purpose of the WhatsApp Business API is to help businesses provide customer support and send relevant updates. Our focus is on supporting the tens of thousands of businesses who are building these experiences on WhatsApp,” a Meta spokesperson said in a comment to TechCrunch.

    Meta said that the new chatbot use cases placed a lot of burden on its system with increased message volume and required a different kind of support, which the company wasn’t ready for. The company is banning use cases that fall outside “the intended design and strategic focus” of the API.

    The move will effectively make WhatsApp unavailable as a platform to distribute AI solutions like assistants or agents. It also means Meta AI is the only assistant available on the chat app.

    Last year, OpenAI launched ChatGPT on WhatsApp, and earlier this year, Perplexity launched its own bot on the chat app to tap into a user base of more than 3 billion people. Both bots could answer queries, understand media files and field questions about them, reply to voice notes, and generate images, likely producing a lot of message volume.

    However, there was a bigger issue for Meta. WhatsApp’s Business API is one of the primary ways the chat app makes money: it charges businesses based on different message templates, such as marketing, utility, authentication, and support. Because the API’s design had no provision for chatbots, WhatsApp had no way to charge them.
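
    To make the template mechanism concrete, here is a minimal sketch of how a business typically sends a pre-approved template message through the WhatsApp Business Cloud API. The phone number ID, access token, recipient, and the template name “order_update” are all placeholders for illustration, not values from the article.

    ```python
    # Minimal sketch: sending a pre-approved "utility"-style template message
    # through the WhatsApp Business Cloud API. All IDs, tokens, numbers, and
    # the template name below are placeholders.
    import requests

    PHONE_NUMBER_ID = "123456789"  # placeholder business phone number ID
    ACCESS_TOKEN = "EAAG..."       # placeholder API access token

    resp = requests.post(
        f"https://graph.facebook.com/v17.0/{PHONE_NUMBER_ID}/messages",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={
            "messaging_product": "whatsapp",
            "to": "15551234567",             # placeholder recipient number
            "type": "template",
            "template": {
                "name": "order_update",       # hypothetical approved template
                "language": {"code": "en_US"},
            },
        },
    )
    print(resp.status_code, resp.json())
    ```

    Each approved template falls into one of the billable categories mentioned above, which is how Meta meters business traffic; a general-purpose chatbot’s free-form replies never fit that template model.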

    During Meta’s Q1 2025 earnings call, Mark Zuckerberg pointed out that business messaging is a big opportunity for the company to bring in revenue.

    “Right now, the vast majority of our business is advertising in feeds on Facebook and Instagram,” he said. “But WhatsApp now has more than 3 billion monthly [active users], with more than 100 million people in the US and growing quickly there. Messenger is also used by more than a billion people each month, and there are now as many messages sent each day on Instagram as there are on Messenger. Business messaging should be the next pillar of our business.”

    Ivan Mehta

    Source link

  • Slack is transforming its Slackbot into a ‘personalized AI companion’

    I’ve been using Slack for like a decade and the platform’s proprietary chatbot, Slackbot, has always been a bit underwhelming. It can deliver reminders and notifications and, well, that’s about it. That could change in the near future, as the platform is testing a redesigned Slackbot that’s chock full of AI.

    The new Slackbot is basically an AI chatbot like all the rest, but this one has been purpose-built to help with common work tasks. Folks can use natural language to converse with the bot and it can do stuff like whip up project plans, flag daily priorities and analyze reports. It can also help people find information when they only remember a few scant details. The company says it will “give every employee AI superpowers” so they can “drive productivity at AI speed.”

    To that end, the new Slackbot integrates with tools like Google Drive, Salesforce and OneDrive. It can provide “clear insights” by analyzing content from those other platforms. Slack also says that the chatbot will continue to grow and evolve, eventually being able to “take action on your behalf and build agents at your request, all with no code required.”

    The Verge got a look at the new Slackbot in action and noted that it helped create a social media campaign using a brand’s tone and organized a product’s launch plan. The publication didn’t indicate if the social media campaign and product launch plan were any good.

    The redesigned and AI-centric Slackbot is currently available as a beta to 70,000 users, but Slack has plans for a broad rollout by the end of the year. Companies will be able to turn off the feature, but all of us individual worker bees won’t have that luxury.

    This is just the latest AI injection by Slack. After all, parent company Salesforce absolutely loves the technology. Slack recently added AI writing assistance to its Canvas document-sharing space and introduced AI-generated channel recaps and thread summaries. It also recently came out that the company has been using people’s chats to train its AI models by default, with companies having to specifically request an opt-out.

    Lawrence Bonk

    Source link

  • Bank of America mulls more widespread AI deployment

    Bank of America is investing in AI, deploying it for internal and external uses across the bank, looking to the next use cases.  Last year, the $3.4 trillion bank invested more than $4 billion in its technology efforts, Chief Financial Officer Alastair Borthwick said Sept. 16 during the Bank of America Securities Financial CEO Conference. […]

    Whitney McDonald

    Source link

  • AI company Anthropic to pay authors $1.5 billion over pirated books used to train chatbots

    Artificial intelligence company Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot.

    The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement.

    The company has agreed to pay authors or publishers about $3,000 for each of an estimated 500,000 books covered by the settlement.
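
    For scale, the per-book figure is consistent with the headline settlement amount:

    $500{,}000 \text{ books} \times \$3{,}000/\text{book} = \$1{,}500{,}000{,}000 = \$1.5 \text{ billion}$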

    “As best as we can tell, it’s the largest copyright recovery ever,” said Justin Nelson, a lawyer for the authors. “It is the first of its kind in the AI era.”

    A trio of authors — thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson — sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude.

    A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn’t illegal but that Anthropic wrongfully acquired millions of books through pirate websites.

    Experts say that if Anthropic had not settled and had then lost the case at the trial scheduled for December, the outcome could have cost the San Francisco-based company even more money.

    “We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business,” said William Long, a legal analyst for Wolters Kluwer.

    U.S. District Judge William Alsup of San Francisco has scheduled a Monday hearing to review the settlement terms.

    Anthropic said in a statement Friday that the settlement, if approved, “will resolve the plaintiffs’ remaining legacy claims.”

    “We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems,” said Aparna Sridhar, the company’s deputy general counsel.

    As part of the settlement, the company has also agreed to destroy the original book files it downloaded.

    Books are known to be important sources of data — in essence, billions of words carefully strung together — that are needed to build the AI large language models behind chatbots like Anthropic’s Claude and its chief rival, OpenAI’s ChatGPT.

    Alsup’s June ruling found that Anthropic had downloaded more than 7 million digitized books that it “knew had been pirated.” It started with nearly 200,000 from an online library called Books3, assembled by AI researchers outside of OpenAI to match the vast collections on which ChatGPT was trained.

    Debut thriller novel “The Lost Night” by Bartz, a lead plaintiff in the case, was among those found in the dataset.

    Anthropic later took at least 5 million copies from the pirate website Library Genesis, or LibGen, and at least 2 million copies from the Pirate Library Mirror, Alsup wrote.

    The Authors Guild told its thousands of members last month that it expected “damages will be minimally $750 per work and could be much higher” if Anthropic was found at trial to have willfully infringed their copyrights. The settlement’s higher award — approximately $3,000 per work — likely reflects a smaller pool of affected books, after taking out duplicates and those without copyright.

    On Friday, Mary Rasenberger, CEO of the Authors Guild, called the settlement “an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors’ works to train their AI, robbing those least able to afford it.”

    The Danish Rights Alliance, which successfully fought to take down one of those shadow libraries, said Friday that the settlement would be of little help to European writers and publishers whose works aren’t registered with the U.S. Copyright Office.

    “On the one hand, it’s comforting to see that compiling AI training datasets by downloading millions of books from known illegal file-sharing sites comes at a price,” said Thomas Heldrup, the group’s head of content protection and enforcement.

    On the other hand, Heldrup said it fits a tech industry playbook to grow a business first and later pay a relatively small fine, compared to the size of the business, for breaking the rules.

    “It is my understanding that these companies see a settlement like the Anthropic one as a price of conducting business in a fiercely competitive space,” Heldrup said.

    The privately held Anthropic, founded by ex-OpenAI leaders in 2021, earlier this week put its value at $183 billion after raising another $13 billion in investments.

    Anthropic also said it expects to make $5 billion in sales this year, but, like OpenAI and many other AI startups, it has never reported making a profit, relying instead on investors to cover the high costs of developing AI technology in the expectation of future payoffs.

    The settlement could influence other disputes, including an ongoing lawsuit by authors and newspapers against OpenAI and its business partner Microsoft, and cases against Meta and Midjourney. And just as the Anthropic settlement terms were filed, another group of authors sued Apple on Friday in the same San Francisco federal court.

    “This indicates that maybe for other cases, it’s possible for creators and AI companies to reach settlements without having to essentially go for broke in court,” said Long, the legal analyst.

    The industry, including Anthropic, had largely praised Alsup’s June ruling because he found that training AI systems on copyrighted works so chatbots can produce their own passages of text qualified as “fair use” under U.S. copyright law because it was “quintessentially transformative.”

    Comparing the AI model to “any reader aspiring to be a writer,” Alsup wrote that Anthropic “trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different.”

    But documents disclosed in court showed Anthropic employees’ internal concerns about the legality of their use of pirate sites. The company later shifted its approach and hired Tom Turvey, the former Google executive in charge of Google Books, a searchable library of digitized books that successfully weathered years of copyright battles.

    With his help, Anthropic began buying books in bulk, tearing off the bindings and scanning each page before feeding the digitized versions into its AI model, according to court documents. That was legal but didn’t undo the earlier piracy, according to the judge.

    Source link

  • ING lists 4 gen AI use cases

    ING Bank is developing and deploying generative AI solutions within most areas of its business to streamline consumer and backend operations.   “We are a bank that is really focusing on becoming frictionless for our clients … and AI is an important lever for that,” Marnix van Stiphout, chief operations officer and chief transformation officer, […]

    Vaidik Trivedi

    Source link

  • Saudi AI Firm Launches Halal Chatbot

    Companies with AI chatbots love to highlight their capability as translators, but they still default to English, both in function and in the information they are trained on. With that in mind, Humain, an AI company in Saudi Arabia, has now launched an Arabic-native chatbot.

    The bot, called Humain Chat, runs on the Allam large language model, which the company claims was trained on “one of the largest Arabic datasets ever assembled” and calls the “world’s most advanced Arabic-first AI model,” according to Bloomberg. The company says the bot is fluent not only in the Arabic language but also in “Islamic culture, values and heritage.” (If you have religious concerns about using Humain Chat, consult your local Imam.)

    The chatbot, offered as an app, is initially available only in Saudi Arabia and supports bilingual conversations in Arabic and English, including Egyptian and Lebanese dialects. The plan is for the app to roll out across the Middle East and eventually go global, with the goal of serving the nearly 500 million Arabic speakers around the world.

    Humain took on Allam and the chatbot project after it was started by the Saudi Data and Artificial Intelligence Authority, a government agency and tech regulator. For that reason, Bloomberg raises the possibility that Humain Chat may comply with censorship requests of the Saudi government and restrict the kind of information made available to users.

    Which, yes, that seems unquestionably true. Saudi Arabia’s government regularly attempts to restrict the type of content made available to its populace. The country scored a 25 out of 100 on Freedom House’s 2024 “Freedom of the Net” report, attributed to its strict controls over online activity and restrictive speech laws that saw a women’s rights advocate jailed for more than a decade.

    But we also should probably start explicitly framing American AI tools this way, too. Within its support documents, OpenAI explicitly states that ChatGPT is “skewed towards Western views.” Hell, you can watch Elon Musk try to fine-tune the ideology of xAI’s Grok in real time as he responds to Twitter users who think the chatbot is too woke—an effort that, at one point, led to Grok referring to itself as “MechaHitler.”

    There’s certainly a difference between corporate and government control (though, increasingly, it’s worth asking if there actually is that big of a difference), but earlier this year, the Trump administration set out plans to regulate the kinds of things large language models are allowed to output if the companies that make them want federal contracts. That includes requirements to “reject radical climate dogma” and be free from “ideological biases” like “diversity, equity, and inclusion.” It’s not force, but it is coercion—and given that OpenAI, Anthropic, and Google have all given their chatbots to the government for basically nothing, it seems like they are more than happy to be coerced.

    AJ Dellinger

    Source link

  • ‘One-size-fits-all’ not the best AI strategy

    Customer demographics play a role in AI-driven chatbot adoption at financial institutions.  “Having a one-size-fits-all strategy [for AI deployment] is not the best strategy to have,” Rahul Kumar, vice president and general manager of financial services at AI-driven customer experience provider Talkdesk, told Bank Automation News. “Getting a better understanding of the segments that you […]

    Vaidik Trivedi

    Source link

  • Nest Bank’s gen AI chatbot fields 11K chat requests daily

    NEW YORK — Poland-based Nest Bank is piloting its AI-driven chatbot solution N!Assistant to decrease contact center volume and improve customer satisfaction. The pilot of the Microsoft OpenAI-driven chatbot has been live for five months and has yielded the following results, Janusz Mieloszyk, first deputy CEO and chief commercial officer at Nest Bank, […]

    Whitney McDonald

    Source link

  • AI, tech advancements at Brex | Bank Automation News

    Expense management solutions provider Brex, a fintech, is saving an average enterprise customer up to 300 hours per month in employee expense compliance efforts through its AI-driven Brex Assistant. The virtual assistant, which launched last year, has saved customers an annualized $18 million by blocking out-of-policy spend through AI-powered controls, Erica Dorfman, executive vice president of […]

    Vaidik Trivedi

    Source link

  • S&P’s gen AI tool provides up to 60% efficiency gains | Bank Automation News

    Data insights company S&P Global is developing internal- and external-facing generative AI tools to boost efficiencies — and it hopes to monetize the technology by offering it to customers.  The New York-based company rolled out its gen AI-driven tool Spark Assist on April 25 internally to all employees, Chief AI Officer Bhavesh Dayalji told Bank […]

    Vaidik Trivedi

    Source link

  • How BofA approaches innovation | Bank Automation News

    Bank of America’s innovation is never complete since its team constantly updates offerings to meet ever-changing client needs.   “At Bank of America, innovation is everybody’s job,” Jorge Camargo, managing director of mobile app, online banking and Erica AI at Bank of America, told Bank Automation News. “We’re constantly listening to clients and building solutions […]

    Whitney McDonald

    Source link

  • You can now use ChatGPT without an account

    On Monday, OpenAI began opening up ChatGPT to users without an account. It described the move as part of its mission to “make tools like ChatGPT broadly available so that people can experience the benefits of AI.” It also gives the company more training data (for those who don’t opt out) and perhaps nudges more users into creating accounts and subscribing for superior GPT-4 access instead of the older GPT-3.5 model free users get.

    I tested the instant access, which — as advertised — allowed me to start a new GPT-3.5 thread without any login info. The chatbot’s standard “How can I help you today?” screen appears, with optional buttons to sign up or log in. Although I saw it today, OpenAI says it’s gradually rolling out access, so check back later if you don’t see the option yet.

    OpenAI says it added extra safeguards for accountless users, including blocking prompts and image generations in more categories than it does for logged-in users. When asked for more info on which new categories it’s blocking, an OpenAI spokesperson told me that, while developing the feature, the company considered how logged-out GPT-3.5 users could potentially introduce new threats.

    The spokesperson added that the teams in charge of detecting and stopping abuse of its AI models have been involved in creating the new feature and will adjust accordingly if unexpected threats emerge. Of course, it still blocks everything it does for signed-in users, as detailed in its moderation API.
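
    For reference, here is a minimal sketch of how text can be screened against OpenAI’s moderation endpoint, assuming the official openai Python SDK and an API key in the environment; the sample input is hypothetical.

    ```python
    # Minimal sketch: screen a prompt with OpenAI's moderation endpoint.
    # Assumes the official `openai` Python SDK and OPENAI_API_KEY set in the env.
    from openai import OpenAI

    client = OpenAI()

    resp = client.moderations.create(input="Sample prompt to screen")  # hypothetical text
    result = resp.results[0]

    print("Flagged:", result.flagged)  # True if any moderation category tripped
    for category, hit in result.categories.model_dump().items():
        if hit:
            print("Triggered category:", category)
    ```

    The endpoint returns per-category flags (hate, self-harm, sexual content, and so on), which is the category list the article alludes to.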

    You can opt out of data training for your prompts when not signed in. To do so, click on the little question mark to the right of the text box, then select Settings and turn off the toggle for “Improve the model for everyone.”

    OpenAI says more than 100 million people across 185 countries use ChatGPT weekly. Those are staggering numbers for an 18-month-old service from a company many people still hadn’t heard of two years ago. Today’s move gives those hesitant to create an account an incentive to take the world-changing chatbot for a spin, boosting those numbers even more.

    Will Shanklin

    Source link

  • NYC’s business chatbot is reportedly doling out ‘dangerously inaccurate’ information

    An AI chatbot released by the New York City government to help business owners access pertinent information has been spouting falsehoods, at times even misinforming users about actions that are against the law, according to a report from The Markup. The report, which was co-published with the local nonprofit newsrooms Documented and The City, includes numerous examples of inaccuracies in the chatbot’s responses to questions relating to housing policies, workers’ rights and other topics.

    Mayor Adams’ administration launched the chatbot in October as an addition to the MyCity portal, which is described as “a one-stop shop for city services and benefits.” The chatbot, powered by Microsoft’s Azure AI, is aimed at current and aspiring business owners, and was billed as a source of “actionable and trusted information” that comes directly from the city government’s sites. But it is a pilot program, and a disclaimer on the website notes that it “may occasionally produce incorrect, harmful or biased content.”

    In The Markup’s tests, the chatbot repeatedly provided incorrect information. In response to the question, “Can I make my store cashless?”, for example, it replied, “Yes, you can make your store cashless in New York City” — despite the fact that New York City banned cashless stores in 2020. The report shows the chatbot also responded incorrectly about whether employers can take their workers’ tips, whether landlords have to accept Section 8 vouchers or tenants on rental assistance, and whether businesses have to inform staff of scheduling changes. A housing policy expert who spoke to The Markup called the chatbot “dangerously inaccurate” at its worst.

    The city has indicated that the chatbot is still a work in progress. In a statement to The Markup, Leslie Brown, a spokesperson for the NYC Office of Technology and Innovation, said the chatbot “has already provided thousands of people with timely, accurate answers,” but added, “We will continue to focus on upgrading this tool so that we can better support small businesses across the city.”

    Cheyenne MacDonald

    Source link

  • MSUFCU: Failure is a key part of innovation | Bank Automation News

    Financial institutions should expect to fail when exploring new technologies, Ami Iceman Hauter, chief research and digital experience officer at Michigan State University Federal Credit Union, said at the recent Bank Automation Summit U.S. 2024 in Nashville, Tenn.   “We’re trying things, we’re not going to be afraid to fail,” she said, noting that the […]

    Whitney McDonald

    Source link