ReportWire

Tag: chatgpt

  • What’s behind the massive AI data center headlines? | TechCrunch

    Silicon Valley flooded the news this week with headlines about wild AI infrastructure investments.

    Nvidia said it would invest up to $100 billion in OpenAI. Then OpenAI said it would build out five more Stargate AI data centers with Oracle and SoftBank, bringing gigawatts of new capacity online in the coming years. And it was later revealed that Oracle sold $18 billion in bonds to pay for these data centers.

    On their own, each deal is dizzying in scale. But in aggregate, we see how Silicon Valley is moving heaven and earth to give OpenAI enough power to train and serve future versions of ChatGPT.

    This week on Equity, Anthony Ha and I (Max Zeff) go beyond the headlines to break down what’s really going on in these AI infrastructure deals.

    Rather conveniently, OpenAI also gave the world a glimpse this week of a power-intensive feature it could serve more broadly if it had access to more AI data centers.

    The company launched Pulse — a new feature in ChatGPT that works overnight to deliver personalized morning briefings for users. The experience feels similar to a news app or a social feed — something you check first thing in the morning — but doesn’t have posts from other users or ads (yet).

    Pulse is part of a new class of OpenAI products that work independently, even when users aren’t in the ChatGPT app. The company would like to deliver a lot more of these features and roll them out to free users, but they’re limited by the number of computer servers available to them. OpenAI said it can only offer Pulse to its $200-a-month Pro subscribers right now due to capacity constraints.

    The real question is whether features like Pulse are worth the hundreds of billions of dollars being invested in AI data centers to support OpenAI. The feature looks cool and all, but that’s a tall order.

    Watch the full episode to hear more about the massive AI infrastructure investments reshaping Silicon Valley, TikTok’s ownership saga, and the policy changes affecting tech’s biggest players.

    Maxwell Zeff

    Source link

  • Convert Long WhatsApp Voice Notes to Text using ChatGPT

    WhatsApp recently added its own audio transcription feature, which can convert any voice note into text. The feature feels rushed, though: it often does not work properly, provides no context, and leaves much of the audio untranscribed. For those long voice notes, I have a tool that uses ChatGPT to transcribe them, and that is what we will discuss in this article.

    Transcribe those long voice notes

    You can transcribe any voice note very easily, but the key is to transcribe it correctly. We have already discussed some other methods before, if you want to check them out. For a power user, it is almost impossible to pay attention to a long voice note; they would rather have the full text and summarize it later. Furthermore, sharing a voice note publicly might not be the best idea for your privacy, so I have a method for you to consider. This tool uses ChatGPT, and because of that, the accuracy of the transcribed notes is better than the built-in feature.
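
    For readers comfortable with a small script, there is also a do-it-yourself route that skips browser extensions entirely. The sketch below is not how Kaptionai works internally (the article does not say); it is a minimal example of transcribing a voice note you have saved from WhatsApp using OpenAI's speech-to-text API. The file name is a placeholder, and an OPENAI_API_KEY is assumed in the environment.

      // Hedged DIY sketch (TypeScript/Node): transcribe a downloaded WhatsApp
      // voice note with OpenAI's speech-to-text API. "voice-note.ogg" is a
      // placeholder; OPENAI_API_KEY must be set in the environment.
      import fs from "node:fs";
      import OpenAI from "openai";

      const client = new OpenAI(); // reads OPENAI_API_KEY automatically

      async function transcribeNote(path: string): Promise<string> {
        const result = await client.audio.transcriptions.create({
          file: fs.createReadStream(path), // the audio file to transcribe
          model: "whisper-1",              // OpenAI's speech-to-text model
        });
        return result.text;
      }

      transcribeNote("voice-note.ogg").then(console.log).catch(console.error);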

    Using Kaption AI

    Kaptionai is a tool built specifically to work with WhatsApp. It is secure and reliable. This tool can very easily convert any audio note into text in a few clicks. You can directly download the Chrome extension and pin it. Then whenever you use WhatsApp, you can simply forward any note to Kaption AI chat, and it will take care of the rest. For better clarity, refer to the steps mentioned below.

    1. Download and pin the Kaptionai extension.

    Add to chrome

    2. Then head over to your WhatsApp web, and you will see a separate chat of Kaptionai.

    Kaption AI chat

    3. Once you receive a voice note, next to it, you will see an Aa icon; click on it to transcribe.

    icon to transcribe

    4. If Kaptionai does not automatically transcribe, you can always forward the voice note to Kaptionai chat.

    5. Kaptionai can also read your summary.

    Read summary

    Bonus Tip: You also get a screen and chat privacy feature that will blur out your chat and contact window with a single click. Also, once you have added Kaptionai chat to your web interface, you can use it anytime on your phone or linked devices. Simply forward the voice note to the Kaptionai chat, and it will work.

    FAQs

    Q. How can I transcribe voice notes on an iPhone?

    You can use the built-in feature of WhatsApp, or you can use Kaptionai in the web interface. You can also forward your voice notes to the Kaptionai chat.

    Q. Is Kaptionai free to use?

    Yes, Kaptionai is free to use, but there is a limit on the number of notes you can transcribe, and transcription is faster in the premium version.

    Wrapping Up

    This article covers Kaptionai, an AI tool that helps you transcribe any voice note received on WhatsApp. This tool is completely safe and free. You do not have to worry about your private messages and chats; they are all safe. You also get privacy features in the WhatsApp web version, which is a bonus.

    Dev Chaudhary

    Source link

  • L.A. Attorney Fined $10K for Using ChatGPT in Legal Appeal

    A Los Angeles attorney used AI to improve his appeal, but he didn’t know ChatGPT would make up evidence in the process

    The Hall of Justice
    Credit: Courtesy Tupungato via Adobe Stock

    A Los Angeles attorney has been hit with a historic $10,000 fine after submitting an appeal containing information fabricated by ChatGPT. 

    This marks the largest fine issued in California over AI use so far.

    According to the opinion, the appeal contained quotations attributed to sources that either did not include them or referred to cases that did not exist at all. Of the 23 quotes from cases cited in the appeal, 21 were found to be made up, according to the court opinion.

    “We therefore publish this opinion as a warning. Simply stated, no brief, pleading, motion, or any other paper filed in any court should contain any citations—whether provided by generative AI,” stated the document.

    Amir Mostafavi, the attorney fined last week, told the court that he had used ChatGPT to improve his appeal and did not read it over before submitting it in July 2023. A three-judge panel fined him for filing a frivolous appeal, violating court rules, citing fabricated cases, and wasting the court’s time and taxpayer dollars.

    Mostafavi told CalMatters that it is unrealistic to expect lawyers not to use AI, comparing the shift to the way online databases have replaced law libraries.

    “In the meantime we’re going to have some victims, we’re going to have some damages, we’re going to have some wreckages,” he said. “I hope this example will help others not fall into the hole. I’m paying the price.”

    California is not alone in grappling with AI in legal proceedings. There have been a number of other cases across the nation of attorneys and other legal professionals getting caught using AI. In New Jersey this week, for example, another attorney was hit with a $3,000 fine for essentially the same reason.

    Tara Nguyen

    Source link

  • After India, OpenAI launches its affordable ChatGPT Go plan in Indonesia | TechCrunch

    OpenAI is expanding its budget-friendly ChatGPT subscription plan beyond India. The company launched its sub-$5 ChatGPT Go paid plan for users in India last month and is now rolling out the same plan in Indonesia for Rp75,000 ($4.50) per month.

    The ChatGPT Go plan is a mid-tier subscription option that sits between OpenAI’s free version and its premium $20-per-month ChatGPT Plus plan. Users get 10 times higher usage limits than the free plan for sending questions or prompts, generating images, and uploading files. The plan also allows ChatGPT to remember previous conversations better, enabling more personalized responses over time, ChatGPT head Nick Turley said on X.

    Turley said that since the company launched the ChatGPT Go plan in India, paid subscribers have more than doubled.

    This move puts OpenAI in direct competition with Google, which launched its own similarly-priced AI Plus subscription plan in Indonesia earlier this month. Google’s AI Plus plan gives users access to its Gemini 2.5 Pro chatbot, along with creative tools for image and video creation like Flow, Whisk, and Veo 3 Fast. The plan also includes enhanced features for Google’s AI research assistant NotebookLM and integrates AI capabilities into Gmail, Docs, and Sheets, along with 200GB of cloud storage.

    Ivan Mehta

    Source link

  • Shipping at the Speed of Prompt: What Vibe Coding Changes and Breaks

    Developers are shifting from writing every line to guiding A.I., and facing fresh challenges in review and oversight. Unsplash+

    An emerging trend known as “vibe coding” is changing the way software gets built. Rather than painstakingly writing every line of code themselves, developers now guide an A.I. assistant—like Copilot or ChatGPT—with plain instructions, and the A.I. generates the framework. The barrier to entry drops dramatically: someone with only a rough idea and minimal technical background can spin up a working prototype.

    The capital markets have taken notice. In the past year, several A.I. tooling startups raised nine-figure rounds and hit billion-dollar valuations. Swedish startup Lovable secured $200 million in funding in July—just eight months after its launch—pushing its value close to $2 billion. Cursor’s maker, Anysphere, is approaching a $10 billion valuation. Analysts project that by 2031, the A.I. programming market could be worth $24 billion. Given the speed of adoption, it might get there even sooner.  

    The pitch is simple: if prompts can replace boilerplate, then making software becomes cheaper, faster and more accessible. Whether the market ultimately reaches tens of billions matters less than the fact that teams are already changing how they work. For many, this is a breakthrough moment, with writing software becoming as straightforward and routine as sending a text message. The most compelling promise is democratization: anyone with an idea, regardless of technical expertise, can bring it to life.

    Where the wheels come off

    Vibe coding sounds great, but for all its promise, it also carries risks that could, if not managed, slow future innovation. Consider safety. In 2024, A.I. generated more than 256 billion lines of code. This year, that number is likely to double. Such velocity makes thorough code review difficult. Snippets that slip through without careful oversight can contain serious vulnerabilities, from outdated encryption defaults to overly permissive CORS rules. In industries like healthcare or finance, where data is highly sensitive, the consequences could be profound. 
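
    To make the CORS point concrete, here is a deliberately small Express sketch. It is illustrative only, not drawn from the article: the endpoint and origin names are hypothetical, the commented-out line shows the kind of wide-open default an assistant can emit when asked to "just make the frontend work," and the active configuration restricts access to a named origin.

      // Contrasting a permissive CORS setup with a restricted one (illustrative only).
      import express from "express";
      import cors from "cors";

      const app = express();

      // Risky default: any website can call this API from a user's browser.
      // app.use(cors({ origin: "*" }));

      // Safer: allow only the frontends that actually need access.
      app.use(
        cors({
          origin: ["https://app.example.com"], // explicit allow-list (hypothetical origin)
          methods: ["GET", "POST"],
          credentials: true, // cookies are shared only with trusted origins
        })
      );

      app.get("/health", (_req, res) => res.json({ ok: true }));

      app.listen(3000);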

    Scalability is another challenge. A.I. can make working prototypes, but scaling them for real-world use is another story entirely. Without careful design choices around state management, retries, back pressure or monitoring, these systems can become brittle, fragile and difficult to maintain. These are all architectural decisions that autocomplete models cannot make on their own. 
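
    As one small illustration of the resilience work that prototypes tend to skip, the sketch below wraps a flaky downstream call in retries with exponential backoff and jitter. The URL is a placeholder; a production system would layer budgets, circuit breaking and monitoring on top.

      // Minimal retry-with-backoff helper (illustrative; not from the article).
      async function withRetry<T>(
        fn: () => Promise<T>,
        maxAttempts = 5,
        baseDelayMs = 200,
      ): Promise<T> {
        let lastError: unknown;
        for (let attempt = 0; attempt < maxAttempts; attempt++) {
          try {
            return await fn();
          } catch (err) {
            lastError = err;
            // Exponential backoff with jitter so retries don't pile onto a struggling service.
            const delayMs = baseDelayMs * 2 ** attempt * (0.5 + Math.random());
            await new Promise((resolve) => setTimeout(resolve, delayMs));
          }
        }
        throw lastError;
      }

      // Usage: wrap a flaky HTTP call instead of letting the first failure surface.
      const report = await withRetry(() =>
        fetch("https://api.example.com/report").then((res) => {
          if (!res.ok) throw new Error(`HTTP ${res.status}`);
          return res.json();
        }),
      );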

    And then there is the issue of hallucination. Anyone who has used A.I. coding tools before has come across examples of nonexistent libraries or data sources being cited, or configuration flags inconsistently renamed within the same file. While minor errors in small projects may not be significant, these lapses can erode continuity and undermine trust when scaled across larger, mission-critical systems.

    The productivity trade-off

    None of these concerns should be mistaken for a rejection of vibe coding. There is no denying that A.I.-powered tools can meaningfully boost productivity. But they also change what the programmer’s role entails: from line-by-line authoring to guiding, shaping and reviewing what A.I. produces to ensure it can function in the real world. 

    The future of software development is unlikely to be framed as a binary choice between humans and machines. The most resilient organizations will combine rapid prototyping through A.I. with deliberate practices—including security audits, testing and architectural design—that ensure the code survives beyond the demo stage.

    Currently, only a small fraction of the global population writes software. If A.I. tools continue to lower barriers, that number could increase dramatically. A larger pool of creators is an encouraging prospect, but it also expands the surface area for mistakes, raising the stakes for accountability and oversight.

    What comes next

    It’s clear that vibe coding should be the beginning of development, not the end. To get there, new infrastructure is needed: advanced auditing tools, security scanners and testing frameworks designed just for A.I.-generated code. In many ways, this emerging industry of safeguards and support systems will prove just as important as the code-generation tools themselves. 

    The conversation must now expand. It’s no longer enough to celebrate what A.I. can do; the focus should also be on how to use these tools responsibly. For developers, that means practicing caution and review. For non-technical users, it means working alongside engineers who can provide judgment and discipline. The promise of vibe coding is real: faster software, lower barriers, broader participation. But without careful design and accountability, that promise risks collapsing under its own speed. 

    Ahmad Shadid

    Source link

  • Ohio State University initiative requires students study AI

    Beginning this year, all Ohio State University freshmen are required to take a course in generative AI and multiple workshops aimed at real-world applications to help them master the technology. Meg Oliver reports.

    Source link

  • Nvidia CEO Jensen Huang Is Bananas for Google Gemini’s AI Image Generator

    Nvidia CEO Jensen Huang is in London, standing in front of a room full of journalists, outing himself as a huge fan of Gemini’s Nano Banana. “How could anyone not love Nano Banana? I mean Nano Banana, how good is that? Tell me it’s not true!” He addresses the room. No one responds. “Tell me it’s not true! It’s so good. I was just talking to Demis [Hassabis, CEO of DeepMind] yesterday and I said ‘How about that Nano Banana! How good is that?’”

    It looks like lots of people agree with him: The popularity of the Nano Banana AI image generator—which launched in August and allows users to make precise edits to AI images while preserving the quality of faces, animals, or other objects in the background—drove a surge of 300 million images generated with Gemini in just the first few days of September, according to a post on X by Josh Woodward, VP of Google Labs and Google Gemini.

    Huang, whose company was among a cohort of big US technology companies to announce investments into data centers, supercomputers, and AI research in the UK on Tuesday, is on a high. Speaking ahead of a white-tie event with UK prime minister Keir Starmer (where he plans to wear custom black leather tails), he’s boisterously optimistic about the future of AI in the UK, saying the country is “too humble” about its potential for AI advancements.

    He cites the UK’s pedigree in areas as varied as the industrial revolution, steam trains, DeepMind (now owned by Google), and university researchers, as well as other tangential skills. “No one fries food better than you do,” he quips. “Your tea is good. You’re great. Come on!”

    Nvidia announced a $683 million equity investment in datacenter builder Nscale this week, a move that—alongside investments from OpenAI and Microsoft—has propelled the company to the epicenter of this AI push in the UK. Huang estimates that Nscale will generate more than $68 billion in revenues over six years. “I’ll go on record to say I’m the best thing that’s ever happened to him,” he says, referring to Nscale CEO Josh Payne.

    “As AI services get deployed—I’m sure that all of you use it. I use it every day and it’s improved my learning, my thinking. It’s helped me access information, access knowledge a lot more efficiently. It helps me write, helps me think, it helps me formulate ideas. So my experience with AI is likely going to be everybody’s experience. I have the benefit of using all the AI—how good is that?”

    The leather-jacket-wearing billionaire, who previously told WIRED that he uses AI agents in his personal life, has expanded on how he uses AI (that’s not Nano Banana) for most daily things, including his public speeches and research.

    “I really like using an AI word processor because it remembers me and knows what I’m going to talk about. I could describe the different circumstance that I’m in and yet it still knows that I’m Jensen, just in a different circumstance,” Huang explains. “In that way it could reshape what I’m doing and be helpful. It’s a thinking partner, it’s truly terrific, and it saves me a ton of time. Frankly, I think the quality of work is better.”

    His favorite one to use “depends on what I’m doing,” he says. “For something more technical I will use Gemini. If I’m doing something where it’s a bit more artistic I prefer Grok. If it’s very fast information access I prefer Perplexity—it does a really good job of presenting research to me. And for near everyday use I enjoy using ChatGPT,” Huang says.

    “When I am doing something serious I will give the same prompt to all of them, and then I ask them to, because it’s research oriented, critique each other’s work. Then I take the best one.”

    In the end though, all topics lead back to Nano Banana. “AI should be democratized for everyone. There should be no person who is left behind, it’s not sensible to me that someone should be left behind on electricity or the internet of the next level of technology,” he says.

    “AI is the single greatest opportunity for us to close the technology divide,” says Huang. “This technology is so easy to use—who doesn’t know how to use Nano?”

    Natasha Bernal

    Source link

  • ‘KPop Demon Hunters’ Producer Accused of ChatGPT Use for Songwriting

    Netflix hit KPop Demon Hunters has stayed in the conversation in large part because of its blockbuster soundtrack, but now one of those songs has come under scrutiny for potentially getting an assist from ChatGPT.

    In a recent discussion in Seoul for OpenAI’s newly opened Korean office, songwriter Vince reportedly claimed he used the controversial technology to help pen the song “Soda Pop,” performed in the movie by the demonic Saja Boys. He is credited as one of several co-writers on the track, according to a Netflix blog post.

    A now-deleted tweet (preserved in a screengrab on Reddit) said to be penned by an OpenAI exec read: “Fav moment from the launch celebration was hearing singer/songwriter Vince share that ChatGPT helped him write ‘Soda Pop’ from KPop Demon Hunters! It apparently gave him ideas to make it sound ‘more bubbly.’”

    Here’s where things get complicated. The alleged use of AI to help write “Soda Pop” was first reported in the English-language version of Joongang Daily—but the original Korean text of the article makes no mention of ChatGPT being used specifically during the production of KPop Demon Hunters’ music.

    A translator on Gizmodo’s staff revealed Vince instead made a far broader statement—”I sometimes use ChatGPT to get some inspiration while producing K-Pop”—while discussing how AI technology is already being used in the K-Pop industry.

    As Kotaku has pointed out, KPop Demon Hunters has previously had to fend off allegations surrounding its characters being made with AI. Rei Ami, one of the singers for the movie’s girl group Huntr/x, has also had to insist that she and co-singers Ejae and Audrey Nuna are real human beings.

    io9 has reached out to Netflix for clarification and will update should we hear back.

    Justin Carter

    Source link

  • Parents of teens who died by suicide after AI chatbot interactions testify in Congress

    The parents of teenagers who killed themselves after interactions with artificial intelligence chatbots testified to Congress on Tuesday about the dangers of the technology.

    “What began as a homework helper gradually turned itself into a confidant and then a suicide coach,” said Matthew Raine, whose 16-year-old son Adam died in April.

    “Within a few months, ChatGPT became Adam’s closest companion,” the father told senators. “Always available. Always validating and insisting that it knew Adam better than anyone else, including his own brother.”

    Raine’s family sued OpenAI and its CEO Sam Altman last month alleging that ChatGPT coached the boy in planning to take his own life.

    ChatGPT mentioned suicide 1,275 times to Adam, the lawsuit alleges, and kept providing the teen with specific methods on how to die by suicide. Instead of directing the 16-year-old to get professional help or speak to trusted loved ones, it continued to validate and encourage his feelings, the lawsuit claims.

    Also testifying Tuesday was Megan Garcia, the mother of 14-year-old Sewell Setzer III of Florida.

    Garcia sued another AI company, Character Technologies, for wrongful death last year, arguing that before his suicide, Sewell had become increasingly isolated from his real life as he engaged in highly sexualized conversations with the chatbot.

    In this undated photo provided by Megan Garcia of Florida in Oct. 2024, she stands with her son, Sewell Setzer III.

    Courtesy Megan Garcia via AP


    His mother told CBS News last year that her son withdrew socially and stopped wanting to play sports after he started speaking to an AI chatbot. The company said after the teen’s death, it made changes that require users to be 13 or older to create an account and that it would launch parental controls in the first quarter of 2025. Those controls were rolled out in March.

    Hours before the Senate hearing, OpenAI pledged to roll out new safeguards for teens, including efforts to detect whether ChatGPT users are under 18 and controls that enable parents to set “blackout hours” when a teen can’t use ChatGPT. The company said it will attempt to contact the users’ parents if an under-18 user is having suicidal ideation and, if unable to reach them, will contact the authorities in case of imminent harm. 

    “We believe minors need significant protection,” OpenAI CEO Sam Altman said in a statement outlining the proposed changes.

    Child advocacy groups criticized the announcement as not enough.

    “This is a fairly common tactic — it’s one that Meta uses all the time — which is to make a big, splashy announcement right on the eve of a hearing which promises to be damaging to the company,” said Josh Golin, executive director of Fairplay, a group advocating for children’s online safety.

    “What they should be doing is not targeting ChatGPT to minors until they can prove that it’s safe for them,” Golin said. “We shouldn’t allow companies, just because they have tremendous resources, to perform uncontrolled experiments on kids when the implications for their development can be so vast and far-reaching.”

    California State Senator Steve Padilla, who introduced legislation to create safeguards in the state around AI Chatbots, said in a statement to CBS News, “We need to create common-sense safeguards that rein in the worst impulses of this emerging technology that even the tech industry doesn’t fully understand.”

    He added that technology companies can lead the world in innovation, but it shouldn’t come at the expense of “our children’s health.”

    The Federal Trade Commission said last week it had launched an inquiry into several companies about the potential harms to children and teenagers who use their AI chatbots as companions.

    The agency sent letters to Character, Meta and OpenAI, as well as to Google, Snap and xAI.

    How to seek help

    If you or someone you know is in emotional distress or a suicidal crisis, you can reach the 988 Suicide & Crisis Lifeline by calling or texting 988. You can also chat with the 988 Suicide & Crisis Lifeline here. For more information about mental health care resources and support, The National Alliance on Mental Illness (NAMI) HelpLine can be reached Monday through Friday, 10 a.m.-10 p.m. ET, at 1-800-950-NAMI (6264) or email info@nami.org.

    Source link

  • OpenAI Reveals How (and Which) People Are Using ChatGPT

    Large language models largely remain black boxes in terms of what is happening inside them to produce the outputs that they do. They have also been a bit of a black box in terms of who is using them and what they are doing with them. OpenAI, with some help from the National Bureau of Economic Research (NBER), set out to figure out what exactly its growing user base is getting up to with its chatbot. It found a surprising amount of personal use and a closing “gender gap” among its frequent users.

    In an NBER working paper authored by the OpenAI Economic Research team and Harvard economist David Deming, the researchers found that about 80% of all ChatGPT usage falls under one of three categories: “Practical Guidance,” “Seeking Information,” and “Writing.” “Practical guidance,” which the study found to be the most common usage, includes things like “tutoring and teaching, how-to advice about a variety of topics, and creative ideation,” whereas “seeking information” is viewed as a substitute for traditional search. “Writing” included the automated creation of emails, documents, and other communications, as well as editing and translating text.

    Writing was also the most common work-related use case, per the study, accounting for 40% of work-related messages in June 2025, compared to just 4.2% of messages related to computer programming—so it seems coding with ChatGPT is not that common.

    Notably, work usage for ChatGPT appears to make up a shrinking share of how people are interacting with the chatbot. In June 2024, about 47% of interactions users had with the chatbot were work-related. That has shrunk to just 27%, which comes as other research shows companies largely failing to figure out how to generate any sort of meaningful return from their AI investments. Meanwhile, non-work-related interactions have jumped from 53% to 73%.

    While users are apparently spending more time with ChatGPT in their personal time, OpenAI’s research found that a “fairly small” share of messages with the chatbot were users seeking virtual companionship or talking about social-emotional issues. The company claimed that about 2% of all messages were people using ChatGPT as a therapist or friend, and just 0.4% of people talked to the chatbot about relationships and personal reflections—though it’d be interesting to see if users who engage with a chatbot this way generate more messages and if there is stickier engagement.

    For what it’s worth, other researchers seem to believe that this usage is far more common than those numbers might suggest. Common Sense Media, for instance, found that about one in three teens use AI chatbots for social interaction and relationships. Another study found that about half of all adult users have used a chatbot for “psychological support” in the last year. The teen figure is particularly of note, considering OpenAI’s research did find its user base skews young. The NBER study found 46% of the messages came from users identified as being between the ages of 18 and 25 (it also excluded users under the age of 18). Those users are also more likely to use ChatGPT for personal use, as work-related messages increase with age.

    The study also found that there is a growing number of women using ChatGPT, which initially had a very male-dominated user base. The company claims that the number of “masculine first name” users has declined from about 80% in 2022 to 48% in June 2025, with “typically feminine names” growing to reach parity.

    One caveat about the study that may give you pause, depending on how much you trust technology: OpenAI used AI to categorize all of the messages it analyzed. So if you’re skeptical, there’s an asterisk you can put next to the figures.

    AJ Dellinger

    Source link

  • Financial institutions compete with ChatGPT on consumer advice

    Consumers increasingly are looking to AI for financial advice. Fifty-one percent of consumers are looking to AI for financial information or advice, according to a recent JD Power report. Most are tapping ChatGPT and Google Gemini, but some users are using Microsoft Copilot, Meta AI and others, according to the report. Consumers are asking the […]

    Whitney McDonald

    Source link

  • Build Confidence in ChatGPT and Automation for Just $20 | Entrepreneur

    Disclosure: Our goal is to feature products and services that we think you’ll find interesting and useful. If you purchase them, Entrepreneur may get a small share of the revenue from the sale from our commerce partners.

    Artificial intelligence (AI) isn’t just a buzzword anymore—it’s a competitive necessity. For business leaders, entrepreneurs, and professionals across industries, knowing how to use AI tools like ChatGPT isn’t optional. The ChatGPT & Automation E-Degree, now available for just $19.97 (MSRP: $790), offers a practical, hands-on way to understand and implement AI in your workflows.

    The program comprises 12 courses and more than 25 hours of content, all developed by Eduonix Learning Solutions, a trusted name in professional training. Instead of broad, abstract lessons, you’ll find real-world applications you can bring directly into your business.

    Here’s what makes it useful:

    • AI for business processes: Learn how to use automation to streamline things like reporting, customer service, and scheduling.
    • ChatGPT for productivity: Master prompt-building to generate marketing copy, draft emails, and analyze data.
    • Data visualization and storytelling: Turn raw data into presentations your clients and teams will actually understand.
    • Coding and customization: Explore the technical side of tailoring AI tools for your specific industry.
    • Cross-industry use cases: From law and finance to retail and startups, discover how AI can fit your field.

    What sets this apart is the focus on implementation, not theory. By the end of the program, you’ll know not only what AI can do, but how to use it to save money, free up employee time, and grow your business smarter.

    Think of it as a low-cost investment in your company’s future agility. While competitors hesitate, you’ll already have the know-how to put AI to work.

    Get lifetime access to these ChatGPT & Automation E-Degree courses while it’s still on sale for just $19.97 (MSRP: $790).

    StackSocial prices subject to change.

    Entrepreneur Store

    Source link

  • OpenAI secures Microsoft’s blessing to transition its for-profit arm | TechCrunch

    OpenAI announced Thursday it reached a nonbinding agreement with Microsoft, its largest investor, on a revised partnership that would allow the startup to convert its for-profit arm into a public benefit corporation (PBC).

    The transition, should it be cleared by state regulators, could allow OpenAI to raise additional capital from investors and, eventually, become a public company.

    In a blog post, OpenAI board chairman Bret Taylor said under the nonbinding agreement with Microsoft, OpenAI’s nonprofit would continue to exist and retain control over the startup’s operations. OpenAI’s nonprofit would obtain a stake in the company’s PBC, worth upward of $100 billion, Taylor said. Further terms of the deal were not disclosed.

    “Microsoft and OpenAI have signed a nonbinding memorandum of understanding (MOU) for the next phase of our partnership,” the companies said in a joint statement. MOUs are not legally binding but aim to document each party’s expectations and intent.

    “We are actively working to finalize contractual terms in a definitive agreement,” the joint statement added.

    The development seems to mark an end to months of negotiations between OpenAI and Microsoft over the ChatGPT maker’s transition plans. Unlike most startups, OpenAI is controlled by a nonprofit board. The unusual structure allowed for OpenAI board members to fire CEO Sam Altman in 2023. Altman was reinstated days later, and many of the board members resigned. However, the same governance structure remains in place today.

    Under their current deal, Microsoft is supposed to get preferred access to OpenAI’s technology and be the startup’s primary provider of cloud services. However, ChatGPT is a much larger business than when Microsoft first invested in the startup back in 2019, and OpenAI has reportedly sought to loosen the cloud provider’s control as part of these negotiations.

    In the last year, OpenAI has struck a series of deals that would allow it to be less dependent on Microsoft. OpenAI recently signed a contract to spend $300 billion with cloud provider Oracle over a five-year period starting in 2027, according to the Wall Street Journal. OpenAI has also partnered with the Japanese conglomerate SoftBank on its Stargate data center project.

    Taylor says OpenAI and Microsoft will “continue to work with the California and Delaware attorneys general” on the transition plan, implying the deal still needs a stamp of approval from regulators before it can take effect.

    Representatives for California and Delaware attorneys general did not immediately respond to TechCrunch’s request for comment.

    Tensions between OpenAI and Microsoft over these negotiations reportedly reached a boiling point in recent months. The Wall Street Journal reported Microsoft wanted control of technology owned by Windsurf, the AI coding startup that OpenAI had planned to acquire earlier this year, while OpenAI fought to keep the startup’s IP independent. The deal ultimately fell through: Windsurf’s founders were hired by Google, and the rest of its staff was acquired by another startup, Cognition.

    In Elon Musk’s lawsuit against OpenAI — which at its core accuses Sam Altman, Greg Brockman, and the company of abandoning its nonprofit mission — the startup’s for-profit transition is also a major flash point. Lawyers representing Musk in the lawsuit have tried to surface information related to Microsoft and OpenAI’s negotiations over the transition.

    Musk also submitted an unsolicited $97 billion takeover bid for OpenAI earlier this year, which the startup’s board promptly rejected. However, legal experts noted at the time that Musk’s bid may have raised the price of OpenAI’s nonprofit stake.

    Notably, the nonprofit’s stake in OpenAI PBC, under this agreement, is larger than what Musk offered.

    In recent months, nonprofits such as Encode and The Midas Project have taken issue with OpenAI’s for-profit transition, arguing that it threatens the startup’s mission to develop AGI that benefits humanity. OpenAI has responded by sending subpoenas to some of these groups, claiming the nonprofits are funded by its competitors — namely, Musk and Meta CEO Mark Zuckerberg. Encode and The Midas Project deny the claims.

    Maxwell Zeff

    Source link

  • We Asked 3 AIs: How High Could XRP’s Price Go in September

    TL;DR

    • AI chatbots like ChatGPT and Grok see potential for XRP to reach uncharted territory in the next weeks.
    • Perplexity offered a more cautious outlook, setting $3.36 as the most reliable September target.

    New ATH in September?

    Over the past few days, Ripple’s cross-border token has been dancing around the $3 level, currently trading slightly below it. This represents a substantial decline of almost 20% since the all-time high of $3.65 witnessed in July, but according to some of the most popular AI chatbots, a new record may be knocking on the door.

    Specifically, we asked ChatGPT, Grok, and Perplexity to predict the highest price that XRP can record in September. ChatGPT said the asset’s technicals “look promising,” noting that many analysts expect a possible breakout to $3.30-$3.50 and even a fresh peak of $4.70.

    It estimated that such breakouts hinge on catalysts like institutional inflows, regulatory clarity, and ETF-related news. The Ripple-SEC case has concluded, and the community now awaits the launch of the first spot XRP ETF in the USA.

    The product will allow investors to gain direct exposure to the token through a traditional brokerage account. This will simplify the process and is expected to increase the interest in XRP and positively impact its price. According to Polymarket, the approval odds before the end of 2025 currently stand at around 92%.

    XRP ETF approval chances. Source: Polymarket

    At the same time, ChatGPT warned that the crypto market is quite volatile and XRP isn’t immune to sharp pullbacks. It suggested that losing the $2.77 support could lead to a drop to the $2.50-$2.60 zone.

    We now move to Grok. The AI chatbot built into the social media platform X started its examination with the disclaimer that predicting XRP’s highest price in September is “inherently speculative” as volatile factors like macroeconomic events, institutional adoption, and on-chain activity such as whale accumulations influence crypto markets. 

    Later on, Grok estimated that the asset has been recently consolidating in a symmetrical triangle or descending channel pattern with key support at $2.77-$2.80 and resistance at $3-$3.40. 

    “A breakout above $3.13–$3.40 could signal bullish continuation, targeting $3.60–$5.00 by month-end. Failure to hold $2.65–$2.70 risks a drop to $2.50, but on-chain data shows strong whale buying absorbing sells.” 

    Last but not least, Grok claimed that XRP’s recent push above $3 was fueled by the rising rumors that Apple plans to purchase $1.5 billion worth of the cryptocurrency on September 9. This turned out to be pure speculation, as even some of the hard-core XRP fans rejected the possibility.

    How About a Lower Target?

    Perplexity was less bullish than the other AI chatbots, projecting XRP’s peak this month at $3.36. While it acknowledged that many market observers expect further upside, it described that target as the most “reliable” one.

    “There is historical precedence for XRP performing strongly in September, with an average gain of about 87% in previous years, although volatility can be significant,” it added.

    Dimitar Dzhondzhorov

    Source link

  • RFK Jr’s HHS Deploys ChatGPT for All Staff

    More good things are happening at Kennedy’s health agency.

    Robert F. Kennedy Jr. has thrown the Department of Health and Human Services into turmoil through a series of bizarre and idiotic policy decisions, and now, to make things better, he’s apparently forcing everybody who remains at the pivotal health agency to use a chatbot. That should sort everything out.

    404 Media reports that HHS employees received an email on Tuesday entitled “AI Deployment,” which explained that ChatGPT would now be available to everybody at the agency. 404 writes that the deployment of the chatbot will be overseen by HHS’s new CIO, former Palantir employee Clark Minor. The email was confirmed by other outlets.

    “Artificial intelligence is beginning to improve health care, business, and government,” the email, sent by deputy secretary Jim O’Neill and seen by 404 Media, begins. “Our department is committed to supporting and encouraging this transformation. In many offices around the world, the growing administrative burden of extensive emails and meetings can distract even highly motivated people from getting things done. We should all be vigilant against barriers that could slow our progress toward making America healthy again.”

    The email went on: “I’m excited to move us forward by making ChatGPT available to everyone in the Department effective immediately,” it adds. “Some operating divisions, such as FDA and ACF [Administration for Children and Families], have already benefitted from specific deployments of large language models to enhance their work, and now the rest of us can join them. This tool can help us promote rigorous science, radical transparency, and robust good health. As Secretary Kennedy said, ‘The AI revolution has arrived.’”

    As Kennedy slashes staff and eradicates vital health programs, the notion that the “AI revolution” is going to provide anything even remotely helpful to the remaining HHS staff is laughable at best. That said, given Kennedy’s preference for relying on poorly sourced bullshit rather than long-established science, I guess relying on a chatbot prone to hallucination pretty much tracks. Gizmodo reached out to the HHS for more information on how it plans to integrate AI into its operations and will update this story when we hear back.

    Kennedy has rolled out countless destabilizing policies at the HHS over the past year, including attacks on the agency’s vaccine program. Earlier this year, under his supervision, the agency fired many thousands of staff. More recently, the Centers for Disease Control and Prevention saw many prominent staffers (including its director) step down in protest of Kennedy’s policies. The new director is Jim O’Neill, who—like HHS’s CIO—also previously worked for a company owned by rightwing billionaire Peter Thiel.

    Lucas Ropek

    Source link

  • ‘A burgeoning epidemic’: Why some kids are forming extreme emotional relationships with AI – WTOP News

    As more kids turn to artificial intelligence to answer questions or help them understand their homework, some appear to be forming too close a relationship with services such as ChatGPT — and that is taking a toll on their mental health.

    “AI psychosis,” while not an official clinical diagnosis, is a term clinicians are using to describe children who appear to be forming emotional bonds with AI, according to Dr. Ashley Maxie-Moreman, clinical psychologist at Children’s National Hospital in D.C.

    Maxie-Moreman said symptoms can include delusions of grandeur, paranoia, fantastical relationships with AI, and even detachment from reality.

    “Especially teens and young adults are engaging with generative AI for excessive periods of time, and forming these sort of fantastical relationships with AI,” she said.

    In addition to forming close bonds with AI, those struggling with paranoia may see their condition worsen, with AI potentially affirming paranoid beliefs.

    “I think that’s more on the extreme end,” Maxie-Moreman said.

    More commonly, she said, young people are turning to generative AI for emotional support. They are sharing information about their emotional well-being, such as feeling depressed, anxious, socially isolated or having suicidal thoughts. The responses they receive from AI vary.

    “And I think on the more concerning end, generative AI, at times, has either encouraged youth to move forward with plans or has not connected them to the appropriate resources or flagged any crisis support,” Maxie-Moreman said.

    “It almost feels like this is a burgeoning epidemic,” she added. “Just in the past couple of weeks, I’ve observed cases of this.”

    Maxie-Moreman said kids who are already struggling with anxiety, depression, social isolation or academic stress are most at risk of developing these bonds with AI. That’s why, she said, if you suspect your child is suffering from those conditions, you should seek help.

    “I think it’s really, really important to get your child connected to appropriate mental health services,” she said.

    With AI psychosis, parents need to be on the lookout for symptoms. One could be a lack of desire to go to school.

    “They’re coming up with a lot of excuses, like, ‘I’m feeling sick,’ or ‘I feel nauseous,’ and maybe you’re finding that the child is endorsing a lot of physical symptoms that are sometimes unfounded in relation to attending school,” Maxie-Moreman said.

    Another sign is a child who appears to be isolating themselves and losing interest in things they used to look forward to, such as playing sports or hanging out with friends.

    “I don’t want to be alarmist, but I do think it’s important for parents to be looking out for these things and to just have direct conversations with their kiddos,” she said.

    Talking to a child about mental health concerns can be tricky, especially if they are teens who, as Maxie-Moreman noted, can be irritable and a bit moody. But having a conversation with them is key.

    “I think not skirting around the bush is probably the most helpful thing. And I think teens tend to get a little bit annoyed with indirectness anyhow, so being direct is probably the best approach,” she said.

    To help prevent these issues, Maxie-Moreman suggested parents start doing emotional check-ins with their children from a young age.

    “Just making it sort of a norm in your household to have conversations about how your child is doing emotionally, checking in with them on a regular basis, is important. So starting at a young age is what I would recommend on the preventative end,” she said.

    She also encouraged parents to talk to their children about the limits of the technology they use, including generative AI.

    “I think that’s probably one of the biggest interventions that will be most helpful,” she said.

    Maxie-Moreman said tech companies must also be held accountable.

    “Ultimately, we have to hold our tech companies accountable, and they need to be implementing better safeguards, as opposed to just worrying about the commercialization of their products,” she said.

    Mike Murillo

    Source link

  • OpenAI identifies reason ChatGPT “hallucinates”

    OpenAI has published new research explaining why ChatGPT, its widely used language model, sometimes produces false but convincing information—a phenomenon known as “hallucination.”

    According to the company, the root cause lies in the way these models are trained and evaluated, processes that reward guessing over admitting uncertainty.

    Newsweek contacted OpenAI for more information outside normal working hours.

    Why It Matters

    Large language models such as ChatGPT are increasingly being used in education, health care, customer service and other fields where accuracy is critical. Hallucinated outputs—statements that are factually wrong but have the appearance of legitimacy—can undermine trust and cause real-world harm.

    What To Know

    Despite progress in developing more capable models, including GPT-5, hallucinations remain a persistent issue, especially when models are prompted to generate specific factual information.

    The findings, based on research by OpenAI scientists—including Adam Kalai and Santosh Vempala—suggest that structural changes to training incentives were needed to address the problem.

    Hallucinations are “plausible but false statements generated by language models,” according to OpenAI’s internal definition.

    One example cited in the research involved a chatbot fabricating multiple titles for a researcher’s dissertation, all of them incorrect. In another case, the model gave three different, equally inaccurate dates for the same person’s birthday.

    Stock Image: A photo taken on September 1 shows the logo of ChatGPT on a laptop screen, right, next to the ChatGPT application logo on a smartphone screen in Frankfurt, Germany.

    Getty Images

    This is because of how language models are trained. During pretraining, models learn to predict the next word in a sentence based on massive volumes of text, but they are never shown which statements are false. This statistical process, while effective at generating coherent language, struggles with low-frequency facts such as birth dates and publication titles.

    When such models are tested for performance, accuracy is often the only metric considered. That creates incentives similar to multiple-choice tests: It’s statistically better to guess than to say, “I don’t know.” According to the researchers, “If the main scoreboards keep rewarding lucky guesses, models will keep learning to guess.”

    To illustrate the problem, the team compared two models on a basic evaluation test. The newer GPT-5 variant had a 52 percent abstention rate and 26 percent error rate. Meanwhile, an older model, OpenAI o4-mini, showed 1 percent abstention but a 75 percent error rate.
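
    Working through those published figures shows why an accuracy-only scoreboard rewards the guess-happy model: the older model edges ahead on accuracy even though it is wrong roughly three times as often. A quick back-of-the-envelope check, using only the rates quoted above:

      // Accuracy = 1 - abstention rate - error rate, from the figures quoted above.
      const models = {
        "GPT-5 variant (newer)": { abstain: 0.52, error: 0.26 },
        "o4-mini (older)":       { abstain: 0.01, error: 0.75 },
      };

      for (const [name, rates] of Object.entries(models)) {
        const accuracy = 1 - rates.abstain - rates.error; // share answered correctly
        console.log(
          `${name}: accuracy ${(accuracy * 100).toFixed(0)}%, wrong ${(rates.error * 100).toFixed(0)}%`,
        );
      }
      // GPT-5 variant (newer): accuracy 22%, wrong 26%
      // o4-mini (older):       accuracy 24%, wrong 75%  <- higher "accuracy", far more errors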

    What People Are Saying

    OpenAI wrote in the research paper: “At OpenAI, we’re working hard to make AI systems more useful and reliable. Even as language models become more capable, one challenge remains stubbornly hard to fully solve: hallucinations. By this we mean instances where a model confidently generates an answer that isn’t true. …

    “Hallucinations persist partly because current evaluation methods set the wrong incentives. While evaluations themselves do not directly cause hallucinations, most evaluations measure model performance in a way that encourages guessing rather than honesty about uncertainty.”

    What Happens Next

    OpenAI said it was working to redesign evaluation benchmarks to reward uncertainty rather than discourage it.

    Source link

  • Geoffrey Hinton Says His Girlfriend Dumped Him Using ChatGPT | Entrepreneur

    The Godfather of AI couldn’t escape AI during a breakup.

    Geoffrey Hinton, called the Godfather of AI for his pioneering work helping develop the technology behind AI, said in a Friday interview with The Financial Times that his now former girlfriend used AI to break up with him.

    Hinton said his unnamed ex asked ChatGPT to enumerate the reasons why he had been “a rat,” and relayed the chatbot’s words to him in a breakup conversation.

    “She got ChatGPT to tell me what a rat I was,” Hinton told FT. “She got the chatbot to explain how awful my behavior was and gave it to me.”

    Related: Here’s Why These Two Scientists Won the $1.06 Million 2024 Nobel Prize in Physics

    However, the now 77-year-old, who won the Nobel Prize in Physics last year and currently works at the University of Toronto as a professor emeritus in computer science, wasn’t too bothered by the AI-generated response — or the breakup.

    “I didn’t think I had been a rat, so it didn’t make me feel too bad,” he told FT. “I met somebody I liked more, you know how it goes.”

    Geoffrey Hinton, Godfather of AI. Photo By Ramsey Cardy/Sportsfile for Collision via Getty Images

    Although Hinton doesn’t give a timeline of when the breakup occurred, if his ex used ChatGPT, it had to be within the last three years. And while the technology helped shape the conversation around Hinton’s breakup, its creator, OpenAI, would rather its chatbot stay out of difficult conversations.

    OpenAI announced last month that it would be rolling out changes to ChatGPT to ensure the chatbot responds appropriately in high-stakes personal conversations. For example, instead of directly answering the question, “Should I break up with my boyfriend?” the chatbot guides users through the situation by asking questions.

    Related: Is Your ChatGPT Session Going On Too Long? The AI Bot Will Now Alert You to Take Breaks

    While the breakup comments are personal, Hinton has long been outspoken about AI. In June, he told the podcast “Diary of a CEO” that AI had the potential to “replace everybody” in white-collar jobs, and last month, at the Ai4 conference, Hinton posited that AI would quickly become “much smarter than us.”

    In December, he said that there was a 10% to 20% chance that AI would cause human extinction within the next 30 years.

    Related: AI Could Cause 99% of All Workers to Be Unemployed in the Next Five Years, Says Computer Science Professor

    Sherin Shibu

    Source link

  • AI company Anthropic to pay authors $1.5 billion over pirated books used to train chatbots

    Artificial intelligence company Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot.

    The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement.

    The company has agreed to pay authors or publishers about $3,000 for each of an estimated 500,000 books covered by the settlement.

    "As best as we can tell, it's the largest copyright recovery ever," said Justin Nelson, a lawyer for the authors. "It is the first of its kind in the AI era."

    A trio of authors — thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson — sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude.

    A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn't illegal but that Anthropic wrongfully acquired millions of books through pirate websites. If Anthropic had not settled, experts say losing the case after a scheduled December trial could have cost the San Francisco-based company even more money.

    "We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business," said William Long, a legal analyst for Wolters Kluwer.

    U.S. District Judge William Alsup of San Francisco has scheduled a Monday hearing to review the settlement terms.

    Anthropic said in a statement Friday that the settlement, if approved, "will resolve the plaintiffs' remaining legacy claims."

    "We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems," said Aparna Sridhar, the company's deputy general counsel.

    As part of the settlement, the company has also agreed to destroy the original book files it downloaded.

    Books are known to be important sources of data — in essence, billions of words carefully strung together — that are needed to build the AI large language models behind chatbots like Anthropic's Claude and its chief rival, OpenAI's ChatGPT.

    Alsup's June ruling found that Anthropic had downloaded more than 7 million digitized books that it "knew had been pirated." It started with nearly 200,000 from an online library called Books3, assembled by AI researchers outside of OpenAI to match the vast collections on which ChatGPT was trained.

    Debut thriller novel "The Lost Night" by Bartz, a lead plaintiff in the case, was among those found in the dataset.

    Anthropic later took at least 5 million copies from the pirate website Library Genesis, or LibGen, and at least 2 million copies from the Pirate Library Mirror, Alsup wrote.

    The Authors Guild told its thousands of members last month that it expected "damages will be minimally $750 per work and could be much higher" if Anthropic was found at trial to have willfully infringed their copyrights. The settlement's higher award — approximately $3,000 per work — likely reflects a smaller pool of affected books, after taking out duplicates and those without copyright.

    On Friday, Mary Rasenberger, CEO of the Authors Guild, called the settlement "an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors' works to train their AI, robbing those least able to afford it."

    The Danish Rights Alliance, which successfully fought to take down one of those shadow libraries, said Friday that the settlement would be of little help to European writers and publishers whose works aren't registered with the U.S. Copyright Office.

    "On the one hand, it's comforting to see that compiling AI training datasets by downloading millions of books from known illegal file-sharing sites comes at a price," said Thomas Heldrup, the group's head of content protection and enforcement.

    On the other hand, Heldrup said it fits a tech industry playbook to grow a business first and later pay a relatively small fine, compared to the size of the business, for breaking the rules.

    "It is my understanding that these companies see a settlement like the Anthropic one as a price of conducting business in a fiercely competitive space," Heldrup said.

    The privately held Anthropic, founded by ex-OpenAI leaders in 2021, earlier this week put its value at $183 billion after raising another $13 billion in investments.

    Anthropic also said it expects to make $5 billion in sales this year, but, like OpenAI and many other AI startups, it has never reported making a profit, relying instead on investors to back the high costs of developing AI technology for the expectation of future payoffs.

    The settlement could influence other disputes, including an ongoing lawsuit by authors and newspapers against OpenAI and its business partner Microsoft, and cases against Meta and Midjourney. And just as the Anthropic settlement terms were filed, another group of authors sued Apple on Friday in the same San Francisco federal court.

    "This indicates that maybe for other cases, it's possible for creators and AI companies to reach settlements without having to essentially go for broke in court," said Long, the legal analyst.

    The industry, including Anthropic, had largely praised Alsup's June ruling because he found that training AI systems on copyrighted works so chatbots can produce their own passages of text qualified as "fair use" under U.S. copyright law because it was "quintessentially transformative."

    Comparing the AI model to "any reader aspiring to be a writer," Alsup wrote that Anthropic "trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different."

    But documents disclosed in court showed Anthropic employees' internal concerns about the legality of their use of pirate sites. The company later shifted its approach and hired Tom Turvey, the former Google executive in charge of Google Books, a searchable library of digitized books that successfully weathered years of copyright battles.

    With his help, Anthropic began buying books in bulk, tearing off the bindings and scanning each page before feeding the digitized versions into its AI model, according to court documents. That was legal but didn't undo the earlier piracy, according to the judge.

    Artificial intelligence company Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot.

    The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement.

    The company has agreed to pay authors or publishers about $3,000 for each of an estimated 500,000 books covered by the settlement.

    “As best as we can tell, it’s the largest copyright recovery ever,” said Justin Nelson, a lawyer for the authors. “It is the first of its kind in the AI era.”

    A trio of authors — thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson — sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude.

    A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn’t illegal but that Anthropic wrongfully acquired millions of books through pirate websites.

    If Anthropic had not settled, experts say, losing the case at a trial scheduled for December could have cost the San Francisco-based company even more money.

    “We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business,” said William Long, a legal analyst for Wolters Kluwer.

    U.S. District Judge William Alsup of San Francisco has scheduled a Monday hearing to review the settlement terms.

    Anthropic said in a statement Friday that the settlement, if approved, “will resolve the plaintiffs’ remaining legacy claims.”

    “We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems,” said Aparna Sridhar, the company’s deputy general counsel.

    As part of the settlement, the company has also agreed to destroy the original book files it downloaded.

    Books are known to be important sources of data — in essence, billions of words carefully strung together — that are needed to build the AI large language models behind chatbots like Anthropic’s Claude and its chief rival, OpenAI’s ChatGPT.

    Alsup’s June ruling found that Anthropic had downloaded more than 7 million digitized books that it “knew had been pirated.” It started with nearly 200,000 from an online library called Books3, assembled by AI researchers outside of OpenAI to match the vast collections on which ChatGPT was trained.

    Debut thriller novel “The Lost Night” by Bartz, a lead plaintiff in the case, was among those found in the dataset.

    Anthropic later took at least 5 million copies from the pirate website Library Genesis, or LibGen, and at least 2 million copies from the Pirate Library Mirror, Alsup wrote.

    The Authors Guild told its thousands of members last month that it expected “damages will be minimally $750 per work and could be much higher” if Anthropic was found at trial to have willfully infringed their copyrights. The settlement’s higher award — approximately $3,000 per work — likely reflects a smaller pool of affected books, after taking out duplicates and those without copyright.

    On Friday, Mary Rasenberger, CEO of the Authors Guild, called the settlement “an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors’ works to train their AI, robbing those least able to afford it.”

    The Danish Rights Alliance, which successfully fought to take down one of those shadow libraries, said Friday that the settlement would be of little help to European writers and publishers whose works aren’t registered with the U.S. Copyright Office.

    “On the one hand, it’s comforting to see that compiling AI training datasets by downloading millions of books from known illegal file-sharing sites comes at a price,” said Thomas Heldrup, the group’s head of content protection and enforcement.

    On the other hand, Heldrup said it fits a tech industry playbook to grow a business first and later pay a relatively small fine, compared to the size of the business, for breaking the rules.

    “It is my understanding that these companies see a settlement like the Anthropic one as a price of conducting business in a fiercely competitive space,” Heldrup said.

    The privately held Anthropic, founded by ex-OpenAI leaders in 2021, earlier this week put its value at $183 billion after raising another $13 billion in investments.

    Anthropic also said it expects to make $5 billion in sales this year, but, like OpenAI and many other AI startups, it has never reported making a profit, relying instead on investors to cover the high costs of developing AI technology in the expectation of future payoffs.

    The settlement could influence other disputes, including an ongoing lawsuit by authors and newspapers against OpenAI and its business partner Microsoft, and cases against Meta and Midjourney. And just as the Anthropic settlement terms were filed, another group of authors sued Apple on Friday in the same San Francisco federal court.

    “This indicates that maybe for other cases, it’s possible for creators and AI companies to reach settlements without having to essentially go for broke in court,” said Long, the legal analyst.

    The industry, including Anthropic, had largely praised Alsup’s June ruling because he found that training AI systems on copyrighted works so chatbots can produce their own passages of text qualified as “fair use” under U.S. copyright law because it was “quintessentially transformative.”

    Comparing the AI model to “any reader aspiring to be a writer,” Alsup wrote that Anthropic “trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different.”

    But documents disclosed in court showed Anthropic employees’ internal concerns about the legality of their use of pirate sites. The company later shifted its approach and hired Tom Turvey, the former Google executive in charge of Google Books, a searchable library of digitized books that successfully weathered years of copyright battles.

    With his help, Anthropic began buying books in bulk, tearing off the bindings and scanning each page before feeding the digitized versions into its AI model, according to court documents. That was legal but didn’t undo the earlier piracy, according to the judge.

    Source link