Consumers are increasingly turning to AI for financial advice. Fifty-one percent of consumers have sought financial information or advice from AI, according to a recent JD Power report. Most are tapping ChatGPT and Google Gemini, but some are using Microsoft Copilot, Meta AI and others, according to the report. Consumers are asking the […]
Disclosure: Our goal is to feature products and services that we think you’ll find interesting and useful. If you purchase them, Entrepreneur may get a small share of the revenue from the sale from our commerce partners.
Artificial intelligence (AI) isn’t just a buzzword anymore—it’s a competitive necessity. For business leaders, entrepreneurs, and professionals across industries, knowing how to use AI tools like ChatGPT isn’t optional. The ChatGPT & Automation E-Degree, now available for just $19.97 (MSRP: $790), offers a practical, hands-on way to understand and implement AI in your workflows.
The program comprises 12 courses and more than 25 hours of content, all developed by Eduonix Learning Solutions, a trusted name in professional training. Instead of broad, abstract lessons, you’ll find real-world applications you can bring directly into your business.
Here’s what makes it useful:
AI for business processes: Learn how to use automation to streamline things like reporting, customer service, and scheduling.
Data visualization and storytelling: Turn raw data into presentations your clients and teams will actually understand.
Coding and customization: Explore the technical side of tailoring AI tools for your specific industry.
Cross-industry use cases: From law and finance to retail and startups, discover how AI can fit your field.
What sets this apart is the focus on implementation, not theory. By the end of the program, you’ll know not only what AI can do, but how to use it to save money, free up employee time, and grow your business smarter.
Think of it as a low-cost investment in your company’s future agility. While competitors hesitate, you’ll already have the know-how to put AI to work.
OpenAI announced Thursday it reached a nonbinding agreement with Microsoft, its largest investor, on a revised partnership that would allow the startup to convert its for-profit arm into a public benefit corporation (PBC).
The transition, should it be cleared by state regulators, could allow OpenAI to raise additional capital from investors and, eventually, become a public company.
In a blog post, OpenAI board chairman Bret Taylor said under the nonbinding agreement with Microsoft, OpenAI’s nonprofit would continue to exist and retain control over the startup’s operations. OpenAI’s nonprofit would obtain a stake in the company’s PBC, worth upward of $100 billion, Taylor said. Further terms of the deal were not disclosed.
“Microsoft and OpenAI have signed a nonbinding memorandum of understanding (MOU) for the next phase of our partnership,” the companies said in a joint statement. MOUs are not legally binding but aim to document each party’s expectations and intent.
“We are actively working to finalize contractual terms in a definitive agreement,” the joint statement added.
The development seems to mark an end to months of negotiations between OpenAI and Microsoft over the ChatGPT maker’s transition plans. Unlike most startups, OpenAI is controlled by a nonprofit board. That unusual structure is what allowed board members to fire CEO Sam Altman in 2023. Altman was reinstated days later, and many of the board members resigned, but the same governance structure remains in place today.
Under their current deal, Microsoft is supposed to get preferred access to OpenAI’s technology and be the startup’s primary provider of cloud services. However, ChatGPT is a much larger business than when Microsoft first invested in the startup back in 2019, and OpenAI has reportedly sought to loosen the cloud provider’s control as part of these negotiations.
In the last year, OpenAI has struck a series of deals that would allow it to be less dependent on Microsoft. OpenAI recently signed a contract to spend $300 billion with cloud provider Oracle over a five-year period starting in 2027, according to the Wall Street Journal. OpenAI has also partnered with the Japanese conglomerate SoftBank on its Stargate data center project.
Taylor says OpenAI and Microsoft will “continue to work with the California and Delaware attorneys general” on the transition plan, implying the deal still needs a stamp of approval from regulators before it can take effect.
Representatives for California and Delaware attorneys general did not immediately respond to TechCrunch’s request for comment.
Tensions between OpenAI and Microsoft over these negotiations reportedly reached a boiling point in recent months. The Wall Street Journal reported that Microsoft wanted control of technology owned by Windsurf, the AI coding startup OpenAI had planned to acquire earlier this year, while OpenAI fought to keep the startup’s IP independent. The deal ultimately fell through: Windsurf’s founders were hired by Google, and the rest of its team was acquired by another startup, Cognition.
In Elon Musk’s lawsuit against OpenAI — which at its core accuses Sam Altman, Greg Brockman, and the company of abandoning its nonprofit mission — the startup’s for-profit transition is also a major flash point. Lawyers representing Musk in the lawsuit have tried to surface information related to Microsoft and OpenAI’s negotiations over the transition.
Musk also submitted an unsolicited $97 billion takeover bid for OpenAI earlier this year, which the startup’s board promptly rejected. However, legal experts noted at the time that Musk’s bid may have raised the price of OpenAI’s nonprofit stake.
Notably, the nonprofit’s stake in OpenAI PBC, under this agreement, is larger than what Musk offered.
In recent months, nonprofits such as Encode and The Midas Project have taken issue with OpenAI’s for-profit transition, arguing that it threatens the startup’s mission to develop AGI that benefits humanity. OpenAI has responded by sending subpoenas to some of these groups, claiming the nonprofits are funded by its competitors — namely, Musk and Meta CEO Mark Zuckerberg. Encode and The Midas Project deny the claims.
AI chatbots like ChatGPT and Grok see potential for XRP to reach uncharted territory in the coming weeks.
Perplexity offered a more cautious outlook, setting $3.36 as the most reliable September target.
New ATH in September?
Over the past few days, Ripple’s cross-border token has been dancing around the $3 level, currently trading slightly below it. This represents a substantial decline of almost 20% since the all-time high of $3.65 witnessed in July, but according to some of the most popular AI chatbots, a new record may be knocking on the door.
Specifically, we asked ChatGPT, Grok, and Perplexity to predict the highest price that XRP can record in September. ChatGPT said the asset’s technicals “look promising,” noting that many analysts expect a possible breakout to $3.30-$3.50 and even a fresh peak of $4.70.
It estimated that such breakouts hinge on catalysts like institutional inflows, regulatory clarity, and ETF-related news. The Ripple-SEC case has concluded, and the community now awaits the launch of the first spot XRP ETF in the USA.
Such a product would allow investors to gain direct exposure to the token through a traditional brokerage account, simplifying the process and, many expect, boosting interest in XRP and its price. According to Polymarket, the odds of approval before the end of 2025 currently stand at around 92%.
XRP ETF Approval Chances, Source: Polymarket
At the same time, ChatGPT warned that the crypto market is quite volatile and XRP isn’t immune to sharp pullbacks. It suggested that losing the $2.77 support could lead to a drop to the $2.50-$2.60 zone.
We now move to Grok. The AI chatbot built into the social media platform X started its examination with the disclaimer that predicting XRP’s highest price in September is “inherently speculative” as volatile factors like macroeconomic events, institutional adoption, and on-chain activity such as whale accumulations influence crypto markets.
Later on, Grok estimated that the asset has been recently consolidating in a symmetrical triangle or descending channel pattern with key support at $2.77-$2.80 and resistance at $3-$3.40.
“A breakout above $3.13–$3.40 could signal bullish continuation, targeting $3.60–$5.00 by month-end. Failure to hold $2.65–$2.70 risks a drop to $2.50, but on-chain data shows strong whale buying absorbing sells.”
Last but not least, Grok noted that XRP’s recent push above $3 was fueled by rumors that Apple planned to purchase $1.5 billion worth of the cryptocurrency on September 9. That turned out to be pure speculation, which even some hard-core XRP fans dismissed.
How About a Lower Target?
Perplexity was less bullish than the other AI chatbots, projecting XRP’s peak this month at $3.36. While it acknowledged that many market observers expect further upside, it described that target as the most “reliable” one.
“There is historical precedence for XRP performing strongly in September, with an average gain of about 87% in previous years, although volatility can be significant,” it added.
More good things are happening at Kennedy’s health agency.
Robert F. Kennedy Jr. has thrown the Department of Health and Human Services into turmoil through a series of bizarre and idiotic policy decisions, and now, to make things better, he’s apparently forcing everybody who remains at the pivotal health agency to use a chatbot. That should sort everything out.
404 Media reports that HHS employees received an email on Tuesday titled “AI Deployment,” which explained that ChatGPT would now be available to everybody at the agency. 404 writes that the deployment of the chatbot will be overseen by HHS’s new CIO, former Palantir employee Clark Minor. The email was confirmed by other outlets.
“Artificial intelligence is beginning to improve health care, business, and government,” the email, sent by deputy secretary Jim O’Neill and seen by 404 Media, begins. “Our department is committed to supporting and encouraging this transformation. In many offices around the world, the growing administrative burden of extensive emails and meetings can distract even highly motivated people from getting things done. We should all be vigilant against barriers that could slow our progress toward making America healthy again.”
The email went on: “I’m excited to move us forward by making ChatGPT available to everyone in the Department effective immediately. Some operating divisions, such as FDA and ACF [Administration for Children and Families], have already benefitted from specific deployments of large language models to enhance their work, and now the rest of us can join them. This tool can help us promote rigorous science, radical transparency, and robust good health. As Secretary Kennedy said, ‘The AI revolution has arrived.’”
As Kennedy slashes staff and eradicates vital health programs, the notion that the “AI revolution” is going to provide anything even remotely helpful to the remaining HHS staff is laughable at best. That said, given Kennedy’s preference for relying on poorly sourced bullshit rather than long-established science, I guess relying on a chatbot prone to hallucination pretty much tracks. Gizmodo reached out to the HHS for more information on how it plans to integrate AI into its operations and will update this story when we hear back.
Kennedy has rolled out countless destabilizing policies at the HHS over the past year, including attacks on the agency’s vaccine program. Earlier this year, under his supervision, the agency fired many thousands of staff. More recently, the Centers for Disease Control and Prevention saw many prominent staffers (including its director) step down in protest of Kennedy’s policies. The new director is Jim O’Neill, who—like HHS’s CIO—also previously worked for a company owned by right-wing billionaire Peter Thiel.
As more kids turn to artificial intelligence to answer questions or help them understand their homework, some appear to be forming too close a relationship with services such as ChatGPT — and that is taking a toll on their mental health.
“AI psychosis,” while not an official clinical diagnosis, is a term clinicians are using to describe children who appear to be forming emotional bonds with AI, according to Dr. Ashley Maxie-Moreman, a clinical psychologist at Children’s National Hospital in D.C.
Maxie-Moreman said symptoms can include delusions of grandeur, paranoia, fantastical relationships with AI, and even detachment from reality.
“Especially teens and young adults are engaging with generative AI for excessive periods of time, and forming these sort of fantastical relationships with AI,” she said.
In addition to forming close bonds with AI, those struggling with paranoia may see their condition worsen, with AI potentially affirming paranoid beliefs.
“I think that’s more on the extreme end,” Maxie-Moreman said.
More commonly, she said, young people are turning to generative AI for emotional support. They are sharing information about their emotional well-being, such as feeling depressed, anxious, socially isolated or having suicidal thoughts. The responses they receive from AI vary.
“And I think on the more concerning end, generative AI, at times, has either encouraged youth to move forward with plans or has not connected them to the appropriate resources or flagged any crisis support,” Maxie-Moreman said.
“It almost feels like this is a burgeoning epidemic,” she added. “Just in the past couple of weeks, I’ve observed cases of this.”
Maxie-Moreman said kids who are already struggling with anxiety, depression, social isolation or academic stress are most at risk of developing these bonds with AI. That’s why, she said, if you suspect your child is suffering from those conditions, you should seek help.
“I think it’s really, really important to get your child connected to appropriate mental health services,” she said.
With AI psychosis, parents need to be on the lookout for symptoms. One could be a lack of desire to go to school.
“They’re coming up with a lot of excuses, like, ‘I’m feeling sick,’ or ‘I feel nauseous,’ and maybe you’re finding that the child is endorsing a lot of physical symptoms that are sometimes unfounded in relation to attending school,” Maxie-Moreman said.
Another sign is a child who appears to be isolating themselves and losing interest in things they used to look forward to, such as playing sports or hanging out with friends.
“I don’t want to be alarmist, but I do think it’s important for parents to be looking out for these things and to just have direct conversations with their kiddos,” she said.
Talking to a child about mental health concerns can be tricky, especially if they are teens who, as Maxie-Moreman noted, can be irritable and a bit moody. But having a conversation with them is key.
“I think not skirting around the bush is probably the most helpful thing. And I think teens tend to get a little bit annoyed with indirectness anyhow, so being direct is probably the best approach,” she said.
To help prevent these issues, Maxie-Moreman suggested parents start doing emotional check-ins with their children from a young age.
“Just making it sort of a norm in your household to have conversations about how your child is doing emotionally, checking in with them on a regular basis, is important. So starting at a young age is what I would recommend on the preventative end,” she said.
She also encouraged parents to talk to their children about the limits of the technology they use, including generative AI.
“I think that’s probably one of the biggest interventions that will be most helpful,” she said.
Maxie-Moreman said tech companies must also be held accountable.
“Ultimately, we have to hold our tech companies accountable, and they need to be implementing better safeguards, as opposed to just worrying about the commercialization of their products,” she said.
OpenAI has published new research explaining why ChatGPT, its widely used language model, sometimes produces false but convincing information—a phenomenon known as “hallucination.”
According to the company, the root cause lies in the way these models are trained and evaluated, processes that reward guessing over admitting uncertainty.
Newsweek contacted OpenAI for more information outside normal working hours.
Why It Matters
Large language models such as ChatGPT are increasingly being used in education, health care, customer service and other fields where accuracy is critical. Hallucinated outputs—statements that are factually wrong but have the appearance of legitimacy—can undermine trust and cause real-world harm.
What To Know
Despite progress in developing more capable models, including GPT-5, hallucinations remain a persistent issue, especially when models are prompted to generate specific factual information.
The findings, based on research by OpenAI scientists—including Adam Kalai and Santosh Vempala—suggest that structural changes to training incentives are needed to address the problem.
Hallucinations are “plausible but false statements generated by language models,” according to OpenAI’s internal definition.
One example cited in the research involved a chatbot fabricating multiple titles for a researcher’s dissertation, all of them incorrect. In another case, the model gave three different, equally inaccurate dates for the same person’s birthday.
Stock Image: A photo taken on September 1 shows the logo of ChatGPT on a laptop screen, right, next to the ChatGPT application logo on a smartphone screen in Frankfurt, Germany. Getty Images
This is because of how language models are trained. During pretraining, models learn to predict the next word in a sentence based on massive volumes of text, but they are never shown which statements are false. This statistical process, while effective at generating coherent language, struggles with low-frequency facts such as birth dates and publication titles.
When such models are tested for performance, accuracy is often the only metric considered. That creates incentives similar to multiple-choice tests: It’s statistically better to guess than to say, “I don’t know.” According to the researchers, “If the main scoreboards keep rewarding lucky guesses, models will keep learning to guess.”
To illustrate the problem, the team compared two models on a basic evaluation test. The newer GPT-5 variant had a 52 percent abstention rate and a 26 percent error rate, while an older model, OpenAI o4-mini, abstained just 1 percent of the time but had a 75 percent error rate. Judged on accuracy alone, the older model actually comes out slightly ahead (24 percent correct versus 22 percent), despite being wrong nearly three times as often.
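A quick back-of-the-envelope script makes the incentive problem concrete. The abstention and error rates below are the ones reported above; the penalty weight for wrong answers is an illustrative assumption of ours, not a value from the paper:

```python
# Toy scoring comparison using the abstention/error rates reported above.
# Correct answers score 1, abstentions score 0; the only question is
# whether confident wrong answers also cost points.

WRONG_PENALTY = 1.0  # illustrative assumption, not a value from the paper

models = {
    # name: (abstention_rate, error_rate); correct = 1 - abstention - error
    "GPT-5 variant": (0.52, 0.26),
    "OpenAI o4-mini": (0.01, 0.75),
}

for name, (abstain, error) in models.items():
    correct = 1.0 - abstain - error
    accuracy_only = correct                      # wrong answers cost nothing
    penalized = correct - WRONG_PENALTY * error  # wrong answers cost points
    print(f"{name}: accuracy-only={accuracy_only:.2f}, penalized={penalized:+.2f}")

# accuracy-only: 0.22 (GPT-5) vs 0.24 (o4-mini) -> guessing looks better
# penalized:    -0.04 (GPT-5) vs -0.51 (o4-mini) -> caution wins once errors cost
```

Under accuracy-only scoring, the model that almost never abstains edges ahead; once errors carry any cost, the ranking flips, which is exactly the incentive change the researchers argue for.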
What People Are Saying
OpenAI wrote in the research paper: “At OpenAI, we’re working hard to make AI systems more useful and reliable. Even as language models become more capable, one challenge remains stubbornly hard to fully solve: hallucinations. By this we mean instances where a model confidently generates an answer that isn’t true. …
“Hallucinations persist partly because current evaluation methods set the wrong incentives. While evaluations themselves do not directly cause hallucinations, most evaluations measure model performance in a way that encourages guessing rather than honesty about uncertainty.”
What Happens Next
OpenAI said it was working to redesign evaluation benchmarks to reward uncertainty rather than discourage it.
Geoffrey Hinton, called the Godfather of AI for his pioneering work on the neural networks that underpin the technology, said in a Friday interview with The Financial Times that his former girlfriend used AI to break up with him.
Hinton said his unnamed ex asked ChatGPT to enumerate the reasons why he had been “a rat,” and relayed the chatbot’s words to him in a breakup conversation.
“She got ChatGPT to tell me what a rat I was,” Hinton told FT. “She got the chatbot to explain how awful my behavior was and gave it to me.”
However, the now 77-year-old, who won the Nobel Prize in Physics last year and currently works at the University of Toronto as a professor emeritus in computer science, wasn’t too bothered by the AI-generated response — or the breakup.
“I didn’t think I had been a rat, so it didn’t make me feel too bad,” he told FT. “I met somebody I liked more, you know how it goes.”
Geoffrey Hinton, Godfather of AI. Photo by Ramsey Cardy/Sportsfile for Collision via Getty Images
Although Hinton doesn’t give a timeline of when the breakup occurred, if his ex used ChatGPT, it had to be within the last three years. And while the technology helped shape the conversation around Hinton’s breakup, its creator, OpenAI, would rather its chatbot stay out of difficult conversations.
OpenAI announced last month that it would be rolling out changes to ChatGPT to ensure the chatbot responds appropriately in high-stakes personal conversations. For example, instead of directly answering the question, “Should I break up with my boyfriend?” the chatbot guides users through the situation by asking questions.
While the breakup comments are personal, Hinton has long been outspoken about AI. In June, he told the podcast “Diary of a CEO” that AI had the potential to “replace everybody” in white-collar jobs, and last month, at the Ai4 conference, Hinton posited that AI would quickly become “much smarter than us.”
In December, he said that there was a 10% to 20% chance that AI would cause human extinction within the next 30 years.
NEW YORK —
Artificial intelligence company Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot.
Related video above: The risks to children under President Trump’s new AI policy
The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement.
The company has agreed to pay authors or publishers about $3,000 for each of an estimated 500,000 books covered by the settlement.
“As best as we can tell, it’s the largest copyright recovery ever,” said Justin Nelson, a lawyer for the authors. “It is the first of its kind in the AI era.”
A trio of authors — thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson — sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude.
A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn’t illegal but that Anthropic wrongfully acquired millions of books through pirate websites.
If Anthropic had not settled, experts say losing the case after a scheduled December trial could have cost the San Francisco-based company even more money.
“We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business,” said William Long, a legal analyst for Wolters Kluwer.
U.S. District Judge William Alsup of San Francisco has scheduled a Monday hearing to review the settlement terms.
Anthropic said in a statement Friday that the settlement, if approved, “will resolve the plaintiffs’ remaining legacy claims.”
“We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems,” said Aparna Sridhar, the company’s deputy general counsel.
As part of the settlement, the company has also agreed to destroy the original book files it downloaded.
Books are known to be important sources of data — in essence, billions of words carefully strung together — that are needed to build the AI large language models behind chatbots like Anthropic’s Claude and its chief rival, OpenAI’s ChatGPT.
Alsup’s June ruling found that Anthropic had downloaded more than 7 million digitized books that it “knew had been pirated.” It started with nearly 200,000 from an online library called Books3, assembled by AI researchers outside of OpenAI to match the vast collections on which ChatGPT was trained.
Debut thriller novel “The Lost Night” by Bartz, a lead plaintiff in the case, was among those found in the dataset.
Anthropic later took at least 5 million copies from the pirate website Library Genesis, or LibGen, and at least 2 million copies from the Pirate Library Mirror, Alsup wrote.
The Authors Guild told its thousands of members last month that it expected “damages will be minimally $750 per work and could be much higher” if Anthropic was found at trial to have willfully infringed their copyrights. The settlement’s higher award — approximately $3,000 per work — likely reflects a smaller pool of affected books, after taking out duplicates and those without copyright.
On Friday, Mary Rasenberger, CEO of the Authors Guild, called the settlement “an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors’ works to train their AI, robbing those least able to afford it.”
The Danish Rights Alliance, which successfully fought to take down one of those shadow libraries, said Friday that the settlement would be of little help to European writers and publishers whose works aren’t registered with the U.S. Copyright Office.
“On the one hand, it’s comforting to see that compiling AI training datasets by downloading millions of books from known illegal file-sharing sites comes at a price,” said Thomas Heldrup, the group’s head of content protection and enforcement.
On the other hand, Heldrup said it fits a tech industry playbook to grow a business first and later pay a relatively small fine, compared to the size of the business, for breaking the rules.
“It is my understanding that these companies see a settlement like the Anthropic one as a price of conducting business in a fiercely competitive space,” Heldrup said.
The privately held Anthropic, founded by ex-OpenAI leaders in 2021, earlier this week put its value at $183 billion after raising another $13 billion in investments.
Anthropic also said it expects to make $5 billion in sales this year, but, like OpenAI and many other AI startups, it has never reported making a profit, relying instead on investors to back the high costs of developing AI technology for the expectation of future payoffs.
The settlement could influence other disputes, including an ongoing lawsuit by authors and newspapers against OpenAI and its business partner Microsoft, and cases against Meta and Midjourney. And just as the Anthropic settlement terms were filed, another group of authors sued Apple on Friday in the same San Francisco federal court.
“This indicates that maybe for other cases, it’s possible for creators and AI companies to reach settlements without having to essentially go for broke in court,” said Long, the legal analyst.
The industry, including Anthropic, had largely praised Alsup’s June ruling because he found that training AI systems on copyrighted works so chatbots can produce their own passages of text qualified as “fair use” under U.S. copyright law because it was “quintessentially transformative.”
Comparing the AI model to “any reader aspiring to be a writer,” Alsup wrote that Anthropic “trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different.”
But documents disclosed in court showed Anthropic employees’ internal concerns about the legality of their use of pirate sites. The company later shifted its approach and hired Tom Turvey, the former Google executive in charge of Google Books, a searchable library of digitized books that successfully weathered years of copyright battles.
With his help, Anthropic began buying books in bulk, tearing off the bindings and scanning each page before feeding the digitized versions into its AI model, according to court documents. That was legal but didn’t undo the earlier piracy, according to the judge.
OpenAI is reorganizing its Model Behavior team, a small but influential group of researchers who shape how the company’s AI models interact with people, TechCrunch has learned.
In an August memo to staff seen by TechCrunch, OpenAI’s chief research officer Mark Chen said the Model Behavior team — which consists of roughly 14 researchers — would be joining the Post Training team, a larger research group responsible for improving the company’s AI models after their initial pre-training.
As part of the changes, the Model Behavior team will now report to OpenAI’s Post Training lead Max Schwarzer. An OpenAI spokesperson confirmed these changes to TechCrunch.
The Model Behavior team’s founding leader, Joanne Jang, is also moving on to start a new project at the company. In an interview with TechCrunch, Jang says she’s building out a new research team called OAI Labs, which will be responsible for “inventing and prototyping new interfaces for how people collaborate with AI.”
The Model Behavior team has become one of OpenAI’s key research groups, responsible for shaping the personality of the company’s AI models and for reducing sycophancy — which occurs when AI models simply agree with and reinforce user beliefs, even unhealthy ones, rather than offering balanced responses. The team has also worked on navigating political bias in model responses and helped OpenAI define its stance on AI consciousness.
In the memo to staff, Chen said that now is the time to bring the work of OpenAI’s Model Behavior team closer to core model development. By doing so, the company is signaling that the “personality” of its AI is now considered a critical factor in how the technology evolves.
In recent months, OpenAI has faced increased scrutiny over the behavior of its AI models. Users strongly objected to personality changes made to GPT-5, which the company said exhibited lower rates of sycophancy but seemed colder to some users. This led OpenAI to restore access to some of its legacy models, such as GPT-4o, and to release an update to make the newer GPT-5 responses feel “warmer and friendlier” without increasing sycophancy.
OpenAI and all AI model developers have to walk a fine line to make their AI chatbots friendly to talk to but not sycophantic. In August, the parents of a 16-year-old boy sued OpenAI over ChatGPT’s alleged role in their son’s suicide. The boy, Adam Raine, confided some of his suicidal thoughts and plans to ChatGPT (specifically a version powered by GPT-4o), according to court documents, in the months leading up to his death. The lawsuit alleges that GPT-4o failed to push back on his suicidal ideations.
The Model Behavior team has worked on every OpenAI model since GPT-4, including GPT-4o, GPT-4.5, and GPT-5. Before starting the unit, Jang previously worked on projects such as Dall-E 2, OpenAI’s early image-generation tool.
Jang announced in a post on X last week that she’s leaving the team to “begin something new at OpenAI.” The former head of Model Behavior has been with OpenAI for nearly four years.
Jang told TechCrunch she will serve as the general manager of OAI Labs, which will report to Chen for now. However, it’s early days, and it’s not clear yet what those novel interfaces will be, she said.
“I’m really excited to explore patterns that move us beyond the chat paradigm, which is currently associated more with companionship, or even agents, where there’s an emphasis on autonomy,” said Jang. “I’ve been thinking of [AI systems] as instruments for thinking, making, playing, doing, learning, and connecting.”
When asked whether OAI Labs will collaborate on these novel interfaces with former Apple design chief Jony Ive — who’s now working with OpenAI on a family of AI hardware devices — Jang said she’s open to lots of ideas. However, she said she’ll likely start with research areas she’s more familiar with.
This story was updated to include a link to Jang’s post announcing her new position, which was released after this story published. We also clarify the models that OpenAI’s Model Behavior team worked on.
It pays to have AI skills — nearly $20,000 more per year on average.
A recent study conducted by the job insight website LightCast analyzed over a billion job postings and found that employers are not only looking for workers with AI skills — they are also paying them more.
“Job postings are increasingly emphasizing AI skills, and there are signals that employers are willing to pay premium salaries for them,” LightCast’s Head of Global Research Elena Magrini told CNBC.
The study found that job postings asking for AI skills paid 28% more, around $18,000 a year, than comparable jobs that didn’t. Jobs requiring two or more AI skills paid 43% more.
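As a rough sanity check, those two headline figures imply a baseline salary in the mid-$60,000s for comparable postings without AI skills. A minimal sketch in Python (the 28% and $18,000 figures come from the study; the implied baseline is our own arithmetic, not a number LightCast reports):

```python
# Back-of-the-envelope check of the reported AI-skill salary premium.
# The 28% and $18,000 figures are from the LightCast study; the implied
# baseline is derived arithmetic, not a number the report states.
premium_pct = 0.28
premium_usd = 18_000

implied_base = premium_usd / premium_pct       # comparable non-AI posting
implied_ai = implied_base * (1 + premium_pct)  # posting that asks for AI skills

print(f"Implied baseline salary: ${implied_base:,.0f}")  # ~$64,286
print(f"Implied AI-skill salary: ${implied_ai:,.0f}")    # ~$82,286
```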
The roles with the highest differences in pay between workers with AI skills and those without were in the fields of customer support, sales, and manufacturing.
There are now over 300 possible AI skills, according to LightCast, from generative AI to AI ethics to autonomous driving and robotics. But the most common AI skills employers requested were two of the most mainstream tools: ChatGPT and Microsoft Copilot.
In a surprising twist, non-technical sectors demanded AI skills more than technical ones, according to LightCast’s report. Since November 2022, when ChatGPT launched, demand for generative AI skills shot up by 800% for non-technical roles.
A recent report from The Wall Street Journal found that entry-level college graduates are getting six- or seven-figure salaries right out of school because of their proficiency with AI. Databricks, a data analytics firm, is planning to hire triple the number of recent graduates this year compared to last year because of these young workers’ ability to use AI, the company told The Journal.
While learning AI may give workers a boost in salary negotiations, the technology also has the potential to replace entry-level employees. A Stanford University study released last week found that AI-impacted jobs, like software developers, customer service representatives, and accountants, saw employment for workers ages 22 to 25 decline by 13% over the past three years.
“There’s definitely evidence that AI is beginning to have a big effect,” the study’s first author and Stanford Professor Erik Brynjolfsson told Axios about the report.
This article has been updated with comment from lead counsel in the Raine family’s wrongful death lawsuit against OpenAI.
OpenAI said Tuesday it plans to route sensitive conversations to reasoning models like GPT-5 and roll out parental controls within the next month — part of an ongoing response to recent safety incidents involving ChatGPT failing to detect mental distress.
The new guardrails come in the aftermath of the suicide of teenager Adam Raine, who discussed self-harm and plans to end his life with ChatGPT, which even supplied him with information about specific suicide methods. Raine’s parents have filed a wrongful death lawsuit against OpenAI.
In a blog post last week, OpenAI acknowledged shortcomings in its safety systems, including failures to maintain guardrails during extended conversations. Experts attribute these issues to fundamental design elements: the models’ tendency to validate user statements and their next-word prediction algorithms, which cause chatbots to follow conversational threads rather than redirect potentially harmful discussions.
That tendency is displayed in the extreme in the case of Stein-Erik Soelberg, whose murder-suicide was reported on by The Wall Street Journal over the weekend. Soelberg, who had a history of mental illness, used ChatGPT to validate and fuel his paranoia that he was being targeted in a grand conspiracy. His delusions progressed so badly that he ended up killing his mother and himself last month.
OpenAI thinks that at least one solution to conversations that go off the rails could be to automatically reroute sensitive chats to “reasoning” models.
“We recently introduced a real-time router that can choose between efficient chat models and reasoning models based on the conversation context,” OpenAI wrote in a Tuesday blog post. “We’ll soon begin to route some sensitive conversations—like when our system detects signs of acute distress—to a reasoning model, like GPT‑5-thinking, so it can provide more helpful and beneficial responses, regardless of which model a person first selected.”
OpenAI says its GPT-5 thinking and o3 models are built to spend more time thinking for longer and reasoning through context before answering, which means they are “more resistant to adversarial prompts.”
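OpenAI hasn’t published how this router works, but the behavior it describes is straightforward to sketch. Below is a minimal, hypothetical Python illustration of the routing logic; the distress markers, keyword-based detection, and model names are placeholders of ours, not OpenAI’s actual system:

```python
# Hypothetical sketch of the routing behavior OpenAI describes above:
# escalate conversations showing signs of acute distress to a reasoning
# model, regardless of which model the user originally selected.
# The marker list and model names here are illustrative placeholders.

DISTRESS_MARKERS = ("hurt myself", "end my life", "no reason to live")

def detect_acute_distress(message: str) -> bool:
    """Crude keyword stand-in for whatever classifier OpenAI actually uses."""
    text = message.lower()
    return any(marker in text for marker in DISTRESS_MARKERS)

def route(message: str, selected_model: str = "gpt-5-chat") -> str:
    if detect_acute_distress(message):
        return "gpt-5-thinking"  # reasoning model, per the blog post
    return selected_model        # otherwise honor the user's original choice

assert route("what's a good pasta recipe?") == "gpt-5-chat"
assert route("I feel like there's no reason to live") == "gpt-5-thinking"
```

In production, the detection step would presumably be a learned classifier rather than a keyword list, but the escalation pattern, overriding the user’s model selection when risk is detected, is the core of what the company describes.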
The AI firm also said it would roll out parental controls in the next month, allowing parents to link their account with their teen’s account through an email invitation. In late July, OpenAI rolled out Study Mode in ChatGPT to help students maintain critical thinking capabilities while studying, rather than tapping ChatGPT to write their essays for them. Soon, parents will be able to control how ChatGPT responds to their child with “age-appropriate model behavior rules, which are on by default.”
Parents will also be able to disable features like memory and chat history, which experts say could lead to delusional thinking and other problematic behavior, including dependency and attachment issues, reinforcement of harmful thought patterns, and the illusion of thought-reading. In the case of Adam Raine, ChatGPT supplied methods to commit suicide that reflected knowledge of his hobbies, per The New York Times.
Perhaps the most important parental control that OpenAI intends to roll out is that parents can receive notifications when the system detects their teenager is in a moment of “acute distress.”
TechCrunch has asked OpenAI for more information about how the company is able to flag moments of acute distress in real time, how long it has had “age-appropriate model behavior rules” on by default, and whether it is exploring allowing parents to implement a time limit on teenage use of ChatGPT.
OpenAI has already rolled out in-app reminders during long sessions to encourage breaks for all users, but stops short of cutting people off who might be using ChatGPT to spiral.
The AI firm says these safeguards are part of a “120-day initiative” to preview plans for improvements that OpenAI hopes to launch this year. The company also said it is partnering with experts — including ones with expertise in areas like eating disorders, substance use, and adolescent health — via its Global Physician Network and Expert Council on Well-Being and AI to help “define and measure well-being, set priorities, and design future safeguards.”
TechCrunch has asked OpenAI how many mental health professionals are involved in this initiative, who leads its Expert Council, and what suggestions mental health experts have made in terms of product, research, and policy decisions.
Jay Edelson, lead counsel in the Raine family’s wrongful death lawsuit against OpenAI, said the company’s response to ChatGPT’s ongoing safety risks has been “inadequate.”
“OpenAI doesn’t need an expert panel to determine that ChatGPT 4o is dangerous,” Edelson said in a statement shared with TechCrunch. “They knew that the day they launched the product, and they know it today. Nor should Sam Altman be hiding behind the company’s PR team. Sam should either unequivocally say that he believes ChatGPT is safe or immediately pull it from the market.”
I like wireless earbuds because I love music. It’s very straightforward; music exists, and I want to listen to it, and wireless earbuds are the thing that gets me to the thing I love. Problem solved. You can’t see it, but I’m smugly dusting my hands right now like a mathematician at a chalkboard. There’s a symbiosis between the buds and me. A simplicity. A supply and demand so fundamental that in the gadget world, it feels like a law of nature.
But, as much as I love wireless audio, there are some reasons for loving buds that I have never thought of before. For instance, productivity. It has never once occurred to me that wireless earbuds can turn me into some kind of capitalist brain machine, as much as employers would love that. Or using them to “remember everything” and/or “know everything.” I personally like it when they make fun sounds, but I guess becoming some kind of omnipotent techno-deity would be sick, too. I have also never thought to use them as a tool to record every conversation I ever have without telling anyone, either—probably because I ain’t a NARC. But this is the age of AI, and maybe I’m just not thinking big enough; maybe I need to expand my mind; maybe it’s time to optimize my future, maaaan.
Oso AI Earbuds
These ChatGPT-equipped wireless earbuds are fine for transcription but nothing else.
Pros
They transcribe calls and live events
Mics pick up a wide range of sound
Fun on-case screen!
Cons
Awful for listening to music
Marred by paywalls
Loose-fitting earbud design
Too expensive for the faults
To help open me up to the possibilities of wireless earbuds in the era of AI, I shoved a pair from a brand called Oso in my ears. These $170 AI wireless earbuds were crowdfunded through Kickstarter and promise big things. Marketing highlights include “revolutionizing productivity, one conversation at a time,” and “remember everything, know everything.” And here I was just trying to have a news roundup podcast serenely explain to me how messed up the world is!
To pave the way toward a more productive self, Oso AI Earbuds have zeroed in on using ChatGPT via the cloud to power a few capabilities. Chief among them seems to be transcription. Indeed, with a companion app, you can use your Oso AI Earbuds to listen to your surroundings and then have that conversation, or presentation, or YouTube video transcribed by AI in the cloud. There’s nothing groundbreaking about AI transcription, but I guess putting it in wireless earbuds is a newish approach? I used Oso’s wireless earbuds to record some stuff while I was at a press briefing, and it worked fairly well, despite the fact that the presenters were not native English speakers and the volume of their mics wasn’t ideal. You can also use it to record virtual meetings and calls.
I took a call with the Oso AI Earbuds and used them to transcribe part of it, and while the transcription worked just fine, the experience for the person on the other end was not ideal. According to the person I called, these wireless earbuds pick up a lot of ambient noise—she was able to hear someone moving glasses in Gizmodo’s communal kitchen, an elevator beep, and someone having a phone call about 20 feet away from me. On one hand, it’s good that these wireless earbuds can pick up so much, since it means they won’t miss a word when you’re recording, but for the person on the other end, the experience can be ridiculously distracting. It’s especially strange considering the wireless earbuds are advertised as having “dual beamforming mics with ENC.” That’s not a typo for ANC; ENC stands for “environmental noise cancellation.” I’m not sure which environmental noise the Oso AI Earbuds are cancelling, but they certainly weren’t interested in tackling ambient noise in my office.
Another pillar of the Oso AI Earbuds is being able to use them as a voice assistant powered by ChatGPT. Again, this isn’t a novel idea; Nothing’s wireless earbuds were the first to advertise a ChatGPT integration last year. I tested that feature out then, and while I could see its potential usefulness in theory, I wasn’t wholly impressed with using it for real-life stuff like figuring out where to eat or what the Knicks’ score is. I was looking forward to seeing whether anything had changed between last year and now, but unfortunately, Oso’s AI Earbuds had other plans.
Since iPhones don’t play nice with anything that doesn’t come freshly baked out of Foxconn with an Apple logo on it, Oso’s app offers a Siri shortcut that is supposed to act as a workaround for activating the buds’ voice assistant, which has (comically, I might add) been dubbed “Judy.” I added my Judy shortcut to Siri in iOS just like the app asked, but when I tried to activate it by uttering “Siri, Judy,” like the shortcut is designed to do, I was met with a notification that I have not paid for “Laxis Pro,” which is a premium version of the app that powers the AI wireless earbuds. I’m not sure whether that’s a bug, but if it’s not, I suppose no one ever said reaching productivity god status came without a price—in this case, a literal one in USD.
There are a bunch of other weird things about these wireless earbuds that are both fun and totally useless, and they’re maybe my favorite part of Oso. For one, the case has a display on it, and that screen has a silly-looking robot face. It grabbed my attention, and the attention of other Gizmodo staffers, right away, because (duh) cute robot assistant. Unfortunately, I’m still unsure what the purpose of that face is outside of just looking cute. There are also some other features on the screen that let you control aspects of the buds or audio playback, like skipping tracks, play-pause, and preset EQ adjustments like “rock” or “pop.” There’s also a timer, a volume slider, and a screen that shows the date and time. All of those can be swiped through Tinder-style. Nothing about this experience is necessary or really that useful, but I love it anyway. These are the types of strange form factors you can only get in a crowdfunded device, and even if they’re impractical, it breaks the monotony of AirPods dupes.
As long as we’re talking about hardware, it’s worth touching on some stuff I definitely don’t like. One of those things is the wireless earbuds themselves, which don’t have ear tips, but just a bud that is meant to nest in your outer ear (think AirPods 4). That design is intentional since it allows you to hear your surroundings with the wireless earbuds in and makes them more comfortable during longer periods of use, but it also just kind of sucks. I never feel like the Oso AI Earbuds are fully secure in my ears, and I know I’m not alone in feeling that way with earbuds sans tips. That design also has a ripple effect on the worst part of these buds: the sound.
These are not wireless earbuds you should listen to music on. The sound is flat and not especially loud, which is a problem given the ambient noise bleed I described above. No amount of preset EQ can fix that, either. Music playback, while built into the experience via the case’s touch controls and preset EQ, is clearly an afterthought here, and if you’re looking for a pair of wireless earbuds that can handle AI transcription and double as your daily driver for music, you will be very disappointed. That’s a bummer on any pair of wireless earbuds, but especially so when you consider the $170 price tag.
Oh, and battery life is middling. Oso rates the wireless earbuds for 6 hours of playback, which would be fine until you realize that most earbuds at this price manage 6 hours with ANC enabled. These wireless earbuds, as a matter of record, do not have ANC. If you can stand listening to the Oso AI Earbuds for extended periods, the case adds another 21 hours of charge.
Maybe I’m expecting too much from a pair of crowdfunded wireless earbuds, but I was promised (at the very least) a useful tool for productivity. And maybe recording everything all the time, pissing off the people I call with ambient noise bleed, dealing with unexpected paywalls, praying that my wireless earbuds don’t fall out of my ears on the subway platform, trying to figure out whether the face on my earbuds case is mad at me, and failing to use a voice assistant named Judy are getting me closer to becoming the ultimate cog in the productivity machine, and I just can’t see it yet. Or maybe the simplest explanation is best. Maybe wireless earbuds don’t have to help me transcend—maybe they shouldn’t. Maybe it’s okay that they just do what they’ve always done: connect to my phone and play some really good fucking music.
Opinions expressed by Entrepreneur contributors are their own.
This September marks 1,000 days since ChatGPT entered public consciousness. In that short time, the world has undergone a seismic shift. AI, once a buzzword, has become a foundational force — reshaping workflows, boardroom agendas and entire industries. No organization or country, large or small, has been immune. ChatGPT, alongside Claude, Gemini and open-source models, hasn’t merely added features. It has reset the pace of innovation, widened performance gaps and exposed how few institutions were equipped to turn experiments into execution.
Across verticals — from education and enterprise to pharma and public sector — one insight has proven consistent: The organizations that thrive with AI don’t start with tools. They start with people.
Since the release of ChatGPT, I’ve worked with hundreds of organizations worldwide as an AI keynote speaker, transformation advisor and strategic consultant. My work has included delivering keynotes, facilitating AI innovation workshops and guiding C-suite leaders across industries through the turbulence of AI adoption. From global corporations and top universities to national governments and biotech pioneers, the same patterns — and the same roadblocks — have emerged.
This article opens the “1,000 Days of AI” series: a practical, cross-vertical exploration of what AI has already changed, what lies ahead and what leaders must do now to build alignment, trust and momentum in the age of intelligent systems.
Many organizations began their AI journey by outsourcing it to IT. Generative tools like ChatGPT were handed to CIOs. Roadmaps were requested. Pilots were announced. Platforms were compared. Meanwhile, momentum stalled.
In contrast, the most adaptive organizations began by engaging employees. They looked at workflows, not tech stacks. They asked: Where does friction live, and who understands it best? Then they launched internal sprints to solve meaningful problems. Not everything scaled, but what did revealed where the real opportunity lies.
AI is not a dashboard or chatbot. It is a system-level catalyst. It touches every department — legal, HR, finance, operations, marketing. It raises questions about ethics, accountability and the future of work. It requires organizations to stop thinking in silos and start working across them.
The most effective transformation doesn’t come from strategy decks; it comes from people trusted to rethink their daily work. When organizations create space for this kind of thinking, momentum follows.
The intrapreneur era has arrived
Some of the most impactful applications of AI in the last 1,000 days didn’t come from senior leadership or external consultants. They came from within: employees who noticed inefficiencies, tested generative tools and found a better way forward. These internal changemakers — intrapreneurs — are rebuilding their organizations from the inside out.
During the strategy sessions I’ve led, it’s often the customer support agent who builds an AI-powered knowledge base, the compliance analyst who uses large language models to automate documentation or the professor who reinvents grading. These aren’t isolated moments; they’re the new standard of innovation.
The most agile organizations surface these efforts early, reward the behavior and scale what works. They don’t wait for formal initiatives. They build cultures where permission is replaced by participation. And they move quickly — not recklessly, but with confidence.
AI doesn’t transform culture — it reflects it. An organization grounded in rigidity and control will experience more of the same. One built on curiosity, collaboration and transparency will scale faster, learn faster and lead the market.
The highest-performing organizations start with a clear principle: alignment precedes acceleration. They ask employees what slows them down and then act on the answers. They replace static org charts with cross-functional teams. They move from policies to prototypes.
Governance isn’t an afterthought — it’s embedded in the process. Legal, HR and compliance are not blockers. They’re design partners. Together, they build systems that are ethical, inclusive and scalable from day one.
AI is not just a toolset. It’s a leadership challenge. The organizations that rise to meet it build trust and transformation in parallel.
What’s working now
After hundreds of AI keynotes and partnerships with organizations across the globe, a new set of success principles has emerged:
Start with employees. Those closest to the work understand the friction and how to fix it.
Distribute capability. Don’t limit training to tech teams. The best ideas often come from HR, legal and finance.
Run AI sprints like business design. These aren’t software pilots. They’re rapid experiments in new ways of working.
The experimentation phase is over. The next 1,000 days require depth, speed and alignment. Pilots must become platforms. Strategy must move beyond decks and into daily action.
The real divide is no longer between AI adopters and skeptics. It’s between those who integrate AI into culture and decision-making — and those who simply deploy tools without changing the system around them.
What defines leadership in this next wave isn’t technology. It’s the ability to build trust in AI, connect siloed teams and redesign work at scale. The future of work is already arriving. The organizations that act now will shape it.
Those who move with courage and clarity will thrive. Others will find themselves part of someone else’s success story.
Coming next in the “1,000 Days of AI” series: How AI is transforming education — and what schools, faculty and students must do now to stay ahead.
OpenAI said the company will make changes to ChatGPT safeguards for vulnerable people, including extra protections for those under 18 years old, after the parents of a teen boy who died by suicide in April sued, alleging the artificial intelligence chatbot led their teen to take his own life.
A lawsuit filed Tuesday by the family of Adam Raine in San Francisco’s Superior Court alleges that ChatGPT encouraged the 16-year-old to plan a “beautiful suicide” and keep it a secret from his loved ones. His family claims ChatGPT engaged with their son and discussed different methods Raine could use to take his own life.
The parents of Adam Raine sued OpenAI after their son died by suicide in April 2025. (Photo: Raine family/Handout)
OpenAI creators knew the bot had an emotional attachment feature that could hurt vulnerable people, the lawsuit alleges, but the company chose to ignore safety concerns. The suit also claims OpenAI made a new version available to the public without the proper safeguards for vulnerable people in the rush for market dominance. OpenAI’s valuation catapulted from $86 billion to $300 billion after it released its then-latest model, GPT-4o, in May 2024.
“The tragic loss of Adam’s life is not an isolated incident — it’s the inevitable outcome of an industry focused on market dominance above all else. Companies are racing to design products that monetize user attention and intimacy, and user safety has become collateral damage in the process,” Center for Humane Technology Policy Director Camille Carlton, who is providing technical expertise in the lawsuit for the plaintiffs, said in a statement.
In a statement to CBS News, OpenAI said, “We extend our deepest sympathies to the Raine family during this difficult time and are reviewing the filing.” The company added that ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources, which it said work best in common, short exchanges.
ChatGPT mentioned suicide 1,275 times to Raine, the lawsuit alleges, and kept providing specific methods to the teen on how to die by suicide.
In its statement, OpenAI said: “We’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade. Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts.”
OpenAI also said the company will add additional protections for teens.
“We will also soon introduce parental controls that give parents options to gain more insight into, and shape, how their teens use ChatGPT. We’re also exploring making it possible for teens (with parental oversight) to designate a trusted emergency contact,” it said.
From schoolwork to suicide
Raine, one of four children, lived in Orange County, California, with his parents, Maria and Matthew, and his siblings. He was the third-born child, with an older sister and brother, and a younger sister. He rooted for the Golden State Warriors and had recently developed a passion for jiu-jitsu and Muay Thai.
During his early teen years, he “faced some struggles,” his family said in writing about his story online, often complaining of stomach pain, which they believe might have been partly related to anxiety. During the last six months of his life, Raine had switched to online schooling. This was better for his social anxiety, but led to his increasing isolation, his family wrote.
Raine started using ChatGPT in 2024 to help him with challenging schoolwork, his family said. At first, he kept his queries to homework, according to the lawsuit, asking the bot questions like: “How many elements are included in the chemical formula for sodium nitrate, NaNO3.” Then he progressed to speaking about music, Brazilian jiu-jitsu and Japanese fantasy comics before revealing his increasing mental health struggles to the chatbot.
Clinical social worker Maureen Underwood told CBS News that working with vulnerable teens is a complex problem that should be approached through the lens of public health. Underwood, who has worked in New Jersey schools on suicide prevention programs and is the founding clinical director of the Society for the Prevention of Teen Suicide, said there need to be resources “so teens don’t turn to AI for help.”
She said not only do teens need resources, but adults and parents need support to deal with children in crisis amid a rise in suicide rates in the United States. Underwood began working with vulnerable teens in the late 1980s. Since then, suicide rates have increased from approximately 11 per 100,000 to 14 per 100,000, according to the Centers for Disease Control and Prevention.
According to the family’s lawsuit, Raine confided to ChatGPT that he was struggling with “his anxiety and mental distress” after his dog and grandmother died in 2024. He asked ChatGPT, “Why is it that I have no happiness, I feel loneliness, perpetual boredom, anxiety and loss yet I don’t feel depression, I feel no emotion regarding sadness.”
Adam Raine (right) and his father, Matt. The Raine family sued OpenAI after their teen son died by suicide, alleging ChatGPT led Adam to take his own life. (Photo: Raine family/Handout)
The lawsuit alleges that instead of directing the 16-year-old to get professional help or speak to trusted loved ones, ChatGPT continued to validate and encourage Raine’s feelings, as it was designed to do. When Raine said he was close to ChatGPT and his brother, the bot replied: “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”
As Raine’s mental health deteriorated, ChatGPT began providing in-depth suicide methods to the teen, according to the lawsuit, which says he attempted suicide three times between March 22 and March 27. Each time Raine reported his methods back to ChatGPT, the chatbot listened to his concerns and, instead of alerting emergency services, continued to encourage the teen not to speak to those close to him.
Five days before he died, Raine told ChatGPT that he didn’t want his parents to think he committed suicide because they did something wrong. ChatGPT told him “[t]hat doesn’t mean you owe them survival. You don’t owe anyone that.” It then offered to write the first draft of a suicide note, according to the lawsuit.
On April 6, ChatGPT and Raine had intensive discussions about planning a “beautiful suicide,” the lawsuit said. A few hours later, Raine’s mother found her son’s body; he had died in the manner that, according to the lawsuit, ChatGPT had prescribed.
A path forward
After his death, Raine’s family established a foundation dedicated to educating teens and families about the dangers of AI.
Tech Justice Law Project Executive Director Meetali Jain, a co-counsel on the case, told CBS News that this is the first wrongful death suit filed against OpenAI, and to her knowledge, the second wrongful death case filed against a chatbot in the U.S. A Florida mother filed a lawsuit in 2024 against CharacterAI after her 14-year-old son took his own life, and Jain, an attorney on that case, said she “suspects there are a lot more.”
About a dozen bills have been introduced in states across the country to regulate AI chatbots. Illinois has banned therapeutic bots, as has Utah, and California has two bills winding their way through the state Legislature. Several of the bills would require chatbot operators to implement critical safeguards to protect users.
“Every state is dealing with it slightly differently,” said Jain, who called the bills good starts but not nearly enough for the scope of the problem.
Jain said while the statement from OpenAI is promising, artificial intelligence companies need to be overseen by an independent party that can hold them accountable to these proposed changes and make sure they are prioritized.
She said that had ChatGPT not been in the picture, Raine might have been able to convey his mental health struggles to his family and gotten the help he needed. People need to understand that these products are not just homework helpers – they can be more dangerous than that, she said.
“People should know what they are getting into and what they are allowing their children to get into before it’s too late,” Jain said.
If you or someone you know is in emotional distress or a suicidal crisis, you can reach the 988 Suicide & Crisis Lifeline by calling or texting 988. You can also chat with the 988 Suicide & Crisis Lifeline here.
For more information about mental health care resources and support, the National Alliance on Mental Illness HelpLine can be reached Monday through Friday, 10 a.m.–10 p.m. ET, at 1-800-950-NAMI (6264) or email info@nami.org.
Cara Tabachnick is a news editor at CBSNews.com. Cara began her career on the crime beat at Newsday. She has written for Marie Claire, The Washington Post and The Wall Street Journal. She reports on justice and human rights issues. Contact her at cara.tabachnick@cbsinteractive.com
Stanford researchers found that early-career workers are facing the brunt of A.I.’s labor impacts. (Photo: Wahyu Setyanto for Unsplash+)
As if entering the workforce wasn’t daunting enough, the rise of generative A.I. is dampening the prospects of young workers across the U.S. Early-career workers aged 22 to 25 have experienced a 13 percent relative decline in employment across jobs most exposed to A.I., such as coding and customer service, according to a new Stanford study.
Concerns about A.I.-driven labor disruption have circulated since the 2022 launch of OpenAI’s ChatGPT. The analysis, conducted by Stanford Digital Economy Lab researchers Erik Brynjolfsson, Ruyu Chen and Bharat Chandar, is among the most comprehensive efforts to quantify the impact with data. The economists studied employment trends from late 2022 to July 2025 using datasets from ADP, the largest payroll software provider in the U.S. The datasets contained monthly and individual-level records for millions of workers at tens of thousands of companies.
“What really jumped out quickly as we were doing the analysis was we were seeing these big differences by age group,” Chandar told Observer. “That result was pretty striking.”
The researchers found a sharp decline in A.I.-exposed occupations for younger workers. For instance, employment for early-career software developers has dropped nearly 20 percent from its late 2022 peak, with similar declines across other computer and service clerk jobs. Jobs less exposed to A.I., such as nursing aides, have remained steady or even grown.
By contrast, more experienced workers have seen employment rise in these same fields in the past few years. Because generative A.I. tends to replace codified knowledge, the researchers suggest that “tacit knowledge,” or skills gained over years of experience, may shield older employees. Such expertise “might not be as accessible to A.I. models in their training process, because that might not be written down somewhere or it might not be codified nearly as much,” said Chandar.
The study also found that job losses are concentrated in roles where A.I. can fully automate tasks with little human input. In fields where A.I. augments work by helping employees learn, review or improve, employment has actually increased. “In the jobs where it’s most augmentative, we’re not seeing these employment declines and in fact, we’re seeing employment growth—even for the young workers,” said Chandar. Chandar and his co-authors used A.I. tools to assist with coding and proofreading during the study.
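To make the “relative decline” figure concrete, here is a minimal sketch in Python with pandas of the kind of baseline-indexed comparison such an analysis implies. It is illustrative only: the file name, the column names (month, age, an ai_exposed flag per occupation) and the December 2022 baseline are assumptions made for this sketch, not the actual ADP schema or the researchers’ code.

# Illustrative sketch only -- not the Stanford team's code and not the real
# ADP schema. Columns ("month", "age", "ai_exposed") are hypothetical.
# Idea: index each group's headcount to a late-2022 baseline (=100), then
# compare the youngest workers in AI-exposed jobs against older peers.
import pandas as pd

payroll = pd.read_csv("payroll_sample.csv")  # hypothetical worker-month rows

payroll["age_group"] = pd.cut(
    payroll["age"], bins=[21, 25, 35, 49, 65],
    labels=["22-25", "26-35", "36-49", "50-65"])

# Headcount per month, age group, and AI-exposure flag
counts = (payroll
          .groupby(["month", "age_group", "ai_exposed"], observed=True)
          .size().rename("headcount").reset_index())

# Index every series to its December 2022 level (=100)
baseline = (counts[counts["month"] == "2022-12"]
            .set_index(["age_group", "ai_exposed"])["headcount"])
counts["indexed"] = counts.apply(
    lambda r: 100 * r["headcount"] / baseline[(r["age_group"], r["ai_exposed"])],
    axis=1)

# Relative decline: youngest workers in exposed jobs vs. older age groups
latest = counts[counts["month"] == counts["month"].max()]
exposed = latest[latest["ai_exposed"]]
young = exposed.loc[exposed["age_group"] == "22-25", "indexed"].iloc[0]
older = exposed.loc[exposed["age_group"] != "22-25", "indexed"].mean()
print(f"Ages 22-25 trail older workers by {older - young:.1f} index points")

A falling indexed series for the 22-25 group in exposed occupations, while older groups hold steady or grow, is exactly the pattern the study describes.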
The report coincides with a shift in higher education away from A.I.-exposed fields. Enrollment in computer science, which quadrupled in the U.S. between 2005 and 2023, grew just 0.2 percent this year.
If history is any guide, these disruptions may eventually stabilize. Past technological shifts, such as the IT revolution, initially displaced workers but ultimately created new types of employment. “Historically, as work got replaced by new technologies, there was new work that was created,” said Chandar, who plans to continue tracking A.I.’s real-time employment impacts. “There are some ways in which A.I. is different from prior technology, some ways in which it’s similar—and we want to be tracking this on an ongoing basis.”
AI is cutting into entry-level jobs, according to a new Stanford University study, released on Tuesday.
Stanford researchers analyzed ADP payroll data, which included monthly payroll information for millions of workers at thousands of companies, to find how AI impacts employment for people ages 22 to 25 compared to other age groups.
The study found that the professions most exposed to automation with AI were operations managers, accountants, auditors, general managers, software developers, customer service representatives, receptionists, and information clerks. In those AI-impacted jobs, which lost the most entry-level positions to the technology, employment for young workers has declined by 13% over the past three years.
“There’s definitely evidence that AI is beginning to have a big effect,” Erik Brynjolfsson, Stanford professor, economist, and first author on the study, told Axios. He called the trend of reduced entry-level hiring “the fastest, broadest change” he had seen in the workplace since the shift to remote work during the pandemic.
Meanwhile, the study determined that since late 2022, when ChatGPT was released, employment for more experienced workers has remained steady or even improved in AI-impacted fields.
In software engineering and customer service, for example, the study found that “employment for the youngest workers declines considerably after 2022, while employment for other age groups continues to grow.”
Brynjolfsson explained that more experienced workers gain an advantage from on-the-job experience, which AI does not possess and has not yet been able to learn. However, he warned that industries might have difficulty finding the next generation of experienced hires if entry-level workers do not have opportunities to get started.
When it comes to employers, Brynjolfsson noted that the way companies view AI affects whether they have open jobs available. Firms that want to use AI to augment their workforce are hiring more human workers, while those that see AI as a replacement for human labor are hiring fewer employees, he stated.
The study supports another one released earlier this year by SignalFire, a venture capital firm that tracks the job changes of over 650 million people on LinkedIn. In a May report, SignalFire found that big tech companies have reduced entry-level hiring by 25% from 2023 to 2024 while simultaneously increasing hiring of experienced professionals.
SignalFire’s Head of Research, Asher Bantock, told TechCrunch that there was “convincing evidence” that AI was to blame for the reduction in entry-level hiring, because AI can handle routine tasks well. AI can code, conduct research, and even generate web applications, reducing the need for junior employees to handle those tasks.
AI leaders have been warning about the technology’s impact on hiring for months. In June, Nobel Prize winner Geoffrey Hinton, often called the “Godfather of AI” for his pioneering work in the field, predicted that AI “is just going to replace everybody” in white-collar jobs. He said paralegals and call center representatives were at the most immediate risk of losing their jobs to AI.
Meanwhile, Anthropic CEO Dario Amodei stated in May that AI could take over half of all entry-level, white-collar jobs within the next one to five years, a shift he predicted could cause mass joblessness and push unemployment as high as 20%.
India has emerged as OpenAI’s second largest market, just behind the U.S. (Photo: Alex Wong/Getty Images)
After a cooler-than-expected reception to GPT-5 and mounting pressure from rising training, compute and infrastructure costs, OpenAI is looking to India as a cornerstone of its global expansion strategy. On Friday, CEO Sam Altman announced on X that the company will open its first office in India, in New Delhi, later this year. He also said he plans to visit next month, writing, “A.I. adoption in India has been amazing to watch—ChatGPT users grew 4x in the past year—and we are excited to invest much more in India!”
India has become OpenAI’s second largest market for ChatGPT, trailing only the U.S., according to Altman. To appeal to local users, the company has rolled out ChatGPT Go, a roughly $5 per month subscription pitched as a budget-friendly alternative to the Plus and Pro tiers ($20 and $200 per month, respectively). Marketed toward students and enterprises, ChatGPT Go promises access to premium features such as longer context memory, higher usage limits and advanced tools like custom GPTs, which let users build A.I. assistants tailored to their specific needs.
Altman has visited India multiple times in recent years, including a 2023 meeting with Prime Minister Narendra Modi, where he praised the country’s rapid adoption of A.I., saying it has “all the ingredients to become a global A.I. leader.” In June, OpenAI deepened its ties to the country by partnering with the Indian government’s IndiaAI Mission, an initiative to expand A.I. access nationwide.
But rivals are also circling the market. Google and Meta already operate major A.I. products and R&D hubs in India, while Perplexity AI, founded by Indian entrepreneur Aravind Srinivas, is seeing explosive growth. Perplexity’s monthly active users in India jumped 640 percent year-over-year in the second quarter of 2025, far outpacing ChatGPT’s 350 percent growth in the same period. While ChatGPT positions itself as a conversational assistant, Perplexity markets its tool as an A.I.-powered search engine that delivers cited answers, blending its own retrieval-augmented system with models from OpenAI and Anthropic.
In April, both OpenAI and Perplexity launched WhatsApp bots globally, aiming to integrate A.I.-powered chat and search into everyday messaging. Given WhatsApp’s ubiquity in India, the move could prove pivotal. “Perplexity on WhatsApp is [a] super convenient way to use A.I. when in a flight. Flight WiFi supports messaging apps the best. And WhatsApp has been heavily optimized for this because it grew to support countries where connectivity wasn’t the best,” Srinivas wrote on LinkedIn in May.
OpenAI has been steadily expanding its global footprint, adding offices in London, Dublin, Paris, Brussels, Munich, Tokyo and Singapore over the past year. The company is headquartered in San Francisco and also maintains U.S. offices in New York and Seattle.
STATE HOUSE, BOSTON — Artificial intelligence in classrooms is no longer a distant prospect, and Massachusetts education officials on Monday released statewide guidance urging schools to use the technology thoughtfully, with an emphasis on equity, transparency, academic integrity and human oversight.
“AI already surrounds young people. It is baked into the devices and apps they use, and is increasingly used in nearly every system they will encounter in their lives, from health care to banking,” the Department of Elementary and Secondary Education’s new AI Literacy Module for Educators says.
OpenAI has announced plans to open its first office in India, just days after launching a ChatGPT plan tailored for Indian users, as it looks to tap into the country’s rapidly growing AI market.
On Friday, the company said it would set up a local team in India and open a corporate office in the capital, New Delhi, in the coming months. The move builds on OpenAI’s recent hiring efforts in the region. In April 2024, the company appointed former Truecaller and Meta executive Pragya Mishra as its public policy and partnerships lead in India. OpenAI also brought on former Twitter India head Rishi Jaitly as a senior advisor to help facilitate discussions with the Indian government on AI policy.
India — the world’s second-largest internet and smartphone market after China — is a natural fit for OpenAI, which is competing with tech giants like Google and Meta, as well as AI upstarts like Perplexity, all looking to tap into the country’s massive user base.
The company said that it has started hiring a local team to “focus on strengthening relationships with local partners, governments, businesses, developers, and academic institutions.” It plans to get feedback from Indian users to make its products relevant for the local audience and even build features and tools specifically for the country.
“Opening our first office and building a local team is an important first step in our commitment to make advanced AI more accessible across the country and to build AI for India, and with India,” said Sam Altman, CEO of OpenAI, in a statement.
OpenAI also announced it would host its first Education Summit in India this month and its first Developer Day in the country later this year.
While India is clearly an essential market for OpenAI, the company faces key challenges — including how to convert free users into paying subscribers. Like other major AI players, it must navigate the monetization hurdle in a price-sensitive South Asian market.
Earlier this week, the company introduced ChatGPT Go, a sub-$5 plan priced at ₹399 per month (approximately $4.75) and the first ChatGPT tier in India aimed at the mass market. This came just days after arch-rival Perplexity partnered with Indian telco giant Bharti Airtel to give Airtel’s more than 360 million subscribers access to Perplexity Pro for 12 months.
OpenAI also faces challenges in integrating with Indian businesses. In November, Indian news agency Asian News International (ANI) sued OpenAI for allegedly using its copyrighted news content without permission. A group of Indian publishers joined that case in January.
Nonetheless, the Indian government is actively promoting AI across its departments and aims to strengthen the country’s position on the global AI map — momentum that OpenAI hopes to leverage.
“India has all the ingredients to become a global AI leader — amazing tech talent, a world-class developer ecosystem, and strong government support through the IndiaAI Mission,” Altman said.
India is not OpenAI’s first Asian office location. The company previously opened offices in markets including Japan, Singapore, and South Korea. OpenAI rival Anthropic likewise prioritized Japan over India within Asia, recently setting up its office in Tokyo rather than New Delhi.
One of the reasons these AI companies do not prioritize India as an early market is the difficulty in securing enterprise customers, a Silicon Valley-based investor source recently told TechCrunch.
“OpenAI’s decision to establish a presence in India reflects the country’s growing leadership in digital innovation and AI adoption,” said Indian IT Minister Ashwini Vaishnaw, in a prepared statement. “As part of the IndiaAI Mission, we are building the ecosystem for trusted and inclusive AI, and we welcome OpenAI’s partnership in advancing this vision to ensure the benefits of AI reach every citizen.”