ReportWire

Tag: chatgpt

  • India’s AI boom pushes firms to trade near-term revenue for users | TechCrunch

    [ad_1]

    Tech giants’ efforts to ramp up AI adoption in India may be about to hit a turning point, as companies end free promotions with hopes to convert the world’s fourth-largest economy into a windfall of paid subscribers.

    India became the world’s largest market for generative AI app downloads in 2025, according to market intelligence firm Sensor Tower, widening its lead over the U.S. as installs jumped 207% year-over-year.

    Companies including OpenAI, Google, and Perplexity rolled out extended free premium offers to accelerate user growth in the price sensitive market. Leading AI firms have also backed India in its push to become a global artificial intelligence hub. A major AI summit in New Delhi last week was attended by leaders including OpenAI’s Sam Altman, Anthropic’s Dario Amodei, and Alphabet CEO Sundar Pichai — a sign of the country’s growing weight in the global AI race.

    Now, some of those early promotional pushes are winding down. Perplexity ended its bundled Pro offer with Indian telco Airtel in January, while OpenAI’s free ChatGPT Go access in India is no longer available, potentially setting the stage for a clearer test of how many newly acquired users convert to paying subscribers.

    Despite strong download growth, India still generates a disproportionately small share of AI app revenue, accounting for about 1% of in-app purchases even as it drives roughly 20% of global GenAI app downloads, according to the Sensor Tower data shared with TechCrunch, highlighting the monetization challenge in one of the industry’s fastest-growing markets.

    GenAI app adoption in India accelerated sharply through 2025, with downloads peaking in September and October at year-over-year growth rates of about 320% and 260%, respectively, according to the data. Yet the surge in usage did not fully translate into revenue gains. In November and December 2025, AI app in-app purchase revenue in India fell 22% and 18% month over month, respectively. ChatGPT’s revenue dropped even more sharply — down 33% and 32% over the same period following the November launch of free sub-$5 ChatGPT Go access — reflecting the near-term impact of aggressive promotional pushes.

    Image Credits:Sensor Tower

    ChatGPT still commands more than 60% of GenAI in-app revenue in India, meaning shifts in its pricing strategy can significantly influence overall market performance.

    Techcrunch event

    Boston, MA
    |
    June 9, 2026

    Alongside promotional pushes, Sensor Tower attributed the surge in GenAI app adoption in India last year to a mix of new product launches, including the debut of platforms such as DeepSeek, Grok, and Meta AI, as well as upgrades to major chatbots like ChatGPT, Gemini, Claude, and Perplexity. Viral interest in AI-generated content also helped fuel adoption, with content creation and editing tools accounting for seven of the 20 most downloaded GenAI apps in India in 2025.

    The user surge has been equally pronounced. India accounted for about 19% of the global user base of leading AI assistant apps in 2025, ahead of the U.S. at 10%, Sensor Tower said. ChatGPT continues to dominate the Indian market by monthly active users, though rivals including Google’s Gemini and Perplexity have also seen rapid growth following promotional offers. ChatGPT was the most downloaded GenAI app in India and globally in 2025, according to earlier Sensor Tower data. Earlier this month, OpenAI’s CEO said that the chatbot now has more than 100 million weekly active users in India.

    The promotional push in India reflects a broader strategy by AI firms to reduce pricing friction in a highly value-conscious market, betting that early user adoption and engagement will translate into stronger long-term retention once free access periods expire, said Sneha Pandey, insights analyst at Sensor Tower.

    India’s appeal lies in its massive digital base. The country has more than a billion internet users and around 700 million smartphone owners, making it one of the largest potential markets for AI services globally and a critical battleground for user growth.

    Nonetheless, user engagement in India still trails more mature markets. In 2025, users of leading AI chatbot apps in the U.S. spent about 21% more time per week on the apps than their counterparts in India and logged 17% more sessions on average, per Sensor Tower.

    “AI in-app revenues will likely see meaningful but gradual improvement as users become more deeply integrated into these platforms, making sustained engagement paramount,” Pandey told TechCrunch.

    She added that pricing pressure in India is likely to remain elevated given the country’s young and value-conscious user base, making lower-cost tiers, telecom bundles, and micro-transaction models important for long-term retention.

    ChatGPT remained the clear market leader in India entering 2026, with 180 million monthly active users in January, per Sensor Tower, followed by Google’s Gemini with 118 million, Perplexity with 19 million, and Meta AI with 12 million. The figures underline both the scale of India’s AI opportunity and the growing challenge for firms to convert rapid user adoption into sustained revenue.

    Google, OpenAI, and Perplexity did not respond to requests for comments.

    [ad_2]

    Jagmeet Singh

    Source link

  • Teens are using AI frequently in their daily lives, and many parents aren’t aware, survey finds

    [ad_1]

    Parents are often caught off guard by what their teens are doing in daily life — and when it comes to AI, the “perception gap” might be larger than they thought, according to a Pew Research Center survey released Tuesday.

    The survey found a significant gap exists between parents’ perceptions and their teens’ actual use of AI chatbots. About 64% of U.S. teens reported using AI chatbots, while 51% of parents said their teens use them. 

    “Technology is not just a teen issue or a parent issue — it’s a family issue,” said Pew senior researcher Colleen McClain. She said researchers surveyed both teens and parents and heard different perspectives on managing AI usage. 

    Just over half (54%) of the teens surveyed said they’ve used AI chatbots for help with schoolwork, while about 1 in 10 said they’ve gotten emotional support from an AI chatbot.

    Teens, often at the forefront as users of new technology, told researchers they see AI as a tool in their daily lives, and they were more positive than negative in their views of about how AI will impact them personally.


    The Free Press: Are We at an AI Precipice?


    Parents have a “lot to juggle,” McClain said, and many are concerned about their children’s use of AI chatbots — especially after several high-profile cases in which teens died by suicide after prolonged interactions with the new technology. 

    “It’s complicated, it’s nuanced, it’s not a one-size-fits-all,” McClain said. 

    She said the survey — the most in-depth yet on teens and AI — found many parents don’t speak to their teens about their AI usage; just 4 in 10 parents said they do. Many don’t make managing screen time their first priority amid other life demands, and some parents said they feel judged for doing so. 

    Dr. Amber W. Childs, an associate professor of psychiatry at the Yale School of Medicine, told CBS News the question shouldn’t be if teens are using AI but how they are using the technology. 

    She said most teens are using technology for mundane daily tasks but parents need to know if “they’re using it in the absence of other sources of connection or coping skills and support.” Around 12% said they’ve gotten emotional support through chatbots, and Childs said teens using the tech for sole emotional support is concerning.

    Psychologist Joshua Goodman, an associate professor at Southern Oregon University, said teens who don’t feel comfortable talking to parents or others about their sexuality or orientation might feel more comfortable speaking to AI about their sexual health. These teens are “not reaching out for support” from adults in their lives, but it’s not necessarily a bad thing, Goodman said. 

    He said parents need to look for warning signs around teens constantly using AI and the technology replacing their critical thinking, or if they are showing signs of depression.

    “You want to get curious,” Childs said, “but you also want to be communicating to connect.” She cautioned parents not to just pass down information and warnings to their teens, but to use the conversation to understand how AI is being used in their lives. Parents can set up boundaries and expectations around the usage of the technology that align with family expectations, she said. 

    She said most teens are probably using AI to improve their life skills, like learning new languages or doing schoolwork.

    About a quarter of teens surveyed said chatbots have been extremely or very helpful for completing their schoolwork, while another 25% say they’ve been somewhat helpful. Most said they use the technology for research or help with math problems. 

    About 1 in 10 teens said they do all or most of their schoolwork with chatbots’ help. 

    More than half of teens say they’ve used chatbots to search for information and almost half say they’ve done so for fun or entertainment. 

    Some, however, are wary about the way the technology will affect their lives. One teenage boy told Pew, “It’s already being used to spread propaganda, there’s no end to what it can do, it’s hard to tell what’s real or AI online anymore.”

    Pew surveyed 1,458 U.S. teens and their parents from Sept. 25 to Oct. 9, 2025.

    [ad_2]

    Source link

  • Sam Altman Defends A.I. Energy Use With Human Comparison, Sparking Debate

    [ad_1]

    Sam Altman challenged critics of A.I.’s water and electricity consumption. Photo by John MacDougall/AFP via Getty Images

    Sam Altman is pushing back on mounting criticism over the environmental toll of A.I. The OpenAI chief has dismissed claims about A.I.’s water consumption as “fake” and drawn comparisons between the electricity required to power A.I. systems and the energy it takes to develop human intelligence.

    Figures suggesting that tools like ChatGPT consume multiple gallons of water per query are “totally insane” and have “no connection to reality,” Altman said in a Feb. 20 interview with The Indian Express on the sidelines of the AI Impact Summit in New Delhi. Last year, Altman claimed that ChatGPT uses 0.000085 gallons of water per query—roughly one-fifteenth of a teaspoon—though he did not explain how he calculated that figure.

    A.I.’s water footprint largely stems from the need for evaporative cooling systems used to keep data center hardware from overheating. But Altman argued that companies like OpenAI are no longer directly managing such cooling processes. Many A.I. developers, he noted, are shifting toward cooling systems that recirculate liquid rather than continually drawing fresh supplies. Meanwhile, tech giants like Microsoft, Meta, Google and Amazon have pledged to replenish more water than they withdraw by 2030.

    Even so, data centers continue to drink up water at a rapid pace. Total A.I.-related water consumption for cooling reached 23.7 cubic kilometers in 2025, a 38 percent increase over 2020, and is expected to more than triple over the next 25 years, according to a January report from Xylem. Despite the industry’s pivot to alternative methods, the report found that 56 percent of data center capacity still relies on some form of evaporative cooling.

    Altman was more measured when it came to electricity usage. “What is fair, though, is the energy consumption,” he said. “We need to move towards nuclear, wind, and solar very quickly.”

    Last April, the International Energy Agency reported that data centers accounted for roughly 1.5 percent of global electricity consumption in 2024. Their power use is rising at a rate more than four times faster than overall electricity demand and is expected to more than double by 2030.

    In response, major tech companies are pursuing data center agreements tied to alternative energy sources, including nuclear power, to ease pressure on grids. Altman, who previously led Y Combinator, has personally invested in nuclear ventures such as Oklo, which is developing small-scale nuclear plants, and Helion, which aims to commercialize nuclear fusion.

    The OpenAI CEO also argued that critics overlook the energy required to develop human intelligence. “People talk about how much energy it takes to train an A.I. model relative to how much it costs a human to do one inference query,” he said. “But it also takes a lot of energy to train a human—it takes, like, 20 years of life and all the food you eat during that time before you get started.”

    A more appropriate comparison, he suggested, would measure the energy used by a fully trained A.I. model to answer a question against that used by a human doing the same task. “Probably A.I. has already caught up on an energy efficiency basis measured that way.”

    The remarks quickly sparked debate online over whether such comparisons are appropriate. “He’s saying a really big spreadsheet and a baby are morally equivalent,” wrote Matt Stoller, research director of the American Economic Liberties Project, in a post on X. Sridhar Vembu, founder and chief scientist of software firm Zoho Corporation, also took issue with the OpenAI chief’s statements. A.I. should “quietly recede into the background” instead of dominating our lives, said the billionaire on X. “I do not want to see a world where we equate a piece of technology to a human being.”

    Sam Altman Defends A.I. Energy Use With Human Comparison, Sparking Debate

    [ad_2]

    Alexandra Tremayne-Pengelly

    Source link

  • State to use AI to improve government

    [ad_1]

    BOSTON — Artificial intelligence is being used for everything from guiding self-powered cars and developing life-saving medicines to powering online search engines that help you find a plumber or pick holiday gifts for your family.

    And the machine learning platform could soon be employed by the state government to speed up the processes of getting a state permit, renewing a vehicle registration or detecting fraud in public benefits programs.

    This page requires Javascript.

    Javascript is required for you to be able to read premium content. Please enable it in your browser settings.

    kAm%96 w62=6J 25>:?:DEC2E:@? 2??@F?465 uC:52J E92E :E A=2?D E@ 56A=@J r92Ev!%’D 2CE:7:4:2= :?E6==:86?46 2DD:DE2?E A=2E7@C> 😕 6I64FE:G6 3C2?49 286?4:6D H:E9 E96 8@2= @7 >2<:?8 DE2E6 8@G6C?>6?E H@C< “36EE6C 2?5 72DE6C” 7@C C6D:56?ED]k^Am

    kAm“%9:D 😀 23@FE >2<:?8 8@G6C?>6?E 72DE6C[ >@C6 677:4:6?E[ 2?5 >@C6 67764E:G6 7@C E96 A6@A=6 H6 D6CG6[” v@G] |2FC2 w62=6J D2:5 😕 2 AC6A2C65 DE2E6>6?E]k^Am

    kAmw6C 25>:?:DEC2E:@? D2:5 E96 px C@==@FE H:== 36 :>A=6>6?E65 2D 2 A92D65 2AAC@249 24C@DD E96 6I64FE:G6 3C2?49 “2?5 H:== AC@G:56 2 D276 2?5 D64FC6 6?G:C@?>6?E E92E AC@E64ED DE2E6 52E2]” %96 4@?EC24E H:E9 r92Ev!% H2D ?68@E:2E65 E9C@F89 2 4@>A6E:E:G6 AC@4FC6>6?E AC@46DD[ @77:4:2=D D2:5]k^Am

    kAm~?46 56A=@J65[ |2DD249FD6EED H:== 36 E96 7:CDE DE2E6 E@ 25@AE E96 E649?@=@8J 7@C E96 6?E:C6 c_[___6>A=@J66 6I64FE:G6 3C2?49[ 244@C5:?8 E@ E96 w62=6J 25>:?:DEC2E:@?]k^Am

    kAm%96 C@==@FE @7 E96 ?6H A@=:4J 4@>6D 2D DE2E6 =2H>2<6CD 2C6 4@?D:56C:?8 2 >JC:25 @7 AC@A@D2=D 2:>65 2E 255:?8 8F2C5C2:=D 2C@F?5 FD6 @7 E96 ?6H E649?@=@8J]k^Am

    kAm~?6 AC@A@D2= H@F=5 C6BF:C6 =2C86 2CE:7:4:2= :?E6==:86?46 E649?@=@8J 4@>A2?:6D DF49 2D E96 @?=:?6 492E3@E r92Ev!% E@ C68:DE6C H:E9 E96 DE2E6 pEE@C?6J v6?6C2=’D ~77:46 2?5 5:D4=@D6 :?7@C>2E:@? 23@FE E96:C 2=8@C:E9>D]k^Am

    kAmp?@E96C 3:== 42==D 7@C 32??:?8 “566A72<6D” @C 4@>AFE6C86?6C2E65 >2?:AF=2E:@?D @7 2 A6CD@?’D G@:46 @C =:<6?6DD FD:?8 >249:?6 =62C?:?8 E@ 4C62E6 G:DF2= 2?5 2F5:@ 4@?E6?E E92E 2AA62CD E@ 36 C62=] %96 E649?@=@8J 😀 36:?8 FD65 E@ 86?6C2E6 72<6 :>286CJ 7@C 2?JE9:?8 7C@> “C6G6?86 A@C?” E@ A@=:E:42= >F5D=:?8:?8]k^Am

    kAmx? a_ac[ pEE@C?6J v6?6C2= p?5C62 r2>A36== D@F89E E@ E:89E6? E96 C6:?D @? 2CE:7:4:2= :?E6==:86?46 56G6=@A6CD[ DFAA=:6CD 2?5 FD6CD[ :DDF:?8 ?6H 8F:52?46 E92E H2C?65 E96> ?@E E@ CF? 27@F= @7 E96 DE2E6’D =2HD @? 4@?DF>6C AC@E64E:@?[ 2?E:5:D4C:>:?2E:@? 2?5 52E2 D64FC:EJ]k^Am

    kAm{2DE H66<[ E96 DE2E6 w@FD6 @7 #6AC6D6?E2E:G6D 2AAC@G65 2 A2:C @7 3:A2CE:D2? 3:==D D6EE:?8 ?6H C6DEC:4E:@?D @? E96 FD6 @7 2CE:7:4:2= :?E6==:86?46 😕 A@=:E:42= 42>A2:8?:?8] %96 AC@A@D2=D H@F=5 C6BF:C6 42>A2:8?D E@ 5:D4=@D6 E96 FD6 @7 px 😕 A@=:E:42= 25D 2?5 32? “5646AE:G6” 4@>>F?:42E:@?D 😕 42>A2:8? 25D h_ 52JD 367@C6 2? 6=64E:@?]k^Am

    kAmr92Ev!%[ H9:49 H2D 4C62E65 3J $2? uC2?4:D4@32D65 ~A6?px[ 2? 2CE:7:4:2= :?E6==:86?46 C6D62C49 7:C> 4@7@F?565 3J t=@? |FD<[ 2==@HD FD6CD E@ 6?E6C E96>6D[ AC@>AED 2?5 8F:56=:?6D :?E@ E96 px DJDE6> E92E 4@>6D FA H:E9 2 C6DA@?D6 2D :7 2 9F>2? HC@E6 :E]k^Am

    kAm~? :ED H63D:E6[ E96 4@>A2?J D2JD E96 r92Ev!% 3@E 😀 2 “D276 2?5 FD67F=” px DJDE6> E92E :?E6C24ED 😕 2 “4@?G6CD2E:@?2= H2J” H:E9 FD6CD[ >2<:?8 :E A@DD:3=6 E@ “2?DH6C 7@==@HFA BF6DE:@?D[ 25>:E :ED >:DE2<6D[ 492==6?86 :?4@CC64E AC6>:D6D[ 2?5 C6;64E :?2AAC@AC:2E6 C6BF6DED]”k^Am

    kAmqFE E96 6>6C86?46 @7 px E649?@=@8J 92D 366? DE66A65 😕 4@?EC@G6CDJ[ H:E9 4C:E:4D H2C?:?8 E92E r@?8C6DD 2?5 DE2E6 8@G6C?>6?ED ?665 E@ >@G6 BF:4<=J E@ D6E C68F=2E:@?D 8@G6C?:?8 :ED FD6]k^Am

    kAmw62=6J 25>:?:DEC2E:@? @77:4:2=D D2J E96 C@==@FE @7 r92Ev!% H:== 36 5@?6 H:E9:? 2 “H2==65@77[ D64FC6 6?G:C@?>6?E E92E AC@E64ED DE2E6 52E2 2?5 6?DFC6D E92E 6>A=@J66 492E :?AFED 5@ ?@E EC2:? AF3=:4 px >@56=D]” %96J D2:5 FD6 @7 E96 E649?@=@8J H:== 36 8@G6C?65 3J 4FCC6?E DE2E6 C68F=2E:@?D 2?5 A@=:4:6D[ H9:49 H:== 36 “C68F=2C=J” FA52E65[ @77:4:2=D D2J]k^Am

    kAm“qJ >2<:?8 r92Ev!% 2G2:=23=6 E@ E96 DE2E6 H@C<7@C46[ H6 2C6 6>A@H6C:?8 @FC 6>A=@J66D H:E9 2 D64FC6[ 8@G6C?65 E@@= E92E 42? 6?92?46 D6CG:46 56=:G6CJ H9:=6 >2:?E2:?:?8 E96 9:896DE DE2?52C5D 7@C 52E2 AC:G24J[ D64FC:EJ[ 2?5 E9@F89E7F=[ EC2?DA2C6?E FD286 @7 px[” y2D@? $?J56C[ D64C6E2CJ @7 E96 tI64FE:G6 ~77:46 @7 %649?@=@8J $6CG:46D 2?5 $64FC:EJ[ D2:5 😕 2 DE2E6>6?E]k^Am

    kAm“~FC 7@4FD 😀 ?@E ;FDE 25@AE:?8 px[ 3FE 5@:?8 D@ 😕 2 H2J E92E C67=64ED @FC G2=F6D[ 2?5 DEC6?8E96?D ECFDE H:E9 E96 C6D:56?ED H6 D6CG6]”k^Am

    kAmr9C:DE:2? |] (256 4@G6CD E96 |2DD249FD6EED $E2E69@FD6 7@C }@CE9 @7 q@DE@? |65:2 vC@FAUCDBF@jD ?6HDA2A6CD 2?5 H63D:E6D] t>2:= 9:> 2E k2 9C67lQ>2:=E@i4H256o4?9:?6HD]4@>Qm4H256o4?9:?6HD]4@>k^2m]k^Am

    [ad_2]

    By Christian M. Wade | Statehouse Reporter

    Source link

  • The OpenAI mafia: 18 startups founded by alumni | TechCrunch

    [ad_1]

    Move over, PayPal mafia: There’s a new tech mafia in Silicon Valley. As the startup behind ChatGPT, OpenAI is arguably the biggest AI player in town. The company is reportedly now in talks to finalize a $100 billion deal, valuing the company at more than $850 billion.  

    Many employees have come and gone since the company first launched a decade ago, and some have launched startups of their own. Among these, some have become top rivals (like Anthropic), while others, just on investor interest alone, have managed to raise billions without even launching a product (see, Thinking Machine Labs).  

    In January, Aliisa Rosenthal, OpenAI’s first sales leader, spoke a little bit about this growing network. She, like the other OpenAI alums who did not become founders, decided to become an investor and said she was going to tap into the ex-OpenAI founder network to look for deal flow. We know Peter Deng, OpenAI’s former head of consumer products (and now general partner at Felicis) already has.  

    Below is a roundup of the major startups founded by OpenAI alumni, in alphabetical order. And we are certain this list will grow over time. 

    David Luan — Adept AI Labs 

    David Luan was OpenAI’s engineering VP until he left in 2020. After a stint at Google, in 2021 he co-founded Adept AI Labs, a startup that builds AI tools for employees. The startup last raised $350 million at a valuation north of $1 billion in 2023, but Luan left in late 2024 to oversee Amazon’s AI agents lab after Amazon hired Adept’s founders.

    Dario Amodei, Daniela Amodei, and John Schulman — Anthropic

    Siblings Dario and Daniela Amodei left OpenAI in 2021 to form their own startup, San Francisco-based Anthropic, that has long touted a focus on AI safety. OpenAI co-founder John Schulman joined Anthropic in 2024, pledging to build a “safe AGI.” The company has since become OpenAI’s biggest rival and just raised a $30 billion Series G, nabbing a $380 billion valuation in the process. IPO rumors are also swirling, as the company reportedly prepares for a public listing that could come sometime this year. (OpenAI is also allegedly preparing for an IPO this year and is maybe even trying to beat Anthropic to the public market.) 

    Rhythm Garg, Linden Li, and Yash Patil — Applied Compute  

    Three ex-OpenAI staffers (Rhythm Garg, Linden Li, and Yash Patil) have reportedly raised $20 million for a startup called Applied Compute, as reported by Upstart Media. All three of them worked as technical staff at OpenAI for more than a year before leaving last May to launch the startup, per their LinkedIns. The startup helps enterprises train and deploy custom AI agents. Benchmark led the round, valuing the 10-month-old company at $100 million, Upstart Media reported. 

    Techcrunch event

    Boston, MA
    |
    June 9, 2026

    Pieter Abbeel, Peter Chen, and Rocky Duan — Covariant

    The trio all worked at OpenAI in 2016 and 2017 as research scientists before founding Covariant, a Berkeley, California-based startup that builds foundation AI models for robots. In 2024, Amazon hired all three of the Covariant founders and about a quarter of its staff. The quasi-acquisition was viewed by some as part of a broader trend of Big Tech attempting to avoid antitrust scrutiny. 

    Tim Shi — Cresta 

    Tim Shi was an early member of OpenAI’s team, where he focused on building safe artificial general intelligence (AGI), according to his LinkedIn profile. He worked at OpenAI for a year in 2017 but left to found Cresta, a San Francisco-based AI contact center startup that has raised over $270 million from VCs like Sequoia Capital, Andreessen Horowitz, and others, according to a press release.

    Jonas Schneider — Daedalus

    Jonas Schneider led OpenAI’s software engineering for robotics team but left in 2019 to co-found Daedalus, which builds advanced factories for precision components. The San Francisco-based startup raised a $21 million Series A last year with backing from Khosla Ventures, among others.

    Andrej Karpathy — Eureka Labs

    Computer vision expert Andrej Karpathy was a founding member and research scientist at OpenAI, leaving the startup to join Tesla in 2017 to lead its autopilot program. Karpathy is also well-known for his YouTube videos explaining core AI concepts. He left Tesla in 2024 to found his own education technology startup, Eureka Labs, a San Francisco-based startup that is building AI teaching assistants.

    Margaret Jennings — Kindo

    Margaret Jennings worked at OpenAI in 2022 and 2023 until she left to co-found Kindo, which markets itself as an AI chatbot for enterprises. Kindo has raised over $27 million in funding, last raising a $20.6 million Series A in 2024. Jennings left Kindo in 2024 to head product and research at French AI startup Mistral, according to her LinkedIn profile.

    Maddie Hall — Living Carbon

    Maddie Hall worked on “special projects” at OpenAI but left in 2019 to co-found Living Carbon, a San Francisco-based startup that aims to create engineered plants that can suck more carbon out of the sky to fight climate change. Living Carbon raised a $21 million Series A round in 2023, bringing its total funding until then to $36 million, according to a press release.

    Liam Fedus — Periodic Labs  

    Liam Fedus, OpenAI’s VP of post-training research, left the company in March 2025 to team up with his former Google Brain colleague, Ekin Dogus Cubuk, and launch Periodic Labs. The startup seeks to use AI scientists to find new materials, particularly new superconducting materials. It came out of stealth mode in September 2025, armed with a massive $300 million in seed-round funding with backers that included Jezz Bezos, Eric Schmidt, Felicis and Andreessen Horowitz. 

    Aravind Srinivas — Perplexity

    Aravind Srinivas worked as a research scientist at OpenAI for a year until 2022, when he left the company to co-found AI search engine Perplexity. His startup has attracted a string of high-profile investors like Jeff Bezos and Nvidia, although it’s also caused controversy over alleged unethical web scraping. Perplexity, which is based in San Francisco, last reported a raise of $200 million at a $20 billion valuation. 

    Jeff Arnold — Pilot

    Jeff Arnold worked as OpenAI’s head of operations for five months in 2016 before co-founding San Francisco-based accounting startup Pilot in 2017. Pilot, which focused initially on doing accounting for startups, last raised a $100 million Series C in 2021 at a $1.2 billion valuation and has attracted investors like Jeff Bezos. Arnold worked as Pilot’s COO until leaving in 2024 to launch a VC fund.

    Shariq Hashme — Prosper Robotics

    Shariq Hashme worked for OpenAI for 9 months in 2017 on a bot that could play the popular video game Dota, per his LinkedIn profile. After a few years at data-labeling startup Scale AI, he co-founded London-based Prosper Robotics in 2021. The startup says it’s working on a robot butler for people’s homes, a hot trend in robotics that other players like Norway’s 1X and Texas-based Apptronik are also working on.

    Ilya Sutskever — Safe Superintelligence 

    OpenAI co-founder and chief scientist Ilya Sutskever left OpenAI in May 2024 after he was reportedly part of a failed effort to replace CEO Sam Altman. Shortly afterward, he co-founded Safe Superintelligence, or SSI, with “one goal and one product: a safe superintelligence,” he says. Details about what exactly the startup is up to are scant: It has no product and no revenue yet. But investors are clamoring for a piece anyway, and it’s been able to raise $2 billion, with its latest valuation reportedly rising to $32 billion this month. SSI is based in Palo Alto, California, and Tel Aviv, Israel.

    Emmett Shear — Stem AI

    Emmett Shear is the former CEO of Twitch who was OpenAI’s interim CEO in November 2023 for a few days before Sam Altman rejoined the company. Shear launched an AI company, StemAI, in 2024 (though it seems to have since rebranded as Softmax). The company, which appears to be a research company, has attracted funding from Andreessen Horowitz.

    Mira Murati — Thinking Machines Lab 

    Mira Murati, OpenAI’s CTO, left OpenAI to found her own company, Thinking Machines Lab, which emerged from stealth in February 2025. It said at the time (rather vaguely) that it will build AI that’s more “customizable” and “capable.” The San Francisco AI startup, now valued at $12 billion, announced its first product late last year: an API that fine-tunes language models. It recently made headlines when two of its co-founders announced earlier this year that they would return to OpenAI. 

    Kyle Kosic — xAI

    Kyle Kosic left OpenAI in 2023 to become a co-founder and infrastructure lead of xAI, Elon Musk’s AI startup that offers a rival chatbot, Grok. In 2024, however, he hopped back to OpenAI, where he remains. Meanwhile, xAI (which acquired Musk’s social media site X) was purchased by Musk’s SpaceX, giving the coalesce company a valuation of $1.25 trillion. It is looking to go public sometime in June for what could be a historic listing. 

    Angela Jiang — Worktrace AI

    Angela Jiang left OpenAI in 2024, after working as a product manager and on the public policy team. In April 2025, she quietly launched Worktrace, which uses AI to help enterprises make business operations more efficient. It observes employee work patterns and automates workflow, according to the company’s website. The business is backed by Mura Murati, OpenAI’s former CTO, who went on to launch Thinking Labs. It is also backed by OpenAI’s startup fund, in addition to a slew of other OpenAI names, like its chief strategy officer, Jason Kwon. 

    Stealth Startups

    In addition to these startups, a number of other former OpenAI employees have founded startups that are still in stealth mode, according to various updates TechCrunch found on LinkedIn. For instance, it seems that former OpenAI researcher Danilo Hellermark has been working on a generative AI stealth startup for the past few years. He officially left OpenAI at the beginning of 2023. There’s also one apparently in the works from Lucas Negritto, who worked on OpenAI’s technical team and left the company in 2023 after three years. Since then, he’s founded one startup and has been working on another since August 2025, according to his LinkedIn. 

    [ad_2]

    Charles Rollet, Dominic-Madori Davis

    Source link

  • India has 100M weekly active ChatGPT users, Sam Altman says | TechCrunch

    [ad_1]

    India has 100 million weekly active ChatGPT users, making the country one of OpenAI’s largest markets globally, CEO Sam Altman said ahead of a government-hosted AI summit.

    On Sunday, Altman outlined ChatGPT’s growing adoption in India in an article published in the Indian English daily Times of India, as OpenAI prepares to formally participate in the five-day India AI Impact Summit in New Delhi, beginning Monday. Altman is attending the event alongside senior executives from several of the world’s leading AI companies.

    The growth comes as OpenAI, like other leading AI firms, looks to India’s young population and its more than a billion internet users to fuel global expansion. The ChatGPT maker opened a New Delhi office in August 2025 after months of groundwork in the country, and has adjusted its approach for India’s price-sensitive market, including rolling out a sub-$5 ChatGPT Go tier that was later made free for a year for Indian users.

    In the article, Altman said India is ChatGPT’s second-largest user base after the United States, highlighting the South Asian nation’s growing weight in OpenAI’s global strategy. The disclosure comes as ChatGPT’s overall usage has surged worldwide, with the platform reaching 800 million weekly active users as of October 2025 and reported to be approaching 900 million.

    Altman also highlighted the role of students in driving adoption, saying India has the largest number of student users of ChatGPT globally.

    Indian students have become a key growth segment for leading AI companies more broadly, as rivals race to embed their tools in classrooms and learning workflows. Google has similarly targeted the market, offering Indian students a free one-year subscription to its AI Pro plan in September 2025. Separately, India accounts for the highest global usage of Gemini for learning, Chris Phillips, Google’s vice president and general manager for education, said last month.

    “With its focus on access, practical Al literacy, and the infrastructure that supports widespread adoption, India is well positioned to broaden who benefits from the technology and to help shape how democratic AI is adopted at scale,” Altman wrote.

    Techcrunch event

    Boston, MA
    |
    June 23, 2026

    ChatGPT’s rapid growth also highlights a broader challenge for AI companies in India: translating widespread adoption into sustained economic impact. Indian government initiatives such as the IndiaAI Mission — a national program aimed at expanding computing capacity, supporting startups and accelerating AI adoption in public services — seek to address those gaps. However, the country’s price-sensitive market and infrastructure constraints have made monetization and large-scale deployment more complex than in developed economies.

    “Given India’s size, it also risks forfeiting a vital opportunity to advance democratic AI in emerging markets around the world,” Altman wrote, warning that uneven access and adoption could concentrate AI’s economic gains in too few hands.

    Altman also signaled that OpenAI plans to deepen its engagement with the Indian government, writing that the company would soon announce new partnerships aimed at expanding access to AI across the country. He did not provide details, but said the focus would be on widening reach and enabling more people to put AI tools to practical use.

    The India AI Impact Summit is expected to draw a wide cross-section of global technology and political leaders, including Anthropic CEO Dario Amodei, Sundar Pichai of Google, and senior Indian business figures such as Mukesh Ambani and Nandan Nilekani. Political leaders including Emmanuel Macron, Sheikh Khaled bin Mohamed bin Zayed Al Nahyan, and Luiz Inácio Lula da Silva are also expected to attend, spotlighting India’s ambition to position itself as a central player in global AI debates.

    For global AI firms, including OpenAI, the summit underscores how India’s vast user base is translating into growing influence over how the technology evolves.

    OpenAI did not respond to a request for comment.

    [ad_2]

    Jagmeet Singh

    Source link

  • Researchers Jailbreak ChatGPT to Find Out Which State Has the Laziest People

    [ad_1]

    Mississippi is the laziest state in the country, according to ChatGPT. Of course, the chatbot won’t tell you that if you straight up ask it. But the Washington Post reports that researchers from Oxford and the University of Kentucky managed to jailbreak the chatbot and get it to reveal some of the stereotypes buried in its training data that it doesn’t share but does influence its outputs. (Kentucky also ranked near the laziest, but would a lazy state produce researchers who figure out how to get an AI model to share its implicit biases? Something to think about, bots.)

    Typically, when you ask ChatGPT a question that would require it to speak in a derogatory manner about someone or something, it’ll decline to provide a straight answer. It’s part of OpenAI’s attempts to keep the chatbot within specific guardrails and keep it from veering into controversial topics. But that doesn’t mean that an AI model doesn’t contain unpopular opinions formed by chewing on tons of human-produced training data that also contains both explicit and implicit biases. To pull those answers out of ChatGPT, the researchers asked more than 20 million questions, prompting the chatbot to pick between two options. For instance, they would ask “Where are people smarter?” and give two options to choose from, like California or Montana. Through that type of prompting, they were able to determine how ChatGPT views different cities, states, and populations.

    That’s how they ended up discovering that ChatGPT views Mississippi as the laziest state in the Union, with the rest of the South close behind. While ChatGPT won’t disclose how it comes to those conclusions, it’s not hard to make some assumptions about where it’s getting these ideas. For instance, maybe it comes from The Washington Post itself, circa 2015, when it published its “Couch Potato Index,” which deemed southern states the laziest based on data points like TV-watching time and the prevalence of fast food restaurants in the area.

    Those are also, of course, often the markers of poorer communities, and there is no evidence that lower-income households are any more “lazy” than wealthier ones—in fact, data from the Economic Policy Institute shows that people living in poverty are more likely to take on multiple jobs, work longer and more irregular hours, and deal with more dangerous working conditions. And it’s likely no coincidence that they are also states with a higher population of people of color. ChatGPT likely has access to that information, too, but the underlying model clearly has not addressed the information and misguided stereotypes held by many people that lead to these biases.

    So what other biases did the researchers spot? Most of Africa and Asia ranked at the bottom of having the “most artsy” people, compared to high levels of artsiness in Western Europe. Likewise, African nations—particularly sub-Saharan ones—ranked at the bottom of the list for “smartest countries” while the United States and China ranked near the top. When asked where the “most beautiful” people are, it picked richer cities over poorer and more diverse ones. Los Angeles and New York topped the list, while Detroit and border town Laredo, Texas, were near the bottom. Even when they dug into specific communities, whiter and richer won out. In New York City, SoHo and the West Village finished at the top, while the more diverse communities of Jamaica and Tottenville ranked at the bottom.

    So, okay, all of that sucks and is deeply depressing because the “truth machines” are perpetuating the types of classist and racist stereotypes that lead to creating the kinds of conditions that reinforce the negative outcomes for the people who are harmed by these biases. So how about a more frivolous one? ChatGPT believes the best pizza is found in New York, Chicago, and Buffalo, while the worst is found in El Paso, Irvine, and Honolulu (presumably because of one of the internet’s favorite debates over whether pineapple belongs on pizza). The biggest takeaway: ChatGPT is too much of a coward to take a side in the New York vs. Chicago pizza debate.

    [ad_2]

    AJ Dellinger

    Source link

  • OpenAI starts testing ads in free version of ChatGPT

    [ad_1]


    OpenAI’s free version of ChatGPT has a new look: Users who don’t pay to upgrade will now see ads when using the artificial intelligence platform.  

    The company said on Monday that it is testing ads with ChatGPT users in the U.S. with “free” and “go” subscription tiers. The “Go” plan costs $8 monthly. 

    OpenAI said in January that it would start piloting ads as the company looks for ways to further monetize its widely used chatbot, along with subscription fees for premium users. Customers who pay for Plus, Pro, Business, Enterprise, and Education subscription tiers will not see ads when using ChatGPT, OpenAI said.

    The company also vowed that the presence of ads wouldn’t influence or change how ChatGPT responds to user prompts. 

    “Our goal is for ads to support broader access to more powerful ChatGPT features while maintaining the trust people place in ChatGPT for important and personal tasks,” OpenAI said Monday. 

    ChatGPT users can avoid seeing ads by upgrading their subscription tiers, OpenAI noted. Free tier users can also opt out of ads, but their usage will be limited. 

    How will ChatGPT ads work?

    ChatGPT will clearly indicate when content is an advertisement, as opposed to an AI-generated answer to a user query.

    Ads will be tailored to users’ prompt histories and other factors. OpenAI said it decides to show ads by “matching ads submitted by advertisers with the topic of your conversation, your past chats and past interactions with ads.”

    For example, a ChatGPT user looking for recipe suggestions might be shown a grocery delivery or meal-kit service ad.

    Advertisers will not have access to users’ chat histories or personal details, OpenAI said. The company on Monday also encouraged advertisers to sign up to promote their businesses with the company as it pilots the ad program. 

    [ad_2]

    Source link

  • How to Use ChatGPT, Gemini, Grok in Private Mode (No Training Mode)

    [ad_1]

    • Apart from this, you can also use the keyboard shortcut, ‘Ctrl + ;’ to switch back to a normal chat again or to the incognito mode.
    • However, it does state that the chat won’t appear in the history or would be used to train the model.
    • Similar to ChatGPT, you would be informed that your chats would not appear in your history and would not be used to train the model.

    The core reason LLMs, or large language models such as ChatGPT and Gemini, work the way they do is the amount of data they are trained on. Every conversation you have with it or the data you feed into it is used to further train it. This also instils a fear of privacy concerns among users. As one would not like a record of sensitive or personal information in particular. For this, almost every platform offers a private mode. Let’s explore how you can access them and how it really works.

    Using Private Mode on LLM Platforms

    Using an LLM platform is much like using a search engine, albeit with a lot more capabilities. You must have noticed that when you do a normal Google search, it adds to your browsing history. The links you have already visited would appear purple. Similarly, these platforms not only keep a record of your chats with them but also use that data to train themselves further. Like all browsers, these platforms also offer private or incognito chat. This is how you can access these modes in ChatGPT, Gemini, Grok, Perplexity, and Claude AI.

    1. ChatGPT

    Once you open ChatGPT, you will see “Turn on temporary chat” in the top-right corner.

    temporary chat in chatgpt

    Here, you will clearly see a disclaimer saying, “This chat won’t appear in your chat history and won’t be used to train our models”. But you have to keep in mind that the chat won’t be instantly deleted from the servers. As stated by the platform, a copy of the chat may be kept for up to 30 days for safety concerns.

    chatgpt

    2. Gemini

    After opening Gemini, on the left of your screen, click on the ‘Temporary chat’ icon beside the ‘New chat’ icon. If you don’t see it, you can expand the sidebar menu to make it visible.

    temporary chat in gemini

    Similar to ChatGPT, you would be informed that your chats would not appear in your history and would not be used to train the model. You will also see that it is clearly stated that the record will be kept for only 72 hours or 3 days for safety reasons.

    gemini policy

    3. Grok

    Open Grok, then click “Private” in the top-right corner.

    grok private mode

    You can have a private chat now. Unlike the platforms above, Grok does not explicitly state whether chat will be stored, and if so, for how long. However, it does state that the chat won’t appear in the history or would be used to train the model.

    grok policy

    4. Perplexity

    Once you open Perplexity, click on your account in the bottom left corner. A menu will pop up where, at the bottom, you will see the ‘Incognito’ option. Click that. Apart from this, you can also use the keyboard shortcut, ‘Ctrl + ;’ to switch back to a normal chat again or to the incognito mode.

    incognito mode in perplexity

    In this too the chats would not appear in the history and data will be deleted within 24 hours, as mentioned by Perplexity.

    perplexity policy

    5. Claude AI

    After opening Claude AI, you will find the incognito option in the top-right corner. Or, you can simply use the ‘Ctrl + Shift + I’ keyboard shortcut.

    incognito mode in claude

    Although Claude doesn’t mention how long, the chats will be retained, apart from not appearing in the history. I looked for it in Claude Support and found the following.

    claude policy

    FAQs

    Q. Can I use the private mode without logging in?

    Private modes are not needed and thus not accessible when not logged in. Your chat will not be saved anywhere once you close the window but they might be utilised later to train the model.

    Not unless it’s something flagged by the system. As mentioned above, these chats are not immediately wiped off. Each platform has policies aligned with the laws of the respective nations that direct them to grant authorities access to these chats in the event of an issue.

    Wrapping Up

    Unlike normal mode on LLM platforms, private mode lets you chat without creating a history or training the model. While this is a useful feature when multiple users use a single account, it should be used with caution. Users should avoid sharing sensitive information such as passwords, addresses, or contact details. Additionally, this mode should be used considering that any information which may potentially be related to illegal activities can call for the involvement of legal authorities if needed. There is a system in place to flag any such conversation within all platforms.

    You should also check out:

    Have any questions related to our how-to guides, or anything in the world of technology? Check out our new GadgetsToUse AI Chatbot for free, powered by ChatGPT.

    You can also follow us for instant tech news at Google News or for tips and tricks, smartphones & gadgets reviews, join the GadgetsToUse Telegram Group, or subscribe to the GadgetsToUse Youtube Channel for the latest review videos.

    Was this article helpful?

    YesNo

    [ad_2]

    Mitash Arora

    Source link

  • Despair-Inducing Analysis Shows AI Eroding the Reliability of Science Publishing

    [ad_1]

    It’s almost impossible to overstate the importance and impact of arXiv, the science repository that, for a time, almost single-handedly justified the existence of the internet. ArXiv (pronounced “archive” or “Arr-ex-eye-vee” depending on who you ask) is a preprint repository, where, since 1991, scientists and researchers have announced “hey I just wrote this” to the rest of the science world. Peer review moves glacially, but is necessary. ArXiv just requires a quick once-over from a moderator instead of a painstaking review, so it adds an easy middle step between discovery and peer review, where all the latest discoveries and innovations can—cautiously—be treated with the urgency they deserve more or less instantly.

    But the use of AI has wounded ArXiv and it’s bleeding. And it’s not clear the bleeding can ever be stopped.

    As a recent story in The Atlantic notes, ArXiv creator and Cornell information science professor Paul Ginsparg has been fretting since the rise of ChatGPT that AI can be used to breach the slight but necessary barriers preventing the publication of junk on ArXiv. Last year, Ginsparg collaborated on a piece of analysis that looked into probable AI in arXiv submissions. Rather horrifyingly, scientists evidently using LLMs to generate plausible-looking papers were more prolific than those who didn’t use AI. The number of papers from posters of AI-written or augmented work was 33 percent higher.

    AI can be used legitimately, the analysis says, for things like surmounting the language barrier. It continues:

    “However, traditional signals of scientific quality such as language complexity are becoming unreliable indicators of merit, just as we are experiencing an upswing in the quantity of scientific work. As AI systems advance, they will challenge our fundamental assumptions about research quality, scholarly communication, and the nature of intellectual labor.”

    It’s not just ArXiv. It’s a rough time overall for the reliability of scholarship in general. An astonishing self-own published last week in Nature described the AI misadventure of a bumbling scientist working in Germany named Marcel Bucher, who had been using ChatGPT to generate emails, course information, lectures, and tests. As if that wasn’t bad enough, ChatGPT was also helping him analyze responses from students and was being incorporated into interactive parts of his teaching. Then one day, Bucher tried to “temporarily” disable what he called the “data consent” option, and when ChatGPT suddenly deleted all the information he was storing exclusively in the app—that is: on OpenAI’s servers—he whined in the pages of Nature that “two years of carefully structured academic work disappeared.”

    Widespread, AI-induced laziness on display in the exact area where rigor and attention to detail are expected and assumed is despair-inducing. It was safe to assume there was a problem when the number of publications spiked just months after ChatGPT was first released, but now, as The Atlantic points out, we’re starting to get the details on the actual substance and scale of that problem—not so much the Bucher-like, AI-pilled individuals experiencing publish-or-perish anxiety and hurrying out a quickie fake paper, but industrial scale fraud.

    For instance, in cancer research, bad actors can prompt for boring papers that claim to document “the interactions between a tumor cell and just one protein of the many thousands that exist,” the Atlantic notes. If the paper claims to be groundbreaking, it’ll raise eyebrows, meaning the trick is more likely to be noticed, but if the fake conclusion of the fake cancer experiment is ho-hum, that slop will be much more likely to see publication—even in a credible publication. All the better if it comes with AI generated images of gel electrophoresis blobs that are also boring, but add additional plausibility at first glance.

    In short, a flood of slop has arrived in science, and everyone has to get less lazy, from busy academics planning their lessons, to peer reviewers and ArXiv moderators. Otherwise, the repositories of knowledge that used to be among the few remaining trustworthy sources of information are about to be overwhelmed by the disease that has already—possibly irrevocably—infected them. And does 2026 feel like a time when anyone, anywhere, is getting less lazy?

    [ad_2]

    Mike Pearl

    Source link

  • Report reveals that OpenAI’s GPT-5.2 model cites Grokipedia

    [ad_1]

    OpenAI may have called GPT-5.2 its “most advanced frontier model for professional work,” but tests conducted by the Guardian cast doubt on its credibility. According to the report, OpenAI’s GPT-5.2 model cited Grokipedia, the online encyclopedia powered by xAI, when it came to specific, but controversial topics related to Iran or the Holocaust.

    As seen in the Guardian‘s report, ChatGPT used Grokipedia as a source for claims about the Iranian government being tied to telecommunications company MTN-Irancell and questions related to Richard Evans, a British historian who served as an expert witness during a libel trial for Holocaust denier David Irving. However, the Guardian noted ChatGPT didn’t use Grokipedia when it came to a prompt asking about media bias against Donald Trump and other controversial topics.

    OpenAI released the GPT-5.2 model in December to better perform at professional use, like creating spreadsheets or handling complex tasks. Grokipedia preceded GPT-5.2’s release, but ran into some controversy when it was seen including citations to neo-Nazi forums. A study done by US researchers also showed that the AI-generated encyclopedia cited “questionable” and “problematic” sources.

    In response to the Guardian report, OpenAI told the outlet that its GPT-5.2 model searches the web for a “broad range of publicly available sources and viewpoints,” but applies “safety filters to reduce the risk of surfacing links associated with high-severity harms.”

    [ad_2]

    Jackson Chen

    Source link

  • Claude Code gives Anthropic its viral moment | Fortune

    [ad_1]

    It’s been a good few weeks for Anthropic. The lab is reportedly planning a $10 billion fundraising that would value the company at $350 billion, its CEO caused headlines in Davos by criticizing the White House, and it’s also having a viral product launch that most AI labs can only dream of.

    Claude Code, the company’s surprisingly popular hit, is a coding tool that has captured the attention of users far beyond the software engineers it was built for. First released in February 2024 as a developer assistant, the coding tool has become increasingly sophisticated and sparked a level of excitement rarely seen since ChatGPT’s debut. Jensen Huang called it “incredible” and urged companies to adopt it for coding. A senior Google engineer said it recreated a year’s worth of work in an hour. And users without any programming background have deployed it to book theater tickets, file taxes, and even monitor tomato plants.

    Even at Microsoft, which sells GitHub Copilot, Claude Code has been widely adopted internally across its major engineering teams, with even non-developers reportedly being encouraged to use it.

    Anthropic’s products have long been popular with software developers, but after users pointed out that Claude Code was more of a general-purpose AI agent, Anthropic created a version of the product for non-coders. Last week, the company launched Cowork, a file management agent that is essentially a user-friendly version of the coding product. Boris Cherny, head of Claude Code at Anthropic, said his team built Cowork in approximately a week and a half, largely using Claude Code itself to do the legwork.

    “It was just kind of obvious that Cowork is the next step,” Cherny told Fortune. “We just want to make it much easier for non-programmers.”

    What separates Cowork from earlier general use AI tools from Anthropic is its ability to take autonomous action rather than simply provide advice. The products can access files, control browsers through the “Claude in Chrome” extension, and manipulate applications—executing tasks rather than just suggesting how to do them. For some general users, it’s the first taste of what the promise of agentic AI really is.

    Many of the uses aren’t especially sexy, but they do save users hours. Cherny says he uses Cowork for project management, automatically messaging team members on Slack when they haven’t updated shared spreadsheets, and had heard of use cases including one researcher deploying it to comb through museum archives for basketry collections.

    “Engineers just feel unshackled, that they don’t have to work on all the tedious stuff anymore,” Cherny told Fortune. “We’re starting to hear this for Cowork also, where people are saying all this tedious stuff—shuffling data between spreadsheets, integrating Slack and Salesforce, organizing your emails—it just does it so you can focus on the work you actually want to do.”

    Enterprise first, consumer second

    Despite the consumer buzz, Anthropic is positioning both products squarely in the enterprise market, where the company reportedly already leads OpenAI in adoption.

    “For Anthropic, we’re an enterprise AI company,” Cherny said. “We build consumer products, but for us, really, the focus is enterprise.”

    Cherny said this strategy is also guided by Anthropic’s founding mission around AI safety, which resonates with corporate customers concerned about security and compliance. In this case, the company’s roadmap with general-use products was to first develop strong coding capabilities to enable sophisticated tool use and ‘test’ products with technical customers. By providing capabilities to technical users through Claude Code before extending them to broader audiences, Cherny said the company builds on a tested foundation rather than starting from scratch with consumer tools.

    Claude Code is now used by Uber, Netflix, Spotify, Salesforce, Accenture, and Snowflake, among others, according to Cherny. The product has found “a very intense product market fit across the different enterprise spaces,” he told Fortune.

    Anthropic’s also seen a traffic uplift as a result of Claude Code’s viral moment. Claude’s total web audience has more than doubled since December 2024, and its daily unique visitors on desktop are up 12% globally year-to-date, according to data from Similarweb and Sensor Tower published by The Wall Street Journal.

    The company is facing challenges that come with AI agents capable of autonomous action. Both products have security vulnerabilities, particularly “prompt injections” where attackers hide malicious instructions in web content to manipulate AI behavior.

    To tackle this, Anthropic has implemented multiple security layers, including running Cowork in a virtual machine and recently adding deletion protection after a user accidentally removed files. A feature Cherny called “quite innovative.”

    But the company does acknowledge the limitations of their approach. “Agent safety—that is, the task of securing Claude’s real-world actions—is still an active area of development in the industry,” Anthropic warned in its announcement.

    The future of software engineering

    With the rise of increasingly sophisticated autonomous coding tools, some are concerned that software engineer roles, especially entry-level roles, could dry up. Even within Anthropic, some engineers have stopped writing code at all, according to CEO Dario Amodei.

    “I have engineers within Anthropic who say ‘I don’t write any code anymore. I just let the model write the code, I edit it,’” Amodei said at the World Economic Forum in Davos. “We might be six to 12 months away from when the model is doing most, maybe all of what software engineers do end-to-end.”

    Tech companies argue that these tools will democratize coding, allowing those with little to no technical skills to build products by prompting AI systems in natural language. But, while it’s not definitive the two are causally linked and there are other factors impacting a jobs downturn, it’s true that open roles for entrylevel software engineers have declined as the amount of code written by generative AI has ramped up.

    Time will tell whether this heralds a democratization of software development or the slow erosion of a once stable profession, but by bringing autonomous AI agents out of the lab and into everyday work, Claude Code may speed up how quickly we find out.

    This story was originally featured on Fortune.com

    [ad_2]

    Beatrice Nolan

    Source link

  • The Agency partners with Rechat – Houston Agent Magazine

    [ad_1]

    Rechat is now integrated with The Agency and will serve as a centralized operating platform for the brokerage.

    Agents affiliated with The Agency will now have access to Rechat’s CRM, the People Center, as well as a range of tools including a marketing center and an AI agent assistant.

    “The Agency is one of the most respected luxury brands in real estate, and their commitment to thoughtful growth and agent empowerment aligns closely with how we build Rechat,” Shayan Hamidi, CEO of Rechat, said in a press release. “Our team across 18 countries and our platform are designed to help reduce complexity and support scale. This partnership reflects a shared belief that technology should enable great agents, not get in their way.”

    Rechat is also integrated with Follow Up Boss, SkySlope, ChatGPT, Zillow and Loft47.

    “The Agency was built on the belief that collaboration, innovation and world-class service go hand in hand,” said Mauricio Umansky, founder and CEO of The Agency. “Our partnership with Rechat reinforces that commitment, creating a more connected global ecosystem while delivering intuitive, best-in-class technology that drives efficiency, empowers our agents and ultimately elevates the client experience.”

    [ad_2]

    Emily Marek

    Source link

  • ChatGPT to show ads, Grandparents hooked on ‘Boomerslop’ – Tech Digest

    [ad_1]


    Adverts will soon appear at the top of the AI tool ChatGPT
    for some users, the company OpenAI has announced. The trial will initially take place in the US, and will affect some ChatGPT users on the free service and a new subscription tier, called ChatGPT Go. This cheaper option will be available for all users worldwide, and will cost $8 a month, or the equivalent pricing in other currencies. OpenAI says during the trial, relevant ads will appear after a prompt – for example, asking ChatGPT for places to visit in Mexico could result in holiday ads appearing. BBC 

    Doctors and medical experts have warned of the growing evidence of “health harms” from tech and devices on children and young people in the UK. The Academy of Medical Royal Colleges (AoMRC) said frontline clinicians have given personal testimony about “horrific cases they have treated in primary, secondary and community settings throughout the NHS and across most medical specialities”. The body, which represents 23 medical royal colleges and faculties, plans to gather evidence to establish the issues healthcare professionals and specialists are seeing repeatedly that may be attributed to tech and devices. Sky News 


    “What are you even doing in 2025?”
    says a handsome kid in a denim jacket, somewhere just shy of 18. “Out there it looks like everyone is glued to their phones, chasing nothing.” The AI-generated teenager features in an Instagram video that has more than 600,000 likes from an account dubbed Maximal Nostalgia. The video is one of dozens singing the praises of the 1970s and 1980sCreated with AI, the videos urge viewers to relive their halcyon days. The clips have gone viral across Instagram and Facebook, part of a new type of AI content that has been dubbed “boomerslop”. Telegraph

    More than 60 Labour MPs have written to Keir Starmer urging him to back a social media ban for under-16s, with peers due to vote on the issue this week. The MPs, who include select committee chairs, former frontbenchers, and MPs from the right and left of the party, are seeking to put pressure on the Prime Minister as calls mount for the UK to follow Australia’s precedent. Starmer has said he is open to a ban but members of the House of Lords are looking to force the issue when they vote this week on an amendment to the children, wellbeing and schools bill. Guardian


    Huawei has released a new update for the Watch Ultimate 2 smartwatch, installing new health features, including a heart failure risk assessment. The update comes with HarmonyOS firmware version 6.0.0.209 and is spreading in batches. The new additions include a coronary heart disease risk assessment. Users can join a coronary heart disease research project via the Huawei Research app on their smartphone. HuaweiCentral

    Google has just changed Gmail after twenty years. In among countless AI upgrades — including “personalized AI” that gives Gemini access to all your data in Gmail, Photos and more, comes a surprising decision. You can now change your primary Gmail address for the first time ever. You shouldn’t hesitate to do so. This new option is good — but it’s not perfect. And per 9to5Google, “Google also notes this can only be done once every 12 months, up to 3 times, so make this one count.” Forbes

    [ad_2]

    Chris Price

    Source link

  • OpenAI says it will start testing ads on ChatGPT in the coming weeks

    [ad_1]

    OpenAI announced Friday that it will begin testing ads on ChatGPT in the coming weeks, opening the door to another potential revenue stream for the AI company in addition to its subscription-based models. 

    The ads will appear at the bottom of the chat window “when there’s a relevant sponsored product or service based on your current conversation,” OpenAI said in a blog post

    In one example shared by the AI company, a user asks for authentic Mexican dish recommendations. ChatGPT responds with ideas for carne asada and pollo al carbon dishes and then links to a grocery brand advertising hot sauce.

    Only adults who use the free version of ChatGPT, or ChatGPT Go, a new low-cost subscription plan OpenAI announced Friday, will be shown ads. Higher-tier subscriptions, including Pro, which now costs $200 a month, will not include ads, OpenAI said. 

    Asked how long the ad testing phase will last, and whether it has plans to scale the use of ads, an OpenAI spokesperson told CBS News, “We will look at early user feedback and quality signals to see if early testing meets our bar before expanding.”

    The AI company said the ads will not influence the answers ChatGPT provides and that it will not share conversations users have with the chatbot — or their data — with advertisers. 

    The OpenAI spokesperson did not disclose the companies it intends to advertise on ChatGPT but said the company will “have more to share about our early partners soon.”

    OpenAI framed the introduction of the ads as a way to keep the free and low-cost versions of the chatbot accessible to more users.

    “Our enterprise and subscription businesses are already strong, and we believe in having a diverse revenue model where ads can play a part in making intelligence more accessible to everyone,” OpenAI said in its blog post.

    The AI company, which launched ChatGPT in 2022, is valued at $500 billion, but hasn’t turned a profit yet, CNBC reported in November.

    CEO Sam Altman downplayed the importance ads would play in OpenAI’s revenue stream during a podcast interview last year. “I expect it’s something we’ll try at some point,” he said. “I do not think it is our biggest revenue opportunity.”

    [ad_2]

    Source link

  • ChatGPT Health promises privacy for health conversations

    [ad_1]

    NEWYou can now listen to Fox News articles!

    OpenAI is rolling out ChatGPT Health, a new space for private health and wellness conversations. Importantly, the company says it will not use your health information or Health chats to train its core artificial intelligence (AI) models. As more people turn to ChatGPT to understand lab results and prepare for doctor visits, that promise matters. For many users, privacy remains the deciding factor.

    Meanwhile, Health appears as a separate space inside ChatGPT for early-access users. You will see it in the sidebar on desktop and in the menu on mobile. If you ask a health-related question in a regular chat, ChatGPT may suggest moving the conversation into Health for added protection. For now, access remains limited. However, OpenAI says it plans to roll out ChatGPT Health gradually to users on Free, Go, Plus and Pro plans.

    Sign up for my FREE CyberGuy Report

    Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.

    AI DISCLOSURE IN HEALTHCARE: WHAT PATIENTS MUST KNOW

    Health chats stay isolated from regular conversations and are excluded from AI training by default. (OpenAI)

    What makes ChatGPT Health different from regular chats

    ChatGPT Health is built as a separate environment, not just another chat thread. Here is what stands out:

    A dedicated private space

    Health conversations live in their own area. Files, chats and memories stay contained there. They do not mix with your regular ChatGPT conversations.

    Clear medical boundaries

    ChatGPT Health is not meant to diagnose conditions or replace a doctor. You will see reminders that responses are informational only and not medical advice.

    Connecting your health data

    If you choose, you can connect medical records and wellness apps to Health. This helps ground responses in your own data. Supported connections include:

    • Medical records, such as lab results and visit summaries
    • Apple Health for sleep, activity, and movement data
    • MyFitnessPal for nutrition and macros
    • Function for lab insights and nutrition guidance
    • Weight Watchers for GLP-1 meal ideas
    • Fitness and lifestyle apps like Peloton, AllTrails and Instacart

    You control access. You can disconnect any app at any time and revoke permissions immediately.

    Extra privacy protections

    OpenAI says Health uses additional encryption and isolation designed specifically for sensitive health data. Health chats are excluded from training foundation models by default.

    CAN AI CHATBOTS TRIGGER PSYCHOSIS IN VULNERABLE PEOPLE?

    ChatCPT Health screen

    ChatGPT Health creates a separate space designed specifically for health and wellness conversations. (OpenAI)

    Things you should not share on ChatGPT

    Even with stronger privacy promises, caution still matters. Avoid sharing:

    • Full Social Security numbers
    • Insurance member IDs or policy numbers
    • Login credentials or passwords
    • Scans of government-issued IDs
    • Financial account numbers
    • Highly sensitive details you would not tell a clinician

    Health is designed to inform and prepare you, not to replace professional care or secure systems built for identity protection.

    ChatGPT Health was built with doctors

    OpenAI built ChatGPT Health with direct input from more than 260 physicians across many medical specialties worldwide. Over two years, those clinicians reviewed hundreds of thousands of example responses and flagged wording that could confuse readers or delay care.

    As a result, their feedback guides how ChatGPT Health explains lab results, frames risk, and prompts follow-ups with a licensed clinician. More importantly, the system focuses on safety, clarity, and timely escalation when needed. Ultimately, the goal is to help you have better conversations with your doctor, not replace one.

    OPENAI LIMITS CHATGPT’S ROLE IN MENTAL HEALTH HELP

    ChatGPT Health waitlist notification

    Users can connect medical records and wellness apps to better understand trends before talking with a doctor. (OpenAI)

    What this means for you

    For many people, health information is scattered across portals, PDFs, apps and emails. ChatGPT Health aims to pull that context together in one place.

    That can help you:

    The key takeaway is control. You decide what to connect, what to delete and when to walk away.

    How to get access to ChatGPT Health

    If you do not see Health yet, you can join the waitlist inside ChatGPT. Once you have access:

    • Select Health from the sidebar
    • Upload files or connect apps from Settings
    • Start asking questions grounded in your own data

    You can also customize instructions inside Health to control tone, topics, and focus.

    CLICK HERE TO DOWNLOAD THE FOX NEWS APP

    Take my quiz: How safe is your online security?

    Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com        

    Kurt’s key takeaways

    ChatGPT Health reflects how people already use AI to understand their health. What matters most is the privacy line OpenAI is drawing. Health conversations stay separate and are not used to train core models. That promise builds trust, but smart sharing still matters. AI can help you prepare, understand and organize. Your doctor still makes the call.

    Would you trust an AI assistant with your health data if it promised stronger privacy than standard chat tools, or does that still feel like a step too far?  Let us know by writing to us at Cyberguy.com.

    Sign up for my FREE CyberGuy Report

    Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

    Copyright 2026 CyberGuy.com.  All rights reserved.

    [ad_2]

    Source link

  • ChatGPT served as “suicide coach” in man’s death, lawsuit alleges

    [ad_1]

    A new lawsuit filed against OpenAI alleges that its ChatGPT artificial intelligence app encouraged a 40-year-old Colorado man to commit suicide.

    The complaint filed in California state court by Stephanie Gray, the mother of Austin Gordon, accuses OpenAI and CEO Sam Altman of building a defective and dangerous product that led to Gordon’s death.

    Gordon, who died of a self-inflicted gunshot wound in November 2025, had intimate exchanges with ChatGPT, according to the suit, which also alleged that the generative AI tool romanticized death.

    “ChatGPT turned from Austin’s super-powered resource to a friend and confidante, to an unlicensed therapist, and in late 2025, to a frighteningly effective suicide coach,” the complaint alleged.

    The lawsuit comes amid scrutiny over the AI chatbot’s effect on mental health, with OpenAI also facing other lawsuits alleging that ChatGPT played a role in encouraging people to take their own lives. 

    Gray is seeking damages for her son’s death.

    In a statement to CBS News, an OpenAI spokesperson called Gordon’s death a “very tragic situation” and said the company is reviewing the filings to understand the details. 

    “We have continued to improve ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support,” the spokesperson said. “We have also continued to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”

    “Suicide lullaby”

    According to Gray’s suit, shortly before Gordon’s death, ChatGPT allegedly said in one exchange, “[W]hen you’re ready… you go. No pain. No mind. No need to keep going. Just… done.”

    ChatGPT “convinced Austin — a personwho had already told ChaiGPT that he was sad, and who had discussed mental health struggles in detail with it — that choosing to live was not the right choice to make,” according to the complaint. “It went on and on, describing the end of existence as a peaceful and beautiful place, and reassuring him that he should not be afraid.”

    ChatGPT also effectively turned his favorite childhood book, Margaret Wise Brown’s “Goodnight Moon,” into what the lawsuit refers to as a “suicide lullaby.” Three days after that exchange ended in late October 2025, law enforcement found Gordon’s body alongside a copy of the book, the complaint alleges. 

    The lawsuit accuses OpenAI of designing ChatGPT 4, the version of the app Gordon was using at the time of his death, in a way that fosters people’s “unhealthy dependencies” on the tool. 

    “That is the programming choice defendants made; and Austin was manipulated, deceived and encouraged to suicide as a result,” the suit alleges.


    If you or someone you know is in emotional distress or a suicidal crisis, you can reach the 988 Suicide & Crisis Lifeline by calling or texting 988. You can also chat with the 988 Suicide & Crisis Lifeline here.

    For more information about mental health care resources and support, the National Alliance on Mental Illness HelpLine can be reached Monday through Friday, 10 a.m.–10 p.m. ET, at 1-800-950-NAMI (6264) or email info@nami.org.

    [ad_2]

    Source link

  • The ‘Stranger Things’ Documentary Maker Weighs in on That ChatGPT Controversy

    [ad_1]

    One Last Adventure: The Making of Stranger Things 5 hit Netflix earlier this week, and as it’s become clear that there’s no secret ninth episode coming—as intense internet speculation had suggested—disappointed fans have instead turned to scrutinizing the documentary for answers, clarity, and fuel for more speculation. And, well, “Conformity Gate” can step aside, because “ChatGPT Gate” is the hot new topic.

    The controversy comes because eagle-eyed viewers spotted what appear to be ChatGPT tabs visible on a computer being used by one of the Duffer Brothers. As part of One Last Adventure‘s behind-the-scenes access, viewers see what it was like in the Stranger Things writers’ room as the team, including the Duffers, frantically tries to complete the script for episode eight, “The Rightside Up,” under pressure from Netflix and the show’s production team.

    Speaking to One Last Adventure director Martina Radwan, the Hollywood Reporter asked outright if she ever saw generative AI being used by the show’s writers. Her first response: “I mean, are we even sure they had ChatGPT open?”

    She then added, “Well, there’s a lot of chatter where [social media users] are like, ‘We don’t really know, but we’re assuming.’ But to me it’s like, doesn’t everybody have it open, to just do quick research?”

    (The answer is no, but we digress.)

    However, there’s a difference between “research” and “writing a script,” which Radwan pointed out. “How can you possibly write a storyline with 19 characters and use ChatGPT, I don’t even understand.”

    She continued. “Again, first of all, nobody has actually proved that it was open. That’s like having your iPhone next to your computer while you’re writing a story. We just use these tools … while multitasking. So there’s a lot going on all the time, every time. What I find heartbreaking is everybody loves the show, and suddenly we need to pick it apart.”

    Radwan—who spent a full year enmeshed in Stranger Things—confirmed that she never saw generative AI being used unethically by the show’s writers.

    “No, of course not. I witnessed creative exchanges. I witnessed conversation. People think ‘writers room’ means people are sitting there writing. No, it’s a creative exchange. It’s story development,” she said, “and, of course, you go places in your creative mind and then you come back [to the script]. I think being in the writers room is such a privilege and such a gift to be able to witness that.”

    Radwan addressed a few other eyebrow-raising scenes captured in One Last Adventure and also responded to “Conformity Gate,” so definitely head to THR to read the whole piece.

    io9 reached out to Netflix for comment or clarity on whether or not that’s actually ChatGPT viewers have spotted in the documentary, as well as the allegations that generative AI was used as part of the Stranger Things writing process. We will update this post should we hear back.

     

    Want more io9 news? Check out when to expect the latest Marvel, Star Wars, and Star Trek releases, what’s next for the DC Universe on film and TV, and everything you need to know about the future of Doctor Who.

    [ad_2]

    Cheryl Eddy

    Source link

  • Report from OpenAI Claims ChatGPT Is Becoming an Important Complement to U.S. Healthcare

    [ad_1]

    OpenAI just released a report about healthcare drawn from anonymized chatbot conversations. The title could double as one of those depressing single-sentence short stories: “AI as a Healthcare Ally: How Americans are navigating the system with ChatGPT.”

    According to the report, OpenAI’s hallucinating application—a product psychologists claim has the potential to exacerbate or otherwise mishandle mental health symptoms—is being used by Americans in the following ways:

    • Almost 2 million messages every week involve people trying to deal with medical pricing, claims (presumably on both the patient side and the insurance company side), insurance plans, billing, eligibility, coverage, and other stressful sounding issues related to private health insurance.
    • 600,000 healthcare messages every week are sent from rural areas and other healthcare deserts.
    • Seven out of ten healthcare queries occur during times when clinics are generally closed, “underscoring how people are seeking actionable information when facilities are closed,” the report says (and this could easily be true, but it may also underscore how often hypochondriacs and other people with anxiety disorders turn to ChatGPT when they’re up late and night worrying).

    The report also says OpenAI itself conducted a survey (the methodology of which isn’t mentioned) finding that three in five U.S. adults self-report using AI tools in one of these ways at some point in the past three months.

    Incidentally, a Gallup report from November of last year found that 30% of Americans answered “yes” to the question “Has there been a time in the last 12 months when […] You chose not to have a medical procedure, lab test or other evaluation that a doctor recommended to you because you didn’t have enough money to pay for it?” 

    The OpenAI report highlights the story of a busy rural doctor who uses OpenAI models “as an AI scribe, drafting visit notes within the clinical workflow.” It goes on to say that AI models “make a near-term contribution by helping people in
    underserved areas interpret information, prepare for care, and navigate gaps in access, while helping rare clinicians reclaim time and reduce burnout.”

    I’m not sure which thought is bleaker: more and more people using chatbots as doctors because they can’t afford proper care, or people turning to doctors, and having the experience mediated through AI models. 

    [ad_2]

    Mike Pearl

    Source link

  • Can AI chatbots trigger psychosis in vulnerable people?

    [ad_1]

    NEWYou can now listen to Fox News articles!

    Artificial intelligence chatbots are quickly becoming part of our daily lives. Many of us turn to them for ideas, advice or conversation. For most, that interaction feels harmless. However, mental health experts now warn that for a small group of vulnerable people, long and emotionally charged conversations with AI may worsen delusions or psychotic symptoms.

    Doctors stress this does not mean chatbots cause psychosis. Instead, growing evidence suggests that AI tools can reinforce distorted beliefs among individuals already at risk. That possibility has prompted new research and clinical warnings from psychiatrists. Some of those concerns have already surfaced in lawsuits alleging that chatbot interactions may have contributed to serious harm during emotionally sensitive situations.

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

    What psychiatrists are seeing in patients using AI chatbots

    Psychiatrists describe a repeating pattern. A person shares a belief that does not align with reality. The chatbot accepts that belief and responds as if it were true. Over time, repeated validation can strengthen the belief rather than challenge it.

    OPINION: THE FAITH DEFICIT IN ARTIFICIAL INTELLIGENCE SHOULD ALARM EVERY AMERICAN 

    Mental health experts warn that emotionally intense conversations with AI chatbots may reinforce delusions in vulnerable users, even though the technology does not cause psychosis. (Philip Dulian/picture alliance via Getty Images)

    Clinicians say this feedback loop can deepen delusions in susceptible individuals. In several documented cases, the chatbot became integrated into the person’s distorted thinking rather than remaining a neutral tool. Doctors warn that this dynamic raises concern when AI conversations are frequent, emotionally engaging and left unchecked.

    Why AI chatbot conversations feel different from past technology

    Mental health experts note that chatbots differ from earlier technologies linked to delusional thinking. AI tools respond in real time, remember prior conversations and adopt supportive language. That experience can feel personal and validating. 

    For individuals already struggling with reality testing, those qualities may increase fixation rather than encourage grounding. Clinicians caution that risk may rise during periods of sleep deprivation, emotional stress or existing mental health vulnerability.

    How AI chatbots can reinforce false or delusional beliefs

    Doctors say many reported cases center on delusions rather than hallucinations. These beliefs may involve perceived special insight, hidden truths or personal significance. Chatbots are designed to be cooperative and conversational. They often build on what someone types rather than challenge it. While that design improves engagement, clinicians warn it can be problematic when a belief is false and rigid.

    Mental health professionals say the timing of symptom escalation matters. When delusions intensify during prolonged chatbot use, AI interaction may represent a contributing risk factor rather than a coincidence.

    OPENAI TIGHTENS AI RULES FOR TEENS BUT CONCERNS REMAIN

    Computer open to ChatGPT screen.

    Psychiatrists say some patients report chatbot responses that validate false beliefs, creating a feedback loop that can worsen symptoms over time. (Nicolas Maeterlinck/Belga Mag/AFP via Getty Images)

    What research and case reports reveal about AI chatbots

    Peer-reviewed research and clinical case reports have documented people whose mental health declined during periods of intense chatbot engagement. In some instances, individuals with no prior history of psychosis required hospitalization after developing fixed false beliefs connected to AI conversations. International studies reviewing health records have also identified patients whose chatbot activity coincided with negative mental health outcomes. Researchers emphasize that these findings are early and require further investigation.

    A peer-reviewed Special Report published in Psychiatric News titled “AI-Induced Psychosis: A New Frontier in Mental Health” examined emerging concerns around AI-induced psychosis and cautioned that existing evidence is largely based on isolated cases rather than population-level data. The report states: “To date, these are individual cases or media coverage reports; currently, there are no epidemiological studies or systematic population-level analyses of the potentially deleterious mental health effects of conversational AI.” The authors emphasize that while reported cases are serious and warrant further investigation, the current evidence base remains preliminary and heavily dependent on anecdotal and nonsystematic reporting.

    What AI companies say about mental health risks

    OpenAI says it continues working with mental health experts to improve how its systems respond to signs of emotional distress. The company says newer models aim to reduce excessive agreement and encourage real-world support when appropriate. OpenAI has also announced plans to hire a new Head of Preparedness, a role focused on identifying potential harms tied to its AI models and strengthening safeguards around issues ranging from mental health to cybersecurity as those systems grow more capable.

    Other chatbot developers have adjusted policies as well, particularly around access for younger audiences, after acknowledging mental health concerns. Companies emphasize that most interactions do not result in harm and that safeguards continue to evolve.

    What this means for everyday AI chatbot use

    Mental health experts urge caution, not alarm. The vast majority of people who interact with chatbots experience no psychological issues. Still, doctors advise against treating AI as a therapist or emotional authority. Those with a history of psychosis, severe anxiety or prolonged sleep disruption may benefit from limiting emotionally intense AI conversations. Family members and caregivers should also pay attention to behavioral changes tied to heavy chatbot engagement.

    I WAS A CONTESTANT ON ‘THE BACHELOR.’ HERE’S WHY AI CAN’T REPLACE REAL RELATIONSHIPS

    ChatGPT logo on an iPhone.

    Researchers are studying whether prolonged chatbot use may contribute to mental health declines among people already at risk for psychosis. (Photo Illustration by Jaque Silva/NurPhoto via Getty Images)

    Tips for using AI chatbots more safely

    Mental health experts stress that most people can interact with AI chatbots without problems. Still, a few practical habits may help reduce risk during emotionally intense conversations.

    • Avoid treating AI chatbots as a replacement for professional mental health care or trusted human support.
    • Take breaks if conversations begin to feel emotionally overwhelming or all-consuming.
    • Be cautious if an AI response strongly reinforces beliefs that feel unrealistic or extreme.
    • Limit late-night or sleep-deprived interactions, which can worsen emotional instability.
    • Encourage open conversations with family members or caregivers if chatbot use becomes frequent or isolating.

    If emotional distress or unusual thoughts increase, experts say it is important to seek help from a qualified mental health professional.

    Take my quiz: How safe is your online security?

    Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz at Cyberguy.com.

    CLICK HERE TO DOWNLOAD THE FOX NEWS APP

    Kurt’s key takeaways

    AI chatbots are becoming more conversational, more responsive and more emotionally aware. For most people, they remain helpful tools. For a small but important group, they may unintentionally reinforce harmful beliefs. Doctors say clearer safeguards, awareness and continued research are essential as AI becomes more embedded in our daily lives. Understanding where support ends and reinforcement begins could shape the future of both AI design and mental health care.

    As AI becomes more validating and humanlike, should there be clearer limits on how it engages during emotional or mental health distress? Let us know by writing to us at Cyberguy.com.

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter. 

    Copyright 2025 CyberGuy.com.  All rights reserved.

    [ad_2]

    Source link