A new lawsuit filed against OpenAI alleges that its ChatGPT artificial intelligence app encouraged a 40-year-old Colorado man to commit suicide.
The complaint filed in California state court by Stephanie Gray, the mother of Austin Gordon, accuses OpenAI and CEO Sam Altman of building a defective and dangerous product that led to Gordon’s death.
Gordon, who died of a self-inflicted gunshot wound in November 2025, had intimate exchanges with ChatGPT, according to the suit, which also alleged that the generative AI tool romanticized death.
“ChatGPT turned from Austin’s super-powered resource to a friend and confidante, to an unlicensed therapist, and in late 2025, to a frighteningly effective suicide coach,” the complaint alleged.
The lawsuit comes amid scrutiny over the AI chatbot’s effect on mental health, with OpenAI also facing other lawsuits alleging that ChatGPT played a role in encouraging people to take their own lives.
Gray is seeking damages for her son’s death.
In a statement to CBS News, an OpenAI spokesperson called Gordon’s death a “very tragic situation” and said the company is reviewing the filings to understand the details.
“We have continued to improve ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support,” the spokesperson said. “We have also continued to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”
“Suicide lullaby”
According to Gray’s suit, shortly before Gordon’s death, ChatGPT allegedly said in one exchange, “[W]hen you’re ready… you go. No pain. No mind. No need to keep going. Just… done.”
ChatGPT “convinced Austin — a personwho had already told ChaiGPT that he was sad, and who had discussed mental health struggles in detail with it — that choosing to live was not the right choice to make,” according to the complaint. “It went on and on, describing the end of existence as a peaceful and beautiful place, and reassuring him that he should not be afraid.”
ChatGPT also effectively turned his favorite childhood book, Margaret Wise Brown’s “Goodnight Moon,” into what the lawsuit refers to as a “suicide lullaby.” Three days after that exchange ended in late October 2025, law enforcement found Gordon’s body alongside a copy of the book, the complaint alleges.
The lawsuit accuses OpenAI of designing ChatGPT 4, the version of the app Gordon was using at the time of his death, in a way that fosters people’s “unhealthy dependencies” on the tool.
“That is the programming choice defendants made; and Austin was manipulated, deceived and encouraged to suicide as a result,” the suit alleges.
For more information about mental health care resources and support, the National Alliance on Mental Illness HelpLine can be reached Monday through Friday, 10 a.m.–10 p.m. ET, at 1-800-950-NAMI (6264) or email info@nami.org.
Apple and Google’s surprise AI partnership announcement on Monday sent shockwaves across the tech industry (and lifted Google’s market cap above $4 trillion). The two tech giants’ deal to infuse Google’s AI technology into Apple’s mobile software, including in an updated version of the Siri digital assistant, has major implications in the high-stakes battle to dominate AI and to own the platform that will define the next generation of computing.
While there are still many unanswered questions about the partnership, including the financial component and the duration of the deal, some key takeaways are already clear. Here’s why the deal is good news for Google, so-so news for Apple, and bad news for OpenAI.
The deal is further validation that Google has got its AI mojo back
When OpenAI debuted ChatGPT in November 2022, and throughout a good part of the next two years, many industry observers had their doubts about Google’s prospects in the changing landscape. The search giant at times appeared to be floundering as it raced to field models that could be as capable as OpenAI’ s ChatGPT and Anthropic’s Claude. Google endured several embarrassing product debuts, when its Bard chatbot and then its successor Gemini models got facts wrong, recommended glue as a pizza topping, and generated images of historically anachronistic Black Nazis.
But today, Google’s latest Gemini models (Gemini 3) are among the most capable on the market and gaining traction among both consumers and businesses. The company has also been attracting lots of customers to its Google Cloud, in part because of the power of its bespoke AI chips, called tensor processing units (or TPUs), which may offer cost and speed advantages over Nvidia’s graphics processing units (GPUs) for running AI models.
Apple’s statement on Monday that “after careful consideration” it had determined that Google’s AI technology “provides the most capable foundation for Apple Foundation Models” served as Gemini’s ultimate validation—particularly given that until now, OpenAI was Apple’s preferred technology provider for “Apple Intelligence” offerings. Analysts at Bank of America said the deal reinforced “Gemini’s position as a leading LLM for mobile devices” and should also help strengthen investor confidence in the durability of Google’s search distribution and long-term monetization.
Hamza Mudassir, who runs an AI agent startup and teaches strategy and policy at the University of Cambridge’s Judge School of Business, said Apple’s decision is likely about more than just Gemini’s technical capabilities. Apple does not allow partners to train on Apple user data, and Mudassir theorized that Apple may have concluded Google’s control over its ecosystem—such as owning its own cloud—could provide data privacy and intellectual property guarantees that perhaps OpenAI or Anthropic couldn’t match.
The deal also likely translates directly into revenue for Google. Although the financial details of the were not disclosed, a previous report from Bloomberg suggested Apple was paying Google about $1 billion a year for the right to use its tech.
The bigger prize for Google may be the foot-in-the-door the deal provides to Apple’s massive distribution channel: the approximately 1.5 billion iPhone users worldwide. With Gemini powering the new version of Siri, Google may get a share of any revenue those users generate through product discovery and purchases made through a Gemini-powered Siri. Eventually, it might potentially even lead to an arrangement that would see Gemini’s chatbot app pre-installed on iPhones.
For Apple, the implications of the deal are a bit more ambivalent
Apple’s Tim Cook
David Paul Morris/Bloomberg via Getty Images
The iPhone maker will obviously benefit from giving users a much more capable Siri, as well as other AI features, at an attractive cost and while guaranteeing user privacy. Dan Ives, an equity analyst who covers Apple for Wedbush, said in a note the deal provided Apple with “a stepping stone to accelerate its AI strategy into 2026 and beyond.”
But Apple’s continuing need to rely on partners—first OpenAI and now Google—to deliver these AI features is a worrisome sign, suggesting that Apple, a champion of vertical integration, is still struggling to build its own LLM.
It’s a problem that has dogged the company since the beginning of the generative AI era: For months last year several Apple Intelligence features were delayed, and the long-awaited debut of an updated Siri has been pushed back numerous times. These delays have taken a toll on Apple’s reputation as a tech leader and angered customers, some of whom filed a class action lawsuit against the company after the AI features promoted in ads for the iPhone 16 weren’t initially available on the device.
When Apple CEO Tim Cook promised an updated version of Siri would be released in 2026, many assumed it would be powered by Apple’s own AI models. But apparently those models are not yet ready for prime time and the new Siri will be powered by Google instead.
Daniel Newman, an analyst at the Futurum Group, said that 2026 is a “make-or-break year” for Apple. “We have long said the company has the user base and distribution that allows it to be more patient in chasing new trends like AI, but this is a critical year for Apple,” Newman said.
Cook has shaken up the ranks, installing a new head of AI who previously worked at Google on Gemini. And, if the delays turn out to be related to Apple’s specific requirements around things like privacy, it may ultimately prove to have been worth the wait. Ideally, Apple would want an AI model that matches the capabilities of those from OpenAI, Anthropic, and Google but which is compact enough to run entirely on an iPhone, so that user data does not have to be transmitted to the cloud. It’s possible, said Mudassir, that Apple is grappling with technical limitations involving the amount of power these models consume and how much heat they generate. Partnering with Google buys Apple time to make breakthroughs in compression and architecture while also getting Wall Street “off its back,” he said.
Apple defenders note that the company is rarely a first mover in new technology—it was not the first to create an MP3 player, a smartphone, wireless earphones, or a smart watch, yet it came from behind to dominate many of those product categories with a combination of design innovation and savvy marketing. And Apple has a history of learning from partners for key technology, such as chips, before ultimately bringing these efforts in-house.
Or, in the case of internet search, Apple simply partnered with Google for the long-term, using the Google engine to handle search queries in its Safari browser. The fact that Apple never developed its own search engine has not hurt its growth. Could the same principle hold true for AI?
But the Apple-Google tie up is almost certainly bad news for OpenAI
OpenAI CEO Sam Altman
Florian Gaertner/Photothek via Getty Images
While the Google partnership is not exclusive, meaning that Apple may continue to rely on OpenAI’s models for some of its Apple Intelligence features and OpenAI still has a chance to prove its models’ worth to Cupertino, Apple’s decision to go with Google is definitely a blow. At the very least, it solidifies the narrative that Google has not only caught up with OpenAI, but has now edged past it in having the best AI models in the market.
Deprived of built-in distribution through Apple’s customer base, OpenAI may find it harder to grow its own user base. The company currently boasts more than 800 million weekly users, but recent reports suggest that the rate of usage may be slowing. OpenAI CEO Sam Altman has noted that many people currently see ChatGPT as synonymous with AI. But that perception could fray if Apple users find delight in using Gemini through Siri and come to see Gemini as the better model. . Altman told reporters last month that he sees Apple as his company’s primary long-term rival. OpenAI is in the process of developing a new kind of AI device, with help from Apple’s former chief designer Jony Ive, that Altman hopes will rival the phone as the primary way consumers interface with AI assistants. That device may debut this year. As long as Apple was dependent on ChatGPT to power Siri, OpenAI had a good view into the capabilities its new device would be competing against. OpenAI is unlikely to have as much insight into Apple’s AI capabilities going forward, which may make it harder for the upstart to position its new device as an iPhone killer.
OpenAI has to hope its new device is a hit that may enable it to cement users into a closed ecosystem, not dissimilar to the one Apple has built around its hardware device and iOS software. This “walled garden” approach is one way to keep users from switching to rival products when they offer broadly similar capabilities. OpenAI will also have to hope its AI researchers achieve breakthroughs that give it a more decisive and long-lasting edge over Google. That might convince Apple to rely more heavily on OpenAI again in the future. Or, it could obviate the need for OpenAI to have distribution on Apple’s devices at all.
Although public backlash against data centers has been intense over the past 12 months, all of the tech industry’s biggest companies have promised additional buildouts of AI infrastructure in the coming year. That includes OpenAI partner Microsoft, which, on Tuesday, announced what it calls a “community-first” approach to AI Infrastructure.
Microsoft’s announcement, which comes only a day after Mark Zuckerberg said that Meta would launch its own AI infrastructure program, isn’t unexpected. Last year, the company announced that it planned to spend billions to expand its AI capacity. What is a little unusual are the promises the company has now made about how it will handle that buildout.
On Tuesday, Microsoft promised to take the “steps needed to be a good neighbor in the communities where we build, own, and operate our data centers.” That includes, according to the company, its plans to “pay its own way” to ensure that local electricity bills don’t go through the roof in the places where it builds. Specifically, the company says it will work with local utility companies to ensure that the rates it pays cover its full share of its burden on the local grid.
“We will work closely with utility companies that set electricity prices and state commissions that approve these prices,” Microsoft said. “Our goal is straightforward: to ensure that the electricity cost of serving our data centers is not passed on to residential customers.”
The company has also promised to create jobs in the communities where it touches down, as well as to minimize the amount of water that its centers need to function. Water usage by data centers has obviously been a contentious topic, with data centers accused of creating substantial issues for local water supplies and spurring other environmental concerns. The jobs promise is also relevant, given lingering questions around the number of both short-term and permanent jobs that such projects typically create.
It’s pretty clear why Microsoft feels it is necessary to make these promises right now. Data center construction has become a political flashpoint in recent years, generating intense backlash and protest from local communities. Data Center Watch, an organization that tracks anti-data center activism, has observed that there are as many as 142 different activist groups across 24 states currently organizing against such developments.
This backlash has already impacted Microsoft directly. In October, the company abandoned plans for a new data center in Caledonia, Wisconsin, after “community feedback” was overwhelmingly negative. In Michigan, meanwhile, the company’s plans for a similar project in a small central township have recently inspired locals to take to the streets in protest. On Tuesday, around the same time Microsoft announced its “good neighbor” pledge, an op-ed in an Ohio newspaper (where Microsoft is currently developing several data center campuses) excoriated the company, blaming it and its peers for climate change.
Techcrunch event
San Francisco | October 13-15, 2026
Concerns have extended even to the White House, where an AI buildout has become one of the major tenets of the Trump administration. On Monday, President Trump took to social media to promise that Microsoft specifically would make “major changes” to ensure that Americans’ electricity bills wouldn’t rise. Trump said the changes would “ensure that Americans don’t ‘pick up the tab’ for their power consumption.”
In short, by now, Microsoft understands that it’s fighting a tide of negative public opinion. It remains to be seen whether the company’s new assurances of jobs, environmental stewardship, and low electricity bills will be enough to turn the tide.
OpenAI reportedly asking contractors to upload real work from past jobs
OpenAI and training data company Handshake AI are asking third-party contractors to upload real work that they did in past and current jobs, according to a report in Wired.
This appears to be part of a larger strategy across AI companies that are hiring contractors to generate high-quality training data in the hopes that this will eventually allow their models to automate more white-collar work.
In OpenAI’s case, a company presentation reportedly asks contractors to describe tasks they’ve performed at other jobs and upload examples of “real, on-the-job work” that they’ve “actually done.” These examples can include “a concrete output (not a summary of the file, but the actual file), e.g., Word doc, PDF, Powerpoint, Excel, image, repo.”
The company reportedly instructs contractors to delete proprietary and personally identifiable information before uploading, and it points them to a ChatGPT “Superstar Scrubbing” tool to do so.
Nonetheless, intellectual property lawyer Evan Brown told Wired that any AI lab taking this approach is “putting itself at great risk” with an approach that requires “a lot of trust in its contractors to decide what is and isn’t confidential.”
Before the end of 2025, Disney surprised everyone (derogatory) by entering into a $1 billion deal with OpenAI. With it, over 200 Disney characters have been licensed for the Sora video platform and image generator ChatGPT. From the general public and Hollywood alike, reactions weren’t exactly thrilled, but SAG-AFTRA is keeping an open mind. Sort of.
Talking to Deadline at CES, union presidents Sean Astin and Duncan Crabtree-Ireland were asked what the deal could mean for the acting industry. In 2023 and 2024, SAG-AFTRA went on strike to secure better protections against synthetic performers and actors being made to sign their voice rights away. Crabtree-Ireland admitted he doesn’t know the full details of the agreement made between the two companies but revealed they contacted SAG-AFTRA before the public reveal.
In the call announcing the deal, “top execs” from Disney and OpenAI said the deal has “certain assurances,” such as “explicitly excluding any licensing of any performer images or voices,” explained Crabtree-Ireland. “One concern I have, and I expect Sean shares, is precisely why Disney would want to do it. Making a deal like that before the IP litigation, copyright litigation is resolved, could be smart.”
While Disney can’t be stopped from partnering with OpenAI, SAG-AFTRA knows it’s made progress where it can. Crabtree-Ireland noted the union’s contract from 2023 requires companies to proactively disclose if they’ve created and used a synthetic performance, of which there’ve been “zero notices” about so far. But they’re not stopping there: every future negotiation SAG-AFTRA takes part in will involve “looking at how AI is rolling out and developing.” Machine learning technology has greatly evolved in recent years, and the hope is to “continue creating separation between AI, as an algorithmic tool, and humanity.”
Michigan Attorney General Dana Nessel is urging state utility regulators to reconsider their approval of special power contracts for a massive data center planned in Washtenaw County, warning the fast-tracked decision could leave electric customers exposed to higher costs.
The project, tied to Oracle, OpenAI, and developer Related Digital, would be among the largest data centers in the country and is expected to consume as much electricity as nearly one million homes. Its scale has caused concerns among residents, environmental advocates, and consumer watchdogs about long-term impacts on electric rates, grid reliability, and the environment.
Nessel’s move also pits her against Gov. Gretchen Whitmer, a fellow Democrat who has publicly backed the data center as “the largest economic project in Michigan history.” Whitmer celebrated the project when it was announced last fall, citing thousands of construction jobs and hundreds of permanent positions.
On Thursday, U.S. Senate candidate Abdul El-Sayed, a progressive Democrat, released what he called “terms of engagement” aimed at protecting communities from higher utility bills, grid strain, and environmental harm tied to data centers.
At least 15 data center projects have been proposed across the state in the past year.
The split among Democrats is part of a broader debate over whether Michigan should keep fast-tracking energy-hungry data center projects tied to the AI boom.
In her petition, Nessel challenges the commission’s authority to approve the contracts behind closed doors without holding a contested case hearing that would allow discovery, sworn testimony, and full public review. She also questions whether the conditions imposed by the commission are meaningful or enforceable.
In a statement Friday, the Michigan Public Service Commission said it “looks forward to considering Nessel’s petition for rehearing,” but the commission “unequivocally rejects any claim that these contracts were inadequately reviewed.”
The commission said its professional staff, advisory staff, and commissioners were provided with unredacted versions of the special contracts and reviewed them thoroughly to ensure existing customers are protected. The commission said its order recognizes DTE’s legal obligation to serve the data center while imposing what it described as the strongest consumer protections for a data center power contract in the country.
The attorney general is seeking clarification on how those conditions would protect ratepayers, noting that many appear to rely on repeated assurances from DTE, rather than concrete commitments backed by evidence. Nessel also objected to the commission allowing DTE to serve as the project’s financial backstop, rather than requiring the data center operator to provide sufficient collateral to cover potential risks.
“I remain extremely disappointed with the Commission’s decision to fast-track DTE’s secret data center contracts without holding a contested case hearing,” Nessel said in a statement. “This was an irresponsible approach that cut corners and shut out the public and their advocates. Granting approval of these contracts ex parte serves only the interests of DTE and the billion-dollar businesses involved, like Oracle, OpenAI, and Related Companies, not the Michigan public the Commission is meant to protect. ”
She said the commission’s approval process served the interests of DTE and the companies behind the project rather than Michigan residents.
“The Commission imposed some conditions on DTE to supposedly hold ratepayers harmless, but these conditions and how they’ll be enforced remain unclear,” Nessel said. “As Michigan’s chief consumer advocate, it is my responsibility to ensure utility customers in this state are adequately protected, especially on a project so massive, so expensive, and so unprecedented.”
Large portions of the contracts remain heavily redacted, preventing outside parties from verifying DTE’s claims that serving the data center will not raise rates for existing customers. Nessel said a contested case is necessary to review the full contracts, assess affordability claims, and confirm that protections, such as collateral requirements and exit fees are in place.
The commission ordered DTE to formally accept its conditions within 30 days of its Dec. 18 order. Nessel said that timeline complicates decisions about whether further legal challenges are necessary, prompting her office to file the rehearing petition in part to preserve its arguments.
More than 5,000 public comments opposing the data center power deal were submitted to the commission ahead of its December vote. Critics argue the rush to approve the contracts is part of a broader pattern as deep-pocketed utilities and developers seek to capitalize on the AI boom, which is driving a nationwide surge in electricity demand from large-scale data centers.
“As my office continues to review all potential options to defend energy customers in our state, we must demand further clarity on what protections the Commission has put in place and continue to demand a full contested case concerning these still-secret contracts,” Nessel said.
OpenAI just released a report about healthcare drawn from anonymized chatbot conversations. The title could double as one of those depressing single-sentence short stories: “AI as a Healthcare Ally: How Americans are navigating the system with ChatGPT.”
Almost 2 million messages every week involve people trying to deal with medical pricing, claims (presumably on both the patient side and the insurance company side), insurance plans, billing, eligibility, coverage, and other stressful sounding issues related to private health insurance.
600,000 healthcare messages every week are sent from rural areas and other healthcare deserts.
Seven out of ten healthcare queries occur during times when clinics are generally closed, “underscoring how people are seeking actionable information when facilities are closed,” the report says (and this could easily be true, but it may also underscore how often hypochondriacs and other people with anxiety disorders turn to ChatGPT when they’re up late and night worrying).
The report also says OpenAI itself conducted a survey (the methodology of which isn’t mentioned) finding that three in five U.S. adults self-report using AI tools in one of these ways at some point in the past three months.
Incidentally, a Gallup report from November of last year found that 30% of Americans answered “yes” to the question “Has there been a time in the last 12 months when […] You chose not to have a medical procedure, lab test or other evaluation that a doctor recommended to you because you didn’t have enough money to pay for it?”
The OpenAI report highlights the story of a busy rural doctor who uses OpenAI models “as an AI scribe, drafting visit notes within the clinical workflow.” It goes on to say that AI models “make a near-term contribution by helping people in underserved areas interpret information, prepare for care, and navigate gaps in access, while helping rare clinicians reclaim time and reduce burnout.”
I’m not sure which thought is bleaker: more and more people using chatbots as doctors because they can’t afford proper care, or people turning to doctors, and having the experience mediated through AI models.
From infrastructure battles to physical-world intelligence, A.I.’s next chapter is already taking shape. Unsplash
In November, ChatGPT turned three, with a global user base rapidly approaching one billion. At this point, A.I. is no longer an esoteric acronym that needs explaining in news stories. It has become a daily utility, woven into how we work, learn, shop and even love. The field is also far more crowded than it was just a few years ago, with competitors emerging at every layer of the stack.
Over the past year, conversation around A.I. has taken on a more complicated tone. Some argue that consumer chatbots are nearing a plateau. Others warn that startup valuations are inflating into a bubble. And, as always, there’s the persistent anxiety that A.I. may one day outgrow human control altogether.
So what comes next? Much of the industry’s energy is now focused on the infrastructure side of A.I. Big Tech companies are racing to solve the hardware bottlenecks that limit today’s systems, while startups experiment with applications far beyond chatbots. At the same time, researchers are beginning to look past language models altogether, toward models that can reason about the physical world.
Below are the key themes Observer has identified over the past year of covering this space. Many of these developments are still unfolding and are likely to shape the field well into 2026 and beyond.
A.I. chips
Even as OpenAI faces growing competition at the model level, its primary chip supplier, Nvidia, remains in a league of its own. Demand for its GPUs continues to outstrip supply, and no rival has yet meaningfully disrupted its dominance. Traditional semiconductor companies such as AMD and Intel are racing to claw back market share, while some of Nvidia’s largest customers are designing their own chips to reduce dependence on a single supplier.
To borrow from philosopher Ludwig Wittgenstein, the limits of language are the limits of our world. Today’s A.I. systems have grown remarkably fluent in human language—especially English—but language captures only a narrow slice of intelligence. That limitation has prompted some researchers to argue that large language models alone can never reach human-level understanding.
That belief is fueling a push toward so-called “world models,” which aim to teach machines how the physical world works—how objects move, how space is structured, and how cause and effect unfold. LeCun is now leaving Meta to build such a system himself. Fei-Fei Li’s startup, World Labs, unveiled its first model in November after nearly two years of development. Google DeepMind has released early versions through its Genie projects, and Nvidia is betting heavily on physical A.I. with its Cosmos models.
Language-specific A.I.
While pioneering researchers look beyond language, linguistic barriers remain one of A.I.’s most practical challenges. More than half of the internet’s content is written in English, skewing training data and limiting performance in other languages.
It’s only natural that there’s a consumer hardware angle of A.I. This year brought a wave of experiments in wearable A.I.—some met with curiosity, others with discomfort.
Friend, a startup selling an A.I. pendant, sparked backlash after a New York City subway campaign framed its product as a substitute for human companionship. In December, Meta acquired Limitless, the maker of a $99 wearable that records and summarizes conversations. Earlier in the year, Amazon bought Bee, which produces a $50 bracelet designed to transcribe daily activity and generate summaries.
Meta is also developing a new line of smart glasses with EssilorLuxottica, the company behind Ray-Ban and Oakley. In July, Mark Zuckerberg went so far as to suggest that people without A.I.-enhanced glasses could eventually face a “significant cognitive disadvantage.” Meanwhile, OpenAI is quietly collaborating with former Apple design chief Jony Ive on a mysterious hardware project of its own. This all suggests the next phase of A.I. may be something we wear, not just something we type into.
OpenAI is seeking a new “head of preparedness” to guide the company’s safety strategy amid mounting concerns over how artificial intelligence tools could be misused.
According to the job posting, the new hire will be paid $555,000 to lead the company’s safety systems team, which OpenAI says is focused on ensuring AI models are “responsibly developed and deployed.” The head of preparedness will also be tasked with tracking risks and developing mitigation strategies for what OpenAI calls “frontier capabilities that create new risks of severe harm.”
“This will be a stressful job and you’ll jump into the deep end pretty much immediately,” CEO Sam Altman wrote in an X post describing the position over the weekend.
He added, “This is a critical role at an important time; models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges.”
OpenAI did not immediately respond to a request for comment.
The company’s investment in safety efforts comes as scrutiny intensifies over artificial intelligence’s influence on mental health, following multiple allegations that OpenAI’s chatbot, ChatGPT, was involved in interactions preceding a number of suicides.
In one case earlier this year covered by CBS News, the parents of a 16-year-old sued the company, alleging that ChatGPT encouraged their son to plan his own suicide. That prompted OpenAI to announce new safety protocols for users under 18.
ChatGPT also allegedly fueled what a lawsuit filed earlier this month described as the “paranoid delusions” of a 56-year-old man who murdered his mother and then killed himself. At the time, OpenAI said it was working on improving its technology to help ChatGPT recognize and respond to signs of mental or emotional distress, de-escalate conversations and guide people toward real-world support.
Beyond mental health concerns, worries have also increased over how artificial intelligence could be used to carry out cybersecurity attacks. Samantha Vinograd, a CBS News contributor and former top Homeland Security official in the Obama administration, addressed the issue on CBS News’ “Face the Nation with Margaret Brennan” on Sunday.
“AI doesn’t just level the playing field for certain actors,” she said. “It actually brings new players onto the pitch, because individuals, non-state actors, have access to relatively low-cost technology that makes different kinds of threats more credible and more effective.”
Altman acknowledged the growing safety hazards AI poses in his X post, writing that while the models and their capabilities have advanced quickly, challenges have also started to arise.
“The potential impact of models on mental health was something we saw a preview of in 2025; we are just now seeing models get so good at computer security they are beginning to find critical vulnerabilities,” he wrote.
Now, he continued, “We are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides … in a way that lets us all enjoy the tremendous benefits.”
According to the job posting, a qualified applicant would have “deep technical expertise in machine learning, AI safety, evaluations, security or adjacent risk domains” and have experience with “designing or executing high-rigor evaluations for complex technical systems,” among other qualifications.
OpenAI first announced the creation of a preparedness team in 2023, according to TechCrunch.
A while back, we stopped paying for Spotify. It wasn’t out of protest or principle—it was just one of those decisions you make when you realize how many monthly charges have crept into your life. We already have Apple Music as a part of the Apple One bundle, so it made sense to stop paying for one more thing.
In practice, though, it was kind of annoying. The problem isn’t the catalog or interface. In fact, there are a lot of things I prefer about Spotify over Apple Music. The real problem, however, was the decade of carefully built playlists. Rebuilding them manually in Apple Music would take hours. Having to add every song, one at a time, meant enough friction that, for a while, we just… didn’t do it.
Sure, there are services you can pay for to move your Spotify playlists to Apple Music, but I’m not sure how I feel about random third-party services that require you to sign into your Spotify and Apple accounts. Actually, I know exactly how I feel about them, and it’s just not something I’m going to do.
Then, almost accidentally, I found what might be the most genuinely useful thing I’ve done with ChatGPT on an iPhone yet.
An Inc.com Featured Presentation
Recently, the ChatGPT iOS app added app integrations, including the ability to interact directly with Apple Music. That alone sounded mildly interesting. I played around with it long enough to connect my Apple Music account and ask ChatGPT to make me a Christmas Playlist. What I really wanted, though, was the playlist I’ve been listening to for years–the one I made in Spotify.
Then I realized that ChatGPT could probably just recreate that playlist, but I didn’t want to have to type up the whole list. Instead, I opened Spotify, pulled up my Christmas playlist, and took a few screenshots. Then I opened ChatGPT and said, essentially: “Create this playlist in Apple Music.”
That was it. ChatGPT read the screenshot, identified every song, matched them in Apple Music, and built the playlist automatically. There was no manual searching or copy-pasting track names. And, most importantly, there were no sketchy third-party migration tools involved.
Go inside one interesting founder-led company each day to find out how its strategy works, and what risk factors it faces. Sign up for 1 Smart Business Story from Inc. on Beehiiv.
OpenAI is looking for a new Head of Preparedness who can help it anticipate the potential harms of its models and how they can be abused, in order to guide the company’s safety strategy. It comes at the end of a year that’s seen OpenAI hit with numerous accusations about ChatGPT’s impacts on users’ mental health, including a few wrongful deathlawsuits. In a post on X about the position, OpenAI CEO Sam Altman acknowledged that the “potential impact of models on mental health was something we saw a preview of in 2025,” along with other “real challenges” that have arisen alongside models’ capabilities. The Head of Preparedness “is a critical role at an important time,” he said.
Per the job listing, the Head of Preparedness (who will make $555K, plus equity), “will lead the technical strategy and execution of OpenAI’s Preparedness framework, our framework explaining OpenAI’s approach to tracking and preparing for frontier capabilities that create new risks of severe harm.” It is, according to Altman, “a stressful job and you’ll jump into the deep end pretty much immediately.”
Over the last couple of years, OpenAI’s safety teams have undergone a lot of changes. The company’s former Head of Preparedness, Aleksander Madry, was reassigned back in July 2024, and Altman said at the time that the role would be taken over by execs Joaquin Quinonero Candela and Lilian Weng. Weng left the company a few months later, and in July 2025, Quinonero Candela announced his move away from the preparedness team to lead recruiting at OpenAI.
There was a time when most Americans had little to no knowledge about their local data center. Long the invisible but critical backbone of the internet, server farms have rarely been a point of interest for folks outside of the tech industry, let alone an issue of particularly captivating political resonance.
Well, as of 2025, it would appear those days are officially over.
Over the past 12 months, data centers have inspired protests in dozens of states, as regional activists have sought to combat America’s ever-increasing compute buildup. Data Center Watch, an organization tracking anti-data center activism, writes that there are currently 142 different activist groups across 24 states that are organizing against data center developments.
Such a sudden populist uprising appears to be a natural response to an industry that has grown so quickly that it’s now showing up in people’s backyards. Indeed, as the AI industry has swelled to dizzying heights, so, too, has the cloud computing business. Recent U.S. Census Bureau data shows that, since 2021, construction spending on data centers has skyrocketed a stunning 331%. Spending on these projects totals in the hundreds of billions of dollars. So many new data centers have been proposed in recent months that many experts believe that a majority of them will not — and, indeed, could not possibly — be built.
This buildout shows no signs of slowing down in the meantime. Major tech giants — including Google, Meta, Microsoft, and Amazon — have all announced significant capital expenditure projections for the new year, a majority of which will likely go toward such projects.
New AI infrastructure isn’t just being pushed by Silicon Valley but by Washington, D.C., where the Trump administration has made artificial intelligence a central plank of its agenda. The Stargate Project, announced in January, set the stage for 2025’s massive AI infrastructure buildout by heralding a supposed “re-industrialization of the United States.”
Techcrunch event
San Francisco | October 13-15, 2026
In the process of scaling itself exponentially, an industry that once had little public exposure has suddenly been thrust into the limelight — and is now suffering backlash. Danny Cendejas, an activist with the nonprofit MediaJustice, has been personally involved in a number of actions against data centers, including a protest that took place in Memphis, Tennessee, earlier this year, where locals came out to decry the expansion of Colossus, a project from Elon Musk’s startup, xAI.
Cendejas told TechCrunch that he meets new people every week who express interest in organizing against a data center in their community. “I don’t think this is going to stop anytime soon,” he said. “I think it’s going to keep building, and we’re going to see more wins — more projects are going to be stopped.”
Evidence in support of Cendejas’ assessment is everywhere you look. Across the country, communities have reacted to newly announced server farms in much the same way the average person might react to the presence of a highly contagious plague. In Michigan, for instance, where developers are currently eyeing 16 different locations for potential data center construction, protesters recently descended upon the state’s capitol, saying things like: “Michiganders do not want data centers in our yards, in our communities.” Meanwhile, in Wisconsin — another development hot spot — angry locals appear to have recently dissuaded Microsoft from using their town as a headquarters for a new 244-acre data center. In Southern California, the tiny city of Imperial Valley recently filed a lawsuit to overturn its county’s approval of a data center project, expressing environmental concerns as the rationale.
The discontent surrounding these projects has gotten so intense that politicians believe it could make or break particular candidates at the ballot box. In November, it was reported that rising electricity costs — which many believe are being driven by the AI boom — could become a critical issue that determines the 2026 midterm elections.
“The whole connection to everybody’s energy bills going up — I think that’s what’s really made this an issue that is so stark for people,” Cendejas told TechCrunch. “So many of us are struggling month to month. Meanwhile, there’s this huge expansion of data centers…[People are wondering] Where is all that money coming from? How are our local governments giving away subsidies and public funds to incentivize these projects, when there’s so much need in our communities?”
In some cases, protests appear to be working and even halting (if only temporarily) planned developments. Data Center Watch claims that some $64 billion worth of developments have been blocked or delayed as the result of grassroots opposition. Cendejas is certainly a believer in the idea that organized action can halt companies in their tracks. “All this public pressure is working,” he said, noting that he could sense a “very palpable anger” around the issue.
Unsurprisingly, the tech industry is fighting back. Earlier this month, Politico reported that a relatively new trade group, the National Artificial Intelligence Association (NAIA), has been “distributing talking points to members of Congress and organizing local data center field trips to better pitch voters on their value.” Tech companies, including Meta, have been taking out ad campaigns to sell voters on the economic benefits of data centers, the outlet wrote. In short: The tech industry’s AI hopes are pegged to a compute buildout of epic proportions, so for now it’s safe to say that in 2026 the server surge will continue, as will the backlash and polarization that surround it.
I’m hearing plenty of statements like that these days, from people smart enough to know that their AI ChatBuddy (my term) doesn’t actually have a personality or a will.
I write about AI a lot. I get a lot of comments on those posts. I talk to business people and regular people about implementing AI, and – I think because of my long stretch of experience with the science and my nuanced approach to how AI should be implemented – people feel like they can trust me with thoughts on AI they might not tell anyone else.
What I hear often, far too often, is how their AI is more than just an interface to some data. Their ChatBuddy snarked back at them, or it said something cute. It made them feel better about themselves. Or worse.
An Inc.com Featured Presentation
Look, I’m all for fun, and I’m down with getting your spark however you want to strike the flint. I don’t, in any way, blame AI users for falling into this trap.
Because it is indeed a trap. It’s on purpose.
The makers of these AI ChatBuddy models are building these emotional attachment hooks into the product. Then they tweak those hooks when the public gives feedback like “It’s too nice,” “It’s not nice enough,” “It agrees with me too much,” “It disagrees with me too much.”
Go inside one interesting founder-led company each day to find out how its strategy works, and what risk factors it faces. Sign up for 1 Smart Business Story from Inc. on Beehiiv.
Famed San Francisco-based startup accelerator and venture capital firm Y Combinator says that one AI model provider has overtaken OpenAI in popularity among the accelerator’s latest batch of startups, eliminating the ChatGPT-maker’s previous market dominance.
On a December 22 episode of official Y Combinator podcast Lightcone, YC general partner Diana Hu said that “shockingly,” Anthropic’s Claude AI models are the most popular among the accelerator’s new winter 2026 batch of startups, dethroning OpenAI.
“For the longest time, OpenAI was the clear winner,” said Hu, adding that when YC started the podcast in February 2024, OpenAI’s models were preferred by over 90 percent of that batch’s firms. Even in early 2025, added YC president and CEO Garry Tan, Anthropic’s models were only preferred by around one-fourth of new YC-backed startups.
But in the past three to six months, said Hu, usage of Anthropic’s Claude models among new YC firms skyrocketed to over 52 percent. Hu partially credited this fast takeoff to the rise of vibe-coding platforms, such as Replit and Lovable, which use Claude models to power software that enables people without coding experience to develop websites and applications through natural language. Several of the entrepreneurs in this latest YC batch are building their own code-generation companies to compete with the likes of Replit and Lovable.
An Inc.com Featured Presentation
Hu said that Anthropic has developed a reputation for providing top-line coding models, and as more startups enter the coding space, they are consistently relying on Claude. But while coding may bring entrepreneurs into the Anthropic ecosystem, that’s not how the “vast majority” of people are using Claude, YC managing partner Jared Friedman said.
“I wonder if there’s a bleed-through effect,” Friedman opined, “where people are using Claude for their personal coding and then as a result, they’re more likely to choose it for their application, even if their application is not doing coding at all.”
Tan postulated that once a user has spent enough time with a certain AI model, they become familiar and comfortable with that model’s “personality,” which makes it harder to switch. Hu agreed, and said that OpenAI’s models have “black cat energy,” while Anthropic’s have that of more of a “happy-go-lucky, very helpful golden retriever.”
Personally, Tan said he is still using ChatGPT as his daily AI tool, mainly because of the platform’s ability to remember details about users across multiple conversations. “It knows me, it knows my personality, it knows the things I think about,” Tan said, adding that memory, and the consumer experiences enabled by it, is fast-becoming a legitimate moat for OpenAI.
Go inside one interesting founder-led company each day to find out how its strategy works, and what risk factors it faces. Sign up for 1 Smart Business Story from Inc. on Beehiiv.
Even as OpenAI works to harden its Atlas AI browser against cyberattacks, the company admits that prompt injections, a type of attack that manipulates AI agents to follow malicious instructions often hidden in web pages or emails, is a risk that’s not going away anytime soon — raising questions about how safely AI agents can operate on the open web.
“Prompt injection, much like scams and social engineering on the web, is unlikely to ever be fully ‘solved,’” OpenAI wrote in a Monday blog post detailing how the firm is beefing up Atlas’ armor to combat the unceasing attacks. The company conceded that “agent mode” in ChatGPT Atlas “expands the security threat surface.”
OpenAI launched its ChatGPT Atlas browser in October, and security researchers rushed to publish their demos, showing it was possible to write a few words in Google Docs that were capable of changing the underlying browser’s behavior. That same day, Brave published a blog post explaining that indirect prompt injection is a systematic challenge for AI-powered browsers, including Perplexity’s Comet.
OpenAI isn’t alone in recognizing that prompt-based injections aren’t going away. The U.K.’s National Cyber Security Centre earlier this month warned that prompt injection attacks against generative AI applications “may never be totally mitigated,” putting websites at risk of falling victim to data breaches. The U.K. government agency advised cyber professionals to reduce the risk and impact of prompt injections, rather than think the attacks can be “stopped.”
For OpenAI’s part, the company said: “We view prompt injection as a long-term AI security challenge, and we’ll need to continuously strengthen our defenses against it.”
The company’s answer to this Sisyphean task? A proactive, rapid-response cycle that the firm says is showing early promise in helping discover novel attack strategies internally before they are exploited “in the wild.”
That’s not entirely different from what rivals like Anthropic and Google have been saying: that to fight against the persistent risk of prompt-based attacks, defenses must be layered and continuously stress-tested. Google’s recent work, for example, focuses on architectural and policy-level controls for agentic systems.
But where OpenAI is taking a different tact is with its “LLM-based automated attacker.” This attacker is basically a bot that OpenAI trained, using reinforcement learning, to play the role of a hacker that looks for ways to sneak malicious instructions to an AI agent.
The bot can test the attack in simulation before using it for real, and the simulator shows how the target AI would think and what actions it would take if it saw the attack. The bot can then study that response, tweak the attack, and try again and again. That insight into the target AI’s internal reasoning is something outsiders don’t have access to, so, in theory, OpenAI’s bot should be able to find flaws faster than a real-world attacker would.
It’s a common tactic in AI safety testing: build an agent to find the edge cases and test against them rapidly in simulation.
“Our [reinforcement learning]-trained attacker can steer an agent into executing sophisticated, long-horizon harmful workflows that unfold over tens (or even hundreds) of steps,” wrote OpenAI. “We also observed novel attack strategies that did not appear in our human red teaming campaign or external reports.”
Image Credits:OpenAI
In a demo (pictured in part above), OpenAI showed how its automated attacker slipped a malicious email into a user’s inbox. When the AI agent later scanned the inbox, it followed the hidden instructions in the email and sent a resignation message instead of drafting an out-of-office reply. But following the security update, “agent mode” was able to successfully detect the prompt injection attempt and flag it to the user, according to the company.
The company says that while prompt injection is hard to secure against in a foolproof way, it’s leaning on large-scale testing and faster patch cycles to harden its systems before they show up in real-world attacks.
An OpenAI spokesperson declined to share whether the update to Atlas’ security has resulted in a measurable reduction in successful injections, but says the firm has been working with third parties to harden Atlas against prompt injection since before launch.
Rami McCarthy, principal security researcher at cybersecurity firm Wiz, says that reinforcement learning is one way to continuously adapt to attacker behavior, but it’s only part of the picture.
“A useful way to reason about risk in AI systems is autonomy multiplied by access,” McCarthy told TechCrunch.
“Agentic browsers tend to sit in a challenging part of that space: moderate autonomy combined with very high access,” said McCarthy. “Many current recommendations reflect that trade-off. Limiting logged-in access primarily reduces exposure, while requiring review of confirmation requests constrains autonomy.”
Those are two of OpenAI’s recommendations for users to reduce their own risk, and a spokesperson said Atlas is also trained to get user confirmation before sending messages or making payments. OpenAI also suggests that users give agents specific instructions, rather than providing them access to your inbox and telling them to “take whatever action is needed.”
“Wide latitude makes it easier for hidden or malicious content to influence the agent, even when safeguards are in place,” per OpenAI.
While OpenAI says protecting Atlas users against prompt injections is a top priority, McCarthy invites some skepticism as to the return on investment for risk-prone browsers.
“For most everyday use cases, agentic browsers don’t yet deliver enough value to justify their current risk profile,” McCarthy told TechCrunch. “The risk is high given their access to sensitive data like email and payment information, even though that access is also what makes them powerful. That balance will evolve, but today the trade-offs are still very real.”
Sam Altman claims that the AI device that OpenAI is currently building with famed Apple designer Jony Ive is actually a family of devices, and teased that they will likely not include a key component used by nearly every other smart device.
In a video interview on Dec. 18, Big Technology’s Alex Kantrowitz pushed the OpenAI co-founder and CEO to provide additional news about the device, which was officially announced in May. Responding to rumors that the device would be phone-sized and lack a screen, Altman clarified that OpenAI will be releasing a “small family of devices,” rather than a single device.
As for the rumored lack of a screen, Altman didn’t provide any firm information, but did opine on the current state of user interfaces for AI applications. He predicted that over time, computers will evolve from “dumb, reactive” machines into smarter, “proactive” entities that can understand user intent. But current devices, like laptops and smartphones, are not “well-suited to that kind of world.”
Altman said that he wants OpenAI’s devices to break some of the “unquestioned assumptions” around how smart devices work, since AI is such a unique technology, with screens being a prime example of such an assumption. Using a screen would limit OpenAI’s device to “the same way we’ve had graphical user interfaces working for many decades,” says Altman, and a keyboard would only slow down interactions.
“I don’t think the current form factor of devices is the optimal fit, it’d be very odd if it were, for this incredible new affordance we have,” Altman said.
Ive has also expressed a distaste for screens in devices over recent years, and has even expressed regret for the “unintended consequences” of his role in popularizing smart devices with screens through the iPhone and iPad.
In a November interview, Ive said that he “can’t bear products that are like a dog wagging their tail in your face,” and that the new devices will be designed to spark joy in users. The devices are still a long way away, though. Ive said in that same interview that he plans to reveal the devices within two years.
OpenAI’s latest AI model launch has raised questions about the company’s wide range of projects and priorities, due in part to an NSFW image that co-founder and CEO Sam Altman generated and shared to promote it.
On December 16, OpenAI released an updated image-generation feature for ChatGPT, powered by its latest text-to-image AI model, named GPT-Image-1.5. Altman posted about the new model on his X account, and, as an example of its capabilities, included an AI-generated image of himself as a shirtless, muscular firefighter standing above a Christmas-themed December calendar.
According to X’s metrics, Altman’s firefighter post has been viewed over four million times and reposted over 1,000 times. Several of those reposts pointed out that the December dates in the calendar aren’t accurate to 2025, while others remarked on the disparity between Altman’s bold claims of using AI to cure cancer and eliminate poverty and OpenAI’s current offerings.
GPT-Image-1.5 is designed to compete against Nano Banana, the popular AI image generator and editor Google released in August. According to a recent report from The Information, OpenAI deprioritized development on new image models several months ago, but when Google released Nano Banana, “leaders at OpenAI rushed to improve its image technology.”
An Inc.com Featured Presentation
The Information also reported that according to some OpenAI employees, for much of 2025 “Altman seemed to be running OpenAI as if it had already conquered the chatbot market,” venturing beyond the core ChatGPT business into AI video and social media with Sora, web browsers with ChatGPT Atlas, and a physical device currently being designed by Jony Ive. Some of these initiatives reportedly “took resources away from efforts to increase ChatGPT’s mass appeal.”
In a video posted to OpenAI’s X account on December 17, OpenAI co-founder and president Greg Brockman admitted that new products like image generation require large amounts of compute, which has forced leadership to make difficult trade-offs.
When OpenAI released its previous frontier image-generation model in March of this year, it set off a viral trend of users generating images in the style of beloved anime production company Studio Ghibli. Usually, having your product go viral is an absolute win for businesses, but according to Brockman, the trend was so massive that OpenAI decided to “take a bunch of compute from research and move it to our deployment” in order to meet the demand. “That was really sacrificing the future for the present,” Brockman said in the video.
Amazon is in discussions with OpenAI to invest $10 billion in the company while supplying more of its AI chips and cloud computing services, according to The Financial Times. The deal would push OpenAI’s valuation over $500 billion but is likely to raise more questions about the company’s circular investment agreements involving chips and data centers.
The two companies are also in talks about the possibility of OpenAI helping Amazon with its online marketplace, similar to deals it has made with Etsy, Shopify and Instacart. However, any agreement still wouldn’t allow Amazon to market OpenAI’s most advanced models on its developer cloud platform, as Microsoft holds the exclusive rights to those until the 2030s.
OpenAI recently restructured its agreement with Microsoft to allow it to use data center capacity from other suppliers. Around the same time, it made a string of deals with NVIDIA, Oracle, AMD and others to build out data center capacity and acquire or rent AI chips.
The new deal would require OpenAI to use Amazon’s Trainium AI chips and rent more data center capacity from Amazon Web Services (AWS). That’s on top of the $38 billion that OpenAI has already committed to renting servers from AWS over the next seven years.
These deals have sounded alarms among investors considering their circular nature. In many of those, including this latest Amazon deal, OpenAI is taking investment money and then sending that cash back to the same company for infrastructure or chips. And the amounts are staggering, with just two companies, Softbank and Oracle, spending a combined $400 billion on new data centers for OpenAI’s compute needs. And so far, OpenAI has lost more money than it makes.
What to know about the $1 billion Disney agreement with OpenAI – CBS News
Watch CBS News
Disney announced Thursday that it would invest $1 billion in OpenAI and license more than 200 of its animated and illustrated characters to use in Sora’s user-generated content. Jo Ling Kent has more.
Time magazine is spotlighting key players in the artificial intelligence revolution for its 2025 Person of the Year, the magazine announced Thursday. “The architects of AI” are the latest recipients of the designation, which for more than a century has been given out on an annual basis to an influential person, group of people or, occasionally, a defining cultural theme or idea.
Previous Person of the Year title-holders have held varying roles in a vast range of occupations, with President Trump taking last year’s cover and Taylor Swift capturing the one before. In 2025,
Time’s 2025 honorific was given to the minds and financiers behind AI’s rise to renown and notoriety, including Nvidia CEO Jensen Huang, Softbank CEO Masayoshi Son and Baidu CEO Robin Li, who spoke directly with the magazine for its feature story.
“Person of the Year is a powerful way to focus the world’s attention on the people that shape our lives,” wrote Sam Jacobs, Time’s editor-in-chief, in an editorial piece about the magazine’s decision. “And this year, no one had a greater impact than the individuals who imagined, designed, and built AI.”
Jacobs described 2025 as “the year when artificial intelligence’s full potential roared into view, and when it became clear that there will be no turning back or opting out,” adding: “Whatever the question was, AI was the answer.”
The magazine prepared two separate covers for the issue. In one, artist Jason Seiler painted an interpretative recreation of the iconic 1932 photograph “Lunch Atop a Skyscraper,” an image that depicted workers seated side-by-side on a steel beam hanging high above New York City during the construction of 30 Rockefeller Plaza, which became a symbol of American resilience during the Great Depression.
A cast of tech industry characters at the forefront of AI development are perched on the beam in Seiler’s recreation. Mark Zuckerberg, of Meta, Lisa Su, of Advanced Micro Devices, Elon Musk, of xAI, Sam Altman, of Open AI, Demis Hassabis, of DeepMind Technologies, Dario Amodei, of Anthropic, and Fei-Fei Li, of Stanford’s Human-Centered AI Institute, are all pictured, along with Huang.
The second cover illustration, by artist Peter Crowther, places the same executives among scaffolding at what looks like a construction site for the giant letters “AI.”
From left, cover art by Jason Seiler and Peter Crowther for TIME’s 2025 Person of the Year magazine spread.
Jason Seiler/TIME; Peter Crowther/TIME
“Every industry needs it, every company uses it, and every nation needs to build it,” Huang said of balancing the pressures to implement AI responsibly and deploy it to the public as quickly as possible. “This is the single most impactful technology of our time.”
Most of the industry figures pictured on Time’s cover did not speak to the magazine for the story, so this year’s spread mainly focuses on the implications — positive, negative and in between — of the companies they have built and the technology they continue forging.
AI often took center stage in 2025 in investigative news reports, economic and academic studies, and in Washington, D.C., as policymakers grappled with how to regulate its evolution while tech giants scrambled to trump their competitors’ inventions, as the use of some of them, like chatbots, grew to be commonplace, at times with tragic consequences.
“For these reasons, we recognize a force that has dominated the year’s headlines, for better or for worse,” Jacobs wrote in his editorial. “For delivering the age of thinking machines, for wowing and worrying humanity, for transforming the present and transcending the possible, the Architects of AI are TIME’s 2025 Person of the Year.”