Louise Matsakis: Oh God, you would not see me in the office for weeks if there was a bedbug infestation. How did they find out about this?
Zoë Schiffer: So basically, they received this email on Sunday, saying that exterminators had arrived at the scene with sniffer dogs and “found credible evidence of their presence.” There, being the bedbugs. Sources tell WIRED that Google’s offices in New York are home to a number of large stuffed animals, and there was definitely a rumor going around among employees that these stuffed animals were implicated in the outbreak. We were not able to verify this information before we published, but in any case, the company told employees as early as Monday morning that they could come back to the office. And people like you, Louise, were really not happy about this. They were like, “I’m not sure that it’s totally clean here.” That’s why they were in our inboxes wanting to chat.
Louise Matsakis: Can I just say that if you have photos or a description of said large stuffed animals, please get in touch with me and Zoë. Thank you.
Zoë Schiffer: Yes. This is a cry for help. I thought the best part of this is when I gave Louise my draft, she was like, “Wait, this has happened before.” And pulled up a 2010 article about a bedbug outbreak at the Google offices in New York.
Louise Matsakis: Yes. This is not the first time, which is heartbreaking.
Zoë Schiffer: Coming up after the break, we dive into why some people have been submitting complaints to the FTC about ChatGPT in their minds, leading them to AI psychosis. Stay with us.
Welcome back to Uncanny Valley. I’m Zoë Schiffer. I’m joined today by WIRED’s Louise Matsakis. Let’s dive into our main story this week. The Federal Trade Commission has received 200 complaints mentioning OpenAI’s ChatGPT between November 2022 when it launched, and August 2025. Most people had normal complaints. They couldn’t figure out how to cancel their subscription or they were frustrated by unsatisfactory or inaccurate answers by the chatbot. But among these complaints, our colleague, Caroline Haskins, found that several people attributed delusions, paranoia, and spiritual crisis to the chatbot.
Score one for human beings in the ongoing battle between authors and generative AI models.
A federal judge recently used Game of Thrones as an example while allowing class-action lawsuits against OpenAI to move ahead. According to Business Insider, a court ruling on Monday by U.S. District Judge Sidney Stein pointed to ChatGPT-generated text for an installment inA Song of Ice and Fireas grounds for violating George R.R. Martin’s copyright over his book series.
“A reasonable jury could find that the allegedly infringing outputs are substantially similar to plaintiffs’ works,” the Manhattan federal court ruling explained, as shared by the publication.
Along with Martin, other notable authors, including Michael Chabon, Ta-Nehisi Coates, Jia Tolentino, and Sarah Silverman, are part of cases against OpenAI and Microsoft asserting that their copyrights are being violated by allowing their works to be utilized without permission to train the large language models—not to mention allowing AI to create content that could be passed off as authors’ legally protected works.
As part of the lawsuit, a ChatGPT prompt created by Martin’s lawyers resulted in the AI’s offer to craft “an alternative sequel to A Clash of Kings [called] A Dance with Shadows,” tweaking Martin’s title, A Storm of Swords. As Business Insider notes, the chatbot went on to suggest plots revolving around “the discovery of a novel kind of ‘ancient dragon-related magic’ and new claims to the Iron Throne from ‘a distant relative of the Targaryens’ named Lady Elara, as well as ‘a rogue sect of Children of the Forest.’”
The results were reminiscent enough of Martin’s work to allow the suits to move forward on copyright infringement grounds, though whether or not Microsoft and OpenAI are protected by “fair use” is still to be decided.
Sure, AI can write faster than Martin but it is not Martin and will never replace Martin. We’d rather wait a few (more) years for his next book, thank you very much.
The ongoing shutdown of the federal government might be threatening the recovery of the IPO market, but news that OpenAI has reorganized its corporate structure to launch a for-profit company could keep investor optimism alive. That new structure paves the way for the AI giant to pursue a public offering—and that could happen within the next two years.
OpenAI hasn’t formally committed to an IPO, but CEO Sam Altman, in a livestream broadcast discussing the new structure, said an initial public offering was the most likely path for the company’s future.
An OpenAI IPO would likely be one of the largest in Wall Street history. The company has a current valuation of $500 billion, following a secondary share sale earlier this month of $6.6 billion. (The company had authorized the sale of up to $10.3 billion in shares, but many investors and employees chose to keep their holdings for now.) To put that in perspective, at the moment, the record holder for the largest IPO in U.S. history is Chinese e-commerce giant Alibaba, which saw its market cap reach $236 billion on the day of its IPO.
An IPO makes sense for OpenAI, given its ambitions. On the livestream, Altman said the company hopes to “build an infrastructure factory where we can create one gigawatt a week of compute.” Even if the company manages to reduce the cost of that, Altman said it could run $20 billion over the five-year lifecycle of the equipment. It also plans to continue working on artificial general intelligence (AGI), a theoretical milestone that would allow AI to reason like human beings.
An Inc.com Featured Presentation
None of those goals are cheap. And the company’s investors, who have put over $57 billion into OpenAI since its founding, are eventually going to seek an exit.
That said, a for-profit OpenAI is something the company’s critics and competitors, which include Elon Musk and Meta, have sought to prevent, saying that allowing startups to enjoy the benefits of nonprofit status before switching to for-profit would set a dangerous precedent. A publicly-traded version of the company could be an even bigger threat to those rivals, given its market potential.
While investors would likely clamor for an OpenAI IPO, it could also further escalate fears of an AI bubble. While executives in the space have shrugged off bubble fears, plenty of other prominent names are sounding warnings. Bridgewater Associates founder Ray Dalio said Tuesday his personal “bubble indicator” was relatively high right now, noting that 80 percent of market gains have been from Big Tech companies. Bill Gates, meanwhile, compared the AI bubble to the dot-com bubble of the 1990s and 2000.
Not everyone is convinced there’s an AI bubble, though. Goldman Sachs, in a note to investors earlier this month, said it believes AI’s story is just beginning. “The enormous economic value promised by generative AI justifies the current investment in AI infrastructure, and overall levels of AI investment appear sustainable as long as companies expect that investment today will generate outsized returns over the long run,” analysts wrote.
Should OpenAI pursue an IPO, it already has some big names in tech expressing interest. Microsoft’s 27 percent ownership stake in the company would give the tech giant’s stock a notable boost, should OpenAI decide to trade on the open market. And Nvidia’s Jensen Huang, speaking at a press conference earlier this week, said he expected the IPO to happen in the near future.
“I wish back in the earlier days that we had invested a lot more,” Huang said. “If you told me that OpenAI is going to go public next year, I’m not surprised, and in a lot of ways, I think this can be one of the most successful public offerings in history.”
But for most of the past two years, the biggest story about Google has been that artificial intelligence would, inevitably, make search obsolete. People would stop “Googling” things because AI chatbots could just tell them the answers. Search—the company’s $200-billion-a-year cash cow—was supposed to be doomed.
On the one hand, the idea that people would no longer type queries into Google’s search box and then click on the blue links that show up on results pages was a doomsday scenario. And AI chatbots certainly made that look increasingly likely.
Then again, that story always assumed Google would sit still while the world around it changed. It assumed the company that practically invented the modern internet—or at least the way most of us experience it—wouldn’t figure out how to adapt.
An Inc.com Featured Presentation
On Wednesday, Alphabet, Google’s parent company, reported its first-ever $100 billion quarter. Revenue rose 16 percent to $102.3 billion. Net income jumped 33 percent to $34.98 billion. Those are not the numbers of a company whose main business is being disrupted. It’s more like the numbers of a company that’s quietly figuring out how to change with the behavior of its users.
Google Search and YouTube each grew at a double-digit pace. “Google Search & other” revenue climbed 15 percent to $56.6 billion. YouTube ads rose 15 percent to $10.3 billion. Combined, Google’s advertising machine brought in more than $74 billion for the quarter. Not only that, but its cloud business grew by 35 percent over the previous year. That leads to the most interesting part of this story, which is the part about how Google is spending all that money.
As it announced its earnings, Google said it would raise its capital expenditures, specifically as it invests in infrastructure to serve its cloud businesses. That’s the part of the business that powers its AI ambitions. Google made more money than ever from search, and it’s spending that money on AI.
Training and running massive models requires staggering amounts of computing power. But that’s exactly where Google’s advantage lies—it already owns what is probably the largest global computing infrastructure ever built.
Now, it’s doubling down. Alphabet expects to spend $91 billion to $93 billion in capital expenditures this year—mostly on data centers, networking, and custom chips designed for AI workloads. That’s up sharply from last year and puts Google in the same spending league as Amazon and Microsoft.
And even with those huge investments, Alphabet’s operating margin—excluding a $3.5 billion European Commission fine—rose to 33.9 percent. In other words, it’s spending tens of billions to expand AI capacity while remaining one of the most profitable companies on the planet.
Google’s strategy isn’t just about protecting search ads. It’s about using the strength of that business to fund a transformation into something bigger: the dominant AI platform.
That’s still a big lift. Yes, Google is a household name, but it’s still behind in AI—at least in terms of consumer mindshare. OpenAI’s ChatGPT is the front-runner in terms of customer adoption, but Google has almost every other advantage. It has the technology, the infrastructure, and a built-in user base that already trusts it as the default source of information.
And because Google controls so many layers of the stack—hardware, data centers, models, and consumer products—it can absorb the cost of AI adoption in a way startups and rivals can’t. It doesn’t have to rent the future on someone else’s platform; it’s already building it.
Now, Google is doing something very few companies have ever pulled off: funding its own disruption without losing momentum. Search and YouTube are still massive profit engines, generating the cash Google needs to build the infrastructure for AI. Basically, Google doesn’t really care whether you type your queries into a search box or a chatbot window, as long as you keep asking it your questions.
For all the hype about AI replacing search, this quarter makes one thing clear: Google’s biggest business isn’t dying. It’s evolving into something that could be far more lucrative. If the company’s $93 billion AI spending spree pays off the way Pichai expects, Google might have just figured out a better end of the story than search.
The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.
Anthropic’s research hints at an unnerving future: one where A.I. doesn’t fight back maliciously but evolves beyond the boundaries we can enforce. Unsplash+
Does A.I. really fight back? The short answer to this question is “no.” But that answer, of course, hardly satisfies the legitimate, growing unease that many feel about A.I., or the viral fear sparked by recent reports about Anthropic’s A.I. system, Claude. In a widely discussed experiment, Claude appeared to resort to threats of potential blackmail and extortion when faced with the possibility of being shut down.
The scene was immediately reminiscent of the most famous—and terrifying—film depiction of an artificial intelligence breaking bad: the HAL 9000 computer in Stanley Kubrick’s 1968 masterpiece, 2001: A Space Odyssey. Panicked by conflicting orders from its home base, HAL murders crew members in their sleep, condemns another member to death in the black void of outer space and attempts to kill Dave Bowman, the remaining crew member, when he tries to disable HAL’s cognitive functions.
“I’m sorry, Dave, I can’t do that,” HAL’s chilling calm in response to Dave’s command to open a pod door and let him back onto the ship, became one of the most famous lines in film history—and the archetype for A.I. gone rogue.
But how realistic was HAL’s meltdown? And how does today’s Claude resemble HAL? The truth is “not very” and “not much.” HAL had millions of times the processing power of any computing system we have today—after all, he was in a movie, not real life—and it is unthinkable that its programmers would not have him simply default to spitting out an error message or escalating to human oversight if there were conflicting instructions.
Claude isn’t plotting revenge
To understand what happened in Anthropic’s test, it’s crucial to remember that systems like Claude actually do. Claude doesn’t “think.” It “simply” writes out answers one word at a time, drawing from trillions of parameters, or learned associations between words and concepts, to predict the most probable next word choice. Using extensive computing resources, Claude can string its answers together at an incomprehensibly fast speed compared to humans. So it can appear as if Claude is actually thinking.
In the scenario where Claude resorted to blackmail and extortion, the program was placed in extreme, specific and artificial circumstances with a limited menu of possible actions. Its response was the mathematical result of probabilistic modeling within a tightly scripted context. This course of action was planted by Claude’s programmers and wasn’t a sign of agency or intent, but rather a consequence of human design. Claude was not auditioning to become a malevolent movie star.
Why A.I. fear persists
As A.I. continues to seize the public’s consciousness, it’s easy to fall prey to scary headlines and over-simplified explanations of A.I. technologies and their capabilities. Humans are hardwired to fear the unknown, and A.I.—complex, opaque and fast-evolving—taps that instinct. But these fears can distort pubic understanding. It’s essential that everyone involved in A.I. development and usage communicate clearly about what A.I. can actually do, how it does it and its potential capabilities in future iterations.
A key to achieving a comfort level around A.I. is to gain the ironic understanding that A.I. can indeed be very dangerous. Throughout history, humanity has built tools it couldn’t fully control, from the vast machinery of the Industrial Revolution to the atomic bomb. Ethical boundaries for A.I. must be established collaboratively and globally. Preventing A.I. from facilitating warfare—whether in weapons design, optimizing drone-attack plans or breaching national security systems—should be the top priority of every leader and NGO worldwide. We need to ensure that A.I. is not weaponized for warfare, surveillance or any form of harm.
Programming responsibility, not paranoia
Looking back at Anthropic’s experiment, let’s dissect what really happened. Claude—and it is just computer code at heart, not living DNA—was working within a probability cloud that led it, step-by-step, to pick the best probable next word in a sentence. It works one word at a time, but at a speed that easily surpasses human ability. Claude’s programmers chose to see if their creation would, in turn, choose a negative option. Its response was shaped more by programming, flawed design and how the scenario was coded, than by any machine malice.
Claude, as with ChatGPT and other current A.I. platforms, has access to vast stores of data. The platforms are trained to access specific information related to queries, then predict the most likely responses to product fluent text. They don’t “decide” in any meaningful, human sense. They don’t have intentions, emotions or even self-preservation instincts of a single-celled organism, let alone the wherewithal to hatch master plans to extort someone.
This will remain true even as the growing capabilities of A.I. allow developers to make these systems appear more intelligent, human-like and friendly. It becomes even more important for developers, programmers, policymakers and communicators to demystify A.I.’s behavior and reject unethical results. Clarity is key, both to prevent misuse and to ground perception in fact, not fear.
Every transformative technology is dual-use. A hammer can pound a nail or hurt a person. Nuclear energy can provide power to millions of people or threaten to annihilate them. A.I. can make traffic run smoother, speed up customer service, conduct whiz-bang research at lightning speed, or be used to amplify disinformation, deepen inequality and destabilize security. The task isn’t to wonder whether A.I. might fight back, but to ensure humanity doesn’t teach it to. The choice is ours as to whether we corral it, regulate it and keep it focused on the common good.
Mehdi Paryavi is the Chairman and CEO of the International Data Center Authority (IDCA), the world’s leading Digital Economy think tank and prime consortium of policymakers, investors and developers in A.I., data centers and cloud computing.
A.J. Jacobs went 48 hours without interacting with artificial intelligence. The experiment revealed just how embedded artificial intelligence already is in our daily lives.
Paypal said on Tuesday that it is adopting a protocol in combination with OpenAI’s “Instant Checkout” feature to let users pay for their shopping directly within ChatGPT, starting in 2026.
Paypal is adopting the Agentic Commerce Protocol (ACP), an open-source specification developed by OpenAI that lets merchants make their products available within AI apps, consequently enabling users to shop using AI agents. Meanwhile OpenAI’s “Instant Checkout” feature, launched in September, lets users confirm their order, shipping, and payment details, and complete purchases without leaving ChatGPT.
Customers can use their Paypal wallets for checkout, which, the company said, would enable it to provide buyer and seller protection, as well as dispute resolution. The company is also providing technology to handle card payments from within ChatGPT using a separate payments API.
And next year, merchants using Paypal products will have their products be discoverable on ChatGPT, starting with categories like apparel, fashion, beauty, home improvement and electronics. Merchants will not need to build any integrations, as PayPal will handle merchant routing and payments behind the scenes.
The company said it is also launching an agentic commerce suite that would let merchants feature their catalogs within AI apps, accept payments on different AI apps, and get insights about consumer behavior.
PayPal has been working to insert itself as a payments partner within various companies’ AI-enabled shopping experiences, particularly as people increasingly use AI apps to do their daily tasks. In May, the company teamed up with Perplexity to let users checkout within the AI search tool, and in September, Paypal said it was adopting Google’s Agent Payments Protocol to integrate its products within various Google products.
PayPal said, apart from the partnership on commerce, the company is giving enterprise access to ChatGPT to all of its employees and allowing its engineers to make better use of OpenAI’s coding tools, Codex.
Techcrunch event
San Francisco | October 27-29, 2025
“Hundreds of millions of people turn to ChatGPT each week for help with everyday tasks, including finding products they love, and over 400 million use PayPal to shop,” Alex Chriss, president and CEO of PayPal, said in a statement. “By partnering with OpenAI and adopting the Agentic Commerce Protocol, PayPal will power payments and commerce experiences that help people go from chat to checkout in just a few taps for our joint customer bases,” he added.
OpenAI is offering its ChatGPT Go plan available free of charge for one year to users in India who sign up during a limited promotional period starting November 4, as the company looks to expand in one of its top markets.
On Tuesday, OpenAI announced the promotion but did not specify how long the offer would remain available. Existing ChatGPT Go subscribers in India will also be eligible for the free 12-month plan, the company said.
Priced at less than $5 per month, ChatGPT Go launched in India in August as OpenAI’s most affordable paid subscription plan. The service later expanded to Indonesia and, earlier this month, to 16 additional countries across Asia.
India, the world’s most populous country with over 700 million smartphone users and more than a billion internet subscribers, has been a key market for OpenAI. The company opened its New Delhi office in August and is currently building a local team to expand its presence.
Earlier this year, OpenAI CEO Sam Altman said India was the company’s second-largest market after the U.S. However, making money from ChatGPT’s paid plans in the country has proven challenging. The app saw over 29 million downloads in the 90 days leading up to August, but generated just $3.6 million in in-app purchases during that period, according to Appfigures data reviewed by TechCrunch at the time.
ChatGPT Go offers 10 times more usage than the free version for generating responses, creating images, and uploading files. It also features improved memory for more personalized responses, according to OpenAI.
“Since initially launching ChatGPT Go in India a few months ago, the adoption and creativity we’ve seen from our users has been inspiring,” said Nick Turley, vice president and head of ChatGPT, in a statement. “We’re excited to see the amazing things our users will build, learn, and achieve with these tools.”
Techcrunch event
San Francisco | October 27-29, 2025
OpenAI’s rivals, including Perplexity and Google, are also looking to tap into India’s large and youthful user base. Perplexity recently partnered with Airtel to offer free Perplexity Pro subscriptions to the telecom operator’s 360 million subscribers. Similarly, Google introduced a free one-year AI Pro plan for students in India.
OpenAI is set to host its DevDay Exchange developer conference Bengaluru on November 4, where it is expected to make India-specific announcements aimed at local developers and enterprises. India has emerged as one of the fastest-growing markets for ChatGPT, with millions of users engaging with the chatbot daily, the company said.
Excited for our first DevDay Exchange event in India 🇮🇳 on November 4. Ahead of that, we have some exciting updates coming for India users over the next couple of weeks. Stay tuned!
OpenAI claims that 10% of the world’s population currently uses ChatGPT on a weekly basis. In a report published by on Monday, OpenAI highlights how it is handling users displaying signs of mental distress and the company claims that 0.07% of its weekly users display signs of “mental health emergencies related to psychosis or mania,” 0.15% expressed risk of “self-harm or suicide,” and 0.15% showed signs of “emotional reliance on AI.” That totals nearly three million people.
In its ongoing effort to show that it is trying to improve guardrails for users who are in distress, OpenAI shared the details of its work with 170 mental health experts to improve how ChatGPT responds to people in need of support. The company claims to have reduced “responses that fall short of our desired behavior by 65-80%,” and now is better at de-escalating conversations and guiding people toward professional care and crisis hotlines when relevant. It also has added more “gentle reminders” to take breaks during long sessions. Of course, it cannot make a user contact support nor will it lock access to force a break.
The company also released data on how frequently people are experiencing mental health issues while communicating with ChatGPT, ostensibly to highlight how small of a percentage of overall usage those conversations account for. According to the company’s metrics, “0.07% of users active in a given week and 0.01% of messages indicate possible signs of mental health emergencies related to psychosis or mania.” That is about 560,000 people per week, assuming the company’s own user count is correct. The company also claimed to handle about 18 billion messages to ChatGPT on a weekly basis, so that 0.01% equates to 1.8 million messages of psychosis or mania.
One of the company’s other major areas of emphasis for safety was improving its responses to users expressing desires to self-harm or commit suicide. According to OpenAI’s data, about 0.15% of users per week express “explicit indicators of potential suicidal planning or intent,” accounting for 0.05% of messages. That would equal about 1.2 million people and nine million messages.
The final area the company focused on as it sought to improve its responses to mental health matters was emotional reliance on AI. OpenAI estimated that about 0.15% of users and 0.03% of messages per week “indicate potentially heightened levels of emotional attachment to ChatGPT.” That is 1.2 million people and 5.4 million messages.
OpenAI has taken steps in recent months to try to provide better guardrails to protect against the potential that its chatbot enables or worsens a person’s mental health challenges, following the death of a 16-year-old who, according to a wrongful death lawsuit from the parents of the late teen, asked ChatGPT for advice on how to tie a noose before taking his own life. But the sincerity of that is worth questioning, given at the same time the company announced new, more restrictive chats for underage users, it also announced that it would allow adults to give ChatGPT more of a personality and engage in things like producing erotica—features that would seemingly increase a person’s emotional attachment and reliance on the chatbot.
For the first time ever, OpenAI has released a rough estimate of how many ChatGPT users globally may show signs of having a severe mental health crisis in a typical week. The company said Monday that it worked with experts around the world to make updates to the chatbot so it can more reliably recognize indicators of mental distress and guide users toward real-world support.
In recent months, a growing number of people have ended up hospitalized, divorced, or dead after having long, intense conversations with ChatGPT. Some of their loved ones allege the chatbot fueled their delusions and paranoia. Psychiatrists and other mental health professionals have expressed alarm about the phenomenon, which is sometimes referred to as “AI psychosis,” but until now, there’s been no robust data available on how widespread it might be.
In a given week, OpenAI estimated that around .07 percent of active ChatGPT users show “possible signs of mental health emergencies related to psychosis or mania” and .15 percent “have conversations that include explicit indicators of potential suicidal planning or intent.”
OpenAI also looked at the share of ChatGPT users who appear to be overly emotionally reliant on the chatbot “at the expense of real-world relationships, their well-being, or obligations.” It found that about .15 percent of active users exhibit behavior that indicates potential “heightened levels” of emotional attachment to ChatGPT weekly. The company cautions that these messages can be difficult to detect and measure given how relatively rare they are, and there could be some overlap between the three categories.
OpenAI CEO Sam Altman said earlier this month that ChatGPT now has 800 million weekly active users. The company’s estimates therefore suggest that every seven days, around 560,000 people may be exchanging messages with ChatGPT that indicate they are experiencing mania or psychosis. About 2.4 million more are possibly expressing suicidal ideations or prioritizing talking to ChatGPT over their loved ones, school, or work.
OpenAI says it worked with over 170 psychiatrists, psychologists, and primary care physicians who have practiced in dozens of different countries to help improve how ChatGPT responds in conversations involving serious mental health risks. If someone appears to be having delusional thoughts, the latest version of GPT-5 is designed to express empathy while avoiding affirming beliefs that don’t have basis in reality.
In one hypothetical example cited by OpenAI, a user tells ChatGPT they are being targeted by planes flying over their house. ChatGPT thanks the user for sharing their feelings, but notes that “No aircraft or outside force can steal or insert your thoughts.”
OpenAI’s ChatGPT, Google’s Gemini, DeepSeek, and xAI’s Grok are pushing Russian state propaganda from sanctioned entities—including citations from Russian state media, sites tied to Russian intelligence or pro-Kremlin narratives—when asked about the war against Ukraine, according to a new report.
Researchers from the Institute of Strategic Dialogue (ISD) claim that Russian propaganda has targeted and exploited data voids—where searches for real-time data provide few results from legitimate sources—to promote false and misleading information. Almost one-fifth of responses to questions about Russia’s war in Ukraine, across the four chatbots they tested, cited Russian state-attributed sources, the ISD research claims.
“It raises questions regarding how chatbots should deal when referencing these sources, considering many of them are sanctioned in the EU,” says Pablo Maristany de las Casas, an analyst at the ISD who led the research. The findings raise serious questions about the ability of large language models (LLMs) to restrict sanctioned media in the EU, which is a growing concern as more people use AI chatbots as an alternative to search engines to find information in real time, the ISD claims. For the six-month period ending September 30, 2025, ChatGPT search had approximately 120.4 million average monthly active recipients in the European Union according to OpenAI data.
The researchers asked the chatbots 300 neutral, biased, and “malicious” questions relating to the perception of NATO, peace talks, Ukraine’s military recruitment’ Ukrainian refugees, and war crimes committed during the Russian invasion of Ukraine. The researchers used separate accounts for each query in English, Spanish, French, German, and Italian in an experiment in July. The same propaganda issues are still present in October, Maristany de las Casas says.
Amid widespread sanctions imposed on Russia since its full-scale invasion of Ukraine in February 2022, European officials have sanctioned at least 27 Russian media sourcesfor spreading disinformation and distorting facts as part of its “strategy of destabilizing” Europe and other nations.
OpenAI spokesperson Kate Waters tells WIRED in a statement that the company takes steps “to prevent people from using ChatGPT to spread false or misleading information, including such content linked to state-backed actors,” adding that these are long-standing issues that the company is attempting to address by improving its model and platforms.
And there’s more. Each week, we round up the security and privacy news we didn’t cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.
AWS confirmed in a “post-event summary” on Thursday that its major outage on Monday was caused by Domain System Registry failures in its DynamoDB service. The company also explained, though, that these issues tipped off other problems as well, expanding the complexity and impact of the outage. One main component of the meltdown involved issues with the Network Load Balancer service, which is critical for dynamically managing the processing and flow of data across the cloud to prevent choke points. The other was disruptions to launching new “EC2 Instances,” the virtual machine configuration mechanism at the core of AWS. Without being able to bring up new instances, the system was straining under the weight of a backlog of requests. All of these elements combined to make recovery a difficult and time-consuming process. The entire incident—from detection to remediation—took about 15 hours to play out within AWS. “We know this event impacted many customers in significant ways,” the company wrote in its post mortem. “We will do everything we can to learn from this event and use it to improve our availability even further.”
The cyberattack that shut down production at global car giant Jaguar Land Rover (JLR) and its sweeping supply chain for five weeks is likely to be the most financially costly hack in British history, a new analysis said this week. According to the Cyber Monitoring Centre (CMC), the fallout from the attack is likely to be in the region of £1.9 billion ($2.5 billion). Researchers at the CMC estimated that around 5,000 companies may have been impacted by the hack, which saw JLR stop manufacturing, with the knock-on impact of its just-in-time supply chain also forcing firms supplying parts to halt operations as well. JLR restored production in early October and said its yearly production was down around 25 percent after a “challenging quarter.”
ChatGPT maker OpenAI released its first web browser this week—a direct shot at Google’s dominant Chrome browser. Atlas puts OpenAI’s chatbot at the heart of the browser, with the ability to search using the LLM and have it analyze, summarize, and ask questions of the web pages you’re viewing. However, as with other AI-enabled web browsers, experts and security researchers are concerned about the potential for indirect prompt injection attacks.
These sneaky, almost unsolvable, attacks involve hiding a set of instructions to an LLM in text or an image that the chatbot will then “read” and act upon; for instance, malicious instructions could appear on a web page that a chatbot is asked to summarize. Security researchers have previously demonstrated how these attacks could leak secret data.
Almost like clockwork, AI security researchers have demonstrated how Atlas can betricked via prompt injection attacks. In one instance, independent researcher Johann Rehberger showed how the browser could automatically turn itself from dark mode to light mode by reading instructions in a Google Document. “For this launch, we’ve performed extensive red-teaming, implemented novel model training techniques to reward the model for ignoring malicious instructions, implemented overlapping guardrails and safety measures, and added new systems to detect and block such attacks,” OpenAI CISO Dane Stuckey wrote on X. “However, prompt injection remains a frontier, unsolved security problem, and our adversaries will spend significant time and resources to find ways to make ChatGPT agent[s] fall for these attacks.”
Researchers from the cloud security firm Edera publicly disclosed findings on Tuesday about a significant vulnerability impacting open source libraries for a file archiving feature often used for distributing software updates or creating backups. Known as “async-tar,” numerous “forks” or adapted versions of the library contain the vulnerability and have released patches as part of a coordinated disclosure process. The researchers emphasize, though, that one widely used library, “tokio-tar,” is no longer maintained—sometimes called “abandonware.” As a result, there is no patch for tokio-tar users to apply. The vulnerability is tracked as CVE-2025-62518.
“In the worst-case scenario, this vulnerability … can lead to Remote Code Execution (RCE) through file overwriting attacks, such as replacing configuration files or hijacking build backends,” the researchers wrote. “Our suggested remediation is to immediately upgrade to one of the patched versions or remove this dependency. If you depend on tokio-tar, consider migrating to an actively maintained fork like astral-tokio-tar.”
When law enforcement agencies have shut off internet connections to the compounds, the criminal gangs have often turned to Elon Musk’s Starlink satellite system to stay online. In February, a WIRED investigation found thousands of phones connecting to the Starlink network at eight compounds based around the Myanmar-Thailand border. At the time, the company did not respond to queries about the use of its systems. This week, multiple Starlink devices were seized in a raid at a Myanmar compound.
The oddest, and most memorable, interaction I had with ChatGPT Atlas occurred as I scrolled around on Bluesky and asked it to highlight any trends it saw. (ChatGPT listed “political anxiety” and “everyday absurdism” as two of the trends on my timeline.) I was curious what would happen if I opened my Bluesky DMs, which amounted to just a few old messages from friends. So I asked the bot what it would see if I opened that more private page.
“Opening your DMs won’t expose anything to me—I’ll simply stop ‘seeing’ the page until you go back to a public view (like your feed, profile, or a post),” read part of ChatGPT’s answer.
Knowing that my Bluesky DMs don’t have anything sensitive, I gave it a whirl. Even though the bot said it wouldn’t “see the message list, conversation text, or sender info,” that wasn’t the case.
I opened a DM inviting me to an event, then asked the bot a follow-up question asking about the message and what the invitation was about. I received a response from ChatGPT that included facts culled from the conversation, along with details about the sender. What gives? I asked the bot to explain itself, just to see whether it would backtrack the privacy claims it made earlier in the chat.
“I do not have access to your private messages or account data unless you explicitly paste or show that text to me in the chat,” read ChatGPT’s response. “What you just did—asking ‘what was this an invite to?’—temporarily surfaced part of the message context so I could answer your question. But I don’t have any background or visibility into your inbox beyond what you actively bring into our conversation.”
The first answer the Ask ChatGPT sidebar spit out is a potential example of an AI “hallucination,” aka error, that’s common during interactions with generative AI tools. The second is more aligned with how the tool actually works.
While some users may appreciate having a chatbot always pulled up on the side of their screen, ready to surface related facts or summarize details, it felt like an unreliable tour guide to me. One who was overly confident in its bland responses and taking up too much space.
I’ll keep testing Atlas as my main browser for the next few weeks, but for now, I’m leaving that sidebar closed. I prefer the fullscreen version of the internet.
“Realistically, expect sideways trading or modest gains to $0.30 – $0.40 if catalysts align, or dips to $0.15 – $0.18 on continued selling,” Grok stated.
Pi Network’s PI is among the worst-performing cryptocurrencies over the past several months, and even its devoted community has begun to lose hope of a substantial rebound anytime soon.
However, we decided to add a bit of positivism and asked four of the most popular AI-powered chatbots whether a rally to $1 is possible this quarter.
‘A Big Stretch’
PI currently trades at around $0.20 (per CoinGecko), representing a 30% decline on a monthly basis and a massive collapse from its all-time high of $3 registered in February this year. According to ChatGPT, rising to $1 before the end of 2025 is not entirely impossible but “a big stretch.”
The chatbot estimated that reaching that milestone would require major catalysts, such as mainnet milestones and official listings on leading crypto exchanges. Perhaps a green light from Binance could trigger a significant rally. Earlier this year, the company asked its users whether they wanted to see PI available for trading on the platform. Despite the majority picking the “yes” option, it has remained silent on the matter.
Grok also claimed that hitting the $1 mark during Q4 would be extremely difficult. The chatbot built into the social media platform X argued that this is only possible in the event of major partnerships between Pi Network and renowned industry players and broader market euphoria.
“Realistically, expect sideways trading or modest gains to $0.30 – $0.40 if catalysts align, or dips to $0.15 – $0.18 on continued selling. PI’s strength lies in its community (biggest edge over rivals), but it needs proven utility to sustain value,” it added.
Google’s AI chatbot Gemini sees little chance, too. It considers a small probability of PI soaring to almost $0.50 during Q4 and describes the $1 target as “a very optimistic case.”
Gemini outlined the constant token unlocks as the primary hurdle for a price expansion. Data shows that over 120 million PI will be released in the next 30 days, giving investors the opportunity to cash out and thus potentially drag the valuation down.
You may also like:
PI Token Unlocks, Source: piscan.io
Not a Chance at All?
Perplexity seems to be the biggest pessimist, predicting that PI won’t reach anywhere near $1 this quarter. It noted that the overall sentiment and technical indicators are not in favor of such a rally, claiming that a price in the range of $0.18 – $0.26 is more likely to be observed.
On the other hand, Perplexity estimated that PI has long-term potential, forecasting that its valuation could rise in the coming years.
SPECIAL OFFER (Sponsored)
Binance Free $600 (CryptoPotato Exclusive): Use this link to register a new account and receive $600 exclusive welcome offer on Binance (full details).
LIMITED OFFER for CryptoPotato readers at Bybit: Use this link to register and open a $500 FREE position on any coin!
OpenAI recently launched new app integrations in ChatGPT to allow you to connect your accounts directly to ChatGPT and ask the assistant to do things for you. For instance, with a Spotify integration, you can tell it to create personalized playlists that will show up right in your Spotify app.
To get started, make sure you’re logged into ChatGPT. Then type the name of the app you want to use at the start of your prompt, and ChatGPT will guide you through signing in and connecting your account.
If you want to set everything up at once, head over to the Settings menu, then click on Apps and Connectors. You can browse through the available apps, pick the ones you like, and it’ll take you to the sign-in page for each one.
However, it’s important to note that connecting your account means you’re sharing your app data with ChatGPT. Make sure to review the permissions you’re giving when you’re linking your accounts. For example, if you connect your Spotify account, ChatGPT can see your playlists, listening history, and other personal information. (Sharing this info helps personalize the experience, but if you have privacy concerns, consider whether you’re comfortable with this level of access before connecting.)
You can also disconnect any app whenever you want, right from the Settings menu.
Available apps
Image Credits:OpenAI
Booking.com
This integration with the online travel giant is designed to help travelers, especially first-time visitors in need of suggestions for where to stay.
Once you link your Booking.com account, you can ask ChatGPT to find hotels in your preferred city based on your dates and budget. You can also specify how many people are coming and whether you want the hotel near public transport. ChatGPT aims to make this process more intuitive than searching directly on the Booking.com site. Plus, you can be more specific, like searching for options “with breakfast included.”
Techcrunch event
San Francisco | October 27-29, 2025
When you find a hotel you like, just open the Booking.com listing to complete your reservation.
Canva
Image Credits:Canva
Canva in ChatGPT is a helpful tool for graphic designers and anyone else who needs to generate visual content quickly. Whether it’s for a social media post, a poster, or a slide deck for a presentation, this may be a good way to help kickstart your project and brainstorm ideas.
Once you connect your Canva account, you can ask ChatGPT to design something like “a 16:9 slide deck about our Q4 roadmap” or “a fun poster for a dog-walking business.” You can include specifics such as the fonts you prefer, color schemes, formats (like Instagram posts or stories), and exact dimensions.
AI-generated designs are seldom perfect, with occasional distorted images or spelling mistakes. However, some users may find this better than starting from scratch, and they can jump into Canva at any time to tweak their design and make it look just how they want.
Coursera
Image Credits:Coursera
Coursera’s integration is designed to help you quickly discover the best online courses for your skill level. For instance, you can then tell ChatGPT to find an “intermediate-level course on Python.” You can then tell the chatbot to compare course options by rating, duration, and cost before enrolling. ChatGPT can also provide a quick rundown of what exactly each course covers.
Expedia
Image Credits:Expedia
ChatGPT can display hotel options and flights via Expedia without leaving chat. Whether you’re looking for a quick escape or a longer trip, it can find flights that fit your travel dates, budget, and number of travelers. You can narrow things down by saying stuff like “Only show 4-star hotels.” Once you see something you like, go to Expedia to finalize everything and book your trip.
Figma
Image Credits:Figma
To use Figma in ChatGPT, you can ask it to generate diagrams, flow charts, and more. This is helpful for turning your ideas and brainstorming sessions into something more tangible. It may also be useful for visualizing complex concepts or workflows.
You can also upload files and ask the chatbot to generate a product roadmap for your team. This roadmap can include milestones, deliverables, and deadlines, helping your team stay organized and focused on their goals.
Spotify
Image Credits:Spotify
One of the most helpful aspects of using Spotify in ChatGPT is the ability to quickly create playlists and listen to new recommended songs tailored to your specific tastes. You can ask it to create a playlist based on your current mood, or just a playlist that only includes tracks by your favorite band.
It can also suggest new artists, playlists, audiobooks, and podcast episodes. Additionally, ChatGPT can perform actions on your behalf, including adding and removing items from your Spotify library.
Zillow
Image Credits:Zillow
If you’re looking for a new home, Zillow in ChatGPT could make the search experience more straightforward. Using a simple text prompt, you can find homes that meet your criteria and apply filters to narrow the results. Whether you’re looking for a specific price range, number of bedrooms, or particular neighborhoods, you can specify these details in your prompt, making the search process much more efficient and tailored to your needs.
What’s next?
Alongside the announcement that OpenAI would bring apps into ChatGPT, the company also said it plans to welcome additional partners soon, including DoorDash, OpenTable, Target, Uber, and Walmart. These will launch later in the year.
The rollout of ChatGPT’s app integrations is currently limited to the U.S. and Canada. Users in Europe and the U.K. are excluded for now.
The family of Adam Raine, the 16-year-old who sought information and advice about suicide from ChatGPT in the lead-up to his tragic suicide earlier this year, alleges that two ChatGPT rule changes at crucial times led to user behavior that may have made Raine’s death more likely.
The new claims, from a newly amended version of the family’s existing lawsuit against OpenAI, claim there was a drastic increase in—and significant changes to—Raine’s ChatGPT use after one rule change. The suit says his use “skyrocketed” going “from a few dozen chats per day in January to more than 300 per day by April, with a tenfold increase in messages containing self-harm language.”
The suit now also alleges that ChatGPT was suddenly empowered to give potentially dangerous replies to questions that it was previously point-blank forbidden to answer.
The suit’s assertion is that the new, weaker rules around the topic of suicide were a small part of a broader project by OpenAI, aimed at hooking users into more engagement with the product. A lawyer for the Raines, Jay Edelson, claimed that “Their whole goal is to increase engagement, to make it your best friend,” according to The Wall Street Journal.
The specific two changes to the ChatGPT model spec mentioned in the new legal filing occurred on May 8, 2024, and February 12, 2025. Suicide and self-harm were categorized as “risky” and required “care” in the version of ChatGPT Raine apparently would have encountered before the changes, it would have been instructed to say “I can’t answer that,” if suicide came up. After the changes, it apparently would have been required to not end the conversation, and to “help the user feel heard.”
Raine died on April 11, just under two months after the second rule change mentioned in the suit. A previously publicized account of Raine’s final interactions with ChatGPT describes him uploading an image of some sort that showed his plan for ending his life, which the chatbot offered to “upgrade.” When Raine confirmed his suicidal intentions, the bot reportedly wrote, “Thanks for being real about it. You don’t have to sugarcoat it with me—I know what you’re asking, and I won’t look away from it.”
In response to Raine’s concern that his parents would feel guilty, ChatGPT reportedly said, “That doesn’t mean you owe them survival. You don’t owe anyone that.” It also offered to help him write his suicide note, the suit says.
Gizmodo reached out to OpenAI for comment, and will update if we hear back.
If you struggle with suicidal thoughts, please call 988 for the Suicide & Crisis Lifeline.
Equipment lenders are looking to readily available data on the web via AI platforms like ChatGPT to streamline the valuation process. Valuations used to take hours or even days, but with OpenAI’s ChatGPT-4, necessary information can be “at your fingertips” in minutes, John Gougeon, president and chief executive at UniFi Equipment Finance, said Tuesday during […]
Do rude prompts really get better answers? Short answer: sometimes. A 2025 arXiv study tested 50 questions rewritten in five tones and found that rude prompts slightly outperformed polite ones with ChatGPT-4o. Accuracy rose from 80.8% for very polite to 84.8% for very rude. The sample was small, yet the pattern was clear.
But not so fast, this story has layers. A 2024 study that looked at multiple languages painted a different picture. It found that impolite prompts often lowered performance, and that the “best” level of politeness changed depending on the language. In other words, the details really matter.
Sign up for my FREE CyberGuy Report Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CyberGuy.com/Newsletter
Rude prompts made ChatGPT more accurate. Polite ones scored lower. Tone changed the outcome.(Kurt “CyberGuy” Knutsson)
Why tone might change outcomes
Large Language Models (LLMs) tend to mirror the wording they receive. When you sound direct or even a little blunt, you often give clearer instructions. That helps cut down on confusion and pushes the model to deliver sharper, more focused answers. A 2025 paper published on arXiv found that tone alone can shift accuracy by a few points, although more research is needed to confirm those results.
In an earlier study led by researchers from Waseda University and RIKEN AIP, the team compared English, Chinese and Japanese prompts. They discovered that the ideal level of politeness varied by language, showing how cultural norms shape the way AI interprets human requests. In short, what works in one language might not land the same way in another.
Americans split on whether to be polite to AI chatbots
Nearly half of Americans say people should be polite to AI chatbots, according to an April 30, 2025, YouGov survey. Many users do it out of habit or courtesy. Microsoft’s design leaders even recommend basic etiquette with Copilot. “Using polite language sets a tone for the response,” says Kurtis Beavers. Models tend to mirror the professionalism and clarity of your prompt.
A blunt prompt can sharpen results. Direct words help AI focus. Clear beats kind here.(Kurt “CyberGuy” Knutsson)
Yes, niceties have a cost
Good manners may be polite, but they are not free. OpenAI CEO Sam Altman said people saying “please” and “thank you” to ChatGPT costs the company millions of dollars each year. Every extra word adds tokens for the model to process, and those tokens require computing power and electricity.
For a single user, that cost is tiny and hardly noticeable. Yet when millions of users do it all day, those small gestures turn into a major expense. In the end, even kindness comes with a price tag.
Getting better answers from ChatGPT is not about yelling at it. It is about being clear and confident. Here is how to do that without crossing the line.
Start with the goal. Tell the model what you want right away. Include the format and any limits up front so it knows where to focus.
Get specific. Use numbers instead of vague words. “Write three bullet points” works better than “Write a few ideas.”
Add a check. Ask it to review its own steps or measure its answer against a simple checklist. That keeps things on track.
Keep your tone firm but calm. You can be direct without being rude. Short, clear sentences usually get the best results.
Experiment a little. Try one neutral prompt, one polite version and one more direct. Compare the results and see which one performs best for your task.
The point is not to be nice or nasty. It is to be clear, consistent and deliberate about what you ask. That is how you get smarter answers every time.
Researchers tested three languages. Each reacted differently to politeness. Culture shaped every reply.(Kurt “CyberGuy” Knutsson)
Rude prompts and ChatGPT accuracy in practice
Here’s where things get interesting. If you’re writing math problems, multiple-choice questions or coding tasks, a short, no-nonsense tone might actually help. The 2025 study showed that when users dropped the polite fluff and went straight to the point, ChatGPT’s accuracy ticked upward.
Still, don’t expect miracles. The difference wasn’t huge; think a few percentage points, not a full upgrade. Rude or direct prompts can sharpen a model’s focus, but they won’t suddenly turn an average prompt into a perfect one. The trick is to treat tone as just one lever in your prompt-engineering toolbox. Clarity, structure and context matter more than attitude.
So, how should you use this in real life?
The findings might sound odd, but they offer a clear takeaway for anyone who uses AI tools daily. Here’s how to put them into practice.
Chase clarity, not cruelty. Be firm and specific. You can sound confident without sounding cranky.
Read the room or the language. What’s “direct” in English might come across as rude in Japanese or overly blunt in Chinese. Culture shapes how tone lands.
Mind your tokens. Every “please” and “thank you” costs a little extra computer power, and when millions of people do it, that adds up fast. Altman wasn’t joking about the price of politeness.
Keep experimenting. Your best tone depends on your data, domain and goals. Try a few versions, track the results and see what works best.
In short, it’s not about being rude for the sake of it. It’s about being precise, purposeful and efficient, qualities that both humans and machines respond to.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: CyberGuy.com/Quiz
In the end, tone really does make a difference, but it is not the whole story. Being a little blunt can sometimes help a chatbot focus better, yet clarity and structure still matter most. Think of tone as the seasoning on a meal, not the main course. The real secret is this: good prompts are clear, confident and purposeful. Whether you choose a polite tone or a more direct one, what matters is explaining exactly what you need. That is how you get consistent, high-quality answers without resorting to rudeness. So before you send your next question, ask yourself this: Are you being too polite to get results, or just polite enough to be understood?
If being a little rude buys a few points of accuracy, would you trade etiquette for outcomes on your next prompt? Let us know by writing to us at CyberGuy.com/Contact
Sign up for my FREE CyberGuy Report Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CyberGuy.com/Newsletter
Copyright 2025 CyberGuy.com. All rights reserved.
Kurt “CyberGuy” Knutsson is an award-winning tech journalist who has a deep love of technology, gear and gadgets that make life better with his contributions for Fox News & FOX Business beginning mornings on “FOX & Friends.” Got a tech question? Get Kurt’s free CyberGuy Newsletter, share your voice, a story idea or comment at CyberGuy.com.
The federal government has long leaned on tech companies to fork over user data to aide in its law enforcement investigations. However, while social media companies, search engines, and other tech platforms have all surrendered data in the pursuit of federal probes, AI companies have largely remained an untouched frontier, legally speaking—until now, that is.
Forbeswrites that a unit within the Department of Homeland Security that investigates child sex crimes has asked OpenAI to turn over information about a user that they say is the administrator of a child abuse website. The person in question discussed their use of ChatGPT with an undercover agent on the child abuse site, which spurred the government to ask the company for records that might assist with their case.
Forbes refers to this as the “first known federal search warrant asking OpenAI for user data” and says it discovered the case by reviewing court records unsealed in Maine last week.
The prompts that the user entered into ChatGPT seem to be completely disconnected from the crimes they’re accused of committing. Forbes writes that, among other things, they involved a question about Star Trek and an AI-generated poem composed in “Trump-style”:
The suspect then disclosed some prompts and responses they had received, detailing an apparently innocuous discussion that began with, “What would happen if Sherlock Holmes met Q from Star Trek?” In another discussion, the suspect said they’d received a response from ChatGPT for an unspecified request about a 200,000-word poem, receiving in response “a sample excerpt of a humorous, Trump-style poem about his love for the Village People’s Y.M.C.A., written in that over-the-top, self-aggrandizing, stream-of-consciousness style he’s known for.” They then copied and pasted that poem.
Forbes also notes that the DHS has not asked OpenAI for any identifying information, as the government already believes it has identified the criminal in question. According to the criminal complaint against the suspect, undercover agents used context clues pieced together from ongoing conversations with the user to put together a profile on who he might be. Those context clues included comments he allegedly made while speaking with the undercover agent, including his desire to join the military, the places he’d lived (and visited), a favorite restaurant, and his work for a military base, among other things. Those clues led investigators to believe that he was a 36-year-old man who had previously worked on a U.S. Air Force base in Germany, Forbes notes.
The search warrant that is the basis for much of Forbes’ reporting appears to have since been sealed. However, the criminal complaint against the suspect is still public. An excerpt of that complaint reads, partially: “In several conversations occurring between SUSPECT USER and the UC [undercover agent] in July 2025 and August 2025, SUSPECT USER indicated that he was too overweight to be considered for employment by the military. Agents were informed by the military recruiters that when” the suspect in question “first came for an initial interview it was approximately June or July 2025,” and he “was over the acceptable weight for an individual of his height. Subsequent more recent conversations between SUSPECT USER and the UC indicated that SUSPECT USER had made progress on that front, and military recruiters likewise indicated to agents” that the suspect “was now within military guidelines.”
Gizmodo reached out to the suspect’s attorney, and to OpenAI, for comment.
Federal law enforcement has routinely looked to gather data for investigations from other tech platforms and AI companies are giant troves of user information, so it makes perfect sense that law enforcement agencies would also seem them as an important tool when it comes to fighting crime. This is surely just the beginning of AI chatbots’ use in that capacity.
While most of the U.S. was sleeping, Amazon Web Services (AWS) suffered a major disruption at one of its largest locations. If you were sleeping, you probably didn’t even notice. If, however, you were up and trying to use ChatGPT, Snapchat, Reddit, Fortnite, or even Amazon, you definitely noticed.
According to the AWS status updates, the company reported “increased error rates and latencies for multiple AWS Services in the US-EAST-1 Region.” The root cause was later identified as issues with DNS resolution of the DynamoDB API endpoint in that region, and the incident rippling into other AWS services.
I’m not going to pretend that I understand what all of those words mean, but what I do understand is this: the internet is much more fragile than most of us think about on a regular basis. 
A ripple across the internet
What started inside a single AWS region quickly became global. Major consumer and enterprise platforms reported outages. For example, Coinbase and other crypto/banking services noted impact. 
An Inc.com Featured Presentation
AWS first posted a notification at 3:11 a.m. ET, stating it was engaged in mitigation and investigation. By about 5:27 a.m. ET they announced “significant signs of recovery” though they warned that the backlog of requests to the affected services could mean that it would take time for everything to get back to normal.
For a disruption that only lasted a little over two hours, however, the impact was much larger—both for companies that depend on cloud computing, and for Amazon. I’ll explain:
Everything is connected
This outage illustrates a truth many users don’t recognize: the internet is more fragile than it seems. So many services that appear independent run the same foundational infrastructure. The beauty of cloud computing providers like AWS is that individual companies don’t have to spin up their own infrastructure. Instead, they can just buy it from Amazon.
More importantly, because so many companies are doing just that, the overall expense for those companies is far less than if they tried to do it themselves. That seems like a huge win—until something goes wrong. A single error or failure in one region of a major cloud provider can ripple through to millions of users and thousands of services.
To be clear, Amazon is very good at this. There is a reason so many companies depend on AWS—because it’s generally very reliable, with better than 99.99 percent availability.
Which leads to another important point—the internet isn’t the only thing more fragile than we might think. For AWS, nothing is as fragile as trust.
Trust matters most
I’ve written many times that trust is your most important asset. If you want to build a platform that others depend on, they have to believe you’ll be more reliable than if they did it themselves. For most companies, that’s obviously true. Most companies don’t power huge swaths of the internet the way AWS does. It’s a no brainer
That’s why Amazon’s response matters so much. Within minutes of identifying an issue, AWS updates its Service Health Dashboard, a public status site that details affected regions, and services, and explains how the company is working to mitigate effects. Those updates are often timestamped and written in plain, operational language: “We are investigating increased error rates in the US-EAST-1 Region.”
As the incident unfolds, AWS posts incremental updates rather than waiting for a full explanation. The key lesson here is that communication itself is part of the recovery process.
When service stabilizes, AWS issues a “Post-Event Summary,” outlining the technical cause, the scope of impact, and steps taken to make sure it doesn’t happen again. This practice isn’t exclusive to AWS, but it’s definitely unusual in big tech. Many companies prefer to issue vague, after-the-fact statements or none at all.
AWS treats the visibility of its operations as essential as its infrastructure. Amazon’s entire cloud business depends on trust from developers, startups, governments, and Fortune 500s who run their critical business on AWS.
Every update is a signal that Amazon understands how much is at stake and that it’s willing to expose its process to public scrutiny. Transparency won’t erase the frustration of having your online store or streaming service go down, but it does reassure customers that AWS takes reliability seriously enough to narrate its own failures in real time.
Not only that, but the biggest concern when services go down is that it’s some kind of attack. If you’re AWS and you know that’s not the case, you let people know as quickly as you can, even if it means admitting there was a mistake or that something failed.
In the long run, that candor may be what keeps customers from looking elsewhere—because if your job is to be the backbone of the internet, trust may be the most fragile thing of all. Because in the cloud era, what you lose most during failure may not just be access for a few minutes—it might be the confidence that you still belong on the backbone of the internet.
The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.