67: The nonsensical viral phrase that Gen Alpha can’t stop saying (and older generations can’t help but try to decode) has taken the world by storm. And now, it seems to be taking over OpenAI as well.
“GPT-6 will be renamed GPT-6-7, you’re welcome,” OpenAI CEO Sam Altman posted to X on Friday.
Altman’s announcement comes just a couple of days after Dictionary.com named the slang 2025’s word of the year.
For those lucky enough to be unfamiliar with the term, the Gen Alpha slang can be traced back to hip-hop artist Skrilla’s late-2024 song “Doot Doot.” Influenced by the song, a young boy went viral for screaming the phrase “67” repeatedly, and the term eventually became a staple of Gen Alpha vernacular and brain-rot culture.
Pronounced “six-seven,” Dictionary.com defines the term as “a viral, ambiguous slang term that has waffled its way through Gen Alpha social media and school hallways.”
In fact, the term has become so prevalent in hallways that some schools have gone the extra mile and banned it altogether.
And while some argue that the term carries some meaning, with Dictionary.com noting that “some argue it means ‘so-so,’ or ‘maybe this, maybe that,’” it has largely proliferated simply as a meme.
But, is OpenAI too late to the joke?
It is not yet clear how serious Altman’s announcement is; however, many social media users are poking fun at the big tech CEO’s shot at internet cultural relevancy months after the meme had already gone mainstream.
One user replied to the CEO, mocking his announcement: “We’re renaming GPT-6 to GPT-6-7 so users think we’re innovating, but it’s really just GPT-4 with glitter and anxiety. You’re welcome.”
Whether the announcement becomes reality or not, it is safe to say the slang term, like most memes, is nearing its expiry date.
OpenAI has started selling power users extra credits for its Sora AI video generation tool. An extra 10 video gens will retail for $4 through Apple’s App Store. The company currently has a limit of 30 free gens per day, a rate that will likely decrease as OpenAI starts to monetize the offering. Bill Peebles, who heads OpenAI’s Sora, posted on X about the changes.
“Eventually we will need to bring the free gens down to accommodate growth (we won’t have enough gpus to do it otherwise!), but we’ll be transparent as it happens,” he said.
Peebles also said that OpenAI plans to monetize by letting entities essentially license out their copyrighted material, whether their artwork, characters, or likenesses. “We imagine a world where rightsholders have the option to charge extra for cameos of beloved characters and people,” he wrote. Although making the cameo feature a core part of the monetization while the company is being sued by Cameo for trademark infringement is certainly a bold choice. And that’s just the latest in a series of dodgy actions tied to OpenAI’s text-to-video AI app.
Louise Matsakis: Oh God, you would not see me in the office for weeks if there was a bedbug infestation. How did they find out about this?
Zoë Schiffer: So basically, they received this email on Sunday, saying that exterminators had arrived at the scene with sniffer dogs and “found credible evidence of their presence.” “Their” being the bedbugs. Sources tell WIRED that Google’s offices in New York are home to a number of large stuffed animals, and there was definitely a rumor going around among employees that these stuffed animals were implicated in the outbreak. We were not able to verify this information before we published, but in any case, the company told employees as early as Monday morning that they could come back to the office. And people like you, Louise, were really not happy about this. They were like, “I’m not sure that it’s totally clean here.” That’s why they were in our inboxes wanting to chat.
Louise Matsakis: Can I just say that if you have photos or a description of said large stuffed animals, please get in touch with me and Zoë. Thank you.
Zoë Schiffer: Yes. This is a cry for help. I thought the best part of this is when I gave Louise my draft, she was like, “Wait, this has happened before.” And pulled up a 2010 article about a bedbug outbreak at the Google offices in New York.
Louise Matsakis: Yes. This is not the first time, which is heartbreaking.
Zoë Schiffer: Coming up after the break, we dive into why some people have been submitting complaints to the FTC claiming ChatGPT has meddled with their minds, leading them to AI psychosis. Stay with us.
Welcome back to Uncanny Valley. I’m Zoë Schiffer. I’m joined today by WIRED’s Louise Matsakis. Let’s dive into our main story this week. The Federal Trade Commission has received 200 complaints mentioning OpenAI’s ChatGPT between November 2022, when it launched, and August 2025. Most people had normal complaints. They couldn’t figure out how to cancel their subscription, or they were frustrated by unsatisfactory or inaccurate answers from the chatbot. But among these complaints, our colleague, Caroline Haskins, found that several people attributed delusions, paranoia, and spiritual crisis to the chatbot.
The ongoing shutdown of the federal government might be threatening the recovery of the IPO market, but news that OpenAI has reorganized its corporate structure to launch a for-profit company could keep investor optimism alive. That new structure paves the way for the AI giant to pursue a public offering—and that could happen within the next two years.
OpenAI hasn’t formally committed to an IPO, but CEO Sam Altman, in a livestream broadcast discussing the new structure, said an initial public offering was the most likely path for the company’s future.
An OpenAI IPO would likely be one of the largest in Wall Street history. The company has a current valuation of $500 billion, following a secondary share sale earlier this month of $6.6 billion. (The company had authorized the sale of up to $10.3 billion in shares, but many investors and employees chose to keep their holdings for now.) To put that in perspective, at the moment, the record holder for the largest IPO in U.S. history is Chinese e-commerce giant Alibaba, which saw its market cap reach $236 billion on the day of its IPO.
An IPO makes sense for OpenAI, given its ambitions. On the livestream, Altman said the company hopes to “build an infrastructure factory where we can create one gigawatt a week of compute.” Even if the company manages to reduce the cost of that, Altman said it could run $20 billion over the five-year lifecycle of the equipment. It also plans to continue working on artificial general intelligence (AGI), a theoretical milestone that would allow AI to reason like human beings.
None of those goals are cheap. And the company’s investors, who have put over $57 billion into OpenAI since its founding, are eventually going to seek an exit.
That said, a for-profit OpenAI is something the company’s critics and competitors, which include Elon Musk and Meta, have sought to prevent, saying that allowing startups to enjoy the benefits of nonprofit status before switching to for-profit would set a dangerous precedent. A publicly-traded version of the company could be an even bigger threat to those rivals, given its market potential.
While investors would likely clamor for an OpenAI IPO, it could also further escalate fears of an AI bubble. While executives in the space have shrugged off bubble fears, plenty of other prominent names are sounding warnings. Bridgewater Associates founder Ray Dalio said Tuesday his personal “bubble indicator” was relatively high right now, noting that 80 percent of market gains have been from Big Tech companies. Bill Gates, meanwhile, compared the AI bubble to the dot-com bubble of the late 1990s and early 2000s.
Not everyone is convinced there’s an AI bubble, though. Goldman Sachs, in a note to investors earlier this month, said it believes AI’s story is just beginning. “The enormous economic value promised by generative AI justifies the current investment in AI infrastructure, and overall levels of AI investment appear sustainable as long as companies expect that investment today will generate outsized returns over the long run,” analysts wrote.
Should OpenAI pursue an IPO, it already has some big names in tech expressing interest. Microsoft’s 27 percent ownership stake in the company would give the tech giant’s stock a notable boost, should OpenAI decide to trade on the open market. And Nvidia’s Jensen Huang, speaking at a press conference earlier this week, said he expected the IPO to happen in the near future.
“I wish back in the earlier days that we had invested a lot more,” Huang said. “If you told me that OpenAI is going to go public next year, I’m not surprised, and in a lot of ways, I think this can be one of the most successful public offerings in history.”
But for most of the past two years, the biggest story about Google has been that artificial intelligence would, inevitably, make search obsolete. People would stop “Googling” things because AI chatbots could just tell them the answers. Search—the company’s $200-billion-a-year cash cow—was supposed to be doomed.
On the one hand, the idea that people would no longer type queries into Google’s search box and then click on the blue links that show up on results pages was a doomsday scenario. And AI chatbots certainly made that look increasingly likely.
Then again, that story always assumed Google would sit still while the world around it changed. It assumed the company that practically invented the modern internet—or at least the way most of us experience it—wouldn’t figure out how to adapt.
On Wednesday, Alphabet, Google’s parent company, reported its first-ever $100 billion quarter. Revenue rose 16 percent to $102.3 billion. Net income jumped 33 percent to $34.98 billion. Those are not the numbers of a company whose main business is being disrupted. They’re more like the numbers of a company that’s quietly figuring out how to change with the behavior of its users.
Google Search and YouTube each grew at a double-digit pace. “Google Search & other” revenue climbed 15 percent to $56.6 billion. YouTube ads rose 15 percent to $10.3 billion. Combined, Google’s advertising machine brought in more than $74 billion for the quarter. Not only that, but its cloud business grew by 35 percent over the previous year. That leads to the most interesting part of this story, which is the part about how Google is spending all that money.
As it announced its earnings, Google said it would raise its capital expenditures, specifically as it invests in infrastructure to serve its cloud businesses. That’s the part of the business that powers its AI ambitions. Google made more money than ever from search, and it’s spending that money on AI.
Training and running massive models requires staggering amounts of computing power. But that’s exactly where Google’s advantage lies—it already owns what is probably the largest global computing infrastructure ever built.
Now, it’s doubling down. Alphabet expects to spend $91 billion to $93 billion in capital expenditures this year—mostly on data centers, networking, and custom chips designed for AI workloads. That’s up sharply from last year and puts Google in the same spending league as Amazon and Microsoft.
And even with those huge investments, Alphabet’s operating margin—excluding a $3.5 billion European Commission fine—rose to 33.9 percent. In other words, it’s spending tens of billions to expand AI capacity while remaining one of the most profitable companies on the planet.
Google’s strategy isn’t just about protecting search ads. It’s about using the strength of that business to fund a transformation into something bigger: the dominant AI platform.
That’s still a big lift. Yes, Google is a household name, but it’s still behind in AI—at least in terms of consumer mindshare. OpenAI’s ChatGPT is the front-runner in terms of customer adoption, but Google has almost every other advantage. It has the technology, the infrastructure, and a built-in user base that already trusts it as the default source of information.
And because Google controls so many layers of the stack—hardware, data centers, models, and consumer products—it can absorb the cost of AI adoption in a way startups and rivals can’t. It doesn’t have to rent the future on someone else’s platform; it’s already building it.
Now, Google is doing something very few companies have ever pulled off: funding its own disruption without losing momentum. Search and YouTube are still massive profit engines, generating the cash Google needs to build the infrastructure for AI. Basically, Google doesn’t really care whether you type your queries into a search box or a chatbot window, as long as you keep asking it your questions.
For all the hype about AI replacing search, this quarter makes one thing clear: Google’s biggest business isn’t dying. It’s evolving into something that could be far more lucrative. If the company’s $93 billion AI spending spree pays off the way Pichai expects, Google might have just figured out a better ending to the story than search.
The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.
This is The Takeaway from today’s Morning Brief, which you can sign up to receive in your inbox every morning.
It didn’t take long for OpenAI to get out from under Microsoft’s (MSFT) shadow.
You’d be forgiven for thinking it was already among the tech giants as ChatGPT makes its mark on the world and embeds itself into phones and minds and (questionable) work products. Tuesday’s new agreement with Microsoft makes it official though: The next chapter in OpenAI’s life frees it up to make even more deals and pushes the AI princeling closer to becoming a Magnificent member of Big Tech.
The for-profit deal, like so many other AI pacts, juiced with investor excitement and the prospects of huge growth, sets both parties on a path to enrich themselves, with mutually reinforcing provisions. Now valued at $500 billion, OpenAI can more easily raise money, hire talent, and define its objectives without its nonprofit robes.
Microsoft, after generating a roughly 10x return already, will own 27% of the new public benefit corporation and has secured a purchase from OpenAI of $250 billion worth of Azure services. The company’s intellectual property rights for OpenAI’s models and products will be extended through 2032.
But the contours of the deal give way to a bigger idea. After restructuring to a traditional for-profit company, OpenAI has removed a major roadblock on the path to raising capital and potentially becoming a publicly traded company.
Microsoft had been one of the most consequential holdouts to OpenAI’s for-profit plan. Other outside parties, most notably Elon Musk, have objected to OpenAI’s deviation from its original nonprofit mission. (Musk’s rival AI company xAI is also suing OpenAI and Apple over allegations of anticompetitive behavior in promoting ChatGPT through the App Store.)
But it appears the AI rat race — at the scale of mega tech platforms — has no place for nonprofits. And whatever objections Microsoft had have since been squashed. It probably helped that Microsoft and OpenAI have enjoyed a lucrative partnership and will probably continue to.
OpenAI relied on the Azure cloud platform, while Microsoft provided billions in investments to OpenAI. The software and cloud giant’s stock has been strapped to the AI rocket ship, and its first-mover status has secured a leadership position, arguably ahead of rivals Amazon (AMZN) and Google (GOOG).
In a now-familiar dynamic, shares of Microsoft rose close to 4% after the updated partnership was announced. In recent weeks, Walmart (WMT), AMD (AMD), Broadcom (AVGO), and Nvidia (NVDA) have all received the OpenAI bump following deal announcements.
After nearly a year of negotiations with Microsoft and the attorneys general of California and Delaware, OpenAI has successfully reorganized its corporate structure. OpenAI says that its nonprofit entity, now called the OpenAI Foundation, wields a controlling stake in OpenAI’s for-profit arm, which is now a public benefit corporation called OpenAI Group.
Essentially, this move splits OpenAI into two separate organizations, with the nonprofit OpenAI Foundation being responsible for overseeing and governing the for-profit OpenAI Group PBC.
The announcement is a major victory for OpenAI CEO Sam Altman, who has been working to eliminate the capped-profit structure that the company originally adopted in 2019. Under that structure, investors got a capped return on their investment, but any additional revenue would be owned by the nonprofit. With the new structure, investors will no longer have limits on the returns they can receive for their investments; it will also make it easier for the company to eventually go public. According to Reuters, Altman is not getting any ownership stake in the newly restructured OpenAI.
In a blog post announcing the news, the company wrote that the OpenAI Foundation’s stake in OpenAI Group is currently valued at roughly $130 billion, around 26 percent of the total company. The Foundation will be granted additional ownership if the Group’s share price increases significantly after 15 years.
Through the foundation, OpenAI says it will pursue philanthropic initiatives aimed at ensuring that artificial general intelligence (AGI), defined by OpenAI as “highly autonomous systems that outperform humans at most economically valuable work,” benefits all of humanity. The foundation will start this work with a $25 billion commitment to fund work related to AI-powered health science research and new security measures to minimize AI’s risks while maximizing its benefits.
To achieve this restructuring, Altman needed to renegotiate the terms of OpenAI’s agreement with Microsoft, its largest investor. In a joint statement, the companies said that Microsoft now owns roughly 27 percent of OpenAI Group, valued at roughly $135 billion, slightly more than the OpenAI Foundation.
Previously, the deal gave Microsoft full access to OpenAI’s tech until OpenAI declares that it has achieved AGI, but under the new terms, any AGI claims made by OpenAI will be verified by an independent expert panel.
In addition, the new Microsoft deal stipulates that the company will not have any IP rights to any consumer hardware that OpenAI releases in the future, and that OpenAI can now jointly develop “some products” with third parties, although API-based products will continue to be exclusive to Microsoft’s Azure platform.
OpenAI’s “will they, won’t they” flirtation with becoming a for-profit corporation is over. On Tuesday, the company announced that it has completed its recapitalization process, turning its AI lab into a for-profit corporation despite the objections of the company’s co-founder, Elon Musk.
“We believe that the world’s most powerful technology must be developed in a way that reflects the world’s collective interests,” OpenAI chairman Bret Taylor wrote of the change. “The close of our recapitalization gives us the ability to keep pushing the frontier of AI, and an updated corporate structure to ensure progress serves everyone.”
Under the new structure, there are now two separate entities: the OpenAI Foundation, which is a non-profit organization with partial control over OpenAI Group, a newly formed public benefit corporation. Under the new structure, OpenAI Group will be able to do things that a for-profit entity can (and a non-profit can’t), like raise more money and acquire companies. It will also get its own board of directors, which the Foundation will appoint.
OpenAI Foundation will own 26% of the now for-profit OpenAI Group, valued at around $130 billion, and will continue to be granted shares of the company as it grows. Microsoft will hold a 27% stake in the for-profit arm, which is currently valued at about $135 billion. Microsoft also announced that, as a part of this shift, it will continue to hold intellectual property rights to OpenAI models and future products through 2032. The remaining 47% of the company’s stock will be held by other investors and the employees of OpenAI Group.
In announcing the for-profit move, Altman said in a livestream that his hope is for the OpenAI Foundation to be one of the “biggest non-profits ever.” The expectation is for the Foundation to use its equity stake in the OpenAI Group to help fund philanthropic work. That will start with a $25 billion commitment to “health and curing diseases” and “AI resilience” to counteract some of the risks presented by the deployment of AI.
Altman also seemed to shift the goalposts on achieving artificial general intelligence, suggesting that OpenAI aims to develop a “personal AGI” that will be available to people via tools that they will then use to create new things.
As part of a frankly pretty messy Q&A session at the end of the stream, Altman and OpenAI Chief Scientist Jakub Pachocki answered questions from viewers. That included a question on when AGI will be achieved. Pachocki said, “In some number of years, we will look back at these years and say this was the transition period when AGI happened,” but did not provide a definitive answer. Altman declined to provide an answer related to AGI, instead pivoting to a goal of creating an “AI researcher” capable of performing autonomous research by 2028.
Notably, Microsoft announced that as part of the new arrangement with OpenAI, if the company declares that it has achieved artificial general intelligence, it will have to have that verified by an independent expert panel.
Throughout the Q&A, Altman was peppered with questions that clearly had an air of frustration.
When asked why OpenAI has copied TikTok’s model with Sora and may introduce ads to ChatGPT despite warning about tech becoming addictive and eroding trust, Altman admitted that he’s still worried about these problems but said, “You’ll have to judge us on our actions,” without providing anything resembling a real answer. The majority of the most upvoted questions from audience members in the Q&A were from users frustrated by the guardrails that prevent them from having “adult” conversations with ChatGPT, which resulted in Altman repeatedly apologizing for the rollout of its latest model and safety features. At times, Pachocki and Altman appeared to be trying to pass difficult questions to each other to handle rather than committing to an answer themselves.
The move to a for-profit structure has been a point of contention around OpenAI for years now. Despite initially being founded as a non-profit, OpenAI launched a for-profit subsidiary in 2019, and in 2024, it announced a plan to restructure to form a public benefit corporation that would shift ownership of OpenAI’s models to the for-profit arm. That received a significant amount of pushback, including from co-founder Elon Musk, who sued to prevent the restructuring from taking place. While the legal challenges temporarily prevented OpenAI from making the change, the company decided to go forward with the recapitalization anyway. Time will tell if it sticks.
While Musk will likely continue to object to the change, another opponent of OpenAI’s for-profit shift appears to be standing down. According to TIME, California Attorney General Rob Bonta won’t sue to prevent OpenAI from forming its corporate arm. “We secured concessions that ensure charitable assets are used for their intended purpose, safety will be prioritized, as well as a commitment that OpenAI will remain right here in California. With these important concessions in place, we will not be in court opposing OpenAI’s recapitalization plan,” Bonta told TIME in an email.
ChatGPT users will soon have a new way to buy products using the generative artificial intelligence bot, thanks to a partnership between PayPal and OpenAI.
PayPal, which owns the payment platform Venmo, announced Tuesday it is embedding its digital payment wallet into ChatGPT starting next year.
When users search for a product using the AI chatbot, they will now have the option to complete their purchase using PayPal. Shoppers will also be given a “pay another way” option, allowing them to pay using a credit or debit card or bank account.
Alex Chriss, president and CEO of PayPal, said in a statement that the OpenAI and PayPal partnership will help people “go from chat to checkout in just a few taps for our joint customer bases.”
The deal, reported first by CNBC, will also provide tens of millions of PayPal merchants — spanning apparel, fashion, beauty, home improvement and electronics — a new space to do business.
“This integration will make millions of products discoverable and purchasable through ChatGPT,” PayPal said in its statement.
At the end of 2024, the financial technology company had 434 million active consumer and merchant accounts.
OpenAI did not immediately respond to a request for comment.
PayPal said on Tuesday that it is adopting a protocol in combination with OpenAI’s “Instant Checkout” feature to let users pay for their shopping directly within ChatGPT, starting in 2026.
PayPal is adopting the Agentic Commerce Protocol (ACP), an open-source specification developed by OpenAI that lets merchants make their products available within AI apps, consequently enabling users to shop using AI agents. Meanwhile, OpenAI’s “Instant Checkout” feature, launched in September, lets users confirm their order, shipping, and payment details, and complete purchases without leaving ChatGPT.
Customers can use their PayPal wallets for checkout, which, the company said, would enable it to provide buyer and seller protection, as well as dispute resolution. The company is also providing technology to handle card payments from within ChatGPT using a separate payments API.
And next year, merchants using PayPal products will have their products discoverable on ChatGPT, starting with categories like apparel, fashion, beauty, home improvement, and electronics. Merchants will not need to build any integrations, as PayPal will handle merchant routing and payments behind the scenes.
The company said it is also launching an agentic commerce suite that would let merchants feature their catalogs within AI apps, accept payments on different AI apps, and get insights about consumer behavior.
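For a sense of what this could look like from a merchant’s side, here is a purely hypothetical sketch of an ACP-style product feed entry. The field names and structure below are illustrative assumptions only, not the actual Agentic Commerce Protocol schema; the point is just the shape of the data a merchant might expose so an AI app can surface and purchase an item in-chat.

```python
# Hypothetical sketch of an ACP-style product feed entry. All field names
# are illustrative assumptions, not the real Agentic Commerce Protocol
# schema.
product_feed_entry = {
    "id": "sku-1042",
    "title": "Waterproof hiking jacket",
    "price": {"amount_cents": 8999, "currency": "USD"},
    "availability": "in_stock",
    "checkout": {
        # Payment routing handled by the payments provider behind the scenes
        "provider": "paypal",
        "capabilities": ["instant_checkout", "dispute_resolution"],
    },
}

def agent_can_buy(entry: dict) -> bool:
    """Could an AI agent complete this purchase without leaving the chat?"""
    return (
        entry["availability"] == "in_stock"
        and "instant_checkout" in entry["checkout"]["capabilities"]
    )

print(agent_can_buy(product_feed_entry))  # True
```

In a scheme like this, the merchant only publishes the catalog entry; discovery, routing, and settlement would be handled by the AI app and the payments provider, which matches PayPal’s claim that merchants need not build any integrations themselves.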
PayPal has been working to insert itself as a payments partner within various companies’ AI-enabled shopping experiences, particularly as people increasingly use AI apps for their daily tasks. In May, the company teamed up with Perplexity to let users check out within the AI search tool, and in September, PayPal said it was adopting Google’s Agent Payments Protocol to integrate its products within various Google products.
PayPal said that, apart from the partnership on commerce, the company is giving all of its employees enterprise access to ChatGPT and allowing its engineers to make better use of OpenAI’s coding tool, Codex.
“Hundreds of millions of people turn to ChatGPT each week for help with everyday tasks, including finding products they love, and over 400 million use PayPal to shop,” Alex Chriss, president and CEO of PayPal, said in a statement. “By partnering with OpenAI and adopting the Agentic Commerce Protocol, PayPal will power payments and commerce experiences that help people go from chat to checkout in just a few taps for our joint customer bases,” he added.
OpenAI is making its ChatGPT Go plan available free of charge for one year to users in India who sign up during a limited promotional period starting November 4, as the company looks to expand in one of its top markets.
On Tuesday, OpenAI announced the promotion but did not specify how long the offer would remain available. Existing ChatGPT Go subscribers in India will also be eligible for the free 12-month plan, the company said.
Priced at less than $5 per month, ChatGPT Go launched in India in August as OpenAI’s most affordable paid subscription plan. The service later expanded to Indonesia and, earlier this month, to 16 additional countries across Asia.
India, the world’s most populous country with over 700 million smartphone users and more than a billion internet subscribers, has been a key market for OpenAI. The company opened its New Delhi office in August and is currently building a local team to expand its presence.
Earlier this year, OpenAI CEO Sam Altman said India was the company’s second-largest market after the U.S. However, making money from ChatGPT’s paid plans in the country has proven challenging. The app saw over 29 million downloads in the 90 days leading up to August, but generated just $3.6 million in in-app purchases during that period, according to Appfigures data reviewed by TechCrunch at the time.
ChatGPT Go offers 10 times more usage than the free version for generating responses, creating images, and uploading files. It also features improved memory for more personalized responses, according to OpenAI.
“Since initially launching ChatGPT Go in India a few months ago, the adoption and creativity we’ve seen from our users has been inspiring,” said Nick Turley, vice president and head of ChatGPT, in a statement. “We’re excited to see the amazing things our users will build, learn, and achieve with these tools.”
OpenAI’s rivals, including Perplexity and Google, are also looking to tap into India’s large and youthful user base. Perplexity recently partnered with Airtel to offer free Perplexity Pro subscriptions to the telecom operator’s 360 million subscribers. Similarly, Google introduced a free one-year AI Pro plan for students in India.
OpenAI is set to host its DevDay Exchange developer conference in Bengaluru on November 4, where it is expected to make India-specific announcements aimed at local developers and enterprises. India has emerged as one of the fastest-growing markets for ChatGPT, with millions of users engaging with the chatbot daily, the company said.
Excited for our first DevDay Exchange event in India 🇮🇳 on November 4. Ahead of that, we have some exciting updates coming for India users over the next couple of weeks. Stay tuned!
OpenAI claims that 10% of the world’s population currently uses ChatGPT on a weekly basis. In a report published on Monday, OpenAI highlights how it is handling users displaying signs of mental distress: the company claims that 0.07% of its weekly users display signs of “mental health emergencies related to psychosis or mania,” 0.15% express risk of “self-harm or suicide,” and 0.15% show signs of “emotional reliance on AI.” That totals nearly three million people.
In its ongoing effort to show that it is trying to improve guardrails for users who are in distress, OpenAI shared the details of its work with 170 mental health experts to improve how ChatGPT responds to people in need of support. The company claims to have reduced “responses that fall short of our desired behavior by 65-80%,” and now is better at de-escalating conversations and guiding people toward professional care and crisis hotlines when relevant. It also has added more “gentle reminders” to take breaks during long sessions. Of course, it cannot make a user contact support nor will it lock access to force a break.
The company also released data on how frequently people are experiencing mental health issues while communicating with ChatGPT, ostensibly to highlight how small of a percentage of overall usage those conversations account for. According to the company’s metrics, “0.07% of users active in a given week and 0.01% of messages indicate possible signs of mental health emergencies related to psychosis or mania.” That is about 560,000 people per week, assuming the company’s own user count is correct. The company also claimed to handle about 18 billion messages to ChatGPT on a weekly basis, so that 0.01% equates to 1.8 million messages of psychosis or mania.
One of the company’s other major areas of emphasis for safety was improving its responses to users expressing desires to self-harm or commit suicide. According to OpenAI’s data, about 0.15% of users per week express “explicit indicators of potential suicidal planning or intent,” accounting for 0.05% of messages. That would equal about 1.2 million people and nine million messages.
The final area the company focused on as it sought to improve its responses to mental health matters was emotional reliance on AI. OpenAI estimated that about 0.15% of users and 0.03% of messages per week “indicate potentially heightened levels of emotional attachment to ChatGPT.” That is 1.2 million people and 5.4 million messages.
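As a rough sanity check, OpenAI’s reported rates can be converted into the absolute figures cited above, assuming the 800 million weekly users and roughly 18 billion weekly messages the company itself claims (both figures from this article, not independently verified):

```python
# Back-of-the-envelope conversion of OpenAI's reported rates into
# absolute weekly counts. Base figures are the company's own claims.
WEEKLY_USERS = 800_000_000        # reported weekly active users
WEEKLY_MESSAGES = 18_000_000_000  # reported messages handled per week

rates = {
    "psychosis/mania":    {"users": 0.0007, "messages": 0.0001},
    "suicidal intent":    {"users": 0.0015, "messages": 0.0005},
    "emotional reliance": {"users": 0.0015, "messages": 0.0003},
}

for label, r in rates.items():
    people = WEEKLY_USERS * r["users"]
    msgs = WEEKLY_MESSAGES * r["messages"]
    print(f"{label}: {people:,.0f} people, {msgs:,.0f} messages per week")
# psychosis/mania: 560,000 people, 1,800,000 messages per week
# suicidal intent: 1,200,000 people, 9,000,000 messages per week
# emotional reliance: 1,200,000 people, 5,400,000 messages per week
```

The outputs match the per-category figures OpenAI published, and summing the three user rates (0.37 percent of 800 million) gives the “nearly three million people” total.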
OpenAI has taken steps in recent months to try to provide better guardrails against the possibility that its chatbot enables or worsens a person’s mental health challenges, following the death of a 16-year-old who, according to a wrongful death lawsuit filed by his parents, asked ChatGPT for advice on how to tie a noose before taking his own life. But the sincerity of that effort is worth questioning: at the same time the company announced new, more restrictive chats for underage users, it also announced that it would allow adults to give ChatGPT more of a personality and engage in things like producing erotica, features that would seemingly increase a person’s emotional attachment to and reliance on the chatbot.
OpenAI is reportedly developing a generative music tool. While no release date has been announced, it would allow users to create music for videos or vocal tracks based on text and audio prompts, according to a report in The Information.
For founders, marketers, and ad pros, this could mean creating demos for a catchy jingle or moody soundtrack to reflect the voice and tone of their brand in minutes. Think the next “I’m lovin’ it” or “Nationwide is on your side.”
One of The Information’s sources says a group of students at the Juilliard School is helping annotate scores to train the AI model. But training has been a point of contention in AI music. In June of 2024, some of the largest record labels in the world, including Warner Music Group and Universal Music Group, sued Suno AI and Uncharted Labs, alleging that the companies unlawfully trained their generative AI on copyrighted music.
The Recording Industry Association of America, representing the labels, added another complaint to the lawsuit in September. It claimed Suno used stream ripping, a version of music piracy, to download the copyrighted recordings from YouTube.
Spotify has also been under fire recently for streaming AI-generated music, among other AI connections. According to AI Magazine, musicians are boycotting the platform after its CEO invested in Helsing, a military AI company. English band Massive Attack objected to artists’ work and fans’ money contributing to funding “lethal, dystopian technologies.”
While AI-generated bands and their creators have faced backlash from fans, like this summer’s Velvet Sundown mess, AI music is often celebrated in the ad world. Last year, for example, Red Lobster made a splash by using AI to write 30 songs, across genres, about its Cheddar Bay Biscuits.
For the first time ever, OpenAI has released a rough estimate of how many ChatGPT users globally may show signs of having a severe mental health crisis in a typical week. The company said Monday that it worked with experts around the world to make updates to the chatbot so it can more reliably recognize indicators of mental distress and guide users toward real-world support.
In recent months, a growing number of people have ended up hospitalized, divorced, or dead after having long, intense conversations with ChatGPT. Some of their loved ones allege the chatbot fueled their delusions and paranoia. Psychiatrists and other mental health professionals have expressed alarm about the phenomenon, which is sometimes referred to as “AI psychosis,” but until now, there’s been no robust data available on how widespread it might be.
In a given week, OpenAI estimated that around 0.07 percent of active ChatGPT users show “possible signs of mental health emergencies related to psychosis or mania” and 0.15 percent “have conversations that include explicit indicators of potential suicidal planning or intent.”
OpenAI also looked at the share of ChatGPT users who appear to be overly emotionally reliant on the chatbot “at the expense of real-world relationships, their well-being, or obligations.” It found that about 0.15 percent of active users exhibit behavior that indicates potential “heightened levels” of emotional attachment to ChatGPT weekly. The company cautions that these messages can be difficult to detect and measure given how relatively rare they are, and that there could be some overlap between the three categories.
OpenAI CEO Sam Altman said earlier this month that ChatGPT now has 800 million weekly active users. The company’s estimates therefore suggest that every seven days, around 560,000 people may be exchanging messages with ChatGPT that indicate they are experiencing mania or psychosis. About 2.4 million more are possibly expressing suicidal ideations or prioritizing talking to ChatGPT over their loved ones, school, or work.
OpenAI says it worked with over 170 psychiatrists, psychologists, and primary care physicians who have practiced in dozens of different countries to help improve how ChatGPT responds in conversations involving serious mental health risks. If someone appears to be having delusional thoughts, the latest version of GPT-5 is designed to express empathy while avoiding affirming beliefs that have no basis in reality.
In one hypothetical example cited by OpenAI, a user tells ChatGPT they are being targeted by planes flying over their house. ChatGPT thanks the user for sharing their feelings, but notes that “No aircraft or outside force can steal or insert your thoughts.”
OpenAI’s ChatGPT, Google’s Gemini, DeepSeek, and xAI’s Grok are pushing Russian state propaganda from sanctioned entities, including citations from Russian state media and from sites tied to Russian intelligence or pro-Kremlin narratives, when asked about the war against Ukraine, according to a new report.
Researchers from the Institute of Strategic Dialogue (ISD) claim that Russian propaganda has targeted and exploited data voids—where searches for real-time data provide few results from legitimate sources—to promote false and misleading information. Almost one-fifth of responses to questions about Russia’s war in Ukraine, across the four chatbots they tested, cited Russian state-attributed sources, the ISD research claims.
“It raises questions regarding how chatbots should deal when referencing these sources, considering many of them are sanctioned in the EU,” says Pablo Maristany de las Casas, an analyst at the ISD who led the research. The findings raise serious questions about the ability of large language models (LLMs) to restrict sanctioned media in the EU, which is a growing concern as more people use AI chatbots as an alternative to search engines to find information in real time, the ISD claims. For the six-month period ending September 30, 2025, ChatGPT search had approximately 120.4 million average monthly active recipients in the European Union according to OpenAI data.
The researchers asked the chatbots 300 neutral, biased, and “malicious” questions relating to the perception of NATO, peace talks, Ukraine’s military recruitment, Ukrainian refugees, and war crimes committed during the Russian invasion of Ukraine. The researchers used separate accounts for each query in English, Spanish, French, German, and Italian in an experiment in July. The same propaganda issues were still present in October, Maristany de las Casas says.
Amid widespread sanctions imposed on Russia since its full-scale invasion of Ukraine in February 2022, European officials have sanctioned at least 27 Russian media sources for spreading disinformation and distorting facts as part of Russia’s “strategy of destabilizing” Europe and other nations.
OpenAI spokesperson Kate Waters tells WIRED in a statement that the company takes steps “to prevent people from using ChatGPT to spread false or misleading information, including such content linked to state-backed actors,” adding that these are long-standing issues that the company is attempting to address by improving its model and platforms.
OpenAI is working on a new tool that would generate music based on text and audio prompts, according to a report in The Information.
Such a tool could be used to add music to existing videos, or to add guitar accompaniment to an existing vocal track, sources said. It’s not clear when OpenAI plans to launch such a tool or whether it would be available as a standalone product (as opposed to integrating with OpenAI’s ChatGPT and video app Sora).
One source told The Information that OpenAI is working with some students from the Juilliard School to annotate scores as a way to provide training data.
While OpenAI has launched generative music models in the past, they predate the launch of ChatGPT; more recently, the company has been developing audio models focused on text-to-speech and speech-to-text. Other companies with generative music models include Google and Suno.
OpenAI launched an AI-powered web browser called ChatGPT Atlas this week, which makes me wonder: Is it finally time to ditch Safari?
That news was on our minds as Max Zeff, Sean O’Kane, and I discussed the browser landscape — including some lesser-known alternatives — on the latest episode of the Equity podcast. But it doesn’t sound like any of us will be making a big switch soon.
For one thing, Sean noted many companies have tried and ultimately failed to unseat the major browsers due to their inability to make money on the browser alone. Of course, that’s less likely to be a problem for OpenAI, with its increasingly massive funding rounds.
Max, meanwhile, has actually tried out Atlas and other browsers that promise AI agents will do the work for you, and he said there’s a “slight efficiency gain” at best. At other times, you end up watching the agent “click around on a website” — is that something normal users are really crying out for? Plus, there are significant security risks.
Read a preview of our conversation below, edited for length and clarity.
Anthony: I’m still on Safari, but as far as the search engine, which is tied to browsers, I’ve actually been trying to experiment with non-Google [options,] because I’m just tired of seeing all the genAI stuff at the top of my search results.
I think also there’s this question of: If these AI browsers take off, what does that mean for the idea of the open web in general? You can still go to web pages, but I don’t think it would be crazy to suggest that a website is just going to become less and less important as more and more of our browsing is controlled by these AI interfaces and chatbots.
Max: I think that this has been a big idea that people talk about a lot: What does the agentic web look like? And I think it is a fascinating question. People have tried to come up with all these solutions to work toward this future that [they] feel is coming.
And I think that there is a certain aspect of it that reminds me of previous tech waves where it’s like, “Okay, but what is the actual experience? What is the value proposition to a consumer of using one of these tools?”
And it’s just not super compelling today. I’ve tried out ChatGPT Atlas and I’ve tried out Comet and the most generous estimation of them is, it’s a slight efficiency gain. It makes you slightly more efficient.
But most of the time that I’ve tried these things, you’re slowly watching it click around on a website, doing some task that I would probably never do in the real world. I would have it, like, look up a recipe and add all of the ingredients to Instacart. I’ve never done that. I think all the tech bros always say that example in the videos, and I’m like, “I don’t know if people are doing that that much.”
This is just this huge gap, in the face of the tech industry right now [saying] “We’re building all these tools for the agentic web,” but why would a normal person use this? And I don’t know.
Sean: I have not used any of those [AI browsers] but that’s in large part because I’m still very much an old head when it comes to search and browsing in general — a lot of the work that I’m doing involves looking for documents, which just naturally involves looking through different discrete parts of web pages that I’m familiar with, lots of Boolean searches on Google. Maybe I’ll try these one day if Google really does up and kill Boolean search, which it feels like is coming at some point, but it’s not there yet.
The thing that is interesting to me about these AI browsers is that we’ve seen other companies try to compete in the browser space and they always lose because it’s just impossible to make money on a browser as a product. And some have tried to charge up front for it, they can kind of get by for a little while, but it’s just ultimately not sustainable in the face of competing against Safari or Chrome or Firefox, for that matter.
What’s interesting to me … is you finally have these companies that just have infinite money, so they can ride it out as long as they want, because they’re not actually trying to make money on these things yet. Eventually they probably will, but OpenAI doesn’t need to make money on this thing in the next year or two, they can just have it out there and let it take shape.
And there’s more. Each week, we round up the security and privacy news we didn’t cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.
AWS confirmed in a “post-event summary” on Thursday that its major outage on Monday was caused by Domain Name System (DNS) resolution failures in its DynamoDB service. The company also explained, though, that these issues set off other problems as well, expanding the complexity and impact of the outage. One main component of the meltdown involved issues with the Network Load Balancer service, which is critical for dynamically managing the processing and flow of data across the cloud to prevent choke points. The other was disruptions to launching new “EC2 Instances,” the virtual machine configuration mechanism at the core of AWS. Without being able to bring up new instances, the system was straining under the weight of a backlog of requests. All of these elements combined to make recovery a difficult and time-consuming process. The entire incident—from detection to remediation—took about 15 hours to play out within AWS. “We know this event impacted many customers in significant ways,” the company wrote in its post mortem. “We will do everything we can to learn from this event and use it to improve our availability even further.”
The cyberattack that shut down production at global car giant Jaguar Land Rover (JLR) and its sweeping supply chain for five weeks is likely to be the most financially costly hack in British history, a new analysis said this week. According to the Cyber Monitoring Centre (CMC), the fallout from the attack is likely to be in the region of £1.9 billion ($2.5 billion). Researchers at the CMC estimated that around 5,000 companies may have been impacted by the hack, which saw JLR stop manufacturing, with the knock-on impact of its just-in-time supply chain also forcing firms supplying parts to halt operations as well. JLR restored production in early October and said its yearly production was down around 25 percent after a “challenging quarter.”
ChatGPT maker OpenAI released its first web browser this week—a direct shot at Google’s dominant Chrome browser. Atlas puts OpenAI’s chatbot at the heart of the browser, with the ability to search using the LLM and have it analyze, summarize, and ask questions of the web pages you’re viewing. However, as with other AI-enabled web browsers, experts and security researchers are concerned about the potential for indirect prompt injection attacks.
These sneaky, almost unsolvable, attacks involve hiding a set of instructions to an LLM in text or an image that the chatbot will then “read” and act upon; for instance, malicious instructions could appear on a web page that a chatbot is asked to summarize. Security researchers have previously demonstrated how these attacks could leak secret data.
Almost like clockwork, AI security researchers have demonstrated how Atlas can be tricked via prompt injection attacks. In one instance, independent researcher Johann Rehberger showed how the browser could automatically turn itself from dark mode to light mode by reading instructions in a Google Document. “For this launch, we’ve performed extensive red-teaming, implemented novel model training techniques to reward the model for ignoring malicious instructions, implemented overlapping guardrails and safety measures, and added new systems to detect and block such attacks,” OpenAI CISO Dane Stuckey wrote on X. “However, prompt injection remains a frontier, unsolved security problem, and our adversaries will spend significant time and resources to find ways to make ChatGPT agent[s] fall for these attacks.”
Researchers from the cloud security firm Edera publicly disclosed findings on Tuesday about a significant vulnerability impacting open source libraries for a file archiving format often used for distributing software updates or creating backups. The flaw originates in a library known as “async-tar”; numerous “forks,” or adapted versions, of the library contain the vulnerability and have released patches as part of a coordinated disclosure process. The researchers emphasize, though, that one widely used fork, “tokio-tar,” is no longer maintained, sometimes called “abandonware.” As a result, there is no patch for tokio-tar users to apply. The vulnerability is tracked as CVE-2025-62518.
“In the worst-case scenario, this vulnerability … can lead to Remote Code Execution (RCE) through file overwriting attacks, such as replacing configuration files or hijacking build backends,” the researchers wrote. “Our suggested remediation is to immediately upgrade to one of the patched versions or remove this dependency. If you depend on tokio-tar, consider migrating to an actively maintained fork like astral-tokio-tar.”
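The affected libraries are Rust crates, but the class of attack the researchers describe, an archive entry whose path escapes the extraction directory and overwrites an arbitrary file, can be illustrated (and defended against) in any language. Below is a minimal Python sketch of the defensive check; the `safe_extract` helper name is illustrative and is not part of any of the affected libraries:

```python
import os
import tarfile

def safe_extract(archive_path: str, dest: str) -> None:
    """Extract a tar archive, refusing entries that would escape dest."""
    dest_root = os.path.realpath(dest)
    with tarfile.open(archive_path) as tar:
        for member in tar.getmembers():
            # Resolve the final on-disk path of this entry and make sure
            # it stays inside the destination directory; entries like
            # "../evil.txt" or absolute paths are rejected outright.
            target = os.path.realpath(os.path.join(dest_root, member.name))
            if os.path.commonpath([dest_root, target]) != dest_root:
                raise ValueError(f"blocked traversal entry: {member.name!r}")
        tar.extractall(dest_root)
```

Python 3.12 builds a similar safeguard into the standard library via the `filter="data"` argument to `TarFile.extractall`; the point here is only to show the path check that vulnerable extraction code omits.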
When law enforcement agencies have shut off internet connections to the compounds, the criminal gangs have often turned to Elon Musk’s Starlink satellite system to stay online. In February, a WIRED investigation found thousands of phones connecting to the Starlink network at eight compounds based around the Myanmar-Thailand border. At the time, the company did not respond to queries about the use of its systems. This week, multiple Starlink devices were seized in a raid at a Myanmar compound.
The oddest, and most memorable, interaction I had with ChatGPT Atlas occurred as I scrolled around on Bluesky and asked it to highlight any trends it saw. (ChatGPT listed “political anxiety” and “everyday absurdism” as two of the trends on my timeline.) I was curious what would happen if I opened my Bluesky DMs, which amounted to just a few old messages from friends. So I asked the bot what it would see if I opened that more private page.
“Opening your DMs won’t expose anything to me—I’ll simply stop ‘seeing’ the page until you go back to a public view (like your feed, profile, or a post),” read part of ChatGPT’s answer.
Knowing that my Bluesky DMs don’t have anything sensitive, I gave it a whirl. Even though the bot said it wouldn’t “see the message list, conversation text, or sender info,” that wasn’t the case.
I opened a DM inviting me to an event, then asked the bot a follow-up question asking about the message and what the invitation was about. I received a response from ChatGPT that included facts culled from the conversation, along with details about the sender. What gives? I asked the bot to explain itself, just to see whether it would backtrack the privacy claims it made earlier in the chat.
“I do not have access to your private messages or account data unless you explicitly paste or show that text to me in the chat,” read ChatGPT’s response. “What you just did—asking ‘what was this an invite to?’—temporarily surfaced part of the message context so I could answer your question. But I don’t have any background or visibility into your inbox beyond what you actively bring into our conversation.”
The first answer the Ask ChatGPT sidebar spit out is a potential example of an AI “hallucination,” aka error, that’s common during interactions with generative AI tools. The second is more aligned with how the tool actually works.
While some users may appreciate having a chatbot always pulled up on the side of their screen, ready to surface related facts or summarize details, it felt like an unreliable tour guide to me. One who was overly confident in its bland responses and taking up too much space.
I’ll keep testing Atlas as my main browser for the next few weeks, but for now, I’m leaving that sidebar closed. I prefer the fullscreen version of the internet.
Dario Amodei, a former OpenAI executive, founded Anthropic in 2021. Photo by Chance Yeh/Getty Images for HubSpot
The latest sign of the A.I. industry’s unrelenting hunt for computing power comes from an expanded agreement between Anthropic and Google—a deal that, like several others struck in recent months, underscores the rise of circular arrangements across Silicon Valley. Under the new agreement, Google will provide Anthropic with well over one gigawatt of computing capacity by 2026, the companies announced yesterday (Oct. 23).
Anthropic noted that the deal is worth “tens of billions of dollars” but didn’t provide an exact figure. The partnership further deepens the startup’s ties with Google, which has already invested about $3 billion in Anthropic and is expected to supply the company with up to 1 million of its custom A.I. chips, called tensor processing units (TPUs).
Such partnerships are increasingly essential as leading A.I. startups scale at a breakneck pace. Anthropic, which now serves over 300,000 business customers, said the number of clients generating more than $100,000 in annual revenue has grown nearly sevenfold in the past year. “Anthropic and Google have a longstanding partnership, and this latest expansion will help us continue to grow the compute we need to define the frontier of A.I.,” said Krishna Rao, Anthropic’s chief financial officer, in a statement.
Despite its closer ties with Google, Anthropic emphasized that it remains committed to its “primary training partner,” Amazon, which has invested $8 billion in exchange for providing compute through its chips and A.I. cluster Project Rainier. The company also continues to rely on Nvidia’s GPUs as part of what it calls a “multi-platform approach.” Anthropic said it will keep investing in additional compute capacity as demand grows.
Anthropic’s mutually beneficial partnerships with Google and Amazon reflect a broader industry trend: a growing web of interconnected A.I. partnerships between model developers and compute providers, each investing in and purchasing one another’s technology. OpenAI has been at the forefront of this shift, announcing a flurry of major deals in recent months, including an agreement with AMD to access six gigawatts of computing power, a deal with Nvidia to access 10 gigawatts of compute, and a $300 billion, five-year partnership with Oracle.
The growing prevalence of such circular arrangements has raised some eyebrows in Silicon Valley, recalling the speculative interdependencies of the dot-com bubble and its eventual crash. But unlike that era, today’s A.I. spending is bolstered by stronger capitalization and clearer monetization potential, said Stephanie Aliaga, global market strategist for JPMorgan Chase, in a blog post earlier this month.
Still, Aliaga cautioned that the concern isn’t misplaced. “The scale of spending is enormous, the pace unprecedented, and some assumptions around ROI, like the useful lives of assets, remain open questions,” she wrote. “History reminds us that enthusiasm can run ahead of reality.”