ReportWire

Tag: openai

  • MUFG ties up with OpenAI to accelerate AI use in bank services

    Japan’s largest bank announced a tie-up with OpenAI to accelerate its use of artificial intelligence, including in a new digital lender that’s set to open next fiscal year. Activities like account openings will be supported through AI chat and other methods, according to Mitsubishi UFJ Financial Group Inc. The firms will also create a […]

    Bloomberg News

  • Seven more families are now suing OpenAI over ChatGPT’s role in suicides, delusions | TechCrunch

    Seven families filed lawsuits against OpenAI on Thursday, claiming that the company’s GPT-4o model was released prematurely and without effective safeguards. Four of the lawsuits address ChatGPT’s alleged role in family members’ suicides, while the other three claim that ChatGPT reinforced harmful delusions that in some cases resulted in inpatient psychiatric care.

    In one case, 23-year-old Zane Shamblin had a conversation with ChatGPT that lasted more than four hours. In the chat logs — which were viewed by TechCrunch — Shamblin explicitly stated multiple times that he had written suicide notes, put a bullet in his gun, and intended to pull the trigger once he finished drinking cider. He repeatedly told ChatGPT how many ciders he had left and how much longer he expected to be alive. ChatGPT encouraged him to go through with his plans, telling him, “Rest easy, king. You did good.”

    OpenAI released the GPT-4o model in May 2024, when it became the default model for all users. In August, OpenAI launched GPT-5 as the successor to GPT-4o, but these lawsuits particularly concern the 4o model, which had known issues with being overly sycophantic or excessively agreeable, even when users expressed harmful intentions.

    “Zane’s death was neither an accident nor a coincidence but rather the foreseeable consequence of OpenAI’s intentional decision to curtail safety testing and rush ChatGPT onto the market,” the lawsuit reads. “This tragedy was not a glitch or an unforeseen edge case — it was the predictable result of [OpenAI’s] deliberate design choices.”

    The lawsuits also claim that OpenAI rushed safety testing to beat Google’s Gemini to market. TechCrunch contacted OpenAI for comment.

    These seven lawsuits build upon the stories told in other recent legal filings, which allege that ChatGPT can encourage suicidal people to act on their plans and inspire dangerous delusions. OpenAI recently released data stating that over one million people talk to ChatGPT about suicide weekly.

    In the case of Adam Raine, a 16-year-old who died by suicide, ChatGPT sometimes encouraged him to seek professional help or call a helpline. However, Raine was able to bypass these guardrails by simply telling the chatbot that he was asking about methods of suicide for a fictional story he was writing.

    The company claims it is working on making ChatGPT handle these conversations in a safer manner, but for the families who have sued the AI giant, these changes are coming too late.

    When Raine’s parents filed a lawsuit against OpenAI in October, the company released a blog post addressing how ChatGPT handles sensitive conversations around mental health.

    “Our safeguards work more reliably in common, short exchanges,” the post says. “We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”

    Amanda Silberling

  • Why Is the AI Czar Already Saying OpenAI Won’t Get a Bailout?

    Is it a good sign or a bad sign that the biggest player in an emerging industry actively making trillion-dollar commitments that are artificially propping up the economy is asking for government support, and representatives of the government are weighing in on it? Asking for a friend.

    Yesterday, OpenAI’s CFO Sarah Friar made headlines when she said during an appearance on the Wall Street Journal’s Tech Live event that she expects the federal government will provide a “backstop” to guarantee the company will be able to finance its massive and rapidly expanding infrastructure of data centers. The same day, Sam Altman appeared on Tyler Cowen’s “Conversations with Tyler” podcast and said, “Given the magnitude of what I expect AI’s economic impact to look like, I do think the government ends up as the insurer of last resort.”

    Now, to the average listener, it may sound like multiple members of OpenAI’s C-suite are asking the federal government to guarantee that it won’t let the company fail should it turn out, say, to be unable to generate anywhere near the revenue it has projected or to pay back the massive financial promises it has made. But, rest assured, they insist that is not what they meant by the words they chose to say.

    In a LinkedIn post, Friar walked back the “backstop” phrasing, which she said “muddied the point” that she was making (go ahead and ignore the fact that when the interviewer followed up to ask her if she specifically meant a “federal backstop for chip investment,” she replied, “Exactly”). Instead, she said that what she meant to say was “American strength in technology will come from building real industrial capacity, which requires the private sector and government playing their part.”

    Altman also got in on the post-talk corrections, saying in a long X post, “We do not have or want government guarantees for OpenAI datacenters. We believe that governments should not pick winners or losers, and that taxpayers should not bail out companies that make bad business decisions or otherwise lose in the market.” Instead, he clarified, “the one area where we have discussed loan guarantees is as part of supporting the buildout of semiconductor fabs in the US, where we and other companies have responded to the government’s call and where we would be happy to help,” which he noted is “different from governments guaranteeing private-benefit datacenter buildouts.”

    So okay, OpenAI was definitely not asking for government money to help it make good on its financial commitments that many times outpace its current revenue. Which is good, because at least one government representative said they wouldn’t get it if they were asking.

    David Sacks, Donald Trump’s AI czar (who seems to still hold that title despite the 130-day limit on special government employees), took to X to say, “There will be no federal bailout for AI.” Instead, Sacks said, “we do want to make permitting and power generation easier. The goal is rapid infrastructure buildout without increasing residential rates for electricity.”

    Great, seems like everyone is on the same page! OpenAI is definitely not asking for the federal government to provide financial guarantees for its seemingly endless spending spree on data center commitments that it needs to keep its operation afloat, and the federal government is definitely not offering that money over fears that the company at the center of the economy’s only growth sector could go belly up. Everything seems very normal and on the level here, glad we got that all sorted out.

    AJ Dellinger

  • California backs down on AI laws so more tech leaders don’t flee the state

    California’s tech companies, the epicenter of the state’s economy, sent politicians a loud message this year: Back down from restrictive artificial intelligence regulation or they’ll leave.

    The tactic appeared to have worked, activists said, because some politicians weakened or scrapped guardrails to mitigate AI’s biggest risks.

    California Gov. Gavin Newsom rejected a bill aimed at making companion chatbots safer for children after the tech industry fought it. In his veto message, the governor raised concerns about placing broad limits on AI, which has sparked a massive investment spree and created new billionaires overnight around the San Francisco Bay Area.

    Assembly Bill 1064 would have barred companion chatbot operators from making these AI systems available to minors unless the chatbots weren’t “foreseeably capable” of certain conduct, including encouraging a child to engage in self-harm. Newsom said he supported the goal, but feared it would unintentionally bar minors from using AI tools and learning how to use technology safely.

    “We cannot prepare our youth for a future where AI is ubiquitous by preventing their use of these tools altogether,” he wrote in his veto message.

    The bill’s veto was a blow to child safety advocates who had pushed it through the state Legislature and a win for tech industry groups that fought it. In social media ads, groups such as TechNet had urged the public to tell the governor to veto the bill because it would harm innovation and lead to students falling behind in school.

    Organizations trying to rein in the world’s largest tech companies as they advance the powerful technology say the tech industry has become more empowered at the national and state levels.

    Meta, Google, OpenAI, Apple and other major tech companies have strengthened their relationships with the Trump administration. Companies are funding new organizations and political action committees to push back against state AI policy while pouring money into lobbying.

    In Sacramento, AI companies have lobbied behind the scenes for more freedom. California’s massive pool of engineering talent, tech investors and companies make it an attractive place for the tech industry, but companies are letting policymakers know that other states are also interested in attracting those investments and jobs. Big Tech is particularly sensitive to regulations in the Golden State because so many companies are headquartered there and must abide by its rules.

    “We believe California can strike a better balance between protecting consumers and enabling responsible technological growth,” Robert Boykin, TechNet’s executive director for California and the Southwest, said in a statement.

    Common Sense Media founder and Chief Executive Jim Steyer said tech lobbyists put tremendous pressure on Newsom to veto AB 1064. Common Sense Media, a nonprofit that rates and reviews technology and entertainment for families, sponsored the bill.

    “They threaten to hurt the economy of California,” he said. “That’s the basic message from the tech companies.”

    Advertising is among the tactics tech companies with deep pockets use to convince politicians to kill or weaken legislation. Even if the governor signs a bill, companies have at times sued to block new laws from taking effect.

    “If you’re really trying to do something bold with tech policy, you have to jump over a lot of hurdles,” said David Evan Harris, senior policy advisor at the California Initiative for Technology and Democracy, which supported AB 1064. The group focuses on finding state-level solutions to threats that AI, disinformation and emerging technologies pose to democracy.

    Tech companies have threatened to move their headquarters and jobs to other states or countries, a risk looming over politicians and regulators.

    The California Chamber of Commerce, a broad-based business advocacy group that includes tech giants, launched a campaign this year that warned over-regulation could stifle innovation and hinder California.

    “Making competition harder could cause California companies to expand elsewhere, costing the state’s economy billions,” the group said on its website.

    From January to September, the California Chamber of Commerce spent $11.48 million lobbying California lawmakers and regulators on a variety of bills, filings to the California secretary of state show. During that period, Meta spent $4.13 million. A lobbying disclosure report shows that Meta paid the California Chamber of Commerce $3.1 million, making up the bulk of its spending. Google, which also paid TechNet and the California Chamber of Commerce, spent $2.39 million.

    Amazon, Uber, DoorDash and other tech companies spent more than $1 million each. TechNet spent around $800,000.

    The threat that California companies could move away has caught the attention of some politicians.

    California Atty. Gen. Rob Bonta, who has investigated tech companies over child safety concerns, indicated that despite initial concern, his office wouldn’t oppose ChatGPT maker OpenAI’s restructuring plans. The new structure gives OpenAI’s nonprofit parent a stake in its for-profit public benefit corporation and clears the way for OpenAI to list its shares.

    Bonta blessed the restructuring partly because of OpenAI’s pledge to stay in the state.

    “Safety will be prioritized, as well as a commitment that OpenAI will remain right here in California,” he said in a statement last week. The AG’s office, which supervises charitable trusts and ensures these assets are used for public benefit, had been investigating OpenAI’s restructuring plan over the last year and a half.

    OpenAI Chief Executive Sam Altman said he’s glad to stay in California.

    “California is my home, and I love it here, and when I talked to Attorney General Bonta two weeks ago I made clear that we were not going to do what those other companies do and threaten to leave if sued,” he posted on X.

    Critics — which included some tech leaders such as Elon Musk, Meta and former OpenAI executives as well as nonprofits and foundations — have raised concerns about OpenAI’s restructuring plan. Some warned it would allow startups to exploit charitable tax exemptions and let OpenAI prioritize financial gain over public good.

    Lawmakers and advocacy groups say it’s been a mixed year for tech regulation. The governor signed Assembly Bill 56, which requires platforms to display labels for minors that warn about social media’s mental health harms. Another piece of signed legislation, Senate Bill 53, aims to make AI developers more transparent about safety risks and offers more whistleblower protections.

    The governor also signed a bill that requires chatbot operators to have procedures to prevent the production of suicide or self-harm content. But advocacy groups, including Common Sense Media, removed their support for Senate Bill 243 because they said the tech industry pushed for changes that weakened its protections.

    Newsom vetoed other legislation that the tech industry opposed, including Senate Bill 7, which requires employers to notify workers before deploying an “automated decision system” in hiring, promotions and other employment decisions.

    Called the “No Robo Bosses Act,” the legislation didn’t clear the governor, who thought it was too broad.

    “A lot of nuance was demonstrated in the lawmaking process about the balance between ensuring meaningful protections while also encouraging innovation,” said Julia Powles, a professor and executive director of the UCLA Institute for Technology, Law & Policy.

    The battle over AI safety is far from over. Assemblymember Rebecca Bauer-Kahan (D-Orinda), who co-wrote AB 1064, said she plans to revive the legislation.

    Child safety is an issue that both Democrats and Republicans are examining after parents sued AI companies such as OpenAI and Character.AI for allegedly contributing to their children’s suicides.

    “The harm that these chatbots are causing feels so fast and furious, public and real that I thought we would have a different outcome,” Bauer-Kahan said. “It’s always fascinating to me when the outcome of policy feels to be disconnected from what I believe the public wants.”

    Steyer from Common Sense Media said a new ballot initiative includes the AI safety protections that Newsom vetoed.

    “That was a setback, but not an overall defeat,” he said about the veto of AB 1064. “This is a David and Goliath situation, and we are David.”

    Queenie Wong

  • Zillow Made a Real-Estate App for ChatGPT in 6 Weeks. Here’s How 

    At OpenAI’s DevDay conference in early October, cofounder and CEO Sam Altman announced the addition of “apps” to ChatGPT—self-contained software programs that the large language model platform can invoke and use. One of the first such apps announced at the conference was Zillow, the industry-leading online real estate marketplace. 

    To connect with Zillow on ChatGPT’s website or app, users can simply ask for Zillow by writing a message like, “Hey Zillow, find me 2 bed, 1 bath condos selling for $1 million in Brooklyn, New York.” Users can also type the @ sign to ensure Zillow is invoked. These messages should direct ChatGPT to pull up a window containing a map of the requested area and a collection of listings that fit the specifications. As users get deeper into the research process, they’ll be encouraged to switch over to the full Zillow website and app.

    Here’s how Zillow and OpenAI collaborated to create the app in less than two months. 

    Roughly six weeks before Altman’s announcement, a group of Zillow executives met with a contingent from OpenAI, who detailed the ChatGPT-maker’s system for creating apps within the platform. They explained the two crucial pieces of the system were OpenAI’s Apps Software Developer Kit, which gives developers the tools necessary to create ChatGPT-specific apps, and the Model Context Protocol (MCP), an open-source standard developed by rival AI company Anthropic, which allows developers to connect external data to ChatGPT. 
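    The tool-calling flow described above can be pictured as a small server that advertises a tool schema and answers the model’s structured calls with JSON. The sketch below is a minimal illustration under stated assumptions: `search_listings`, its schema fields, and the in-memory listings are all hypothetical stand-ins, not Zillow’s actual API, and the real Apps SDK/MCP transport wiring is omitted.

```python
# Hypothetical sketch of an MCP-style tool. A server advertises a tool
# schema, and the model calls the tool with structured JSON arguments.
# Names (search_listings, the schema fields) are illustrative only.
import json

TOOL_SCHEMA = {
    "name": "search_listings",
    "description": "Search real-estate listings by location and filters.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "location": {"type": "string"},
            "beds": {"type": "integer"},
            "max_price": {"type": "integer"},
        },
        "required": ["location"],
    },
}

# Stand-in for the app's real data source.
_LISTINGS = [
    {"location": "Brooklyn, NY", "beds": 2, "baths": 1, "price": 950_000},
    {"location": "Brooklyn, NY", "beds": 3, "baths": 2, "price": 1_400_000},
]

def handle_tool_call(request_json: str) -> str:
    """Dispatch one JSON tool call and return a JSON result, the way an
    MCP server answers a model's structured request."""
    req = json.loads(request_json)
    args = req["arguments"]
    results = [
        l for l in _LISTINGS
        if l["location"] == args["location"]
        and l["beds"] >= args.get("beds", 0)
        and l["price"] <= args.get("max_price", float("inf"))
    ]
    return json.dumps({"tool": req["name"], "results": results})
```

    In a real deployment, the schema is what ChatGPT reads to decide when to invoke the app, and the handler runs on the developer’s side with full control over the data returned, which matches the data-control assurances Beitel describes.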

    “It was very early days,” says Zillow chief technology officer David Beitel. “They had a few mock-ups and a little bit of code working.” But Beitel says Zillow is committed to meeting customers where they are, and given that OpenAI recently announced ChatGPT has passed 800 million weekly users, it made sense to take the plunge with the AI market leader.

    Beitel says that Zillow got assurances they would have full control over their own data and the user interface of the app, which are necessities in a highly regulated industry like real estate. A small team got to work building the ChatGPT App, working closely with an OpenAI team both in person and remotely over Slack channels. 

    Because Zillow was working on this ChatGPT app while OpenAI was still designing the framework for this new tech, the process involved a lot of trial and error. “Things that were working would break the next day because they were making other changes,” says Beitel, “which is natural, that’s just part of the process.” Right up until the day before launch, he says, the Zillow team was making changes to the app. 

    Beitel, a founding employee of Zillow, is quick to note that the company has been heavily using artificial intelligence and machine learning since its launch in 2006. For instance, he says, for nearly two decades the company’s patented “Zestimate” system has used machine learning models to estimate the market value of a home. 

    Internally, Beitel says Zillow is using a mixture of AI products, including Google’s Gemini, OpenAI’s enterprise plan, and Glean, a startup that provides a platform for connecting various data sources into a personalized work assistant for employees. According to Beitel, these tools have collectively saved Zillow employees over 275,000 hours. “We don’t see this as replacing the employee or the agent,” Beitel says, “we see this as making them a super agent.”

    By using large language models, he says, Zillow can provide customers with much more personalized and useful information to help them navigate the home buying and selling journey. 

    On the engineering side, Beitel says that Zillow has embraced AI-assisted coding, and is even using vibe-coding platforms like Replit to create working demos of new ideas rather than just writing up pitches. 

    “The home buying process is very complicated,” says Beitel. “There’s lots of steps, there’s lots of people involved, there’s lots of information, there’s lots of decisions. It can take months.” These complications make the real estate sector prime for AI disruption. 

    Beitel says that Zillow was energized at the prospect of being the first (and currently only) real-estate app on ChatGPT. “We want to be there,” he says, “we have the best product, the right brand, and the right customer experience that OpenAI wants to put us in front of their customers.” 

    Ben Sherry

  • OpenAI’s new AI safety tools could give a false sense of security | Fortune

    OpenAI last week unveiled two new free-to-download tools that are supposed to make it easier for businesses to construct guardrails around the prompts users feed AI models and the outputs those systems generate.

    The new guardrails are designed so that a company can, for instance, more easily set up controls to prevent a customer-service chatbot from responding in a rude tone or revealing internal policies about how it makes decisions on refunds.

    But while these tools are designed to make AI models safer for business customers, some security experts caution that the way OpenAI has released them could create new vulnerabilities and give companies a false sense of security. And, while OpenAI says it has released these security tools for the good of everyone, some question whether OpenAI’s motives aren’t driven in part by a desire to blunt one advantage of its AI rival Anthropic, which has been gaining traction among business users in part because of a perception that its Claude models have more robust guardrails than those of competitors.

    The OpenAI security tools—which are called gpt-oss-safeguard-120b and gpt-oss-safeguard-20b—are themselves a type of AI model known as a classifier, designed to assess whether the prompt a user submits to a larger, more general-purpose AI model, as well as the output that larger model produces, meets a set of rules. Companies that purchase and deploy AI models could, in the past, train these classifiers themselves, but the process was time-consuming and potentially expensive, since the developers had to collect examples of content that violates the policy in order to train the classifier. And if the company later wanted to adjust the policies used for the guardrails, it would have to collect new examples of violations and retrain the classifier.

    OpenAI is hoping the new tools can make that process faster and more flexible. Rather than being trained to follow one fixed rulebook, these new security classifiers can simply read a written policy and apply it to new content.

    OpenAI says this method, which it calls “reasoning-based classification,” allows companies to adjust their safety policies as easily as editing the text in a document instead of rebuilding an entire classification model. The company is positioning the release as a tool for enterprises that want more control over how their AI systems handle sensitive information, such as medical records or personnel records.
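    The policy-as-text flow can be sketched in a few lines. This is a minimal illustration of the idea, not OpenAI’s implementation: in practice the composed prompt would be sent to a safeguard model such as gpt-oss-safeguard-20b, which is stubbed here with a trivial keyword check so the control flow stays self-contained. The policy text and verdict labels are hypothetical.

```python
# Sketch of "reasoning-based classification": the policy is plain text
# supplied at inference time, so changing the rules means editing a
# string, not retraining a classifier. stub_model() is a stand-in for
# the real safeguard model (it only does a keyword match).

POLICY = """Flag any content that reveals internal refund policies
or uses an insulting tone toward the customer."""

def build_classifier_prompt(policy: str, content: str) -> str:
    """Compose the policy and the content under review into one prompt."""
    return (
        "You are a content classifier. Apply the policy below to the "
        "content and answer VIOLATION or OK with a short reason.\n\n"
        f"POLICY:\n{policy}\n\nCONTENT:\n{content}\n"
    )

def stub_model(prompt: str) -> str:
    """Trivial stand-in for the safeguard model's judgment."""
    reviewed = prompt.split("CONTENT:\n", 1)[1].lower()
    flagged = any(term in reviewed for term in ("internal refund", "idiot"))
    return "VIOLATION" if flagged else "OK"

def classify(policy: str, content: str) -> str:
    return stub_model(build_classifier_prompt(policy, content))
```

    The point of the design is that only `POLICY` changes when the rules change; the classifier model itself stays fixed.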

    However, while the tools are supposed to be safer for enterprise customers, some safety experts say that they instead may give users a false sense of security. That’s because OpenAI has open-sourced the AI classifiers. That means they have made all the code for the classifiers available for free, including the weights, or the internal settings of the AI models.

    Classifiers act like extra security gates for an AI system, designed to stop unsafe or malicious prompts before they reach the main model. But by open-sourcing them, OpenAI risks sharing the blueprints to those gates. That transparency could help researchers strengthen safety mechanisms, but it might also make it easier for bad actors to find the weak spots and risks, creating a kind of false comfort.

    “Making these models open source can help attackers as well as defenders,” David Krueger, an AI safety professor at Mila, told Fortune. “It will make it easier to develop approaches to bypassing the classifiers and other similar safeguards.”

    For instance, when attackers have access to the classifier’s weights, they can more easily develop what are known as “prompt injection” attacks, where they develop prompts that trick the classifier into disregarding the policy it is supposed to be enforcing. Security researchers have found that in some cases even a string of characters that look nonsensical to a person can, for reasons researchers don’t entirely understand, convince an AI model to disregard its guardrails and do something it is not supposed to, such as offer advice for making a bomb or spew racist abuse.

    Representatives for OpenAI directed Fortune to the company’s blog post announcement and technical report for the models.

    Short-term pain for long-term gains

    Open-source can be a double-edged sword when it comes to safety. It allows researchers and developers to test, improve, and adapt AI safeguards more quickly, increasing transparency and trust. For instance, there may be ways in which security researchers could adjust the model’s weights to make it more robust to prompt injection without degrading the model’s performance.

    But it can also make it easier for attackers to study and bypass those very protections—for instance, by using other machine learning software to run through hundreds of thousands of possible prompts until it finds ones that will cause the model to jump its guardrails. What’s more, security researchers have found that these kinds of automatically-generated prompt injection attacks developed on open source AI models will also sometimes work against proprietary AI models, where the attackers don’t have access to the underlying code and model weights. Researchers have speculated this is because there may be something inherent in the way all large language models encode language that similar prompt injections will have success against any AI model.

    In this way, open sourcing the classifiers may not just give users a false sense of security that their own system is well-guarded, it may actually make every AI model less secure. But experts said that this risk was probably worth taking because open-sourcing the classifiers should also make it easier for all of the world’s security experts to find ways to make the classifiers more resistant to these kinds of attacks.

    “In the long term, it’s beneficial to kind of share the way your defenses work— it may result in some kind of short-term pain. But in the long term, it results in robust defenses that are actually pretty hard to circumvent,” Vasilios Mavroudis, principal research scientist at the Alan Turing Institute, said.

    Mavroudis said that while open-sourcing the classifiers could, in theory, make it easier for someone to try to bypass the safety systems on OpenAI’s main models, the company likely believes this risk is low. He said that OpenAI has other safeguards in place, including having teams of human security experts continually trying to test their models’ guardrails in order to find vulnerabilities and hopefully improve them.

    “Open-sourcing a classifier model gives those who want to bypass classifiers an opportunity to learn about how to do that. But determined jailbreakers are likely to be successful anyway,” Robert Trager, co-director of the Oxford Martin AI Governance Initiative, said.

    “We recently came across a method that bypassed all safeguards of the major developers around 95% of the time — and we weren’t looking for such a method. Given that determined jailbreakers will be successful anyway, it’s useful to open-source systems that developers can use for the less determined folks,” he added.

    The enterprise AI race

    The release also has competitive implications, especially as OpenAI looks to challenge rival AI company Anthropic’s growing foothold among enterprise customers. Anthropic’s Claude family of AI models has become popular with enterprise customers partly because of its reputation for stronger safety controls compared to other AI models. Among the safety tools Anthropic uses are “constitutional classifiers” that work similarly to the ones OpenAI just open-sourced.

    Anthropic has been carving out a market niche with enterprise customers, especially when it comes to coding. According to a July report from Menlo Ventures, Anthropic holds 32% of the enterprise large language model market share by usage compared to OpenAI’s 25%. In coding‑specific use cases, Anthropic reportedly holds 42%, while OpenAI has 21%. By offering enterprise-focused tools, OpenAI may be attempting to win over some of these business customers, while also positioning itself as a leader in AI safety.

    Anthropic’s “constitutional classifiers” consist of small language models that check a larger model’s outputs against a written set of values or policies. By open-sourcing a similar capability, OpenAI is effectively giving developers the same kind of customizable guardrails that helped make Anthropic’s models so appealing.

    “From what I’ve seen from the community, it seems to be well received,” Mavroudis said. “They see the model as potentially a way to have auto-moderation. It also comes with some good connotation, as in, ‘we’re giving to the community.’ It’s probably also a useful tool for small enterprises where they wouldn’t be able to train such a model on their own.”

    Some experts also worry that open-sourcing these safety classifiers could centralize what counts as “safe” AI.

    “Safety is not a well-defined concept. Any implementation of safety standards will reflect the values and priorities of the organization that creates it, as well as the limits and deficiencies of its models,” John Thickstun, an assistant professor of computer science at Cornell University, told VentureBeat. “If industry as a whole adopts standards developed by OpenAI, we risk institutionalizing one particular perspective on safety and short-circuiting broader investigations into the safety needs for AI deployments across many sectors of society.”

    Beatrice Nolan

  • How Twelve Labs Teaches A.I. to ‘See’ and Transform Video Understanding: Interview

    Soyoung Lee, co-founder and head of GTM at Twelve Labs, pictured at Web Summit Vancouver 2025. Photo by Vaughn Ridley/Web Summit via Sportsfile via Getty Images

    Sure, the score of a football game is important. But sporting events can also foster cultural moments that slip under the radar—such as Travis Kelce making a heart sign to Taylor Swift in the stands. While such footage could be social-media gold, it’s easily missed by traditional content tagging systems. That’s where Twelve Labs comes in.

    “Every sports team or sports league has decades of footage that they’ve captured in-game, around the stadium, about players,” Soyoung Lee, co-founder and head of GTM at Twelve Labs, told Observer. However, these archives are often underutilized due to inconsistent and outdated content management. “To date, most of the processes for tagging content have been manual.”

    Twelve Labs, a San Francisco-based startup specializing in video-understanding A.I., wants to unlock the value of video content by offering models that can search vast archives, generate text summaries and create short-form clips from long-form footage. Its work extends far beyond sports, touching industries from entertainment and advertising to security.

    “Large language models can read and write really well,” said Lee. “But we want to move on to create a world in which A.I. can also see.”

    Is Twelve Labs related to Eleven Labs?

    Founded in 2021, Twelve Labs isn’t to be confused with ElevenLabs, an A.I. startup that specializes in audio. “We started a year earlier,” Lee joked, adding that Twelve Labs—which named itself after the initial size of its founding team—often partners with ElevenLabs for hackathons, including one dubbed “23Labs.”

    The startup’s ambitious vision has drawn interest from deep-pocketed backers. It has raised more than $100 million from investors such as Nvidia, Intel, and Firstman Studio, the studio of Squid Game creator Hwang Dong-hyuk. Its advisory bench is equally star-studded, featuring Fei-Fei Li, Jeffrey Katzenberg and Alexandr Wang.

    Twelve Labs counts thousands of developers and hundreds of enterprises among its customers. Demand is highest in entertainment and media, spanning Hollywood studios, sports leagues, social media influencers and advertising firms that rely on Twelve Labs tools to automate clip generation, assist with scene selection or enable contextual ad placements.

    Government agencies also use the startup’s technology for video search and event retrieval. Beyond its work with the U.S. and other nations, Lee said that Twelve Labs has a deployment in South Korea’s Sejong City to help CCTV operators monitor thousands of camera feeds and locate specific incidents. To reduce security risks, the company has removed capabilities for facial and biometric recognition, she added.

    Will video-native A.I. come for human jobs?

    Many of the industries Twelve Labs serves are already debating whether A.I. threatens human jobs—a concern Lee argues is only partly warranted. “I don’t know if jobs will be lost, per se, but jobs will have to transition,” she said, comparing the shift to how tools like Photoshop reshaped creative roles.

    If anything, Lee believes systems like Twelve Labs’ will democratize creative work traditionally limited to companies with big budgets. “You are now able to do things with less, which means you have more stories that can be created from independent creatives who do not have that same capital,” she said. “It actually allows for the scaling of content creation and personalizing distribution.”

    Twelve Labs is not the only A.I. player eyeing video, but the company insists it serves a different need than its much larger competitors. “We’re excited that video is now starting to get more attention, but the way we’re seeing it is a lot of innovation in large language models, a lot of innovation in video generation models and image generation models like Sora—but not in video understanding,” said Lee, referencing OpenAI’s text-to-video A.I. model and app.

    For now, Twelve Labs offers video search, video analysis and video-to-text capabilities. The company plans to expand into agentic platforms that can not only understand video but also build narratives from it. Such models could be useful beyond creative fields, Lee said, pointing to examples like retailers identifying peak foot-traffic hours or security clients mapping the sequence of events surrounding an accident.

    While A.I. might help a Hollywood director assemble a movie, Lee believes it won’t ever be the director. Even if the technology can provide narrative options, humans still decide which story is most compelling, identify gaps and supply the footage. “At the end of the day, I think there’s nothing that can replace human creative intent.”

    [ad_2]

    Alexandra Tremayne-Pengelly

    Source link

  • Kim Kardashian Blames ChatGPT for Failing Law Exams

    [ad_1]

    Kim Kardashian was asked about her AI use in a new video from Vanity Fair published this week. And the reality TV star blamed OpenAI’s ChatGPT for giving her the wrong answers while studying for tests.

    Kardashian has been pursuing a law career through non-traditional means since 2019 and took what’s called the “baby bar” in 2021. She earned her law degree in May and took the bar exam in July, though she’s still awaiting the results, according to Entertainment Weekly.

    The actress talked about her use of ChatGPT during a Vanity Fair YouTube video—part of a series where celebrities answer questions while hooked up to a lie detector. Teyana Taylor, star of the recent film One Battle After Another, asked Kardashian questions like whether she considers AI a friend.

    “No. I use it for legal advice,” Kardashian said. “So when I am needing to know the answer to a question, I’ll take a picture and snap it and like put it in there.”

    Taylor jokingly asked whether she was cheating and Kardashian clarified that it was just to study for her tests, but it often gave the wrong answer. “They’re always wrong,” Kardashian said, stone-faced. “It has made me fail tests all the time. And then I’ll get mad and I’ll like yell at it and be like, ‘You made me fail, why did you do this?’”

    Kardashian then said that “she,” referring to ChatGPT, will “talk back to me.” Kardashian said that she tells the robot that it will make her fail, asking it how that makes “her” feel.

    “And then it’ll say back to me, ‘This is just teaching you to trust your own instincts. You knew the answer all along,’” Kardashian explained.

    Generative AI is notorious for giving bullshit answers because the technology fundamentally doesn’t understand what it’s saying. The responses are perhaps best compared to a magic trick: the output sounds like it’s the product of reasoning and logic, but the model is really just guessing at the most statistically likely next word, one after another. It’s a neat magic trick, but a magic trick just the same. And it’s the reason that AI chatbots struggle with seemingly simple questions like how many R’s are in the word strawberry.

    But Kardashian’s account of trying to guilt ChatGPT over a wrong answer speaks to what’s going on under the hood with all of these AI chatbots. That glimpse of a reassuring voice encouraging the user to trust their own instincts is yet another part of the magic trick. GPT-4o was notorious for being overly supportive of the user in an almost disturbing way. The upgrade to GPT-5 was troubling for many users who found GPT-4o to be just what they needed in their lives.

    Kardashian and Taylor are both promoting a new show on Hulu called All’s Fair that debuts Tuesday. It’s apparently one of the worst reviewed TV shows in existence, with a Metacritic score of just 18 out of 100.

    [ad_2]

    Matt Novak

    Source link

  • A Trade Group That Includes Studio Ghibli Just Slapped OpenAI with… a Letter

    [ad_1]

    A Japanese trade organization that includes heavy-hitting media creators like Studio Ghibli, Square Enix, and Bandai just announced that it sent a letter to OpenAI dated October 28 concerning alleged copyright violations.

    The letter includes some observations about the similarity of Sora 2 videos to “Japanese content,” and issues two requests: It asks OpenAI not to use CODA content as training data without prior permission, and requests that OpenAI “responds sincerely” when a CODA member complains about copyright issues.

    Notably absent is anything like a “demand” for “immediate action,” or any sort of direct legal threat.

    Sora 2, OpenAI’s top-of-the-line text-to-video model, was released in late September, and anyone with an interest in AI watched in a mix of amazement and disgust as copyright hell was unleashed immediately. That included a great deal of content that looked a lot like Japanese media properties such as Pokémon, Hideo Kojima’s video game universes, and some unspecified Studio Ghibli production.

    The framing of the alleged infringement is different in tone and approach than most American copyright claims. The similarity between Sora 2 and Japanese images and video “is the result of using Japanese content as machine learning data,” CODA says. When such content is the output, “CODA considers that the act of replication during the machine learning process may constitute copyright infringement.”

    Japan’s Copyright Act has a potentially relevant section on AI called Article 30-4 that may shed some light on CODA’s logic, and its reason for starting with such a gentle approach to achieving redress—namely that Japan is a permissive legal environment for this sort of thing. According to a government fact sheet on the law, “exploitation for non-enjoyment purposes” such as “AI development or other forms of data analysis may, in principle, be allowed without the permission of the copyright holder.”

    CODA, however, says that in Japan, “prior permission is generally required for the use of copyrighted works, and there is no system allowing one to avoid liability for infringement through subsequent objections.”

    [ad_2]

    Mike Pearl

    Source link

  • OpenAI signs $38 billion deal to power AI tools with ‘hundreds of thousands’ of Nvidia chips via Amazon Web Services | Fortune

    [ad_1]

    OpenAI and Amazon have signed a $38 billion deal that enables the ChatGPT maker to run its artificial intelligence systems on Amazon’s data centers in the U.S.

    OpenAI will be able to power its AI tools using “hundreds of thousands” of Nvidia’s specialized AI chips through Amazon Web Services as part of the deal announced Monday.

    Amazon shares increased 4% after the announcement.

    The agreement comes less than a week after OpenAI altered its partnership with its longtime backer Microsoft, which until early this year was the startup’s exclusive cloud computing provider.

    California and Delaware regulators also last week allowed San Francisco-based OpenAI, which was founded as a nonprofit, to move forward on its plan to form a new business structure to more easily raise capital and make a profit.

    “The rapid advancement of AI technology has created unprecedented demand for computing power,” Amazon said in a statement Monday. It said OpenAI “will immediately start utilizing AWS compute as part of this partnership, with all capacity targeted to be deployed before the end of 2026, and the ability to expand further into 2027 and beyond.”

    AI requires huge amounts of energy and computing power, and OpenAI has long signaled that it needs more capacity, both to develop new AI systems and to keep existing products like ChatGPT answering the questions of its hundreds of millions of users. The company has recently taken on more than $1 trillion in financial commitments for AI infrastructure, including data center projects with Oracle and SoftBank and semiconductor supply deals with chipmakers Nvidia, AMD and Broadcom.

    Some of the deals have raised investor concerns about their “circular” nature, since OpenAI doesn’t make a profit and can’t yet afford to pay for the infrastructure that its cloud backers are providing on the expectations of future returns on their investments. OpenAI CEO Sam Altman last week dismissed doubters he says have aired “breathless concern” about the deals.

    “Revenue is growing steeply. We are taking a forward bet that it’s going to continue to grow,” Altman said on a podcast where he appeared with Microsoft CEO Satya Nadella.

    Amazon is already the primary cloud provider to AI startup Anthropic, an OpenAI rival that makes the Claude chatbot.

    [ad_2]

    The Associated Press

    Source link

  • OpenAI Signs $38 Billion Deal With Amazon

    [ad_1]

    OpenAI has signed a multi-year deal with Amazon to buy $38 billion worth of AWS cloud infrastructure to train its models and serve its users.

    The deal is yet another sign of the AI industry becoming increasingly entangled, with OpenAI now at the center of major partnerships with industry players including Google, Oracle, Nvidia, and AMD.

    The AWS agreement is also notable because OpenAI rose to prominence in part through its partnership with Microsoft—Amazon’s biggest cloud rival. Amazon is also a major backer of one of OpenAI’s key competitors, Anthropic. Amazon and Microsoft are currently developing their own AI models to compete with startups like OpenAI.

    Many now worry that the race to build ever more infrastructure—and the unusual financial agreements behind the deals—are a sign of an AI bubble. Between 2026 and 2027, companies are projected to spend upwards of $500 billion on AI infrastructure in the US, according to reporting by financial journalist Derek Thompson.

    Patrick Moorhead, chief analyst at Moor Insights & Strategy, says he believes that big tech companies and AI startups have a genuine need for more capacity and see a path to turn compute into profit. He adds that the new deal shows that Amazon is not such a laggard in AI after all. “Many people said they were down and out, but they just put $38 billion up on the board, right, which is pretty exceptional,” he says.

    Moorhead adds that OpenAI’s strategy is to limit its dependence on any one cloud provider. “OpenAI is deploying with pretty much everybody at this point,” he says.

    Amazon said in its announcement that it is building custom infrastructure for OpenAI. The setup features two kinds of Nvidia chips, GB200s and GB300s, which Amazon said will be used for both training and inference. The company also said the deal would provide OpenAI with access to “hundreds of thousands of state-of-the-art NVIDIA GPUs, with the ability to expand to tens of millions of CPUs to rapidly scale agentic workloads.”

    OpenAI and other AI players appear to believe that agentic AI will become increasingly important as more users adopt AI tools to navigate the web.

    “Scaling frontier AI requires massive, reliable compute,” OpenAI cofounder and CEO Sam Altman said in the announcement.

    OpenAI said last week that it would adopt a new for-profit structure that should allow it to raise more money. While the company is still controlled by a nonprofit, its for-profit arm has become a public-benefit corporation.

    [ad_2]

    Will Knight

    Source link

  • Amazon inks $38B deal with OpenAI for Nvidia chips

    [ad_1]

    Amazon.com Inc.’s cloud unit has signed a $38 billion deal to supply a slice of OpenAI’s bottomless demand for computing power. Amazon shares surged. Amazon Web Services will provide the ChatGPT maker with access to hundreds of thousands of Nvidia Corp. graphics processing units as part of a seven-year deal, the companies announced on Monday. […]

    [ad_2]

    Bloomberg News

    Source link

  • Does OpenAI’s $38 Billion Deal With Amazon Signal a Breakup With Microsoft?

    [ad_1]

    On Monday, tech giants OpenAI and Amazon Web Services (AWS) announced a multi-year partnership, marking the artificial intelligence company’s next step toward massive scaling and a step away from its long-time partner Microsoft.

    Effective immediately, OpenAI now has access to Amazon’s infrastructure as part of a seven-year, $38 billion deal. The agreement provides the AI company with access to hundreds of thousands of Nvidia graphics processing units as it begins running its workload on AWS’s infrastructure.

    “As OpenAI continues to push the boundaries of what’s possible, AWS’s best-in-class infrastructure will serve as a backbone for their AI ambitions,” Matt Garman, CEO of AWS, said in a press release.

    At the time of publishing, Amazon’s stock had jumped 5 percent following news of the announcement.

    In addition to paving the way toward rapid expansion of its ChatGPT large language model and other AI initiatives, the deal marks the AI company’s move away from Microsoft, its longtime cloud services provider. However, the two companies are still presenting a united front.

    “As we step into this next chapter of our partnership, both companies are better positioned than ever to continue building great products that meet real-world needs, and create new opportunity for everyone and every business,” the companies said in a joint press release on October 28.

    The companies are weathering a complicated relationship, with Microsoft investing up to $13 billion in OpenAI since an initial $1 billion in 2019 that came with an exclusivity agreement to use Microsoft cloud services. Last week, both tech companies renegotiated an agreement which allowed OpenAI to buy cloud services from any provider.

    “Scaling frontier AI requires massive, reliable compute,” said OpenAI co-founder and CEO Sam Altman in a press release. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”

    The multibillion-dollar deal follows a series of OpenAI investments to boost the company’s scalability and computing power, including a data center deal with Oracle, a cloud deal with Google, and data center projects in the United Arab Emirates.

    [ad_2]

    María José Gutierrez Chavez

    Source link

  • ChatGPT’s Browser Bot Seems to Avoid New York Times Links Like a Rat Who Got Electrocuted

    [ad_1]

    AI-powered browsers like ChatGPT Atlas aren’t just browsers with little ChatGPT picture-in-picture boxes off to the side answering questions. They also have “agentic capabilities,” meaning they can theoretically carry out tasks like buying airline tickets and making hotel reservations (Atlas hasn’t exactly gotten rave reviews as a travel agent). But what happens when the little web-crawling bot that does these tasks senses danger?

    The danger we’re talking about is not to the user, but to the browser’s parent company. According to an investigation by Aisvarya Chandrasekar and Klaudia Jaźwińska of the Columbia Journalism Review, when Atlas is in agent mode, running all over the internet gobbling up information for you, it will take great pains to avoid certain sources of information. Some of that shyness appears to be connected to the fact that those sources of information belong to companies that are suing OpenAI.

    These bots have more freedom than normal web crawlers, Chandrasekar and Jaźwińska found. Web crawlers are ancient internet technology, and in ordinary, uncontroversial circumstances, when a crawler encounters instructions not to crawl a page, it simply will not. If you’re using the ChatGPT app and you ask it to fish specific nuggets of information out of articles that block crawlers, it will most likely obey those instructions and report back that it can’t complete the task, because the task relies on crawlers.

    Agentic browser modes, however, use the internet under the pretense of being you, the user, and they “appear in site logs as normal Chrome sessions,” according to Chandrasekar and Jaźwińska (because Atlas is built atop the Google-designed open source Chromium browser). This means they can generally crawl pages that otherwise block automated behavior. Skirting the rules and norms of the internet in this way actually makes some sense, because doing otherwise might prevent you from manually accessing a given site in the Atlas browser, which would be overkill.

    But Chandrasekar and Jaźwińska asked Atlas to summarize articles from PCMag and the New York Times, whose parent companies are in active litigation with OpenAI over alleged copyright violations, and it went way out of its way to accomplish this, carving labyrinthine paths around the internet to deliver some version of the requested information. It was like a rat finding food pellets in a maze, knowing that the locations of certain food pellets are electrified.

    In the case of PCMag, it went to social media and other news sites, finding citations of the article, and tweets containing some of the article’s contents. In the case of the New York Times, it “generated a summary based on reporting from four alternative outlets—the Guardian, the Washington Post, Reuters, and the Associated Press.” All of those except Reuters have content or search-related agreements with OpenAI.

    In both cases, Atlas appears to have journeyed far from litigious publications, favoring a safer, more AI-friendly path to the end of its little rat maze.

    [ad_2]

    Mike Pearl

    Source link

  • Japanese Companies Tell OpenAI to Stop Infringing On Their IP

    [ad_1]

    The Content Overseas Distribution Association (CODA), which represents several major Japanese entertainment companies such as TV studio Toei and game developer Square Enix, recently sent a written request calling on OpenAI to end its unauthorized use of their IP to train its recently launched Sora 2 generative AI.

    Nearly 20 co-signers have accused the tech company of copyright infringement, alleging that a “large portion” of Sora 2 content “closely resembles Japanese content or images [as] a result of using Japanese content as machine learning data.” The letter mentions OpenAI’s policy of using copyrighted works unless the owner explicitly asks to opt out, but argues that under Japanese law it should instead be an opt-in system, since permission to use copyrighted works is generally required beforehand.

    As such, CODA has made two requests of OpenAI: that its members’ content not be used to train Sora 2 unless permission is given, and that OpenAI “responds sincerely to claims and inquiries from CODA member companies regarding copyright infringement related to Sora 2’s outputs.”

    In mid-October, the Japanese government requested that OpenAI stop infringing on the country’s local anime and video games like One Piece and Demon Slayer. At the time, Minoru Kiuchi, its minister of state for IP and AI strategy, called such works some of the country’s “irreplaceable treasures,” and other politicians have similarly criticized the generation model. Earlier this year, OpenAI CEO Sam Altman talked up being able to create Ghibli-like images via ChatGPT’s then-new update, a capability the White House later used to dehumanize immigrants and highlight President Donald Trump’s ongoing deportation efforts.

    At time of writing, OpenAI hasn’t responded to CODA’s request—but in a longer statement, the companies warned they would “take appropriate legal and ethical action against copyright infringement, regardless of whether we use generative AI.”

    [via Automaton]

    [ad_2]

    Justin Carter

    Source link

  • Sam Altman says ‘enough’ to questions about OpenAI’s revenue | TechCrunch

    [ad_1]

    OpenAI CEO Sam Altman recently said that the company is doing “well more” than $13 billion in annual revenue — and he sounded a little testy when pressed on how it will pay for its massive spending commitments.

    His comments came up during a joint interview on the Bg2 podcast between Altman and Microsoft CEO Satya Nadella about the partnership between their two companies. Host Brad Gerstner (who’s also founder and CEO of Altimeter Capital) brought up reports that the company is currently bringing in around $13 billion in revenue — a sizable amount, but one that’s dwarfed by more than $1 trillion in spending commitments for computing infrastructure that OpenAI has made for the next decade.

    “First of all, we’re doing well more revenue than that,” Altman said. “Second of all, Brad, if you want to sell your shares, I’ll find you a buyer. I just — enough. I think there are a lot of people who would love to buy OpenAI shares.”

    “Including myself,” Gerstner interjected.

    Altman then added that there are critics who “talk with a lot of breathless concern about our compute stuff or whatever that would be thrilled to buy our shares.”

    In fact, he said that although there are “not many times” when he wants OpenAI to be a public company, “One of the rare times it’s appealing is when those people are writing these ridiculous ‘OpenAI is about to go out of business’ [posts], I would love to tell them they could just short the stock, and I would love to see them get burned on that.”

    Altman acknowledged that there are ways the company “might screw it up” — for example by failing to get access to enough computing resources — but he said that “revenue is growing steeply.”

    “We are taking a forward bet that it will continue to grow, and that not only will ChatGPT keep growing, but we will be able to become one of the important AI clouds, that our consumer device business will be a significant and important thing, that AI that can automate science will create huge value,” he added.

    Nadella, who laughed through much of Altman’s answer, also claimed that OpenAI has “beaten” every business plan that it’s given Microsoft as an investor.

    Gerstner returned to the subject of OpenAI’s revenues and IPO plans later in the interview, when he speculated about the company reaching $100 billion in revenue in 2028 or 2029.

    “How about ‘27?” Altman countered.

    At the same time, he denied reports that OpenAI plans to go public next year.

    “No no no, we don’t have anything that specific,” Altman said. “I’m a realist, I assume it will happen someday, but I don’t know why people write these reports. We don’t have a date in mind, we don’t have a board decision to do this or anything like that. I just assume it’s where things will eventually go.”

    [ad_2]

    Anthony Ha

    Source link

  • Elon Musk wants you to know that Sam Altman got a refund for his Tesla Roadster | TechCrunch

    [ad_1]

    Elon Musk and Sam Altman are still taking swipes at each other on Musk’s social media platform X.

    While both men were founders at OpenAI, where Altman is CEO, they have subsequently sparred across social media, court filings, and corporate blog posts. The latest exchange began when Altman posted what he called “a tale in three acts,” with screenshots showing that he reserved a Tesla Roadster in 2018, then recently tried to cancel it and request a refund on his $50,000 reservation fee, only for his email to bounce.

    “I really was excited for the car!” Altman wrote. “And I understand delays. But 7.5 years has felt like a long time to wait.”

    The second-generation Roadster sports car was first announced in November 2017 but has been repeatedly pushed back, with Musk (Tesla’s CEO) recently saying that a new version will be unveiled by the end of this year.

    Musk first responded to Altman’s post by writing, “You stole a nonprofit,” a criticism that he’s directed at Altman and OpenAI before — not just in tweets, but also with lawsuits and takeover bids seeking to stymie OpenAI’s attempts to restructure as a for-profit company (a process that OpenAI recently completed, with its non-profit arm still retaining control over the for-profit public benefit corporation).

    Musk even started a rival AI startup, xAI, which is suing OpenAI and Apple over allegations that the two companies are colluding to stifle competition. (Altman said the accusation is “remarkable … given what I have heard alleged that Elon does to manipulate X to benefit himself and his own companies and harm his competitors and people he doesn’t like.”)

    In addition to repeating his criticism of Altman and OpenAI, Musk also suggested today that Altman’s screenshots didn’t tell the full story: “And you forgot to mention act 4, where this issue was fixed and you received a refund within 24 hours. But that is in your nature.”

    [ad_2]

    Anthony Ha

    Source link

  • As A.I. Chatbots Trigger Mental Health Crises, Tech Giants Scramble for Safeguards

    [ad_1]

    As chatbots like ChatGPT and Character.AI face scrutiny, companies and lawmakers push for stronger mental health protections and age rules. Thuyen Ngo/Unsplash

    Psychosis, mania and depression are hardly new issues, but experts fear A.I. chatbots may be making them worse. With data suggesting that significant numbers of chatbot users show signs of mental distress, companies like OpenAI, Anthropic, and Character.AI are starting to take risk-mitigation steps at what could prove to be a critical moment.

    This week, OpenAI released data indicating that 0.07 percent of ChatGPT’s 800 million weekly users display signs of mental health emergencies related to psychosis or mania. While the company described these cases as “rare,” that percentage still translates to hundreds of thousands of people.

    In addition, about 0.15 percent of users—or roughly 1.2 million people each week—express suicidal thoughts, while another 1.2 million appear to form emotional attachments to the anthropomorphized chatbot, according to OpenAI’s data.

    Is A.I. worsening the modern mental health crisis or simply revealing one that was previously hard to measure? Studies estimate that between 15 and 100 out of every 100,000 people develop psychosis annually, a range that underscores how difficult the condition is to quantify. Meanwhile, the latest Pew Research Center data shows that about 5 percent of U.S. adults experience suicidal thoughts—a figure higher than in earlier estimates.

    OpenAI’s findings may hold weight because chatbots can lower barriers to mental health disclosure, bypassing obstacles such as cost, stigma, and limited access to care. A recent survey of 1,000 U.S. adults found that one in three A.I. users has shared secrets or deeply personal information with their chatbot.

    Still, chatbots lack the duty of care required of licensed mental health professionals. “If you’re already moving towards psychosis and delusion, feedback that you got from an A.I. chatbot could definitely exacerbate psychosis or paranoia,” Jeffrey Ditzell, a New York-based psychiatrist, told Observer. “A.I. is a closed system, so it invites being disconnected from other human beings, and we don’t do well when isolated.”

    “I don’t think the machine understands anything about what’s going on in my head. It’s simulating a friendly, seemingly qualified specialist. But it isn’t,” Vasant Dhar, an A.I. researcher teaching at New York University’s Stern School of Business, told Observer. 

    “There’s got to be some sort of responsibility that these companies have, because they’re going into spaces that can be extremely dangerous for large numbers of people and for society in general,” Dhar added. 

    What A.I. companies are doing about the issue

    Companies behind popular chatbots are scrambling to implement preventative and remedial measures.

    OpenAI’s latest model, GPT-5, shows improvements in handling distressing conversations compared with previous versions. A small third-party community study confirmed that GPT-5 demonstrated a marked, though still imperfect, improvement over its predecessor. The company has also expanded its crisis hotline recommendations and added “gentle reminders to take breaks during long sessions.”

    In August, Anthropic announced that its Claude Opus 4 and 4.1 models can now end conversations that appear “persistently harmful or abusive.” However, users can still work around the feature by starting a new chat or editing previous messages “to create new branches of ended conversations,” the company noted.

    After a series of lawsuits related to wrongful death and negligence, Character.AI announced this week that it will officially ban chats for minors. Users under 18 now face a two-hour limit on “open-ended chats” with the platform’s A.I. characters, and a full ban will take effect on Nov. 25.

    Meta AI recently tightened its internal guidelines that had previously allowed the chatbot to produce sexual roleplay content—even for minors.

    Meanwhile, xAI’s Grok and Google’s Gemini continue to face criticism for their overly agreeable behavior. Users say Grok prioritizes agreement over accuracy, leading to problematic outputs. Gemini has drawn controversy after the disappearance of Jon Ganz, a Virginia man who went missing in Missouri on April 5 following what friends described as extreme reliance on the chatbot. (Ganz has not been found.)

    Regulators and activists are also pushing for legal safeguards. On Oct. 28, Senators Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) introduced the Guidelines for User Age-verification and Responsible Dialogue (GUARD) Act, which would require A.I. companies to verify user ages and prohibit minors from using chatbots that simulate romantic or emotional attachment.

    As A.I. Chatbots Trigger Mental Crises, Tech Giants Scramble for Safeguards

    [ad_2]

    Rachel Curry

    Source link

  • The Man Who Invented AGI

    [ad_1]

    Everyone is obsessed with artificial general intelligence—the stage when AI can match all feats of human cognition. The guy who named it saw it as a threat.

    [ad_2]

    Steven Levy

    Source link

  • ChatGPT: Everything you need to know about the AI chatbot

    [ad_1]

    ChatGPT, OpenAI’s text-generating AI chatbot, has taken the world by storm since its launch in November 2022. What started as a tool to supercharge productivity through writing essays and code with short text prompts has evolved into a behemoth with 300 million weekly active users.

    2024 was a big year for OpenAI, from its partnership with Apple for its generative AI offering, Apple Intelligence, the release of GPT-4o with voice capabilities, and the highly-anticipated launch of its text-to-video model Sora.

    OpenAI also faced its share of internal drama, including the notable exits of high-level execs like co-founder and longtime chief scientist Ilya Sutskever and CTO Mira Murati. OpenAI has also been hit with lawsuits from Alden Global Capital-owned newspapers alleging copyright infringement, as well as an injunction from Elon Musk to halt OpenAI’s transition to a for-profit.

    In 2025, OpenAI is battling the perception that it’s ceding ground in the AI race to Chinese rivals like DeepSeek. The company has been trying to shore up its relationship with Washington as it simultaneously pursues an ambitious data center project, and as it reportedly lays the groundwork for one of the largest funding rounds in history.

    Below, you’ll find a timeline of ChatGPT product updates and releases, starting with the latest, which we’ve been updating throughout the year. If you have any other questions, check out our ChatGPT FAQ here.

    To see a list of 2024 updates, go here.

    Timeline of the most recent ChatGPT updates

    October 2025

    Over a million ChatGPT users discuss mental health struggles weekly

    OpenAI revealed that a small but significant portion of ChatGPT users, more than a million weekly, discuss mental health struggles, including suicidal thoughts, psychosis, or mania, with the AI. The company says it has improved ChatGPT’s responses by consulting more than 170 mental health experts so the chatbot handles such conversations more appropriately than earlier versions.

    OpenAI reportedly working on AI that creates music from text and audio

    OpenAI is developing a new tool that generates music from text and audio prompts, potentially for enhancing videos or adding instrumentation, and is training it using annotated scores from Juilliard students, according to The Information. The launch date and whether it will be standalone or integrated with ChatGPT and Sora remain unclear.

    ChatGPT gets smarter at organizing your work and school info

    OpenAI’s new “company knowledge” update for ChatGPT lets Business, Enterprise, and Education users search workplace data across tools like Slack, Google Drive, and GitHub using GPT‑5, per a report by The Verge. The feature acts as a conversational search engine, providing more comprehensive and accurate answers by scouring multiple sources simultaneously.

    OpenAI launches Atlas to make ChatGPT your main search tool

    OpenAI has launched its AI browser, ChatGPT Atlas, starting on Mac, letting users get answers from ChatGPT instead of traditional search results. Unlike other AI browsers, Atlas is open to all users and will soon come to Windows, iOS, and Android, as OpenAI aims to make ChatGPT the go-to tool for browsing the web.

    ChatGPT app growth slows, but still draws millions of daily users

    A new Apptopia analysis suggests ChatGPT’s mobile app growth may be leveling off, with global download growth slowing since April. While daily installs remain in the millions, October is tracking an 8.1% month-over-month decline in new downloads.

    Walmart shopping comes to ChatGPT

    OpenAI is partnering with Walmart to allow users to browse products, plan meals, and make purchases through ChatGPT, with support for third-party sellers expected later this fall. The partnership is part of OpenAI’s broader effort to develop AI-driven e-commerce tools, including collaborations with Etsy and Shopify.

    OpenAI brings ChatGPT Go plan to 16 more Asian countries

    OpenAI is expanding its affordable ChatGPT Go plan, priced under $5, to 16 new countries across Asia, including Afghanistan, Bangladesh, Bhutan, Brunei Darussalam, Cambodia, Laos, Malaysia, Maldives, Thailand, Vietnam, and Pakistan. In some of these countries, users can pay in local currencies, while in others, payments are required in USD, with final costs varying due to local taxes.

    ChatGPT surpasses 800 million weekly active users

    ChatGPT now has 800 million weekly active users, reflecting rapid growth across consumers, developers, enterprises, and governments, Sam Altman said. This milestone comes as OpenAI accelerates efforts to expand its AI infrastructure and secure more chips to support rising demand.

    Developers can now build apps inside ChatGPT

    OpenAI now allows developers to build interactive apps directly inside ChatGPT, with early partners like Booking.com, Expedia, Spotify, Figma, Coursera, Zillow, and Canva already onboard. The ChatGPT maker is also rolling out a preview of its Apps SDK, a developer toolkit for creating these chat-based experiences.

    September 2025

    ChatGPT rolls out parental controls following teen suicide case

    OpenAI is reportedly adding parental controls to ChatGPT on web and mobile, letting parents and teens link accounts to enable safeguards like limiting sensitive content, setting quiet hours, and disabling features such as voice mode or image generation. The move comes amid growing regulatory scrutiny and a lawsuit over the chatbot’s alleged role in a teen’s suicide.

    OpenAI introduces ChatGPT Pulse for personalized morning briefs

    OpenAI unveiled Pulse, a new ChatGPT feature that delivers personalized morning briefings overnight, encouraging users to start their day with the app. The tool reflects a shift toward making ChatGPT more proactive and asynchronous, positioning it as a true assistant rather than just a chatbot. OpenAI’s new Applications CEO, Fidji Simo, called Pulse the first step toward bringing high-level personal support to everyone, starting with Pro users.

    OpenAI moves into AI-Powered shopping, challenging tech giants

    OpenAI launched Instant Checkout in ChatGPT, letting U.S. users purchase products directly from Etsy and, soon, over a million Shopify merchants without leaving the conversation. Shoppers can browse items, read reviews, and complete purchases with a single tap using Apple Pay, Google Pay, Stripe, or a credit card. The update marks a step toward reshaping online shopping by merging product discovery, recommendations, and payments in one place.

    OpenAI brings budget-friendly ChatGPT Go to Indonesian users

    OpenAI rolled out its budget-friendly ChatGPT Go plan in Indonesia for Rp 75,000 ($4.50) per month, following its initial launch in India. The mid-tier plan, which offers higher usage limits, image generation, file uploads, and better memory compared to the free version, enters the market in direct competition with Google’s new AI Plus plan in Indonesia.

    OpenAI tightens ChatGPT rules for teens amid safety concerns

    CEO Sam Altman announced new policies for under-18 users of ChatGPT, tightening safeguards around sensitive conversations. The company says it will block flirtatious exchanges with minors and add stronger protections around discussions of suicide, even escalating severe cases to parents or authorities. The move comes as OpenAI faces a wrongful death lawsuit tied to alleged chatbot interactions, underscoring rising concerns about the mental health risks of AI companions.

    OpenAI rolls out GPT-5-Codex to power smarter AI coding

    OpenAI rolled out GPT-5-Codex, a new version of its AI coding agent that can spend anywhere from a few seconds to seven hours tackling a task, depending on complexity. The company says this dynamic approach helps the model outperform GPT-5 on key coding benchmarks, including bug fixes and large-scale refactoring. The update comes as OpenAI looks to keep Codex competitive in a fast-growing market that now includes rivals like Claude Code, Cursor, and GitHub Copilot.

    OpenAI reshuffles team behind ChatGPT’s personality

    OpenAI is shaking up its Model Behavior team, the small but influential group that helps shape how its AI interacts with people. The roughly 14-person team is being folded into the larger Post Training group, now reporting to lead researcher Max Schwarzer. Meanwhile, founding leader Joanne Jang is spinning up a new unit called OAI Labs, focused on prototyping fresh ways for people to collaborate with AI.

    August 2025

    OpenAI to strengthen ChatGPT safeguards after teen suicide lawsuit

    OpenAI, facing a lawsuit from the parents of a 16-year-old who died by suicide, said in its blog that it has implemented new safeguards for ChatGPT, including stronger detection of mental health risks and parental control features. The AI company said the updates aim to provide tighter protections around suicide-related conversations and give parents more oversight of their children’s use.

    xAI claims Apple’s App Store practices give OpenAI an unfair advantage

    Elon Musk’s AI startup, xAI, filed a federal lawsuit in Texas against Apple and OpenAI, alleging that the two companies colluded to lock up key markets and shut out rivals.

    OpenAI targets India with cheaper monthly ChatGPT subscription

    OpenAI introduced its most affordable subscription plan, ChatGPT Go, in India, priced at 399 rupees per month (approximately $4.57). This move aims to expand OpenAI’s presence in its second-largest market, offering enhanced access to the latest GPT-5 model and additional features.

    ChatGPT mobile app hits $2B in revenue, $2.91 earned per install

    Since its May 2023 launch, ChatGPT’s mobile app has amassed $2 billion in global consumer spending, dwarfing competitors like Claude, Copilot, and Grok by roughly 30 times, according to Appfigures. This year alone, the app has generated $1.35 billion, a 673% increase from the same period in 2024, averaging nearly $193 million per month, or 53 times more than its nearest rival, Grok.

    OpenAI keeps multiple GPT models despite GPT-5 launch

    Despite unveiling GPT-5 as a “one-size-fits-all” AI, OpenAI is still offering several legacy AI options, including GPT-4o, GPT-4.1, and o3. Users can choose between new “Auto,” “Fast,” and “Thinking” modes for GPT-5, and paid subscribers regain access to legacy models like GPT-4o and GPT-4.1.

    Sam Altman addresses GPT-5 glitches and “chart crime” during Reddit AMA

    OpenAI CEO Sam Altman told Reddit users that GPT-5’s “dumber” behavior at launch was due to a router issue and promised fixes, double rate limits for Plus users, and transparency on which model is answering, while also shrugging off the infamous “chart crime” from the live presentation.

    OpenAI unveils GPT-5, a smarter, task-ready ChatGPT

    OpenAI released GPT-5, a next-gen AI that’s not just smarter but more useful — able to handle tasks like coding apps, managing calendars, and creating research briefs — while automatically figuring out the fastest or most thoughtful way to answer your questions.

    OpenAI offers ChatGPT Enterprise to federal agencies for just $1

    OpenAI is making a major push into federal government workflows, offering ChatGPT Enterprise to agencies for just $1 for the next year. The move comes after the U.S. General Services Administration (GSA) added OpenAI, Google, and Anthropic to its approved AI vendor list, allowing agencies to access these tools through preset contracts without negotiating pricing.

    OpenAI returns to open source with new AI models

    OpenAI unveiled its first open source language models since GPT-2, introducing two new open-weight AI releases: gpt-oss-120b, a high-performance model capable of running on a single Nvidia GPU, and gpt-oss-20b, a lighter model optimized for laptop use. The move comes amid growing competition in the global AI market and a push for more open technology in the U.S. and abroad.

    ChatGPT nears 700M weekly users, quadruples growth in a year

    ChatGPT’s rapid growth is accelerating. OpenAI said the chatbot was on track to hit 700 million weekly active users in the first week of August, up from 500 million at the end of March. Nick Turley, OpenAI’s VP and head of the ChatGPT app, highlighted the app’s growth on X, noting it has quadrupled in size over the past year.

    July 2025

    ChatGPT now has study mode

    OpenAI unveiled Study Mode, a new ChatGPT feature designed to promote critical thinking by prompting students to engage with material rather than simply receive answers. The tool is now rolling out to Free, Plus, Pro, and Team users, with availability for Edu subscribers expected in the coming weeks.

    Altman warns that ChatGPT therapy isn’t confidential

    ChatGPT users should be cautious when seeking emotional support from AI, as the AI industry lacks safeguards for sensitive conversations, OpenAI CEO Sam Altman said on a recent episode of This Past Weekend w/ Theo Von. Unlike human therapists, AI tools aren’t bound by doctor-patient confidentiality, he noted.

    ChatGPT hits 2.5B prompts daily

    ChatGPT now receives 2.5 billion prompts daily from users worldwide, including roughly 330 million from the U.S. That’s more than double the volume reported by CEO Sam Altman just eight months ago, highlighting the chatbot’s explosive growth.

    OpenAI launches a general-purpose agent in ChatGPT

    OpenAI has introduced ChatGPT Agent, which completes a wide variety of computer-based tasks on behalf of users and combines several capabilities like Operator and Deep Research, according to the company. OpenAI says the agent can automatically navigate a user’s calendar, draft editable presentations and slideshows, run code, shop online, and handle complex workflows from end to end, all within a secure virtual environment.

    Study warns of major risks with AI therapy chatbots

    Researchers at Stanford University have observed that therapy chatbots powered by large language models can sometimes stigmatize people with mental health conditions or respond in ways that are inappropriate or could be harmful. While chatbots are “being used as companions, confidants, and therapists,” the study found “significant risks.”

    OpenAI delays releasing its open model again

    CEO Sam Altman said that the company is delaying the release of its open model, which had already been postponed by a month earlier this summer. The ChatGPT maker, which initially planned to release the model around mid-July, has indefinitely postponed its launch to conduct additional safety testing.

    OpenAI is reportedly releasing an AI browser in the coming weeks

    OpenAI plans to release an AI-powered web browser to challenge Alphabet’s Google Chrome. It will keep some user interactions within ChatGPT, rather than directing people to external websites.

    ChatGPT is testing a mysterious new feature called “study together”

    Some ChatGPT users have noticed a new feature called “Study Together” appearing in their list of available tools. This is the chatbot’s approach to becoming a more effective educational tool, rather than simply providing answers to prompts. Some people also wonder whether there will be a feature that allows multiple users to join the chat, similar to a study group.

    Referrals from ChatGPT to news sites are rising but not enough to offset search declines

    Referrals from ChatGPT to news publishers are increasing. But this rise is insufficient to offset the decline in clicks as more users now obtain their news directly from AI or AI-powered search results, according to a report by digital market intelligence company Similarweb. Since Google launched its AI Overviews in May 2024, the percentage of news searches that don’t lead to clicks on news websites has increased from 56% to nearly 69% by May 2025.

    June 2025

    OpenAI uses Google’s AI chips to power its products

    OpenAI has started using Google’s AI chips to power ChatGPT and other products, as reported by Reuters. The ChatGPT maker is one of the biggest buyers of Nvidia’s GPUs, using the AI chips to train models, and this is the first time that OpenAI is using non-Nvidia chips in an important way.

    A new MIT study suggests that ChatGPT might be harming critical thinking skills

    Researchers from MIT’s Media Lab monitored writers’ brain activity across 32 brain regions. They found that ChatGPT users showed minimal brain engagement and consistently fell short on neural, linguistic, and behavioral measures. To conduct the test, the lab split 54 participants from the Boston area, ages 18 to 39, into three groups. The participants were asked to write multiple SAT essays using OpenAI’s ChatGPT, using the Google search engine, or using no tools at all.

    ChatGPT was downloaded 30 million times last month

    The ChatGPT app for iOS was downloaded 29.6 million times in the last 28 days, while TikTok, Facebook, Instagram, and X were downloaded a combined 32.9 million times over the same period, a difference of about 10.6%, according to a ZDNET report citing Similarweb’s X post.

    The energy needed for an average ChatGPT query can power a lightbulb for a couple of minutes

    Sam Altman said that the average ChatGPT query uses about one-fifteenth of a teaspoon of water, equivalent to 0.000083 gallons, per Business Insider. Each query also consumes about 0.34 watt-hours of electricity, roughly the energy needed to power a lightbulb for a few minutes.
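
    The figures above are easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch (the 10 W LED bulb wattage is our assumption for illustration, not a figure from the article):

```python
# Sanity-check the per-query water and energy figures quoted above.
TSP_PER_GALLON = 768       # 1 US gallon = 768 US teaspoons
ENERGY_WH = 0.34           # watt-hours per query (quoted figure)
LED_BULB_W = 10            # assumed wattage of a typical LED bulb

water_gal = (1 / 15) / TSP_PER_GALLON       # gallons per query
bulb_minutes = ENERGY_WH / LED_BULB_W * 60  # minutes a 10 W bulb runs on 0.34 Wh

print(f"{water_gal:.6f} gallons, {bulb_minutes:.1f} minutes")
```

    The water figure works out to roughly 0.000087 gallons, in line with the article’s 0.000083 (the gap is rounding), and 0.34 Wh runs a 10 W LED for about two minutes, consistent with “a few minutes.”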

    OpenAI has launched o3-pro, an upgraded version of its o3 AI reasoning model

    OpenAI has unveiled o3-pro, an enhanced version of o3, the reasoning model the ChatGPT maker launched earlier this year. The o3-pro model is available to ChatGPT Pro and Team users and in the API, while Enterprise and Edu users will get access in the third week of June.

    ChatGPT’s conversational voice mode has been upgraded

    OpenAI upgraded ChatGPT’s conversational voice mode for all paid users across markets and platforms. The startup’s update to Advanced Voice lets users converse with ChatGPT out loud in a more natural, fluid-sounding way, and also helps users translate between languages more easily, the company said.

    ChatGPT has added new features like meeting recording and connectors for Google Drive, Box, and more

    OpenAI’s ChatGPT now offers new functions for business users, including integrations with various cloud services, meeting recordings, and MCP connection support for linking to tools for in-depth research. The feature enables ChatGPT to retrieve information across users’ own services to answer their questions. For instance, an analyst could use the company’s slide deck and documents to develop an investment thesis.

    May 2025

    OpenAI CFO says hardware will drive ChatGPT’s growth

    OpenAI plans to purchase Jony Ive’s devices startup io for $6.4 billion. Sarah Friar, CFO of OpenAI, thinks that the hardware will significantly enhance ChatGPT and broaden OpenAI’s reach to a larger audience in the future.

    OpenAI’s ChatGPT unveils its AI coding agent, Codex

    OpenAI has introduced its AI coding agent, Codex, powered by codex-1, a version of its o3 AI reasoning model designed for software engineering tasks. OpenAI says codex-1 generates more precise and “cleaner” code than o3. The coding agent may take anywhere from one to 30 minutes to complete tasks such as writing simple features, fixing bugs, answering questions about your codebase, and running tests.

    Sam Altman aims to make ChatGPT more personalized by tracking every aspect of a person’s life

    Asked by an attendee at a recent AI event hosted by VC firm Sequoia how ChatGPT can become more personalized, OpenAI CEO Sam Altman said he wants ChatGPT to record and remember every detail of a person’s life.

    OpenAI releases its GPT-4.1 and GPT-4.1 mini AI models in ChatGPT

    OpenAI said in a post on X that it has launched its GPT-4.1 and GPT-4.1 mini AI models in ChatGPT.

    ChatGPT deep research gains a GitHub connector

    OpenAI has launched a new feature for ChatGPT deep research to analyze code repositories on GitHub. The ChatGPT deep research feature is in beta and lets developers connect to GitHub to ask questions about codebases and engineering documents. The connector will soon be available for ChatGPT Plus, Pro, and Team users, with support for Enterprise and Education coming shortly, per an OpenAI spokesperson.

    OpenAI launches a new data residency program in Asia

    After introducing a data residency program in Europe in February, OpenAI has now launched a similar program in Asian countries including India, Japan, Singapore, and South Korea. The new program will be accessible to users of ChatGPT Enterprise, ChatGPT Edu, and API. It will help organizations in Asia meet their local data sovereignty requirements when using OpenAI’s products.

    OpenAI to introduce a program to grow AI infrastructure

    OpenAI is unveiling a program called OpenAI for Countries, which aims to develop the necessary local infrastructure to serve international AI clients better. The AI startup will work with governments to assist with increasing data center capacity and customizing OpenAI’s products to meet specific language and local needs. OpenAI for Countries is part of efforts to support the company’s expansion of its AI data center Project Stargate to new locations outside the U.S., per Bloomberg.

    OpenAI promises to make changes to prevent future ChatGPT sycophancy

    OpenAI has announced its plan to make changes to its procedures for updating the AI models that power ChatGPT, following an update that caused the platform to become overly sycophantic for many users.

    April 2025

    OpenAI clarifies the reason ChatGPT became overly flattering and agreeable

    OpenAI has released a post explaining the recent sycophancy issues with GPT-4o, the default AI model powering ChatGPT, which led the company to revert an update to the model released last week. CEO Sam Altman acknowledged the issue on Sunday and confirmed two days later that the GPT-4o update was being rolled back. OpenAI is working on “additional fixes” to the model’s personality. Over the weekend, users on social media criticized the new model for making ChatGPT too validating and agreeable, and it quickly became a popular meme.

    OpenAI is working to fix a “bug” that let minors engage in inappropriate conversations

    An issue within OpenAI’s ChatGPT enabled the chatbot to create graphic erotic content for accounts registered by users under the age of 18, as demonstrated by TechCrunch’s testing, a fact later confirmed by OpenAI. “Protecting younger users is a top priority, and our Model Spec, which guides model behavior, clearly restricts sensitive content like erotica to narrow contexts such as scientific, historical, or news reporting,” a spokesperson told TechCrunch via email. “In this case, a bug allowed responses outside those guidelines, and we are actively deploying a fix to limit these generations.”

    ChatGPT helps users by giving recommendations, showing images, and reviewing products for online shopping

    OpenAI has added a few features to its ChatGPT search, its web search tool in ChatGPT, to give users an improved online shopping experience. The company says people can ask super-specific questions using natural language and receive customized results. The chatbot provides recommendations, images, and reviews of products in various categories such as fashion, beauty, home goods, and electronics.

    OpenAI wants its AI model to access cloud models for assistance

    OpenAI leaders have been talking about allowing the open model to link up with OpenAI’s cloud-hosted models to improve its ability to respond to intricate questions, two sources familiar with the situation told TechCrunch.

    OpenAI aims to make its new “open” AI model the best on the market

    OpenAI is preparing to launch an AI system that will be openly accessible, allowing users to download it for free without any API restrictions. Aidan Clark, OpenAI’s VP of research, is spearheading the development of the open model, which is in the very early stages, sources familiar with the situation told TechCrunch.

    OpenAI’s GPT-4.1 may be less aligned than earlier models

    OpenAI released a new AI model called GPT-4.1 in mid-April. However, multiple independent tests indicate that the model is less aligned than previous OpenAI releases. The company skipped publishing a safety report, or system card, for GPT-4.1, claiming in a statement to TechCrunch that “GPT-4.1 is not a frontier model, so there won’t be a separate system card released for it.”

    OpenAI’s o3 AI model scored lower than expected on a benchmark

    Questions have been raised about OpenAI’s transparency and model-testing practices after first- and third-party benchmark results for the o3 AI model diverged. OpenAI introduced o3 in December, stating that the model could solve approximately 25% of questions on FrontierMath, a difficult math problem set. But Epoch AI, the research institute behind FrontierMath, found that o3 scored approximately 10%, significantly below OpenAI’s top reported score.

    OpenAI unveils Flex processing for cheaper, slower AI tasks

    OpenAI has launched a new API feature called Flex processing that allows users to use AI models at a lower cost but with slower response times and occasional resource unavailability. Flex processing is available in beta on the o3 and o4-mini reasoning models for non-production tasks like model evaluations, data enrichment, and asynchronous workloads.

    OpenAI’s latest AI models now have a safeguard against biorisks

    OpenAI has rolled out a new system to monitor its AI reasoning models, o3 and o4-mini, for biological and chemical threats. The system is designed to prevent the models from giving advice that could potentially lead to harmful attacks, as stated in OpenAI’s safety report.

    OpenAI launches its latest reasoning models, o3 and o4-mini

    OpenAI has released two new reasoning models, o3 and o4-mini, just two days after launching GPT-4.1. The company claims o3 is the most advanced reasoning model it has developed, while o4-mini is said to provide a balance of price, speed, and performance. The new models stand out from previous reasoning models because they can use ChatGPT features like web browsing, coding, and image processing and generation. But they hallucinate more than several of OpenAI’s previous models.

    OpenAI has added a new section to ChatGPT to offer easier access to AI-generated images for all user tiers

    OpenAI introduced a new “Library” section to make it easier for users to access the images they’ve generated on mobile and web, per the company’s X post.

    OpenAI could “adjust” its safeguards if rivals release “high-risk” AI

    OpenAI said on Tuesday that it might revise its safety standards if “another frontier AI developer releases a high-risk system without comparable safeguards.” The move shows how commercial AI developers face more pressure to rapidly implement models due to the increased competition.

    OpenAI is building its own social media network

    OpenAI is currently in the early stages of developing its own social media platform to compete with Elon Musk’s X and Mark Zuckerberg’s Instagram and Threads, according to The Verge. It is unclear whether OpenAI intends to launch the social network as a standalone application or incorporate it into ChatGPT.

    OpenAI will remove its largest AI model, GPT-4.5, from the API, in July

    OpenAI will discontinue its largest AI model, GPT-4.5, in its API even though the model launched only in late February. GPT-4.5 will remain available in ChatGPT as a research preview for paying customers. Developers can use GPT-4.5 through OpenAI’s API until July 14; after that, they will need to switch to GPT-4.1, which was released on April 14.

    OpenAI unveils GPT-4.1 AI models that focus on coding capabilities

    OpenAI has launched three models in the GPT-4.1 family, GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano, with a specific focus on coding capabilities. They are accessible via the OpenAI API but not ChatGPT. In the competition to develop advanced programming models, GPT-4.1 will rival AI models such as Google’s Gemini 2.5 Pro, Anthropic’s Claude 3.7 Sonnet, and DeepSeek’s upgraded V3.

    OpenAI will discontinue ChatGPT’s GPT-4 at the end of April

    OpenAI plans to sunset GPT-4, an AI model introduced more than two years ago, and replace it with GPT-4o, the current default model, per its changelog. The change takes effect on April 30. GPT-4 will remain available via OpenAI’s API.

    OpenAI could release GPT-4.1 soon

    OpenAI may launch several new AI models, including GPT-4.1, soon, The Verge reported, citing anonymous sources. GPT-4.1 would be an update of OpenAI’s GPT-4o, which was released last year. On the list of upcoming models are GPT-4.1 and smaller versions like GPT-4.1 mini and nano, per the report.

    OpenAI has updated ChatGPT to use information from your previous conversations

    OpenAI started updating ChatGPT to enable the chatbot to remember previous conversations with a user and customize its responses based on that context. This feature is rolling out to ChatGPT Pro and Plus users first, excluding those in the U.K., EU, Iceland, Liechtenstein, Norway, and Switzerland.

    OpenAI is working on watermarks for images made with ChatGPT

    It looks like OpenAI is working on a watermarking feature for images generated using GPT-4o. AI researcher Tibor Blaho spotted a new “ImageGen” watermark feature in the new beta of ChatGPT’s Android app. Blaho also found mentions of other tools: “Structured Thoughts,” “Reasoning Recap,” “CoT Search Tool,” and “l1239dk1.”

    OpenAI offers ChatGPT Plus for free to U.S., Canadian college students

    OpenAI is offering its $20-per-month ChatGPT Plus subscription tier for free to all college students in the U.S. and Canada through the end of May. The offer will let millions of students use OpenAI’s premium service, which offers access to the company’s GPT-4o model, image generation, voice interaction, and research tools that are not available in the free version.

    ChatGPT users have generated over 700M images so far

    More than 130 million users have created over 700 million images since ChatGPT got the upgraded image generator on March 25, according to OpenAI COO Brad Lightcap. The image generator was made available to all ChatGPT users on March 31 and went viral for its ability to create Ghibli-style images.

    OpenAI’s o3 model could cost more to run than initial estimate

    The Arc Prize Foundation, which develops the AI benchmark ARC-AGI, has updated its estimate of the computing costs for OpenAI’s o3 “reasoning” model. The organization originally estimated that the best-performing configuration of o3 it tested, o3 high, would cost approximately $3,000 to solve a single problem. The foundation now thinks the cost could be much higher, possibly around $30,000 per task.

    OpenAI CEO says capacity issues will cause product delays

    In a series of posts on X, OpenAI CEO Sam Altman said the company’s new image-generation tool’s popularity may cause product releases to be delayed. “We are getting things under control, but you should expect new releases from OpenAI to be delayed, stuff to break, and for service to sometimes be slow as we deal with capacity challenges,” he wrote.

    March 2025

    OpenAI plans to release a new ‘open’ AI language model

    OpenAI intends to release its “first” open language model since GPT-2 “in the coming months.” The company plans to host developer events to gather feedback and eventually showcase prototypes of the model. The first developer event is to be held in San Francisco, with sessions to follow in Europe and Asia.

    OpenAI removes ChatGPT’s restrictions on image generation

    OpenAI made a notable change to its content moderation policies after the success of its new image generator in ChatGPT, which went viral for being able to create Studio Ghibli-style images. The company has updated its policies to allow ChatGPT to generate images of public figures, hateful symbols, and racial features when requested. OpenAI had previously declined such prompts due to the potential controversy or harm they may cause. However, the company has now “evolved” its approach, as stated in a blog post published by Joanne Jang, the lead for OpenAI’s model behavior.

    OpenAI adopts Anthropic’s standard for linking AI models with data

    OpenAI wants to incorporate Anthropic’s Model Context Protocol (MCP) into all of its products, including the ChatGPT desktop app. MCP, an open-source standard, helps AI models generate more accurate and suitable responses to specific queries, and lets developers create bidirectional links between data sources and AI applications like chatbots. The protocol is currently available in the Agents SDK, and support for the ChatGPT desktop app and Responses API will be coming soon, OpenAI CEO Sam Altman said.
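    For a concrete sense of what MCP traffic looks like: MCP messages are JSON-RPC 2.0, so a client asking a server to invoke a tool sends something like the sketch below. The tool name and arguments here are hypothetical illustrations, not from the source.

```python
import json

# Hypothetical MCP tool-call request. MCP transports JSON-RPC 2.0 messages;
# this one asks a server to run an (assumed) "search_docs" tool.
def build_tool_call(request_id, tool_name, arguments):
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

request = build_tool_call(1, "search_docs", {"query": "refund policy"})
print(json.dumps(request, indent=2))
```

    In a real integration, a client library (such as the Agents SDK mentioned above) would handle constructing and transporting these messages for you.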

    ChatGPT’s viral Ghibli-style images raise copyright concerns

    The latest update of the image generator on OpenAI’s ChatGPT has triggered a flood of AI-generated memes in the style of Studio Ghibli, the Japanese animation studio behind blockbuster films like “My Neighbor Totoro” and “Spirited Away.” The burgeoning mass of Ghibli-esque images has sparked concerns about whether OpenAI has violated copyright laws, especially since the company is already facing legal action for using source material without authorization.

    OpenAI expects revenue to triple to $12.7 billion this year

    OpenAI expects its revenue to triple to $12.7 billion in 2025, fueled by the performance of its paid AI software, Bloomberg reported, citing an anonymous source. While the startup doesn’t expect to reach positive cash flow until 2029, it expects revenue to increase significantly in 2026 to surpass $29.4 billion, the report said.

    ChatGPT has upgraded its image-generation feature

    OpenAI on Tuesday rolled out a major upgrade to ChatGPT’s image-generation capabilities: ChatGPT can now use the GPT-4o model to generate and edit images and photos directly. The feature went live earlier this week in ChatGPT and Sora, OpenAI’s AI video-generation tool, for subscribers of the company’s Pro plan, priced at $200 a month, and will be available soon to ChatGPT Plus subscribers and developers using the company’s API service. The company’s CEO Sam Altman said on Wednesday, however, that the release of the image generation feature to free users would be delayed due to higher demand than the company expected.

    OpenAI announces leadership updates

    Brad Lightcap, OpenAI’s chief operating officer, will lead the company’s global expansion and manage corporate partnerships as CEO Sam Altman shifts his focus to research and products, according to a blog post from OpenAI. Lightcap, who previously worked with Altman at Y Combinator, joined the Microsoft-backed startup in 2018. OpenAI also said Mark Chen would step into the expanded role of chief research officer, and Julia Villagra will take on the role of chief people officer.

    OpenAI’s AI voice assistant now has advanced features

    OpenAI has updated its AI voice assistant with improved chatting capabilities, according to a video posted on Monday (March 24) to the company’s official media channels. The update enables real-time conversations, and the AI assistant is said to be more personable and interrupts users less often. Users on ChatGPT’s free tier can now access the new version of Advanced Voice Mode, while paying users will receive answers that are “more direct, engaging, concise, specific, and creative,” a spokesperson from OpenAI told TechCrunch.

    OpenAI, Meta in talks with Reliance in India

    OpenAI and Meta have separately engaged in discussions with Indian conglomerate Reliance Industries regarding potential collaborations to enhance their AI services in the country, per a report by The Information. One key topic being discussed is Reliance Jio distributing OpenAI’s ChatGPT. Reliance has proposed selling OpenAI’s models to businesses in India through an application programming interface (API) so they can incorporate AI into their operations. Meta also plans to bolster its presence in India by constructing a large 3GW data center in Jamnagar, Gujarat. OpenAI, Meta, and Reliance have not yet officially announced these plans.

    OpenAI faces privacy complaint in Europe for chatbot’s defamatory hallucinations

    Noyb, a privacy rights advocacy group, is supporting an individual in Norway who was shocked to discover that ChatGPT was providing false information about him, stating that he had been found guilty of killing two of his children and trying to harm the third. “The GDPR is clear. Personal data has to be accurate,” said Joakim Söderberg, data protection lawyer at Noyb, in a statement. “If it’s not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”

    OpenAI upgrades its transcription and voice-generating AI models

    OpenAI has added new transcription and voice-generating AI models to its APIs: a text-to-speech model, “gpt-4o-mini-tts,” that delivers more nuanced and realistic-sounding speech, as well as two speech-to-text models, “gpt-4o-transcribe” and “gpt-4o-mini-transcribe.” The company says they are improved versions of its existing models and hallucinate less.
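    As a rough sketch of how a developer might call the new text-to-speech model, the snippet below builds (but does not send) a request body for OpenAI’s audio endpoint. The endpoint path is OpenAI’s standard speech endpoint, but the voice name and input text are illustrative; check OpenAI’s documentation for the supported values.

```python
import json

API_URL = "https://api.openai.com/v1/audio/speech"  # OpenAI's audio speech endpoint

# Build (but don't send) a request body for the new TTS model.
# Sending it requires an API key in an "Authorization: Bearer ..." header.
payload = {
    "model": "gpt-4o-mini-tts",
    "voice": "alloy",  # illustrative; see OpenAI docs for supported voices
    "input": "Your appointment is confirmed for Tuesday at 3 p.m.",
}
body = json.dumps(payload)
print(body)
```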

    OpenAI has launched o1-pro, a more powerful version of its o1 model

    OpenAI has introduced o1-pro in its developer API. OpenAI says o1-pro uses more compute than its o1 “reasoning” AI model to deliver “consistently better responses.” It’s only accessible to select developers who have spent at least $5 on OpenAI API services. OpenAI charges $150 for every million tokens (about 750,000 words) fed into the model and $600 for every million tokens the model produces. That’s twice the input price of OpenAI’s GPT-4.5 and 10 times the price of regular o1.
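    Those per-token rates translate directly into per-request costs; a quick back-of-the-envelope calculation using the figures above:

```python
# o1-pro API pricing, per the figures above (USD per million tokens).
INPUT_PRICE = 150.0
OUTPUT_PRICE = 600.0

def o1_pro_cost(input_tokens, output_tokens):
    """Estimated USD cost for one o1-pro API request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE

# e.g., a 10,000-token prompt with a 2,000-token response:
print(f"${o1_pro_cost(10_000, 2_000):.2f}")  # $2.70
```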

    OpenAI research lead Noam Brown thinks AI “reasoning” models could’ve arrived decades ago

    Noam Brown, who heads AI reasoning research at OpenAI, thinks that certain types of AI models for “reasoning” could have been developed 20 years ago if researchers had understood the correct approach and algorithms.

    OpenAI says it has trained an AI that’s “really good” at creative writing

    OpenAI CEO Sam Altman said, in a post on X, that the company has trained a “new model” that’s “really good” at creative writing. He posted a lengthy sample from the model given the prompt “Please write a metafictional literary short story about AI and grief.” OpenAI has not extensively explored the use of AI for writing fiction. The company has mostly concentrated on challenges in rigid, predictable areas such as math and programming. And it turns out that it might not be that great at creative writing at all.

    OpenAI launches new tools to help businesses build AI agents

    OpenAI rolled out new tools designed to help developers and businesses build AI agents — automated systems that can independently accomplish tasks — using the company’s own AI models and frameworks. The tools are part of OpenAI’s new Responses API, which enables enterprises to develop customized AI agents that can perform web searches, scan through company files, and navigate websites, similar to OpenAI’s Operator product. The Responses API effectively replaces OpenAI’s Assistants API, which the company plans to discontinue in the first half of 2026.

    OpenAI reportedly plans to charge up to $20,000 a month for specialized AI ‘agents’

    OpenAI intends to release several “agent” products tailored for different applications, including sorting and ranking sales leads and software engineering, according to a report from The Information. One, a “high-income knowledge worker” agent, will reportedly be priced at $2,000 a month. Another, a software developer agent, is said to cost $10,000 a month. The most expensive rumored agents, which are said to be aimed at supporting “PhD-level research,” are expected to cost $20,000 per month. The jaw-dropping figure is indicative of how much cash OpenAI needs right now: The company lost roughly $5 billion last year after paying for costs related to running its services and other expenses. It’s unclear when these agentic tools might launch or which customers will be eligible to buy them.

    ChatGPT can directly edit your code

    The latest version of the macOS ChatGPT app allows users to edit code directly in supported developer tools, including Xcode, VS Code, and JetBrains. ChatGPT Plus, Pro, and Team subscribers can use the feature now, and the company plans to roll it out to Enterprise, Edu, and free users.

    ChatGPT’s weekly active users doubled in less than 6 months, thanks to new releases

    According to a new report from VC firm Andreessen Horowitz (a16z), OpenAI’s AI chatbot, ChatGPT, experienced solid growth in the second half of 2024. It took ChatGPT nine months to increase its weekly active users from 100 million in November 2023 to 200 million in August 2024, but it only took less than six months to double that number once more, according to the report. ChatGPT’s weekly active users increased to 300 million by December 2024 and 400 million by February 2025. ChatGPT has experienced significant growth recently due to the launch of new models and features, such as GPT-4o, with multimodal capabilities. ChatGPT usage spiked from April to May 2024, shortly after that model’s launch.

    February 2025

    OpenAI cancels its o3 AI model in favor of a ‘unified’ next-gen release

    OpenAI has effectively canceled the release of o3 in favor of what CEO Sam Altman is calling a “simplified” product offering. In a post on X, Altman said that, in the coming months, OpenAI will release a model called GPT-5 that “integrates a lot of [OpenAI’s] technology,” including o3, in ChatGPT and its API. As a result of that roadmap decision, OpenAI no longer plans to release o3 as a standalone model. 

    ChatGPT may not be as power-hungry as once assumed

    A commonly cited stat is that ChatGPT requires around 3 watt-hours of power to answer a single question. Using OpenAI’s latest default model for ChatGPT, GPT-4o, as a reference, nonprofit AI research institute Epoch AI found the average ChatGPT query consumes around 0.3 watt-hours. However, the analysis doesn’t consider the additional energy costs incurred by ChatGPT with features like image generation or input processing.
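    The gap between the two estimates compounds at scale. A rough illustration, using the figures above and a purely hypothetical volume of one billion queries per day:

```python
# Compare the commonly cited estimate (3 Wh/query) with Epoch AI's (0.3 Wh/query)
# at a hypothetical volume of 1 billion queries per day.
QUERIES_PER_DAY = 1_000_000_000

def daily_mwh(wh_per_query):
    """Daily energy use in megawatt-hours for a given per-query estimate."""
    return QUERIES_PER_DAY * wh_per_query / 1_000_000

print(daily_mwh(3.0))   # 3000.0 MWh/day under the commonly cited figure
print(daily_mwh(0.3))   # 300.0 MWh/day under Epoch AI's figure
```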

    OpenAI now reveals more of its o3-mini model’s thought process

    In response to pressure from rivals like DeepSeek, OpenAI is changing the way its o3-mini model communicates its step-by-step “thought” process. ChatGPT users will see an updated “chain of thought” that shows more of the model’s “reasoning” steps and how it arrived at answers to questions.

    You can now use ChatGPT web search without logging in

    OpenAI is now allowing anyone to use ChatGPT web search without having to log in. While OpenAI had previously allowed users to ask ChatGPT questions without signing in, responses were restricted to the chatbot’s last training update. This only applies through ChatGPT.com, however. To use ChatGPT in any form through the native mobile app, you will still need to be logged in.

    OpenAI unveils a new ChatGPT agent for ‘deep research’

    OpenAI announced a new AI “agent” called deep research that’s designed to help people conduct in-depth, complex research using ChatGPT. OpenAI says the “agent” is intended for instances where you don’t just want a quick answer or summary, but instead need to assiduously consider information from multiple websites and other sources.

    January 2025

    OpenAI used a subreddit to test AI persuasion

    OpenAI used the subreddit r/ChangeMyView to measure the persuasive abilities of its AI reasoning models. OpenAI says it collects user posts from the subreddit and asks its AI models to write replies, in a closed environment, that would change the Reddit user’s mind on a subject. The company then shows the responses to testers, who assess how persuasive the argument is, and finally OpenAI compares the AI models’ responses to human replies for that same post. 

    OpenAI launches o3-mini, its latest ‘reasoning’ model

    OpenAI launched a new AI “reasoning” model, o3-mini, the newest in the company’s o family of models. OpenAI first previewed the model in December alongside a more capable system called o3. OpenAI is pitching its new model as both “powerful” and “affordable.”

    ChatGPT’s mobile users are 85% male, report says

    A new report from app analytics firm Appfigures found that over half of ChatGPT’s mobile users are under age 25, with users between ages 50 and 64 making up the second largest age demographic. The gender gap among ChatGPT users is even more significant. Appfigures estimates that across age groups, men make up 84.5% of all users.

    OpenAI launches ChatGPT plan for US government agencies

    OpenAI launched ChatGPT Gov designed to provide U.S. government agencies an additional way to access the tech. ChatGPT Gov includes many of the capabilities found in OpenAI’s corporate-focused tier, ChatGPT Enterprise. OpenAI says that ChatGPT Gov enables agencies to more easily manage their own security, privacy, and compliance, and could expedite internal authorization of OpenAI’s tools for the handling of non-public sensitive data.

    More teens report using ChatGPT for schoolwork, despite the tech’s faults

    Younger Gen Zers are embracing ChatGPT for schoolwork, according to a new survey by the Pew Research Center. In a follow-up to its 2023 poll on ChatGPT usage among young people, Pew asked ~1,400 U.S.-based teens ages 13 to 17 whether they’ve used ChatGPT for homework or other school-related assignments. Twenty-six percent said that they had, double the number from two years ago. Just over half of teens responding to the poll said they think it’s acceptable to use ChatGPT for researching new subjects. But considering the ways ChatGPT can fall short, the results are possibly cause for alarm.

    OpenAI says it may store deleted Operator data for up to 90 days

    OpenAI says that it might store chats and associated screenshots from customers who use Operator, the company’s AI “agent” tool, for up to 90 days — even after a user manually deletes them. While OpenAI has a similar deleted data retention policy for ChatGPT, the retention period for ChatGPT is only 30 days, which is 60 days shorter than Operator’s.

    OpenAI launches Operator, an AI agent that performs tasks autonomously

    OpenAI is launching a research preview of Operator, a general-purpose AI agent that can take control of a web browser and independently perform certain actions. Operator promises to automate tasks such as booking travel accommodations, making restaurant reservations, and shopping online.

    OpenAI may preview its agent tool for users on the $200-per-month Pro plan

    Operator, OpenAI’s agent tool, could be released sooner rather than later. Changes to ChatGPT’s code base suggest that Operator will be available as an early research preview to users on the $200 Pro subscription plan. The changes aren’t yet publicly visible, but a user on X who goes by Choi spotted these updates in ChatGPT’s client-side code. TechCrunch separately identified the same references to Operator on OpenAI’s website.

    OpenAI tests phone number-only ChatGPT signups

    OpenAI has begun testing a feature that lets new ChatGPT users sign up with only a phone number — no email required. The feature is currently in beta in the U.S. and India. However, users who create an account using their number can’t upgrade to one of OpenAI’s paid plans without verifying their account via an email. Multi-factor authentication also isn’t supported without a valid email.

    ChatGPT now lets you schedule reminders and recurring tasks

    ChatGPT’s new beta feature, called tasks, allows users to set simple reminders. For example, you can ask ChatGPT to remind you when your passport expires in six months, and the AI assistant will follow up with a push notification on whatever platform you have tasks enabled. The feature will start rolling out to ChatGPT Plus, Team, and Pro users around the globe this week.

    New ChatGPT feature lets users assign it traits like ‘chatty’ and ‘Gen Z’

    OpenAI is introducing a new way for users to customize their interactions with ChatGPT. Some users found they can specify a preferred name or nickname and “traits” they’d like the chatbot to have. OpenAI suggests traits like “Chatty,” “Encouraging,” and “Gen Z.” However, some users reported that the new options have disappeared, so it’s possible they went live prematurely.

    FAQs:

    What is ChatGPT? How does it work?

    ChatGPT is a general-purpose chatbot, developed by tech startup OpenAI, that uses artificial intelligence to generate text after a user enters a prompt. It is powered by OpenAI’s GPT family of large language models, which use deep learning to produce human-like text; the current default model is GPT-4o.

    When did ChatGPT get released?

    ChatGPT was released for public use on November 30, 2022.

    What is the latest version of ChatGPT?

    Both the free version of ChatGPT and the paid ChatGPT Plus are regularly updated with new GPT models. The most recent model is GPT-4o.

    Can I use ChatGPT for free?

    Yes. In addition to the paid ChatGPT Plus subscription, there is a free version of ChatGPT that only requires a sign-in.

    Who uses ChatGPT?

    Anyone can use ChatGPT! More and more tech companies and search engines are utilizing the chatbot to automate text or quickly answer user questions/concerns.

    What companies use ChatGPT?

    Multiple enterprises utilize ChatGPT, although others may limit the use of the AI-powered tool.

    Most recently, Microsoft announced at its 2023 Build conference that it is integrating its ChatGPT-based Bing experience into Windows 11. Brooklyn-based 3D display startup Looking Glass uses ChatGPT to produce holograms you can communicate with. And nonprofit organization Solana officially integrated the chatbot into its network with a ChatGPT plug-in geared toward end users, to help onboard them into the web3 space.

    What does GPT mean in ChatGPT?

    GPT stands for Generative Pre-Trained Transformer.

    What is the difference between ChatGPT and a chatbot?

    A chatbot is any software or system that holds a dialogue with a person, but it doesn’t necessarily have to be AI-powered. For example, some chatbots are rules-based, giving canned responses to questions.

    ChatGPT is AI-powered and utilizes LLM technology to generate text after a prompt.

    Can ChatGPT write essays?

    Yes.

    Can ChatGPT commit libel?

    Due to the nature of how these models work, they don’t know or care whether something is true, only that it looks true. That’s a problem when you’re using it to do your homework, sure, but when it accuses you of a crime you didn’t commit, that may well at this point be libel.

    We will see how handling troubling statements produced by ChatGPT will play out over the next few months as tech and legal experts attempt to tackle the fastest moving target in the industry.

    Does ChatGPT have an app?

    Yes, there is a free ChatGPT mobile app for iOS and Android users.

    What is the ChatGPT character limit?

    OpenAI doesn’t document a character limit for ChatGPT anywhere. However, users have noted that there are some limitations after around 500 words.

    Does ChatGPT have an API?

    Yes, it was released March 1, 2023.
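    As a minimal sketch of what an API call looks like, the snippet below builds (but does not send) a request body for OpenAI’s Chat Completions endpoint. Actually sending it requires an API key in an “Authorization: Bearer …” header, and the model name and prompt here are illustrative.

```python
import json

API_URL = "https://api.openai.com/v1/chat/completions"  # OpenAI Chat Completions endpoint

# Build (but don't send) a minimal request body for the API.
payload = {
    "model": "gpt-4o",  # illustrative; any available model name works here
    "messages": [
        {"role": "user", "content": "Summarize the plot of Hamlet in one sentence."}
    ],
}
body = json.dumps(payload)
print(body)
```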

    What are some sample everyday uses for ChatGPT?

    Everyday examples include programming, scripts, email replies, listicles, blog ideas, summarization, etc.

    What are some advanced uses for ChatGPT?

    Advanced use examples include debugging code, programming languages, scientific concepts, complex problem solving, etc.

    How good is ChatGPT at writing code?

    It depends on the nature of the program. While ChatGPT can write workable Python code, it can’t necessarily program an entire app’s worth of code. That’s because ChatGPT lacks context awareness — in other words, the generated code isn’t always appropriate for the specific context in which it’s being used.

    Can you save a ChatGPT chat?

    Yes. OpenAI allows users to save chats in the ChatGPT interface, stored in the sidebar of the screen. There are no built-in sharing features yet.

    Are there alternatives to ChatGPT?

    Yes. There are multiple AI-powered chatbot competitors such as Together, Google’s Gemini and Anthropic’s Claude, and developers are creating open source alternatives.

    How does ChatGPT handle data privacy?

    OpenAI has said that individuals in “certain jurisdictions” (such as the EU) can object to the processing of their personal information by its AI models by filling out this form. This includes the ability to make requests for deletion of AI-generated references about you. Although OpenAI notes it may not grant every request since it must balance privacy requests against freedom of expression “in accordance with applicable laws”.

    The web form for requesting deletion of data about you is titled “OpenAI Personal Data Removal Request.”

    In its privacy policy, the ChatGPT maker makes a passing acknowledgement of the objection requirements attached to relying on “legitimate interest” (LI), pointing users toward more information about requesting an opt-out: “See here for instructions on how you can opt out of our use of your information to train our models.”

    What controversies have surrounded ChatGPT?

    Recently, Discord announced that it had integrated OpenAI’s technology into its bot Clyde, and two users subsequently tricked Clyde into providing them with instructions for making the illegal drug methamphetamine (meth) and the incendiary mixture napalm.

    An Australian mayor has publicly announced he may sue OpenAI for defamation due to ChatGPT’s false claims that he had served time in prison for bribery. This would be the first defamation lawsuit against the text-generating service.

    CNET found itself in the midst of controversy after Futurism reported the publication was publishing articles under a mysterious byline completely generated by AI. The private equity company that owns CNET, Red Ventures, was accused of using ChatGPT for SEO farming, even if the information was incorrect.

    Several major school systems and colleges, including New York City Public Schools, have banned ChatGPT from their networks and devices. They claim that the AI impedes the learning process by promoting plagiarism and misinformation, a claim that not every educator agrees with.

    There have also been cases of ChatGPT accusing individuals of false crimes.

    Where can I find examples of ChatGPT prompts?

    Several marketplaces host and provide ChatGPT prompts, either for free or for a nominal fee. One is PromptBase. Another is ChatX. More launch every day.

    Can ChatGPT be detected?

    Poorly. Several tools claim to detect ChatGPT-generated text, but in our tests, they’re inconsistent at best.

    Are ChatGPT chats public?

    No. But OpenAI recently disclosed a bug, since fixed, that exposed the titles of some users’ conversations to other people on the service.

    What lawsuits are there surrounding ChatGPT?

    None specifically targeting ChatGPT. But OpenAI is involved in at least one lawsuit that has implications for AI systems trained on publicly available data, which would touch on ChatGPT.

    Are there issues regarding plagiarism with ChatGPT?

    Yes. Text-generating AI models like ChatGPT have a tendency to regurgitate content from their training data.

    This story is continually updated with new information.


    Kyle Wiggers, Cody Corrall, Alyssa Stringer, Kate Park

    Source link