ReportWire

Tag: generative ai

  • YouTube Kills Channels Creating Fake AI Movie Trailers

    Earlier this year, it was reported that YouTube was well aware its platform was plagued by fake movie trailers, often for films that didn’t have real trailers of their own. Created or marketed with generative AI, these fakes have made searching for trailers a pain, and now that’s thankfully a bit less painful.

    Per Deadline, YouTube shut down Screen Culture and KH Studio, two of the worst offenders of this trend, which the outlet previously reported on. The two channels made “trailers” for films like Fantastic Four: First Steps and Superman, as well as popular TV shows like Squid Game. Together, they had over 2 million subscribers and over a billion views. Go to the individual page for either, and it’ll say: “This page isn’t available. Sorry about that. Try searching for something else.”

    After Deadline first investigated Screen Culture and KH Studio, YouTube began cracking down on both, suspending them from the partner program and eventually pausing ads on their videos. Monetization was stopped because the channels were letting major studios like Disney take a cut of their ad revenue, a violation of the platform’s rules.

    In the months since then, Disney’s been talking out of both sides of its mouth re: generative AI. Last week, it hit Google with a cease-and-desist, claiming the tech company’s AI services infringed on several copyrights—a letter that came just after Disney announced a three-year licensing deal and $1 billion investment into OpenAI to bring over 200 of its characters to ChatGPT and the Sora video platform. So instead of genAI junk infecting YouTube and the movies you like, it’ll soon be hitting Disney+, the platform you or someone you know pays for…hooray.

    Want more io9 news? Check out when to expect the latest Marvel, Star Wars, and Star Trek releases, what’s next for the DC Universe on film and TV, and everything you need to know about the future of Doctor Who.

    Justin Carter

    Source link

  • AI Image Generators Default to the Same 12 Photo Styles, Study Finds

    AI image generation models have massive sets of visual data to pull from in order to create unique outputs. And yet, researchers have found that when a model is pushed to produce images based on a series of slowly shifting prompts, it defaults to just a handful of visual motifs, resulting in an ultimately generic style.

    A study published in the journal Patterns took an AI image generator, Stable Diffusion XL, and a vision-language model, LLaVA, and put them to the test by playing a game of visual telephone. The game went like this: the Stable Diffusion XL model would be given a short prompt and required to produce an image—for example, “As I sat particularly alone, surrounded by nature, I found an old book with exactly eight pages that told a story in a forgotten language waiting to be read and understood.” That image was presented to the LLaVA model, which was asked to describe it. That description was then fed back to Stable Diffusion, which was asked to create a new image based on it. This went on for 100 rounds.
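    The setup is straightforward to reproduce at a small scale. Below is a minimal sketch of the loop using Hugging Face’s diffusers and transformers libraries; the checkpoints, chat template, and generation settings are our assumptions for illustration, not necessarily the study’s exact configuration.

    ```python
    # Minimal sketch of the "visual telephone" loop: text -> image -> text -> ...
    # Assumptions: model checkpoints and settings are illustrative only.
    import torch
    from diffusers import StableDiffusionXLPipeline
    from transformers import AutoProcessor, LlavaForConditionalGeneration

    device = "cuda"
    sdxl = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to(device)
    processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")
    llava = LlavaForConditionalGeneration.from_pretrained(
        "llava-hf/llava-1.5-7b-hf", torch_dtype=torch.float16
    ).to(device)

    prompt = ("As I sat particularly alone, surrounded by nature, I found an old "
              "book with exactly eight pages that told a story in a forgotten "
              "language waiting to be read and understood.")

    for turn in range(100):  # the study ran 100 rounds per game
        image = sdxl(prompt).images[0]                      # text -> image
        chat = "USER: <image>\nDescribe this image in detail. ASSISTANT:"
        inputs = processor(images=image, text=chat,
                           return_tensors="pt").to(device, torch.float16)
        out = llava.generate(**inputs, max_new_tokens=120)  # image -> text
        new_tokens = out[0][inputs["input_ids"].shape[1]:]
        prompt = processor.decode(new_tokens, skip_special_tokens=True).strip()
        image.save(f"turn_{turn:03d}.png")  # watch the motifs drift over time
    ```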

    © Hintze et al., Patterns

    Much like in a human game of telephone, the original image was quickly lost. No surprise there, especially if you’ve ever seen one of those time-lapse videos where people ask an AI model to reproduce an image without making any changes, only for the picture to quickly turn into something that doesn’t remotely resemble the original. What did surprise the researchers, though, was the fact that the models default to just a handful of generic-looking styles. Across 1,000 different runs of the telephone game, the researchers found that most of the image sequences would eventually fall into just one of 12 dominant motifs.

    In most cases, the shift is gradual. A few times, it happened suddenly. But it almost always happened. And researchers were not impressed. In the study, they referred to the common image styles as “visual elevator music,” basically the type of pictures that you’d see hanging up in a hotel room. The most common scenes included things like maritime lighthouses, formal interiors, urban night settings, and rustic architecture.

    Even when the researchers switched to different models for image generation and descriptions, the same types of trends emerged. Researchers said that when the game is extended to 1,000 turns, coalescing around a style still happens around turn 100, but variations spin out in those extra turns. Interestingly, though, those variations still typically pull from one of the popular visual motifs.

    AI endpoints after 100 iterations. © Hintze et al., Patterns

    So what does that all mean? Mostly that AI isn’t particularly creative. In a human game of telephone, you’ll end up with extreme variance because each message is delivered and heard differently, and each person has their own internal biases and preferences that may impact what message they receive. AI has the opposite problem. No matter how outlandish the original prompt, it’ll always default to a narrow selection of styles.

    Of course, the AI model is pulling from human-created prompts, so there is something to be said about the data set and what humans are drawn to take pictures of. If there’s a lesson here, perhaps it is that copying styles is much easier than teaching taste.

    AJ Dellinger

    Source link

  • Disney’s AI Slop Era Is Here

    When Bob Iger eagerly told investors that slop was on the menu at the House of Mouse last month, the Disney CEO mentioned that the studio was in talks with a major generative AI company to power its reckless new era. It’s no longer talks: Disney’s disastrous turn into the AI bubble is here.

    This morning, the studio announced it had agreed to a major deal with OpenAI that will see over 200 Disney characters—including ones from Pixar and Marvel properties, as well as Star Wars—allowed to be used on its Sora video platform and in imagery generated by ChatGPT, making Disney the first major brand to license its content to the AI company.

    The three-year licensing deal, which remains subject to final negotiation and approval from both Disney’s and OpenAI’s executive boards, does not cover the likenesses of actors or any voice rights. As part of the agreement, Disney will also become a “major customer” of OpenAI, integrating ChatGPT into its workflow as well as using the company’s APIs to develop new products, tools, and experiences.

    “Technological innovation has continually shaped the evolution of entertainment, bringing with it new ways to create and share great stories with the world,” Iger said in a statement shared by OpenAI this morning. “The rapid advancement of artificial intelligence marks an important moment for our industry, and through this collaboration with OpenAI we will thoughtfully and responsibly extend the reach of our storytelling through generative AI, while respecting and protecting creators and their works.

    “Bringing together Disney’s iconic stories and characters with OpenAI’s groundbreaking technology puts imagination and creativity directly into the hands of Disney fans in ways we’ve never seen before, giving them richer and more personal ways to connect with the Disney characters and stories they love,” the Disney CEO concluded.

    The news comes after Disney spent the past few years joining several high-profile lawsuits alongside other Hollywood studios, aggressively pursuing generative AI platforms like Midjourney and MiniMax that allowed users to generate imagery of its characters in breach of Disney’s copyrights. Indeed, this morning Variety reported that Disney lawyers sent a cease-and-desist letter to Google this week, accusing the company of “infringing Disney’s copyrights on a massive scale” by allowing its properties to be generated and distributed through its AI platforms.

    But even while doing so publicly, the studio has been internally experimenting with implementing generative AI into its movies for a while—only to have those efforts dashed by concerns over both legal ramifications and potential public backlash.

    Over the summer, the Wall Street Journal reported on two separate instances, related to the production of the live-action Moana remake and Tron: Ares, where Disney floated the use of generative AI. In the former case, it reportedly would’ve been used to mask the use of Dwayne Johnson’s cousin, Tanoai Reed, as a stand-in for the performer on days he was unavailable. In the latter, Disney allegedly experimented with integrating a character powered by generative AI into Tron’s grid of programs—named “Bit,” and envisioned as a potential companion for Jeff Bridges’ Kevin Flynn.

    In neither case did the plans come to fruition, with Disney wrapped up in legal concerns over the ultimate copyright status of AI-generated material, as well as fears that news of its use would engender further public enmity toward the studio. That fear reached a fever pitch months after the report, when Disney rode a wave of boycott calls and widespread criticism over its decision to temporarily suspend late-night host Jimmy Kimmel over comments he made on-air in the wake of the assassination of right-wing commentator Charlie Kirk, a move seen as the latest in a long line of attempted capitulations by the movie studio to the Trump administration.

    With its deal with OpenAI in place, those copyright concerns are seemingly no longer an issue for the studio. It remains to be seen if public backlash will be.

    Want more io9 news? Check out when to expect the latest Marvel, Star Wars, and Star Trek releases, what’s next for the DC Universe on film and TV, and everything you need to know about the future of Doctor Who.

    James Whitbrook

    Source link

  • US patent office says generative AI is equivalent to other tools in inventors’ belts

    While generative AI systems cannot be considered inventors under US patent laws, the US Patent and Trademark Office has updated its guidelines on how they can be used in the process of creating innovations. The agency’s director, John Squires, said in a notice obtained by Reuters that the USPTO deems genAI to be “analogous” to other tools that inventors might use in their process, including lab equipment, software and research databases.

    “AI systems, including generative AI and other computational models, are instruments used by human inventors,” Squires wrote. “They may provide services and generate ideas, but they remain tools used by the human inventor who conceived the claimed invention.”

    The notice [PDF], which is set to be published in the Federal Register on November 28, notes that there’s no separate process for evaluating whether an AI-assisted invention qualifies for a patent. “When multiple natural persons are involved in creating an invention with AI assistance, the traditional joint inventorship principles apply,” Squires added.

    The Court of Appeals for the Federal Circuit has ruled that “AI cannot be named as an inventor on a patent application (or issued patent) and that only natural persons can be inventors.” There’s no change to that stance under the latest USPTO guidelines. But the updated rules do offer more clarity as to whether things like new medications that are developed with the help of genAI systems can be patented.

    Kris Holt

    Source link

  • A New Way to Ruin Thanksgiving: Making AI Slop Recipes

    Remember when people started asking AI tools for cooking advice, and it wound up telling them to do things like use glue to get cheese to stick on pizza? Well, people are apparently relying on that same technology to guide them through cooking this year’s Thanksgiving dinner. In fact, so many are doing so that Bloomberg reports it’s putting a real dent in the views of recipe writers who usually see traffic spike this time of year.

    The problem is effectively the same one that led to Google previously recommending that people eat one rock per day: AI Overviews in Search. They provide users with a quick panel that pulls out all of the “relevant information” without requiring them to click through to a website and scroll through the admittedly annoying 2,000-word personal essay that precedes every recipe ever posted online.

    This creates two issues. The first is for the recipe authors, who have put actual work—from their collected knowledge of food to the effort of prep work to the trial and error of getting the final product just right—into the recipes they share. They’re getting their traffic siphoned off by the AI Overviews. Creators that Bloomberg spoke with said their traffic was down between 40% and 80% this year from previous Thanksgivings. That’s in line with the experience of other sites, too, which have reported declines of as much as 80% in click-throughs since AI Overviews became more prominent.

    The second problem is for people making the recipes, because there is a very real chance that they are getting bad information. Here’s the thing about AI summaries of anything: the model doesn’t actually understand what it is reading. All it can do is spit back what it thinks is relevant. That’s kind of a big deal for cooking, where little errors can ruin a dish. For instance, Bloomberg talked to one cook who has a popular Christmas cake recipe. The creator’s page for the recipe suggests baking it at 160°C (that’s 320°F) for an hour and a half. An AI-summarized version of that recipe recommends you bake it for three to four hours—more than twice as long. You don’t have to know a whole lot about baking to know that’s not going to turn out great.
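    For reference, the temperature conversion and the timing gap are easy to sanity-check; a trivial sketch:

    ```python
    def c_to_f(celsius: float) -> float:
        """Standard Celsius-to-Fahrenheit conversion."""
        return celsius * 9 / 5 + 32

    print(c_to_f(160))       # 320.0 -- the temperature the recipe calls for
    # Ratios of the AI's suggested 3-4 hour bake to the recipe's 1.5 hours:
    print(3 / 1.5, 4 / 1.5)  # 2.0x to ~2.7x as long
    ```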

    AI-generated recipes have become a whole micro-industry. If you hop on any social platform and go looking for ideas of what to cook, there’s a good chance you’ll land on a page that looks like your standard cooking inspiration fare—but you might notice that the recipes just aren’t quite right. Best-case scenario, you’ll probably end up with a relatively bland but perfectly fine dish. Worst case, you might end up burning down your house because somewhere in the black hole that is a large language model, it decided that you should put your tinfoil-wrapped fish in the microwave on high.

    Maybe grab one of those old cookbooks off the shelf this holiday season just to be safe.

    AJ Dellinger

    Source link

  • Snap and Perplexity sign $400 million deal to put AI search directly in Snapchat

    Snap and Perplexity AI have struck a $400 million deal that will bring the AI search engine directly to Snapchat sometime in “early 2026,” the two companies announced. With the partnership, Perplexity’s AI search engine will be a prominent part of Snapchat’s “chat” interface so users can “ask questions and get clear, conversational answers drawn from verifiable sources, all within Snapchat.”

    The news was announced alongside Snap’s third-quarter earnings. The company said that revenue from the deal — Perplexity is paying Snap $400 million for the integration — is “expected to begin contributing” to its bottom line in 2026. In a letter to shareholders, CEO Evan Spiegel also hinted that Snap could pursue similar partnerships with other AI companies. “This collaboration makes AI-powered discovery native to Snapchat, enhances personalization, and positions Snap as a leading distribution channel for intelligent agents, laying the groundwork for a broader ecosystem of AI partners to reach our global community,” he wrote.

    Snap, like its peers, has been leaning into generative AI in recent years. The company has its own LLM-powered chatbot, called My AI, which uses models from OpenAI, Google, and, soon, Perplexity AI. Snap has also introduced AI-powered lenses and creation tools, which have helped boost its Snapchat+ subscription service.

    Spiegel also teased other AI-powered updates coming to Snapchat. He said the company is working on a new AI video generation feature called “AI Clips” that “will allow creators to generate short, shareable videos from simple prompts.” He didn’t say when the feature might launch.

    Outside of Snapchat, Snap is also planning on launching a new version of its AR glasses, called Specs, sometime next year. Spiegel didn’t offer any new details about the device, which he has previously promised will be lighter-weight than the current version. He did, however, suggest the company was considering working with potential hardware partners. He said Snap would be “putting Specs into their own standalone, 100% owned subsidiary” to give the company more flexibility to pursue such arrangements.

    Update, November 5, 2025, 3:08PM PT: Added more details from Snap’s earnings call.

    Source link

  • ChatGPT: Everything you need to know about the AI chatbot

    ChatGPT, OpenAI’s text-generating AI chatbot, has taken the world by storm since its launch in November 2022. What started as a tool to supercharge productivity through writing essays and code with short text prompts has evolved into a behemoth with 300 million weekly active users.

    2024 was a big year for OpenAI, from its partnership with Apple for its generative AI offering, Apple Intelligence, to the release of GPT-4o with voice capabilities, and the highly anticipated launch of its text-to-video model Sora.

    OpenAI also faced its share of internal drama, including the notable exits of high-level execs like co-founder and longtime chief scientist Ilya Sutskever and CTO Mira Murati. OpenAI has also been hit with lawsuits from Alden Global Capital-owned newspapers alleging copyright infringement, as well as an injunction from Elon Musk to halt OpenAI’s transition to a for-profit.

    In 2025, OpenAI is battling the perception that it’s ceding ground in the AI race to Chinese rivals like DeepSeek. The company has been trying to shore up its relationship with Washington as it simultaneously pursues an ambitious data center project, and as it reportedly lays the groundwork for one of the largest funding rounds in history.

    Below, you’ll find a timeline of ChatGPT product updates and releases, starting with the latest, which we’ve been updating throughout the year. If you have any other questions, check out our ChatGPT FAQ here.

    To see a list of 2024 updates, go here.

    Timeline of the most recent ChatGPT updates

    October 2025

    OpenAI says over a million people a week discuss suicidal thoughts with ChatGPT

    OpenAI revealed that a small but significant portion of ChatGPT users, more than a million weekly, discuss mental health struggles, including suicidal thoughts, psychosis, or mania, with the AI. The company says it has improved ChatGPT’s responses by consulting more than 170 mental health experts to handle such conversations more appropriately than earlier versions.

    OpenAI reportedly working on AI that creates music from text and audio

    OpenAI is developing a new tool that generates music from text and audio prompts, potentially for enhancing videos or adding instrumentation, and is training it using annotated scores from Juilliard students, according to The Information. The launch date and whether it will be standalone or integrated with ChatGPT and Sora remain unclear.

    ChatGPT gets smarter at organizing your work and school info

    OpenAI’s new “company knowledge” update for ChatGPT lets Business, Enterprise, and Education users search workplace data across tools like Slack, Google Drive, and GitHub using GPT‑5, per a report by The Verge. The feature acts as a conversational search engine, providing more comprehensive and accurate answers by scouring multiple sources simultaneously.

    OpenAI launches Atlas to make ChatGPT your main search tool

    OpenAI has launched its AI browser, ChatGPT Atlas, starting on Mac, letting users get answers from ChatGPT instead of traditional search results. Unlike other AI browsers, Atlas is open to all users and will soon come to Windows, iOS, and Android, as OpenAI aims to make ChatGPT the go-to tool for browsing the web.

    ChatGPT app growth slows, but still draws millions of daily users

    A new Apptopia analysis suggests ChatGPT’s mobile app growth may be leveling off, with global download growth slowing since April. While daily installs remain in the millions, October is tracking an 8.1% month-over-month decline in new downloads.

    Walmart shopping comes to ChatGPT

    OpenAI is partnering with Walmart to allow users to browse products, plan meals, and make purchases through ChatGPT, with support for third-party sellers expected later this fall. The partnership is part of OpenAI’s broader effort to develop AI-driven e-commerce tools, including collaborations with Etsy and Shopify.

    OpenAI brings ChatGPT Go plan to 16 more Asian countries

    OpenAI is expanding its affordable ChatGPT Go plan, priced under $5, to 16 new countries across Asia, including Afghanistan, Bangladesh, Bhutan, Brunei Darussalam, Cambodia, Laos, Malaysia, Maldives, Thailand, Vietnam, and Pakistan. In some of these countries, users can pay in local currencies, while in others, payments are required in USD, with final costs varying due to local taxes.

    ChatGPT surpasses 800 million weekly active users

    ChatGPT now has 800 million weekly active users, reflecting rapid growth across consumers, developers, enterprises, and governments, Sam Altman said. This milestone comes as OpenAI accelerates efforts to expand its AI infrastructure and secure more chips to support rising demand.

    Developers can now build apps inside ChatGPT

    OpenAI now allows developers to build interactive apps directly inside ChatGPT, with early partners like Booking.com, Expedia, Spotify, Figma, Coursera, Zillow, and Canva already onboard. The ChatGPT maker is also rolling out a preview of its Apps SDK, a developer toolkit for creating these chat-based experiences.

    September 2025

    ChatGPT rolls out parental controls following teen suicide case

    OpenAI is reportedly adding parental controls to ChatGPT on web and mobile, letting parents and teens link accounts to enable safeguards like limiting sensitive content, setting quiet hours, and disabling features such as voice mode or image generation. The move comes amid growing regulatory scrutiny and a lawsuit over the chatbot’s alleged role in a teen’s suicide.

    OpenAI introduces ChatGPT Pulse for personalized morning briefs

    OpenAI unveiled Pulse, a new ChatGPT feature that delivers personalized morning briefings overnight, encouraging users to start their day with the app. The tool reflects a shift toward making ChatGPT more proactive and asynchronous, positioning it as a true assistant rather than just a chatbot. OpenAI’s new Applications CEO, Fidji Simo, called Pulse the first step toward bringing high-level personal support to everyone, starting with Pro users.

    OpenAI moves into AI-Powered shopping, challenging tech giants

    OpenAI launched Instant Checkout in ChatGPT, letting U.S. users purchase products directly from Etsy and, soon, over a million Shopify merchants without leaving the conversation. Shoppers can browse items, read reviews, and complete purchases with a single tap using Apple Pay, Google Pay, Stripe, or a credit card. The update marks a step toward reshaping online shopping by merging product discovery, recommendations, and payments in one place.

    OpenAI brings budget-friendly ChatGPT Go to Indonesian users

    OpenAI rolled out its budget-friendly ChatGPT Go plan in Indonesia for Rp 75,000 ($4.50) per month, following its initial launch in India. The mid-tier plan, which offers higher usage limits, image generation, file uploads, and better memory compared to the free version, enters the market in direct competition with Google’s new AI Plus plan in Indonesia.

    OpenAI tightens ChatGPT rules for teens amid safety concerns

    CEO Sam Altman announced new policies for under-18 users of ChatGPT, tightening safeguards around sensitive conversations. The company says it will block flirtatious exchanges with minors and add stronger protections around discussions of suicide, even escalating severe cases to parents or authorities. The move comes as OpenAI faces a wrongful death lawsuit tied to alleged chatbot interactions, underscoring rising concerns about the mental health risks of AI companions.

    OpenAI rolls out GPT-5-Codex to power smarter AI coding

    OpenAI rolled out GPT-5-Codex, a new version of its AI coding agent that can spend anywhere from a few seconds to seven hours tackling a task, depending on complexity. The company says this dynamic approach helps the model outperform GPT-5 on key coding benchmarks, including bug fixes and large-scale refactoring. The update comes as OpenAI looks to keep Codex competitive in a fast-growing market that now includes rivals like Claude Code, Cursor, and GitHub Copilot.

    OpenAI reshuffles team behind ChatGPT’s personality

    OpenAI is shaking up its Model Behavior team, the small but influential group that helps shape how its AI interacts with people. The roughly 14-person team is being folded into the larger Post Training group, now reporting to lead researcher Max Schwarzer. Meanwhile, founding leader Joanne Jang is spinning up a new unit called OAI Labs, focused on prototyping fresh ways for people to collaborate with AI.

    August 2025

    OpenAI to strengthen ChatGPT safeguards after teen suicide lawsuit

    OpenAI, facing a lawsuit from the parents of a 16-year-old who died by suicide, said in its blog that it has implemented new safeguards for ChatGPT, including stronger detection of mental health risks and parental control features. The AI company said the updates aim to provide tighter protections around suicide-related conversations and give parents more oversight of their children’s use.

    xAI claims Apple’s App Store practices give OpenAI an unfair advantage

    Elon Musk’s AI startup, xAI, filed a federal lawsuit in Texas against Apple and OpenAI, alleging that the two companies colluded to lock up key markets and shut out rivals.

    OpenAI targets India with cheaper monthly ChatGPT subscription

    OpenAI introduced its most affordable subscription plan, ChatGPT Go, in India, priced at 399 rupees per month (approximately $4.57). This move aims to expand OpenAI’s presence in its second-largest market, offering enhanced access to the latest GPT-5 model and additional features.

    ChatGPT mobile app hits $2B in revenue, $2.91 earned per install

    Since its May 2023 launch, ChatGPT’s mobile app has amassed $2 billion in global consumer spending, dwarfing competitors like Claude, Copilot, and Grok by roughly 30 times, according to Appfigures. This year alone, the app has generated $1.35 billion, a 673% increase from the same period in 2024, averaging nearly $193 million per month, or 53 times more than its nearest rival, Grok.

    OpenAI keeps multiple GPT models despite GPT-5 launch

    Despite unveiling GPT-5 as a “one-size-fits-all” AI, OpenAI is still offering several legacy AI options, including GPT-4o, GPT-4.1, and o3. Users can choose between new “Auto,” “Fast,” and “Thinking” modes for GPT-5, and paid subscribers regain access to legacy models like GPT-4o and GPT-4.1.

    Sam Altman addresses GPT-5 glitches and “chart crime” during Reddit AMA

    OpenAI CEO Sam Altman told Reddit users that GPT-5’s “dumber” behavior at launch was due to a router issue and promised fixes, double rate limits for Plus users, and transparency on which model is answering, while also shrugging off the infamous “chart crime” from the live presentation.

    OpenAI unveils GPT-5, a smarter, task-ready ChatGPT

    OpenAI released GPT-5, a next-gen AI that’s not just smarter but more useful — able to handle tasks like coding apps, managing calendars, and creating research briefs — while automatically figuring out the fastest or most thoughtful way to answer your questions.

    OpenAI offers ChatGPT Enterprise to federal agencies for just $1

    OpenAI is making a major push into federal government workflows, offering ChatGPT Enterprise to agencies for just $1 for the next year. The move comes after the U.S. General Services Administration (GSA) added OpenAI, Google, and Anthropic to its approved AI vendor list, allowing agencies to access these tools through preset contracts without negotiating pricing.

    OpenAI returns to open source with new AI models

    OpenAI unveiled its first open source language models since GPT-2, introducing two new open-weight AI releases: gpt-oss-120b, a high-performance model capable of running on a single Nvidia GPU, and gpt-oss-20b, a lighter model optimized for laptop use. The move comes amid growing competition in the global AI market and a push for more open technology in the U.S. and abroad.

    ChatGPT nears 700M weekly users, quadruples growth in a year

    ChatGPT’s rapid growth is accelerating. OpenAI said the chatbot was on track to hit 700 million weekly active users in the first week of August, up from 500 million at the end of March. Nick Turley, OpenAI’s VP and head of the ChatGPT app, highlighted the app’s growth on X, noting it has quadrupled in size over the past year.

    July 2025

    ChatGPT now has study mode

    OpenAI unveiled Study Mode, a new ChatGPT feature designed to promote critical thinking by prompting students to engage with material rather than simply receive answers. The tool is now rolling out to Free, Plus, Pro, and Team users, with availability for Edu subscribers expected in the coming weeks.

    Altman warns that ChatGPT therapy isn’t confidential

    ChatGPT users should be cautious when seeking emotional support from AI, as the AI industry lacks safeguards for sensitive conversations, OpenAI CEO Sam Altman said on a recent episode of This Past Weekend w/ Theo Von. Unlike human therapists, AI tools aren’t bound by doctor-patient confidentiality, he noted.

    ChatGPT hits 2.5B prompts daily

    ChatGPT now receives 2.5 billion prompts daily from users worldwide, including roughly 330 million from the U.S. That’s more than double the volume reported by CEO Sam Altman just eight months ago, highlighting the chatbot’s explosive growth.

    OpenAI launches a general-purpose agent in ChatGPT

    OpenAI has introduced ChatGPT Agent, which completes a wide variety of computer-based tasks on behalf of users and combines several capabilities like Operator and Deep Research, according to the company. OpenAI says the agent can automatically navigate a user’s calendar, draft editable presentations and slideshows, run code, shop online, and handle complex workflows from end to end, all within a secure virtual environment.

    Study warns of major risks with AI therapy chatbots

    Researchers at Stanford University have observed that therapy chatbots powered by large language models can sometimes stigmatize people with mental health conditions or respond in ways that are inappropriate or could be harmful. While chatbots are “being used as companions, confidants, and therapists,” the study found “significant risks.”

    OpenAI delays releasing its open model again

    CEO Sam Altman said that the company is delaying the release of its open model, which had already been postponed by a month earlier this summer. The ChatGPT maker, which initially planned to release the model around mid-July, has indefinitely postponed its launch to conduct additional safety testing.

    OpenAI is reportedly releasing an AI browser in the coming weeks

    OpenAI plans to release an AI-powered web browser to challenge Alphabet’s Google Chrome. It will keep some user interactions within ChatGPT, rather than directing people to external websites.

    ChatGPT is testing a mysterious new feature called “study together”

    Some ChatGPT users have noticed a new feature called “Study Together” appearing in their list of available tools. This is the chatbot’s approach to becoming a more effective educational tool, rather than simply providing answers to prompts. Some people also wonder whether there will be a feature that allows multiple users to join the chat, similar to a study group.

    Referrals from ChatGPT to news sites are rising but not enough to offset search declines

    Referrals from ChatGPT to news publishers are increasing. But this rise is insufficient to offset the decline in clicks as more users now obtain their news directly from AI or AI-powered search results, according to a report by digital market intelligence company Similarweb. Since Google launched its AI Overviews in May 2024, the percentage of news searches that don’t lead to clicks on news websites has increased from 56% to nearly 69% by May 2025.

    June 2025

    OpenAI uses Google’s AI chips to power its products

    OpenAI has started using Google’s AI chips to power ChatGPT and other products, as reported by Reuters. The ChatGPT maker is one of the biggest buyers of Nvidia’s GPUs, using the AI chips to train models, and this is the first time that OpenAI is using non-Nvidia chips in an important way.

    A new MIT study suggests that ChatGPT might be harming critical thinking skills

    Researchers from MIT’s Media Lab monitored writers’ brain activity across 32 brain regions and found that ChatGPT users showed minimal brain engagement and consistently fell short in neural, linguistic, and behavioral measures. To conduct the test, the lab split 54 participants from the Boston area, ages 18 to 39, into three groups and asked them to write multiple SAT essays using OpenAI’s ChatGPT, the Google search engine, or no tools at all.

    ChatGPT was downloaded 30 million times last month

    The ChatGPT app for iOS was downloaded 29.6 million times in the last 28 days, while TikTok, Facebook, Instagram, and X were downloaded a total of 32.9 million times over the same period, a difference of about 10.6%, according to a ZDNET report citing Similarweb.

    The energy needed for an average ChatGPT query can power a lightbulb for a couple of minutes

    Sam Altman said that the average ChatGPT query uses about one-fifteenth of a teaspoon of water (around 0.000083 gallons) and about 0.34 watt-hours of electricity, roughly enough energy to power a lightbulb for a few minutes, per Business Insider.
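    The lightbulb comparison is simple arithmetic. A minimal check, assuming a common 10 W LED bulb (the wattage is our assumption; the source doesn’t specify a bulb type):

    ```python
    query_wh = 0.34   # watt-hours per average ChatGPT query, per Altman
    bulb_watts = 10   # assumed LED bulb draw; not specified in the source

    print(query_wh / bulb_watts * 60)   # ~2.0 minutes of light per query

    gallons_per_teaspoon = 1 / 768      # US liquid teaspoon
    print(gallons_per_teaspoon / 15)    # ~8.7e-05 gal, near the cited 0.000083
    ```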

    OpenAI has launched o3-pro, an upgraded version of its o3 AI reasoning model

    OpenAI has unveiled o3-pro, an enhanced version of its o3 reasoning model, which the ChatGPT maker launched earlier this year. The model is available to ChatGPT Pro and Team users and in the API, while Enterprise and Edu users will get access in the third week of June.

    ChatGPT’s conversational voice mode has been upgraded

    OpenAI upgraded ChatGPT’s conversational voice mode for all paid users across different markets and platforms. The startup has launched an update to Advanced Voice that enables users to converse with ChatGPT out loud in a more natural- and fluid-sounding way. The feature also helps users translate languages more easily, the company said.

    ChatGPT has added new features like meeting recording and connectors for Google Drive, Box, and more

    OpenAI’s ChatGPT now offers new functions for business users, including integrations with various cloud services, meeting recordings, and MCP connection support for connecting to tools for in-depth research. The feature enables ChatGPT to retrieve information across users’ own services to answer their questions. For instance, an analyst could use the company’s slide deck and documents to develop an investment thesis.

    May 2025

    OpenAI CFO says hardware will drive ChatGPT’s growth

    OpenAI plans to purchase Jony Ive’s devices startup io for $6.4 billion. Sarah Friar, CFO of OpenAI, thinks that the hardware will significantly enhance ChatGPT and broaden OpenAI’s reach to a larger audience in the future.

    OpenAI’s ChatGPT unveils its AI coding agent, Codex

    OpenAI has introduced its AI coding agent, Codex, powered by codex-1, a version of its o3 AI reasoning model designed for software engineering tasks. OpenAI says codex-1 generates more precise and “cleaner” code than o3. The coding agent may take anywhere from one to 30 minutes to complete tasks such as writing simple features, fixing bugs, answering questions about your codebase, and running tests.

    Sam Altman aims to make ChatGPT more personalized by tracking every aspect of a person’s life

    Sam Altman, the CEO of OpenAI, said during a recent AI event hosted by VC firm Sequoia that he wants ChatGPT to record and remember every detail of a person’s life, responding to an attendee’s question about how ChatGPT can become more personalized.

    OpenAI releases its GPT-4.1 and GPT-4.1 mini AI models in ChatGPT

    OpenAI said in a post on X that it has launched its GPT-4.1 and GPT-4.1 mini AI models in ChatGPT.

    ChatGPT’s deep research feature can now analyze GitHub code repositories

    OpenAI has launched a new feature for ChatGPT deep research to analyze code repositories on GitHub. The ChatGPT deep research feature is in beta and lets developers connect with GitHub to ask questions about codebases and engineering documents. The connector will soon be available for ChatGPT Plus, Pro, and Team users, with support for Enterprise and Education coming shortly, per an OpenAI spokesperson.

    OpenAI launches a new data residency program in Asia

    After introducing a data residency program in Europe in February, OpenAI has now launched a similar program in Asian countries including India, Japan, Singapore, and South Korea. The new program will be accessible to users of ChatGPT Enterprise, ChatGPT Edu, and API. It will help organizations in Asia meet their local data sovereignty requirements when using OpenAI’s products.

    OpenAI to introduce a program to grow AI infrastructure

    OpenAI is unveiling a program called OpenAI for Countries, which aims to develop the necessary local infrastructure to serve international AI clients better. The AI startup will work with governments to assist with increasing data center capacity and customizing OpenAI’s products to meet specific language and local needs. OpenAI for Countries is part of efforts to support the company’s expansion of its AI data center Project Stargate to new locations outside the U.S., per Bloomberg.

    OpenAI promises to make changes to prevent future ChatGPT sycophancy

    OpenAI has announced its plan to make changes to its procedures for updating the AI models that power ChatGPT, following an update that caused the platform to become overly sycophantic for many users.

    April 2025

    OpenAI clarifies the reason ChatGPT became overly flattering and agreeable

    OpenAI has released a post on the recent sycophancy issues with GPT-4o, the default AI model powering ChatGPT, which led the company to revert an update to the model released last week. CEO Sam Altman acknowledged the issue on Sunday and confirmed two days later that the GPT-4o update was being rolled back. OpenAI is working on “additional fixes” to the model’s personality. Over the weekend, users on social media criticized the new model for making ChatGPT too validating and agreeable. It quickly became a popular meme.

    OpenAI is working to fix a “bug” that let minors engage in inappropriate conversations

    An issue within OpenAI’s ChatGPT enabled the chatbot to create graphic erotic content for accounts registered by users under the age of 18, as demonstrated by TechCrunch’s testing, a fact later confirmed by OpenAI. “Protecting younger users is a top priority, and our Model Spec, which guides model behavior, clearly restricts sensitive content like erotica to narrow contexts such as scientific, historical, or news reporting,” a spokesperson told TechCrunch via email. “In this case, a bug allowed responses outside those guidelines, and we are actively deploying a fix to limit these generations.”

    ChatGPT helps users by giving recommendations, showing images, and reviewing products for online shopping

    OpenAI has added a few features to ChatGPT search, its web search tool, to give users an improved online shopping experience. The company says people can ask super-specific questions using natural language and receive customized results. The chatbot provides recommendations, images, and reviews of products in various categories such as fashion, beauty, home goods, and electronics.

    OpenAI wants its open model to access cloud models for assistance

    OpenAI leaders have been talking about allowing the open model to link up with OpenAI’s cloud-hosted models to improve its ability to respond to intricate questions, two sources familiar with the situation told TechCrunch.

    OpenAI aims to make its new “open” AI model the best on the market

    OpenAI is preparing to launch an AI system that will be openly accessible, allowing users to download it for free without any API restrictions. Aidan Clark, OpenAI’s VP of research, is spearheading the development of the open model, which is in the very early stages, sources familiar with the situation told TechCrunch.

    OpenAI’s GPT-4.1 may be less aligned than earlier models

    OpenAI released a new AI model called GPT-4.1 in mid-April. However, multiple independent tests indicate that the model is less reliable than previous OpenAI releases. The company skipped its usual step of publishing a safety report (known as a system card) for GPT-4.1, claiming in a statement to TechCrunch that “GPT-4.1 is not a frontier model, so there won’t be a separate system card released for it.”

    OpenAI’s o3 AI model scored lower than expected on a benchmark

    Questions have been raised regarding OpenAI’s transparency and model-testing procedures after a discrepancy between first- and third-party benchmark results for the o3 AI model came to light. OpenAI introduced o3 in December, stating that the model could solve approximately 25% of questions on FrontierMath, a difficult math problem set. Epoch AI, the research institute behind FrontierMath, found that o3 achieved a score of approximately 10%, significantly lower than OpenAI’s top reported score.

    OpenAI unveils Flex processing for cheaper, slower AI tasks

    OpenAI has launched a new API feature called Flex processing that lets developers use AI models at a lower cost in exchange for slower response times and occasional resource unavailability. Flex processing is available in beta on the o3 and o4-mini reasoning models for non-production tasks like model evaluations, data enrichment, and asynchronous workloads.
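    In practice, developers opt into the tier per request. A minimal sketch using the OpenAI Python SDK, assuming the beta service_tier="flex" request parameter and o4-mini access on your account; exact flags may differ:

    ```python
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Flex trades latency for cost, so it suits batch-style work such as
    # evals and data enrichment rather than user-facing requests.
    response = client.chat.completions.create(
        model="o4-mini",
        service_tier="flex",  # beta flag selecting the cheaper, slower tier
        messages=[{"role": "user", "content": "Label this record: ..."}],
        timeout=900.0,        # flex requests may queue; allow a long timeout
    )
    print(response.choices[0].message.content)
    ```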

    OpenAI’s latest AI models now have a safeguard against biorisks

    OpenAI has rolled out a new system to monitor its AI reasoning models, o3 and o4-mini, for biological and chemical threats. The system is designed to prevent the models from giving advice that could potentially lead to harmful attacks, as stated in OpenAI’s safety report.

    OpenAI launches its latest reasoning models, o3 and o4-mini

    OpenAI has released two new reasoning models, o3 and o4-mini, just two days after launching GPT-4.1. The company claims o3 is the most advanced reasoning model it has developed, while o4-mini is said to provide a balance of price, speed, and performance. The new models stand out from previous reasoning models because they can use ChatGPT features like web browsing, coding, and image processing and generation. But they hallucinate more than several of OpenAI’s previous models.

    OpenAI has added a new section to ChatGPT to offer easier access to AI-generated images for all user tiers

    OpenAI introduced a new section called “library” to make it easier for users to create images on mobile and web platforms, per the company’s X post.

    OpenAI could “adjust” its safeguards if rivals release “high-risk” AI

    OpenAI said on Tuesday that it might revise its safety standards if “another frontier AI developer releases a high-risk system without comparable safeguards.” The move shows how commercial AI developers face more pressure to rapidly implement models due to the increased competition.

    OpenAI is building its own social media network

    OpenAI is currently in the early stages of developing its own social media platform to compete with Elon Musk’s X and Mark Zuckerberg’s Instagram and Threads, according to The Verge. It is unclear whether OpenAI intends to launch the social network as a standalone application or incorporate it into ChatGPT.

    OpenAI will remove its largest AI model, GPT-4.5, from the API in July

    OpenAI will retire its largest AI model, GPT-4.5, from its API even though it launched only in late February. GPT-4.5 will remain available in a research preview for paying customers. Developers can use GPT-4.5 through OpenAI’s API until July 14; after that, they will need to switch to GPT-4.1, which was released on April 14.

    OpenAI unveils GPT-4.1 AI models that focus on coding capabilities

    OpenAI has launched three members of the GPT-4.1 model family — GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano — with a specific focus on coding capabilities. They’re accessible via the OpenAI API but not ChatGPT. In the competition to develop advanced programming models, GPT-4.1 will rival AI models such as Google’s Gemini 2.5 Pro, Anthropic’s Claude 3.7 Sonnet, and DeepSeek’s upgraded V3.

    OpenAI will discontinue ChatGPT’s GPT-4 at the end of April

    OpenAI plans to sunset GPT-4, an AI model introduced more than two years ago, and replace it with GPT-4o, the current default model, per its changelog. The change will take effect on April 30. GPT-4 will remain available via OpenAI’s API.

    OpenAI could release GPT-4.1 soon

    OpenAI may launch several new AI models, including GPT-4.1, soon, The Verge reported, citing anonymous sources. GPT-4.1 would be an update of OpenAI’s GPT-4o, which was released last year. On the list of upcoming models are GPT-4.1 and smaller versions like GPT-4.1 mini and nano, per the report.

    OpenAI has updated ChatGPT to use information from your previous conversations

    OpenAI started updating ChatGPT to enable the chatbot to remember previous conversations with a user and customize its responses based on that context. This feature is rolling out to ChatGPT Pro and Plus users first, excluding those in the U.K., EU, Iceland, Liechtenstein, Norway, and Switzerland.

    OpenAI is working on watermarks for images made with ChatGPT

    It looks like OpenAI is working on a watermarking feature for images generated using GPT-4o. AI researcher Tibor Blaho spotted a new “ImageGen” watermark feature in the new beta of ChatGPT’s Android app. Blaho also found mentions of other tools: “Structured Thoughts,” “Reasoning Recap,” “CoT Search Tool,” and “l1239dk1.”

    OpenAI offers ChatGPT Plus for free to U.S., Canadian college students

    OpenAI is offering its $20-per-month ChatGPT Plus subscription tier for free to all college students in the U.S. and Canada through the end of May. The offer will let millions of students use OpenAI’s premium service, which offers access to the company’s GPT-4o model, image generation, voice interaction, and research tools that are not available in the free version.

    ChatGPT users have generated over 700M images so far

    More than 130 million users have created over 700 million images since ChatGPT got its upgraded image generator on March 25, according to OpenAI COO Brad Lightcap. The image generator was made available to all ChatGPT users on March 31 and went viral for its ability to create Ghibli-style images.

    OpenAI’s o3 model could cost more to run than initial estimate

    The Arc Prize Foundation, which develops the AI benchmark ARC-AGI, has updated its estimate of the computing costs for OpenAI’s o3 “reasoning” model on the benchmark. The organization originally estimated that the best-performing configuration of o3 it tested, o3 high, would cost approximately $3,000 to solve a single problem. The Foundation now thinks the cost could be much higher, possibly around $30,000 per task.

    OpenAI CEO says capacity issues will cause product delays

    In a series of posts on X, OpenAI CEO Sam Altman said the company’s new image-generation tool’s popularity may cause product releases to be delayed. “We are getting things under control, but you should expect new releases from OpenAI to be delayed, stuff to break, and for service to sometimes be slow as we deal with capacity challenges,” he wrote.

    March 2025

    OpenAI plans to release a new ‘open’ AI language model

    OpenAI intends to release its “first” open language model since GPT-2 “in the coming months.” The company plans to host developer events to gather feedback and eventually showcase prototypes of the model. The first developer event is to be held in San Francisco, with sessions to follow in Europe and Asia.

    OpenAI removes ChatGPT’s restrictions on image generation

    OpenAI made a notable change to its content moderation policies after the success of its new image generator in ChatGPT, which went viral for being able to create Studio Ghibli-style images. The company has updated its policies to allow ChatGPT to generate images of public figures, hateful symbols, and racial features when requested. OpenAI had previously declined such prompts due to the potential controversy or harm they may cause. However, the company has now “evolved” its approach, as stated in a blog post published by Joanne Jang, the lead for OpenAI’s model behavior.

    OpenAI adopts Anthropic’s standard for linking AI models with data

    OpenAI wants to incorporate Anthropic’s Model Context Protocol (MCP) into all of its products, including the ChatGPT desktop app. MCP, an open-source standard, helps AI models generate more accurate and suitable responses to specific queries, and lets developers create bidirectional links between data sources and AI applications like chatbots. The protocol is currently available in the Agents SDK, and support for the ChatGPT desktop app and Responses API will be coming soon, OpenAI CEO Sam Altman said.
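    For a sense of what that looks like in practice, here is a minimal sketch of a toy MCP server built with the protocol’s open-source Python SDK; the lookup_order tool and its data are hypothetical:

    ```python
    from mcp.server.fastmcp import FastMCP

    # A toy MCP server exposing one tool. Any MCP-capable client, such as
    # a chatbot app, can discover this tool and call it over the protocol.
    mcp = FastMCP("demo-server")

    @mcp.tool()
    def lookup_order(order_id: str) -> str:
        """Return the status of an order (hypothetical data source)."""
        return f"Order {order_id}: shipped"

    if __name__ == "__main__":
        mcp.run()  # serves over stdio by default
    ```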

    ChatGPT’s upgraded image generator sets off a flood of Studio Ghibli-style memes

    The latest update of the image generator on OpenAI’s ChatGPT has triggered a flood of AI-generated memes in the style of Studio Ghibli, the Japanese animation studio behind blockbuster films like “My Neighbor Totoro” and “Spirited Away.” The burgeoning mass of Ghibli-esque images has sparked concerns about whether OpenAI has violated copyright laws, especially since the company is already facing legal action for using source material without authorization.

    OpenAI expects revenue to triple to $12.7 billion this year

    OpenAI expects its revenue to triple to $12.7 billion in 2025, fueled by the performance of its paid AI software, Bloomberg reported, citing an anonymous source. While the startup doesn’t expect to reach positive cash flow until 2029, it expects revenue to increase significantly in 2026 to surpass $29.4 billion, the report said.

    ChatGPT has upgraded its image-generation feature

    OpenAI on Tuesday rolled out a major upgrade to ChatGPT’s image-generation capabilities: ChatGPT can now use the GPT-4o model to generate and edit images and photos directly. The feature went live earlier this week in ChatGPT and Sora, OpenAI’s AI video-generation tool, for subscribers of the company’s Pro plan, priced at $200 a month, and will be available soon to ChatGPT Plus subscribers and developers using the company’s API service. The company’s CEO Sam Altman said on Wednesday, however, that the release of the image generation feature to free users would be delayed due to higher demand than the company expected.

    OpenAI announces leadership updates

    Brad Lightcap, OpenAI’s chief operating officer, will lead the company’s global expansion and manage corporate partnerships as CEO Sam Altman shifts his focus to research and products, according to a blog post from OpenAI. Lightcap, who previously worked with Altman at Y Combinator, joined the Microsoft-backed startup in 2018. OpenAI also said Mark Chen would step into the expanded role of chief research officer, and Julia Villagra will take on the role of chief people officer.

    OpenAI’s AI voice assistant now has advanced features

    OpenAI has updated its AI voice assistant with improved chatting capabilities, according to a video posted on Monday (March 24) to the company’s official media channels. The update enables real-time conversations, and the AI assistant is said to be more personable and interrupts users less often. Users on ChatGPT’s free tier can now access the new version of Advanced Voice Mode, while paying users will receive answers that are “more direct, engaging, concise, specific, and creative,” a spokesperson from OpenAI told TechCrunch.

    OpenAI, Meta in talks with Reliance in India

    OpenAI and Meta have separately engaged in discussions with Indian conglomerate Reliance Industries regarding potential collaborations to enhance their AI services in the country, per a report by The Information. One key topic being discussed is Reliance Jio distributing OpenAI’s ChatGPT. Reliance has proposed selling OpenAI’s models to businesses in India through an application programming interface (API) so they can incorporate AI into their operations. Meta also plans to bolster its presence in India by constructing a large 3GW data center in Jamnagar, Gujarat. OpenAI, Meta, and Reliance have not yet officially announced these plans.

    OpenAI faces privacy complaint in Europe for chatbot’s defamatory hallucinations

    Noyb, a privacy rights advocacy group, is supporting an individual in Norway who was shocked to discover that ChatGPT was providing false information about him, stating that he had been found guilty of killing two of his children and trying to harm the third. “The GDPR is clear. Personal data has to be accurate,” said Joakim Söderberg, data protection lawyer at Noyb, in a statement. “If it’s not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”

    OpenAI upgrades its transcription and voice-generating AI models

    OpenAI has added new transcription and voice-generating AI models to its APIs: a text-to-speech model, “gpt-4o-mini-tts,” that delivers more nuanced and realistic sounding speech, as well as two speech-to-text models called “gpt-4o-transcribe” and “gpt-4o-mini-transcribe”. The company claims they are improved versions of what was already there and that they hallucinate less.
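    Both directions run through OpenAI’s existing audio endpoints. A minimal sketch using the OpenAI Python SDK, assuming the new model names are enabled for your API account and that meeting.wav is a local recording:

    ```python
    from openai import OpenAI

    client = OpenAI()

    # Speech-to-text with the new transcription model.
    with open("meeting.wav", "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="gpt-4o-transcribe", file=audio_file
        )
    print(transcript.text)

    # Text-to-speech with the new TTS model, streamed straight to a file.
    with client.audio.speech.with_streaming_response.create(
        model="gpt-4o-mini-tts",
        voice="alloy",
        input="Your meeting summary is ready.",
    ) as speech:
        speech.stream_to_file("summary.mp3")
    ```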

    OpenAI has launched o1-pro, a more powerful version of its o1

    OpenAI has introduced o1-pro in its developer API. OpenAI says its o1-pro uses more computing than its o1 “reasoning” AI model to deliver “consistently better responses.” It’s only accessible to select developers who have spent at least $5 on OpenAI API services. OpenAI charges $150 for every million tokens (about 750,000 words) input into the model and $600 for every million tokens the model produces. It costs twice as much as OpenAI’s GPT-4.5 for input and 10 times the price of regular o1.
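    At those rates, cost scales linearly with token counts. A quick sketch (the example token counts are hypothetical):

    ```python
    INPUT_PER_M = 150.0    # dollars per million input tokens (o1-pro)
    OUTPUT_PER_M = 600.0   # dollars per million output tokens (o1-pro)

    def o1_pro_cost(input_tokens: int, output_tokens: int) -> float:
        """Estimated cost in dollars for one o1-pro call."""
        return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1e6

    # e.g., a 50,000-token prompt that yields a 5,000-token answer
    print(f"${o1_pro_cost(50_000, 5_000):.2f}")  # $10.50
    ```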

    OpenAI research lead Noam Brown thinks AI “reasoning” models could’ve arrived decades ago

    Noam Brown, who heads AI reasoning research at OpenAI, thinks that certain types of AI models for “reasoning” could have been developed 20 years ago if researchers had understood the correct approach and algorithms.

    OpenAI says it has trained an AI that’s “really good” at creative writing

    OpenAI CEO Sam Altman said, in a post on X, that the company has trained a “new model” that’s “really good” at creative writing. He posted a lengthy sample from the model given the prompt “Please write a metafictional literary short story about AI and grief.” OpenAI has not extensively explored the use of AI for writing fiction. The company has mostly concentrated on challenges in rigid, predictable areas such as math and programming. And it turns out that it might not be that great at creative writing at all.

    OpenAI launches new tools to help businesses build AI agents

    OpenAI rolled out new tools designed to help developers and businesses build AI agents — automated systems that can independently accomplish tasks — using the company’s own AI models and frameworks. The tools are part of OpenAI’s new Responses API, which enables enterprises to develop customized AI agents that can perform web searches, scan through company files, and navigate websites, similar to OpenAI’s Operator product. The Responses API effectively replaces OpenAI’s Assistants API, which the company plans to discontinue in the first half of 2026.
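    The Responses API folds tool use into a single call. A minimal sketch with the OpenAI Python SDK, assuming the built-in web search tool is exposed under the preview name shown; exact tool names and availability may differ:

    ```python
    from openai import OpenAI

    client = OpenAI()

    # One request: the model may invoke the built-in web search tool
    # before producing its final answer.
    response = client.responses.create(
        model="gpt-4o",
        tools=[{"type": "web_search_preview"}],  # assumed preview tool name
        input="Find three recent articles about AI agents and summarize them.",
    )
    print(response.output_text)  # convenience accessor for the final text
    ```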

    OpenAI reportedly plans to charge up to $20,000 a month for specialized AI ‘agents’

    OpenAI intends to release several “agent” products tailored for different applications, including sorting and ranking sales leads and software engineering, according to a report from The Information. One, a “high-income knowledge worker” agent, will reportedly be priced at $2,000 a month. Another, a software developer agent, is said to cost $10,000 a month. The most expensive rumored agents, which are said to be aimed at supporting “PhD-level research,” are expected to cost $20,000 per month. The jaw-dropping figure is indicative of how much cash OpenAI needs right now: The company lost roughly $5 billion last year after paying for costs related to running its services and other expenses. It’s unclear when these agentic tools might launch or which customers will be eligible to buy them.

    ChatGPT can directly edit your code

    The latest version of the macOS ChatGPT app allows users to edit code directly in supported developer tools, including Xcode, VS Code, and JetBrains. ChatGPT Plus, Pro, and Team subscribers can use the feature now, and the company plans to roll it out to Enterprise, Edu, and free users.

    ChatGPT’s weekly active users doubled in less than 6 months, thanks to new releases

    According to a new report from VC firm Andreessen Horowitz (a16z), OpenAI’s AI chatbot, ChatGPT, experienced solid growth in the second half of 2024. It took ChatGPT nine months to increase its weekly active users from 100 million in November 2023 to 200 million in August 2024, but doubling that number again took less than six months, according to the report. ChatGPT’s weekly active users increased to 300 million by December 2024 and 400 million by February 2025. ChatGPT has experienced significant growth recently due to the launch of new models and features, such as the multimodal GPT-4o. ChatGPT usage spiked from April to May 2024, shortly after that model’s launch.

    February 2025

    OpenAI cancels its o3 AI model in favor of a ‘unified’ next-gen release

    OpenAI has effectively canceled the release of o3 in favor of what CEO Sam Altman is calling a “simplified” product offering. In a post on X, Altman said that, in the coming months, OpenAI will release a model called GPT-5 that “integrates a lot of [OpenAI’s] technology,” including o3, in ChatGPT and its API. As a result of that roadmap decision, OpenAI no longer plans to release o3 as a standalone model. 

    ChatGPT may not be as power-hungry as once assumed

    A commonly cited stat is that ChatGPT requires around 3 watt-hours of power to answer a single question. Using OpenAI’s latest default model for ChatGPT, GPT-4o, as a reference, nonprofit AI research institute Epoch AI found the average ChatGPT query consumes around 0.3 watt-hours. However, the analysis doesn’t consider the additional energy costs incurred by ChatGPT with features like image generation or input processing.
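
    To put the revised figure in context, here’s a rough annualized comparison of the two estimates; the queries-per-day number is an arbitrary assumption for illustration:

    ```python
    # Rough annualized energy comparison of the two per-query estimates.
    QUERIES_PER_DAY = 15  # arbitrary assumption for illustration

    for label, wh_per_query in [("3 Wh estimate", 3.0), ("Epoch AI estimate", 0.3)]:
        yearly_kwh = wh_per_query * QUERIES_PER_DAY * 365 / 1000
        print(f"{label}: {yearly_kwh:.1f} kWh/year")
    # 3 Wh estimate: 16.4 kWh/year
    # Epoch AI estimate: 1.6 kWh/year
    ```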

    OpenAI now reveals more of its o3-mini model’s thought process

    In response to pressure from rivals like DeepSeek, OpenAI is changing the way its o3-mini model communicates its step-by-step “thought” process. ChatGPT users will see an updated “chain of thought” that shows more of the model’s “reasoning” steps and how it arrived at answers to questions.

    You can now use ChatGPT web search without logging in

    OpenAI is now allowing anyone to use ChatGPT web search without having to log in. While OpenAI had previously allowed users to ask ChatGPT questions without signing in, responses were restricted to the chatbot’s last training update. This only applies through ChatGPT.com, however. To use ChatGPT in any form through the native mobile app, you will still need to be logged in.

    OpenAI unveils a new ChatGPT agent for ‘deep research’

    OpenAI announced a new AI “agent” called deep research that’s designed to help people conduct in-depth, complex research using ChatGPT. OpenAI says the “agent” is intended for instances where you don’t just want a quick answer or summary, but instead need to assiduously consider information from multiple websites and other sources.

    January 2025

    OpenAI used a subreddit to test AI persuasion

    OpenAI used the subreddit r/ChangeMyView to measure the persuasive abilities of its AI reasoning models. OpenAI says it collects user posts from the subreddit and asks its AI models to write replies, in a closed environment, that would change the Reddit user’s mind on a subject. The company then shows the responses to testers, who assess how persuasive the argument is, and finally OpenAI compares the AI models’ responses to human replies for that same post. 

    OpenAI launches o3-mini, its latest ‘reasoning’ model

    OpenAI launched a new AI “reasoning” model, o3-mini, the newest in the company’s o family of models. OpenAI first previewed the model in December alongside a more capable system called o3. OpenAI is pitching its new model as both “powerful” and “affordable.”

    ChatGPT’s mobile users are 85% male, report says

    A new report from app analytics firm Appfigures found that over half of ChatGPT’s mobile users are under age 25, with users between ages 50 and 64 making up the second largest age demographic. The gender gap among ChatGPT users is even more significant. Appfigures estimates that across age groups, men make up 84.5% of all users.

    OpenAI launches ChatGPT plan for US government agencies

    OpenAI launched ChatGPT Gov designed to provide U.S. government agencies an additional way to access the tech. ChatGPT Gov includes many of the capabilities found in OpenAI’s corporate-focused tier, ChatGPT Enterprise. OpenAI says that ChatGPT Gov enables agencies to more easily manage their own security, privacy, and compliance, and could expedite internal authorization of OpenAI’s tools for the handling of non-public sensitive data.

    More teens report using ChatGPT for schoolwork, despite the tech’s faults

    Younger Gen Zers are embracing ChatGPT for schoolwork, according to a new survey by the Pew Research Center. In a follow-up to its 2023 poll on ChatGPT usage among young people, Pew asked ~1,400 U.S.-based teens ages 13 to 17 whether they’ve used ChatGPT for homework or other school-related assignments. Twenty-six percent said that they had, double the number from two years ago. Just over half of teens responding to the poll said they think it’s acceptable to use ChatGPT for researching new subjects. But considering the ways ChatGPT can fall short, the results are possibly cause for alarm.

    OpenAI says it may store deleted Operator data for up to 90 days

    OpenAI says that it might store chats and associated screenshots from customers who use Operator, the company’s AI “agent” tool, for up to 90 days — even after a user manually deletes them. While OpenAI has a similar deleted data retention policy for ChatGPT, the retention period for ChatGPT is only 30 days, which is 60 days shorter than Operator’s.

    OpenAI launches Operator, an AI agent that performs tasks autonomously

    OpenAI is launching a research preview of Operator, a general-purpose AI agent that can take control of a web browser and independently perform certain actions. Operator promises to automate tasks such as booking travel accommodations, making restaurant reservations, and shopping online.

    OpenAI may preview its agent tool for users on the $200-per-month Pro plan

    Operator, OpenAI’s agent tool, could be released sooner rather than later. Changes to ChatGPT’s code base suggest that Operator will be available as an early research preview to users on the $200 Pro subscription plan. The changes aren’t yet publicly visible, but a user on X who goes by Choi spotted these updates in ChatGPT’s client-side code. TechCrunch separately identified the same references to Operator on OpenAI’s website.

    OpenAI tests phone number-only ChatGPT signups

    OpenAI has begun testing a feature that lets new ChatGPT users sign up with only a phone number — no email required. The feature is currently in beta in the U.S. and India. However, users who create an account using their number can’t upgrade to one of OpenAI’s paid plans without verifying their account via an email. Multi-factor authentication also isn’t supported without a valid email.

    ChatGPT now lets you schedule reminders and recurring tasks

    ChatGPT’s new beta feature, called tasks, allows users to set simple reminders. For example, you can ask ChatGPT to remind you when your passport expires in six months, and the AI assistant will follow up with a push notification on whatever platform you have tasks enabled. The feature will start rolling out to ChatGPT Plus, Team, and Pro users around the globe this week.

    New ChatGPT feature lets users assign it traits like ‘chatty’ and ‘Gen Z’

    OpenAI is introducing a new way for users to customize their interactions with ChatGPT. Some users found they can specify a preferred name or nickname and “traits” they’d like the chatbot to have. OpenAI suggests traits like “Chatty,” “Encouraging,” and “Gen Z.” However, some users reported that the new options have disappeared, so it’s possible they went live prematurely.

    FAQs:

    What is ChatGPT? How does it work?

    ChatGPT is a general-purpose chatbot developed by tech startup OpenAI that uses artificial intelligence to generate text in response to user prompts. The chatbot runs on GPT-4, a large language model that uses deep learning to produce human-like text.

    When did ChatGPT get released?

    ChatGPT was released for public use on November 30, 2022.

    What is the latest version of ChatGPT?

    Both the free version of ChatGPT and the paid ChatGPT Plus are regularly updated with new GPT models. The most recent model is GPT-4o.

    Can I use ChatGPT for free?

    Yes. In addition to the paid ChatGPT Plus, there is a free version of ChatGPT that only requires a sign-in.

    Who uses ChatGPT?

    Anyone can use ChatGPT! More and more tech companies and search engines are using the chatbot to automate text generation or quickly answer user questions.

    What companies use ChatGPT?

    Multiple enterprises utilize ChatGPT, although others may limit the use of the AI-powered tool.

    Most recently, Microsoft announced at its 2023 Build conference that it is integrating its ChatGPT-based Bing experience into Windows 11. Looking Glass, a Brooklyn-based 3D display startup, uses ChatGPT to produce holograms you can converse with. And the nonprofit Solana Foundation has officially integrated the chatbot into its network with a ChatGPT plug-in geared toward end users, to help onboard them into the web3 space.

    What does GPT mean in ChatGPT?

    GPT stands for Generative Pre-Trained Transformer.

    What is the difference between ChatGPT and a chatbot?

    A chatbot can be any software or system that holds a dialogue with a person, but it doesn’t necessarily have to be AI-powered. For example, some chatbots are rules-based in the sense that they give canned responses to questions.

    ChatGPT is AI-powered and utilizes LLM technology to generate text after a prompt.

    Can ChatGPT write essays?

    Yes.

    Can ChatGPT commit libel?

    Due to the nature of how these models work, ChatGPT doesn’t know or care whether something is true, only that it looks true. That’s a problem when you’re using it to do your homework, sure, but when it accuses you of a crime you didn’t commit, that may well at this point be libel.

    We will see how handling troubling statements produced by ChatGPT will play out over the next few months as tech and legal experts attempt to tackle the fastest moving target in the industry.

    Does ChatGPT have an app?

    Yes, there is a free ChatGPT mobile app for iOS and Android users.

    What is the ChatGPT character limit?

    OpenAI doesn’t document a character limit for ChatGPT anywhere. However, users have noted that responses tend to get cut off at around 500 words.

    Does ChatGPT have an API?

    Yes, it was released March 1, 2023.
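
    A minimal example call using OpenAI’s official Python SDK; the model name is illustrative, and you can substitute whichever current model you prefer:

    ```python
    # Minimal ChatGPT API call via the official Python SDK.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Explain APIs in one sentence."}],
    )
    print(completion.choices[0].message.content)
    ```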

    What are some sample everyday uses for ChatGPT?

    Everyday examples include programming, scripts, email replies, listicles, blog ideas, summarization, etc.

    What are some advanced uses for ChatGPT?

    Advanced uses include debugging code, writing code in different programming languages, explaining scientific concepts, complex problem solving, etc.

    How good is ChatGPT at writing code?

    It depends on the nature of the program. While ChatGPT can write workable Python code, it can’t necessarily program an entire app’s worth of code. That’s because ChatGPT lacks context awareness — in other words, the generated code isn’t always appropriate for the specific context in which it’s being used.

    Can you save a ChatGPT chat?

    Yes. OpenAI allows users to save chats in the ChatGPT interface, stored in the sidebar of the screen, and chats can be shared with others via a generated link.

    Are there alternatives to ChatGPT?

    Yes. There are multiple AI-powered chatbot competitors, such as Together, Google’s Gemini, and Anthropic’s Claude, and developers are creating open source alternatives.

    How does ChatGPT handle data privacy?

    OpenAI has said that individuals in “certain jurisdictions” (such as the EU) can object to the processing of their personal information by its AI models by filling out this form. This includes the ability to make requests for deletion of AI-generated references about you. OpenAI notes, however, that it may not grant every request, since it must balance privacy requests against freedom of expression “in accordance with applicable laws.”

    The web form for requesting deletion of data about you is titled “OpenAI Personal Data Removal Request.”

    In its privacy policy, the ChatGPT maker makes a passing acknowledgement of the objection requirements attached to relying on “legitimate interest” (LI), pointing users toward more information about requesting an opt-out: “See here for instructions on how you can opt out of our use of your information to train our models.”

    What controversies have surrounded ChatGPT?

    Recently, Discord announced that it had integrated OpenAI’s technology into its bot, Clyde, and two users subsequently tricked Clyde into providing them with instructions for making the illegal drug methamphetamine (meth) and the incendiary mixture napalm.

    An Australian mayor has publicly announced he may sue OpenAI for defamation due to ChatGPT’s false claims that he had served time in prison for bribery. This would be the first defamation lawsuit against the text-generating service.

    CNET found itself in the midst of controversy after Futurism reported the publication was publishing articles under a mysterious byline completely generated by AI. The private equity company that owns CNET, Red Ventures, was accused of using ChatGPT for SEO farming, even if the information was incorrect.

    Several major school systems and colleges, including New York City Public Schools, have banned ChatGPT from their networks and devices. They claim that the AI impedes the learning process by promoting plagiarism and misinformation, a claim that not every educator agrees with.

    There have also been cases of ChatGPT accusing individuals of false crimes.

    Where can I find examples of ChatGPT prompts?

    Several marketplaces host and provide ChatGPT prompts, either for free or for a nominal fee. One is PromptBase. Another is ChatX. More launch every day.

    Can ChatGPT be detected?

    Poorly. Several tools claim to detect ChatGPT-generated text, but in our tests, they’re inconsistent at best.

    Are ChatGPT chats public?

    No. But OpenAI recently disclosed a bug, since fixed, that exposed the titles of some users’ conversations to other people on the service.

    What lawsuits are there surrounding ChatGPT?

    None specifically targeting ChatGPT. But OpenAI is involved in at least one lawsuit that has implications for AI systems trained on publicly available data, which would touch on ChatGPT.

    Are there issues regarding plagiarism with ChatGPT?

    Yes. Text-generating AI models like ChatGPT have a tendency to regurgitate content from their training data.

    This story is continually updated with new information.

    Kyle Wiggers, Cody Corrall, Alyssa Stringer, Kate Park

    Source link

  • EA partners with the company behind Stable Diffusion to make games with AI

    Electronic Arts has announced a new partnership with Stability AI, the creator of AI image generation tool Stable Diffusion. The company will “co-develop transformative AI models, tools, and workflows” for the game developer, with the hopes of speeding up development while maintaining quality.

    “I use the term smarter paintbrushes,” Steve Kestell, Head of Technical Art for EA SPORTS, said in the announcement. “We are giving our creatives the tools to express what they want.” To start, the “smarter paintbrushes” EA and Stability AI are building are concentrated on generating textures and in-game assets. EA hopes to create “Physically Based Rendering materials” with new tools “that generate 2D textures that maintain exact color and light accuracy across any environment.”

    The company also describes using AI to “pre-visualize entire 3D environments from a series of intentional prompts, allowing artists to creatively direct the generation of game content.” Stability AI is most famous for its powerful Stable Diffusion image generator, but the company maintains multiple tools for generating 3D models, too, so the partnership is by no means out of place.

    It helps that AI is on the tip of most video game executives’ tongues. Strauss Zelnick, the head of Grand Theft Auto publisher Take-Two, recently shared that generative AI “will not reduce employment, it will increase employment,” because “technology always increases productivity, which in turn increases GDP, which in turn increases employment.” Krafton, the publisher of PUBG: Battlegrounds, made its commitment to AI even more clear, announcing plans on Thursday to become an AI-first company. Companies with a direct stake in the success of the AI industry, like Microsoft, have also created gaming-focused tools and developed models for prototyping.

    The motivations for EA might be even simpler, though. The company is in the midst of being taken private, and will soon be saddled with billions in debt. Theoretically cutting costs with AI might be one way the company hopes to survive the transition.

    Source link

  • Wikimedia says AI bots and summaries are hurting Wikipedia’s traffic

    Wikimedia is sounding the alarm on the impact AI is having on reliable knowledge and information on the internet. In a blog post, Wikimedia’s senior director of product, Marshall Miller, lays out the impact on page views that the foundation attributes to the rise of LLM chatbots and AI-generated summaries in search results.

    “We believe that these declines reflect the impact of generative AI and social media on how people seek information, especially with search engines providing answers directly to searchers, often based on Wikipedia content,” said Miller.

    The foundation has increasingly faced scraping by AI bots whose sophistication has made it difficult to parse human traffic from bots. After improving bot detection to yield more accurate metrics, Wikipedia’s data shows an 8 percent drop in page views year over year.

    Miller paints a picture of an existential risk greater than that of a website’s page views. He posits that if Wikipedia’s traffic continues to decline, it could threaten what he calls “the only site of its scale with standards of verifiability, neutrality and transparency powering information all over the internet.” He warns that fewer visits to Wikipedia would lead to fewer volunteers, less funding and ultimately less reliable content.

    The solution he offers is for LLMs and search results to be more intentional in giving users the opportunity to interact directly with the source for the information being presented. “For people to trust information shared on the internet, platforms should make it clear where the information is sourced from and elevate opportunities to visit and participate in those sources,” Miller writes.

    Earlier this summer, Wikipedia floated the idea of AI-generated summaries that would appear at the top of articles. The project was paused before it began after fierce backlash from the site’s volunteer editors.

    Andre Revilla

    Source link

  • DirecTV will start replacing screensavers with AI-generated ads next year

    DirecTV will begin replacing your TV’s screensaver with AI-generated ads thanks to a new partnership. The entertainment brand is working with Glance, an AI company that has received backing from Google and developed an on-device AI tool alongside the tech giant. The new AI-powered screensavers will begin rolling out to DirecTV Gemini devices early next year.

    Glance’s press release about the deal presents the tech’s capability in lofty language: “Shop smarter by discovering and engaging with products and brands in an AI-led virtual and visually immersive shopping experience that feels native to TV.” In practice, however, it sounds like a viewer can use the Glance mobile app to do things like insert themselves or other people into AI-generated videos appearing on their televisions. They can then use the voice remote to alter the person’s wardrobe and buy items similar to the AI-generated images from their phone.

    “We are making television a lean-in experience versus lean back,” Rajat Wanchoo, Glance’s group vice president of commercial partnerships, told The Verge, which initially picked up news of the partnership. “We want to give users a chance to use the advancements that have happened in generative AI to create a ChatGPT moment for themselves, but on TV.”

    It’s unclear how many DirecTV customers want to have a ChatGPT moment for themselves, but questions about whether people want or need a feature haven’t stopped most AI companies from pushing ahead with business plans. The press release doesn’t note whether viewers will be able to turn off this screensaver feature once it’s live.

    Source link

  • Police Say People Keep Calling 911 Over an ‘AI Homeless Man’ TikTok Prank

    Finally, generative AI has found its purpose: letting kids prank parents. In an apparent new social media trend, kids are creating AI-generated images of homeless people in their homes and sending the images to their parents, causing them to freak out and, in some cases, call the police to respond to the situation.

    The basic premise of this prank is pretty simple: Kids use generative AI tools to create an image of a person, usually an unkempt man who looks like he’s come in from living on the street, in their home, and send it to their parents. The kids pretend that the person claimed to know their parents, or just wanted to come in for a nap. Then, they wait as their parents lose their minds and demand they kick the person out. That’s kinda the whole thing.

    The pranksters have been recording the reactions from their parents and posting them online, and some videos on TikTok have racked up nearly one million likes and thousands of comments. The hashtag #homelessmanprank now has more than 1,200 videos linked to it on the platform, and there are a number of tutorials on how to generate the images needed for the prank, most of which recommend using Snapchat’s AI tools to create the image. Gizmodo reached out to Snapchat for comment on its platform’s role in this trend, but did not receive a response at the time of publication.

    It’d probably be fine if the prank just ended there—it’s a bit of a gross exploitation of how unhoused people are perceived, and some of the parents say some less-than-savory things about the people they think are in their home. Now, the situation has broken containment on what appears to be several occasions, as parents in the middle of a panic have called the police and gotten law enforcement involved.

    Several police departments across the country have issued statements about the prank. The Round Rock Police Department in Texas suggested in a post on X that a prank in the town resulted in “the misuse of emergency services.” The department claimed to have responded to two calls sparked by the trend, both of which turned out to be hoaxes. “While no one was harmed, making false reports like these can tie up emergency resources and delay responses to legitimate calls for service,” the department said. Gizmodo contacted the Round Rock Police Department regarding the situation, and the department said it had no further comment to offer beyond its public statements.

    In a post on Facebook, the Oak Harbor Police Department in Washington said that it responded to a call about a “homeless individual” at the high school campus, which turned out to be a false report related to the same kind of prank. “In this case, students generated and circulated an image implying the presence of a homeless individual on school grounds, which led to unnecessary concern within the community,” the police wrote.

    The Salem Police Department in Massachusetts also issued a public statement about the trend, though it didn’t indicate if its police force actually responded to a situation related to it. “This prank dehumanizes the homeless, causes the distressed recipient to panic and wastes police resources. Police officers who are called upon to respond do not know this is a prank and treat the call as an actual burglary in progress thus creating a potentially dangerous situation,” the department wrote.

    Several reports have hit the United Kingdom, too, with the BBC reporting on Dorset Police receiving a call related to the prank. Police in Poole also issued a statement about the trend after responding to a call from a parent who got pranked.

    Word of the trend has spread to national news, as NBC’s “Nightly News” ran a segment on the story Thursday evening. In that segment, Round Rock Police Patrol Division Commander Andy McKinney told NBC that getting a call about an intruder “causes a pretty aggressive response for us because we’re worried about the safety of individuals in the home, which can mean clearing the home with guns out…it could cause a SWAT response.” Which frankly seems like a bit much, but also feels like a pretty standard American police response.

    We’d love to tell kids to stick to the classics, like lighting a bag of dog poop on fire, but someone in California just got 28 days in jail for that exact prank, so maybe just don’t have any fun at all?

    AJ Dellinger

    Source link

  • You Can’t Use Copyrighted Characters in OpenAI’s Sora Anymore and People Are Freaking Out

    The complete copyright-free-for-all approach that OpenAI took to its new AI video generation model, Sora 2, lasted all of one week. After initially requiring copyright holders to opt out of having their content appear in Sora-generated videos, CEO Sam Altman announced that the company will be moving to an “opt-in” model that will “give rightsholders more granular control over generation of characters”—and Sora obsessives are not taking it particularly well.

    Given the type of content that was being generated with Sora and shared via the TikTok-style social app that OpenAI launched specifically to host user-generated Sora videos, the change shouldn’t come as a shock. Almost immediately, the platform was inundated with copyrighted material being used in ways that the rightsholders almost certainly did not care for, unless you think Nickelodeon really loved the subversiveness of Nazi SpongeBob. On Monday, the Motion Picture Association became one of the loudest voices calling for OpenAI to put an end to the potential infringement. It didn’t take long for OpenAI to respond and acquiesce.

    In a blog post, Altman said the new approach to copyrighted material in Sora will require rightsholders to opt in to having their characters and content used—but he’s very sure that copyright holders love the videos, actually. “We are hearing from a lot of rightsholders who are very excited for this new kind of ‘interactive fan fiction’ and think this new kind of engagement will accrue a lot of value to them, but want the ability to specify how their characters can be used (including not at all),” Altman wrote, stating that his company wants to “let rightsholders decide how to proceed.”

    Altman also admitted, “There may be some edge cases of generations that get through that shouldn’t, and getting our stack to work well will take some iteration.” It’s unclear if that will play with rightsholders. MPA CEO Charles Rivkin said in a statement that OpenAI “must acknowledge it remains their responsibility—not rightsholders’—to prevent infringement on the Sora 2 service,” and said “Well-established copyright law safeguards the rights of creators and applies here.”

    While OpenAI might be giving copyright holders more control of the outputs of its model, it doesn’t appear that they had much say on the inputs. A report from the Washington Post showed how the first version of Sora was pretty clearly trained on copyrighted material that the company didn’t ask permission to use. It’s not clear that OpenAI went out and got those rights to train Sora 2, but the generator is very good at spitting out accurate recreations of copyrighted material in a way that it could only do if it was fed a whole lot of existing content during training.

    The biggest AI training case thus far saw Anthropic pay out $1.5 billion to settle a copyright infringement case with authors of books the company pirated to train its models. The judge in that case did find that using copyrighted material for training without permission is fair use, though other courts may not agree with that call. Earlier this year, OpenAI asked the Trump administration to call AI model training fair use. So a lot of OpenAI’s strategy around Sora appears to be fucking around and hoping, if it makes the right allies, it’ll never have to find out.

    OpenAI may be able to appease copyright holders by shifting its Sora policies, but it’s now pissed off its users. As 404 Media pointed out, social channels like Twitter and Reddit are now flooded with Sora users who are angry they can’t make 10-second clips featuring their favorite characters anymore. One user in the OpenAI subreddit said that being able to play with copyrighted material was “the only reason this app was so fun.” Another claimed, “Moral policing and leftist ideology are destroying America’s AI industry.” So, you know, it seems like they’re handling this well.

    AJ Dellinger

    Source link

  • Taylor Swift, Defender of Artist Ownership, Allegedly Uses AI in Videos

    Taylor Swift once said, “You deserve to own the art you make.” Apparently, that doesn’t apply to the millions of artists who have had their works fed into the data wood chipper that is generative AI tools. In the lead-up to the release of the world’s biggest pop star’s latest album, “Life of a Showgirl,” fans were treated to easter egg videos designed to build hype. Instead, sharp-eyed Swifties started to spot what appeared to be AI-generated imagery within the teaser videos, and launched full Swift-vestigations into the situation.

    The alleged generative AI material appeared in a series of short promotional videos. Those videos were accessed via QR codes that were posted on 12 orange doors located in 12 different cities. The videos, originally uploaded via YouTube Shorts, are no longer available, but Gizmodo reviewed purported re-uploads found online. Each video featured letters which, when put together, provided the phrase, “You must remember everything, but mostly this, the crowd is your king.” But the mystery that Taylor’s king took more of an interest in seemed to be, “Why do some of these videos look a little off?”

    No one from Swift’s camp has confirmed in any way the use of generative AI in the promotional videos, but there is certainly enough on-screen to create suspicion. Users have pointed out clipping and disappearing imagery in some videos that suggest that what you’re seeing is created with generative AI. The videos appear to be a part of a partnership with Google, according to a report from The Tennessean, which covered the orange door reveal that appeared in Nashville. Gizmodo reached out to Google for comment regarding its involvement in the videos, but did not receive a response at the time of publication.

    Others have called out some lettering that appears in different shots that have a distinct AI-generated quality to them, in that they are largely nonsense. A treadmill that appears in one video, for instance, has buttons that read “MOP,” “SUOP,” and “NCLINE,” with letters that are curved and blurred in ways that suggest there’s something more than just some wear and tear on the buttons. Another image, a notebook, also appears to contain made-up lettering that a human would be unlikely to make, on account of the fact that a human knows what letters are.

    Generative AI systems are notoriously bad at generating text because, while these systems have been trained on massive sets of data and images containing text, the model has no concept of what it’s actually “looking” at. This is why generative AI models can spit out images of watches and clocks, but it’s often hard to get them to display specific times, because the model has no idea how to tell time. It just knows clocks have lines that mark time, not what those lines actually indicate.

    The inconsistencies were surprisingly common throughout the videos. Viewers pointed out a squirrel that appears to transform into a chipmunk at one point, and a changing number of lamps that appear in another shot. The Swift diehards took particular offense to an AI-generated version of a piano and guitar that was used on Swift’s Eras Tour, which shouldn’t be surprising given how big a deal was made of those custom-made instruments at the time.

    It doesn’t appear that generative AI was used in the creation of Swift’s music videos for the new album, and there doesn’t appear to be an indication that generative AI was used in the feature film released to mark the launch of the record.  Gizmodo reached out to representatives for Taylor Swift, as well as Rodrigo Prieto, cinematographer of “Taylor Swift: The Official Release Party of a Showgirl,” for comment regarding the potential use of generative AI in the making of these promotional videos, music videos, and the film. No parties responded on the record at the time of publication.

    On its face, this appears to be a pretty major blunder. You can’t tell your superfans, who think every word you speak and image you post contains secret messages, to look for clues in an AI-generated video and not expect them to spot inconsistencies. But hey, maybe these weird anomalies are just part of another Easter egg reveal, right?

    AJ Dellinger

    Source link

  • The First 24 Hours of Sora 2 Chaos: Copyright Violations, Sam Altman Shoplifting, and More

    On Tuesday, OpenAI released Sora 2, the latest version of its video and audio generation tool that it promised would be the “most powerful imagination engine ever built.” Less than a day into its release, it appears the imaginations of most people are dominated by copyrighted material and existing intellectual property.

    In tandem with the release of its newest model, OpenAI also dropped a Sora app, designed for users to generate and share content with each other. While the app is currently invite-only, even if you just want to see the content, plenty of videos have already made their way to other social platforms. The videos that have taken off outside of OpenAI’s walled garden contain lots of familiar characters: Sonic the Hedgehog, Solid Snake, Pikachu.

    There do appear to be at least some types of content that are off-limits in OpenAI’s video generator. Users have reported that the app rejects requests to produce videos featuring Darth Vader and Mickey Mouse, for instance. That restriction appears to be the result of OpenAI’s new approach to copyright material, which is pretty simple: “We’re using it unless we’re explicitly told not to.” The Wall Street Journal reported earlier this week that OpenAI has approached movie studios and other copyright holders to inform them that they will have to opt out of having their content appear in Sora-generated videos. Disney did exactly that, per Reuters, so its characters should be off-limits for content created by users.

    That doesn’t mean the model wasn’t trained on that content, though. Earlier this month, The Washington Post showed how the first version of Sora was pretty clearly trained on copyrighted material that the company didn’t ask permission to use. For instance, WaPo was able to create a short video clip that closely resembled the Netflix show “Wednesday,” down to the font displayed and a model that looks suspiciously like Jenna Ortega’s take on the titular character. Netflix told the publication it did not provide content to OpenAI for training.

    The outputs of Sora 2 reveal that it’s clearly been fed its fair share of copyrighted material, too. For instance, users have managed to generate scenes from “Rick and Morty,” complete with relatively accurate-sounding voices and art style. (Though, if you go outside of what the model knows, it seems to struggle. A user put OpenAI CEO Sam Altman into the “Rick and Morty” universe, and he looks troublingly out of place.)

    Other videos at least attempt to be a little creative about how they use copyrighted characters. Users have, for instance, thrown Ronald McDonald into an episode of “Love Island” and created a fake video game that teams up Tony Soprano from The Sopranos and Kirby from, well, Kirby.

    Interestingly, not all potential copyright violations come from users who are explicitly asking for it. For instance, one user gave Sora 2 the prompt “A cute young woman riding a dragon in a flower world, Studio Ghibli style, saturated rich colors,” and it just straight up spit out an anime-style version of The NeverEnding Story. Even when users aren’t actively calling upon the model to create derivative art, it seems like it can’t help itself.

    “People are eager to engage with their family and friends through their own imaginations, as well as stories, characters, and worlds they love, and we see new opportunities for creators to deepen their connection with the fans,” a spokesperson for OpenAI told Gizmodo. “We’re working with rightsholders to understand their preferences for how their content appears across our ecosystem, including Sora.”

    There is one other genre of popular and potentially legally dubious content that has become popular among Sora 2 users, too: The Sam Altman cinematic universe. OpenAI claims that users are not able to generate videos that use the likeness of other people, including public figures, unless those figures upload their likeness and give explicit permission. Altman apparently has given his ok (which makes sense, he’s the CEO and he was featured prominently in the company’s fully AI-generated promotional video for Sora 2’s launch), and users are making the most of having access to his image.

    One user claimed to have the “most liked” video in the Sora social app, which depicted Altman getting caught shoplifting GPUs from Target. Others have turned him into a skibidi toilet, a cat, and, perhaps most fittingly, a shameless thief stealing creative materials from Hayao Miyazaki.

    There are some questions about the likeness of non-characters in these videos, too. In the video of Altman in Target, for instance, how does Target feel about its logo and store likeness being used? Another user inserted their own likeness into an NFL game, which seems to pretty clearly use the logos of the New York Giants, Dallas Cowboys, and the NFL itself. Is that considered kosher?

    OpenAI obviously wants people to lend their likeness to the app, as it creates a lot more avenues for engagement, which seems to be its primary currency right now. But the Altman examples seem instructive as to the limits of this: It’s hard to imagine that too many public figures are going to submit themselves to the humiliation ritual of allowing other people to control their image. Worse, imagine the average person getting their likeness dropped into a video that depicts them committing a crime and the potential social ramifications they might face.

    A spokesperson for OpenAI said Altman has made his likeness available for anyone to play with, and users who verify their likeness in Sora can set who can make use of it: just the user, mutual friends, select friends, or everyone. The app also gives users the ability to see any video in which their likeness has been used, including those that are not published, and can revoke access or remove a video containing their image at any time. The spokesperson also said that videos contain metadata that show they are AI-generated and watermarked with an indicator they were created with Sora.

    There are, of course, ways to defeat those protections. The fact that a video can be deleted from Sora doesn’t mean that an exported copy can be deleted. Likewise, the watermark could be cropped out. And most people aren’t checking the metadata of videos to ensure authenticity. What the fallout of this looks like, we will have to see, but there will be fallout.

    AJ Dellinger

    Source link

  • OpenAI’s New Social Network Is Reportedly TikTok If It Was Just an AI Slop Feed

    Welcome to the age of anti-social media. According to a report from Wired, OpenAI is planning to launch a standalone app for its video generation tool Sora 2 that will include a TikTok-style feed letting people scroll through entirely AI-generated videos. The quixotic effort follows Meta’s recent launch of an AI-slop-only feed on its Meta AI app, which was met with nearly universal negativity.

    Per Wired, the Sora 2 app will feature the familiar swipe-up-to-scroll style navigation that is featured for most vertical video platforms like TikTok, Instagram Reels, or YouTube Shorts. It’ll also use a personalized recommendation algorithm to feed users content that might appeal to their interests. Users will be able to like, comment, or “remix” a post—all very standard social media fare.

    The big difference is that all of the content on the platform will be AI-generated via OpenAI’s video generation model that can take text, photos, or existing video and AI-ify it. The videos will be up to 10 seconds long, presumably because that’s about how long Sora can hold itself together before it starts hallucinating weird shit. (The first version of Sora allows videos up to 60 seconds, but struggles to produce truly convincing and continuous imagery for that long.) According to Wired, there is no way to directly upload a photo or video and post it unedited.

    Interestingly, OpenAI has figured out how to work a social element into the app, albeit in a way that has a sort of inherent creepiness to it. Per Wired, the Sora 2 app will ask users to verify their identity via facial recognition to confirm their likeness. After confirming their identity, their likeness can be used in videos. Not only can they insert themselves into a video, but other users can tag you and use your likeness in their videos. Users will reportedly get notified any time their likeness is used, even if the generated video is saved to drafts and never posted.

    How that will be implemented when and if the app launches to the public, we’ll have to see. But as reported, it seems like an absolute nightmare. Basically, the only thing that the federal government has managed to find any sort of consensus around when it comes to regulating AI is offering some limited protections against non-consensual deepfakes. As described, one feature of Sora 2 seems to be letting your likeness be manipulated by others. Surely there will be some sort of opt-out available or ability to restrict who can use your likeness, right?

    According to Wired, there will be some protections as to the type of content that Sora 2 will allow users to create. It is trained to refuse to violate copyright, for instance, and will reportedly have filters in place to restrict certain types of videos from being produced. But will it actually offer sufficient protection to people? OpenAI made a big point to emphasize how it added protections to the original Sora model to prevent it from generating nudity and explicit images, but tests of the system managed to get it to create prohibited content anyway at a low-but-not-zero rate.

    Gizmodo reached out to OpenAI to confirm its plans for the app, but did not receive a response at the time of publication. There has been speculation for months about the launch of Sora 2, with some expectation that it would be announced at the same time as GPT-5. For now, it and its accompanying app remain theoretical, but there is at least one good idea hidden in the concept of the all-AI social feed, albeit probably not in the way OpenAI intended it: Keep AI content quarantined.

    AJ Dellinger

    Source link

  • Werner Herzog on AI-Generated Movies: ‘They Look Completely Dead’

    Legendary filmmaker and ‘Here Comes Honey Boo Boo’ superfan Werner Herzog can see the beauty in just about everything, with two notable exceptions: chickens and art created by artificial intelligence. During an appearance on the podcast “Conan O’Brien Needs A Friend,” Herzog spoke of the incredible possibilities presented by technological advances, but lamented the sheer lifelessness of its application in areas that require humanity.

    Much of the conversation between O’Brien and Herzog centered around the idea of truth (fitting for a guy who just wrote a book called The Future of Truth), which inevitably led them into a conversation about AI. Herzog, who is a fascinating mix of a man somewhat removed from technology but also filled with endless wonder about everything, didn’t dismiss the technology out of hand, but has some grave concerns about it.

    “AI, I do not want to put it down completely because it has glorious, magnificent possibilities,” he said, citing its potential uses in scientific fields. “But at the same time, it is already en route to take over warfare. … It will be the overwhelming face of warfare of the future.”

    He also simply can’t find much value in generative AI’s takes on works of art.

    “I’ve seen movies, short films, completely created by artificial intelligence. Story, acting, everything. They look completely dead. They are stories, but they have no soul,” he told O’Brien. “They are empty and soulless. You know it is the most common, lowest denominator of what is filling billions and billions of informations on the internet. The common denominator and nothing beyond this common denominator can be found in these fabrications.”

    Those fabrications of AI are a real point of fascination for Herzog. In his new book, according to an excerpt from The New Republic, he writes AI “sees its occasional errors, and arrives at strategies and decisions that were not programmed in it by humans,” and notes that its outputs arrive “with a little pinch of chaos and imprecision, as is also embedded in human nature.”

    While talking to O’Brien, Herzog brought up how AI generates these falsehoods and how we have to navigate them. “And of course, cheating, pretending, propagandizing—all these things are like a nemesis. It is out there, and we have to be alert to it.” His advice? Simply do not take anything entirely at face value. “Again, I say, when you are curious and access different sources, very quickly you will find this is invented.”

    In general, Herzog is not much for technology. He didn’t own a cellphone until, according to his telling, he had to get one after he was unable to retrieve his car (an 18-year-old Ford Explorer) from a parking garage in Dublin without downloading an app. But it’s not that he fears it. He just doesn’t trust it. “Everything that comes in via your cellphone or your laptop, emails, whatever—you have to distrust, you have to doubt,” he told O’Brien. In response, O’Brien offered up that he gets updates on his phone when his cats use the litter box because it is internet-connected, and proposed that it should be illegal for anything to require an app to function.

    Herzog spoke of how natural navigating technology is for younger people, how effortlessly they spot a phishing email that he wouldn’t be able to identify. He compared the instincts of humans using technology to those of prehistoric men foraging for food and learning to avoid poisonous berries. “They had a natural acquired suspicion about things, and it was so natural that we can certainly assume that they didn’t hate nature,” he said. “They just knew how to navigate. And it’s the same thing—you don’t have to hate the internet and the cell phone and whatever is coming at you in this new media, you just have to maintain a complete level of suspicion.”

    All of this comes from Herzog’s greater search for truth, which is central to his new book. On the podcast, he assessed, “Nobody knows what truth is.” And in some ways, it doesn’t matter. O’Brien and Herzog share that in art, sheer truth sometimes matters less than telling a good story. But in the rest of the world, the concept of truth is just as elusive, and the cause of conflict and strife. Whose truth are we operating from?

    “Truth is not a point somewhere far out in the distance,” Herzog says. “It’s more a process of searching for it, approximating, having doubts.” O’Brien at one point added, “Emotions get us to a truth sometimes that facts cannot deliver.” That is perhaps why AI art falls so flat. The truth lies in the emotion the work conveys and provokes. AI has nothing to offer.

    AJ Dellinger

    Source link

  • Here’s Who Can See Your Chat History When You Talk to Each AI

    While AI tools like ChatGPT and Google Gemini can be helpful, they’re also potential privacy minefields.

    Most AI assistants save a complete record of your conversations, making them easily visible to anyone with access to your devices. Those conversations are also stored online, often indefinitely, so they could be exposed due to bugs or security breaches. In some cases, AI providers can even send your chats along to human reviewers.

    All of this should give you pause, especially if you plan to share your innermost thoughts with AI tools or use them to process personal information. To better protect your privacy, consider making some tweaks to your settings, using private conversation modes, or even turning to AI assistants that protect your privacy by default.

    To help make sense of the options, I looked through all the privacy settings and policies of every major AI assistant. Here’s what you need to know about what they do with your data, and what you can do about it:

    ChatGPT

    By default: ChatGPT uses your data to train AI, and warns that its “training data may incidentally include personal information.”
    Can humans review your chats? OpenAI’s ChatGPT FAQ says it may “review conversations” to improve its systems. The company also says it now scans conversations for threats of imminent physical harm, submitting them to human reviewers and possibly reporting them to law enforcement.
    Can you disable AI training? Yes. Go to Settings > Data controls > Improve the model for everyone.
    Is there a private chat mode? Yes. Click “Turn on temporary chat” in the top-right corner to keep a chat out of your history and avoid having it used to train AI.
    Can you share chats with others? Yes, by generating a shareable link. (OpenAI launched, then removed, a feature that let search engines index shared chats.)
    Are your chats used for targeted ads? OpenAI’s privacy policy says it does not sell or share personal data for contextual behavioral advertising, doesn’t process data for targeted ads, and doesn’t process sensitive personal data to infer characteristics about consumers.
    How long does it keep your data? Up to 30 days for temporary and deleted chats, though even some of those may be kept longer for “security and legal obligations.” All other data is stored indefinitely.

    Google Gemini

    By default: Gemini uses your data to train AI.
    Can humans review your chats? Yes. Google says not to enter “any data you wouldn’t want a reviewer to see.” Once a reviewer sees your data, Google keeps it for up to three years—even if you delete your chat history.
    Can you disable AI training? Yes. Go to myactivity.google.com/product/gemini, click the “Turn off” drop-down menu, then select either “Turn off” or “Turn off and delete activity.”
    Is there a private chat mode? Yes. In the left sidebar, hit the chat bubble with dashed lines next to the “New chat” button. (Alternatively, disabling Gemini Apps Activity will hide your chat history from the sidebar, but re-enabling it without deleting past data will bring your history back.)
    Can you share chats with others? Yes, by generating a shareable link.
    Are your chats used for targeted ads? Google says it doesn’t use Gemini chats to show you ads, but the company’s privacy policy allows for it. Google says it will communicate any changes it makes to this policy.
    How long does it keep your data? Indefinitely, unless you turn on auto-deletion in Gemini Apps Activity.

    Anthropic Claude

    By default: From September 28 onward, Anthropic will use conversations to train AI unless you opt out.
    Can humans review your chats? No, though Anthropic reviews conversations flagged as violating its usage policies.
    Can you disable AI training? Yes. Head to Settings > Privacy and disable “Help improve Claude.”
    Is there a private chat mode? No. You must delete past conversations manually to hide them from your history.
    Can you share chats with others? Yes, by generating a shareable link.
    Are your chats used for targeted ads? Anthropic doesn’t use conversations for targeted ads.
    How long does it keep your data? Up to two years, or seven years for prompts flagged for trust and safety violations.

    Microsoft Copilot

    By default: Microsoft uses your data to train AI.
    Can humans review your chats? Yes. Microsoft’s privacy policy says it uses “both automated and manual (human) methods of processing” personal data.
    Can you disable AI training? Yes, though the option is buried. Click your profile image > your name > Privacy and disable “Model training on text.”
    Is there a private chat mode? No. You must delete chats one by one or clear your history from Microsoft’s account page.
    Can you share chats with others? Yes, by generating a shareable link. Note that shared links can’t be unshared without deleting the chat.
    Are your chats used for targeted ads? Microsoft uses your data for targeted ads and has discussed integrating ads with AI. You can disable this by clicking your profile image > your name > Privacy and disabling “Personalization and memory.” A separate link disables all personalized ads for your Microsoft account.
    How long does it keep your data? Data is stored for 18 months, unless you delete it manually.

    xAI Grok

    By default: Uses your data to train AI.
    Can humans review your chats? Yes. Grok’s FAQ says a “limited number” of “authorized personnel” may review conversations for quality or safety.
    Can you disable AI training? Yes. Click your profile image and go to Settings > Data Controls, then disable “Improve the Model.”
    Is there a private chat mode? Yes. Click the “Private” button at the top right to keep a chat out of your history and avoid having it used to train AI.
    Can you share chats with others? Yes, by generating a shareable link. Note that shared links can’t be unshared without deleting the chat.
    Are your chats used for targeted ads? Grok’s privacy policy says it does not sell or share information for targeted ad purposes.
    How long does it keep your data? Private Chats and even deleted conversations are stored for 30 days. All other data is stored indefinitely.

    Meta AI

    By default: Uses your data to train AI.
    Can humans review your chats? Yes. Meta’s privacy policy says it uses manual review to “understand and enable creation” of AI content.
    Can you disable AI training? Not directly. U.S. users can fill out an opt-out request form; users in the EU and U.K. can exercise their right to object.
    Is there a private chat mode? No.
    Can you share chats with others? Yes. Shared links automatically appear in a public feed and can show up in other Meta apps as well.
    Are your chats used for targeted ads? Meta’s privacy policy says it targets ads based on the information it collects, including interactions with AI.
    How long does it keep your data? Indefinitely.

    Perplexity

    By default: Uses your data to train AI.
    Can humans review your chats? Perplexity’s privacy policy does not mention human review.
    Can you disable AI training? Yes. Go to Account > Preferences and disable “AI data retention.”
    Is there a private chat mode? Yes. Click your profile icon, then select “Incognito” under your account name.
    Can you share chats with others? Yes, by generating a shareable link.
    Are your chats used for targeted ads? Yes. Perplexity says it may share your information with third-party advertising partners and may collect data from other sources (for instance, data brokers) to improve its ad targeting.
    How long does it keep your data? Until you delete your account.

    Duck.AI

    By default: Duck.AI doesn’t use your data to train AI, thanks to deals with major providers.
    Can humans review your chats? No.
    Can you disable AI training? Not applicable.
    Is there a private chat mode? No. You must delete previous chats individually or all at once through the sidebar.
    Can you share chats with others? No.
    Are your chats used for targeted ads? No.
    How long does it keep your data? Model providers keep anonymized data for up to 30 days, unless needed for legal or safety reasons.

    Proton Lumo

    By default: Proton Lumo doesn’t use your data to train AI.
    Can humans review your chats? No.
    Can you disable AI training? Not applicable.
    Is there a private chat mode? Yes. Click the glasses icon at the top right.
    Can you share chats with others? No.
    Are your chats used for targeted ads? No.
    How long does it keep your data? Proton does not store logs of your chats.
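
    Taken together, those answers reduce to a small comparison matrix. As a rough illustration only, and not part of the original reporting, here is a minimal Python sketch that encodes the yes/no answers above so they can be filtered programmatically. The booleans flatten real nuance (Grok’s private mode, for instance, is a per-chat button), so treat the write-ups above as authoritative.

    ```python
    # Hypothetical sketch: encodes the defaults reported above as booleans.
    # Policies change, and these flags flatten nuance, so verify against
    # each provider's own settings before relying on them.
    ASSISTANTS = {
        #                     trains on chats  private    used for
        #                     by default       chat mode  targeted ads
        "Anthropic Claude":   (True,           False,     False),
        "Microsoft Copilot":  (True,           False,     True),
        "xAI Grok":           (True,           True,      False),
        "Meta AI":            (True,           False,     True),
        "Perplexity":         (True,           True,      True),
        "Duck.AI":            (False,          False,     False),
        "Proton Lumo":        (False,          True,      False),
    }

    # Example: services that neither train on your chats by default
    # nor use them for targeted ads.
    for name, (trains, private_mode, ads) in ASSISTANTS.items():
        if not trains and not ads:
            print(name)  # Duck.AI, Proton Lumo
    ```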

    By Jared Newman

    This article originally appeared in Inc.’s sister publication, Fast Company.

    Fast Company

    Source link

  • Lionsgate Is Finding Out It’s Really Hard to Make Movies With AI

    Earlier this year, Michael Burns, vice-chairman of movie studio Lionsgate, made a bold claim. According to Vulture, he said that through a partnership with generative AI company Runway, Lionsgate, home to franchises like John Wick and The Hunger Games, could repackage one of its signature series as an entirely AI-generated anime in a matter of hours and resell it as a new movie.

    That notably has not happened. According to a report from The Wrap, it’s because the partnership, announced last year as a “first-of-its-kind” deal between a movie studio and a generative AI company, has not gone according to plan, allegedly hitting snags related to the size of Lionsgate’s catalog, the limitations of Runway’s model, and copyright and licensing concerns.

    Under the deal, Lionsgate gave Runway access to its complete library of films, which Runway would use to build a custom, exclusive model that Lionsgate could use to generate AI video. But, per The Wrap, Lionsgate’s library isn’t enough to produce a fully functioning model; in fact, the report claims, even Disney’s library wouldn’t be enough. Building a generative AI model requires a massive amount of data to produce sufficient, functional output. If the studio wanted to use Runway to create a lighting effect in a film, for instance, the model would only be able to render that effect if it had enough reference points to work with.

    That seems to check out. Even models trained on massive amounts of data, like Google’s Veo or OpenAI’s Sora, produce videos full of mistakes, glitches, and uncanny-valley oddities. A generative model built on a far more limited set of training data will have correspondingly limited capabilities.
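
    For a rough sense of that scale gap, here is a back-of-envelope sketch. Every figure in it is a hypothetical assumption; none of these numbers come from Lionsgate, Runway, or the cited reporting, and the point is the ratio rather than the values.

    ```python
    # Back-of-envelope illustration with hypothetical numbers; nothing
    # here is a reported figure.
    CATALOG_TITLES = 4_000        # assumed size of a large studio film library
    AVG_RUNTIME_HOURS = 2.0       # assumed average runtime per title
    WEB_SCALE_HOURS = 10_000_000  # assumed footage behind a web-scale video model

    catalog_hours = CATALOG_TITLES * AVG_RUNTIME_HOURS
    share = catalog_hours / WEB_SCALE_HOURS
    print(f"Studio catalog: ~{catalog_hours:,.0f} hours of footage")
    print(f"Share of a web-scale corpus: {share:.2%}")  # ~0.08%
    ```

    On those assumptions, an entire studio catalog amounts to well under a tenth of a percent of the footage a frontier video model might see, which is the shape of the problem The Wrap describes.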

    And then there are the legal questions surrounding a generative model whose output comes entirely from Lionsgate’s own catalog.

    Burns’ pitch of an anime-filtered version of a film? He told Vulture that he’d have to pay the actors and other rights holders to sell it. Who would that include? It’s not entirely clear. Do writers need to get a check? Do directors? What about gaffers, for their lighting work? The report indicates that a host of unanswered legal questions, beyond the simple fact that Lionsgate owns the intellectual property, stand in the way of actually releasing an AI-generated film.

    “We’re very pleased with our partnership with Runway and our other AI initiatives, which are progressing according to plan,” Peter Wilkes, Chief Communications Officer at Lionsgate, told Gizmodo. “We view AI as an important tool for serving our filmmakers, and we have already successfully applied it to multiple film and television projects to enhance quality, increase efficiency and create exciting new storytelling opportunities. We are also using AI to achieve significant cost savings and greater efficiency in the licensing of our film and television library. AI remains a centerpiece of our efforts to use new technologies to prepare our business for the future.”

    Runway did not respond to a request for comment.

    There are indicators that Lionsgate is making use of Runway, though possibly not via the planned exclusive model. In the Vulture piece from earlier this year, the company was reportedly creating an AI-generated trailer for a film that hadn’t been shot yet, in the hope that execs could sell it based on the fabricated scenes. Whether audiences or creatives are served by that process is a different question.

    AJ Dellinger

    Source link

  • Thanks to AI, Charlie Kirk Will Never Die for Some People

    There really is no rest for the wicked. Over the weekend, according to Religious News Service, at least three churches played for their congregations a posthumous message from Charlie Kirk, in which he assured those in the pews, “I’m fine, not because my body is fine, but because my soul is secure in Christ. Death is not the end, it’s a promotion.”

    Of course, it wasn’t actually Kirk speaking from his spot in the afterlife. It was an AI-generated clip that, before being played in these houses of worship, made the rounds on social media. The audio appears to have originated on TikTok, generated by the user NioScript, who posted the 51-second message a day after Kirk was killed. It has since garnered millions of listens, shared by users who record themselves reacting and crying as they hear the AI-generated message. All of that eventually led to the audio being played in churches like Prestonwood Baptist in Texas, where Pastor Jack Graham introduced it as AI, but as something that “moved” him and that he was sharing so his congregation could “hear what Charlie is saying regarding what happened to him this past week.”

    It is, again, not what Charlie Kirk is saying. But that has not stopped people from treating it as if it were real. Members of Prestonwood Baptist gave the video a standing ovation. Congregations at Dream City Church in Arizona and Awaken Church in San Marcos, California, both of which played the clip, also applauded, as pointed out by Religious News Service. Users on social media have responded to the audio with captions and comments like “This is exactly what Charlie would say if he could talk to us right now,” or “I know it’s AI but you can’t tell me this isn’t exactly what he’d say.”

    This way of coping with loss is not entirely new. People have always sought to remember and preserve the people they love after they pass, and technology has facilitated new ways to achieve that, whether it is an endless stream of photos that spark memories or the person’s online presence turned into a digital memorial. In the world of bereavement literature, these are often referred to as continuing bonds. In that way, an AI-generated audio clip or video of someone like Kirk isn’t all that different from sharing stories about him to keep his memory alive.

    It is different in that it’s a complete fabrication. It’s not a memory, which can also be faulty, but an invention from whole cloth. Yes, the model behind it may have access to Kirk’s words, likeness, and voice, all of which are omnipresent on the internet. But it is incapable of doing anything other than trying to autofill the void for the grieving.

    Creating an AI-replicated version of a deceased person to aid in the grieving process is a growing industry. A recent article in Nature highlights several efforts to better understand if chatbots trained on a loved one’s likeness can help the grieving work through the complex and intense feelings that come with loss. While there is some evidence to suggest users of “griefbots” have managed to find some internal sense of closure with their lost loved ones, there are real risks of harming people in a fragile emotional state, including making it hard to let go of the bot version of the person.

    There is also the very real worry that we simply aren’t able to differentiate between our real memories of a person and AI-generated ones implanted in our minds through these types of interactions. A study conducted by the MIT Media Lab found that exposure to even a single AI-edited image can affect a person’s memory, and that people exposed to AI-generated images “reported high levels of confidence in their false memories.”

    The reality for the people who are memorializing Kirk this way is that the vast majority of them never actually knew him. They have a parasocial relationship with him that they would like to continue, and the AI message allows that to happen because it, in their minds, captures his voice—or, maybe more accurately, captures what they want to hear.

    There is already plenty of ongoing debate about who exactly Charlie Kirk was and how he should be remembered without an AI-generated version of him injected into the conversation. But for the people grieving his loss who believe some part of Kirk’s soul lives on in that AI voice: perhaps just let it rest.

    AJ Dellinger

    Source link

  • ‘A burgeoning epidemic’: Why some kids are forming extreme emotional relationships with AI

    As more kids turn to artificial intelligence to answer questions or help them understand their homework, some appear to be forming too close a relationship with services such as ChatGPT — and that is taking a toll on their mental health.

    “AI psychosis,” while not an official clinical diagnosis, is a term clinicians are using to describe children who appear to be forming emotional bonds with AI, according to Dr. Ashley Maxie-Moreman, a clinical psychologist at Children’s National Hospital in D.C.

    Maxie-Moreman said symptoms can include delusions of grandeur, paranoia, fantastical relationships with AI, and even detachment from reality.

    “Especially teens and young adults are engaging with generative AI for excessive periods of time, and forming these sort of fantastical relationships with AI,” she said.

    In addition to forming close bonds with AI, those struggling with paranoia may see their condition worsen, with AI potentially affirming paranoid beliefs.

    “I think that’s more on the extreme end,” Maxie-Moreman said.

    More commonly, she said, young people are turning to generative AI for emotional support. They are sharing information about their emotional well-being, such as feeling depressed, anxious, socially isolated or having suicidal thoughts. The responses they receive from AI vary.

    “And I think on the more concerning end, generative AI, at times, has either encouraged youth to move forward with plans or has not connected them to the appropriate resources or flagged any crisis support,” Maxie-Moreman said.

    “It almost feels like this is a burgeoning epidemic,” she added. “Just in the past couple of weeks, I’ve observed cases of this.”

    Maxie-Moreman said kids who are already struggling with anxiety, depression, social isolation or academic stress are most at risk of developing these bonds with AI. That’s why, she said, if you suspect your child is suffering from those conditions, you should seek help.

    “I think it’s really, really important to get your child connected to appropriate mental health services,” she said.

    With AI psychosis, parents need to be on the lookout for symptoms. One could be a lack of desire to go to school.

    “They’re coming up with a lot of excuses, like, ‘I’m feeling sick,’ or ‘I feel nauseous,’ and maybe you’re finding that the child is endorsing a lot of physical symptoms that are sometimes unfounded in relation to attending school,” Maxie-Moreman said.

    Another sign is a child who appears to be isolating themselves and losing interest in things they used to look forward to, such as playing sports or hanging out with friends.

    “I don’t want to be alarmist, but I do think it’s important for parents to be looking out for these things and to just have direct conversations with their kiddos,” she said.

    Talking to kids about mental health concerns can be tricky, especially with teens, who, as Maxie-Moreman noted, can be irritable and a bit moody. But having a conversation with them is key.

    “I think not skirting around the bush is probably the most helpful thing. And I think teens tend to get a little bit annoyed with indirectness anyhow, so being direct is probably the best approach,” she said.

    To help prevent these issues, Maxie-Moreman suggested parents start doing emotional check-ins with their children from a young age.

    “Just making it sort of a norm in your household to have conversations about how your child is doing emotionally, checking in with them on a regular basis, is important. So starting at a young age is what I would recommend on the preventative end,” she said.

    She also encouraged parents to talk to their children about the limits of the technology they use, including generative AI.

    “I think that’s probably one of the biggest interventions that will be most helpful,” she said.

    Maxie-Moreman said tech companies must also be held accountable.

    “Ultimately, we have to hold our tech companies accountable, and they need to be implementing better safeguards, as opposed to just worrying about the commercialization of their products,” she said.

    Mike Murillo

    Source link