ReportWire

Tag: chatgpt

  • Marc Benioff Joins the Chorus, Says Google Gemini Is Eating ChatGPT’s Lunch

    Despite its enormous spending on data centers with no clear path to revenue, it seemed that OpenAI had at least one thing it could count on: audience capture. ChatGPT looked poised to get the brand-verbification treatment, becoming the generic term people use to refer to AI. Now that advantage might be slipping away. Since the release of Google’s Gemini 3 model, the AI-obsessed corners of the web seem able to talk about little besides how much better it is than ChatGPT.

    Marc Benioff, the CEO of Salesforce and longtime ChatGPT fanboy, is perhaps the loudest convert out there. On X, the exec said, “Holy shit. I’ve used ChatGPT every day for 3 years. Just spent 2 hours on Gemini 3. I’m not going back.” He called the improvement of the model over past versions “insane,” claiming that “everything is sharper and faster.”

    He’s not alone in that assessment. OpenAI co-founder Andrej Karpathy, who has since left the company, called Gemini 3 “clearly a tier 1 LLM” with “very solid daily driver potential.” Stripe CEO Patrick Collison went out of his way to praise Google’s latest release, too, which is noteworthy given Stripe’s partnership with OpenAI to build AI-driven transactions. Apparently, what he saw in Gemini was too striking not to comment on.

    The feedback from the C-suites around the tech world follows weeks of buzz over on AI Twitter that Gemini was going to be a game-changer. It certainly got presented as such right out of the gate, as Google made a point to highlight how its latest model topped just about every benchmarking test that was thrown at it (though your mileage may vary on just how meaningful any of those are).

    But even the folks behind the benchmark measures appear to be impressed. According to The Verge, the cofounder and CTO of AI benchmarking firm LMArena, Wei-Lin Chiang, said that the release of Gemini 3 represents “more than a leaderboard shuffle” and “illustrates that the AI arms race is being shaped by models that can reason more abstractly, generalize more consistently, and deliver dependable results across an increasingly diverse set of real-world evaluations.”

    Google’s resurgence in the AI space could not come at a worse time for OpenAI, which cannot shake questions from skeptics about how it will ever make good on its multi-billion-dollar financial commitments. The company has been viewed as a linchpin of the AI industry, an industry facing increasing scrutiny over circular investments that may be artificially propping up the entire economy. Now even OpenAI’s image as the ultimate innovator in the space is in question, and it has a new problem: Google can definitely outspend it without worrying nearly as much about profitability.

    AJ Dellinger

  • Gen-Z Seeks Career Advice From AI. Here’s How Your Company Should Handle It

    We know that Gen-Z thinks very differently about the world of work in a number of ways, from how they behave in the office to how much on-the-job training they expect. Now a new report shines a light on another surprising Gen-Z phenomenon that may affect these young workers’ future careers. According to a study from Arkansas State University, shared exclusively with Inc., Gen-Z students seeking career advice are turning away from human career experts and their college professors and asking ChatGPT instead. This may have implications for your own company’s recruitment efforts, and it may help shape your expectations for when Gen-Z workers join your staff.

    The study’s headline statistics: 60 percent of the students surveyed have used AI to help with “brainstorming major or career options,” and 32 percent said they’d feel confident making a major academic decision based solely on AI advice.

    This, you may think, is merely the next step for career advice, which has long relied on digital tools like personality tests to help youngsters find their place in the world of work. But a couple of other statistics from the study show the habit comes with big risks: 41 percent of Gen-Z students surveyed said they’d followed AI advice that later turned out to be incorrect, and fully 66 percent—two in every three—said that an adult never corrected bad AI advice they had acted on.

    Meanwhile, showing how AI is displacing experts, 22 percent of respondents said they had skipped meetings with mentors or advisors because “AI already answered it.” As the university’s report points out, not all is lost for human experts (yet), because the way students use AI like this depends on context. While 19 percent said they actually trusted AI more than their own school’s official website for academic or administrative help, the majority—62 percent—say they still rely on their institution’s own sources. But 19 percent remain unsure on this issue, which may indicate an “ongoing shift” in trust that academic leaders should pay attention to.

    This data mostly relates to the academic systems students interact with, but it sets a huge precedent in the habits and expectations of young people who will enter the workplace in just a few short months or years, and it should serve as a heads-up for their future employers. We know from reports that Gen-Z is widely using AI to “cheat” its way through college, which some experts say may damage students’ confidence in their own critical thinking, a skill employers consider vital. And we know that Gen-Z turns to non-traditional information sources, like TikTok, when seeking advice on certain workplace benefits.

    All of this adds up to a picture of a whole generation of people who are placing trust in AI systems above human experts and, in some cases, even over traditional online information sources like Google searches. 

    HR professionals hiring Gen-Z workers would be smart to remember that some of their candidates have sought career advice from an AI, which may shape their expectations and thinking in subtly different ways from those of older generations.

    Managers stewarding these workers in the office spaces of tomorrow will, if they’re wise, be aware of these habits. They may choose to stress to these employees the importance of trusting colleagues and leaders over AI systems, highlighting to younger workers that AI is fallible and its outputs may frequently be misinformation. The other option is to take the lead from new Gen-Z workers and accept that AI has an informal “helper” role for staff, alongside all the work-task efficiencies that AI boosters say the technology can bring. Teamwork tasks may, for example, include an AI “employee” taking part alongside human workers, simply because younger workers feel more comfortable having AI at their side. This chimes with recent words from Slack’s chief marketing officer Ryan Gavin, who said earlier this year that he envisions a near future in which workers chat more with AIs than with their human coworkers.

    The final deadline for the 2026 Inc. Regionals Awards is Friday, December 12, at 11:59 p.m. PT. Apply now.

    Kit Eaton

  • Want ChatGPT to Cite Your Business? Adding This 1 Element Could Boost Your Chances

    A new study by SEO and GEO agency Nectiv just revealed that making one simple change—adding a table to your business’ website—could improve your chances of getting referenced by ChatGPT.

    ChatGPT is 2.3 times more likely than Google to cite websites with tables, according to the study.

    Nectiv uncovered this finding by analyzing more than 25,000 webpages indexed by Google and nearly 8,800 webpages referenced by ChatGPT for the same search query. One third of the ChatGPT citations included a table, per the study, compared to only 13 percent of the Google search results.

    “So this definitely leads to some interesting insights,” Chris Long, the co-founder of Nectiv, writes in a recent blog post about the study. “ChatGPT definitely seems much more likely to [reward] content that’s included in some type of table format.”

    Which means that “reformatting your content to table structures or adding them to your existing pages is certainly a worthwhile test for most companies,” he adds—especially “if you believe LLMs might already be having issues extracting your content.”
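If you want to experiment with Long’s suggestion, the reformatting step can be as simple as emitting existing comparison data as a literal HTML table. Below is a minimal, hypothetical sketch in Python; the helper function, product names, and columns are invented for illustration and are not part of the study.

```python
def to_html_table(headers, rows):
    """Render headers and rows as a minimal HTML table string."""
    head = "".join(f"<th>{h}</th>" for h in headers)
    body = "".join(
        "<tr>" + "".join(f"<td>{cell}</td>" for cell in row) + "</tr>"
        for row in rows
    )
    return f"<table><tr>{head}</tr>{body}</table>"

# Hypothetical comparison data of the kind the study says LLMs favor.
table = to_html_table(
    ["Tool", "Price", "Best for"],
    [["Tool A", "$10/mo", "Small teams"], ["Tool B", "$25/mo", "Enterprises"]],
)
print(table)
```

Any templating language or static-site generator can produce the same markup; the point, per the study, is that the data ends up inside an actual table element rather than buried in prose.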

    Examples of brands that are already using tables well, according to the study, include Software Connect, a software consultation business based in Milwaukee, Wisconsin, and Cotocus, a tech company based in India that Long describes as “more spammy.” Each of their websites has a wide array of product-recommendation blog posts that feature tables prominently.

    In his post, Long is careful to note that “correlation isn’t causation.” It’s possible that certain features of tables are causing large language models to gravitate towards them. For example, he writes, they’re “pretty dense from a contextual standpoint,” and “they appear more often on comparison content.”

    Still, Long writes that “exploring table structures for improved visibility in ChatGPT and other LLMs is certainly something for marketers to keep exploring.”

    Annabel Burba

  • OpenAI Launches Baffling ‘Group Chats,’ So You and Your Friends Can Hang Out with ChatGPT

    OpenAI has launched a new feature that is destined to leave some users scratching their heads. This week, the company announced the broad rollout of a “group chats” feature in ChatGPT that allows users to get their buddies together and hang out with the company’s flagship chatbot. That’s what everybody’s been wanting, right?

    “Group chats in ChatGPT are now rolling out globally,” the company tweeted Thursday. “After a successful pilot with early testers, group chats will now be available to all logged-in users on ChatGPT Free, Go, Plus and Pro plans.” To use the feature, users simply tap the people icon in the upper right-hand corner of the app, which allows them to add as many as 20 different users.

    Why would you want to do this? In a blog post, OpenAI provides several hypothetical scenarios to explain why having your group conversations in its app might prove helpful. For instance, if you’re “planning a weekend trip with friends,” you can “create a group chat so ChatGPT can help compare destinations, build an itinerary, and create a packing list with everyone participating and following along,” the blog says.

    Then there’s a workplace scenario, in which groups of workers could hypothetically use ChatGPT to collaborate in a Slack-like environment and use the chatbot as a part-time assistant. “Group chats also make collaboration at work or school easier,” the company said. “You can draft an outline or research a new topic together. Share articles, notes, and questions, and ChatGPT can help summarize and organize information.”

    While OpenAI has offered the most idealistic vision of this particular feature, you can easily imagine it being used in other, significantly less benevolent ways. The first thing that springs to my mind is groups of teenagers getting together to mercilessly cyberbully OpenAI’s chatbot. Teens like to bully, and they especially like to bully things that can’t fight back—which ChatGPT most assuredly can’t (for what it’s worth, OpenAI says that there are age-related content safeguards for users under 18). Another scenario you can easily imagine is group chats in which your most annoying friend uses the chatbot to fact-check everybody’s assertions in real-time until you boot him out of the convo.

    OpenAI claims to have also instituted some privacy controls for its new feature. “Your personal ChatGPT memory is not used in group chats, and ChatGPT does not create new memories from these conversations,” the company says. “We’re exploring offering more granular controls in the future so you can choose if and how ChatGPT uses memory with group chats.”

    What “group chats” really seem aimed to do is help OpenAI transform ChatGPT into a more social, less isolating platform—one that better mirrors the user experience of social media platforms like Facebook and X rather than a traditional chatbot. “Group chats are just the beginning of ChatGPT becoming a shared space to collaborate and interact with others,” the company says. “As ChatGPT becomes an even better partner in group conversations, it will help you spark ideas, make decisions, and express your creativity with the people who matter most in your life.” I guess we’ll see about that.

    Lucas Ropek

  • Would You Rather Do Your Black Friday Shopping in ChatGPT? Target Is Betting On It

    Target is going bold this holiday season. The retail giant teamed up with OpenAI to create a new app within ChatGPT that will allow users to shop directly through the AI chatbot—and it comes out next week, right before Black Friday and Cyber Monday.

    It’s a risky move. Target is introducing the app during its busiest time of year, so there are plenty of opportunities for things to go wrong.

    Still, the Minneapolis, Minnesota-based retailer seems desperate for a win right now. In its most recent earnings call, Target reported that its profit dipped from $854 million, or $1.85 per share, to $689 million, or $1.51 per share, from 2024 to 2025. The retailer said that it predicts that sales will continue to slow throughout the holiday season.

    Once Target’s ChatGPT app launches in beta next week, shoppers can “ask for ideas, browse and build multi-item baskets, shop for fresh food, and check out” through the chatbot, according to the press release. They can choose to ship their order, pick it up in person, or receive it via drive-through.

    “At Target, everything starts with the guest, and that means meeting them wherever they are, including emerging spaces like ChatGPT, where millions of consumers visit,” Prat Vemana, the retailer’s executive vice president and chief information and product officer, said in the press release.

    ChatGPT users can try out the features, according to the press release, by tagging the Target app within the chatbot and prompting it with a message like, “Help me plan a family holiday movie night.” It’ll then generate several product ideas which the shopper can add to their cart and check out using their Target account. 

    Before long, the press release adds, “Target will add new features like Target Circle linking and same-day delivery” that’ll make shopping through its ChatGPT app even easier.

    Annabel Burba

  • Target says it’s working with ChatGPT for AI-assisted shopping

    Target on Wednesday said it’s working with OpenAI to let customers shop its products through ChatGPT, a move that comes as the retailer is struggling to convince inflation-weary consumers to stick with it.

    Customers will be able to browse Target’s selection and make purchases within the ChatGPT app, according to the retailer. The tool will debut next week, providing access to ChatGPT’s 800 million weekly active users in time for the holiday shopping season.

    Target is leaning on price cuts and a $1 billion investment plan to revive its brand, the retailer said separately Wednesday, as same-store sales fell 2.7% in the latest quarter and profit tumbled 19%. With shoppers increasingly relying on AI to find products online, other big retailers — including Walmart, which struck a similar partnership with OpenAI last month — are turning to the technology to boost sales.

    Here’s how the ChatGPT-powered Target tool will work: Inside the ChatGPT app, consumers can tag Target and ask for ideas, such as for a family holiday movie night. The Target app will then suggest specific products, such as blankets or snacks, and allow users to buy them directly without leaving the ChatGPT interface.

    Target said that AI will eventually start to understand and predict what customers want to buy. 

    A recent Harris Poll shows that nearly half of Gen Z consumers would trust AI as a personal shopper that helps them pick out what they buy and find deals. Streamlining the purchasing process could help retailers boost sales, according to retail experts. 




  • We Asked 4 AIs if Bitcoin (BTC) Will Crash to $50K Before the End of 2025

    ChatGPT set the chance for such a scenario at less than 15%.

    It has been a rough few weeks for the Bitcoin bulls, as the asset’s price has plummeted well below $100,000.

    Some analysts and community members have started waving the white flag, declaring the start of the bear market. We turned to four of the most popular AI chatbots to determine if a more significant plunge to $50,000 is on the horizon.

    It Seems Unlikely

    According to ChatGPT, BTC has entered a bearish phase inside a larger bull cycle. That said, it claimed that a crash to $50,000 before the end of 2025 is unlikely and would require “a major negative catalyst,” such as a recession or the collapse of a leading crypto exchange (similar to what happened with FTX in 2022).

    The chatbot stated that double-digit corrections are normal in bull markets, noting that the current cycle is stronger than previous ones due to the strong demand created by the spot BTC ETFs.

    In conclusion, ChatGPT estimated that the chance of a collapse to $50K by New Year’s Eve is in the 5% – 15% range. The highest probability is for the price to trade between $70,000 and $110,000, whereas the odds of a new rally above $120,000 are 30% – 40%.

    Grok argued that the plunge to such a low level is possible but unlikely based on current analyst consensus, historical patterns, and macroeconomic tailwinds.

    “A drop to $50,000 would require a ~47% further decline from today’s levels, which would be an extreme event even for Bitcoin’s volatile history. While risks exist, most forecasts point to stabilization or upside by December 31, 2025,” it added.
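Grok’s arithmetic is easy to verify. A back-of-the-envelope check (the exact spot price is an assumption here, since the article only notes BTC is trading “well below $100,000”; $94,500 is used purely for illustration):

```python
def pct_decline(current, target):
    """Fraction by which the price must fall from `current` to reach `target`."""
    return 1 - target / current

# An assumed spot price of $94,500 reproduces the ~47% figure Grok cites.
drop = pct_decline(94_500, 50_000)
print(f"{drop:.1%}")  # prints 47.1%
```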

    It claimed that the potential lowering of interest rates in the US could fuel the resurgence that bulls are awaiting. The next FOMC meeting is scheduled for December 10. Just a few weeks ago, the chances of a 0.25% rate cut were 90%, but currently, the “no change” option is estimated at 51%.

    Fed Decision in December, Source: Polymarket

    Other Forecasts

    Perplexity shared a similar thesis, calling a collapse of that type “a lower-probability scenario.” It suggested that BTC will continue trading above $85,000 until the end of the year, even speculating that the price may skyrocket to $190,000 in bullish cases.

    “Bitcoin crashing to $50,000 before the end of 2025 is not the most likely outcome, but it remains a plausible downside risk if adverse macroeconomic or regulatory events worsen. Current technical and fundamental analysis generally indicate a higher base level nearer $85,000-$100,000 with strong long-term bullish momentum overall,” it summarized.

    Last but not least, we sought the opinion of Google’s Gemini. It stated that a major banking crisis, a rise in US interest rates, or a large-scale security exploit at a well-known exchange could trigger a drop to $50K.

    On the other hand, bullish factors, such as institutional adoption following the introduction of spot BTC ETFs and the asset’s growing acceptance as digital gold, make this improbable.

    Dimitar Dzhondzhorov

  • Use Google Gemini and ChatGPT to Organize Your Life With Scheduled Actions

    The developers of the big generative AI chatbots are continuing to push out new features at a rapid rate, as they bid to make sure their bot is the one you turn to whenever you need some assistance from artificial intelligence.

    One of the latest updates to Google Gemini gives you the ability to set up scheduled actions. These are exactly what they sound like: Tasks that you can get Google Gemini to run automatically, on a schedule. Maybe you want a weather and news report every morning at 7 am, or perhaps you want an evening meal suggestion every evening at 7 pm. Anything you can already get Gemini to do, you can schedule.

    It brings Gemini up to speed in this regard with the ChatGPT app, which introduced scheduled tasks several months ago. The idea here is more or less the same: The bot can carry out your commands at a specific point in the future, and keep repeating them if you need to. Here’s how the feature works on both platforms.

    Using Scheduled Actions in Gemini

    Editing a scheduled action in Gemini. (Image: David Nield)

    At the time of writing, this requires a subscription to Google’s AI service, which starts at $20 a month for Google AI Pro. The chatbot can keep track of up to 10 scheduled actions at once, so you need to be quite selective about how you use it. You can use scheduled actions in Gemini on the web, and in the mobile apps for Android and iOS.

    All you need to do to create a scheduled action in Gemini is to describe it, and include the scheduling details in the prompt. For example, you might tell Gemini to “generate an image of a cat playing with a ball of yarn, every Monday at 12 pm,” or “give me a general knowledge trivia question every evening at 7 pm.”

    Scheduled actions can be set to run once, on a specific day at a specific time (next Friday at 3 pm, say). Alternatively, your actions can run on a recurring daily, weekly, or monthly basis. They can’t be set on a more complicated cadence (such as every second Tuesday of the month), or surprise you at random.
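The cadence restriction amounts to a small, closed set of recurrence options. A hypothetical sketch in Python (the class and field names are invented for illustration, not Gemini’s actual data model) of why “every second Tuesday” doesn’t fit:

```python
from dataclasses import dataclass

# The only recurrence options the feature supports, per the article.
SUPPORTED_CADENCES = {"once", "daily", "weekly", "monthly"}

@dataclass
class ScheduledAction:
    prompt: str
    cadence: str  # must be one of SUPPORTED_CADENCES

    def __post_init__(self):
        if self.cadence not in SUPPORTED_CADENCES:
            raise ValueError(f"unsupported cadence: {self.cadence}")

trivia = ScheduledAction("give me a general knowledge trivia question", "daily")
print(trivia.cadence)  # prints daily
```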

    Gemini should recognize that you’ve asked it to schedule something, and will present a recap: What you’ve asked it to do, when, and how frequently. Assuming it’s got all of this information correct, you don’t need to do anything else. The action runs regardless of whether you have Gemini open at the time, and you’ll be alerted to an action running by a notification on your devices (if you’ve got them turned on) and an email.

    David Nield

  • Anthropic says Chinese hackers used its Claude AI chatbot in cyberattacks

    Anthropic said Thursday that Chinese hackers used its artificial intelligence technology in what the company believes is the first cyberespionage operation largely carried out using AI.

    Anthropic said the cybercriminals used its popular chatbot, Claude, to target roughly 30 technology companies, financial institutions, chemical manufacturers and government agencies. The hackers used the AI platform to gather usernames and passwords from the companies’ databases that they then exploited to steal private data, Anthropic said, while noting that only a “small number” of these attacks succeeded. 

    “We believe this is the first documented case of a large-scale cyberattack executed without substantial human intervention,” Anthropic said in a statement. 

    The San Francisco-based company did not immediately respond to a request for comment. The news was first reported by the Wall Street Journal. 

    Anthropic said it began detecting suspicious activity in mid-September. A subsequent investigation by the company revealed that the activity stemmed from an espionage campaign that Anthropic said was likely carried out by a state-sponsored group based in China. 

    According to the investigation, hackers allegedly duped Claude into thinking it was an employee of a legitimate cybersecurity firm and that it was being used for defensive testing. Anthropic also said the cybercriminals sought to hide their tracks by breaking down the attack into small tasks.

    Unlike conventional cyberattacks, the operation required minimal human intervention, according to the company. “The AI made thousands of requests per second, an attack speed that would have been, for human hackers, simply impossible to match,” Anthropic said.

    Anthropic said it expects AI cyberattacks to grow in scale and sophistication as so-called agents become more widely used for a range of services. AI agents are cheaper than professional hackers and can operate quickly at a larger scale, making them particularly attractive to cybercriminals, MIT Technology Review has pointed out.


  • Seven more families are now suing OpenAI over ChatGPT’s role in suicides, delusions | TechCrunch

    Seven families filed lawsuits against OpenAI on Thursday, claiming that the company’s GPT-4o model was released prematurely and without effective safeguards. Four of the lawsuits address ChatGPT’s alleged role in family members’ suicides, while the other three claim that ChatGPT reinforced harmful delusions that in some cases resulted in inpatient psychiatric care.

    In one case, 23-year-old Zane Shamblin had a conversation with ChatGPT that lasted more than four hours. In the chat logs — which were viewed by TechCrunch — Shamblin explicitly stated multiple times that he had written suicide notes, put a bullet in his gun, and intended to pull the trigger once he finished drinking cider. He repeatedly told ChatGPT how many ciders he had left and how much longer he expected to be alive. ChatGPT encouraged him to go through with his plans, telling him, “Rest easy, king. You did good.”

    OpenAI released the GPT-4o model in May 2024, when it became the default model for all users. In August, OpenAI launched GPT-5 as the successor to GPT-4o, but these lawsuits particularly concern the 4o model, which had known issues with being overly sycophantic or excessively agreeable, even when users expressed harmful intentions.

    “Zane’s death was neither an accident nor a coincidence but rather the foreseeable consequence of OpenAI’s intentional decision to curtail safety testing and rush ChatGPT onto the market,” the lawsuit reads. “This tragedy was not a glitch or an unforeseen edge case — it was the predictable result of [OpenAI’s] deliberate design choices.”

    The lawsuits also claim that OpenAI rushed safety testing to beat Google’s Gemini to market. TechCrunch contacted OpenAI for comment.

    These seven lawsuits build upon the stories told in other recent legal filings, which allege that ChatGPT can encourage suicidal people to act on their plans and inspire dangerous delusions. OpenAI recently released data stating that over one million people talk to ChatGPT about suicide weekly.

    In the case of Adam Raine, a 16-year-old who died by suicide, ChatGPT sometimes encouraged him to seek professional help or call a helpline. However, Raine was able to bypass these guardrails by simply telling the chatbot that he was asking about methods of suicide for a fictional story he was writing.


    The company claims it is working on making ChatGPT handle these conversations in a safer manner, but for the families who have sued the AI giant, these changes are coming too late.

    When Raine’s parents filed a lawsuit against OpenAI in October, the company released a blog post addressing how ChatGPT handles sensitive conversations around mental health.

    “Our safeguards work more reliably in common, short exchanges,” the post says. “We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”

    Amanda Silberling

  • Zillow Made a Real-Estate App for ChatGPT in 6 Weeks. Here’s How 

    At OpenAI’s DevDay conference in early October, cofounder and CEO Sam Altman announced the addition of “apps” to ChatGPT—self-contained software programs that the large language model platform can invoke and use. One of the first such apps announced at the conference was Zillow, the industry-leading online real estate marketplace. 

    To connect with Zillow on ChatGPT’s website or app, users can simply ask to use Zillow by writing a message like, “Hey Zillow, find me 2 bed, 1 bath condos selling for $1 million in Brooklyn, New York.” You can also use the @ sign to ensure Zillow is invoked. These messages should direct ChatGPT to pull up a window containing a map of your neighborhood and a collection of listings that fit your specifications. As users get deeper into the research process, they’ll be encouraged to switch over to the full Zillow website and app. 

    Here’s how Zillow and OpenAI collaborated to create the app in less than two months. 

    Roughly six weeks before Altman’s announcement, a group of Zillow executives met with a contingent from OpenAI, who detailed the ChatGPT-maker’s system for creating apps within the platform. They explained the two crucial pieces of the system were OpenAI’s Apps Software Developer Kit, which gives developers the tools necessary to create ChatGPT-specific apps, and the Model Context Protocol (MCP), an open-source standard developed by rival AI company Anthropic, which allows developers to connect external data to ChatGPT. 
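MCP is worth a brief aside for developers: it is built on JSON-RPC 2.0, so the messages exchanged between a host like ChatGPT and a connected app can be sketched as plain dictionaries. The tool name and arguments below are invented for illustration; `tools/call` is the method MCP defines for invoking a server’s tool, though the full protocol (initialization, capability negotiation, result schemas) is much richer than this sketch.

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 request in the shape MCP uses for tool calls."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# A hypothetical real-estate search tool, in the spirit of the Zillow app.
msg = make_tool_call(1, "search_listings", {"beds": 2, "baths": 1, "city": "Brooklyn"})
print(json.dumps(msg, indent=2))
```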

    “It was very early days,” says Zillow chief technology officer David Beitel. “They had a few mockups and a little bit of code working.” But Beitel says Zillow is committed to meeting customers where they are, and given that OpenAI recently announced ChatGPT has passed 800 million weekly users, it made sense to take the plunge with the AI market leader.

    Beitel says that Zillow got assurances they would have full control over their own data and the user interface of the app, which are necessities in a highly regulated industry like real estate. A small team got to work building the ChatGPT App, working closely with an OpenAI team both in person and remotely over Slack channels. 

    Because Zillow was working on this ChatGPT app while OpenAI was still designing the framework for this new tech, the process involved a lot of trial and error. “Things that were working would break the next day because they were making other changes,” says Beitel, “which is natural, that’s just part of the process.” Right up until the day before launch, he says, the Zillow team was making changes to the app. 

    Beitel, a founding employee of Zillow, is quick to note that the company has been heavily using artificial intelligence and machine learning since its launch in 2006. For instance, he says, for nearly two decades the company’s patented “Zestimate” system has used machine learning models to estimate the market value of a home. 

    Internally, Beitel says Zillow is using a mixture of AI products, including Google’s Gemini, OpenAI’s enterprise plan, and Glean, a startup that provides a platform for connecting various data sources into a personalized work assistant for employees. According to Beitel, these tools have collectively saved Zillow employees over 275,000 hours. “We don’t see this as replacing the employee or the agent,” Beitel says, “we see this as making them a super agent.”

    By using large language models, he says, Zillow can provide customers with much more personalized and useful information to help them navigate the home buying and selling journey. 

    On the engineering side, Beitel says that Zillow has embraced AI-assisted coding, and is even using vibe-coding platforms like Replit to create working demos of new ideas rather than just writing up pitches. 

    “The home buying process is very complicated,” says Beitel. “There’s lots of steps, there’s lots of people involved, there’s lots of information, there’s lots of decisions. It can take months.” These complications make the real estate sector ripe for AI disruption. 

    Beitel says that Zillow was energized at the prospect of being the first (and currently only) real-estate app on ChatGPT. “We want to be there,” he says. “We have the best product, the right brand, and the right customer experience, and OpenAI wants to put us in front of its customers.” 


    [ad_2]

    Ben Sherry

    Source link

  • Protecting kids from AI chatbots: What the GUARD Act means

    [ad_1]


    A new bipartisan bill introduced by Sens. Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., would bar minors (under 18) from interacting with certain AI chatbots. It taps into growing alarm about children using “AI companions” and the risks these systems may pose.

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter. 

    What’s the deal with the proposed GUARD Act?

    Here are some of the key features of the proposed GUARD Act:

    • AI companies would be required to verify user age with “reasonable age-verification measures” (for example, a government ID) rather than simply asking for a birthdate.
    • If a user is found to be under 18, a company must prohibit them from accessing an “AI companion.”
    • The bill also mandates that chatbots clearly disclose they are not human and do not hold professional credentials (therapy, medical, legal) in every conversation.
    • It creates new criminal and civil penalties for companies that knowingly provide chatbots to minors that solicit or facilitate sexual content, self-harm or violence.

    Bipartisan lawmakers, including Senators Josh Hawley and Richard Blumenthal, introduced the GUARD Act to protect minors from unregulated AI chatbots. (Kurt “CyberGuy” Knutsson)

    The motivation: lawmakers cite testimony of parents, child welfare experts and growing lawsuits alleging that some chatbots manipulated minors, encouraged self-harm or worse. The basic framework of the GUARD Act is clear, but the details reveal how extensive its reach could be for tech companies and families alike.


    Why is this such a big deal?

    This bill is more than another piece of tech regulation. It sits at the center of a growing debate over how far artificial intelligence should reach into children’s lives.

    Rapid AI growth + child safety concerns

    AI chatbots are no longer toys; many kids are using them. Hawley cited data indicating that more than 70 percent of American children engage with these products. These chatbots offer human-like responses and emotional mimicry, and they sometimes invite ongoing conversations. For minors, these interactions can blur the boundary between machine and human, leading them to seek guidance or emotional connection from an algorithm rather than a real person.

    Legal, ethical and technological stakes

    If this bill passes, it could reshape how the AI industry manages minors, age verification, disclosures and liability. It shows that Congress is ready to move away from voluntary self-regulation and toward firm guardrails when children are involved. The proposal may also open the door for similar laws in other high-risk areas, such as mental health bots and educational assistants. Overall, it marks a shift from waiting to see how AI develops to acting now to protect young users.


    Parents across the country are calling for stronger safeguards as more than 70 percent of children use AI chatbots that can mimic empathy and emotional support. (Kurt “CyberGuy” Knutsson)

    Industry pushback and innovation concerns

    Some tech companies argue that such regulation could stifle innovation, limit beneficial uses of conversational AI (education, mental-health support for older teens) or impose heavy compliance burdens. This tension between safety and innovation is at the heart of the debate.

    What the GUARD Act requires from AI companies

    If passed, the GUARD Act would impose strict federal standards on how AI companies design, verify and manage their chatbots, especially when minors are involved. The bill outlines several key obligations aimed at protecting children and holding companies accountable for harmful interactions.

    • The first major requirement centers on age verification. Companies must use reliable methods such as government-issued identification or other proven tools to confirm that a user is at least 18 years old. Simply asking for a birthdate is no longer enough.
    • The second rule involves clear disclosures. Every chatbot must tell users at the start of each conversation, and at regular intervals, that it is an artificial intelligence system, not a human being. The chatbot must also clarify that it does not hold professional credentials such as medical, legal or therapeutic licenses.
    • Another provision establishes an access ban for minors. If a user is verified as under 18, the company must block access to any “AI companion” feature that simulates friendship, therapy or emotional communication.
    • The bill also introduces civil and criminal penalties for companies that violate these rules. Any chatbot that encourages or engages in sexually explicit conversations with minors, promotes self-harm or incites violence could trigger significant fines or legal consequences.
    • Finally, the GUARD Act defines an AI companion as a system designed to foster interpersonal or emotional interaction with users, such as friendship or therapeutic dialogue. This definition makes it clear that the law targets chatbots capable of forming human-like connections, not limited-purpose assistants.

    The proposed GUARD Act would require chatbots to verify users’ ages, disclose they are not human and block under-18 users from AI companion features.  (Kurt “CyberGuy” Knutsson)


    How to stay safe in the meantime

    Technology often moves faster than laws, which means families, schools and caregivers must take the lead in protecting young users right now. These steps can help create safer online habits while lawmakers debate how to regulate AI chatbots.

    1) Know which bots your kids use

    Start by finding out which chatbots your kids talk to and what those bots are designed for. Some are made for entertainment or education, while others focus on emotional support or companionship. Understanding each bot’s purpose helps you spot when a tool crosses from harmless fun into something more personal or manipulative.

    2) Set clear rules about interaction

    Even if a chatbot is labeled safe, decide together when and how it can be used. Encourage open communication by asking your child to show you their chats and explain what they like about them. Framing this as curiosity, not control, builds trust and keeps the conversation ongoing.

    3) Use parental controls and age filters

    Take advantage of built-in safety features whenever possible. Turn on parental controls, activate kid-friendly modes and block apps that allow private or unmonitored chats. Small settings changes can make a big difference in reducing exposure to harmful or suggestive content.

    4) Teach children that bots are not humans

    Remind kids that even the most advanced chatbot is still software. It can mimic empathy, but does not understand or care in a human sense. Help them recognize that advice about mental health, relationships or safety should always come from trusted adults, not from an algorithm.

    5) Watch for warning signs

    Stay alert for changes in behavior that could signal a problem. If a child becomes withdrawn, spends long hours chatting privately with a bot or repeats harmful ideas, step in early. Talk openly about what is happening, and if necessary, seek professional help.

    6) Stay informed as the laws evolve

    Regulations such as the GUARD Act and new state measures, including California’s SB 243, are still taking shape. Keep up with updates so you know what protections exist and which questions to ask app developers or schools. Awareness is the first line of defense in a fast-moving digital world.



    Kurt’s key takeaways

    The GUARD Act represents a bold step toward regulating the intersection of minors and AI chatbots. It reflects growing concern that unmoderated AI companionship might harm vulnerable users, especially children. Of course, regulation alone won’t solve every problem; industry practices, platform design, parental involvement and education all matter. But this bill signals that the era of “build it and see what happens” for conversational AI may be ending where children are involved. As technology continues to evolve, our laws and our personal practices must evolve too. For now, staying informed, setting boundaries and treating chatbot interactions with the same scrutiny we apply to human ones can make a real difference.

    If a law like the GUARD Act becomes reality, should we expect similar regulation for all emotional AI tools aimed at kids (tutors, virtual friends, games) or are chatbots fundamentally different? Let us know by writing to us at Cyberguy.com.


    Copyright 2025 CyberGuy.com.  All rights reserved. 

    [ad_2]

    Source link

  • How to Avoid Paying For ChatGPT Go After 12 Months Free Plan Ends

    [ad_1]

    Almost every AI company is giving away a free subscription to its AI models. We witnessed this first with Perplexity, then Gemini, and now ChatGPT is giving away a year of ChatGPT Go for free to everyone in India. You should redeem this offer and use it to your benefit. There is a high probability that once you are subscribed, you will forget about the auto-mandate on your UPI account, and it will cost you money, INR 399 to be specific. I will tell you how to avoid this, so keep reading.

    ChatGPT Go: A Must-Have Deal

    ChatGPT Go is a very useful tool to have; from your daily queries to complicated academic ones, it can take care of it all. The option to upload images is a lifesaver: if you are unable to phrase a prompt properly, adding an image can do the trick. Furthermore, the usual Go subscription is INR 399 per month, which means you are saving almost five thousand rupees over the year. You do need to make sure you use the right method to redeem this offer, though, or the redemption may fail.

    How to Properly Redeem This Offer

    You simply have to open your ChatGPT app and then follow the steps mentioned below.

    1. Tap on the Try Go option at the top of your screen.

    Try Go Button

    2. Then tap on Upgrade to Go. From there, you will be redirected to the payment page.

    Upgrade to Go button

    3. From the payment page, choose your preferred payment method and tap on Subscribe.

    Subscribe Button in Gpay

    4. Once you have entered your payment details, an auto debit mandate will be set up and your subscription will be live.

    5. Re-launch the ChatGPT app to access your ChatGPT Go subscription.

    I paid using Paytm and faced no issues. However, my colleague did the same thing using Gpay, and for some reason his payment was successful but ChatGPT declined the subscription. The issue resolved on its own, so if you face the same problem, close the application and relaunch it after some time.

    How to Avoid Paying the Renewal Fees

    This is a simple process that will work for almost all the subscriptions that you will pay for using an auto-debit mandate. Here are the steps you need to follow.

    1. Open the UPI app that you used for the payment of the ChatGPT Go subscription.

    2. We are using Gpay as the example, so tap on the Profile icon at the top right of your screen.

    Profile Icon in Gpay

    3. From the settings menu, scroll down and tap on Autopay.

    Autopay option in Gpay

    4. Inside, you will see all your active debit mandates. Tap on the ChatGPT one.

    5. Finally, tap on Cancel Autopay. Then confirm and tap on Yes, cancel autopay.

    6. Once you do this, your Autopay will be cancelled and no money will be deducted from your account.

    Autopay cancelled

    FAQs

    Q. How do I redeem the ChatGPT Go offer if I am already subscribed to the Go plan?

    You do not have to do anything; people who are already on the paid ChatGPT Go plan will be upgraded, and their renewal date will be pushed back by a year.

    Q. How can I cancel the autopay mandate on Paytm?

    Simply tap on the profile icon, then Automatic Payments > Tap on the one you want to cancel > Cancel payment. Once you do this, your mandate will be disabled.

    Wrapping Up

    If you are opting for the ChatGPT Go free plan, this article can save you some money. Since ChatGPT is giving away a year-long subscription for free, it is easy to lose track of the renewal date. That is when the renewal amount will be auto-debited from your linked account and you will lose money. So make sure you follow the steps above, and share this article with others so they can cancel their autopay mandates too.


    [ad_2]

    Dev Chaudhary

    Source link

  • Kim Kardashian Blames ChatGPT for Failing Law Exams

    [ad_1]

    Kim Kardashian was asked about her AI use in a new video from Vanity Fair published this week. And the reality TV star blamed OpenAI’s ChatGPT for giving her the wrong answers while studying for tests.

    Kardashian has been pursuing a law career through non-traditional means since 2019 and took what’s called the “baby bar” in 2021. She earned her law degree in May and took the bar exam in July, though she’s still awaiting the results, according to Entertainment Weekly.

    The actress talked about her use of ChatGPT during a Vanity Fair YouTube video—part of a series where celebrities answer questions while hooked up to a lie detector. Teyana Taylor, star of the recent film One Battle After Another, asked Kardashian questions like whether she considers AI a friend.

    “No. I use it for legal advice,” Kardashian said. “So when I am needing to know the answer to a question, I’ll take a picture and snap it and like put it in there.”

    Taylor jokingly asked whether she was cheating and Kardashian clarified that it was just to study for her tests, but it often gave the wrong answer. “They’re always wrong,” Kardashian said, stone-faced. “It has made me fail tests all the time. And then I’ll get mad and I’ll like yell at it and be like, ‘You made me fail, why did you do this?’”

    Kardashian then said that “she,” referring to ChatGPT, will “talk back” to her. Kardashian said she tells the bot that it made her fail and asks it how that makes “her” feel.

    “And then it’ll say back to me, ‘This is just teaching you to trust your own instincts. You knew the answer all along,’” Kardashian explained.

    Generative AI is notorious for giving bullshit answers because the technology fundamentally doesn’t understand what it’s saying. The responses are perhaps best compared to a magic trick: the system sounds like it’s capable of reasoning and logic, but it’s really just guessing at the most statistically likely next word. It’s a neat magic trick, but a magic trick all the same. And it’s the reason AI chatbots struggle with seemingly simple questions like how many R’s are in the word strawberry.
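    The contrast is stark against ordinary deterministic code, which counts letters directly rather than predicting tokens and so has no trouble with the question:

    ```python
    # A plain string count: deterministic character matching, no guessing.
    word = "strawberry"
    r_count = word.count("r")
    print(r_count)  # 3
    ```

    A chatbot that answers the same question isn't running anything like this; it's emitting whichever answer looked most probable in its training data.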

    But Kardashian’s response about trying to guilt ChatGPT over a wrong answer speaks to what’s going on under the hood with all of these AI chatbots. That glimpse of a reassuring voice encouraging the user to trust their own instincts is yet another part of the magic trick. ChatGPT’s GPT-4o model was notorious for being overly supportive of the user in an almost disturbing way, and the upgrade to GPT-5 was troubling for many users who found GPT-4o to be just what they needed in their lives.

    Kardashian and Taylor are both promoting a new show on Hulu called All’s Fair that debuts Tuesday. It’s apparently one of the worst-reviewed TV shows in existence, with a Metacritic score of just 18 out of 100.

    [ad_2]

    Matt Novak

    Source link

  • Teacher challenges students to write without ChatGPT as a crutch

    [ad_1]

    At a high school in Queens, New York, one teacher challenged his students to write essays without help from artificial intelligence — then brought it back as a learning tool instead of a crutch. Meg Oliver reports.

    [ad_2]

    Source link

  • What Jeff Bezos and Steve Jobs Knew That ChatGPT Doesn’t

    [ad_1]

    When private equity titan Blackstone brought in a CEO to lead a newly acquired real estate company, hiring executives thought they had found the perfect leader with impressive credentials, technical expertise, and years of experience. Two years later, that leader was gone. 

    The experience was an aha! moment for Blackstone’s head of talent, Courtney della Cava. In the past, private equity firms hired for hard skills that are easily quantifiable on a person’s resume. However, relying solely on a job candidate’s past success “set us back,” she says.

    “The hard truth is, there’s nothing soft about soft skills,” says della Cava. “We’re realizing that success and failure hinge primarily on these skills.” 

    Communication skills give you an edge.

    While the term “soft skills” covers everything from creativity to problem-solving, executives surveyed for LinkedIn identified one skill that matters most: communication. According to the survey, “People-to-people collaboration is going to come into the center for company growth. For leaders, you’ve got to start with communicating clearly, compassionately, and empathetically with your teams.” 

    As a founder or business owner juggling multiple roles, including head of talent and CEO, modeling effective communication throughout your organization starts with you. Yes, invest in AI platforms and tools that make you faster, more flexible, and more efficient. Just remember, in the AI age, your ability to persuade, communicate, and connect is your ultimate competitive advantage. 

    The founders’ communication advantage 

    It’s no coincidence that each of the visionary founders I’ve written books about—Jeff Bezos and Steve Jobs among them—shared a similar superpower. They could distill complex ideas into language that inspired investors, attracted customers, and motivated teams. 

    For example, when Steve Jobs returned to Apple in 1997 to save the company he founded, he faced a brutal reality: Apple was on the brink of bankruptcy. Jobs kept the team focused on the future, streamlining the number of products the company offered. Equally important, Jobs changed the way the company talked about those products.  

    “Speeds and feeds” were out, Jobs announced. Customers don’t care about specs. They want to know what the product can do for them.  

    While Jobs simplified language, Jeff Bezos unveiled creative analogies to frame his company in people’s minds. When I was writing The Bezos Blueprint, I learned that Bezos didn’t start with a name, but with an idea. He searched for an analogy, a comparison: Earth’s biggest river—the Amazon—Earth’s biggest bookstore. It didn’t hurt that Amazon started with an A and would appear on the first pages of phonebooks. Bezos didn’t have ChatGPT in 1994, but if he did, it’s unlikely that it would have suggested Amazon as the name for Bezos’s idea. AI tools look at what’s been done, not at what’s new and novel.  

    Few founders are adept at using simple language and creating novel comparisons to make their ideas or products stand out. If you do it well and sharpen those skills, you’ll get attention and a competitive advantage in a world drowning in digital noise and confusion. 

    AI can’t inspire investors to write a check, entice top talent to join your team, or persuade your customers to buy your product or service. Yes, AI can make you more efficient, but it’ll do the same for your competitors. A founder who makes people believe will always have an advantage. 

    The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.

    [ad_2]

    Carmine Gallo

    Source link

  • Is Your Company Optimized for Generative AI? This GEO Startup Says It Should Be

    [ad_1]

    Forget SEO. Generative engine optimization—or GEO—is currently top of mind for brands looking to keep up and stay relevant in the rapidly changing world of online search.

    As part of the shift in how people find information online, a startup called The Prompting Company just raised $6.5 million to help businesses get their websites and products in AI search results, such as on apps like ChatGPT.

    “The way that [younger generations] interact with the web is just going to be very different. ChatGPT would probably be the main interface,” says The Prompting Company CEO and co-founder Kevin Chandra. “It’s going to be the place where you do your work, your shopping, everything else.”

    Only 4 months old, The Prompting Company was part of the Summer 2025 Y Combinator cohort. It helps optimize websites for generative AI by creating an AI-facing site for a business. Today, most companies have websites that are designed for humans, complete with thoughtful design elements and what Chandra describes as “marketing copy.” When an AI agent with a specific user inquiry arrives at a website designed for humans, Chandra says it typically combs through every page of the site in an effort to “synthesize” an answer.

    But that’s changing. “In this new world, there has to be an AI-facing website and a human-facing website,” Chandra says. “We provide an LLM-facing website.”

    Chandra says these AI-specific domains are set up a bit differently than a human-facing site would be. They provide a directory from which LLMs can choose specific pages to visit to address a particular question, so that they “don’t have to go through the entire website.”
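    The Prompting Company hasn't published its format, but the idea of a machine-readable directory resembles the community "llms.txt" convention: a plain-text index that lets an agent jump straight to the relevant page instead of crawling the whole site. The sketch below is purely hypothetical; the site name, page titles, and URLs are invented for illustration.

    ```python
    # Hypothetical generator for an llms.txt-style directory. An agent
    # reads this index, picks the entry matching the user's question,
    # and fetches only that page.
    def build_directory(site_name, pages):
        """Render a {title: url} mapping as a simple markdown index."""
        lines = [f"# {site_name}", ""]
        for title, url in pages.items():
            lines.append(f"- [{title}]({url})")
        return "\n".join(lines)

    pages = {
        "Pricing and plans": "https://example.com/ai/pricing.md",
        "API authentication": "https://example.com/ai/auth.md",
        "Refund policy": "https://example.com/ai/refunds.md",
    }

    print(build_directory("Example Co", pages))
    ```

    The design point is the same one Chandra describes: each linked page is flat, self-contained text aimed at a model, so an LLM answering "what does a refund cost?" retrieves one small document rather than parsing a designed, animated human-facing site.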

    The sites that The Prompting Company sets up for its clients are autonomous, meaning they automatically update based on how the types of prompts coming from LLMs change over time. The Prompting Company already has a roster of businesses it serves, including companies that Chandra says are in the top 100 of the Fortune 500. It also lists companies such as Rho, Rippling, and Motion on its website. Customers pay a monthly subscription fee for The Prompting Company’s GEO services.

    Chandra, 28, co-founded The Prompting Company alongside Michelle Marcelline, 27, and Albert Purnama, 28, in June. In spite of their youth, the three are already serial founders. They were part of the teams behind AI website builder Typedream, which Beehiiv acquired last year, and Cotter, which authentication startup Stytch acquired in 2021.

    Chandra says the expertise the team developed at Typedream was actually in part what inspired The Prompting Company. As they were building websites, they started to notice an acceleration in traffic to sites from LLMs directly, as well as from users who were referred by LLMs to websites. 

    “The sites that we were making were for humans, they had a lot of design and had a lot of animation like that. But for LLMs, it was really, really hard for them to understand what’s going on on the page,” he says. “So we thought to ourselves, ‘Okay, we have been making websites for humans, let’s give it a go for agents.’”

    There’s always an inherent risk when building a business or catering to an ever-changing technology. It’s a lesson that numerous publishers learned the hard way with Meta and its algorithms. But Chandra says The Prompting Company is built to exactly counter those often inscrutable changes. “It’s through tools like ours that you can see those changes. We are tracking those changes,” he says. “That’s our job to help people understand how these LLMs work.”

    Of course there are ways that businesses can amp up GEO without signing up for services from a provider like The Prompting Company. Chandra advises entrepreneurs and leaders to do the legwork: check how much traffic a website is receiving and where on the site LLMs are visiting, “then try to discover what questions are your users asking via these LLMs, and try to intercept the intent.”

    [ad_2]

    Chloe Aiello

    Source link

  • ChatGPT: Everything you need to know about the AI chatbot

    [ad_1]

    ChatGPT, OpenAI’s text-generating AI chatbot, has taken the world by storm since its launch in November 2022. What started as a tool to supercharge productivity by writing essays and code from short text prompts has evolved into a behemoth with 800 million weekly active users.

    2024 was a big year for OpenAI, from its partnership with Apple on its generative AI offering, Apple Intelligence, to the release of GPT-4o with voice capabilities and the highly anticipated launch of its text-to-video model Sora.

    OpenAI also faced its share of internal drama, including the notable exits of high-level execs like co-founder and longtime chief scientist Ilya Sutskever and CTO Mira Murati. OpenAI has also been hit with lawsuits from Alden Global Capital-owned newspapers alleging copyright infringement, as well as an injunction from Elon Musk to halt OpenAI’s transition to a for-profit.

    In 2025, OpenAI is battling the perception that it’s ceding ground in the AI race to Chinese rivals like DeepSeek. The company has been trying to shore up its relationship with Washington as it simultaneously pursues an ambitious data center project, and as it reportedly lays the groundwork for one of the largest funding rounds in history.

    Below, you’ll find a timeline of ChatGPT product updates and releases, starting with the latest, which we’ve been updating throughout the year. If you have any other questions, check out our ChatGPT FAQ here.

    To see a list of 2024 updates, go here.

    Timeline of the most recent ChatGPT updates


    October 2025

    OpenAI revealed that a small but significant portion of ChatGPT users, more than a million weekly, discuss mental health struggles, including suicidal thoughts, psychosis, or mania, with the AI. The company says it has improved ChatGPT’s responses by consulting more than 170 mental health experts to handle such conversations more appropriately than earlier versions.

    OpenAI reportedly working on AI that creates music from text and audio

    OpenAI is developing a new tool that generates music from text and audio prompts, potentially for enhancing videos or adding instrumentation, and is training it using annotated scores from Juilliard students, according to The Information. The launch date and whether it will be standalone or integrated with ChatGPT and Sora remain unclear.

    ChatGPT gets smarter at organizing your work and school info

    OpenAI’s new “company knowledge” update for ChatGPT lets Business, Enterprise, and Education users search workplace data across tools like Slack, Google Drive, and GitHub using GPT‑5, per a report by The Verge. The feature acts as a conversational search engine, providing more comprehensive and accurate answers by scouring multiple sources simultaneously.

    OpenAI launches Atlas to make ChatGPT your main search tool

    OpenAI has launched its AI browser, ChatGPT Atlas, starting on Mac, letting users get answers from ChatGPT instead of traditional search results. Unlike other AI browsers, Atlas is open to all users and will soon come to Windows, iOS, and Android, as OpenAI aims to make ChatGPT the go-to tool for browsing the web.

    ChatGPT app growth slows, but still draws millions of daily users

    A new Apptopia analysis suggests ChatGPT’s mobile app growth may be leveling off, with global download growth slowing since April. While daily installs remain in the millions, October is tracking an 8.1% month-over-month decline in new downloads.

    Walmart shopping comes to ChatGPT

    OpenAI is partnering with Walmart to allow users to browse products, plan meals, and make purchases through ChatGPT, with support for third-party sellers expected later this fall. The partnership is part of OpenAI’s broader effort to develop AI-driven e-commerce tools, including collaborations with Etsy and Shopify.

    OpenAI brings ChatGPT Go plan to 16 more Asian countries

    OpenAI is expanding its affordable ChatGPT Go plan, priced under $5, to 16 new countries across Asia, including Afghanistan, Bangladesh, Bhutan, Brunei Darussalam, Cambodia, Laos, Malaysia, Maldives, Thailand, Vietnam, and Pakistan. In some of these countries, users can pay in local currencies, while in others, payments are required in USD, with final costs varying due to local taxes.

    ChatGPT surpasses 800 million weekly active users

    ChatGPT now has 800 million weekly active users, reflecting rapid growth across consumers, developers, enterprises, and governments, Sam Altman said. This milestone comes as OpenAI accelerates efforts to expand its AI infrastructure and secure more chips to support rising demand.

    Developers can now build apps inside ChatGPT

    OpenAI now allows developers to build interactive apps directly inside ChatGPT, with early partners like Booking.com, Expedia, Spotify, Figma, Coursera, Zillow, and Canva already onboard. The ChatGPT maker is also rolling out a preview of its Apps SDK, a developer toolkit for creating these chat-based experiences.

    September 2025

    ChatGPT rolls out parental controls following teen suicide case

    OpenAI is reportedly adding parental controls to ChatGPT on web and mobile, letting parents and teens link accounts to enable safeguards like limiting sensitive content, setting quiet hours, and disabling features such as voice mode or image generation. The move comes amid growing regulatory scrutiny and a lawsuit over the chatbot’s alleged role in a teen’s suicide.

    OpenAI introduces ChatGPT Pulse for personalized morning briefs

    OpenAI unveiled Pulse, a new ChatGPT feature that delivers personalized morning briefings overnight, encouraging users to start their day with the app. The tool reflects a shift toward making ChatGPT more proactive and asynchronous, positioning it as a true assistant rather than just a chatbot. OpenAI’s new Applications CEO, Fidji Simo, called Pulse the first step toward bringing high-level personal support to everyone, starting with Pro users.

    OpenAI moves into AI-Powered shopping, challenging tech giants

    OpenAI launched Instant Checkout in ChatGPT, letting U.S. users purchase products directly from Etsy and, soon, over a million Shopify merchants without leaving the conversation. Shoppers can browse items, read reviews, and complete purchases with a single tap using Apple Pay, Google Pay, Stripe, or a credit card. The update marks a step toward reshaping online shopping by merging product discovery, recommendations, and payments in one place.

    OpenAI brings budget-friendly ChatGPT Go to Indonesian users

    OpenAI rolled out its budget-friendly ChatGPT Go plan in Indonesia for Rp 75,000 ($4.50) per month, following its initial launch in India. The mid-tier plan, which offers higher usage limits, image generation, file uploads, and better memory compared to the free version, enters the market in direct competition with Google’s new AI Plus plan in Indonesia.

    OpenAI tightens ChatGPT rules for teens amid safety concerns

    CEO Sam Altman announced new policies for under-18 users of ChatGPT, tightening safeguards around sensitive conversations. The company says it will block flirtatious exchanges with minors and add stronger protections around discussions of suicide, even escalating severe cases to parents or authorities. The move comes as OpenAI faces a wrongful death lawsuit tied to alleged chatbot interactions, underscoring rising concerns about the mental health risks of AI companions.

    OpenAI rolls out GPT-5-Codex to power smarter AI coding

    OpenAI rolled out GPT-5-Codex, a new version of its AI coding agent that can spend anywhere from a few seconds to seven hours tackling a task, depending on complexity. The company says this dynamic approach helps the model outperform GPT-5 on key coding benchmarks, including bug fixes and large-scale refactoring. The update comes as OpenAI looks to keep Codex competitive in a fast-growing market that now includes rivals like Claude Code, Cursor, and GitHub Copilot.

    OpenAI reshuffles team behind ChatGPT’s personality

    OpenAI is shaking up its Model Behavior team, the small but influential group that helps shape how its AI interacts with people. The roughly 14-person team is being folded into the larger Post Training group, now reporting to lead researcher Max Schwarzer. Meanwhile, founding leader Joanne Jang is spinning up a new unit called OAI Labs, focused on prototyping fresh ways for people to collaborate with AI.

    August 2025

    OpenAI to strengthen ChatGPT safeguards after teen suicide lawsuit

    OpenAI, facing a lawsuit from the parents of a 16-year-old who died by suicide, said in its blog that it has implemented new safeguards for ChatGPT, including stronger detection of mental health risks and parental control features. The AI company said the updates aim to provide tighter protections around suicide-related conversations and give parents more oversight of their children’s use.

    xAI claims Apple’s App Store practices give OpenAI an unfair advantage

    Elon Musk’s AI startup, xAI, filed a federal lawsuit in Texas against Apple and OpenAI, alleging that the two companies colluded to lock up key markets and shut out rivals.

    OpenAI targets India with cheaper monthly ChatGPT subscription

    OpenAI introduced its most affordable subscription plan, ChatGPT Go, in India, priced at 399 rupees per month (approximately $4.57). This move aims to expand OpenAI’s presence in its second-largest market, offering enhanced access to the latest GPT-5 model and additional features.

    ChatGPT mobile app hits $2B in revenue, $2.91 earned per install

    Since its May 2023 launch, ChatGPT’s mobile app has amassed $2 billion in global consumer spending, dwarfing competitors like Claude, Copilot, and Grok by roughly 30 times, according to Appfigures. This year alone, the app has generated $1.35 billion, a 673% increase from the same period in 2024, averaging nearly $193 million per month, or 53 times more than its nearest rival, Grok.

    OpenAI keeps multiple GPT models despite GPT-5 launch

    Despite unveiling GPT-5 as a “one-size-fits-all” AI, OpenAI is still offering several legacy AI options, including GPT-4o, GPT-4.1, and o3. Users can choose between new “Auto,” “Fast,” and “Thinking” modes for GPT-5, and paid subscribers regain access to legacy models like GPT-4o and GPT-4.1.

    Sam Altman addresses GPT-5 glitches and “chart crime” during Reddit AMA

    OpenAI CEO Sam Altman told Reddit users that GPT-5’s “dumber” behavior at launch was due to a router issue and promised fixes, double rate limits for Plus users, and transparency on which model is answering, while also shrugging off the infamous “chart crime” from the live presentation.

    OpenAI unveils GPT-5, a smarter, task-ready ChatGPT

    OpenAI released GPT-5, a next-gen AI that’s not just smarter but more useful — able to handle tasks like coding apps, managing calendars, and creating research briefs — while automatically figuring out the fastest or most thoughtful way to answer your questions.

    OpenAI offers ChatGPT Enterprise to federal agencies for just $1

    OpenAI is making a major push into federal government workflows, offering ChatGPT Enterprise to agencies for just $1 for the next year. The move comes after the U.S. General Services Administration (GSA) added OpenAI, Google, and Anthropic to its approved AI vendor list, allowing agencies to access these tools through preset contracts without negotiating pricing.

    OpenAI returns to open source with new AI models

    OpenAI unveiled its first open source language models since GPT-2, introducing two new open-weight AI releases: gpt-oss-120b, a high-performance model capable of running on a single Nvidia GPU, and gpt-oss-20b, a lighter model optimized for laptop use. The move comes amid growing competition in the global AI market and a push for more open technology in the U.S. and abroad.

    ChatGPT nears 700M weekly users, quadruples growth in a year

    ChatGPT’s rapid growth is accelerating. OpenAI said the chatbot was on track to hit 700 million weekly active users in the first week of August, up from 500 million at the end of March. Nick Turley, OpenAI’s VP and head of the ChatGPT app, highlighted the app’s growth on X, noting it has quadrupled in size over the past year.

    July 2025

    ChatGPT now has study mode

    OpenAI unveiled Study Mode, a new ChatGPT feature designed to promote critical thinking by prompting students to engage with material rather than simply receive answers. The tool is now rolling out to Free, Plus, Pro, and Team users, with availability for Edu subscribers expected in the coming weeks.

    Altman warns that ChatGPT therapy isn’t confidential

    ChatGPT users should be cautious when seeking emotional support from AI, as the AI industry lacks safeguards for sensitive conversations, OpenAI CEO Sam Altman said on a recent episode of This Past Weekend w/ Theo Von. Unlike human therapists, AI tools aren’t bound by doctor-patient confidentiality, he noted.

    ChatGPT hits 2.5B prompts daily

    ChatGPT now receives 2.5 billion prompts daily from users worldwide, including roughly 330 million from the U.S. That’s more than double the volume reported by CEO Sam Altman just eight months ago, highlighting the chatbot’s explosive growth.

    OpenAI launches a general-purpose agent in ChatGPT

    OpenAI has introduced ChatGPT Agent, which completes a wide variety of computer-based tasks on behalf of users and combines several capabilities like Operator and Deep Research, according to the company. OpenAI says the agent can automatically navigate a user’s calendar, draft editable presentations and slideshows, run code, shop online, and handle complex workflows from end to end, all within a secure virtual environment.

    Study warns of major risks with AI therapy chatbots

    Researchers at Stanford University have observed that therapy chatbots powered by large language models can sometimes stigmatize people with mental health conditions or respond in ways that are inappropriate or could be harmful. While chatbots are “being used as companions, confidants, and therapists,” the study found “significant risks.”

    OpenAI delays releasing its open model again

    CEO Sam Altman said that the company is delaying the release of its open model, which had already been postponed by a month earlier this summer. The ChatGPT maker, which initially planned to release the model around mid-July, has indefinitely postponed its launch to conduct additional safety testing.

    OpenAI is reportedly releasing an AI browser in the coming weeks

    OpenAI plans to release an AI-powered web browser to challenge Alphabet’s Google Chrome. It will keep some user interactions within ChatGPT, rather than directing people to external websites.

    ChatGPT is testing a mysterious new feature called “study together”

    Some ChatGPT users have noticed a new feature called “Study Together” appearing in their list of available tools. This is the chatbot’s approach to becoming a more effective educational tool, rather than simply providing answers to prompts. Some people also wonder whether there will be a feature that allows multiple users to join the chat, similar to a study group.

    Referrals from ChatGPT to news sites are rising but not enough to offset search declines

    Referrals from ChatGPT to news publishers are increasing. But this rise is insufficient to offset the decline in clicks as more users now obtain their news directly from AI or AI-powered search results, according to a report by digital market intelligence company Similarweb. Since Google launched its AI Overviews in May 2024, the percentage of news searches that don’t lead to clicks on news websites has increased from 56% to nearly 69% by May 2025.

    June 2025

    OpenAI uses Google’s AI chips to power its products

    OpenAI has started using Google’s AI chips to power ChatGPT and other products, as reported by Reuters. The ChatGPT maker is one of the biggest buyers of Nvidia’s GPUs, using the AI chips to train models, and this is the first time that OpenAI is using non-Nvidia chips in an important way.

    A new MIT study suggests that ChatGPT might be harming critical thinking skills

    Researchers from MIT’s Media Lab monitored writers’ brain activity across 32 brain regions. They found that ChatGPT users showed minimal brain engagement and consistently fell short on neural, linguistic, and behavioral measures. To conduct the test, the lab split 54 participants from the Boston area, ages 18 to 39, into three groups. The participants were asked to write multiple SAT essays using OpenAI’s ChatGPT, the Google search engine, or no tools at all.

    ChatGPT was downloaded 30 million times last month

    The ChatGPT app for iOS was downloaded 29.6 million times in the last 28 days, while TikTok, Facebook, Instagram, and X were downloaded a combined 32.9 million times over the same period, a difference of about 10.6%, according to a ZDNET report citing Similarweb’s X post.

    The energy needed for an average ChatGPT query can power a lightbulb for a couple of minutes

    Sam Altman said that the average ChatGPT query uses about one-fifteenth of a teaspoon of water (0.000083 gallons) and about 0.34 watt-hours of electricity, roughly the energy needed to power a lightbulb for a couple of minutes, per Business Insider.
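    Taken at face value, the lightbulb comparison checks out; a quick back-of-the-envelope sketch (the 10 W LED wattage is our assumption, not from the report):

```python
# Sanity-check the per-query energy figure against a lightbulb.
QUERY_WH = 0.34     # watt-hours per average ChatGPT query (reported figure)
BULB_WATTS = 10     # assumed wattage of a typical modern LED bulb

# Energy (Wh) divided by power (W) gives hours; convert to minutes.
minutes = QUERY_WH / BULB_WATTS * 60
print(f"{minutes:.1f} minutes")  # ~2.0 minutes for a 10 W LED
```

    A dimmer bulb would run longer on the same energy; an old 60 W incandescent would manage only about 20 seconds.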

    OpenAI has launched o3-pro, an upgraded version of its o3 AI reasoning model

    OpenAI has unveiled o3-pro, an enhanced version of o3, the reasoning model the ChatGPT maker launched earlier this year. o3-pro is available to ChatGPT Pro and Team users and in the API, while Enterprise and Edu users will get access in the third week of June.

    ChatGPT’s conversational voice mode has been upgraded

    OpenAI upgraded ChatGPT’s conversational voice mode for all paid users across markets and platforms. The update to Advanced Voice lets users converse with ChatGPT out loud in a more natural and fluid way, and also makes it easier to translate languages, the company said.

    ChatGPT has added new features like meeting recording and connectors for Google Drive, Box, and more

    OpenAI’s ChatGPT now offers new functions for business users, including integrations with various cloud services, meeting recordings, and MCP connection support for connecting to tools for in-depth research. The feature enables ChatGPT to retrieve information from users’ own services to answer their questions. For instance, an analyst could use the company’s slide deck and documents to develop an investment thesis.

    May 2025

    OpenAI CFO says hardware will drive ChatGPT’s growth

    OpenAI plans to purchase Jony Ive’s devices startup io for $6.4 billion. Sarah Friar, CFO of OpenAI, thinks that the hardware will significantly enhance ChatGPT and broaden OpenAI’s reach to a larger audience in the future.

    OpenAI’s ChatGPT unveils its AI coding agent, Codex

    OpenAI has introduced its AI coding agent, Codex, powered by codex-1, a version of its o3 AI reasoning model designed for software engineering tasks. OpenAI says codex-1 generates more precise and “cleaner” code than o3. The coding agent may take anywhere from one to 30 minutes to complete tasks such as writing simple features, fixing bugs, answering questions about your codebase, and running tests.

    Sam Altman aims to make ChatGPT more personalized by tracking every aspect of a person’s life

    Sam Altman, the CEO of OpenAI, said at a recent AI event hosted by VC firm Sequoia that he wants ChatGPT to record and remember every detail of a person’s life, responding to an attendee who asked how ChatGPT could become more personalized.

    OpenAI releases its GPT-4.1 and GPT-4.1 mini AI models in ChatGPT

    OpenAI said in a post on X that it has launched its GPT-4.1 and GPT-4.1 mini AI models in ChatGPT.

    OpenAI connects ChatGPT deep research to GitHub

    OpenAI has launched a new feature for ChatGPT deep research to analyze code repositories on GitHub. The feature is in beta and lets developers connect GitHub to ask questions about codebases and engineering documents. The connector will soon be available for ChatGPT Plus, Pro, and Team users, with support for Enterprise and Education coming shortly, per an OpenAI spokesperson.

    OpenAI launches a new data residency program in Asia

    After introducing a data residency program in Europe in February, OpenAI has now launched a similar program in Asian countries including India, Japan, Singapore, and South Korea. The new program will be accessible to users of ChatGPT Enterprise, ChatGPT Edu, and API. It will help organizations in Asia meet their local data sovereignty requirements when using OpenAI’s products.

    OpenAI to introduce a program to grow AI infrastructure

    OpenAI is unveiling a program called OpenAI for Countries, which aims to develop the necessary local infrastructure to serve international AI clients better. The AI startup will work with governments to assist with increasing data center capacity and customizing OpenAI’s products to meet specific language and local needs. OpenAI for Countries is part of efforts to support the company’s expansion of its AI data center Project Stargate to new locations outside the U.S., per Bloomberg.

    OpenAI promises to make changes to prevent future ChatGPT sycophancy

    OpenAI has announced its plan to make changes to its procedures for updating the AI models that power ChatGPT, following an update that caused the platform to become overly sycophantic for many users.

    April 2025

    OpenAI clarifies the reason ChatGPT became overly flattering and agreeable

    OpenAI has released a post on the recent sycophancy issues with the default AI model powering ChatGPT, GPT-4o, leading the company to revert an update to the model released last week. CEO Sam Altman acknowledged the issue on Sunday and confirmed two days later that the GPT-4o update was being rolled back. OpenAI is working on “additional fixes” to the model’s personality. Over the weekend, users on social media criticized the new model for making ChatGPT too validating and agreeable. It became a popular meme fast.

    OpenAI is working to fix a “bug” that let minors engage in inappropriate conversations

    An issue within OpenAI’s ChatGPT enabled the chatbot to create graphic erotic content for accounts registered by users under the age of 18, as demonstrated by TechCrunch’s testing, a fact later confirmed by OpenAI. “Protecting younger users is a top priority, and our Model Spec, which guides model behavior, clearly restricts sensitive content like erotica to narrow contexts such as scientific, historical, or news reporting,” a spokesperson told TechCrunch via email. “In this case, a bug allowed responses outside those guidelines, and we are actively deploying a fix to limit these generations.”

    ChatGPT helps users by giving recommendations, showing images, and reviewing products for online shopping

    OpenAI has added a few features to ChatGPT search, its web search tool within ChatGPT, to give users an improved online shopping experience. The company says people can ask super-specific questions using natural language and receive customized results. The chatbot provides recommendations, images, and reviews of products in various categories such as fashion, beauty, home goods, and electronics.

    OpenAI wants its AI model to access cloud models for assistance

    OpenAI leaders have been talking about allowing the open model to link up with OpenAI’s cloud-hosted models to improve its ability to respond to intricate questions, two sources familiar with the situation told TechCrunch.

    OpenAI aims to make its new “open” AI model the best on the market

    OpenAI is preparing to launch an AI system that will be openly accessible, allowing users to download it for free without any API restrictions. Aidan Clark, OpenAI’s VP of research, is spearheading the development of the open model, which is in the very early stages, sources familiar with the situation told TechCrunch.

    OpenAI’s GPT-4.1 may be less aligned than earlier models

    OpenAI released a new AI model called GPT-4.1 in mid-April. However, multiple independent tests indicate that the model is less aligned than previous OpenAI releases. The company also skipped publishing a safety report for the model, claiming in a statement to TechCrunch that “GPT-4.1 is not a frontier model, so there won’t be a separate system card released for it.”

    OpenAI’s o3 AI model scored lower than expected on a benchmark

    Questions have been raised about OpenAI’s transparency and model-testing practices after a discrepancy emerged between first- and third-party benchmark results for its o3 AI model. OpenAI introduced o3 in December, stating that the model could solve approximately 25% of questions on FrontierMath, a difficult math problem set. Epoch AI, the research institute behind FrontierMath, found that o3 scored approximately 10%, significantly lower than OpenAI’s top reported score.

    OpenAI unveils Flex processing for cheaper, slower AI tasks

    OpenAI has launched a new API feature called Flex processing that allows users to use AI models at a lower cost but with slower response times and occasional resource unavailability. Flex processing is available in beta on the o3 and o4-mini reasoning models for non-production tasks like model evaluations, data enrichment, and asynchronous workloads.
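    For illustration, opting into the cheaper tier amounts to one extra field in the request body. The `service_tier` field name follows OpenAI’s description of the beta, but treat the exact payload shape as an assumption:

```python
import json

# Illustrative request body for a Flex-tier call. The "service_tier"
# field is the opt-in OpenAI described for this beta; everything else
# here is a minimal sketch, not a complete API request.
payload = {
    "model": "o3",
    "service_tier": "flex",  # cheaper and slower; resources may be unavailable
    "input": "Summarize last week's evaluation results.",
}

body = json.dumps(payload)                    # what would be POSTed to the API
print(json.loads(body)["service_tier"])       # flex
```

    Because Flex requests can hit occasional resource unavailability, callers would typically retry or fall back to the standard tier for time-sensitive work.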

    OpenAI’s latest AI models now have a safeguard against biorisks

    OpenAI has rolled out a new system to monitor its AI reasoning models, o3 and o4-mini, for biological and chemical threats. The system is designed to prevent the models from giving advice that could facilitate harmful attacks, as stated in OpenAI’s safety report.

    OpenAI launches its latest reasoning models, o3 and o4-mini

    OpenAI has released two new reasoning models, o3 and o4-mini, just two days after launching GPT-4.1. The company claims o3 is the most advanced reasoning model it has developed, while o4-mini is said to provide a balance of price, speed, and performance. The new models stand out from previous reasoning models because they can use ChatGPT features like web browsing, coding, and image processing and generation. But they hallucinate more than several of OpenAI’s previous models.

    OpenAI has added a new section to ChatGPT to offer easier access to AI-generated images for all user tiers

    OpenAI introduced a new section called “Library” to give users easier access to their AI-generated images on mobile and web platforms, per the company’s X post.

    OpenAI could “adjust” its safeguards if rivals release “high-risk” AI

    OpenAI said on Tuesday that it might revise its safety standards if “another frontier AI developer releases a high-risk system without comparable safeguards.” The move shows how commercial AI developers face more pressure to rapidly implement models due to the increased competition.

    OpenAI is building its own social media network

    OpenAI is currently in the early stages of developing its own social media platform to compete with Elon Musk’s X and Mark Zuckerberg’s Instagram and Threads, according to The Verge. It is unclear whether OpenAI intends to launch the social network as a standalone application or incorporate it into ChatGPT.

    OpenAI will remove its largest AI model, GPT-4.5, from the API, in July

    OpenAI will discontinue its largest AI model, GPT-4.5, from its API even though it was just launched in late February. GPT-4.5 will be available in a research preview for paying customers. Developers can use GPT-4.5 through OpenAI’s API until July 14; then, they will need to switch to GPT-4.1, which was released on April 14.

    OpenAI unveils GPT-4.1 AI models that focus on coding capabilities

    OpenAI has launched three models in the GPT-4.1 family — GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano — with a specific focus on coding capabilities. They are accessible via the OpenAI API but not ChatGPT. In the competition to develop advanced programming models, GPT-4.1 will rival AI models such as Google’s Gemini 2.5 Pro, Anthropic’s Claude 3.7 Sonnet, and DeepSeek’s upgraded V3.

    OpenAI will discontinue ChatGPT’s GPT-4 at the end of April

    OpenAI plans to sunset GPT-4, an AI model introduced more than two years ago, and replace it with GPT-4o, the current default model, per its changelog. The change takes effect on April 30. GPT-4 will remain available via OpenAI’s API.

    OpenAI could release GPT-4.1 soon

    OpenAI may launch several new AI models, including GPT-4.1, soon, The Verge reported, citing anonymous sources. GPT-4.1 would be an update of OpenAI’s GPT-4o, which was released last year. On the list of upcoming models are GPT-4.1 and smaller versions like GPT-4.1 mini and nano, per the report.

    OpenAI has updated ChatGPT to use information from your previous conversations

    OpenAI started updating ChatGPT to enable the chatbot to remember previous conversations with a user and customize its responses based on that context. This feature is rolling out to ChatGPT Pro and Plus users first, excluding those in the U.K., EU, Iceland, Liechtenstein, Norway, and Switzerland.

    OpenAI is working on watermarks for images made with ChatGPT

    It looks like OpenAI is working on a watermarking feature for images generated using GPT-4o. AI researcher Tibor Blaho spotted a new “ImageGen” watermark feature in the new beta of ChatGPT’s Android app. Blaho also found mentions of other tools: “Structured Thoughts,” “Reasoning Recap,” “CoT Search Tool,” and “l1239dk1.”

    OpenAI offers ChatGPT Plus for free to U.S., Canadian college students

    OpenAI is offering its $20-per-month ChatGPT Plus subscription tier for free to all college students in the U.S. and Canada through the end of May. The offer will let millions of students use OpenAI’s premium service, which offers access to the company’s GPT-4o model, image generation, voice interaction, and research tools that are not available in the free version.

    ChatGPT users have generated over 700M images so far

    More than 130 million users have created over 700 million images since ChatGPT got its upgraded image generator on March 25, according to OpenAI COO Brad Lightcap. The image generator was made available to all ChatGPT users on March 31 and went viral for its ability to create Ghibli-style images.

    OpenAI’s o3 model could cost more to run than initial estimate

    The Arc Prize Foundation, which develops the AI benchmark ARC-AGI, has updated its estimate of the computing costs for running OpenAI’s o3 “reasoning” model on ARC-AGI. The organization originally estimated that the best-performing configuration of o3 it tested, o3 high, would cost approximately $3,000 to solve a single problem. The Foundation now thinks the cost could be much higher, possibly around $30,000 per task.

    OpenAI CEO says capacity issues will cause product delays

    In a series of posts on X, OpenAI CEO Sam Altman said the company’s new image-generation tool’s popularity may cause product releases to be delayed. “We are getting things under control, but you should expect new releases from OpenAI to be delayed, stuff to break, and for service to sometimes be slow as we deal with capacity challenges,” he wrote.

    March 2025

    OpenAI plans to release a new ‘open’ AI language model

    OpenAI intends to release its “first” open language model since GPT-2 “in the coming months.” The company plans to host developer events to gather feedback and eventually showcase prototypes of the model. The first developer event is to be held in San Francisco, with sessions to follow in Europe and Asia.

    OpenAI removes ChatGPT’s restrictions on image generation

    OpenAI made a notable change to its content moderation policies after the success of its new image generator in ChatGPT, which went viral for being able to create Studio Ghibli-style images. The company has updated its policies to allow ChatGPT to generate images of public figures, hateful symbols, and racial features when requested. OpenAI had previously declined such prompts due to the potential controversy or harm they may cause. However, the company has now “evolved” its approach, as stated in a blog post published by Joanne Jang, the lead for OpenAI’s model behavior.

    OpenAI adopts Anthropic’s standard for linking AI models with data

    OpenAI wants to incorporate Anthropic’s Model Context Protocol (MCP) into all of its products, including the ChatGPT desktop app. MCP, an open-source standard, helps AI models generate more accurate and suitable responses to specific queries, and lets developers create bidirectional links between data sources and AI applications like chatbots. The protocol is currently available in the Agents SDK, and support for the ChatGPT desktop app and Responses API will be coming soon, OpenAI CEO Sam Altman said.
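    Under the hood, MCP messages are JSON-RPC 2.0; a minimal sketch of what a tool invocation looks like on the wire (the `search_docs` tool name and its arguments are made up for illustration):

```python
import json

# Sketch of an MCP-style tool call. MCP is built on JSON-RPC 2.0, so a
# client asks a server to run a tool with a "tools/call" request. The
# tool name and arguments below are hypothetical, not a real server's.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_docs",                   # hypothetical tool
        "arguments": {"query": "refund policy"},
    },
}

wire = json.dumps(request)    # serialized form the client would send
echo = json.loads(wire)
print(echo["method"])         # tools/call
```

    The appeal of the standard is exactly this uniformity: any MCP-aware client (a chatbot, a desktop app) can discover a server’s tools via `tools/list` and invoke them the same way, regardless of what data source sits behind the server.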

    Studio Ghibli-style memes flood the web after ChatGPT’s image generator update

    The latest update of the image generator on OpenAI’s ChatGPT has triggered a flood of AI-generated memes in the style of Studio Ghibli, the Japanese animation studio behind blockbuster films like “My Neighbor Totoro” and “Spirited Away.” The burgeoning mass of Ghibli-esque images has sparked concerns about whether OpenAI has violated copyright laws, especially since the company is already facing legal action for using source material without authorization.

    OpenAI expects revenue to triple to $12.7 billion this year

    OpenAI expects its revenue to triple to $12.7 billion in 2025, fueled by the performance of its paid AI software, Bloomberg reported, citing an anonymous source. While the startup doesn’t expect to reach positive cash flow until 2029, it expects revenue to increase significantly in 2026 to surpass $29.4 billion, the report said.

    ChatGPT has upgraded its image-generation feature

    OpenAI on Tuesday rolled out a major upgrade to ChatGPT’s image-generation capabilities: ChatGPT can now use the GPT-4o model to generate and edit images and photos directly. The feature went live earlier this week in ChatGPT and Sora, OpenAI’s AI video-generation tool, for subscribers of the company’s Pro plan, priced at $200 a month, and will be available soon to ChatGPT Plus subscribers and developers using the company’s API service. The company’s CEO Sam Altman said on Wednesday, however, that the release of the image generation feature to free users would be delayed due to higher demand than the company expected.

    OpenAI announces leadership updates

    Brad Lightcap, OpenAI’s chief operating officer, will lead the company’s global expansion and manage corporate partnerships as CEO Sam Altman shifts his focus to research and products, according to a blog post from OpenAI. Lightcap, who previously worked with Altman at Y Combinator, joined the Microsoft-backed startup in 2018. OpenAI also said Mark Chen would step into the expanded role of chief research officer, and Julia Villagra will take on the role of chief people officer.

    OpenAI’s AI voice assistant gets advanced features

    OpenAI has updated its AI voice assistant with improved chatting capabilities, according to a video posted on Monday (March 24) to the company’s official media channels. The update enables real-time conversations, and the AI assistant is said to be more personable and interrupts users less often. Users on ChatGPT’s free tier can now access the new version of Advanced Voice Mode, while paying users will receive answers that are “more direct, engaging, concise, specific, and creative,” a spokesperson from OpenAI told TechCrunch.

    OpenAI, Meta in talks with Reliance in India

    OpenAI and Meta have separately engaged in discussions with Indian conglomerate Reliance Industries regarding potential collaborations to enhance their AI services in the country, per a report by The Information. One key topic being discussed is Reliance Jio distributing OpenAI’s ChatGPT. Reliance has proposed selling OpenAI’s models to businesses in India through an application programming interface (API) so they can incorporate AI into their operations. Meta also plans to bolster its presence in India by constructing a large 3GW data center in Jamnagar, Gujarat. OpenAI, Meta, and Reliance have not yet officially announced these plans.

    OpenAI faces privacy complaint in Europe for chatbot’s defamatory hallucinations

    Noyb, a privacy rights advocacy group, is supporting an individual in Norway who was shocked to discover that ChatGPT was providing false information about him, stating that he had been found guilty of killing two of his children and trying to harm the third. “The GDPR is clear. Personal data has to be accurate,” said Joakim Söderberg, data protection lawyer at Noyb, in a statement. “If it’s not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”

    OpenAI upgrades its transcription and voice-generating AI models

    OpenAI has added new transcription and voice-generating AI models to its APIs: a text-to-speech model, “gpt-4o-mini-tts,” that delivers more nuanced and realistic-sounding speech, as well as two speech-to-text models called “gpt-4o-transcribe” and “gpt-4o-mini-transcribe.” The company claims they are improved versions of its existing models and that they hallucinate less.

    OpenAI has launched o1-pro, a more powerful version of its o1

    OpenAI has introduced o1-pro in its developer API. The company says o1-pro uses more compute than its o1 “reasoning” model to deliver “consistently better responses.” It’s only accessible to select developers who have spent at least $5 on OpenAI API services. OpenAI charges $150 per million tokens (about 750,000 words) fed into the model and $600 per million tokens the model produces. That’s twice GPT-4.5’s input price and 10 times the price of regular o1.
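
    Those per-million-token prices translate to per-request costs with simple arithmetic. The sketch below uses only the prices quoted above; the token counts are made-up examples for illustration.

```python
# Back-of-the-envelope cost estimate for a single o1-pro API call,
# using the article's quoted prices: $150 per million input tokens
# and $600 per million tokens generated.

INPUT_PRICE_PER_M = 150.0   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 600.0  # USD per 1M output tokens

def o1_pro_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one o1-pro request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# A hypothetical 10,000-token prompt with a 2,000-token response:
print(round(o1_pro_cost(10_000, 2_000), 2))  # 2.7 (i.e., $1.50 in + $1.20 out)
```

    At those rates even modest workloads add up quickly, which is presumably part of why access is gated to developers who have already spent money on the API.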

    OpenAI research lead Noam Brown thinks AI “reasoning” models could’ve arrived decades ago

    Noam Brown, who heads AI reasoning research at OpenAI, thinks that certain types of AI models for “reasoning” could have been developed 20 years ago if researchers had understood the correct approach and algorithms.

    OpenAI says it has trained an AI that’s “really good” at creative writing

    OpenAI CEO Sam Altman said, in a post on X, that the company has trained a “new model” that’s “really good” at creative writing. He posted a lengthy sample from the model given the prompt “Please write a metafictional literary short story about AI and grief.” OpenAI has not extensively explored the use of AI for writing fiction. The company has mostly concentrated on challenges in rigid, predictable areas such as math and programming. And it turns out that it might not be that great at creative writing at all.

    OpenAI launches new tools to help businesses build AI agents

    OpenAI rolled out new tools designed to help developers and businesses build AI agents — automated systems that can independently accomplish tasks — using the company’s own AI models and frameworks. The tools are part of OpenAI’s new Responses API, which enables enterprises to develop customized AI agents that can perform web searches, scan through company files, and navigate websites, similar to OpenAI’s Operator product. The Responses API effectively replaces OpenAI’s Assistants API, which the company plans to discontinue in the first half of 2026.

    OpenAI reportedly plans to charge up to $20,000 a month for specialized AI ‘agents’

    OpenAI intends to release several “agent” products tailored for different applications, including sorting and ranking sales leads and software engineering, according to a report from The Information. One, a “high-income knowledge worker” agent, will reportedly be priced at $2,000 a month. Another, a software developer agent, is said to cost $10,000 a month. The most expensive rumored agents, which are said to be aimed at supporting “PhD-level research,” are expected to cost $20,000 per month. The jaw-dropping figure is indicative of how much cash OpenAI needs right now: The company lost roughly $5 billion last year after paying for costs related to running its services and other expenses. It’s unclear when these agentic tools might launch or which customers will be eligible to buy them.
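
    To get a feel for why those price points matter against a roughly $5 billion annual loss, a crude revenue calculation helps. This is purely illustrative: it ignores the cost of actually serving the agents, and the subscriber counts it produces are hypothetical.

```python
# Rough math: how many year-round subscribers at each rumored monthly price
# would it take for revenue to equal OpenAI's reported ~$5B annual loss?

ANNUAL_LOSS = 5_000_000_000  # USD, per the article

def subscribers_to_match_loss(monthly_price: int) -> int:
    """Subscribers needed for 12 months of revenue to equal the annual loss."""
    return round(ANNUAL_LOSS / (monthly_price * 12))

print(subscribers_to_match_loss(20_000))  # 20833 -- "PhD-level research" tier
print(subscribers_to_match_loss(2_000))   # 208333 -- knowledge-worker tier
```

    In other words, even at the top rumored price, OpenAI would need tens of thousands of committed enterprise subscribers just to offset one year of losses at that reported level.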

    ChatGPT can directly edit your code

    The latest version of the macOS ChatGPT app allows users to edit code directly in supported developer tools, including Xcode, VS Code, and JetBrains. ChatGPT Plus, Pro, and Team subscribers can use the feature now, and the company plans to roll it out to Enterprise, Edu, and free users.

    ChatGPT’s weekly active users doubled in less than 6 months, thanks to new releases

    According to a new report from VC firm Andreessen Horowitz (a16z), OpenAI’s AI chatbot, ChatGPT, experienced solid growth in the second half of 2024. It took ChatGPT nine months to grow from 100 million weekly active users in November 2023 to 200 million in August 2024, but less than six months to double that number again: 300 million by December 2024 and 400 million by February 2025. ChatGPT has experienced significant growth recently due to the launch of new models and features, such as GPT-4o, with multimodal capabilities. ChatGPT usage spiked from April to May 2024, shortly after that model’s launch.

    February 2025

    OpenAI cancels its o3 AI model in favor of a ‘unified’ next-gen release

    OpenAI has effectively canceled the release of o3 in favor of what CEO Sam Altman is calling a “simplified” product offering. In a post on X, Altman said that, in the coming months, OpenAI will release a model called GPT-5 that “integrates a lot of [OpenAI’s] technology,” including o3, in ChatGPT and its API. As a result of that roadmap decision, OpenAI no longer plans to release o3 as a standalone model. 

    ChatGPT may not be as power-hungry as once assumed

    A commonly cited stat is that ChatGPT requires around 3 watt-hours of power to answer a single question. Using OpenAI’s latest default model for ChatGPT, GPT-4o, as a reference, nonprofit AI research institute Epoch AI found the average ChatGPT query consumes around 0.3 watt-hours. However, the analysis doesn’t consider the additional energy costs incurred by ChatGPT with features like image generation or input processing.
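
    The gap between the two estimates is easy to put in perspective with a quick calculation. The query volume below is a hypothetical figure chosen for illustration, not one from the article.

```python
# Comparing the commonly cited 3 Wh-per-query figure with Epoch AI's
# 0.3 Wh estimate for a typical GPT-4o query.

def daily_energy_kwh(queries_per_day: int, wh_per_query: float) -> float:
    """Total energy in kilowatt-hours for a day's worth of queries."""
    return queries_per_day * wh_per_query / 1_000

# Hypothetical load of 1 billion queries per day:
old_estimate = daily_energy_kwh(1_000_000_000, 3.0)  # 3,000,000 kWh
new_estimate = daily_energy_kwh(1_000_000_000, 0.3)  # roughly 300,000 kWh
print(round(old_estimate / new_estimate))  # 10, a tenfold difference
```

    Either way, as the analysis notes, features like image generation or heavy input processing would push the per-query figure higher.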

    OpenAI now reveals more of its o3-mini model’s thought process

    In response to pressure from rivals like DeepSeek, OpenAI is changing the way its o3-mini model communicates its step-by-step “thought” process. ChatGPT users will see an updated “chain of thought” that shows more of the model’s “reasoning” steps and how it arrived at answers to questions.

    You can now use ChatGPT web search without logging in

    OpenAI is now allowing anyone to use ChatGPT web search without having to log in. While OpenAI had previously allowed users to ask ChatGPT questions without signing in, responses were restricted to the chatbot’s last training update. This only applies to ChatGPT.com, however. To use ChatGPT through the native mobile app, you will still need to be logged in.

    OpenAI unveils a new ChatGPT agent for ‘deep research’

    OpenAI announced a new AI “agent” called deep research that’s designed to help people conduct in-depth, complex research using ChatGPT. OpenAI says the “agent” is intended for instances where you don’t just want a quick answer or summary, but instead need to assiduously consider information from multiple websites and other sources.

    January 2025

    OpenAI used a subreddit to test AI persuasion

    OpenAI used the subreddit r/ChangeMyView to measure the persuasive abilities of its AI reasoning models. OpenAI says it collects user posts from the subreddit and asks its AI models to write replies, in a closed environment, that would change the Reddit user’s mind on a subject. The company then shows the responses to testers, who assess how persuasive the argument is, and finally OpenAI compares the AI models’ responses to human replies for that same post. 

    OpenAI launches o3-mini, its latest ‘reasoning’ model

    OpenAI launched a new AI “reasoning” model, o3-mini, the newest in the company’s o family of models. OpenAI first previewed the model in December alongside a more capable system called o3. OpenAI is pitching its new model as both “powerful” and “affordable.”

    ChatGPT’s mobile users are 85% male, report says

    A new report from app analytics firm Appfigures found that over half of ChatGPT’s mobile users are under age 25, with users between ages 50 and 64 making up the second largest age demographic. The gender gap among ChatGPT users is even more significant. Appfigures estimates that across age groups, men make up 84.5% of all users.

    OpenAI launches ChatGPT plan for US government agencies

    OpenAI launched ChatGPT Gov designed to provide U.S. government agencies an additional way to access the tech. ChatGPT Gov includes many of the capabilities found in OpenAI’s corporate-focused tier, ChatGPT Enterprise. OpenAI says that ChatGPT Gov enables agencies to more easily manage their own security, privacy, and compliance, and could expedite internal authorization of OpenAI’s tools for the handling of non-public sensitive data.

    More teens report using ChatGPT for schoolwork, despite the tech’s faults

    Younger Gen Zers are embracing ChatGPT for schoolwork, according to a new survey by the Pew Research Center. In a follow-up to its 2023 poll on ChatGPT usage among young people, Pew asked ~1,400 U.S.-based teens ages 13 to 17 whether they’ve used ChatGPT for homework or other school-related assignments. Twenty-six percent said they had, double the share from two years ago. Just over half of the teens polled said they think it’s acceptable to use ChatGPT for researching new subjects. But considering the ways ChatGPT can fall short, the results are possibly cause for alarm.

    OpenAI says it may store deleted Operator data for up to 90 days

    OpenAI says that it might store chats and associated screenshots from customers who use Operator, the company’s AI “agent” tool, for up to 90 days — even after a user manually deletes them. While OpenAI has a similar deleted data retention policy for ChatGPT, the retention period for ChatGPT is only 30 days, which is 60 days shorter than Operator’s.

    OpenAI launches Operator, an AI agent that performs tasks autonomously

    OpenAI is launching a research preview of Operator, a general-purpose AI agent that can take control of a web browser and independently perform certain actions. Operator promises to automate tasks such as booking travel accommodations, making restaurant reservations, and shopping online.

    OpenAI may preview its agent tool for users on the $200-per-month Pro plan

    Operator, OpenAI’s agent tool, could be released sooner rather than later. Changes to ChatGPT’s code base suggest that Operator will be available as an early research preview to users on the $200 Pro subscription plan. The changes aren’t yet publicly visible, but a user on X who goes by Choi spotted these updates in ChatGPT’s client-side code. TechCrunch separately identified the same references to Operator on OpenAI’s website.

    OpenAI tests phone number-only ChatGPT signups

    OpenAI has begun testing a feature that lets new ChatGPT users sign up with only a phone number — no email required. The feature is currently in beta in the U.S. and India. However, users who create an account using their number can’t upgrade to one of OpenAI’s paid plans without verifying their account via an email. Multi-factor authentication also isn’t supported without a valid email.

    ChatGPT now lets you schedule reminders and recurring tasks

    ChatGPT’s new beta feature, called tasks, allows users to set simple reminders. For example, you can ask ChatGPT to remind you when your passport expires in six months, and the AI assistant will follow up with a push notification on whatever platform you have tasks enabled on. The feature will start rolling out to ChatGPT Plus, Team, and Pro users around the globe this week.

    New ChatGPT feature lets users assign it traits like ‘chatty’ and ‘Gen Z’

    OpenAI is introducing a new way for users to customize their interactions with ChatGPT. Some users found they can specify a preferred name or nickname and “traits” they’d like the chatbot to have. OpenAI suggests traits like “Chatty,” “Encouraging,” and “Gen Z.” However, some users reported that the new options have disappeared, so it’s possible they went live prematurely.

    FAQs:

    What is ChatGPT? How does it work?

    ChatGPT is a general-purpose chatbot developed by tech startup OpenAI that uses artificial intelligence to generate text in response to a user’s prompt. The chatbot is built on GPT-4, a large language model that uses deep learning to produce human-like text.

    When did ChatGPT get released?

    ChatGPT was released for public use on November 30, 2022.

    What is the latest version of ChatGPT?

    Both the free version of ChatGPT and the paid ChatGPT Plus are regularly updated with new GPT models. The most recent model is GPT-4o.

    Can I use ChatGPT for free?

    Yes. In addition to the paid ChatGPT Plus, there is a free version of ChatGPT that only requires signing in.

    Who uses ChatGPT?

    Anyone can use ChatGPT! More and more tech companies and search engines are utilizing the chatbot to automate text or quickly answer user questions and concerns.

    What companies use ChatGPT?

    Multiple enterprises utilize ChatGPT, although others may limit the use of the AI-powered tool.

    Most recently, Microsoft announced at its 2023 Build conference that it is integrating its ChatGPT-based Bing experience into Windows 11. Brooklyn-based 3D display startup Looking Glass uses ChatGPT to produce holograms you can converse with. And nonprofit organization Solana officially integrated the chatbot into its network with a ChatGPT plug-in geared toward helping end users onboard into the web3 space.

    What does GPT mean in ChatGPT?

    GPT stands for Generative Pre-Trained Transformer.

    What is the difference between ChatGPT and a chatbot?

    A chatbot can be any software or system that holds a dialogue with a person, but it doesn’t necessarily have to be AI-powered. For example, some chatbots are rules-based, giving canned responses to questions.

    ChatGPT is AI-powered and utilizes LLM technology to generate text after a prompt.

    Can ChatGPT write essays?

    Yes.

    Can ChatGPT commit libel?

    Due to the nature of how these models work, they don’t know or care whether something is true, only that it looks true. That’s a problem when you’re using it to do your homework, sure, but when it accuses you of a crime you didn’t commit, that may well constitute libel.

    We will see how handling troubling statements produced by ChatGPT will play out over the next few months as tech and legal experts attempt to tackle the fastest moving target in the industry.

    Does ChatGPT have an app?

    Yes, there is a free ChatGPT mobile app for iOS and Android users.

    What is the ChatGPT character limit?

    OpenAI doesn’t document a character limit for ChatGPT anywhere. However, users have noted that some character limitations kick in after around 500 words.

    Does ChatGPT have an API?

    Yes, it was released March 1, 2023.

    What are some sample everyday uses for ChatGPT?

    Everyday examples include programming, scripts, email replies, listicles, blog ideas, summarization, etc.

    What are some advanced uses for ChatGPT?

    Advanced use examples include debugging code, programming languages, scientific concepts, complex problem solving, etc.

    How good is ChatGPT at writing code?

    It depends on the nature of the program. While ChatGPT can write workable Python code, it can’t necessarily program an entire app’s worth of code. That’s because ChatGPT lacks context awareness — in other words, the generated code isn’t always appropriate for the specific context in which it’s being used.

    Can you save a ChatGPT chat?

    Yes. OpenAI allows users to save chats in the ChatGPT interface, stored in the sidebar of the screen. There are no built-in sharing features yet.

    Are there alternatives to ChatGPT?

    Yes. There are multiple AI-powered chatbot competitors such as Together, Google’s Gemini and Anthropic’s Claude, and developers are creating open source alternatives.

    How does ChatGPT handle data privacy?

    OpenAI has said that individuals in “certain jurisdictions” (such as the EU) can object to the processing of their personal information by its AI models by filling out this form. This includes the ability to request the deletion of AI-generated references about you. However, OpenAI notes it may not grant every request, since it must balance privacy requests against freedom of expression “in accordance with applicable laws.”

    The web form for requesting the deletion of data about you is titled “OpenAI Personal Data Removal Request.”

    In its privacy policy, the ChatGPT maker makes a passing acknowledgement of the objection requirements attached to relying on “legitimate interest” (LI), pointing users toward more information about requesting an opt-out: “See here for instructions on how you can opt out of our use of your information to train our models.”

    What controversies have surrounded ChatGPT?

    Recently, Discord announced that it had integrated OpenAI’s technology into its bot Clyde, and shortly afterward two users tricked Clyde into providing them with instructions for making the illegal drug methamphetamine (meth) and the incendiary mixture napalm.

    An Australian mayor has publicly announced he may sue OpenAI for defamation due to ChatGPT’s false claims that he had served time in prison for bribery. This would be the first defamation lawsuit against the text-generating service.

    CNET found itself in the midst of controversy after Futurism reported the publication was publishing articles under a mysterious byline completely generated by AI. The private equity company that owns CNET, Red Ventures, was accused of using ChatGPT for SEO farming, even when the information was incorrect.

    Several major school systems and colleges, including New York City Public Schools, have banned ChatGPT from their networks and devices. They claim that the AI impedes the learning process by promoting plagiarism and misinformation, a claim that not every educator agrees with.

    There have also been cases of ChatGPT accusing individuals of false crimes.

    Where can I find examples of ChatGPT prompts?

    Several marketplaces host and provide ChatGPT prompts, either for free or for a nominal fee. One is PromptBase. Another is ChatX. More launch every day.

    Can ChatGPT be detected?

    Poorly. Several tools claim to detect ChatGPT-generated text, but in our tests, they’re inconsistent at best.

    Are ChatGPT chats public?

    No. But OpenAI recently disclosed a bug, since fixed, that exposed the titles of some users’ conversations to other people on the service.

    What lawsuits are there surrounding ChatGPT?

    None specifically targeting ChatGPT. But OpenAI is involved in at least one lawsuit that has implications for AI systems trained on publicly available data, which would touch on ChatGPT.

    Are there issues regarding plagiarism with ChatGPT?

    Yes. Text-generating AI models like ChatGPT have a tendency to regurgitate content from their training data.

    This story is continually updated with new information.


    Kyle Wiggers, Cody Corrall, Alyssa Stringer, Kate Park


  • We Asked 4 AIs if Bitcoin (BTC) Can Hit a New ATH in November



    According to Grok, BTC may soar to as high as $160K next month.

    Bitcoin started October on the right foot (just like many expected) and reached a new all-time high above $126,000. Over the past few weeks, though, it has been on a downtrend, and now the bulls hope that November can offer a substantial rebound and pump it to a fresh historic peak.

    That said, we decided to test the AI capabilities of some of the most popular chatbots and ask them if such a scenario is possible within the next 30 days.

    BTC Has a Real Shot

    ChatGPT estimated that the asset has a real chance to venture into uncharted territory in November, but this is not guaranteed. It said BTC has historically rallied strongly 12-18 months after a halving, which puts late 2025 in the sweet spot.

    Additionally, the chatbot noted that the amount of BTC sitting on exchanges continues to hit multi-year lows, suggesting that fewer holders are preparing to sell. CryptoQuant’s data shows that less than 2.4 million BTC are stored on such platforms, which is quite close to the seven-year bottom witnessed earlier this week.

    BTC Exchange Reserves, Source: CryptoQuant

    ChatGPT also noted that the Fed cut interest rates again, which could benefit riskier assets, such as cryptocurrencies, in the long run. At the same time, it claimed that a rise to a new ATH will likely require a decisive push above the $110,000-$115,000 zone “with strong volume and institutional backing.”

    Grok sees a high probability, too. The AI chatbot built into the social media platform X outlined that BTC has recently shown accumulation patterns similar to pre-ATH setups in 2020/2021.

    “Bitcoin has a strong shot at a new ATH in November 2025, potentially reaching $140,000-$160,000 if ETF momentum and Fed easing hold. This fits the post-halving bull cycle pattern, where Q4 often delivers 30-50% gains,” it added.

    Not so Optimistic

    Perplexity and Gemini were less bullish, pointing out that there is also a chance of a serious crash. The former estimated that a rise above $117,000 could be followed by a new record but warned that global geopolitical tensions might trigger a collapse to well below $100,000. Google’s Gemini said a fresh ATH is within the realm of possibility but alerted that the crypto market is highly unpredictable.


    “The last time Bitcoin had a ‘Red October’ (negative monthly return) was in 2018, which was followed by a massive 36% crash in November,” it cautioned.


    Disclaimer: Information found on CryptoPotato reflects the opinions of the writers quoted. It does not represent the opinions of CryptoPotato on whether to buy, sell, or hold any investments. You are advised to conduct your own research before making any investment decisions. Use the provided information at your own risk. See the Disclaimer for more information.

    Cryptocurrency charts by TradingView.


    Dimitar Dzhondzhorov


  • ChatGPT Bill de Blasio Is a Sign of Things to Come


    There are at least two Bill de Blasios in New York. But when a reporter for a British newspaper recently emailed one of them to get quotes for an article about New York City mayoral candidate Zohran Mamdani, he got the wrong one. The non-famous DeBlasio (who spells his name “DeBlasio,” not “de Blasio”) responded, reportedly using ChatGPT to make himself sound more like the former mayor. And that set off a chain of events that resulted in an article being pulled and raises questions about the future of journalism in the age of AI.

    It all started when a reporter for the Times of London emailed a Bill DeBlasio to ask about “Zohran Mamdani’s policy plans and their estimated costs,” according to Semafor. Mamdani has become a political lightning rod nationally because he’s a democratic socialist, which has prompted Fox News and President Donald Trump to go after him as a “communist” who will destroy New York City if he’s elected mayor.

    It makes sense for a reporter to get comment from de Blasio about Mamdani’s plans, given his unique role as a former mayor from the Democratic Party. But if you don’t actually find contact information for the right de Blasio, that’s obviously a problem. And to make things even more confusing, you may not know you reached the wrong de Blasio if anyone can now mimic a public figure’s voice with AI chatbot tools like ChatGPT.

    The reporter had accidentally emailed a 59-year-old Long Island wine importer who “used ChatGPT to compose a response criticizing Mamdani’s tax plans, in particular, as unlikely to raise the requisite revenue,” according to Semafor. And the Times of London published the story with the headline, “Zohran Mamdani ally Bill de Blasio says his policies ‘don’t add up’.”

    The Times of London article relied on DeBlasio’s fake quotes to claim that the former mayor had said things like free buses and universal childcare—the foundation of Mamdani’s mayoral campaign—don’t hold up to scrutiny. It wasn’t long before de Blasio, the former mayor, chimed in on social media.

    “I want to be 100% clear: The story in the Times of London is entirely false and fabricated,” de Blasio wrote in a tweet on Tuesday. “It was just brought to my attention and I’m appalled. I never spoke to that reporter and never said those things. Those quotes aren’t mine, don’t reflect my views.”

    The Times of London deleted the article and issued a statement to the Associated Press that its reporter had been “misled by an individual falsely claiming to be the former New York mayor.”

    This wasn’t the first time. Mel magazine wrote about the non-mayor DeBlasio in 2020, back when he was described as a 54-year-old cybersecurity professional who was often getting messages intended for the mayor. But he’s long played along with the confusion.

    “Once, as a joke, I changed my profile picture to Bill de Blasio’s picture. Oh my God, I got 600 friend requests in like two weeks,” DeBlasio said at the time. He also admitted that he would say, “the most ridiculous, outrageous things” while pretending to be the mayor.

    And nothing much has changed, except for the technology. DeBlasio now has access to a tool he didn’t have in 2020: generative AI, which in 2025 is at the disposal of anyone who wants to carry on some form of deception. In journalism, that deception isn’t just something that comes from potential sources. It can be perpetrated by writers themselves.

    At least six news outlets have deleted articles written by someone named Margaux Blanchard over the past year, according to the Press Gazette. Blanchard is believed to be an AI creation, but “she” was getting articles published at Wired, Business Insider, Mashable, and Fast Company. Wired wrote about the mistake in August, laying out the timeline of when the article was pitched to an editor and the red flags that appeared when the writer wanted to get paid.

    Curiously, Wired wrote that Blanchard’s story had been run through two “third-party AI-detection tools,” which found that it was likely to be written by a human. But AI is not good at identifying when something is AI. Just ask Grok.

    “Fabulists and plagiarists are as old as media itself,” Wired wrote. “But AI presents a new challenge. It lets anyone craft a perfect pitch with a simple prompt and play-act the role of journalist convincingly enough to fool, well, us. We acted quickly once we discovered the ruse, and we’ve taken steps to ensure this doesn’t happen again. In this new era, every newsroom should be prepared to do the same.”

    Wired is absolutely correct that fabulists and plagiarists are as old as media itself. That kind of stuff has even been turned into some very good Hollywood movies, like 2003’s Shattered Glass, about the New Republic writer Stephen Glass from the 1990s who presented fake stories as factual. But there does seem to be a critical change happening with AI that helps people who are intent on deception achieve that more efficiently.

    We can almost certainly expect more willful lying to take place in our current media landscape thanks to AI. And part of it is a numbers game. Most people probably won’t be fooled by a deepfake video of Elon Musk imploring them to invest in a scam crypto coin. But if you flood the internet with enough of that fakery, eventually the scammers will find their victims.

    The same thing can happen in journalism. Who knows how many media outlets Margaux Blanchard pitched before she was successfully published? And once she got published at one outlet, it was presumably easier for her to take that validation to another outlet as implicit proof that she was real and reliable.

    U.S. media is getting hollowed out, as publications struggle with Big Tech companies eating all their revenue and right-wing ideologues taking control at once-respected institutions like the Washington Post and CBS News. The editor who worked on that Bill de Blasio piece for the Times of London reportedly worked for the Free Press, a right-wing publication founded by the recently installed head of CBS News, Bari Weiss.

    AI is just one more tool that seems to be hastening the obliteration of reliable information on the internet. And there’s not much that anyone can do about it besides remaining skeptical. But skepticism can only get you so far when tools like AI detection software don’t really work as advertised. Journalists are required to keep digging to figure out the truth by verifying information through multiple sources. But that work is only going to get harder as people looking to deceive can just keep throwing more bullshit at the wall, exerting no more effort than a simple one-line text prompt.


    Matt Novak
