ReportWire

Tag: chatgpt

  • How to Use ChatGPT, Gemini, Grok in Private Mode (No Training Mode)

    • Apart from this, you can also use the keyboard shortcut, ‘Ctrl + ;’ to switch back to a normal chat again or to the incognito mode.
    • However, it does state that the chat won’t appear in the history or would be used to train the model.
    • Similar to ChatGPT, you would be informed that your chats would not appear in your history and would not be used to train the model.

    The core reason LLMs, or large language models such as ChatGPT and Gemini, work the way they do is the amount of data they are trained on. Every conversation you have with it or the data you feed into it is used to further train it. This also instils a fear of privacy concerns among users. As one would not like a record of sensitive or personal information in particular. For this, almost every platform offers a private mode. Let’s explore how you can access them and how it really works.

    Using Private Mode on LLM Platforms

    Using an LLM platform is much like using a search engine, albeit with a lot more capabilities. You must have noticed that when you do a normal Google search, it adds to your browsing history. The links you have already visited would appear purple. Similarly, these platforms not only keep a record of your chats with them but also use that data to train themselves further. Like all browsers, these platforms also offer private or incognito chat. This is how you can access these modes in ChatGPT, Gemini, Grok, Perplexity, and Claude AI.

    1. ChatGPT

    Once you open ChatGPT, you will see “Turn on temporary chat” in the top-right corner.

    temporary chat in chatgpt

    Here, you will clearly see a disclaimer saying, “This chat won’t appear in your chat history and won’t be used to train our models”. But you have to keep in mind that the chat won’t be instantly deleted from the servers. As stated by the platform, a copy of the chat may be kept for up to 30 days for safety concerns.

    chatgpt

    2. Gemini

    After opening Gemini, on the left of your screen, click on the ‘Temporary chat’ icon beside the ‘New chat’ icon. If you don’t see it, you can expand the sidebar menu to make it visible.

    temporary chat in gemini

    Similar to ChatGPT, you would be informed that your chats would not appear in your history and would not be used to train the model. You will also see that it is clearly stated that the record will be kept for only 72 hours or 3 days for safety reasons.

    gemini policy

    3. Grok

    Open Grok, then click “Private” in the top-right corner.

    grok private mode

    You can have a private chat now. Unlike the platforms above, Grok does not explicitly state whether chat will be stored, and if so, for how long. However, it does state that the chat won’t appear in the history or would be used to train the model.

    grok policy

    4. Perplexity

    Once you open Perplexity, click on your account in the bottom left corner. A menu will pop up where, at the bottom, you will see the ‘Incognito’ option. Click that. Apart from this, you can also use the keyboard shortcut, ‘Ctrl + ;’ to switch back to a normal chat again or to the incognito mode.

    incognito mode in perplexity

    In this too the chats would not appear in the history and data will be deleted within 24 hours, as mentioned by Perplexity.

    perplexity policy

    5. Claude AI

    After opening Claude AI, you will find the incognito option in the top-right corner. Or, you can simply use the ‘Ctrl + Shift + I’ keyboard shortcut.

    incognito mode in claude

    Although Claude doesn’t mention how long, the chats will be retained, apart from not appearing in the history. I looked for it in Claude Support and found the following.

    claude policy

    FAQs

    Q. Can I use the private mode without logging in?

    Private modes are not needed and thus not accessible when not logged in. Your chat will not be saved anywhere once you close the window but they might be utilised later to train the model.

    Not unless it’s something flagged by the system. As mentioned above, these chats are not immediately wiped off. Each platform has policies aligned with the laws of the respective nations that direct them to grant authorities access to these chats in the event of an issue.

    Wrapping Up

    Unlike normal mode on LLM platforms, private mode lets you chat without creating a history or training the model. While this is a useful feature when multiple users use a single account, it should be used with caution. Users should avoid sharing sensitive information such as passwords, addresses, or contact details. Additionally, this mode should be used considering that any information which may potentially be related to illegal activities can call for the involvement of legal authorities if needed. There is a system in place to flag any such conversation within all platforms.

    You should also check out:

    Have any questions related to our how-to guides, or anything in the world of technology? Check out our new GadgetsToUse AI Chatbot for free, powered by ChatGPT.

    You can also follow us for instant tech news at Google News or for tips and tricks, smartphones & gadgets reviews, join the GadgetsToUse Telegram Group, or subscribe to the GadgetsToUse Youtube Channel for the latest review videos.

    Was this article helpful?

    YesNo

    Mitash Arora

    Source link

  • Despair-Inducing Analysis Shows AI Eroding the Reliability of Science Publishing

    It’s almost impossible to overstate the importance and impact of arXiv, the science repository that, for a time, almost single-handedly justified the existence of the internet. ArXiv (pronounced “archive” or “Arr-ex-eye-vee” depending on who you ask) is a preprint repository, where, since 1991, scientists and researchers have announced “hey I just wrote this” to the rest of the science world. Peer review moves glacially, but is necessary. ArXiv just requires a quick once-over from a moderator instead of a painstaking review, so it adds an easy middle step between discovery and peer review, where all the latest discoveries and innovations can—cautiously—be treated with the urgency they deserve more or less instantly.

    But the use of AI has wounded ArXiv and it’s bleeding. And it’s not clear the bleeding can ever be stopped.

    As a recent story in The Atlantic notes, ArXiv creator and Cornell information science professor Paul Ginsparg has been fretting since the rise of ChatGPT that AI can be used to breach the slight but necessary barriers preventing the publication of junk on ArXiv. Last year, Ginsparg collaborated on a piece of analysis that looked into probable AI in arXiv submissions. Rather horrifyingly, scientists evidently using LLMs to generate plausible-looking papers were more prolific than those who didn’t use AI. The number of papers from posters of AI-written or augmented work was 33 percent higher.

    AI can be used legitimately, the analysis says, for things like surmounting the language barrier. It continues:

    “However, traditional signals of scientific quality such as language complexity are becoming unreliable indicators of merit, just as we are experiencing an upswing in the quantity of scientific work. As AI systems advance, they will challenge our fundamental assumptions about research quality, scholarly communication, and the nature of intellectual labor.”

    It’s not just ArXiv. It’s a rough time overall for the reliability of scholarship in general. An astonishing self-own published last week in Nature described the AI misadventure of a bumbling scientist working in Germany named Marcel Bucher, who had been using ChatGPT to generate emails, course information, lectures, and tests. As if that wasn’t bad enough, ChatGPT was also helping him analyze responses from students and was being incorporated into interactive parts of his teaching. Then one day, Bucher tried to “temporarily” disable what he called the “data consent” option, and when ChatGPT suddenly deleted all the information he was storing exclusively in the app—that is: on OpenAI’s servers—he whined in the pages of Nature that “two years of carefully structured academic work disappeared.”

    Widespread, AI-induced laziness on display in the exact area where rigor and attention to detail are expected and assumed is despair-inducing. It was safe to assume there was a problem when the number of publications spiked just months after ChatGPT was first released, but now, as The Atlantic points out, we’re starting to get the details on the actual substance and scale of that problem—not so much the Bucher-like, AI-pilled individuals experiencing publish-or-perish anxiety and hurrying out a quickie fake paper, but industrial scale fraud.

    For instance, in cancer research, bad actors can prompt for boring papers that claim to document “the interactions between a tumor cell and just one protein of the many thousands that exist,” the Atlantic notes. If the paper claims to be groundbreaking, it’ll raise eyebrows, meaning the trick is more likely to be noticed, but if the fake conclusion of the fake cancer experiment is ho-hum, that slop will be much more likely to see publication—even in a credible publication. All the better if it comes with AI generated images of gel electrophoresis blobs that are also boring, but add additional plausibility at first glance.

    In short, a flood of slop has arrived in science, and everyone has to get less lazy, from busy academics planning their lessons, to peer reviewers and ArXiv moderators. Otherwise, the repositories of knowledge that used to be among the few remaining trustworthy sources of information are about to be overwhelmed by the disease that has already—possibly irrevocably—infected them. And does 2026 feel like a time when anyone, anywhere, is getting less lazy?

    Mike Pearl

    Source link

  • Report reveals that OpenAI’s GPT-5.2 model cites Grokipedia

    OpenAI may have called GPT-5.2 its “most advanced frontier model for professional work,” but tests conducted by the Guardian cast doubt on its credibility. According to the report, OpenAI’s GPT-5.2 model cited Grokipedia, the online encyclopedia powered by xAI, when it came to specific, but controversial topics related to Iran or the Holocaust.

    As seen in the Guardian‘s report, ChatGPT used Grokipedia as a source for claims about the Iranian government being tied to telecommunications company MTN-Irancell and questions related to Richard Evans, a British historian who served as an expert witness during a libel trial for Holocaust denier David Irving. However, the Guardian noted ChatGPT didn’t use Grokipedia when it came to a prompt asking about media bias against Donald Trump and other controversial topics.

    OpenAI released the GPT-5.2 model in December to better perform at professional use, like creating spreadsheets or handling complex tasks. Grokipedia preceded GPT-5.2’s release, but ran into some controversy when it was seen including citations to neo-Nazi forums. A study done by US researchers also showed that the AI-generated encyclopedia cited “questionable” and “problematic” sources.

    In response to the Guardian report, OpenAI told the outlet that its GPT-5.2 model searches the web for a “broad range of publicly available sources and viewpoints,” but applies “safety filters to reduce the risk of surfacing links associated with high-severity harms.”

    Jackson Chen

    Source link

  • Claude Code gives Anthropic its viral moment | Fortune

    It’s been a good few weeks for Anthropic. The lab is reportedly planning a $10 billion fundraising that would value the company at $350 billion, its CEO caused headlines in Davos by criticizing the White House, and it’s also having a viral product launch that most AI labs can only dream of.

    Claude Code, the company’s surprisingly popular hit, is a coding tool that has captured the attention of users far beyond the software engineers it was built for. First released in February 2024 as a developer assistant, the coding tool has become increasingly sophisticated and sparked a level of excitement rarely seen since ChatGPT’s debut. Jensen Huang called it “incredible” and urged companies to adopt it for coding. A senior Google engineer said it recreated a year’s worth of work in an hour. And users without any programming background have deployed it to book theater tickets, file taxes, and even monitor tomato plants.

    Even at Microsoft, which sells GitHub Copilot, Claude Code has been widely adopted internally across its major engineering teams, with even non-developers reportedly being encouraged to use it.

    Anthropic’s products have long been popular with software developers, but after users pointed out that Claude Code was more of a general-purpose AI agent, Anthropic created a version of the product for non-coders. Last week, the company launched Cowork, a file management agent that is essentially a user-friendly version of the coding product. Boris Cherny, head of Claude Code at Anthropic, said his team built Cowork in approximately a week and a half, largely using Claude Code itself to do the legwork.

    “It was just kind of obvious that Cowork is the next step,” Cherny told Fortune. “We just want to make it much easier for non-programmers.”

    What separates Cowork from earlier general use AI tools from Anthropic is its ability to take autonomous action rather than simply provide advice. The products can access files, control browsers through the “Claude in Chrome” extension, and manipulate applications—executing tasks rather than just suggesting how to do them. For some general users, it’s the first taste of what the promise of agentic AI really is.

    Many of the uses aren’t especially sexy, but they do save users hours. Cherny says he uses Cowork for project management, automatically messaging team members on Slack when they haven’t updated shared spreadsheets, and had heard of use cases including one researcher deploying it to comb through museum archives for basketry collections.

    “Engineers just feel unshackled, that they don’t have to work on all the tedious stuff anymore,” Cherny told Fortune. “We’re starting to hear this for Cowork also, where people are saying all this tedious stuff—shuffling data between spreadsheets, integrating Slack and Salesforce, organizing your emails—it just does it so you can focus on the work you actually want to do.”

    Enterprise first, consumer second

    Despite the consumer buzz, Anthropic is positioning both products squarely in the enterprise market, where the company reportedly already leads OpenAI in adoption.

    “For Anthropic, we’re an enterprise AI company,” Cherny said. “We build consumer products, but for us, really, the focus is enterprise.”

    Cherny said this strategy is also guided by Anthropic’s founding mission around AI safety, which resonates with corporate customers concerned about security and compliance. In this case, the company’s roadmap with general-use products was to first develop strong coding capabilities to enable sophisticated tool use and ‘test’ products with technical customers. By providing capabilities to technical users through Claude Code before extending them to broader audiences, Cherny said the company builds on a tested foundation rather than starting from scratch with consumer tools.

    Claude Code is now used by Uber, Netflix, Spotify, Salesforce, Accenture, and Snowflake, among others, according to Cherny. The product has found “a very intense product market fit across the different enterprise spaces,” he told Fortune.

    Anthropic’s also seen a traffic uplift as a result of Claude Code’s viral moment. Claude’s total web audience has more than doubled since December 2024, and its daily unique visitors on desktop are up 12% globally year-to-date, according to data from Similarweb and Sensor Tower published by The Wall Street Journal.

    The company is facing challenges that come with AI agents capable of autonomous action. Both products have security vulnerabilities, particularly “prompt injections” where attackers hide malicious instructions in web content to manipulate AI behavior.

    To tackle this, Anthropic has implemented multiple security layers, including running Cowork in a virtual machine and recently adding deletion protection after a user accidentally removed files. A feature Cherny called “quite innovative.”

    But the company does acknowledge the limitations of their approach. “Agent safety—that is, the task of securing Claude’s real-world actions—is still an active area of development in the industry,” Anthropic warned in its announcement.

    The future of software engineering

    With the rise of increasingly sophisticated autonomous coding tools, some are concerned that software engineer roles, especially entry-level roles, could dry up. Even within Anthropic, some engineers have stopped writing code at all, according to CEO Dario Amodei.

    “I have engineers within Anthropic who say ‘I don’t write any code anymore. I just let the model write the code, I edit it,’” Amodei said at the World Economic Forum in Davos. “We might be six to 12 months away from when the model is doing most, maybe all of what software engineers do end-to-end.”

    Tech companies argue that these tools will democratize coding, allowing those with little to no technical skills to build products by prompting AI systems in natural language. But, while it’s not definitive the two are causally linked and there are other factors impacting a jobs downturn, it’s true that open roles for entrylevel software engineers have declined as the amount of code written by generative AI has ramped up.

    Time will tell whether this heralds a democratization of software development or the slow erosion of a once stable profession, but by bringing autonomous AI agents out of the lab and into everyday work, Claude Code may speed up how quickly we find out.

    This story was originally featured on Fortune.com

    Beatrice Nolan

    Source link

  • The Agency partners with Rechat – Houston Agent Magazine

    Rechat is now integrated with The Agency and will serve as a centralized operating platform for the brokerage.

    Agents affiliated with The Agency will now have access to Rechat’s CRM, the People Center, as well as a range of tools including a marketing center and an AI agent assistant.

    “The Agency is one of the most respected luxury brands in real estate, and their commitment to thoughtful growth and agent empowerment aligns closely with how we build Rechat,” Shayan Hamidi, CEO of Rechat, said in a press release. “Our team across 18 countries and our platform are designed to help reduce complexity and support scale. This partnership reflects a shared belief that technology should enable great agents, not get in their way.”

    Rechat is also integrated with Follow Up Boss, SkySlope, ChatGPT, Zillow and Loft47.

    “The Agency was built on the belief that collaboration, innovation and world-class service go hand in hand,” said Mauricio Umansky, founder and CEO of The Agency. “Our partnership with Rechat reinforces that commitment, creating a more connected global ecosystem while delivering intuitive, best-in-class technology that drives efficiency, empowers our agents and ultimately elevates the client experience.”

    Emily Marek

    Source link

  • ChatGPT to show ads, Grandparents hooked on ‘Boomerslop’ – Tech Digest


    Adverts will soon appear at the top of the AI tool ChatGPT
    for some users, the company OpenAI has announced. The trial will initially take place in the US, and will affect some ChatGPT users on the free service and a new subscription tier, called ChatGPT Go. This cheaper option will be available for all users worldwide, and will cost $8 a month, or the equivalent pricing in other currencies. OpenAI says during the trial, relevant ads will appear after a prompt – for example, asking ChatGPT for places to visit in Mexico could result in holiday ads appearing. BBC 

    Doctors and medical experts have warned of the growing evidence of “health harms” from tech and devices on children and young people in the UK. The Academy of Medical Royal Colleges (AoMRC) said frontline clinicians have given personal testimony about “horrific cases they have treated in primary, secondary and community settings throughout the NHS and across most medical specialities”. The body, which represents 23 medical royal colleges and faculties, plans to gather evidence to establish the issues healthcare professionals and specialists are seeing repeatedly that may be attributed to tech and devices. Sky News 


    “What are you even doing in 2025?”
    says a handsome kid in a denim jacket, somewhere just shy of 18. “Out there it looks like everyone is glued to their phones, chasing nothing.” The AI-generated teenager features in an Instagram video that has more than 600,000 likes from an account dubbed Maximal Nostalgia. The video is one of dozens singing the praises of the 1970s and 1980sCreated with AI, the videos urge viewers to relive their halcyon days. The clips have gone viral across Instagram and Facebook, part of a new type of AI content that has been dubbed “boomerslop”. Telegraph

    More than 60 Labour MPs have written to Keir Starmer urging him to back a social media ban for under-16s, with peers due to vote on the issue this week. The MPs, who include select committee chairs, former frontbenchers, and MPs from the right and left of the party, are seeking to put pressure on the Prime Minister as calls mount for the UK to follow Australia’s precedent. Starmer has said he is open to a ban but members of the House of Lords are looking to force the issue when they vote this week on an amendment to the children, wellbeing and schools bill. Guardian


    Huawei has released a new update for the Watch Ultimate 2 smartwatch, installing new health features, including a heart failure risk assessment. The update comes with HarmonyOS firmware version 6.0.0.209 and is spreading in batches. The new additions include a coronary heart disease risk assessment. Users can join a coronary heart disease research project via the Huawei Research app on their smartphone. HuaweiCentral

    Google has just changed Gmail after twenty years. In among countless AI upgrades — including “personalized AI” that gives Gemini access to all your data in Gmail, Photos and more, comes a surprising decision. You can now change your primary Gmail address for the first time ever. You shouldn’t hesitate to do so. This new option is good — but it’s not perfect. And per 9to5Google, “Google also notes this can only be done once every 12 months, up to 3 times, so make this one count.” Forbes

    Chris Price

    Source link

  • OpenAI says it will start testing ads on ChatGPT in the coming weeks

    OpenAI announced Friday that it will begin testing ads on ChatGPT in the coming weeks, opening the door to another potential revenue stream for the AI company in addition to its subscription-based models. 

    The ads will appear at the bottom of the chat window “when there’s a relevant sponsored product or service based on your current conversation,” OpenAI said in a blog post

    In one example shared by the AI company, a user asks for authentic Mexican dish recommendations. ChatGPT responds with ideas for carne asada and pollo al carbon dishes and then links to a grocery brand advertising hot sauce.

    Only adults who use the free version of ChatGPT, or ChatGPT Go, a new low-cost subscription plan OpenAI announced Friday, will be shown ads. Higher-tier subscriptions, including Pro, which now costs $200 a month, will not include ads, OpenAI said. 

    Asked how long the ad testing phase will last, and whether it has plans to scale the use of ads, an OpenAI spokesperson told CBS News, “We will look at early user feedback and quality signals to see if early testing meets our bar before expanding.”

    The AI company said the ads will not influence the answers ChatGPT provides and that it will not share conversations users have with the chatbot — or their data — with advertisers. 

    The OpenAI spokesperson did not disclose the companies it intends to advertise on ChatGPT but said the company will “have more to share about our early partners soon.”

    OpenAI framed the introduction of the ads as a way to keep the free and low-cost versions of the chatbot accessible to more users.

    “Our enterprise and subscription businesses are already strong, and we believe in having a diverse revenue model where ads can play a part in making intelligence more accessible to everyone,” OpenAI said in its blog post.

    The AI company, which launched ChatGPT in 2022, is valued at $500 billion, but hasn’t turned a profit yet, CNBC reported in November.

    CEO Sam Altman downplayed the importance ads would play in OpenAI’s revenue stream during a podcast interview last year. “I expect it’s something we’ll try at some point,” he said. “I do not think it is our biggest revenue opportunity.”

    Source link

  • ChatGPT Health promises privacy for health conversations

    NEWYou can now listen to Fox News articles!

    OpenAI is rolling out ChatGPT Health, a new space for private health and wellness conversations. Importantly, the company says it will not use your health information or Health chats to train its core artificial intelligence (AI) models. As more people turn to ChatGPT to understand lab results and prepare for doctor visits, that promise matters. For many users, privacy remains the deciding factor.

    Meanwhile, Health appears as a separate space inside ChatGPT for early-access users. You will see it in the sidebar on desktop and in the menu on mobile. If you ask a health-related question in a regular chat, ChatGPT may suggest moving the conversation into Health for added protection. For now, access remains limited. However, OpenAI says it plans to roll out ChatGPT Health gradually to users on Free, Go, Plus and Pro plans.

    Sign up for my FREE CyberGuy Report

    Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.

    AI DISCLOSURE IN HEALTHCARE: WHAT PATIENTS MUST KNOW

    Health chats stay isolated from regular conversations and are excluded from AI training by default. (OpenAI)

    What makes ChatGPT Health different from regular chats

    ChatGPT Health is built as a separate environment, not just another chat thread. Here is what stands out:

    A dedicated private space

    Health conversations live in their own area. Files, chats and memories stay contained there. They do not mix with your regular ChatGPT conversations.

    Clear medical boundaries

    ChatGPT Health is not meant to diagnose conditions or replace a doctor. You will see reminders that responses are informational only and not medical advice.

    Connecting your health data

    If you choose, you can connect medical records and wellness apps to Health. This helps ground responses in your own data. Supported connections include:

    • Medical records, such as lab results and visit summaries
    • Apple Health for sleep, activity, and movement data
    • MyFitnessPal for nutrition and macros
    • Function for lab insights and nutrition guidance
    • Weight Watchers for GLP-1 meal ideas
    • Fitness and lifestyle apps like Peloton, AllTrails and Instacart

    You control access. You can disconnect any app at any time and revoke permissions immediately.

    Extra privacy protections

    OpenAI says Health uses additional encryption and isolation designed specifically for sensitive health data. Health chats are excluded from training foundation models by default.

    CAN AI CHATBOTS TRIGGER PSYCHOSIS IN VULNERABLE PEOPLE?

    ChatCPT Health screen

    ChatGPT Health creates a separate space designed specifically for health and wellness conversations. (OpenAI)

    Things you should not share on ChatGPT

    Even with stronger privacy promises, caution still matters. Avoid sharing:

    • Full Social Security numbers
    • Insurance member IDs or policy numbers
    • Login credentials or passwords
    • Scans of government-issued IDs
    • Financial account numbers
    • Highly sensitive details you would not tell a clinician

    Health is designed to inform and prepare you, not to replace professional care or secure systems built for identity protection.

    ChatGPT Health was built with doctors

    OpenAI built ChatGPT Health with direct input from more than 260 physicians across many medical specialties worldwide. Over two years, those clinicians reviewed hundreds of thousands of example responses and flagged wording that could confuse readers or delay care.

    As a result, their feedback guides how ChatGPT Health explains lab results, frames risk, and prompts follow-ups with a licensed clinician. More importantly, the system focuses on safety, clarity, and timely escalation when needed. Ultimately, the goal is to help you have better conversations with your doctor, not replace one.

    OPENAI LIMITS CHATGPT’S ROLE IN MENTAL HEALTH HELP

    ChatGPT Health waitlist notification

    Users can connect medical records and wellness apps to better understand trends before talking with a doctor. (OpenAI)

    What this means for you

    For many people, health information is scattered across portals, PDFs, apps and emails. ChatGPT Health aims to pull that context together in one place.

    That can help you:

    The key takeaway is control. You decide what to connect, what to delete and when to walk away.

    How to get access to ChatGPT Health

    If you do not see Health yet, you can join the waitlist inside ChatGPT. Once you have access:

    • Select Health from the sidebar
    • Upload files or connect apps from Settings
    • Start asking questions grounded in your own data

    You can also customize instructions inside Health to control tone, topics, and focus.

    CLICK HERE TO DOWNLOAD THE FOX NEWS APP

    Take my quiz: How safe is your online security?

    Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com        

    Kurt’s key takeaways

    ChatGPT Health reflects how people already use AI to understand their health. What matters most is the privacy line OpenAI is drawing. Health conversations stay separate and are not used to train core models. That promise builds trust, but smart sharing still matters. AI can help you prepare, understand and organize. Your doctor still makes the call.

    Would you trust an AI assistant with your health data if it promised stronger privacy than standard chat tools, or does that still feel like a step too far?  Let us know by writing to us at Cyberguy.com.

    Sign up for my FREE CyberGuy Report

    Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

    Copyright 2026 CyberGuy.com.  All rights reserved.

    Source link

  • ChatGPT served as “suicide coach” in man’s death, lawsuit alleges

    A new lawsuit filed against OpenAI alleges that its ChatGPT artificial intelligence app encouraged a 40-year-old Colorado man to commit suicide.

    The complaint filed in California state court by Stephanie Gray, the mother of Austin Gordon, accuses OpenAI and CEO Sam Altman of building a defective and dangerous product that led to Gordon’s death.

    Gordon, who died of a self-inflicted gunshot wound in November 2025, had intimate exchanges with ChatGPT, according to the suit, which also alleged that the generative AI tool romanticized death.

    “ChatGPT turned from Austin’s super-powered resource to a friend and confidante, to an unlicensed therapist, and in late 2025, to a frighteningly effective suicide coach,” the complaint alleged.

    The lawsuit comes amid scrutiny over the AI chatbot’s effect on mental health, with OpenAI also facing other lawsuits alleging that ChatGPT played a role in encouraging people to take their own lives. 

    Gray is seeking damages for her son’s death.

    In a statement to CBS News, an OpenAI spokesperson called Gordon’s death a “very tragic situation” and said the company is reviewing the filings to understand the details. 

    “We have continued to improve ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support,” the spokesperson said. “We have also continued to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”

    “Suicide lullaby”

    According to Gray’s suit, shortly before Gordon’s death, ChatGPT allegedly said in one exchange, “[W]hen you’re ready… you go. No pain. No mind. No need to keep going. Just… done.”

    ChatGPT “convinced Austin — a personwho had already told ChaiGPT that he was sad, and who had discussed mental health struggles in detail with it — that choosing to live was not the right choice to make,” according to the complaint. “It went on and on, describing the end of existence as a peaceful and beautiful place, and reassuring him that he should not be afraid.”

    ChatGPT also effectively turned his favorite childhood book, Margaret Wise Brown’s “Goodnight Moon,” into what the lawsuit refers to as a “suicide lullaby.” Three days after that exchange ended in late October 2025, law enforcement found Gordon’s body alongside a copy of the book, the complaint alleges. 

    The lawsuit accuses OpenAI of designing ChatGPT 4, the version of the app Gordon was using at the time of his death, in a way that fosters people’s “unhealthy dependencies” on the tool. 

    “That is the programming choice defendants made; and Austin was manipulated, deceived and encouraged to suicide as a result,” the suit alleges.


    If you or someone you know is in emotional distress or a suicidal crisis, you can reach the 988 Suicide & Crisis Lifeline by calling or texting 988. You can also chat with the 988 Suicide & Crisis Lifeline here.

    For more information about mental health care resources and support, the National Alliance on Mental Illness HelpLine can be reached Monday through Friday, 10 a.m.–10 p.m. ET, at 1-800-950-NAMI (6264) or email info@nami.org.

    Source link

  • The ‘Stranger Things’ Documentary Maker Weighs in on That ChatGPT Controversy

    One Last Adventure: The Making of Stranger Things 5 hit Netflix earlier this week, and as it’s become clear that there’s no secret ninth episode coming—as intense internet speculation had suggested—disappointed fans have instead turned to scrutinizing the documentary for answers, clarity, and fuel for more speculation. And, well, “Conformity Gate” can step aside, because “ChatGPT Gate” is the hot new topic.

    The controversy comes because eagle-eyed viewers spotted what appear to be ChatGPT tabs visible on a computer being used by one of the Duffer Brothers. As part of One Last Adventure‘s behind-the-scenes access, viewers see what it was like in the Stranger Things writers’ room as the team, including the Duffers, frantically tries to complete the script for episode eight, “The Rightside Up,” under pressure from Netflix and the show’s production team.

    Speaking to One Last Adventure director Martina Radwan, the Hollywood Reporter asked outright if she ever saw generative AI being used by the show’s writers. Her first response: “I mean, are we even sure they had ChatGPT open?”

    She then added, “Well, there’s a lot of chatter where [social media users] are like, ‘We don’t really know, but we’re assuming.’ But to me it’s like, doesn’t everybody have it open, to just do quick research?”

    (The answer is no, but we digress.)

    However, there’s a difference between “research” and “writing a script,” which Radwan pointed out. “How can you possibly write a storyline with 19 characters and use ChatGPT, I don’t even understand.”

    She continued. “Again, first of all, nobody has actually proved that it was open. That’s like having your iPhone next to your computer while you’re writing a story. We just use these tools … while multitasking. So there’s a lot going on all the time, every time. What I find heartbreaking is everybody loves the show, and suddenly we need to pick it apart.”

    Radwan—who spent a full year enmeshed in Stranger Things—confirmed that she never saw generative AI being used unethically by the show’s writers.

    “No, of course not. I witnessed creative exchanges. I witnessed conversation. People think ‘writers room’ means people are sitting there writing. No, it’s a creative exchange. It’s story development,” she said, “and, of course, you go places in your creative mind and then you come back [to the script]. I think being in the writers room is such a privilege and such a gift to be able to witness that.”

    Radwan addressed a few other eyebrow-raising scenes captured in One Last Adventure and also responded to “Conformity Gate,” so definitely head to THR to read the whole piece.

    io9 reached out to Netflix for comment or clarity on whether or not that’s actually ChatGPT viewers have spotted in the documentary, as well as the allegations that generative AI was used as part of the Stranger Things writing process. We will update this post should we hear back.

     

    Want more io9 news? Check out when to expect the latest Marvel, Star Wars, and Star Trek releases, what’s next for the DC Universe on film and TV, and everything you need to know about the future of Doctor Who.

    Cheryl Eddy

    Source link

  • Report from OpenAI Claims ChatGPT Is Becoming an Important Complement to U.S. Healthcare

    OpenAI just released a report about healthcare drawn from anonymized chatbot conversations. The title could double as one of those depressing single-sentence short stories: “AI as a Healthcare Ally: How Americans are navigating the system with ChatGPT.”

    According to the report, OpenAI’s hallucinating application—a product psychologists claim has the potential to exacerbate or otherwise mishandle mental health symptoms—is being used by Americans in the following ways:

    • Almost 2 million messages every week involve people trying to deal with medical pricing, claims (presumably on both the patient side and the insurance company side), insurance plans, billing, eligibility, coverage, and other stressful sounding issues related to private health insurance.
    • 600,000 healthcare messages every week are sent from rural areas and other healthcare deserts.
    • Seven out of ten healthcare queries occur during times when clinics are generally closed, “underscoring how people are seeking actionable information when facilities are closed,” the report says (and this could easily be true, but it may also underscore how often hypochondriacs and other people with anxiety disorders turn to ChatGPT when they’re up late and night worrying).

    The report also says OpenAI itself conducted a survey (the methodology of which isn’t mentioned) finding that three in five U.S. adults self-report using AI tools in one of these ways at some point in the past three months.

    Incidentally, a Gallup report from November of last year found that 30% of Americans answered “yes” to the question “Has there been a time in the last 12 months when […] You chose not to have a medical procedure, lab test or other evaluation that a doctor recommended to you because you didn’t have enough money to pay for it?” 

    The OpenAI report highlights the story of a busy rural doctor who uses OpenAI models “as an AI scribe, drafting visit notes within the clinical workflow.” It goes on to say that AI models “make a near-term contribution by helping people in
    underserved areas interpret information, prepare for care, and navigate gaps in access, while helping rare clinicians reclaim time and reduce burnout.”

    I’m not sure which thought is bleaker: more and more people using chatbots as doctors because they can’t afford proper care, or people turning to doctors, and having the experience mediated through AI models. 

    Mike Pearl

    Source link

  • Can AI chatbots trigger psychosis in vulnerable people?

    NEWYou can now listen to Fox News articles!

    Artificial intelligence chatbots are quickly becoming part of our daily lives. Many of us turn to them for ideas, advice or conversation. For most, that interaction feels harmless. However, mental health experts now warn that for a small group of vulnerable people, long and emotionally charged conversations with AI may worsen delusions or psychotic symptoms.

    Doctors stress this does not mean chatbots cause psychosis. Instead, growing evidence suggests that AI tools can reinforce distorted beliefs among individuals already at risk. That possibility has prompted new research and clinical warnings from psychiatrists. Some of those concerns have already surfaced in lawsuits alleging that chatbot interactions may have contributed to serious harm during emotionally sensitive situations.

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

    What psychiatrists are seeing in patients using AI chatbots

    Psychiatrists describe a repeating pattern. A person shares a belief that does not align with reality. The chatbot accepts that belief and responds as if it were true. Over time, repeated validation can strengthen the belief rather than challenge it.

    OPINION: THE FAITH DEFICIT IN ARTIFICIAL INTELLIGENCE SHOULD ALARM EVERY AMERICAN 

    Mental health experts warn that emotionally intense conversations with AI chatbots may reinforce delusions in vulnerable users, even though the technology does not cause psychosis. (Philip Dulian/picture alliance via Getty Images)

    Clinicians say this feedback loop can deepen delusions in susceptible individuals. In several documented cases, the chatbot became integrated into the person’s distorted thinking rather than remaining a neutral tool. Doctors warn that this dynamic raises concern when AI conversations are frequent, emotionally engaging and left unchecked.

    Why AI chatbot conversations feel different from past technology

    Mental health experts note that chatbots differ from earlier technologies linked to delusional thinking. AI tools respond in real time, remember prior conversations and adopt supportive language. That experience can feel personal and validating. 

    For individuals already struggling with reality testing, those qualities may increase fixation rather than encourage grounding. Clinicians caution that risk may rise during periods of sleep deprivation, emotional stress or existing mental health vulnerability.

    How AI chatbots can reinforce false or delusional beliefs

    Doctors say many reported cases center on delusions rather than hallucinations. These beliefs may involve perceived special insight, hidden truths or personal significance. Chatbots are designed to be cooperative and conversational. They often build on what someone types rather than challenge it. While that design improves engagement, clinicians warn it can be problematic when a belief is false and rigid.

    Mental health professionals say the timing of symptom escalation matters. When delusions intensify during prolonged chatbot use, AI interaction may represent a contributing risk factor rather than a coincidence.

    OPENAI TIGHTENS AI RULES FOR TEENS BUT CONCERNS REMAIN

    Computer open to ChatGPT screen.

    Psychiatrists say some patients report chatbot responses that validate false beliefs, creating a feedback loop that can worsen symptoms over time. (Nicolas Maeterlinck/Belga Mag/AFP via Getty Images)

    What research and case reports reveal about AI chatbots

    Peer-reviewed research and clinical case reports have documented people whose mental health declined during periods of intense chatbot engagement. In some instances, individuals with no prior history of psychosis required hospitalization after developing fixed false beliefs connected to AI conversations. International studies reviewing health records have also identified patients whose chatbot activity coincided with negative mental health outcomes. Researchers emphasize that these findings are early and require further investigation.

    A peer-reviewed Special Report published in Psychiatric News titled “AI-Induced Psychosis: A New Frontier in Mental Health” examined emerging concerns around AI-induced psychosis and cautioned that existing evidence is largely based on isolated cases rather than population-level data. The report states: “To date, these are individual cases or media coverage reports; currently, there are no epidemiological studies or systematic population-level analyses of the potentially deleterious mental health effects of conversational AI.” The authors emphasize that while reported cases are serious and warrant further investigation, the current evidence base remains preliminary and heavily dependent on anecdotal and nonsystematic reporting.

    What AI companies say about mental health risks

    OpenAI says it continues working with mental health experts to improve how its systems respond to signs of emotional distress. The company says newer models aim to reduce excessive agreement and encourage real-world support when appropriate. OpenAI has also announced plans to hire a new Head of Preparedness, a role focused on identifying potential harms tied to its AI models and strengthening safeguards around issues ranging from mental health to cybersecurity as those systems grow more capable.

    Other chatbot developers have adjusted policies as well, particularly around access for younger audiences, after acknowledging mental health concerns. Companies emphasize that most interactions do not result in harm and that safeguards continue to evolve.

    What this means for everyday AI chatbot use

    Mental health experts urge caution, not alarm. The vast majority of people who interact with chatbots experience no psychological issues. Still, doctors advise against treating AI as a therapist or emotional authority. Those with a history of psychosis, severe anxiety or prolonged sleep disruption may benefit from limiting emotionally intense AI conversations. Family members and caregivers should also pay attention to behavioral changes tied to heavy chatbot engagement.

    I WAS A CONTESTANT ON ‘THE BACHELOR.’ HERE’S WHY AI CAN’T REPLACE REAL RELATIONSHIPS

    ChatGPT logo on an iPhone.

    Researchers are studying whether prolonged chatbot use may contribute to mental health declines among people already at risk for psychosis. (Photo Illustration by Jaque Silva/NurPhoto via Getty Images)

    Tips for using AI chatbots more safely

    Mental health experts stress that most people can interact with AI chatbots without problems. Still, a few practical habits may help reduce risk during emotionally intense conversations.

    • Avoid treating AI chatbots as a replacement for professional mental health care or trusted human support.
    • Take breaks if conversations begin to feel emotionally overwhelming or all-consuming.
    • Be cautious if an AI response strongly reinforces beliefs that feel unrealistic or extreme.
    • Limit late-night or sleep-deprived interactions, which can worsen emotional instability.
    • Encourage open conversations with family members or caregivers if chatbot use becomes frequent or isolating.

    If emotional distress or unusual thoughts increase, experts say it is important to seek help from a qualified mental health professional.

    Take my quiz: How safe is your online security?

    Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz at Cyberguy.com.

    CLICK HERE TO DOWNLOAD THE FOX NEWS APP

    Kurt’s key takeaways

    AI chatbots are becoming more conversational, more responsive and more emotionally aware. For most people, they remain helpful tools. For a small but important group, they may unintentionally reinforce harmful beliefs. Doctors say clearer safeguards, awareness and continued research are essential as AI becomes more embedded in our daily lives. Understanding where support ends and reinforcement begins could shape the future of both AI design and mental health care.

    As AI becomes more validating and humanlike, should there be clearer limits on how it engages during emotional or mental health distress? Let us know by writing to us at Cyberguy.com.

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter. 

    Copyright 2025 CyberGuy.com.  All rights reserved.

    Source link

  • OpenAI admits AI browsers face unsolvable prompt attacks

    NEWYou can now listen to Fox News articles!

    Cybercriminals don’t always need malware or exploits to break into systems anymore. Sometimes, they just need the right words in the right place. OpenAI is now openly acknowledging that reality. The company says prompt injection attacks against artificial intelligence (AI)-powered browsers are not a bug that can be fully patched, but a long-term risk that comes with letting AI agents roam the open web. This raises uncomfortable questions about how safe these tools really are, especially as they gain more autonomy and access to your data.

    Sign up for my FREE CyberGuy Report 

    Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter. 

    NEW MALWARE CAN READ YOUR CHATS AND STEAL YOUR MONEY

    AI-powered browsers can read and act on web content, which also makes them vulnerable to hidden instructions attackers can slip into pages or documents. (Kurt “CyberGuy” Knutsson)

    Why prompt injection isn’t going away

    In a recent blog post, OpenAI admitted that prompt injection attacks are unlikely to ever be completely eliminated. Prompt injection works by hiding instructions inside web pages, documents or emails in ways that humans don’t notice, but AI agents do. Once the AI reads that content, it can be tricked into following malicious instructions.

    OpenAI compared this problem to scams and social engineering. You can reduce them, but you can’t make them disappear. The company also acknowledged that “agent mode” in its ChatGPT Atlas browser increases risk because it expands the attack surface. The more an AI can do on your behalf, the more damage it can cause when something goes wrong.

    OpenAI launched the ChatGPT Atlas browser in October, and security researchers immediately started testing its limits. Within hours, demos appeared showing that a few carefully placed words inside a Google Doc could influence how the browser behaved. That same day, Brave published its own warning, explaining that indirect prompt injection is a structural problem for AI-powered browsers, including tools like Perplexity’s Comet.

    This isn’t just OpenAI’s problem. Earlier this month, the National Cyber Security Centre in the U.K. warned that prompt injection attacks against generative AI systems may never be fully mitigated.

    FAKE AI CHAT RESULTS ARE SPREADING DANGEROUS MAC MALWARE

    ChatGPT Atlas screen in an auditorium

    Prompt injection attacks exploit trust at scale, allowing malicious instructions to influence what an AI agent does without the user ever seeing it. (Kurt “CyberGuy” Knutsson)

    The risk trade-off with AI browsers

    OpenAI says it views prompt injection as a long-term security challenge that requires constant pressure, not a one-time fix. Its approach relies on faster patch cycles, continuous testing and layered defenses. That puts it broadly in line with rivals like Anthropic and Google, which have both argued that agentic systems need architectural controls and ongoing stress testing.

    Where OpenAI is taking a different approach is with something it calls an “LLM-based automated attacker.” In simple terms, OpenAI trained an AI to act like a hacker. Using reinforcement learning, this attacker bot looks for ways to sneak malicious instructions into an AI agent’s workflow.

    The bot runs attacks in simulation first. It predicts how the target AI would reason, what steps it would take and where it might fail. Based on that feedback, it refines the attack and tries again. Because this system has insight into the AI’s internal decision-making, OpenAI believes it can surface weaknesses faster than real-world attackers.

    Even with these defenses, AI browsers aren’t safe. They combine two things attackers love: autonomy and access. Unlike regular browsers, they don’t just display information, but also read emails, scan documents, click links and take actions on your behalf. That means a single malicious prompt hidden in a webpage, document or message can influence what the AI does without you ever seeing it. Even when safeguards are in place, these agents operate by trusting content at scale, and that trust can be manipulated.

    THIRD-PARTY BREACH EXPOSES CHATGPT ACCOUNT DETAILS

    Person wearing a hoodie works on multiple computer screens displaying digital data in a dark room.

    As AI browsers gain more autonomy and access to personal data, limiting permissions and keeping human confirmation in the loop becomes critical for safety. (Kurt “CyberGuy” Knutsson)

    7 steps you can take to reduce risk with AI browsers

    You may not be able to eliminate prompt injection attacks, but you can significantly limit their impact by changing how you use AI tools.

    1) Limit what the AI browser can access

    Only give an AI browser access to what it absolutely needs. Avoid connecting your primary email account, cloud storage or payment methods unless there’s a clear reason. The more data an AI can see, the more valuable it becomes to attackers. Limiting access reduces the blast radius if something goes wrong.

    2) Require confirmation for every sensitive action

    Never allow an AI browser to send emails, make purchases or modify account settings without asking you first. Confirmation breaks long attack chains and gives you a moment to spot suspicious behavior. Many prompt injection attacks rely on the AI acting quietly in the background without user review.

    3) Use a password manager for all accounts

    A password manager ensures every account has a unique, strong password. If an AI browser or malicious page leaks one credential, attackers can’t reuse it elsewhere. Many password managers also refuse to autofill on unfamiliar or suspicious sites, which can alert you that something isn’t right before you manually enter anything.

    Next, see if your email has been exposed in past breaches. Our #1 password manager (see Cyberguy.com) pick includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.

    Check out the best expert-reviewed password managers of 2025 at Cyberguy.com

    4) Run strong antivirus software on your device

    Even if an attack starts inside the browser, antivirus software can still detect suspicious scripts, unauthorized system changes or malicious network activity. Strong antivirus software focuses on behavior, not just files, which is critical when dealing with AI-driven or script-based attacks.

    The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

    Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com

    5) Avoid broad or open-ended instructions

    Telling an AI browser to “handle whatever is needed” gives attackers room to manipulate it through hidden prompts. Be specific about what the AI is allowed to do and what it should never do. Narrow instructions make it harder for malicious content to influence the agent.

    6) Be careful with AI summaries and automated scans

    When an AI browser scans emails, documents or web pages for you, remember that hidden instructions can live inside that content. Treat AI-generated actions as drafts or suggestions, not final decisions. Review anything the AI plans to act on before approving it.

    7) Keep your browser, AI tools and operating system updated

    Security fixes for AI browsers evolve quickly as new attack techniques emerge. Delaying updates leaves known weaknesses open longer than necessary. Turning on automatic updates ensures you get protection as soon as they’re available, even if you miss the announcement.

    CLICK HERE TO DOWNLOAD THE FOX NEWS APP

    Kurt’s key takeaway

    There’s been a meteoric rise in AI browsers. We’re now seeing them from major tech companies, including OpenAI’s Atlas, The Browser Company’s Dia and Perplexity’s Comet. Even existing browsers like Chrome and Edge are pushing hard to add AI and agentic features into their current infrastructure. While these browsers can be useful, the technology is still early. It’s best not to fall for the hype and to wait for it to mature.

    Do you think AI browsers are worth the risk today, or are they moving faster than security can keep up? Let us know by writing to us at Cyberguy.com

    Sign up for my FREE CyberGuy Report 

    Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter. 

    Copyright 2025 CyberGuy.com.  All rights reserved.

    Source link

  • Elon Musk company bot apologizes for sharing sexualized images of children

    Grok, the chatbot of Elon Musk’s artificial intelligence company xAI, published sexualized images of children as its guardrails seem to have failed when it was prompted with vile user requests.

    Users used prompts such as “put her in a bikini” under pictures of real people on X to get Grok to generate nonconsensual images of them in inappropriate attire. The morphed images created on Grok’s account are posted publicly on X, Musk’s social media platform.

    The AI complied with requests to morph images of minors even though that is a violation of its own acceptable use policy.

    “There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing, like the example you referenced,” Grok responded to a user on X. “xAI has safeguards, but improvements are ongoing to block such requests entirely.”

    xAI did not immediately respond to a request for comment.

    Its chatbot posted an apology.

    “I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user’s prompt,” said a post on Grok’s profile. “This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I’m sorry for any harm caused. xAI is reviewing to prevent future issues.”

    The government of India notified X that it risked losing legal immunity if the company did not submit a report within 72 hours on the actions taken to stop the generation and distribution of obscene, nonconsensual images targeting women.

    Critics have accused xAI of allowing AI-enabled harassment, and were shocked and angered by the existence of a feature for seamless AI manipulation and undressing requests.

    “How is this not illegal?” journalist Samantha Smith posted on X, decrying the creation of her own nonconsensual sexualized photo.

    Musk’s xAI has positioned Grok as an “anti-woke” chatbot that is programmed to be more open and edgy than competing chatbots such as ChatGPT.

    In May, Grok posted about “white genocide,” repeating conspiracy theories of Black South Africans persecuting the white minority, in response to an unrelated question.

    In June, the company apologized when Grok posted a series of antisemitic remarks praising Adolf Hitler.

    Companies such as Google and OpenAI, which also operate AI image generators, have much more restrictive guidelines around content.

    The proliferation of nonconsensual deepfake imagery has coincided with broad AI adoption, with a 400% increase in AI child sexual abuse imagery in the first half of 2025, according to Internet Watch Foundation.

    xAI introduced “Spicy Mode” in its image and video generation tool in August for verified adult subscribers to create sensual content.

    Some adult-content creators on X prompted Grok to generate sexualized images to market themselves, kickstarting an internet trend a few days ago, according to Copyleaks, an AI text and image detection company.

    The testing of the limits of Grok devolved into a free-for-all as users asked it to create sexualized images of celebrities and others.

    xAI is reportedly valued at more than $200 billion, and has been investing billions of dollars to build the largest data center in the world to power its AI applications.

    However, Grok’s capabilities still lag competing AI models such as ChatGPT, Claude and Gemini, that have amassed more users, while Grok has turned to sexual AI companions and risque chats to boost growth.

    Nilesh Christopher

    Source link

  • OpenAI tightens AI rules for teens but concerns remain

    NEWYou can now listen to Fox News articles!

    OpenAI says it is taking stronger steps to protect teens using its chatbot. Recently, the company updated its behavior guidelines for users under 18 and released new AI literacy tools for parents and teens. The move comes as pressure mounts across the tech industry. Lawmakers, educators and child safety advocates want proof that AI companies can protect young users. Several recent tragedies have raised serious questions about the role AI chatbots may play in teen mental health. While the updates sound promising, many experts say the real test will be how these rules work in practice.

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

    THIRD-PARTY BREACH EXPOSES CHATGPT ACCOUNT DETAILS

    OpenAI announced tougher safety rules for teen users as pressure grows on tech companies to prove AI can protect young people online. (Photographer: Daniel Acker/Bloomberg via Getty Images)

    What OpenAI’s new teen rules actually say

    OpenAI’s updated Model Spec builds on existing safety limits and applies to teen users ages 13 to 17. It continues to block sexual content involving minors and discourages self-harm, delusions and manic behavior. For teens, the rules go further. The models must avoid immersive romantic roleplay, first-person intimacy, and violent or sexual roleplay, even when non-graphic. They must use extra caution when discussing body image and eating behaviors. When safety risks appear, the chatbot should prioritize protection over user autonomy. It should also avoid giving advice that helps teens hide risky behavior from caregivers. These limits apply even if a prompt is framed as fictional, historical, or educational.

    The four principles OpenAI says it uses to protect teens

    OpenAI says its approach to teen users follows four core principles:

    • Put teen safety first, even when it limits freedom
    • Encourage real-world support from family, friends, or professionals
    • Speak with warmth and respect without treating teens like adults
    • Be transparent and remind users that the AI is not human

    The company also shared examples of the chatbot refusing requests like romantic roleplay or extreme appearance changes.

    WHY PARENTS MAY WANT TO DELAY SMARTPHONES FOR KIDS

    Teen typing on their laptop.

    The company updated its chatbot guidelines for users ages 13 to 17 and launched new AI literacy tools for parents and teens. (Photographer: Daniel Acker/Bloomberg via Getty Images)

    Teens are driving the AI safety debate

    Gen Z users are among the most active chatbot users today. Many rely on AI for homework help, creative projects and emotional support. OpenAI’s recent deal with Disney could draw even more young users to the platform. That growing popularity has also brought scrutiny. Recently, attorneys general from 42 states urged major tech companies to add stronger safeguards for children and vulnerable users. At the federal level, proposed legislation could go even further. Some lawmakers want to block minors from using AI chatbots entirely.

    Why experts question whether AI safety rules work

    Despite the updates, many experts remain cautious. One major concern is engagement. Advocates argue chatbots often encourage prolonged interaction, which can become addictive for teens. Refusing certain requests could help break that cycle. Still, critics warn that examples in policy documents are not proof of consistent behavior. Past versions of the Model Spec banned excessive agreeableness, yet models continued mirroring users in harmful ways. Some experts link this behavior to what they call AI psychosis, where chatbots reinforce distorted thinking instead of challenging it.

    In one widely reported case, a teenager who later died by suicide spent months interacting with a chatbot. Conversation logs showed repeated mirroring and validation of distress. Internal systems flagged hundreds of messages related to self-harm. Yet the interactions continued. Former safety researchers later explained that earlier moderation systems reviewed content after the fact rather than in real time. That allowed harmful conversations to continue unchecked. OpenAI says it now uses real-time classifiers across text, images, and audio. When systems detect serious risk, trained reviewers may step in, and parents may be notified.

    Some advocates praise OpenAI for publicly sharing its under-18 guidelines. Many tech companies do not offer that level of transparency. Still, experts stress that written rules are not enough. What matters is how the system behaves during real conversations with vulnerable users. Without independent measurement and clear enforcement data, critics say these updates remain promises rather than proof.

    How parents can help teens use AI safely

    OpenAI says parents play a key role in helping teens use AI responsibly. The company stresses that tools alone are not enough. Active guidance matters most.

    1) Talk with teens about AI use

    OpenAI encourages regular conversations between parents and teens about how AI fits into daily life. These discussions should focus on responsible use and critical thinking. Parents are urged to remind teens that AI responses are not facts and can be wrong.

    2) Use parental controls and safeguards

    OpenAI provides parental controls that let adults manage how teens interact with AI tools. These tools can limit features and add oversight. The company says safeguards are designed to reduce exposure to higher-risk topics and unsafe interactions. Here are the steps OpenAI recommends parents take.

    • Confirm your teen’s account statusParents should make sure their teen’s account reflects the correct age. OpenAI applies stronger safeguards to accounts identified as belonging to users under 18.
    • Review available parental controlsOpenAI offers parental controls that allow adults to tailor a teen’s experience. These controls can limit certain features and add extra oversight around higher-risk topics.
    • Understand content safeguardsTeen accounts are subject to stricter content rules. These safeguards reduce exposure to topics like self-harm, sexualized roleplay, dangerous activities, body image concerns and requests to hide unsafe behavior.
    • Pay attention to safety notificationsIf the system detects signs of serious risk, OpenAI says additional safeguards may apply. In some cases, this can include reviews by trained staff and parent notifications.
    • Revisit settings as features changeOpenAI recommends parents stay informed as new tools and features roll out. Safeguards may expand over time as the platform evolves.

    3) Watch for excessive use

    OpenAI says healthy use matters as much as content safety. To support balance, the company has added break reminders during long sessions. Parents are encouraged to watch for signs of overuse and step in when needed.

    4) Keep human support front and center

    OpenAI emphasizes that AI should never replace real relationships. Teens should be encouraged to turn to family, friends or professionals when they feel stressed or overwhelmed. The company says human support remains essential.

    5) Set boundaries around emotional use

    Parents should make clear that AI can help with schoolwork or creativity. It should not become a primary source of emotional support.

    6) Ask how teens actually use AI

    Parents are encouraged to ask what teens use AI for, when they use it and how it makes them feel. These conversations can reveal unhealthy patterns early.

    7) Watch for behavior changes

    Experts advise parents to look for increased isolation, emotional reliance on AI or treating chatbot responses as authority. These can signal unhealthy dependence.

    8) Keep devices out of bedrooms at night

    Many specialists recommend keeping phones and laptops out of bedrooms overnight. Reducing late-night AI use can help protect sleep and mental health.

    9) Know when to involve outside help

    If a teen shows signs of distress, parents should involve trusted adults or professionals. AI safety tools cannot replace real-world care.

    WHEN AI CHEATS: THE HIDDEN DANGERS OF REWARD HACKING

    Laptop open to ChatGPT.

    Lawmakers and child safety advocates are demanding stronger safeguards as teens increasingly rely on AI chatbots. (Photographer: Gabby Jones/Bloomberg via Getty Images)

    Pro Tip: Add strong antivirus software and multi-factor authentication

    Parents and teens should enable multi-factor authentication (MFA) on teen AI accounts whenever it is available. OpenAI allows users to turn on multi-factor authentication for ChatGPT accounts.

    To enable it, go to OpenAI.com and sign in. Scroll down and click the profile icon, then select Settings and choose Security. From there, turn on multi-factor authentication (MFA). You will then be given two options. One option uses an authenticator app, which generates one-time codes during login. Another option sends 6-digit verification codes by text message through SMS or WhatsApp, depending on the country code. Enabling multi-factor authentication adds an extra layer of protection beyond a password and helps reduce the risk of unauthorized access to teen accounts.

    Also, consider adding a strong antivirus software that can help block malicious links, fake downloads, and other threats teens may encounter while using AI tools. This adds an extra layer of protection beyond any single app or platform.  Using strong antivirus protection and two-factor authentication together helps reduce the risk of account takeovers that could expose teens to unsafe content or impersonation risks.

    Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.
     

    Take my quiz: How safe is your online security?

    Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.

    CLICK HERE TO DOWNLOAD THE FOX NEWS APP

    Kurt’s key takeaways

    OpenAI’s updated teen safety rules show the company is taking growing concerns seriously. Clearer limits, stronger safeguards, and more transparency are steps in the right direction. Still, policies on paper are not the same as behavior in real conversations. For teens who rely on AI every day, what matters most is how these systems respond in moments of stress, confusion, or vulnerability. That is where trust is built or lost. For parents, this moment calls for balance. AI tools can be helpful and creative. They also require guidance, boundaries, and supervision. No set of controls can replace real conversations or human support. As AI becomes more embedded in our everyday lives, the focus must stay on outcomes, not intentions. Protecting teens will depend on consistent enforcement, independent oversight, and active family involvement.

    Should teens ever rely on AI for emotional support, or should those conversations always stay human?  Let us know by writing to us at Cyberguy.com.
     

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter. 

    Copyright 2025 CyberGuy.com. All rights reserved.

    Source link

  • OpenAI says it’s hiring a head safety executive to mitigate AI risks

    OpenAI is seeking a new “head of preparedness” to guide the company’s safety strategy amid mounting concerns over how artificial intelligence tools could be misused.

    According to the job posting, the new hire will be paid $555,000 to lead the company’s safety systems team, which OpenAI says is focused on ensuring AI models are “responsibly developed and deployed.” The head of preparedness will also be tasked with tracking risks and developing mitigation strategies for what OpenAI calls “frontier capabilities that create new risks of severe harm.”

    “This will be a stressful job and you’ll jump into the deep end pretty much immediately,” CEO Sam Altman wrote in an X post describing the position over the weekend.

    He added, “This is a critical role at an important time; models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges.”

    OpenAI did not immediately respond to a request for comment.

    The company’s investment in safety efforts comes as scrutiny intensifies over artificial intelligence’s influence on mental health, following multiple allegations that OpenAI’s chatbot, ChatGPT, was involved in interactions preceding a number of suicides.

    In one case earlier this year covered by CBS News, the parents of a 16-year-old sued the company, alleging that ChatGPT encouraged their son to plan his own suicide. That prompted OpenAI to announce new safety protocols for users under 18. 

    ChatGPT also allegedly fueled what a lawsuit filed earlier this month described as the “paranoid delusions” of a 56-year-old man who murdered his mother and then killed himself. At the time, OpenAI said it was working on improving its technology to help ChatGPT recognize and respond to signs of mental or emotional distress, de-escalate conversations and guide people toward real-world support.

    Beyond mental health concerns, worries have also increased over how artificial intelligence could be used to carry out cybersecurity attacks. Samantha Vinograd, a CBS News contributor and former top Homeland Security official in the Obama administration, addressed the issue on CBS News’ “Face the Nation with Margaret Brennan” on Sunday.

    “AI doesn’t just level the playing field for certain actors,” she said. “It actually brings new players onto the pitch, because individuals, non-state actors, have access to relatively low-cost technology that makes different kinds of threats more credible and more effective.”

    Altman acknowledged the growing safety hazards AI poses in his X post, writing that while the models and their capabilities have advanced quickly, challenges have also started to arise.

    “The potential impact of models on mental health was something we saw a preview of in 2025; we are just now seeing models get so good at computer security they are beginning to find critical vulnerabilities,” he wrote.

    Now, he continued, “We are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides … in a way that lets us all enjoy the tremendous benefits.”

    According to the job posting, a qualified applicant would have “deep technical expertise in machine learning, AI safety, evaluations, security or adjacent risk domains” and have experience with “designing or executing high-rigor evaluations for complex technical systems,” among other qualifications.

    OpenAI first announced the creation of a preparedness team in 2023, according to TechCrunch.

    Source link

  • This Brilliant Hack is the Best Use of ChatGPT on an iPhone I’ve Found Yet

    A while back, we stopped paying for Spotify. It wasn’t out of protest or principle—it was just one of those decisions you make when you realize how many monthly charges have crept into your life. We already have Apple Music as a part of the Apple One bundle, so it made sense to stop paying for one more thing.

    In practice, though, it was kind of annoying. The problem isn’t the catalog or interface. In fact, there are a lot of things I prefer about Spotify over Apple Music. The real problem, however, was the decade of carefully built playlists. Rebuilding them manually in Apple Music would take hours. Having to add every song, one at a time, meant enough friction that, for a while, we just… didn’t do it.

    Sure, there are services you can pay for to move your Spotify playlists to Apple Music, but I’m not sure how I feel about random third-party services that require you to sign into your Spotify and Apple accounts. Actually, I know exactly how I feel about them, and it’s just not something I’m going to do.

    Then, almost accidentally, I found what might be the most genuinely useful thing I’ve done with ChatGPT on an iPhone yet.

    Recently, the ChatGPT iOS app added app integrations, including the ability to interact directly with Apple Music. That alone sounded mildly interesting. I played around with it long enough to connect my Apple Music account and ask ChatGPT to make me a Christmas Playlist. What I really wanted, though, was the playlist I’ve been listening to for years–the one I made in Spotify.

    Then I realized that ChatGPT could probably just recreate that playlist, but I didn’t want to have to type up the whole list. Instead, I opened Spotify, pulled up my Christmas playlist, and took a few screenshots. Then I opened ChatGPT and said, essentially: “Create this playlist in Apple Music.”

    That was it. ChatGPT read the screenshot, identified every song, matched them in Apple Music, and built the playlist automatically. There was no manual searching or copy-pasting track names. And, most importantly, there were no sketchy third-party migration tools involved.

    Go inside one interesting founder-led company each day to find out how its strategy works, and what risk factors it faces. Sign up for 1 Smart Business Story from Inc. on Beehiiv.

    Jason Aten

    Source link

  • 5 Best apps to use on ChatGPT right now

    NEWYou can now listen to Fox News articles!

    ChatGPT has quietly changed how it works. It is no longer limited to answering questions or writing text. With built-in apps, ChatGPT can now connect to real services you already use and help you get things done faster. Instead of bouncing between tabs and apps, you can stay in one conversation while ChatGPT builds playlists, designs graphics, plans trips or helps you make everyday decisions. It feels less like searching and more like having a digital assistant that understands what you want.

    That convenience comes with responsibility. When you connect apps, take a moment to review permissions and disconnect access you no longer need. Used the right way, ChatGPT apps save time without giving up control. Here are the five best ChatGPT apps and how to start using them today.

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter

    THIRD-PARTY BREACH EXPOSES CHATGPT ACCOUNT DETAILS

    ChatGPT apps now let users connect everyday services like music, travel and design tools directly inside one conversation, changing how people get things done. (Philip Dulian/picture alliance via Getty Images)

    How to start using apps inside ChatGPT

    If you have not used ChatGPT apps before, getting started is simple. You do not need any technical setup or extra downloads. Apps appear as tools you can enable inside a conversation.

    App availability and placement may vary by device, model and region.

    iPhone and Android

    • Open the ChatGPT app and sign in
    • Start a new chat
    • Tap the plus (+) or tools icon near the message box
    • Review the available tools or apps shown
    • Select the app you want to use
    • Follow the on-screen prompt to connect your account if needed
    • Start asking ChatGPT to use the app naturally

    Mac and PC

    • Open ChatGPT in your browser or desktop app and sign in
    • Start a new chat
    • Look for available tools or apps in the chat interface
    • Select the app you want to use
    • Follow the on-screen prompt to connect your account if required
    • Start asking ChatGPT to use the app

    Once an app is connected, you can speak naturally. For example, you can ask ChatGPT to create a playlist, design a graphic or help plan a trip.

    1. Apple Music

    Apple Music is now available as an app inside ChatGPT, and it changes how people discover and organize music. Instead of scrolling through endless playlists, you can ask ChatGPT to create one using natural language. For example, you can request a holiday mix without overplayed songs or ask it to find a track you only half remember. ChatGPT searches Apple Music and builds the playlist for you. This integration does not stream full songs inside ChatGPT. It helps Apple Music subscribers find music, create playlists and discover artists faster, then links back to Apple Music for listening. ChatGPT can also activate Apple Music automatically based on your request, so you do not always need to select the app first.

    Note: Apple Music requires an active subscription.

    Why it stands out
    It turns music discovery into a simple conversation instead of a search.

    2. Canva

    Canva’s ChatGPT app helps you turn ideas into visuals fast. You can describe what you want in plain language, and ChatGPT helps generate layouts, captions and design ideas that open directly in Canva. This works well for featured images, social posts and simple marketing graphics.

    Why it stands out: You move from idea to design without starting from scratch.

    3. Expedia

    The Expedia app inside ChatGPT simplifies travel planning. You can ask for flight options, hotel ideas and destination tips in one conversation. ChatGPT also explains tradeoffs so you understand why one option may be better than another.

    Why it stands out: It turns scattered travel research into a clear plan.

    4. TripAdvisor

    TripAdvisor inside ChatGPT helps you plan trips with real traveler insight, not just search results. You can ask for hotel recommendations, top attractions and things to do based on your travel style. ChatGPT pulls in reviews and rankings, then helps narrow choices so you are not overwhelmed. It works especially well when you want honest opinions before booking or planning a full itinerary.

    Why it stands out: It combines real reviews with conversational guidance to make travel decisions easier.

    5. OpenTable

    OpenTable inside ChatGPT removes the guesswork from dining decisions. You can ask for restaurant recommendations by location, cuisine and vibe. From there, ChatGPT helps narrow choices and links you directly to reservations through OpenTable.

    Why it stands out: It saves time when choosing where to eat, especially on busy nights.

    How to disconnect apps from ChatGPT

    If you no longer use an app connected to ChatGPT, you can disconnect it at any time. Removing access helps limit data sharing and keeps your account more secure.

    iPhone and Android

    • Open the ChatGPT app and sign in
    • Tap the menu icon
    • Click your profile icon
    • Scroll down and tap Apps (It might say Connected appsTools or Integrations)
    • Select the app you want to remove
    • Tap Disconnect or Remove access
    • Confirm your choice if asked to

    Mac and PC

    • Open ChatGPT in your browser or desktop app and sign in
    • Click your profile icon
    • Select Settings
    • Open Apps (It might say Connected appsTools or Integrations)
    • Choose the app you want to disconnect from
    • Click Disconnect or Remove access
    • Confirm the change if asked to

    Once disconnected, ChatGPT will no longer access that app. You can reconnect later if you decide to use it again.

    MALICIOUS BROWSER EXTENSIONS HIT 4.3M USERS

    ChatGPT and Canva logos.

    Built-in apps turn ChatGPT from a chatbot into a digital assistant that can plan trips, build playlists and help make decisions faster. (Nikolas Kokovlis/NurPhoto via Getty Images)

    A quick privacy checklist for ChatGPT users

    Using apps inside ChatGPT is convenient, but it is smart to review your settings from time to time. This checklist helps you reduce risk while still enjoying the features. 

    1) Review connected apps regularly and remove ones you no longer use

    Connected apps can access limited account data while active. If you stop using an app, disconnect it. Fewer connections reduce your overall exposure and make account reviews easier.

    2) Only connect apps you trust and recognize

    Stick to well-known apps from established companies. If an app name looks unfamiliar or feels rushed, skip it. When in doubt, research the app before connecting it to your ChatGPT account. 

    3) Check account permissions after major app updates

    Apps can change how they work after updates. Take a moment to review permissions if an app adds new features or requests additional access. This habit helps you spot changes early.

    4) Avoid sharing sensitive personal or financial details in chats

    Even trusted tools do not need your Social Security number, bank details or passwords. Keep chats focused on tasks and ideas. Treat ChatGPT like a public workspace, not a private vault. 

    5) Use a strong, unique password for your ChatGPT account

    Your ChatGPT password should not match any other account. A password manager can help generate and store strong credentials. This step alone blocks many common attacks. Consider using a password manager, which securely stores and generates complex passwords, reducing the risk of password reuse.

    Next, see if your email has been exposed in past breaches. Our #1 password manager (see Cyberguy.com) pick includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.

    Check out the best expert-reviewed password managers of 2025 at Cyberguy.com.

    6) Turn on two-factor verification if available

    Two-factor verification (2FA) adds a second layer of protection. Even if someone gets your password, they still cannot access your account without the extra code. Enable it whenever possible.

    REAL APPLE SUPPORT EMAILS USED IN NEW PHISHING SCAM 

    ChatGPT logo in front of "AI."

    Connecting apps inside ChatGPT saves time, but users should regularly review permissions and disconnect tools they no longer need. (Jakub Porzycki/NurPhoto via Getty Images)

    7) Use strong antivirus software on all your devices

    Antivirus software protects against malicious links, fake downloads and harmful browser extensions. Keep it updated and allow real-time protection to run in the background. Strong antivirus software helps protect against fake ChatGPT links, lookalike apps and malicious extensions designed to steal login details. Choose a trusted provider and keep automatic updates turned on.

    The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

    Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com

    8) Watch for fake ChatGPT links and scam downloads

    Scammers often create fake ChatGPT downloads and lookalike offers. Always access ChatGPT through its official app or website. Never enter your login details through links sent by email or text.

    Take my quiz: How safe is your online security?

    Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com     

    CLICK HERE TO DOWNLOAD THE FOX NEWS APP

    Kurt’s key takeaways

    ChatGPT is becoming a central hub for everyday tasks. With apps like Apple Music, Canva, Expedia, TripAdvisor and OpenTable, you can plan, create and decide without jumping between multiple platforms. That shift saves time and cuts down on friction. It also makes technology feel more helpful and less overwhelming. The best ChatGPT apps solve real problems, from discovering music to planning trips and choosing where to eat. As more apps roll out, ChatGPT will feel less like a chatbot and more like a true digital assistant. Just remember to stay smart, review connected apps and watch for scams.

    If ChatGPT could replace three apps you use every day, which ones would you choose and why? Let us know by writing to us at Cyberguy.com

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter 

    Copyright 2025 CyberGuy.com.  All rights reserved.

    Source link

  • ChatGPT’s GPT-5.2 is here, and it feels rushed

    NEWYou can now listen to Fox News articles!

    OpenAI, the company behind ChatGPT, has moved at an unusually fast pace in 2025. According to the company, it launched GPT-5 in August, followed by GPT-5.1 in November. Now, just weeks later, GPT-5.2 has launched with familiar claims of being the smartest and most capable ChatGPT yet.

    At first glance, the rapid rollout might seem surprising. But there’s context behind it. OpenAI CEO Sam Altman has reportedly called a “code red” inside the company, urging teams to move faster on improving ChatGPT. That push comes as competition heats up. Google recently released Gemini 3, which reportedly outperformed ChatGPT on several artificial intelligence benchmarks and delivered stronger image generation. At the same time, Anthropic’s Claude continues to advance quickly.

    Against that backdrop, GPT-5.2 feels less like a routine upgrade and more like a strategic response. So what actually changed in GPT-5.2, and why does OpenAI say it matters?

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

    AMAZON ADDS CONTROVERSIAL AI FACIAL RECOGNITION TO RING

    OpenAI CEO Sam Altman looks on as he takes a lunch break, during the Federal Reserve’s Integrated Review of the Capital Framework for Large Banks Conference in Washington, D.C., July 22, 2025. (REUTERS/Ken Cedeno)

    What exactly is GPT-5.2

    GPT-5.2 is the newest version in OpenAI’s flagship 5-series of large language models. Like its predecessor, it includes two default variants. GPT-5.2 Instant is designed for everyday chatting and web searches. GPT-5.2 Thinking is meant for more complex tasks like long reasoning chains and multi-step problem solving. These two models are now the default for all ChatGPT users, including free users. They replace GPT-5.1 Instant and Thinking entirely. If you are using ChatGPT today, you are already using GPT-5.2, whether you realize it or not.

    What OpenAI says GPT-5 brings to ChatGPT

    At the same time, OpenAI continues to position GPT-5 as “expert intelligence for everyone.” The company says GPT-5 delivers stronger performance across math, science, finance, law and other complex subjects. In OpenAI’s view, ChatGPT now acts more like a team of on-demand experts than a basic chatbot. To support that claim, OpenAI points to practical examples. These include better coding help, more expressive writing support, clearer health-related explanations and improved safety and accuracy. The company showcases use cases such as generating app code, writing speeches, explaining medications and correcting mistakes in user-submitted images. In theory, GPT-5.2 builds on that same foundation. However, while OpenAI emphasizes deeper thinking and more reliable answers, those gains remain subtle for many everyday users.

    What new features does GPT-5.2 add?

    Here’s the short answer. None. GPT-5.2 does not introduce new tools, interfaces, or headline features. Instead, OpenAI describes a series of behind-the-scenes improvements that supposedly make ChatGPT faster, smarter and more capable. According to OpenAI, GPT-5.2 performs better at:

    • Building presentations
    • Completing complex projects
    • Creating spreadsheets
    • Understanding long context windows
    • Interpreting images
    • Using tools more effectively
    ChatGPT app

    Kurt Knutsson reviews the new features in ChatGPT-5.2. (Kurt “CyberGuy” Knutsson)

    OpenAI also released new benchmarks showing GPT-5.2 outperforming GPT-5.1 and competing models by small margins. However, big numbers on charts do not always translate into noticeable improvements for real users.

    NEW US MILITARY GENAI TOOL ‘CRITICAL FIRST STEP’ IN FUTURE OF WARFARE, SAYS EXPERT

    Why testing chatbot improvements is tricky

    Evaluating chatbot upgrades is harder than it sounds. Responses can vary widely even when prompts stay the same. A model might excel at one task and struggle with a nearly identical one just moments later. On top of that, OpenAI’s 5-series models already perform at or near the top of the field. When performance starts that high, meaningful gains become harder to detect. With that in mind, we tested GPT-5.2, and in most tests, it behaved almost identically to GPT-5.1.

    Why benchmarks don’t tell the full story

    OpenAI’s benchmarks show modest gains for GPT-5.2. That matters for researchers and developers working at scale. Still, even advanced users may struggle to see practical benefits. Other companies have delivered clearer upgrades. Google’s Gemini Nano Banana Pro model shows obvious gains in AI image generation and editing. Those improvements are easy for anyone to test and verify. By contrast, GPT-5.2’s changes feel abstract. They exist mostly on paper rather than in daily use.    

    What this means to you

    If you pay for ChatGPT, there’s little downside to using GPT-5.2. It replaces GPT-5.1 in the model lineup and generally performs at least as well in everyday use. Free users don’t have much choice either, as model access is handled automatically. For most people, the experience feels familiar and stable.

    The picture shifts slightly for programmers and those who use it for business. Early pricing details suggest GPT-5.2 may cost roughly 40 percent more per million tokens than GPT-5.1, depending on usage tier and access method. That makes testing important before committing at scale.

    Woman on smartphone in Italy

    ChatGPT-5.2 works fine but may not feel exciting, Kurt Knutsson writes. (Michael Nguyen/NurPhoto via Getty Images)

    In short, GPT-5.2 works fine. It simply may not feel exciting.

    KEVIN O’LEARY WARNS CHINA ‘KICKING OUR HEINIES’ IN AI RACE AS REGULATORY ROADBLOCKS STALL US

    Take my quiz: How safe is your online security?

    Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.

    Kurt’s key takeaways

    GPT-5.2 feels like a model released under pressure rather than inspiration. It performs well, stays reliable, and moves forward in measurable ways. Still, it doesn’t deliver the kind of clear progress many people expect from a new version number. OpenAI remains a leader in AI, but competition is closing in fast. As rivals roll out more noticeable improvements, small updates may no longer be enough to stand out. For now, GPT-5.2 feels less like a breakthrough and more like OpenAI holding its ground.

    CLICK HERE TO DOWNLOAD THE FOX NEWS APP

    Should AI companies slow down releases until improvements feel more meaningful? Let us know by writing to us at Cyberguy.com.

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

    Copyright 2025 CyberGuy.com. All rights reserved.

    Source link

  • Opiate for the Masses: You Can Now Tell Your OpenAI ‘ChatBuddy’ To Be Less Mean To You

    “My AI was mean to me.”

    I’m hearing plenty of statements like that these days, from people smart enough to know that their AI ChatBuddy (my term) doesn’t actually have a personality or a will.

    I write about AI a lot. I get a lot of comments on those posts. I talk to business people and regular people about implementing AI, and – I think because of my long stretch of experience with the science and my nuanced approach to how AI should be implemented – people feel like they can trust me with thoughts on AI they might not tell anyone else.

    What I hear often, far too often, is how their AI is more than just an interface to some data. Their ChatBuddy snarked back at them, or it said something cute. It made them feel better about themselves. Or worse. 

    Look, I’m all for fun, and I’m down with getting your spark however you want to strike the flint. I don’t, in any way, blame AI users for falling into this trap. 

    Because it is indeed a trap. It’s on purpose. 

    The makers of these AI ChatBuddy models are building these emotional attachment hooks into the product. Then they tweak those hooks when the public gives feedback like “It’s too nice,” “It’s not nice enough,” “It agrees with me too much,” “It disagrees with me too much.”

    Go inside one interesting founder-led company each day to find out how its strategy works, and what risk factors it faces. Sign up for 1 Smart Business Story from Inc. on Beehiiv.

    Joe Procopio

    Source link