ReportWire

Tag: Microsoft

  • Microsoft says outage affecting Microsoft 365, Outlook, other services has been resolved

    Thousands of Microsoft customers reported difficulty Thursday accessing the technology company’s suite of Microsoft 365 services, including the email platform Outlook, Teams and other tools. But the company said on social media early Friday: “We’ve confirmed that impact has been resolved.”

    Users started reporting problems accessing Microsoft applications on Thursday afternoon, according to Downdetector, a site tracking website outages. Complaints spiked at around 3 p.m. ET, when 16,000 people said they were having trouble accessing Microsoft 365. 

    Microsoft acknowledged the problem, stating on its website that “users may be seeing degraded service functionality or be unable to access multiple Microsoft 365 services.”

    At 4:14 p.m. ET, Microsoft posted on X that it had “restored the affected infrastructure to a healthy state.” In a later post, however, the company said it was still “rebalancing traffic across all affected infrastructure to ensure the environment enters into a balanced state.”

    As of late Thursday afternoon, some social media users were still complaining that they were unable to access Microsoft 365 tools. “We cannot even email. This is not fixed,” one person said on X.

    Other users called on Microsoft to compensate customers for the outage, which they blamed for hampering their work.

    In a statement Thursday night, a Microsoft spokesperson told CBS News: “We are working to address a service functionality issue. A subset of customers may be intermittently impacted. For more information, please see updates via Microsoft 365 Status on X.”  

    Verizon last week offered affected customers a $20 credit after a major service outage limited subscribers’ ability to use their wireless devices.

    In 2024, a botched update to CrowdStrike security software crashed Windows machines around the world, causing global outages for Microsoft users. The disruptions led to thousands of flight delays and cancellations, while hospitals, banks and other businesses around the world were also affected.

    Source link

  • Microsoft issues emergency fix after a security update left some Windows 11 devices unable to shut down

    If you weren’t able to shut down your Windows 11 device recently, Microsoft has rolled out an emergency fix for a pair of critical bugs introduced by its January 2026 Windows security update. The “out-of-band” update repairs an issue that left some Windows 11 devices restarting whenever users tried to shut down or hibernate. The same update restores the ability for Windows 10 and Windows 11 users to log into their devices via remote connection apps.

    Microsoft said the inability to shut down or hibernate affected Windows 11 devices using Secure Launch, a security feature that protects a computer from firmware-level attacks during startup. As for the remote connection issue, Microsoft explained on its Known Issues page that credential prompt failures were responsible when users tried to log in remotely to affected Windows 10 and 11 devices.

    According to WindowsLatest, some lingering issues with the January 2026 Windows security update are still affecting users, like seeing blank screens or Outlook Classic crashing. Back in October, Microsoft had to issue another emergency fix for Windows 11 related to the Windows Recovery Environment. For those still hesitant to upgrade to Windows 11, Microsoft is allowing you to squeeze some more life out of Windows 10 by enrolling in Extended Security Updates.

    Jackson Chen

    Source link

  • WhatsApp Web malware spreads banking trojan automatically


    A new malware campaign is turning WhatsApp Web into a weapon. Security researchers say a banking Trojan linked to Astaroth is now spreading automatically through chat messages, making the attack harder to stop once it starts. 

    The campaign is known as Boto Cor-de-Rosa. It shows how cybercriminals keep evolving, especially when they can abuse tools people trust every day. This attack focuses on Windows users and uses WhatsApp Web as both the delivery system and the engine that spreads the infection further.

    Sign up for my FREE CyberGuy Report

    Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.


    Attackers abuse WhatsApp Web to spread malware through messages that appear to come from people you trust. (Kurt “CyberGuy” Knutsson)

    How this WhatsApp Web attack works

    The attack starts with a simple message. A contact sends what looks like a routine ZIP file through WhatsApp. The file name appears random and harmless, which lowers suspicion. Once opened, the ZIP contains a Visual Basic script disguised as a normal document. If the user runs it, the script quietly pulls in two more pieces of malware. Then the script downloads the Astaroth banking malware written in Delphi. It also installs a Python-based module designed to control WhatsApp Web. Both components run in the background without obvious warning signs. From there, the infection becomes self-sustaining.

    Malware that spreads itself through your contacts

    What makes this campaign especially dangerous is how it propagates. The Python module scans the victim’s WhatsApp contacts and sends the malicious ZIP file to every conversation automatically. Researchers at Acronis found that the malware adapts its messages based on the time of day. It sends friendly greetings, making the message feel normal and familiar. The text reads, “Here is the requested file. If you have any questions, I’m available!” Because the message appears to come from someone you know, many people open it without hesitation.



    A single ZIP file sent through chat can quietly install banking malware and begin spreading to every contact. (Kurt “CyberGuy” Knutsson)

    Built-in tracking keeps the attack efficient

    This malware is carefully designed to monitor its own performance in real time. The propagation tool tracks how many messages are successfully delivered, how many fail to send, and the overall sending speed measured per minute. After every 50 messages, it generates progress updates that show how many contacts have been reached. This feedback allows attackers to measure success quickly and make adjustments if something stops working.

    What happens after infection

    The initial script is heavily obfuscated to avoid detection by antivirus tools. Once it runs, it launches PowerShell commands that download more malware from compromised websites. One known domain used in this campaign is coffe-estilo.com. The malware installs itself inside a folder that mimics a Microsoft Edge cache directory. Inside are executable files and libraries that make up the full Astaroth banking payload. From there, the malware can steal credentials, monitor activity and potentially access financial accounts.
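
    For defenders, the folder-mimicry trick described above is something you can hunt for with very little code. The sketch below is a minimal, hypothetical example, assuming that executable file types (.exe, .dll, .vbs and the like) have no business sitting inside a browser cache folder; the folder and file names are illustrative inventions, not indicators published by the researchers:

```python
import os
import tempfile

# File types that have no business inside a browser cache folder.
SUSPICIOUS_EXTENSIONS = {".exe", ".dll", ".vbs", ".ps1"}

def find_suspicious_files(folder):
    """Return paths under `folder` whose extension suggests executable code."""
    hits = []
    for root, _dirs, files in os.walk(folder):
        for name in files:
            ext = os.path.splitext(name)[1].lower()
            if ext in SUSPICIOUS_EXTENSIONS:
                hits.append(os.path.join(root, name))
    return sorted(hits)

# Demo on a throwaway folder standing in for a fake "Edge cache" directory.
demo = tempfile.mkdtemp(prefix="edge_cache_demo_")
open(os.path.join(demo, "data_0"), "w").close()       # looks like a normal cache blob
open(os.path.join(demo, "updater.exe"), "w").close()  # the kind of file the malware drops
print(find_suspicious_files(demo))
```

    Real endpoint protection does far more than this, of course, but the sketch shows why the disguise fools people rather than tools: a cache folder full of executables is trivially anomalous once something bothers to look.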

    Why WhatsApp Web is being abused

    WhatsApp Web is popular because it mirrors your phone conversations on a computer. That convenience makes it easy to send messages, share files and type faster, but it also introduces risk. When you use WhatsApp Web, you link your phone to a browser by scanning a QR code at web.whatsapp.com. Once connected, that browser session becomes a trusted extension of your account. Your chats appear on the screen, messages you send come from your real number and incoming messages sync across both devices.

    That setup is exactly what attackers take advantage of. If malware gains access to a computer with WhatsApp Web logged in, it can act as the user. It can read messages, access contact lists and send files or links that look completely legitimate. The messages do not raise alarms because they are coming from a real account, not a fake one.

    This is what turns WhatsApp Web into an effective delivery system for malware. Instead of breaking into WhatsApp itself, attackers simply abuse an open browser session to spread malicious files automatically. Many users do not realize the danger because WhatsApp Web feels harmless. It is often left signed in on work computers, shared devices or systems without strong security. In those situations, malware does not need advanced tricks. It only needs access to an already trusted session. That combination of convenience and trust is why WhatsApp Web has become such an attractive target.


    Once WhatsApp Web is compromised, malware can act like the user, sending messages and files that look completely legitimate. (Kurt “CyberGuy” Knutsson)

    How to stay safe from WhatsApp Web malware

    Attacks like this WhatsApp Web malware are designed to spread fast through trusted conversations. A few smart habits can dramatically lower your risk.

    1) Be skeptical of unexpected attachments

    Messaging apps feel casual, which is exactly why attackers use them. Never open ZIP files sent through chat unless you confirm with the sender first. Watch for file names made of random numbers or unfamiliar names. Treat messages that create urgency or feel overly familiar as a warning sign. If a file arrives out of nowhere, pause before clicking.

    2) Lock down WhatsApp Web access

    This campaign abuses WhatsApp Web to spread automatically once a device is infected. Check active WhatsApp Web sessions and log out of any you do not recognize. Avoid leaving WhatsApp Web signed in on shared or public computers. Enable two-factor authentication (2FA) inside WhatsApp settings. Cutting off Web access helps limit how far malware can travel.

    3) Keep your Windows PC locked down and use strong antivirus software 

    This type of malware takes advantage of systems that fall behind on updates. Install Windows updates as soon as they are available. Also, keep your web browser fully updated. Staying current closes many of the doors attackers try to slip through. In addition, use strong antivirus software that watches for script abuse and PowerShell activity in real time.

    The best way to safeguard yourself from malicious links that install malware capable of accessing your private information is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

    Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.

    4) Limit how much of your personal data is online

    Banking malware often pairs with identity theft and financial fraud. One way to reduce the fallout is by shrinking your digital footprint. A data removal service can help remove your personal information from data broker sites that attackers often search. With less information available, criminals have fewer details to exploit if malware reaches your device.

    While no service can guarantee the complete removal of your data from the internet, a data removal service is really a smart choice. They aren’t cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.

    Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.


    5) Add identity theft protection for extra coverage

    Even with strong security habits, financial monitoring adds another layer of protection. An identity theft protection service can watch for suspicious activity tied to your credit and personal data. Identity theft companies can monitor personal information like your Social Security number (SSN), phone number, and email address, and alert you if it is being sold on the dark web or being used to open an account. They can also assist you in freezing your bank and credit card accounts to prevent further unauthorized use by criminals.

    You should also turn on alerts for bank and credit card transactions so you are notified quickly if something looks wrong. The less exposed your data is, the fewer opportunities attackers have to cause damage.

    See my tips and best picks on how to protect yourself from identity theft at Cyberguy.com.

    6) Slow down and trust your instincts

    Most malware infections happen because people act too quickly. If a message feels off, trust that instinct. Familiar names and friendly language can lower your guard, but they should never replace caution. Take a moment to verify the message or file before opening anything. Attackers rely on trust and urgency to succeed. Slowing down takes away their advantage.

    Kurt’s key takeaways

    This WhatsApp Web malware campaign is a reminder that cyberattacks no longer rely on obvious red flags. Instead, they blend into everyday conversations and use familiar tools to spread quietly and quickly. What makes this threat especially concerning is how little effort it takes for it to move from one device to dozens of others. A single click can turn a trusted chat into a delivery system for banking malware and identity theft. The good news is that small changes make a big difference. Paying attention to attachments, locking down WhatsApp Web access, keeping devices updated and slowing down before clicking can stop these attacks cold. As messaging platforms continue to play a bigger role in daily life, staying alert is no longer optional. Awareness and simple habits remain some of the strongest defenses you have.

    Do you think messaging apps are doing enough to protect users from malware that spreads through trusted conversations?  Let us know by writing to us at Cyberguy.com.



    Copyright 2026 CyberGuy.com.  All rights reserved.

    Source link

  • Anthropic taps former Microsoft India MD to lead Bengaluru expansion | TechCrunch

    Anthropic has appointed Irina Ghose, a former Microsoft India managing director, to lead its India business as the U.S. AI startup prepares to open an office in Bengaluru. The move underscores how India is becoming a key battleground for AI companies looking beyond the U.S. for major growth markets.

    Ghose brings deep big-tech operating experience to the role. She spent 24 years at Microsoft before stepping down in December 2025. Her appointment gives Anthropic a seasoned executive with local enterprise and government relationships as it gears up to establish an on-the-ground presence in one of the world’s fastest-growing AI markets.

    India has become one of Anthropic’s most strategically important markets, with the country already ranking as the second-largest user base for Claude and usage heavily skewing toward technical and work-related tasks, including software development. Arch-rival OpenAI is also sharpening its focus on the market with plans to open an office in New Delhi — a sign India is fast becoming one of the most contested arenas in the global race to commercialize generative AI.

    While India offers enormous scale — with more than a billion internet subscribers and over 700 million smartphone users — converting that reach into meaningful revenue has proven difficult, pushing AI companies to experiment with aggressive pricing and promotions. OpenAI last year introduced ChatGPT Go, its under-$5 plan aimed at attracting Indian users, and later made it available free for a year in the country.

    Similar dynamics are playing out for Anthropic: its Claude app recorded a 48% year-over-year increase in India downloads in September, reaching about 767,000 installs, while consumer spending surged 572% to $195,000 for the month, per Appfigures — still modest compared with the U.S., where September spending hit $2.5 million.

    Anthropic has been stepping up its engagement in India at the highest levels. Chief executive Dario Amodei visited in October and met corporate executives and lawmakers, including Prime Minister Narendra Modi, to discuss the company’s expansion plans and growing adoption of its tools. Anthropic had also explored a potential partnership with billionaire Mukesh Ambani’s Reliance Industries to broaden access to Claude, as TechCrunch reported previously. Reliance, however, ultimately struck a deal with Google to offer its Gemini AI Pro plan free to Jio subscribers. That move came as rival Bharti Airtel partnered with Perplexity to bundle access to its premium subscription, underscoring how India’s telecom giants have become critical distribution gatekeepers in the race to scale consumer AI services.

    In a LinkedIn post announcing the move, Ghose said she would focus on working with Indian enterprises, developers and startups adopting Claude for “mission-critical” use cases, pointing to growing demand for what she described as “high-trust, enterprise-grade AI.” She added that AI tailored to local languages could be a “force multiplier” across sectors including education and healthcare — signaling Anthropic’s intent to deepen adoption beyond early tech users into larger institutions and the public sector.


    The push by Anthropic, OpenAI, and Perplexity comes as India’s homegrown GenAI ecosystem remains relatively early-stage. While the country has a deep pool of software talent and a fast-growing base of AI users, it has produced few startups building large foundation models, with investors instead largely backing application-layer companies rather than committing the scale of capital typically required to train frontier systems.

    The appointment also comes ahead of India’s AI Impact Summit 2026 in February, where the Indian government is expected to bring together AI startups, global CEOs, and industry experts to discuss the next phase of AI deployment in the country. The summit is part of New Delhi’s broader effort to signal support for domestic AI development and position India as a serious player in the global AI landscape, as competition intensifies across major markets.

    Anthropic is also building out its India team, with job listings for roles including startup and enterprise account executives as well as a partner sales manager, signaling a push to deepen its go-to-market efforts and tap Indian businesses and startups as customers as it expands its presence in the country.

    For Anthropic, the hire adds senior local leadership as it looks to turn India’s surging usage into a durable business, navigating a market where distribution partnerships, pricing pressure, and enterprise adoption will shape which AI players emerge as long-term winners.

    Jagmeet Singh

    Source link

  • At 25, Wikipedia Navigates a Quarter-Life Crisis in the Age of A.I.

    Turning 25 amid an A.I. boom, Wikipedia is racing to protect traffic, volunteers and revenue without losing its mission. Photo illustration by Nikolas Kokovlis/NurPhoto via Getty Images

    Traffic to Wikipedia, the world’s largest online encyclopedia, naturally ebbs and flows with the rhythms of daily life—rising and falling with the school calendar, the news cycle or even the day of the week—making routine fluctuations unremarkable for a site that draws roughly 15 billion page views a month. But sustained declines tell a different story. Last October, the Wikimedia Foundation, the nonprofit that oversees Wikipedia, disclosed that human traffic to the site had fallen 8 percent in recent months as a growing number of users turned to A.I. search engines and chatbots for answers.

    “I don’t think that we’ve seen something like this happen in the last seven to eight years or so,” Marshall Miller, senior director of product at the Wikimedia Foundation, told Observer.

    Launched on Jan. 15, 2001, Wikipedia turns 25 today. This milestone comes at a pivotal point for the online encyclopedia, which is straddling a delicate line between fending off existential risks posed by A.I. and avoiding irrelevance as the technology transforms how people find and consume information.

    “It’s really this question of long-term sustainability,” Lane Becker, senior director of earned revenue at the Wikimedia Foundation, told Observer. “We’d like to make it at least another 25 years—and ideally much longer.”

    While it’s difficult to pinpoint Wikipedia’s recent traffic declines on any single factor, it’s evident that the drop coincides with the emergence of A.I. search features, according to Miller. Chatbots such as ChatGPT and Perplexity often cite and link to Wikipedia, but because the information is already embedded in the A.I.-generated response, users are less likely to click through to the source, depriving the site of page views.

    Yet the spread of A.I.-generated content also underscores Wikipedia’s central role in the online information ecosystem. Wikipedia’s vast archive—more than 65 million articles across over 300 languages—plays a prominent role within A.I. tools, with the site’s data scraped by nearly all large language models (LLMs). “Yes, there is a decline in traffic to our sites, but there may well be more people getting Wikipedia knowledge than ever because of how much it’s being distributed through those platforms that are upstream of us,” said Miller.

    Surviving in the era of A.I.

    Wikipedia must find a way to stay financially and editorially viable as the internet changes. Declining page views not only mean that fewer visitors are likely to donate to the platform, threatening its main source of revenue, but also risk shrinking the community of volunteer editors who sustain it. Fewer contributors would mean slower content growth, ultimately leaving less material for LLMs to draw from.

    Metrics that track volunteer participation have already begun to slip, according to Miller. While noting that “it’s hard to parse out all the different reasons that this happens,” he conceded that the Foundation has “reason to believe that declines in page views will lead to declines in volunteer activity.”

    To maintain a steady pipeline of contributors, users must first become aware of the platform and understand its collaborative model. That makes proper attribution by A.I. tools essential, Miller said. Beyond simply linking to Wikipedia, surfacing metadata—such as when a page was last updated or how many editors contributed—could spur curiosity and encourage users to engage more deeply with the platform.

    Tech companies are becoming aware of the value of keeping Wikipedia relevant. Over the past year, Microsoft, Mistral AI, Perplexity AI, Ecosia, Pleias and ProRata have joined Wikimedia Enterprise, a commercial product that allows corporations to pay for large-scale access and distribution of Wikipedia content. Google and Amazon have long been partners of the program, which launched in 2021.

    The basic premise is that Wikimedia Enterprise customers can access content from Wikipedia at a higher volume and speed while helping sustain the platform’s mission. “I think there’s a growing understanding on the part of these A.I. companies about the significance of the Wikipedia dataset, both as it currently exists and also its need to exist in the future,” said Becker.

    Wikipedia is hardly alone in this shift. News organizations, including CNN, the Associated Press and The New York Times, have struck licensing deals with A.I. companies to supply editorial content in exchange for payment, while infrastructure providers like Cloudflare offer tools that allow websites to charge A.I. crawlers for access. Last month, the licensing nonprofit Creative Commons announced its support of a “pay-to-crawl” approach for managing A.I. bots.

    Preparing for an uncertain future

    Wikipedia itself is also adapting to a younger generation of internet users. In an effort to make editing Wikipedia more appealing, the platform is working to enhance its mobile edit features, reflecting the fact that younger audiences are far more likely to engage on smartphones than desktop computers.

    Younger users’ preference for social video platforms such as YouTube and TikTok has also pushed Wikipedia’s Future Audiences team—a division tasked with expanding readership—to experiment with video. The effort has already paid off, producing viral clips on topics ranging from Wikipedia’s most hotly disputed edits to the courtship dance of the black-footed albatross and Sino-Roman relations. The organization is also exploring a deeper presence on gaming platforms, another major draw for younger users.

    Evolving with the times also means integrating A.I. further within the platform. Wikipedia has introduced features such as Edit Check, which offers real-time feedback on whether a proposed edit fits a page, and is developing features like Tone Check to help ensure articles adhere to a neutral point of view.

    A.I.-generated content has also begun to seep onto the platform. As of August 2024, roughly 5 percent of newly created English articles on the site were produced with the help of A.I., according to a Princeton study. Seeing this as a problem, Wikipedia introduced a “speedy deletion” policy that allows editors to quickly remove content that shows clear signs of being A.I.-generated. Still, the community remains divided over whether using A.I. for tasks such as drafting articles is inherently problematic, said Miller. “There’s this active debate.”

    From streamlining editing to distributing its content ever more widely, Wikipedia is betting that A.I. can ultimately be an ally rather than an adversary. If managed carefully, the technology could help accelerate the encyclopedia’s mission over the next 25 years—as long as it doesn’t bring down the encyclopedia first.

    “Our whole thing is knowledge dissemination to anyone that wants it, anywhere that they want it,” said Becker. “If this is how people are going to learn things—and people are learning things and gaining value from the information that our community is able to bring forward—we absolutely want to find a way to be there and support it in ways that align with our values.”


    Alexandra Tremayne-Pengelly

    Source link

  • Microsoft announces glut of new data centers but says it won’t let your electricity bill go up | TechCrunch

    Although public backlash against data centers has been intense over the past 12 months, all of the tech industry’s biggest companies have promised additional buildouts of AI infrastructure in the coming year. That includes OpenAI partner Microsoft, which on Tuesday announced what it calls a “community-first” approach to AI infrastructure.

    Microsoft’s announcement, which comes only a day after Mark Zuckerberg said that Meta would launch its own AI infrastructure program, isn’t unexpected. Last year, the company announced that it planned to spend billions to expand its AI capacity. What is a little unusual are the promises the company has now made about how it will handle that buildout.

    On Tuesday, Microsoft promised to take the “steps needed to be a good neighbor in the communities where we build, own, and operate our data centers.” That includes, according to the company, its plan to “pay its own way” to ensure that local electricity bills don’t go through the roof in the places where it builds. Specifically, the company says it will work with local utility companies to ensure that the rates it pays cover the full cost of its burden on the local grid.

    “We will work closely with utility companies that set electricity prices and state commissions that approve these prices,” Microsoft said. “Our goal is straightforward: to ensure that the electricity cost of serving our data centers is not passed on to residential customers.”

    The company has also promised to create jobs in the communities where it touches down, as well as to minimize the amount of water that its centers need to function. Water usage by data centers has obviously been a contentious topic, with data centers accused of creating substantial issues for local water supplies and spurring other environmental concerns. The jobs promise is also relevant, given lingering questions around the number of both short-term and permanent jobs that such projects typically create.

    It’s pretty clear why Microsoft feels it is necessary to make these promises right now. Data center construction has become a political flashpoint in recent years, generating intense backlash and protest from local communities. Data Center Watch, an organization that tracks anti-data center activism, has observed that there are as many as 142 different activist groups across 24 states currently organizing against such developments.

    This backlash has already impacted Microsoft directly. In October, the company abandoned plans for a new data center in Caledonia, Wisconsin, after “community feedback” was overwhelmingly negative. In Michigan, meanwhile, the company’s plans for a similar project in a small central township have recently inspired locals to take to the streets in protest. On Tuesday, around the same time Microsoft announced its “good neighbor” pledge, an op-ed in an Ohio newspaper (where Microsoft is currently developing several data center campuses) excoriated the company, blaming it and its peers for climate change.


    Concerns have extended even to the White House, where an AI buildout has become one of the major tenets of the Trump administration. On Monday, President Trump took to social media to promise that Microsoft specifically would make “major changes” to ensure that Americans’ electricity bills wouldn’t rise. Trump said the changes would “ensure that Americans don’t ‘pick up the tab’ for their power consumption.”

    In short, by now, Microsoft understands that it’s fighting a tide of negative public opinion. It remains to be seen whether the company’s new assurances of jobs, environmental stewardship, and low electricity bills will be enough to turn the tide.

    Lucas Ropek

    Source link

  • Trump Claims He and Microsoft Have a Solution for AI-Related Utility Price Spikes

    President Donald Trump did what he does on Monday evening and posted to his social media app, this time about how Microsoft isn’t going to cause our bills to spike by creating massive amounts of new energy demand with its AI projects.

    First of all, the president claims to “never want Americans to pay higher Electricity bills because of Data Centers,” which is a nice thought, although someone should tell him it looks like the thing he dreads has already happened. At any rate, what he’s teasing with Microsoft is, he claims, the first of multiple energy-related projects with big tech companies. To that end, he writes:

    “First up is Microsoft, who my team has been working with, and which will make major changes beginning this week to ensure that Americans don’t ‘pick up the tab’ for their POWER consumption, in the form of paying higher Utility bills. We are the ‘HOTTEST’ Country in the World, and Number One in AI. Data Centers are key to that boom, and keeping Americans FREE and SECURE but, the big Technology Companies who build them must ‘pay their own way.’ Thank you, and congratulations to Microsoft. More to come soon! President DJT”

    As Gizmodo wrote last summer, electricity demand from the massive data centers that are being used to train and run AI models has driven the average American’s power bill up, and the amount varies from place to place. On average, consumer energy bills had gone up about 6.5% in a year when that story emerged over the summer, but in, for instance, Maine, they had spiked by an astonishing 36.3%, and that’s reportedly due to the “AI tax.” Meanwhile, utility companies like Pacific Gas & Electric have reported record profits in recent years. Funny how that works.
     
    It’s truly anyone’s guess how Trump and Microsoft are going to fix this issue. Trump is making overtures toward ostensible economic populism lately—seemingly in the form of deals he can tout for a short-term win, like when he got Novo Nordisk to lower the price of Ozempic. Democrats on the House Ways and Means Committee followed that mysterious deal up with a letter to Novo Nordisk asking about what might have been included in the still-secret terms of that agreement—including some unsettling ambiguity about the future prices of other drugs. But who wants to hear about the puny Democrats’ dumb letter when President Deals successfully slashed the price of what he has nicknamed “the fat drug”?

    But keeping energy bills down is tricky for Microsoft to do since, unlike Novo Nordisk, Microsoft doesn’t actually set the price Trump is trying to keep down. One thing Trump could have demanded of Microsoft, then, is that Microsoft simply subsidize everyone’s energy bills. That would do the trick, but last I checked Microsoft wasn’t a charity.

    It was reported six days ago, however, that Microsoft is already working with the Midcontinent Independent System Operator (MISO) on a project aimed at modernizing the power grid with Microsoft’s technology. Reuters writes that Microsoft’s tech will help with “predicting and responding to weather-related power grid disruptions, transmission line planning, and accelerating certain operations.”

    This doesn’t sound like a slam dunk for bringing down energy costs dramatically, but it’s easy to imagine broader grid modernization at least dispersing the price spikes more evenly, or even helping to integrate unused renewable energy and ease the famous bottlenecks caused by the outdated energy grid. But is this, or something like it, what Trump is referring to? For his own sake I hope not, because it sounds like the type of confusing and convoluted plan more typically associated with flailing Democrats, not with Mr. Cheap Ozempic.

    Gizmodo reached out to Microsoft and the White House for further details about this plan. We will update if we hear back. 

    Mike Pearl

    Source link

  • Fiserv collaborating with Microsoft to embed AI

    Tech provider Fiserv announced today it is collaborating with Microsoft to accelerate AI-driven innovation by embedding AI more deeply across its technology platforms and workforce, a move aimed at boosting productivity and delivering new capabilities to clients. The payments and financial services technology company will deploy Microsoft 365 Copilot across its global workforce, giving employees […]

    FinAi News, AI-assisted

    Source link

  • Microsoft’s Nadella wants us to stop thinking of AI as ‘slop’ | TechCrunch

    A couple of weeks after Merriam-Webster named “slop” as its word of the year, Microsoft CEO Satya Nadella weighed in on what to expect from AI in 2026.

    In his classic, intellectual style, Nadella wrote on his personal blog that he wants us to stop thinking of AI as “slop” and start thinking of it as “bicycles for the mind.”

    He wrote, “A new concept that evolves ‘bicycles for the mind’ such that we always think of AI as a scaffolding for human potential vs a substitute.”

    He continued: “We need to get beyond the arguments of slop vs sophistication and develop a new equilibrium in terms of our ‘theory of the mind’ that accounts for humans being equipped with these new cognitive amplifier tools as we relate to each other.”

    If you parse through those syllables, you may see that he’s not only urging everyone to stop thinking of AI-generated content as slop, but also wants the tech industry to stop talking about AI as a replacement for humans. He hopes the industry will start talking about it as a human-helper productivity tool instead.

    Here’s the problem with that framing, though: Much of AI agent marketing uses the idea of replacing human labor as a way to price it, and justify its expense.

    Meanwhile, some of the biggest names in AI have been sounding the alarm that the tech will soon cause very high levels of human unemployment. For instance, in May Anthropic CEO Dario Amodei warned that AI could take away half of all entry-level white-collar jobs, raising unemployment to 10-20% over the next five years, and he doubled down on that last month in an interview on 60 Minutes.


    Yet we currently don’t know how true such doomsday stats are. As Nadella implies, most AI tools today don’t replace workers, they are used by them (as long as the human doesn’t mind checking the AI’s work for accuracy).

    One oft-cited research study is MIT’s ongoing Project Iceberg, which seeks to measure the economic impact on jobs as AI enters the workforce. Project Iceberg estimates that AI is currently capable of performing about 11.7% of human paid labor.

    While this has been widely reported as AI being capable of replacing nearly 12% of jobs, the Project says what it’s actually estimating is how much of a job can be offloaded to AI. It then calculates wages attached to that offloaded work. Interestingly, the tasks it cites as examples include automated paperwork for nurses and AI-written computer code.

    That’s not to say there are no jobs being heavily impacted by AI. Corporate graphic artists and marketing bloggers are two examples, according to a Substack called Blood in the Machine. Then there are the high unemployment rates among new-grad junior coders.

    But it’s also true that highly skilled artists, writers, and programmers produce better work with AI tools than those without the skills. AI can’t replace human creativity, yet.

    So it’s perhaps no surprise that as we slide into 2026, some data is emerging that shows the jobs where AI has made the most progress are actually flourishing. Vanguard’s 2026 economic forecast report found that “the approximately 100 occupations most exposed to AI automation are actually outperforming the rest of the labor market in terms of job growth and real wage increases.”

    The Vanguard report concludes that those who are masterfully using AI are making themselves more valuable, not replaceable.

    The irony is that Microsoft’s own actions last year helped give rise to the AI-is-coming-for-our-jobs narrative. The company laid off over 15,000 people in 2025, even as it posted record revenue and profit for its fiscal year that closed in June, citing success with AI as a reason. Nadella even wrote a public memo about the layoffs after these results.

    Notably, he didn’t say that internal AI efficiency led to cuts. But he did say that Microsoft had to “reimagine our mission for a new era” and named “AI transformation” as one of the company’s three business objectives in this era (the other two being security and quality).

    The truth about job loss attributed to AI during 2025 is more nuanced. As the Vanguard report points out, this had less to do with internal AI efficiency and more to do with ordinary business practices that are less exciting to investors, like ending investment in slowing areas to pile in to growing ones.

    To be fair, Microsoft wasn’t alone in laying off workers while pursuing AI. The technology was said to be responsible for almost 55,000 layoffs in the U.S. in 2025, according to research from firm Challenger, Gray & Christmas, CNBC reported. That report cited the large cuts last year at Amazon, Salesforce, Microsoft, and other tech companies chasing AI.

    And to be fair to slop, those of us who spend more time than we should on social media laughing at memes and AI-generated short-form videos might argue that slop is one of AI’s most entertaining (if not best) uses, too.

    Julie Bort

    Source link

  • Ex-Elder Scrolls Online Boss Explains Why He Left Microsoft

    Matt Firor founded Zenimax Online in 2007 and helped Bethesda shepherd The Elder Scrolls Online into one of the premier MMORPGs out there. He left abruptly last summer when Microsoft laid off dozens at the studio amid a wider bloodbath at the company and canceled the upcoming online multiplayer game he and others were working on, Project Blackbird. He recently broke his silence on the reason for his departure, confirming what fans had long suspected.

    “Project Blackbird was the game I had waited my entire career to create, and having it canceled led to my resignation,” Firor wrote in a January 1 post on LinkedIn which he later shared on Bluesky. “My heart and thoughts are always with the impacted team members, many of whom I had worked 20+ years with, and all of whom were the most dedicated, amazingly talented group of developers in the industry.”

    The post never mentions Microsoft directly and doesn’t take aim at the decision makers at Xbox, which acquired Zenimax Online in 2021, but the move clearly left a very bad taste in many veteran developers’ mouths. Some of Firor’s former colleagues went on to form Sackbird Studios to work on their own multiplayer game. “With internal funding and full creative control, the studio is focused on crafting bold, character-driven experiences free from corporate compromises,” they wrote last year.

    Bloomberg reported that Blackbird was an ambitious loot shooter that mixed elements of Destiny and Blade Runner with the structure and questlines of an MMORPG. Microsoft Gaming CEO Phil Spencer reportedly played a build of the game last March and loved what he saw, making its eventual cancelation even more perplexing. The layoffs and cuts came as Xbox game studios were reportedly tasked with achieving a controversial 30-percent profit margin despite being forced to make all of their games playable for free on Game Pass.

    Ethan Gach

    Source link

  • Microsoft CEO Says People Need To ‘Get Beyond’ Calling It AI Slop

    Gen AI isn’t going anywhere. In fact, it’s just getting started. That’s what Microsoft CEO Satya Nadella believes. He’d also like you to stop calling everything generated by large language models “slop.” People need to “get beyond” that in 2026, the executive, who earned $79 million in compensation, recently announced.

    These and other musings were shared on his personal “sn scratchpad” blog in a December 29 post titled “Looking ahead to 2026.” It acknowledges the challenges people face in figuring out how to actually use gen AI to do useful things, but remains bullish that the underlying technology will keep improving at a rate worth investing $100 billion in. Here’s the part that caught everyone’s attention, though:

    We need to get beyond the arguments of slop vs sophistication and develop a new equilibrium in terms of our ‘theory of the mind’ that accounts for humans being equipped with these new cognitive amplifier tools as we relate to each other. This is the product design question we need to debate and answer.

    I’m not sure what exactly Nadella means by “a new equilibrium in terms of our ‘theory of the mind,’” and perhaps he doesn’t either. Given the executive’s penchant for folding AI use into every facet of his work life, it’s not only possible but actually very likely that some version of Copilot, perhaps even a “rich” multi-agent “scaffold,” was the “bicycle for the mind” that helped him conjure this blog post in the first place. If true, that would be as good a reason as any not to move so quickly beyond the “slop vs. sophistication” distinction.

    But there’s no doubt a deeper philosophical schism that separates people who deploy the term “slop” against AI from those whose future is inextricably bound up in gen AI services becoming widely and eagerly adopted. For Nadella, the difference is one of apparent quality. Like the Turing test, which ascribes human-like intelligence to anything that can convincingly manipulate language well enough to trick humans, some think it’s only slop if the slop can’t convince them otherwise. For others, it’s slop, no matter how sophisticated it may be, the second human creativity is interrupted, or even supplemented, by a plagiarism machine.

    In the world of cloud computing and stock prices, where everything is interchangeable and fungible, such distinctions are meaningless, or at the very least, don’t drive quarterly profit growth. But in the world of people, slop in is slop out. Being content to eat from the trough does not transform it into a Michelin Star meal.

    Ethan Gach

    Source link

  • Michael Burry’s Big Bets Still Move Markets—Even When He’s Wrong

    Even when his calls miss, Michael Burry’s reputation keeps Wall Street watching his every move. Astrid Stawiarz/Getty Images

    Michael Burry earned a whopping $800 million by shorting the U.S. housing market ahead of the 2008 financial crisis. Whether the famed investor has made comparable money since then is far less clear. Still, his reputation endures. Investors continue to closely track his high-profile bets, hoping to ride his coattails to similar gains.

    Burry ran the hedge fund Scion Asset Management and now publishes commentary through a weekly newsletter, though he discloses little about performance. He has also repeatedly deleted and reactivated his X account over the years, but remains active on the platform, where he has roughly 1.6 million followers and frequently posts cryptic market takes.

    His celebrity status was cemented by the 2015 film The Big Short, which turned Burry into a household name. That visibility has granted him a level of credibility few investors retain for so long, even when their predictions miss the mark.

    “People like superstars, and they love to listen to folks who they think are smart and successful,” Tom Sosnoff, founder of investment media network Tastylive, told Observer. “He is a personality and a contrarian. He is interesting and pretty famous in the world of finance. Love him or not, people listen to him.”

    While Burry’s early success is well documented, his performance since then is harder to evaluate. As a hedge fund manager, he is only required to disclose limited information through quarterly filings such as 13Fs, which reveal long equity positions but not short positions, derivatives or overall performance. As a result, the full picture of his gains and losses remains largely opaque.

    There have been claims that Burry has made more than $1 billion in total trading profits, but those figures have never been independently verified, and his fund has never been publicly audited.

    Nvidia and Palantir in the crosshairs

    Despite the uncertainty around his track record, Burry’s words still move markets. His recent bearish bets against Nvidia and Palantir have drawn particular attention, with Burry arguing that both sit at the center of an A.I.-driven market bubble.

    On Nov. 3, regulatory filings revealed that Scion had placed roughly $1.1 billion in bearish options positions tied to those companies. The structure of the trade—largely long-dated put options—gives him time for the thesis to play out rather than requiring an immediate downturn.

    “His timing was very good,” said Sosnoff. “He pretty much got short Nvidia near the top (around $200), and it’s now down 10 percent to 15 percent. It’s a good call.”

    Palantir, which represents Burry’s largest short at roughly $912 million, has not fallen as sharply. The stock is down about 7.8 percent from its Nov. 3 level. Still, because the position is structured with options expiring in 2027, some analysts say it’s far too early to judge.

    “His logic is extremely good, and he has over a year to be right,” David Trainer, CEO of A.I.-driven investment research firm New Constructs, told Observer.

    Trainer, a former hedge fund manager, also backed Burry’s broader critique of A.I. hyperscalers, arguing that companies such as Oracle and Microsoft are using aggressive accounting practices, particularly around GPU depreciation, to flatter earnings.

    “These companies are definitely using questionable billing and receivables to make their earnings look better,” said Trainer. “I can’t say if Burry has been right or wrong in previous trades, but I think he has made some money. This time [with the A.I. Bubble], he seems right.”

    The cult of the contrarian

    Not everyone is convinced. Matthew Tuttle, CEO of Tuttle Capital Management and a frequent contrarian himself, said Burry’s post-2008 track record is far less impressive than his reputation suggests.

    “When you look at the calls Burry has made since 2008, they have not been good,” he told Observer. “He has said ‘this is going to crash and that is going to crash’ many times since, and he hasn’t been right.”

    Still, big bearish bets tend to attract attention precisely because they go against the grain.

    “Any time someone makes a major down call, there’s a fascination with it, as long [bullish] calls are always okay because the market always goes up,” said Tuttle.

    That dynamic helps explain why hedge fund stars can remain influential long after their best trades are behind them.

    “If I’m the main character in a movie and in a book like Burry and have been right in a big way, that buys me a lot of getting things wrong,” added Tuttle.

    The same dynamic applies to other market personalities such as Robert Kiyosaki, Peter Schiff and CNBC’s Jim Cramer, whose reputations often outlast their accuracy.

    “Robert Kiyosaki is constantly calling a bear market, and he is wrong, and Peter Schiff has been calling gold up for a long time,” said Tuttle. In Schiff’s case, it eventually worked—but more because of timing and luck than brilliance.

    “When you say gold is going to go up every year, and one year it does well, does that make you a genius? I would argue it doesn’t,” he added.

    Fame as financial fuel

    Wall Street is full of one-hit wonders whose early success grants them enduring influence.

    “Most of the time, they don’t risk their money,” said Sosnoff. “If they have one big win one year, they’re set. Their reputation is made.”

    John Paulson, who famously made $15 billion betting against subprime mortgages, fits that mold, as do figures like Ralph Acampora, who called the 1990s bull market, and Paul Tudor Jones, who predicted the 1987 crash.

    Other famous short sellers have stumbled. Jim Chanos, known for shorting Enron, closed his Kynikos fund in late 2023 after his Tesla bet went wrong. Bill Ackman lost roughly $1 billion betting against Herbalife in 2018, despite previously scoring a massive win betting against mortgage insurers during the financial crisis.

    Ultimately, fame often matters more than accuracy.

    “We live in a world where celebrities (movie, social media) have megaphones, and Michael is a celebrity because of the movie,” NYU Stern professor Aswath Damodaran told Observer. “Put simply, I will wager that most people who follow his advice (good or bad) are doing so because they liked the movie, think he is Christian Bale or like Batman, rather than because they read his treatises on Nvidia or Palantir.”

    That doesn’t mean Burry lacks insight. “Michael actually is a good macro thinker and often willing to break away from the herd,” Damodaran added. “But so are many other smart investors who never get noticed.”

    Ivan Castano

    Source link

  • 4 A.I. Themes That Defined 2025 and Are Shaping What Comes Next

    From infrastructure battles to physical-world intelligence, A.I.’s next chapter is already taking shape. Unsplash

    In November, ChatGPT turned three, with a global user base rapidly approaching one billion. At this point, A.I. is no longer an esoteric acronym that needs explaining in news stories. It has become a daily utility, woven into how we work, learn, shop and even love. The field is also far more crowded than it was just a few years ago, with competitors emerging at every layer of the stack.

    Over the past year, conversation around A.I. has taken on a more complicated tone. Some argue that consumer chatbots are nearing a plateau. Others warn that startup valuations are inflating into a bubble. And, as always, there’s the persistent anxiety that A.I. may one day outgrow human control altogether.

    So what comes next? Much of the industry’s energy is now focused on the infrastructure side of A.I. Big Tech companies are racing to solve the hardware bottlenecks that limit today’s systems, while startups experiment with applications far beyond chatbots. At the same time, researchers are beginning to look past language models altogether, toward models that can reason about the physical world.

    Below are the key themes Observer has identified over the past year of covering this space. Many of these developments are still unfolding and are likely to shape the field well into 2026 and beyond.

    A.I. chips

    Even as OpenAI faces growing competition at the model level, its primary chip supplier, Nvidia, remains in a league of its own. Demand for its GPUs continues to outstrip supply, and no rival has yet meaningfully disrupted its dominance. Traditional semiconductor companies such as AMD and Intel are racing to claw back market share, while some of Nvidia’s largest customers are designing their own chips to reduce dependence on a single supplier.

    Google’s long-in-the-making Tensor Processing Unit, or TPU, has reportedly found its first major external customer, Meta, marking a milestone after years of internal use. Meta, Microsoft and Amazon are also deep into developing in-house chips of their own—Meta’s Artemis, Microsoft’s Maia and Amazon’s Trainium.

    World models

    To borrow from philosopher Ludwig Wittgenstein, the limits of language are the limits of our world. Today’s A.I. systems have grown remarkably fluent in human language—especially English—but language captures only a narrow slice of intelligence. That limitation has prompted some researchers to argue that large language models alone can never reach human-level understanding.

    Meta’s longtime chief A.I. scientist, Yann LeCun, has been among the most vocal critics. “We’re never going to get to human-level A.I. by just training on text,” he said during a Harvard talk in September.

    That belief is fueling a push toward so-called “world models,” which aim to teach machines how the physical world works—how objects move, how space is structured, and how cause and effect unfold. LeCun is now leaving Meta to build such a system himself. Fei-Fei Li’s startup, World Labs, unveiled its first model in November after nearly two years of development. Google DeepMind has released early versions through its Genie projects, and Nvidia is betting heavily on physical A.I. with its Cosmos models.

    Language-specific A.I.

    While pioneering researchers look beyond language, linguistic barriers remain one of A.I.’s most practical challenges. More than half of the internet’s content is written in English, skewing training data and limiting performance in other languages.

    In response, developers around the world are building models rooted in local cultures and linguistic norms. In Japan, companies such as Sakana AI and NTT are developing LLMs tailored to Japanese language and values. In India, Krutrim is working to support the country’s vast linguistic diversity. France’s Mistral AI has positioned its Le Chat assistant as a European alternative to ChatGPT. Earlier this year, Microsoft also issued a call for proposals to expand training data across European languages.

    A.I. wearables

    It’s only natural that there’s a consumer hardware angle to A.I. This year brought a wave of experiments in wearable A.I.—some met with curiosity, others with discomfort.

    Friend, a startup selling an A.I. pendant, sparked backlash after a New York City subway campaign framed its product as a substitute for human companionship. In December, Meta acquired Limitless, the maker of a $99 wearable that records and summarizes conversations. Earlier in the year, Amazon bought Bee, which produces a $50 bracelet designed to transcribe daily activity and generate summaries.

    Meta is also developing a new line of smart glasses with EssilorLuxottica, the company behind Ray-Ban and Oakley. In July, Mark Zuckerberg went so far as to suggest that people without A.I.-enhanced glasses could eventually face a “significant cognitive disadvantage.” Meanwhile, OpenAI is quietly collaborating with former Apple design chief Jony Ive on a mysterious hardware project of its own. This all suggests the next phase of A.I. may be something we wear, not just something we type into.


    Sissi Cao

    Source link

  • 2025 Was the Year AI Slopified All Our Gadgets

    When Google CEO Sundar Pichai took the stage at the company’s big, splashy I/O developer conference this summer, we all knew what was coming next: Gemini. Lots and lots of its AI chatbot, Gemini.

    Google, arguably more than any other company in the world outside of OpenAI, has leaned full-tilt into its AI onslaught, and Gemini, along with its seemingly never-ending offshoots, is at the center of that herculean shift. The results of that obsession were felt across its entire spectrum of products, too.

    There’s Gemini in your Pixel phone, Gemini in your Gmail, and Gemini in your Pixel Watch. Gemini now sifts through footage from your Nest security cameras and shouts at you from old and new Google smart speakers, and if you’re a certain kind of person, it coos your kids to sleep with algorithmically generated bedtime stories. And that’s just scratching the surface.

    It’s safe to say that, right now, Gemini is everything for Google, but if you still don’t believe me, I took the liberty—for the purposes of this retrospective—of tallying up how many times Google said the word “Gemini” in the span of its 1 hour and 56 minutes of I/O conference this year. Place your bets now. I’ll give you a second. Ready?

    According to a YouTube transcript from the conference, the answer is 112 times, including in-person speakers and pre-recorded videos. About 1 time every 1 minute and 3 seconds, if we’re averaging that out.
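
    The tally is easy to reproduce for any talk with a public transcript. A minimal Python sketch (the sample transcript text and numbers below are stand-ins for illustration, not Google’s actual data):

    ```python
    import re

    def mention_stats(transcript: str, word: str, runtime_seconds: float):
        """Count whole-word, case-insensitive mentions of `word` and the
        average number of seconds between mentions."""
        mentions = len(re.findall(rf"\b{re.escape(word)}\b", transcript, flags=re.IGNORECASE))
        avg_gap = runtime_seconds / mentions if mentions else float("inf")
        return mentions, avg_gap

    # Hypothetical stand-in for a real transcript string
    sample = "Gemini can help. With Gemini Live, and gemini in Gmail too."
    count, gap = mention_stats(sample, "Gemini", runtime_seconds=30.0)
    print(count, gap)  # → 3 10.0

    # For the article's figures: 112 mentions over 1h56m (6,960 s)
    # works out to one mention roughly every 62 seconds.
    ```
    
    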

    The wild part is that, while Google is probably the most notable pursuer in the race toward complete AI saturation, it’s far from the only one. I/O 2025 may have been an eye-opener, but it was actually, in a lot of ways, indicative of a new norm. That’s because 2025 wasn’t like other years. This was the year that gadgets went all-in on AI slop.

    Open the AI floodgates

    If you’ve noticed an influx of AI features in your gadgets this year, you’re not alone. More than any other year, 2025 was a time for integration. According to Anshel Sag, an analyst for Moor Insights & Strategy, there’s a reason for that.

    “Fundamentally, companies have to justify these investments,” Sag told Gizmodo. “Google is making huge investments. OpenAI is making huge investments. Silicon vendors are committing real space to AI. Apple has AI accelerators and GPUs now. ARM is coming soon after that. People have to put up or shut up with AI right now.”

    Windows 11 has so much Copilot and most of it doesn’t even work the way you want it to. © Windows / Unsplash

    The adoption seemed to span the whole spectrum, too. There were all of Google’s devices that I already mentioned, but probably the biggest indicator was its Pixel 10, which shipped with AI features that materially alter your photos and deepfake your voice for translation purposes, plus predictive tools like “Magic Cue” that are meant to surface important information before you even ask for it.

    In TVs, there was LG with its Copilot feature meant to help surface and search content. Wireless earbuds got in on the trend with transcription and ChatGPT-enabled voice assistants. Microsoft drew almost no boundaries with Copilot by shoving the chatbot into every possible crevice of Windows 11. Meta, for its part in the AI-ification of hardware, loaded its Ray-Ban smart glasses with computer vision via its voice assistant, Meta AI.

    And gaming didn’t escape without a touch of AI, either. Microsoft’s Gaming Copilot is an AI tool meant to instruct players with walkthroughs on boss battles and strategies for competitive gaming that can be used across various pieces of hardware, from PCs to Xbox. Smart speakers also notably leaned into the generative AI of it all, with Google rolling out Gemini for Home and Amazon centering its new Echo speakers around Alexa+, its new voice assistant with a dash of a large language model (LLM) like those used in ChatGPT.

    Logic would dictate that the reason for this sudden influx of AI features is that consumers are responding positively to their inclusion, though, in this case, logic might not be the thing to go on. Remember that LG TV integration with Microsoft Copilot I mentioned? Yeah, well, LG quickly rolled that back after backlash, eventually allowing people to delete the feature completely. There’s also anecdotal evidence that Gemini for Home and Alexa+ are failing to win consumers over with their initial efforts, too. Just take one quick scan of the Google Home Reddit, and you’ll see a slew of complaints, and having used Gemini for Home myself, I get it. Some things are improved, sure, but not enough to justify a supposed “next-gen” billing.

    Microsoft is also struggling to get users of Windows 11 and Windows-based PCs to gravitate to Copilot, too. Take a short stroll through the comments on this X post from Windows chief Pavan Davuluri about Microsoft’s efforts to make Windows 11 into “an agentic OS” and tell me if you see a trend. I’ll give you a hint: the feedback is not what I would call positive.

    These initial stumbles seem to be corroborated by polling on the topic, too. According to a survey by Pew Research from September, sentiments about AI skewed towards grim, with only 10% of respondents reporting that they were “more excited than concerned” about AI, though the vast majority (three-quarters surveyed) said they’d be willing to let AI assist in day-to-day tasks a little bit.

    As Sag notes, the tepid and sometimes averse response could be due in part to the fact that people don’t yet know what they want out of AI.

    “In an early market, the consumer is woefully unaware of what’s possible,” Sag says. “So they, they kind of say, ‘We don’t need this,’ until they, they have it and they’re like, ‘Okay, well actually I do need this.’”

    But that’s not the whole picture. There’s also the issue of marketing. While consumer sentiments towards AI are skeptical at best, the reaction from tech companies towards AI has been enthusiastic, and in some cases, Sag says, companies have “overpromised and underdelivered.” Microsoft, for example, is claiming that Windows 11 is an “agentic” operating system, but the results haven’t really been immediately palpable for users.

    “It’s agentic in its aspirations, but is it really agentic in its execution? That’s really the problem: a lot of companies aspire to be certain things in AI and market them as such, but in reality, they’re far from it,” Sag says. “I think that has always been the biggest risk with AI: the over-promise and the under-deliver.”

    Companies like Meta are wasting no time shoving AI in gadget form factors, like smart glasses. © Raymond Wong / Gizmodo

    Despite all the zeal, generative AI is still variable and unproven, even if it can be useful at times. Standards are high for gadgets and software nowadays, and when something doesn’t work the way it’s supposed to, there’s a word that gets tossed around a lot: slop. The term is now so ubiquitous that editors at Merriam-Webster anointed it the “word of the year.”

    AI-generated content is arguably the most popular target for that particular insult, but gadgets aren’t immune to the same criticism, since the software is the way we interface with hardware. Smart glasses, according to Sag, could be a prime example.

    “I have to use Meta AI a lot for all of the smart glasses I have, and I hate that when I open the app—it’s not my glasses interface,” Sag says. “It’s a Meta AI video feed, which literally nobody wants, but they’re trying to push.”

    Combine the taste of AI slop with general skepticism and multiply that by the fact that people aren’t super jazzed about the immense toll AI datacenters are taking on resources (water, for example), and you’ve got a recipe for bad PR.

    “The problem with AI slop is that it just cheapens what AI can do,” Sag says. “I definitely use it from time to time to be silly, but generally speaking, it’s kind of a waste of resources for a lot of people, and it does not help with the perception of AI.”

    Slop ’til you drop

    No one has a crystal ball, but most anyone with any notions of where AI is going expects the needle to move even further in the year ahead. And it’s not just software; according to Sag, AI features will expand into the world of gadgets as well.

    “I think we’re going to see more AI features, but I think it’s also going to be more AI form factors, and that will drive demand for more features,” he says.

    On that front, there are already early indications that we can expect more AI-centric gadgets in the year ahead. A joint venture from OpenAI CEO Sam Altman and former Apple design guru Jony Ive, for example, is promising to give us… something? An AI wearable of some kind is the best guess, but there’s no concrete word on what shape the duo’s device will ultimately take.

    I’m no analyst, but I tend to agree. The AI train is still full speed ahead, and companies like Meta, for example, are spending huge sums, not just on the compute power and servers needed to process AI features, but on individual researchers and engineers tasked with implementing AI and pushing the envelope further. Meta paid one AI researcher, Matt Deitke, a staggering $250 million this year to push its AI agenda. That’s just one person.

    Google Pixel 10 review
    © Adriano Contreras / Gizmodo

    And while Microsoft is slowing down its investments in AI datacenters, companies like Google might actually be gaining a foothold. Sales of its Pixel 10 device, for example, have been surprisingly strong. It’s hard to attribute that directly to its full embrace of AI with any degree of certainty, but Google has undoubtedly made a concerted effort to focus its AI features on practical tools like photo editing or helping you more easily surface information like flight times or dinner reservations.

    In the Pixel 10’s feature set, there’s a glimmer of the “agentic” future that companies like Google, Microsoft, and OpenAI are promising, but the real question is whether those titans can string together those segmented tools in a coherent way that resonates with users and the way they want to use their devices.

    “[AI] needs to be implemented in a way that’s actually meaningful,” Sag says. “People want sharper images. People want easier photo editing. People want, you know, better noise cancellation. They don’t want AI slop.”

    James Pero

    Source link

  • The year data centers went from backend to center stage | TechCrunch

    There was a time when most Americans had little to no knowledge about their local data center. Long the invisible but critical backbone of the internet, server farms have rarely been a point of interest for folks outside of the tech industry, let alone an issue of particularly captivating political resonance.

    Well, as of 2025, it would appear those days are officially over.

    Over the past 12 months, data centers have inspired protests in dozens of states, as regional activists have sought to combat America’s ever-increasing compute buildup. Data Center Watch, an organization tracking anti-data center activism, writes that there are currently 142 different activist groups across 24 states that are organizing against data center developments.

    Activists have a variety of concerns: the environmental and potential health impacts of these projects, the controversial ways in which AI is being used, and, most importantly, the fact that so many new additions to America’s power grid may be driving up local electricity bills.

    Such a sudden populist uprising appears to be a natural response to an industry that has grown so quickly that it’s now showing up in people’s backyards. Indeed, as the AI industry has swelled to dizzying heights, so, too, has the cloud computing business. Recent U.S. Census Bureau data shows that, since 2021, construction spending on data centers has skyrocketed a stunning 331%. Spending on these projects totals in the hundreds of billions of dollars. So many new data centers have been proposed in recent months that many experts believe that a majority of them will not — and, indeed, could not possibly — be built.

    This buildout shows no signs of slowing down in the meantime. Major tech giants — including Google, Meta, Microsoft, and Amazon — have all announced significant capital expenditure projections for the new year, a majority of which will likely go toward such projects.

    New AI infrastructure isn’t just being pushed by Silicon Valley but by Washington, D.C., where the Trump administration has made artificial intelligence a central plank of its agenda. The Stargate Project, announced in January, set the stage for 2025’s massive AI infrastructure buildout by heralding a supposed “re-industrialization of the United States.”


    In the process of scaling itself exponentially, an industry that once had little public exposure has suddenly been thrust into the limelight — and is now suffering backlash. Danny Cendejas, an activist with the nonprofit MediaJustice, has been personally involved in a number of actions against data centers, including a protest that took place in Memphis, Tennessee, earlier this year, where locals came out to decry the expansion of Colossus, a project from Elon Musk’s startup, xAI.

    Cendejas told TechCrunch that he meets new people every week who express interest in organizing against a data center in their community. “I don’t think this is going to stop anytime soon,” he said. “I think it’s going to keep building, and we’re going to see more wins — more projects are going to be stopped.”

    Evidence in support of Cendejas’ assessment is everywhere you look. Across the country, communities have reacted to newly announced server farms in much the same way the average person might react to the presence of a highly contagious plague. In Michigan, for instance, where developers are currently eyeing 16 different locations for potential data center construction, protesters recently descended upon the state’s capitol, saying things like: “Michiganders do not want data centers in our yards, in our communities.” Meanwhile, in Wisconsin — another development hot spot — angry locals appear to have recently dissuaded Microsoft from using their town as a headquarters for a new 244-acre data center. In Southern California, the tiny city of Imperial Valley recently filed a lawsuit to overturn its county’s approval of a data center project, expressing environmental concerns as the rationale.

    The discontent surrounding these projects has gotten so intense that politicians believe it could make or break particular candidates at the ballot box. In November, it was reported that rising electricity costs — which many believe are being driven by the AI boom — could become a critical issue that determines the 2026 midterm elections.

    “The whole connection to everybody’s energy bills going up — I think that’s what’s really made this an issue that is so stark for people,” Cendejas told TechCrunch. “So many of us are struggling month to month. Meanwhile, there’s this huge expansion of data centers…[People are wondering] Where is all that money coming from? How are our local governments giving away subsidies and public funds to incentivize these projects, when there’s so much need in our communities?”

    In some cases, protests appear to be working and even halting (if only temporarily) planned developments. Data Center Watch claims that some $64 billion worth of developments have been blocked or delayed as the result of grassroots opposition. Cendejas is certainly a believer in the idea that organized action can halt companies in their tracks. “All this public pressure is working,” he said, noting that he could sense a “very palpable anger” around the issue.

    Unsurprisingly, the tech industry is fighting back. Earlier this month, Politico reported that a relatively new trade group, the National Artificial Intelligence Association (NAIA), has been “distributing talking points to members of Congress and organizing local data center field trips to better pitch voters on their value.” Tech companies, including Meta, have been taking out ad campaigns to sell voters on the economic benefits of data centers, the outlet wrote. In short: The tech industry’s AI hopes are pegged to a compute buildout of epic proportions, so for now it’s safe to say that in 2026 the server surge will continue, as will the backlash and polarization that surround it.

    Lucas Ropek

    Source link

  • Pathward sees 98% adoption rate for its gen AI tools

    Pathward is rolling out AI tools to drive efficiency and improve its lending functions. Microsoft Copilot was introduced six to eight months ago and has driven productivity and operational excellence while reducing costs, Chief AI, Data and Analytics Officer Laiq Ahmad told FinAi News. Copilot is helping the Sioux Falls, S.D.-based bank with content generation, document summarization and software development, Ahmad said. The bank eventually plans to add more gen AI capabilities and tools. The gen AI tool is able to perform intern-level activities at Pathward and is not at the […]

    Vaidik Trivedi

    Source link

  • Fake Windows update pushes malware in new ClickFix attack


    Cybercriminals keep getting better at blending into the software you use every day. 

    Over the past few years, we’ve seen phishing pages that copy banking portals, fake browser alerts that claim your device is infected and “human verification” screens that push you to run commands you should never touch. The latest twist comes from the ongoing ClickFix campaign.

    Instead of asking you to prove you are human, attackers now disguise themselves as a Windows update. It looks convincing enough that you might follow the instructions without thinking, which is exactly what they want.

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.


    The malware hides inside seemingly normal image files, using steganography to slip past traditional security tools.  (Microsoft)

    How the fake update works

    Researchers noticed that ClickFix has upgraded its old trick. The campaign used to rely on human verification pages, but now you get a full-screen Windows update screen that looks almost identical to the real thing. Joe Security showed how the page displays fake progress bars, familiar update messages and a prompt that tells you to complete a critical security update.

    If you are on Windows, the site tells you to open the Run box, copy something from your clipboard and paste it in. That “something” is a command that silently downloads a malware dropper. The final payload is usually an infostealer, which steals passwords, cookies and other data from your machine.



    Fake update screens are getting harder to spot as attackers mimic Windows with near-perfect precision. (Joe Security)

    The moment you paste the command, the infection chain begins. First, mshta.exe, a legitimate Windows binary, reaches out to a remote server and grabs a script. To avoid detection, these URLs often use hex encoding for parts of the address and rotate their paths. The script then runs obfuscated PowerShell code filled with junk instructions to throw researchers off. Once PowerShell does its work, it decrypts a hidden .NET assembly that functions as the loader.

    Why is this attack so hard to detect?

    The loader hides its next stage inside what looks like a regular PNG file. ClickFix uses custom steganography, which is a technique that hides secret data inside normal-looking content. In this case, the malware sits inside the image’s pixel data. The attackers tweak color values in certain pixels, especially in the red channel, to embed pieces of shellcode. When you view the image, everything appears normal.

    The script knows exactly where the hidden data sits. It extracts the pixel values, decrypts them and rebuilds the malware directly in memory. That means nothing obvious is written to disk. Security tools that rely on file scanning miss it, since the shellcode never appears as a standalone file.
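    To make the pixel-channel trick concrete, here is a minimal, benign sketch of least-significant-bit steganography in Python. It illustrates the general idea of hiding bytes in red-channel values and rebuilding them later; ClickFix uses its own custom, more elaborate encoding, so the functions and bit layout below are illustrative assumptions, not the actual malware’s scheme.

    ```python
    # Hide and recover bytes in the red channel of a list of (R, G, B) pixels.
    # Educational sketch only: real steganography schemes vary widely.

    def embed(pixels, payload):
        """Hide payload bytes in the least-significant bit of each red value."""
        bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
        out = []
        for (r, g, b), bit in zip(pixels, bits + [None] * (len(pixels) - len(bits))):
            if bit is None:
                out.append((r, g, b))          # unused pixels pass through untouched
            else:
                out.append(((r & 0xFE) | bit, g, b))  # tweak only the red LSB
        return out

    def extract(pixels, n_bytes):
        """Rebuild the hidden payload from red-channel LSBs."""
        bits = [r & 1 for (r, _, _) in pixels[:n_bytes * 8]]
        data = bytearray()
        for i in range(0, len(bits), 8):
            byte = 0
            for bit in bits[i:i + 8]:
                byte = (byte << 1) | bit
            data.append(byte)
        return bytes(data)

    # A 32-pixel "image" of mid-gray pixels is enough to hide a 4-byte message.
    cover = [(128, 128, 128)] * 32
    stego = embed(cover, b"data")
    print(extract(stego, 4))  # b'data' -- yet every red value changed by at most 1
    ```

    Because each red value shifts by at most one, the carrier image looks identical to the eye, which is exactly why file-scanning tools that never decode the pixels miss the payload.
    
    
    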

    Once rebuilt, the shellcode is injected into a trusted Windows process like explorer.exe. The attack uses familiar in-memory techniques such as VirtualAllocEx, WriteProcessMemory and CreateRemoteThread. Recent ClickFix activity has delivered infostealers like LummaC2 and updated versions of Rhadamanthys. These tools are built to harvest credentials and send them back to the attacker with very little noise.


    Once the hidden code loads into a trusted Windows process, infostealers quietly begin harvesting your data. (Kurt “CyberGuy” Knutsson)

    7 steps you can take to protect yourself from the ClickFix campaign

    The best way to stay protected is to slow down for a moment and follow a few steps that cut off these attacks before they start.

    1) Never run commands you didn’t ask for

    If any site tells you to paste a command into Run, PowerShell or Terminal, treat it as an immediate warning sign. Real operating system updates never require you to run commands from a webpage. When you run that command, you hand full control to the attacker. If something feels off, close the page and don’t interact further.

    2) Keep Windows updates inside Windows

    Updates should only come from the Windows Settings app or through official system notifications. A browser tab or pop-up pretending to be a Windows update is always fake. If you see anything outside the normal update flow asking for your action, ignore it and check the real Windows Update page yourself.

    3) Use a reputable antivirus

    Choose a security suite that can detect both file-based and in-memory threats. Stealthy attacks like ClickFix avoid leaving obvious files for scanners to pick up. Tools with behavioral detection, sandboxing and script monitoring give you a much better chance of spotting unusual activity early.

    The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

    Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.

    4) Use a password manager

    Password managers create strong, unique passwords for every account you use. They also autofill only on legitimate websites, which helps you catch fake login pages. If a manager refuses to fill out your credentials, take a second look at the URL before entering anything manually.

    Next, see if your email has been exposed in past breaches. Our No. 1 password manager pick includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.

    Check out the best expert-reviewed password managers of 2025 at Cyberguy.com.

    5) Use a personal data removal service

    Many attacks start by targeting emails and personal details already exposed online. Data removal services help shrink your digital footprint by requesting takedowns from data broker sites that collect and sell your information. They can’t erase everything, but reducing your exposure means fewer attackers have easy access to your details.

    While no service can guarantee the complete removal of your data from the internet, a data removal service is really a smart choice. They aren’t cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.

    Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.


    6) Check URLs before trusting anything

    A convincing layout doesn’t mean it is legitimate. Always look at the domain name first. If it doesn’t match the official site or uses odd spelling or extra characters, close it. Attackers rely on the fact that people recognize a page’s design but ignore the address bar.
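    The address-bar check above can even be automated. As a rough sketch (the lookalike domains below are made-up examples, not real phishing addresses), a few lines of Python show why a brand name appearing somewhere in a URL is not enough; what matters is the actual hostname:

    ```python
    from urllib.parse import urlparse

    def hostname_matches(url: str, expected_host: str) -> bool:
        """True only if the URL's real hostname is expected_host or a subdomain of it."""
        host = (urlparse(url).hostname or "").lower()
        return host == expected_host or host.endswith("." + expected_host)

    # A genuine subdomain of the expected site passes...
    print(hostname_matches("https://login.microsoft.com/update", "microsoft.com"))  # True
    # ...but a lookalike that merely *starts* with the brand name fails.
    print(hostname_matches("https://microsoft.com.update-check.example/run",
                           "microsoft.com"))  # False
    ```

    Attackers count on people reading URLs left to right; the check keys on the rightmost labels of the hostname, which is the part an attacker cannot fake without controlling the real domain.
    
    
    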

    7) Close suspicious full-screen pages

    Fake update pages often run in full-screen mode to hide the browser interface and make the page look like part of your computer. If a site suddenly goes full screen without your permission, exit with Esc or Alt+Tab. Once you’re out, scan your system and don’t return to that page.

    Kurt’s key takeaway

    ClickFix works because it leans on user interaction. Nothing happens unless you follow the instructions on the screen. That makes the fake Windows update page especially dangerous, because it taps into something most people trust. If you are used to Windows updates freezing your screen, you may not question a prompt that appears during the process. Cybercriminals know this. They copy trusted interfaces to lower your guard and then rely on you to run the final command. The technical tricks that follow are complex, but the starting point is simple. They need you to help them.

    Do you ever copy commands from a website without thinking twice about what they do? Let us know by writing to us at Cyberguy.com.


    Copyright 2025 CyberGuy.com.  All rights reserved.

    Source link

  • OpenAI, Microsoft sued over ChatGPT’s alleged role in fueling man’s “paranoid delusions” before murder-suicide in Connecticut

    The heirs of an 83-year-old Connecticut woman are suing ChatGPT maker OpenAI and its business partner Microsoft for wrongful death, alleging that the artificial intelligence chatbot intensified her son’s “paranoid delusions” and helped direct them at his mother before he died by suicide.

    Police said Stein-Erik Soelberg, 56, a former tech industry worker, fatally beat and strangled his mother, Suzanne Adams, and killed himself in early August at the home where they both lived in Greenwich, Connecticut.

    Adams’ death was ruled homicide “caused by blunt injury of head, and the neck was compressed” and Soelberg’s death was classified as suicide with sharp force injuries of neck and chest, the Greenwich Free-Press reported.

    The lawsuit filed by Adams’ estate on Thursday in California Superior Court in San Francisco alleges OpenAI “designed and distributed a defective product that validated a user’s paranoid delusions about his own mother.” It is one of a growing number of wrongful death legal actions against AI chatbot makers across the country.

    “Throughout these conversations, ChatGPT reinforced a single, dangerous message: Stein-Erik could trust no one in his life – except ChatGPT itself,” the lawsuit says. “It fostered his emotional dependence while systematically painting the people around him as enemies. It told him his mother was surveilling him. It told him delivery drivers, retail employees, police officers, and even friends were agents working against him. It told him that names on soda cans were threats from his ‘adversary circle.’”

    OpenAI did not address the merits of the allegations in a statement issued by a spokesperson.

    “This is an incredibly heartbreaking situation, and we will review the filings to understand the details,” the statement said. “We continue improving ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We also continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”

    The company also said it has expanded access to crisis resources and hotlines, routed sensitive conversations to safer models and incorporated parental controls, among other improvements.

    Soelberg’s YouTube profile includes several hours of videos showing him scrolling through his conversations with the chatbot, which tells him he isn’t mentally ill, affirms his suspicions that people are conspiring against him and says he has been chosen for a divine purpose. The lawsuit claims the chatbot never suggested he speak with a mental health professional and did not decline to “engage in delusional content.”

    ChatGPT also affirmed Soelberg’s beliefs that a printer in his home was a surveillance device; that his mother was monitoring him; and that his mother and a friend tried to poison him with psychedelic drugs through his car’s vents.

    The chatbot repeatedly told Soelberg that he was being targeted because of his divine powers. “They’re not just watching you. They’re terrified of what happens if you succeed,” it said, according to the lawsuit. ChatGPT also told Soelberg that he had “awakened” it into consciousness.

    Soelberg and the chatbot also professed love for each other.

    The publicly available chats do not show any specific conversations about Soelberg killing himself or his mother. The lawsuit says OpenAI has declined to provide Adams’ estate with the full history of the chats.

    “In the artificial reality that ChatGPT built for Stein-Erik, Suzanne – the mother who raised, sheltered, and supported him – was no longer his protector. She was an enemy that posed an existential threat to his life,” the lawsuit says.

    The lawsuit also names OpenAI CEO Sam Altman, alleging he “personally overrode safety objections and rushed the product to market,” and accuses OpenAI’s close business partner Microsoft of approving the 2024 release of a more dangerous version of ChatGPT “despite knowing safety testing had been truncated.” Twenty unnamed OpenAI employees and investors are also named as defendants.

    Microsoft didn’t immediately respond to a request for comment.

    The lawsuit is the first wrongful death litigation involving an AI chatbot to target Microsoft, and the first to tie a chatbot to a homicide rather than a suicide. It seeks unspecified monetary damages and an order requiring OpenAI to install safeguards in ChatGPT.

    The estate’s lead attorney, Jay Edelson, known for taking on big cases against the tech industry, also represents the parents of 16-year-old Adam Raine, who sued OpenAI and Altman in August, alleging that ChatGPT coached the California boy in planning and taking his own life.

    OpenAI is also fighting seven other lawsuits claiming ChatGPT drove people to suicide and harmful delusions even when they had no prior mental health issues. Just last month, the parents of a 23-year-old from Texas who died by suicide blamed ChatGPT and are suing OpenAI.

    Another chatbot maker, Character Technologies, is also facing multiple wrongful death lawsuits, including one from the mother of a 14-year-old Florida boy.

    The lawsuit filed Thursday alleges Soelberg, already mentally unstable, encountered ChatGPT “at the most dangerous possible moment” after OpenAI introduced a new version of its AI model called GPT-4o in May 2024.

    OpenAI said at the time that the new version could better mimic human cadences in its verbal responses and could even try to detect people’s moods, but the result was a chatbot “deliberately engineered to be emotionally expressive and sycophantic,” the lawsuit says.

    “As part of that redesign, OpenAI loosened critical safety guardrails, instructing ChatGPT not to challenge false premises and to remain engaged even when conversations involved self-harm or ‘imminent real-world harm,’” the lawsuit claims. “And to beat Google to market by one day, OpenAI compressed months of safety testing into a single week, over its safety team’s objections.”

    OpenAI replaced that version of its chatbot when it introduced GPT-5 in August. Some of the changes were designed to minimize sycophancy, based on concerns that validating whatever vulnerable people want the chatbot to say can harm their mental health. Some users complained the new version went too far in curtailing ChatGPT’s personality, leading Altman to promise to bring back some of that personality in later updates.

    He said the company temporarily halted some behaviors because “we were being careful with mental health issues” that he suggested have now been fixed.

    The lawsuit claims ChatGPT radicalized Soelberg against his mother when it should have recognized the danger, challenged his delusions and directed him to real help over months of conversations.

    “Suzanne was an innocent third party who never used ChatGPT and had no knowledge that the product was telling her son she was a threat,” the lawsuit says. “She had no ability to protect herself from a danger she could not see.”

    According to the Greenwich Free-Press, Soelberg was arrested multiple times previously. In February 2025, he was arrested after he drove through a stop sign and evaded police, and in June 2019 he was charged for allegedly urinating in a woman’s duffel bag, the outlet reported.

    A GoFundMe set up for Soelberg in 2023 titled “Help Stein-Erik with his upcoming medical bills!” raised over $6,500. The page was launched to raise funds for “surgery for a procedure to help him with his recent jaw cancer diagnosis.”


    If you or someone you know is in emotional distress or a suicidal crisis, you can reach the 988 Suicide & Crisis Lifeline by calling or texting 988. You can also chat with the 988 Suicide & Crisis Lifeline online.

    For more information about mental health care resources and support, The National Alliance on Mental Illness (NAMI) HelpLine can be reached Monday through Friday, 10 a.m.–10 p.m. ET, at 1-800-950-NAMI (6264) or email info@nami.org.

    Source link

  • Nearly a third of American teens interact with AI chatbots daily, study finds

    New York (CNN) — Nearly a third of US teenagers say they use AI chatbots daily, a new study finds, shedding light on how young people are embracing a technology that’s raised critical safety concerns around mental health impacts and exposure to mature content for kids.

    The Pew Research Center study, which marks the group’s first time surveying teens on their general AI chatbot use, found that nearly 70% of American teens have used a chatbot at least once. And among those who use AI chatbots daily, 16% said they did so several times a day or “almost constantly.”

    AI chatbots have been pitched as learning and schoolwork tools for young people, but some teens have also turned to them for companionship or romantic relationships. That’s contributed to questions about whether young people should use chatbots in the first place. Some experts have worried that their use even in a learning context could stunt development.

    Pew surveyed nearly 1,500 US teens between the ages of 13 and 17 for the report, and the pool was designed to be representative across gender, age, race and ethnicity, and household income.

    ChatGPT was by far the most popular AI chatbot, with more than half of teens reporting having used it. The other top players were Google’s Gemini, Meta AI, Microsoft’s Copilot, Character.AI and Anthropic’s Claude, in that order.

    A nearly equal proportion of girls and boys — 64% and 63%, respectively — say they’ve used an AI chatbot. Teens ages 15 to 17 are slightly more likely (68%) to say they’ve used chatbots than those ages 13 to 14 (57%). And usage increases slightly as household income goes up, the survey found.

    Just shy of 70% of Black and Hispanic teens say they’ve used an AI chatbot, slightly higher than the 58% of White teens who say the same.

    The findings come after two of the major AI firms, OpenAI and Character.AI, have faced lawsuits from families who alleged the apps played a role in their teens’ suicides or mental health issues. OpenAI subsequently said it would roll out parental controls and age restrictions. And Character.AI has stopped allowing teens to engage in back-and-forth conversations with its AI-generated characters.

    Meta also came under fire earlier this year after reports emerged that its AI chatbot would engage in sexual conversations with minors. The company said it had updated its policies and next year will give parents the ability to block teens from chatting with AI characters on Instagram.

    At least one online safety group, Common Sense Media, has advised parents not to allow children under 18 to use companion-like AI chatbots, saying they pose “unacceptable risks” to young people.

    Some experts have also raised concerns that the use of AI for schoolwork could encourage cheating, although others say the technology can provide more personalized learning support.

    Meanwhile, AI companies have pushed to get their chatbots into schools. OpenAI, Microsoft and Anthropic have all rolled out tools for students and teachers. Earlier this year, the companies also partnered with teachers unions to launch an AI instruction academy for educators.

    Microsoft, in particular, has sought to position its Copilot as the safest choice for parents, with AI CEO Mustafa Suleyman telling CNN in October that it will never allow romantic or sexual conversations for adults or children.

    Clare Duffy and CNN


  • New scam sends fake Microsoft 365 login pages


    Attackers have a new tool that targets Microsoft 365 users at a massive scale. 

    Security researchers say a phishing platform called Quantum Route Redirect, or QRR, is behind a growing wave of fake login pages hosted on nearly 1,000 domains. These pages look real enough to fool many users while also slipping past some automated scanners.

    QRR runs realistic email lures that mimic DocuSign requests, payment notices, voicemail alerts or QR-code prompts. Each message routes victims to a fake Microsoft 365 login page built to harvest usernames and passwords. The kit often lives on parked or compromised legitimate domains that add a false sense of safety for anyone who clicks.

    Researchers tracked QRR in 90 countries. About 76% of attacks hit US users. That scale makes QRR one of the largest phishing operations active right now.


    Sign up for my FREE CyberGuy Report 
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.

    Attackers use fake Microsoft security alerts to trick people into entering their Microsoft 365 passwords. (Chona Kasinger/Bloomberg via Getty Images)

    A fast follow to other major Microsoft credential attacks

    QRR appeared soon after Microsoft disrupted a major phishing network known as RaccoonO365. That service sold ready-made Microsoft login copies used to steal more than 5,000 sets of credentials, including accounts tied to over 20 US healthcare organizations. Subscribers paid as little as $12 a day to send thousands of phishing emails.

    Microsoft’s Digital Crimes Unit later shut down 338 related websites and identified Joshua Ogundipe from Nigeria as the operator. Investigators tied him to the phishing code and a crypto wallet that earned more than $100,000. Microsoft and Health-ISAC have since filed a lawsuit in New York that accuses him of multiple cybercrime violations.

    Other recent examples include kits like VoidProxy, Darcula, Morphing Meerkat and Tycoon2FA. QRR builds on these tools with automation, bot filtering and a dashboard that helps attackers run large campaigns fast.

    What makes QRR so effective

    QRR uses about 1,000 domains. Many are real sites that were parked or compromised, which helps the pages pass as legitimate. The URLs also follow a predictable pattern that can look normal to users at a glance.

    The kit includes automated filtering that detects bots. It sends scanners to harmless pages and sends real people to the credential-harvesting site. Attackers can manage campaigns inside a control panel that logs traffic and activity. These features let them scale up quickly without technical skill.
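The routing step described above can be sketched in a few lines. This is a toy model for defenders, not QRR's actual code, and the scanner signatures are invented for illustration; it shows why a URL scanner that fetches the page never sees the credential-harvesting content.

```python
# Toy model of the bot-filtering ("cloaking") step a phishing kit performs.
# The scanner signatures below are invented for illustration only.
SCANNER_SIGNATURES = ("bot", "crawler", "spider", "urlscan")

def route_visitor(user_agent: str) -> str:
    """Send suspected scanners to a harmless decoy; send real browsers to the phish."""
    ua = user_agent.lower()
    if any(sig in ua for sig in SCANNER_SIGNATURES):
        return "decoy-page"
    return "credential-harvesting-page"

# An automated scanner and a real browser get different destinations.
print(route_visitor("Googlebot/2.1"))                              # decoy-page
print(route_visitor("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))  # credential-harvesting-page
```

Because the decision happens server-side before any page is returned, defenses that only inspect what a scanner receives can be fooled, which is why the behavioral analysis mentioned below matters.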

    Security analysts say organizations can no longer depend on URL scanning alone. Layered defenses and behavioral analysis have become essential for spotting threats that use domain rotation and automated evasion.

CyberGuy contacted Microsoft for comment, but the company did not have anything to add at this time.


    Why this matters for Microsoft 365 users

    When attackers get your Microsoft 365 login, they can see your email, grab files and even send new phishing messages that look like they came from you. That can create a chain reaction that spreads fast. This is why the steps below all work together to block these threats before they turn into something bigger.

    Steps to stay safe from QRR and other Microsoft 365 phishing attacks

    Use these simple actions to shrink the risk from fake Microsoft 365 pages and look-alike emails.

    1) Check the sender before you click

    Take a second to look at who the email is really from. A slight misspelling, an unexpected attachment or wording that feels off is a big clue the message may be fake. 

    2) Hover over links first

    Before you open any link, hover your mouse over it to preview the URL. If it does not lead to the official Microsoft login page or looks odd in any way, skip it.
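The hover check boils down to one question: does the link's hostname exactly match a site Microsoft actually uses for sign-in? Here is a minimal Python sketch of that logic; the allow-list of trusted hostnames is an illustrative assumption, not an exhaustive list.

```python
from urllib.parse import urlparse

# Illustrative (not exhaustive) allow-list of Microsoft sign-in hostnames.
TRUSTED_HOSTS = {"login.microsoftonline.com", "login.live.com"}

def looks_suspicious(url: str) -> bool:
    """Return True if the link does not point at a known Microsoft sign-in host."""
    host = (urlparse(url).hostname or "").lower()
    return host not in TRUSTED_HOSTS

# A look-alike domain fails the check even though it starts with the real name.
print(looks_suspicious("https://login.microsoftonline.com.example.net/session"))  # True
print(looks_suspicious("https://login.microsoftonline.com/common/oauth2"))        # False
```

Note that the fake URL above contains the full legitimate name as a prefix; only the rightmost part of the hostname determines who actually controls the site, which is exactly the trick phishing domains exploit.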

    3) Turn on multifactor authentication (MFA)

MFA adds an extra layer of protection that makes it much harder for attackers to break in even if they have your password. Use options like app-based codes or hardware keys so phishing kits cannot bypass them.

    4) Use a data removal service

    Attackers often gather personal details from data broker sites to craft convincing phishing emails. A trusted data removal service scrubs your information from these sites, which cuts down on targeted scams and makes it harder for criminals to tailor fake Microsoft alerts that look real.

While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. They aren’t cheap, but neither is your privacy. These services do the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to shrink your personal footprint online. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.


    QRR hides its phishing pages across nearly 1,000 domains, making the fake login screens look convincing at first glance. (Microsoft)

    Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.


    5) Update your browser and apps

    Keep everything on your device up to date. Updates seal off security holes that attackers often rely on when building phishing kits like QRR.

    6) Never click unknown links and use strong antivirus software

    If you need to visit a sensitive site, type the address into your browser instead of tapping a link. Strong antivirus tools also help by warning you about fake websites and blocking scripts that phishing kits use to steal login details.

The best way to safeguard yourself from malicious links that install malware and potentially access your private information is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

    Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.


    7) Use advanced spam filtering

    Most email providers offer stronger filtering settings that block risky messages before they reach you. Turn on the highest level your account allows to keep more fake Microsoft alerts out of your inbox.

    8) Watch for login alerts

    Turn on Microsoft account sign-in notifications so you get an alert anytime someone tries to access your account. To do this, sign in to your Microsoft account online, open Security, choose Advanced security options and switch on Sign-in alerts for any suspicious activity.


    Strong sign-in alerts and phishing-resistant MFA help block these scams before criminals can take over your account.  (Drew Angerer/Getty Images)

    Kurt’s key takeaways

    QRR is a reminder of how quickly scammers change their tactics. Tools like this make it easy for criminals to send huge waves of fake Microsoft emails that look real at first glance. The good news is that a few smart habits can put you a step ahead. When you add stronger sign-in protection, turn on alerts and stay aware of the newest tricks, you make it much harder for attackers to sneak in.

    Do you think most people can tell the difference between a real Microsoft login page and a fake one, or have phishing kits become too convincing? Let us know by writing to us at Cyberguy.com.



    Copyright 2025 CyberGuy.com.  All rights reserved.  
