ReportWire

Tag: Microsoft

  • The White House wants AI companies to cover rate hikes. Most have already said they would. | TechCrunch

    [ad_1]

    The proliferation of AI data centers plugging into the national electrical grid has helped increase consumer electricity prices, driving up the average national electricity price by more than 6% in the last year.

    That’s not a good look for the incumbents ahead of this fall’s elections, and President Donald Trump addressed the challenge in his State of the Union speech last night.

    “We’re telling the major tech companies that they have the obligation to provide for their own power needs,” Trump said. “They can build their own power plants as part of their factory, so that no one’s prices will go up.”

    The hyperscalers in question don’t need to be told. They have already made public commitments in recent weeks to cover electricity costs by building their own power sources, paying higher rates, or both, part of a broader effort to solve PR problems around data center expansion and win over skeptical communities.

    On January 11, Microsoft announced its policy “to ensure that the electricity cost of serving our datacenters is not passed on to residential customers.” January 26, OpenAI committed to “paying its own way on energy, so that our operations don’t increase your energy prices.” On February 11, Anthropic made the same pledge to “cover electricity price increases that consumers face from our data centers.” Yesterday, Google announced the largest battery project in the world yesterday to support a data center in Minnesota.

    What these commitments means in practice, and who will determine which data centers are responsible for which price increases, remains unknown. The White House has not released the text of the proposed pledge.

    “A handshake agreement with Big Tech over data center costs isn’t good enough,” Arizona Democratic Senator Mark Kelly said on social media. “Americans need a guarantee that energy prices won’t soar and communities have a say.”

    Techcrunch event

    Boston, MA
    |
    June 9, 2026

    White House spokesperson Taylor Rodgers said that next week, companies will send representatives to formally sign the pledge at the White House. Amazon, Google, Meta, Microsoft, xAI, Oracle and OpenAI are reportedly among those set to attend. However none of the companies have confirmed their attendance.

    Even if tech companies committ to taking on electricity costs, on-site power plants may not be a panacea—they can still have adverse impacts on the surrounding environment, and will stress supply chains for natural gas, turbines, photovoltaics and batteries, depending on how companies aim to power their compute.

    [ad_2]

    Tim Fernholz

    Source link

  • New Xbox CEO On Company Strategy: “The Plan’s The Plan Until It’s Not The Plan” – Kotaku

    [ad_1]

    In her introduction email to Xbox employees, Asha Sharma, the new CEO of Microsoft Gaming and former president of Microsoft’s CoreAI, promised a return to Xbox. This, among many other things about the announcement, raised some questions, with perhaps the most relevant being, “What does that even mean?

    Windows Central sat down with Sharma and new CCO Matt Booty to ask just that. Instead of clarifying her plans, Sharma offered vaguely positive-sounding platitudes that amounted to little of substance. “For me, the spirit of ‘Return to Xbox’ is about returning to the spirit that the team was founded on,” Sharma said. “It’s that spirit of surprise, it’s the spirit of building something nobody else was willing to try—I’ve heard ‘renegade,’ ‘rebellion,’ and ‘fun’ used. That’s what I was thinking about when I wrote that.”

    Sharma did, at least, acknowledge that she has a lot of work to do before making any big decisions, but it’s bizarre that Microsoft would just set her loose for interviews when she admits she still has so much to learn about Xbox. When asked about Xbox getting rid of game exclusives and whether or not that policy might be revised, she said, “Right now, I need to learn, candidly. About the ‘why’ of these decisions, what we were optimizing for, and what the data says about the Xbox strategy today. That’s the honest answer. I’m looking at lifetime value, not just what happened in a previous moment, or in short term efficiencies and things like that. The plan’s the plan until it’s not the plan.”

    On the subject of AI, Sharma reiterated that she wouldn’t “flood [the Xbox] ecosystem with slop,” and said she stands by what she wrote in her introduction email. Booty jumped in with further clarification, adding, “We’ve got no pressure from Microsoft, there are no directives on AI coming down.”

    Overall, the interview is light on concrete details about Sharma’s vision for Xbox. As she said, “I think from here, the work is proof over promise.” We’ll just have to wait and see what that work actually looks like.

    [ad_2]

    Jen Lennon

    Source link

  • Sam Altman Defends A.I. Energy Use With Human Comparison, Sparking Debate

    [ad_1]

    Sam Altman challenged critics of A.I.’s water and electricity consumption. Photo by John MacDougall/AFP via Getty Images

    Sam Altman is pushing back on mounting criticism over the environmental toll of A.I. The OpenAI chief has dismissed claims about A.I.’s water consumption as “fake” and drawn comparisons between the electricity required to power A.I. systems and the energy it takes to develop human intelligence.

    Figures suggesting that tools like ChatGPT consume multiple gallons of water per query are “totally insane” and have “no connection to reality,” Altman said in a Feb. 20 interview with The Indian Express on the sidelines of the AI Impact Summit in New Delhi. Last year, Altman claimed that ChatGPT uses 0.000085 gallons of water per query—roughly one-fifteenth of a teaspoon—though he did not explain how he calculated that figure.

    A.I.’s water footprint largely stems from the need for evaporative cooling systems used to keep data center hardware from overheating. But Altman argued that companies like OpenAI are no longer directly managing such cooling processes. Many A.I. developers, he noted, are shifting toward cooling systems that recirculate liquid rather than continually drawing fresh supplies. Meanwhile, tech giants like Microsoft, Meta, Google and Amazon have pledged to replenish more water than they withdraw by 2030.

    Even so, data centers continue to drink up water at a rapid pace. Total A.I.-related water consumption for cooling reached 23.7 cubic kilometers in 2025, a 38 percent increase over 2020, and is expected to more than triple over the next 25 years, according to a January report from Xylem. Despite the industry’s pivot to alternative methods, the report found that 56 percent of data center capacity still relies on some form of evaporative cooling.

    Altman was more measured when it came to electricity usage. “What is fair, though, is the energy consumption,” he said. “We need to move towards nuclear, wind, and solar very quickly.”

    Last April, the International Energy Agency reported that data centers accounted for roughly 1.5 percent of global electricity consumption in 2024. Their power use is rising at a rate more than four times faster than overall electricity demand and is expected to more than double by 2030.

    In response, major tech companies are pursuing data center agreements tied to alternative energy sources, including nuclear power, to ease pressure on grids. Altman, who previously led Y Combinator, has personally invested in nuclear ventures such as Oklo, which is developing small-scale nuclear plants, and Helion, which aims to commercialize nuclear fusion.

    The OpenAI CEO also argued that critics overlook the energy required to develop human intelligence. “People talk about how much energy it takes to train an A.I. model relative to how much it costs a human to do one inference query,” he said. “But it also takes a lot of energy to train a human—it takes, like, 20 years of life and all the food you eat during that time before you get started.”

    A more appropriate comparison, he suggested, would measure the energy used by a fully trained A.I. model to answer a question against that used by a human doing the same task. “Probably A.I. has already caught up on an energy efficiency basis measured that way.”

    The remarks quickly sparked debate online over whether such comparisons are appropriate. “He’s saying a really big spreadsheet and a baby are morally equivalent,” wrote Matt Stoller, research director of the American Economic Liberties Project, in a post on X. Sridhar Vembu, founder and chief scientist of software firm Zoho Corporation, also took issue with the OpenAI chief’s statements. A.I. should “quietly recede into the background” instead of dominating our lives, said the billionaire on X. “I do not want to see a world where we equate a piece of technology to a human being.”

    Sam Altman Defends A.I. Energy Use With Human Comparison, Sparking Debate

    [ad_2]

    Alexandra Tremayne-Pengelly

    Source link

  • Microsoft Xbox shake-up: Phil Spencer and Sarah Bond step down as AI exec takes over – Tech Digest

    [ad_1]

    Share


    Microsoft has announced a seismic shift in its Xbox leadership, with Phil Spencer and Sarah Bond both stepping down from their roles.

    Spencer, the face of Xbox for over a decade, is retiring after a 38-year career at the company. He will be replaced as CEO of Microsoft Gaming by Asha Sharma, an executive previously known for leading major AI initiatives at the firm.

    The departure of both Spencer and Bond – the latter having only recently been promoted to Xbox President – marks the end of an era defined by massive acquisitions, including the $69 billion takeover of Activision Blizzard. While Spencer’s exit is described as a retirement, the sudden vacancy at the top comes after a turbulent year for the division, marked by declining hardware sales and significant layoffs.

    New Vision: From Consoles to AI Integration?

    The appointment of Sharma has sparked intense debate, with some fans declaring it “the end of Xbox” due to her lack of traditional gaming industry experience. Unlike Spencer, who was viewed as a “gamer-first” executive, Sharma’s background is rooted in AI and social platforms.

    Industry analysts suggest this indicates a pivot toward integrating Microsoft’s advanced AI tools into the development pipeline and competing with “insta-gratification” platforms including TikTok and Instagram.

    To ease concerns about a “soulless” future for the brand, Sharma stated she would not “flood the ecosystem with AI slop,” emphasizing that games remain human-crafted art. The promotion of Matt Booty to Chief Content Officer suggests that while Sharma will handle the platform’s technological evolution, Booty will be tasked with maintaining the creative output of Microsoft’s sprawling network of studios.

    For Xbox, this transition signifies a move away from the traditional “console wars.” With hardware sales struggling, the new leadership is expected to lean harder into the “Xbox everywhere” strategy initiated by Spencer, focusing on Game Pass, cloud gaming and potentially AI-assisted game creation.

    While Spencer’s legacy is built on the hardware and major deals of the past 40 years, Sharma’s tenure will likely be defined by how she navigates the increasingly blurred lines between gaming, artificial intelligence and social media.

    For latest tech stories go to TechDigest.tv


    Discover more from Tech Digest

    Subscribe to get the latest posts sent to your email.

    [ad_2]

    Chris Price

    Source link

  • AI home search could change how you buy a house

    [ad_1]

    NEWYou can now listen to Fox News articles!

    If you have ever searched for a home online, you know the routine. Set a price range. Click a few filters. Run the search. Start over. Again and again.

    Now imagine skipping all of that and simply saying, “I want a home near good schools with high ceilings, a short commute and a kitchen that feels modern.” Then the platform responds like it already understands what matters most to you. Well, that future tech is here.

    Homes.com, powered by Microsoft Azure OpenAI, has launched Homes AI, a fully integrated conversational home search experience. Instead of clicking through a bunch of filters, you talk or type your way to the right home. And this is more than just a new feature. It could completely change how people search for and ultimately buy houses.

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

    DOORDASH LAUNCHES ZESTY, AN AI APP FOR FINDING LOCAL FOOD

    Instead of guessing which filters to use, buyers can ask detailed questions about schools, commute times or neighborhood trends and get instant answers in one place. (David Cooper/Toronto Star via Getty Images)

    Why AI home search fixes the old filter problem

    For years, homebuyers had to search like they were programming a database. That meant checking boxes, toggling filters and running multiple searches just to piece together what they actually wanted.

    “Searching for a home previously forced prospective buyers to think like a database — checking boxes, toggling filters and manually running multiple searches to piece together what they wanted,” Livia Sponseller, head of Homes.com Product at CoStar Group, told CyberGuy. “We understand that isn’t how people best operate, so conversational search removes the silos of data so that all information, whether it’s about neighborhood average home prices, schools or in-depth details about a specific home, allows buyers to easily and simply describe what they’re looking for in their own words.”

    That line hits home. No one dreams about toggling filters. People dream about backyards, school districts and a kitchen where everyone gathers. With Homes AI, you can describe what matters to you in plain language. The system pulls from deep property data, 3D Matterport tours, neighborhood insights and proprietary school data to guide you.

    “Direct conversations with our AI guide, Homes AI, capture nuances in buyer preferences that traditional filters do not,” Sponseller added. “These nuances are ultimately what lead a buyer to choose the right home for them, making it feel less like browsing listings and more like truly experiencing the home.”

    In other words, this moves home search from mechanical to meaningful.

    Why AI home search works right now

    AI assistants are already part of everyday life. Millions of people already talk to generative AI tools every week. That comfort level matters. As Sponseller explained, “People have become very accustomed to interacting with AI assistants like ChatGPT. Hundreds of millions of people are using its generative AI tools each week, so people are beginning to tap into the power of these generative pre-trained transformers (GPT) and large language models (LLMs). The experience we built for Homes.com represents the natural next step — seamlessly integrating advanced AI into the existing site infrastructure and shifting the heavy lifting of filtering and refining search results from the homebuyer to the technology itself.”

    That shift is huge. The burden moves from you to technology. Instead of refining results manually, the AI refines them for you in real time. And it does so inside the Homes.com ecosystem. Your data stays within the platform and is not used to train external models.

    CRIMINALS ARE USING ZILLOW TO PLAN BREAK-INS. HERE’S HOW TO REMOVE YOUR HOME IN 10 MINUTES

    A Homes AI promotional screen

    Instead of guessing which filters to use, buyers can ask detailed questions about schools, commute times or neighborhood trends and get instant answers in one place. (Homes.com)

    What surprises buyers about AI home search

    The first time someone uses conversational artificial intelligence for home search, the biggest surprise may be how human it feels. Sponseller said, “I think users will be genuinely surprised by how closely it mirrors the experience of working with the most knowledgeable agent. Whether you’re looking for comparable sales, average home values in an area or the lifestyle of a specific neighborhood, buyers can ask virtually any home-related question and get an answer immediately, as opposed to referring to multiple sites for all that information.”

    Instead of hopping between tabs, you stay in one seamless experience. You can ask about commute times, neighborhood trends or interior details without starting over. She also pointed out, “Homes AI is a transparent, fast, data-rich and ad-free tool, elevating the experience for consumers to another level.” That ad-free part matters. It keeps the focus on your goals, not on who paid for placement.

    A Homes.com screen with search results for homes

    As the system learns your preferences, it refines recommendations over time, helping you narrow choices with more clarity and confidence. (Homes.com)

    What AI home search means for the future of real estate

    Sponseller believes this goes beyond one platform: “This is bigger than real estate. It’s only a matter of time until we see conversational experiences extend across industries, not just real estate portals. Why leave the heavy lifting to the searcher-consumer if ultimately this simplifies the process? Homes.com is simply the first to fully integrate this approach at scale, but I think it’s safe to say that shopping experiences across the board are entering a new era.”

    And when we look back? “We have full confidence that people will look back at the current state of portals and have a laugh at how clunky, manual, and fragmented the process felt.”

    She added, “The housing market has evolved to a point where applying filters and needing to run multiple consecutive searches to capture all the filters will feel as outdated as flipping through the Yellow Pages.” That comparison says it all.

    CLICK HERE TO DOWNLOAD THE FOX NEWS APP

    What this means for you

    If you are thinking about buying a home in the next few years, this could make the process feel a lot less stressful. Instead of endlessly scrolling and tweaking filters, you can simply explain what matters to you. The system does the sorting. It narrows the list based on your real priorities, not just basic checkboxes. That means you may tour fewer homes that miss the mark. You could spot red flags earlier. You might even feel more prepared before you ever walk through the front door. In a market where every decision counts, having clearer information upfront can make a real difference.

    Take my quiz: How safe is your online security?

    Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.

    Kurt’s key takeaways

    Buying a home is a big deal. It is emotional. It is expensive. And it can feel overwhelming fast. For years, online search tools helped, but they also made you do most of the work. You had to adjust filters, rerun searches and keep track of what mattered. AI home search changes that dynamic. You explain what you want. The technology handles the sorting. Over time, it even remembers your priorities. That could mean fewer wasted showings. Fewer surprises. More confidence before you ever step inside a house.

    If this is where home search is headed, will you trust a system that learns your preferences, or will you still want full control of every filter yourself? Let us know by writing to us at Cyberguy.com.

    Sign up for my FREE CyberGuy Report 
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter. 

    Copyright 2026 CyberGuy.com. All rights reserved.

    [ad_2]

    Source link

  • Elon Musk Loses Half of xAI’s Founding Team—Where They’ve Gone Next

    [ad_1]

    Elon Musk’s xAI has lost half of its 12-person founding team. BRENDAN SMIALOWSKI/AFP via Getty Images

    Just days after Elon Musk merged his A.I. startup, xAI, with SpaceX in preparation for a widely anticipated trillion-dollar IPO later this year, two of xAI’s founding employees—Yuhuai (Tony) Wu and Jimmy Ba—announced their resignations. That means half of xAI’s founding team has now left the company barely three years after its launch. Musk framed the staff exodus as growing pains. “As a company grows, especially as quickly as xAI, the structure must evolve just like any living organism. This unfortunately required parting ways with some people. We wish them well in future endeavors,” he wrote on X yesterday (Feb. 11).

    Wu and Ba’s exits appeared amicable. But lower-level employees have been more candid about internal tensions at the Musk-run startup. Several members of xAI’s technical staff have also left in recent weeks, according to their posts on X and LinkedIn.

    “All A.I. labs are building the exact same thing, and it’s boring,” said Vahid Kazemi, who worked on xAI’s audio models, in a post on X. “I think there’s room for more creativity. So, I’m starting something new.”

    In an interview with NBC News, Kazemi also criticized the company’s working culture, saying he regularly worked 12-hour days, including holidays and weekends.

    Launched in March 2023 with a roster of industry veterans from companies like OpenAI, Google, Microsoft, and Tesla, xAI will now operate as a wholly owned subsidiary of SpaceX. The new iteration of SpaceX faces no shortage of challenges: Grok continues to face legal scrutiny, while Musk’s leadership style remains a point of contention.

    Here are the co-founders and notable leaders who have left xAI so far—and where they are now.

    Jimmy Ba

    Jimmy Ba, who led A.I. safety at xAI, announced his exit on Feb. 10. A professor at the University of Toronto who studied under A.I. pioneer Geoffrey Hinton, Ba’s research played a key role in shaping Grok’s development.

    “So proud of what the xAI team has done and will continue to stay close as a friend of the team,” Ba wrote on X. He hasn’t announced his next move, but added that “2026 is gonna be insane and likely the busiest (and most consequential) year for the future of our species.”

    Despite Ba’s departure, Dan Hendrycks, executive director of the nonprofit Center for AI Safety, remains a safety advisor for xAI.

    Yuhuai (Tony) Wu

    Tony Wu, a former research scientist at Google and postdoctoral researcher at Stanford University, announced his departure from xAI on Feb. 9.

    Wu led xAI’s reasoning team. “It’s time for my next chapter…It is an era with full possibilities: a small team armed with AIs can move mountains and redefine what’s possible,” he wrote on X.

    Wu has not disclosed his next role. Co-founders Guodong Zhang and Manuel Kroiss remain at xAI and are helping lead the company’s reorganization.

    Mike Liberatore

    While not a founding member, Mike Liberatore joined xAI as chief financial officer in April 2025, just one month after xAI acquired X in a deal that valued the combined company at $113 billion.

    Liberatore, formerly a finance executive at Airbnb and SquareTrade, left after only three months. He now works as a business finance officer at OpenAI, according to LinkedIn.

    Musk replaced Liberatore with ex-Morgan Stanley banker Anthony Armstrong. Armstrong advised Musk on his Twitter (now X) acquisition in 2022 and later served as a senior advisor at the Office of Personnel Management during Musk’s controversial tenure at the Department of Government Efficiency (DOGE).

    Greg Yang

    Greg Yang spent nearly six years as a researcher at Microsoft before joining xAI’s founding team. He left the company in January due to health complications from Lyme disease.

    “Likely I contracted Lyme a long time ago, but until I pushed myself hard building xAI and weakened my immune system, the symptoms weren’t noticeable,” Yang wrote on X. He continues to advise xAI in an informal capacity.

    Igor Babuschkin

    Igor Babuschkin, a former research engineer at OpenAI and Google DeepMind, was a co-founder and key engineering lead at xAI. Widely known as the primary developer behind Grok, Babuschkin left in July 2025 to start his own venture capital firm, Babuschkin Ventures, focused on A.I. research and startups.

    Christian Szegedy

    Christian Szegedy spent 12 years at Google before joining xAI as a founding research scientist. He left xAI in February 2025 to become chief scientist at superintelligence cloud company Morph Labs.

    More than a year later, he departed that role to found mathematical A.I. startup Math Inc. in September, according to his LinkedIn.

    I left xAI in the last week of February and I am on good terms with the team. IMO, xAI has a bright future,” Szegedy wrote on X.

    Other senior engineers and scientists at xAI include Yasemin Yesiltepe, Zhuoyi (Zoey) Huang and Yao Fu.

    Kyle Kosic

    Kyle Kosic left OpenAI in early 2023 after two years to co-found xAI, where he served as engineering infrastructure lead. He departed about a year later, in April 2024, to return to OpenAI as a technical staff member.

    Kosic was the first co-founder to leave xAI and did not issue a public statement. It is unclear who now leads xAI’s engineering infrastructure, though another co-founder, Ross Nordeen, remains the company’s technical program manager after previously holding the same role at Tesla.

    Elon Musk Loses Half of xAI’s Founding Team—Where They’ve Gone Next

    [ad_2]

    Rachel Curry

    Source link

  • Google and Microsoft-backed Terradot acquires carbon removal competitor | TechCrunch

    [ad_1]

    Carbon removal startup Terradot is acquiring competitor Eion, the two companies announced today. The sale was driven largely by big investors like sovereign wealth funds, which want to work with companies that can handle large contracts. Eion was simply too small, Eion CEO Anastasia Pavlovic Hans told The Wall Street Journal.

    Both companies spread pulverized rocks on farm fields to absorb carbon dioxide from the atmosphere. Known as enhanced rock weathering (EWR), it speeds up a natural process and has the potential to be a low-cost way to remove carbon, but it requires large and distributed operations. The spread between what EWR companies would like to charge and what buyers would like to pay remains wide, according to a survey by CDR.fyi. 

    California-based Terradot’s operations are centered on Brazil, where the company works with basalt as its mineral of choice, while Eion works in the U.S. and uses olivine. Terradot’s investor list includes Gigascale Capital, Google, Kleiner Perkins, and Microsoft, while Eion’s investors include AgFunder, Mercator Partners, and Overture.

    [ad_2]

    Tim De Chant

    Source link

  • Deepfake fraud on ‘industrial scale’ as barriers to entry disappear – Tech Digest

    [ad_1]

    Share

    Image: Ofcom

    Deepfake fraud has officially reached an “industrial scale,” according to chilling new analysis by AI experts.

    The report, highlighted by The Guardian, warns that tools used to create hyper-realistic, tailored scams are no longer the playground of elite hackers. Instead, they have become inexpensive, widely available, and simple enough for “pretty much anybody” to deploy against the public.

    Researchers at the AI Incident Database have catalogued a surge in “impersonation for profit.” Recent examples include sophisticated heists where deepfake videos of politicians were used to hawk fake investment schemes and AI-generated “doctors” to promote medical scams.

    The financial toll is staggering: UK consumers alone are estimated to have lost £9.4bn to fraud in the nine months leading up to November 2025.

    MIT researcher Simon Mylius claims that the barriers to entry for producing deepfakes have effectively disappeared. “Capabilities have suddenly reached that level where fake content can be produced by anyone,” he warned. Meanwhile, Harvard experts suggest that AI models are evolving far faster than security experts anticipated, making detection a constant game of cat-and-mouse.

    One recent high-profile incident involved the CEO of AI security firm Evoke, who nearly hired a “talented engineer” following a video interview. It was only after noticing the candidate’s “soft edges” and a glitchy, fake background that a technical analysis confirmed the individual was a deepfake.

    While the motive remains unclear, whether it was a play for a salary or a grab for trade secrets, it serves as a warning that no business is too small to be targeted.

    In response to this growing national security threat, the UK Government recently announced a “world-first” deepfake detection initiative. As detailed on Tech Digest, the Home Office is partnering with Microsoft, academics, and technical experts to build a standardized evaluation framework.

    This collaboration aims to establish consistent industry standards for identifying manipulated audio and video, bridging the gap between theoretical AI models and the real-world tools needed by law enforcement.

    With an estimated eight million deepfakes shared in 2025 – a massive leap from just 500,000 two years ago – the new framework with Microsoft is designed to identify gaps in current detection tools before the next wave of AI-driven fraud hits the mainstream.

    UK government announces deepfake detection initiative with Microsoft


    For latest tech stories go to TechDigest.tv


    Discover more from Tech Digest

    Subscribe to get the latest posts sent to your email.

    [ad_2]

    Chris Price

    Source link

  • Microsoft crosses privacy line few expected

    [ad_1]

    NEWYou can now listen to Fox News articles!

    For years, we’ve been told that encryption is the gold standard for digital privacy. If data is encrypted, it is supposed to be locked away from hackers, companies and governments alike. That assumption just took a hit. 

    In a federal investigation tied to alleged COVID-19 unemployment fraud in Guam, a U.S. territory where federal law applies, Microsoft confirmed it provided law enforcement with BitLocker recovery keys. Those keys allowed investigators to unlock encrypted data on multiple laptops.

    This is one of the clearest public examples to date of Microsoft providing BitLocker recovery keys to authorities as part of a criminal investigation. While the warrant itself may have been lawful, the implications stretch far beyond one investigation. For everyday Americans, this is a clear signal that “encrypted” does not always mean “inaccessible.”

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

    HACKERS ABUSE GOOGLE CLOUD TO SEND TRUSTED PHISHING EMAILS

    In the Guam investigation, Microsoft provided BitLocker recovery keys that allowed law enforcement to unlock encrypted laptops. (David Paul Morris/Bloomberg via Getty Images)

    What happened in the Guam BitLocker case?

    Federal investigators believed three Windows laptops held evidence tied to an alleged scheme involving pandemic unemployment funds. The devices were protected with BitLocker, Microsoft’s built-in disk encryption tool enabled by default on many modern Windows PCs. BitLocker works by scrambling all data on a hard drive so it cannot be read without a recovery key. 

    Users can store that key themselves, but Microsoft also encourages backing it up to a Microsoft account for convenience. In this case, that convenience mattered. When served with a valid search warrant, Microsoft provided the recovery keys to investigators. That allowed full access to the data stored on the devices. Microsoft says it receives roughly 20 such requests per year and can only comply when users have chosen to store their keys in the cloud.

    We reached out to Microsoft for comment, but did not hear back before our deadline.

    How Microsoft was able to unlock encrypted data

    According to John Ackerly, CEO and co-founder of Virtru and a former White House technology advisor, the problem is not encryption itself. The real issue is who controls the keys. He begins by explaining how convenience can quietly shift control. “Microsoft commonly recommends that users back up BitLocker recovery keys to a Microsoft account for convenience. That choice means Microsoft may retain the technical ability to unlock a customer’s device. When a third party holds both encrypted data and the keys required to decrypt it, control is no longer exclusive.”

    Once a provider has the ability to unlock data, that power rarely stays theoretical. “When systems are built so that providers can be compelled to unlock customer data, lawful access becomes a standing feature. It is important to remember that encryption does not distinguish between authorized and unauthorized access. Any system designed to be unlocked on demand will eventually be unlocked by unintended parties.”

    Ackerly then points out that this outcome is not inevitable. Other companies have made different architectural choices. “Other large technology companies have demonstrated that a different approach is possible. Apple has designed systems that limit its own ability to access customer data, even when doing so would ease compliance with government demands. Google offers client-side encryption models that allow users to retain exclusive control of encryption keys. These companies still comply with the law, but when they do not hold the keys, they cannot unlock the data. That is not obstruction. It is a design choice.”

    Finally, he argues that Microsoft still has room to change course. “Microsoft has an opportunity to address this by making customer-controlled keys the default and by designing recovery mechanisms that do not place decryption authority in Microsoft’s hands. True personal data sovereignty requires systems that make compelled access technically impossible, not merely contractually discouraged.”

    In short, Microsoft could comply because it had the technical ability to do so. That single design decision is what turned encrypted data into accessible data.

    “With BitLocker, customers can choose to store their encryption keys locally, in a location inaccessible to Microsoft, or in Microsoft’s consumer cloud services,” a Microsoft spokesperson told CyberGuy in a statement. “We recognize that some customers prefer Microsoft’s cloud storage, so we can help recover their encryption key if needed. While key recovery offers convenience, it also carries a risk of unwanted access, so Microsoft believes customers are in the best position to decide whether to use key escrow and how to manage their keys.”

    WHY CLICKING THE WRONG COPILOT LINK COULD PUT YOUR DATA AT RISK

    New CISA warning: Thanksgiving clickjacking threat in popular browsers

    When companies hold encryption keys, lawful requests can unlock far more data than most people expect. (Kurt “CyberGuy” Knutsson)

    Why this matters for data privacy

    This case has reignited a long-running debate over lawful access versus systemic risk. Ackerly warns that centralized control has a long and troubling history. “We have seen the consequences of this design pattern for more than two decades. From the Equifax breach, which exposed the financial identities of nearly half the U.S. population, to repeated leaks of sensitive communications and health data during the COVID era, the pattern is consistent: centralized systems that retain control over customer data become systemic points of failure. These incidents are not anomalies. They reflect a persistent architectural flaw.”

    When companies hold the keys, they become targets. That includes hackers, foreign governments and legal demands from agencies like the FBI. Once a capability exists, it rarely goes unused.

    How other tech giants handle encryption differently

    Apple has designed systems, such as Advanced Data Protection, where it cannot access certain encrypted user data even when served with government requests. Google offers client-side encryption for some services, primarily in enterprise environments, where encryption keys remain under the customer’s control. These companies still comply with the law, but in those cases, they do not possess the technical means to unlock the data. That distinction matters. As encryption experts often note, you cannot hand over what you do not have.

    What we can do to protect our privacy

    The good news is that personal privacy is not gone. The bad news is that it now requires intention. Small choices matter more than most people realize. Ackerly says the starting point is understanding control. “The main takeaway for everyday users is simple: if you don’t control your encryption keys, you don’t fully control your data.”

    That control begins with knowing where your keys are stored. “The first step is understanding where your encryption keys live. If they’re stored in the cloud with your provider, your data can be accessed without your knowledge.”

    Once keys live outside your control, access becomes possible without your consent. That is why the way data is encrypted matters just as much as whether it is encrypted. “Consumers should look for tools and services that encrypt data before it reaches the cloud — that way, it is impossible for your provider to hand over your data. They don’t have the keys.” Defaults are another hidden risk. Many people never change them. “Users should also look to avoid default settings designed for convenience. Default settings matter, and when convenience is the default, most individuals will unknowingly trade control for ease of use.”

    When encryption is designed so that even the provider cannot access the data, the balance shifts back to the individual. “When data is encrypted in a way that even the provider can’t access, it stays private — even if a third party comes asking. By holding your own encryption keys, you’re eliminating the possibility of the provider sharing your data.” Ackerly says the lesson is simple but often ignored. “The lesson is straightforward: you cannot outsource responsibility for your sensitive data and assume that third parties will always act in your best interest. Encryption only fulfills its purpose when the data owner is the sole party capable of unlocking it.” Privacy still exists. It just no longer comes by default.

    700CREDIT DATA BREACH EXPOSES SSNS OF 5.8M CONSUMERS

    Person holds a phone

    Reviewing default security and backup settings can help you keep control of your private data. (Kurt “CyberGuy” Knutsson)

    Practical steps you can take today

    You do not need to be a security expert to protect your data. A few practical checks can go a long way.

    1) Start by checking where your encryption keys live

    Many people do not realize that their devices quietly back up recovery keys to the cloud. On a Windows PC, sign in to your Microsoft account and look under device security or recovery key settings. Seeing a BitLocker recovery key listed online means it is stored with Microsoft. 

    For other encrypted services, such as Apple iCloud backups or Google Drive, open your account security dashboard and review encryption or recovery options. Focus on settings tied to recovery keys, backup encryption, or account-based access. When those keys are linked to an online account, your provider may be able to access them. The goal is simple. Know whether your keys live with you or with a company.

    2) Avoid cloud-based key backups unless you truly need them

    Cloud backups are designed for convenience, not privacy. If possible, store recovery keys offline. That can mean saving them to a USB drive, printing them and storing them in a safe place, or using encrypted hardware you control. The exact method matters less than who has access. If a company does not have your keys, it cannot be forced to turn them over.

    3) Choose services that encrypt data before it reaches the cloud

    Not all encryption works the same way, even if companies use similar language. Look for services that advertise end-to-end or client-side encryption, such as Signal for messages, or Apple’s Advanced Data Protection option for iCloud backups. These services encrypt your data on your device before it is uploaded, which means the provider cannot read it or unlock it later. Here is a simple rule of thumb. If a service can reset your password and restore all your data without your involvement, it likely holds the encryption keys. That also means it could be forced to hand over access. When encryption happens on your device first, providers cannot unlock your data because they never had the keys to begin with. That design choice blocks third-party access by default.

    4) Review default security settings on every new device

    Default settings usually favor convenience. That can mean easier recovery, faster syncing and weaker privacy. Take five minutes after setup and lock down the basics.

    iPhone: tighten iCloud and account recovery

    Turn on Advanced Data Protection for iCloud (strongest iCloud protection)

    • Open Settings
    • Tap your name
    • Tap iCloud
    • Scroll down and tap Advanced Data Protection
    • Tap Turn On Advanced Data Protection
    • Follow the prompts to set up Account Recovery options, like a Recovery Contact or Recovery Key

    Review iCloud Backup

    • Open Settings
    • Tap your name
    • Tap iCloud
    • Tap iCloud Backup
    • Decide if you want it on or off, based on your privacy comfort level

    Strengthen your Apple ID security

    • Open Settings
    • Tap your name
    • Tap Sign-In & Security
    • Make sure Two-Factor Authentication (2FA) is turned on and review trusted phone numbers and devices
    • Review trusted phone numbers and devices

    Android: lock your Google account and backups

    Review and control device backup

    Settings may vary depending on your Android phone’s manufacturer.

    • Open Settings
    • Tap Google
    • Tap Backup (or All services then Backup)
    • Tap Manage backup
    • Choose what backs up and confirm which Google account stores it

    NEW ANDROID MALWARE CAN EMPTY YOUR BANK ACCOUNT IN SECONDS

    Strengthen your screen lock, since it protects the device itself

    Settings may vary depending on your Android phone’s manufacturer.

    • Open Settings
    • Tap Security or Security & privacy
    • Set a strong PIN or password
    • Turn on biometrics if you want, but keep the PIN strong either way

    Secure your Google account

    Settings may vary depending on your Android phone’s manufacturer.

    • Open Settings
    • Tap Google
    • Tap Manage your Google Account
    • Go to Security
    • Turn on 2-Step Verification and review recent security activity

    Mac: enable FileVault and review iCloud settings

    Turn on FileVault disk encryption

    • Click the Apple menu
    • Select System Settings
    • Click Privacy & Security
    • Scroll down and click FileVault
    • Click Turn On
    • Save your recovery method securely

    Review iCloud syncing

    • Open System Settings
    • Click your name
    • Click iCloud
    • Review what apps and data types sync
    • Turn off anything you do not want stored in the cloud

    Windows PC: check BitLocker and where the recovery key is stored

    Confirm BitLocker status and settings

    • Open Settings
    • Go to Privacy & security
    • Tap Device encryption or BitLocker (wording varies by device)

    Check whether your BitLocker recovery key is stored in your Microsoft account

    • Go to your Microsoft account page
    • Open Devices
    • Select your PC
    • Look for Manage recovery keys or a BitLocker recovery key entry
    • If you see a key listed online, it means the key is stored with Microsoft. That is why Microsoft was able to provide keys in the Guam case.

    If your account can recover everything with a few clicks, a third party might be able to recover it too. Convenience can be helpful, but it can also widen access.

    5) Treat convenience features as privacy tradeoffs

    Every shortcut comes with a cost. Before enabling a feature that promises easy recovery or quick access, pause and ask one question. If I lose control of this account, who else gains access? If the answer includes a company or third party, decide whether the convenience is worth it. 

    These steps are not extreme or technical. They are everyday habits. In a world where lawful access can quietly become routine access, small choices now can protect your privacy later.

    Strengthen protection beyond encryption

    Encryption controls who can access your data, but it does not stop every real-world threat. Once data is exposed, different protections matter.

    Strong antivirus software adds device-level protection

    Strong antivirus software helps block malware, spyware and credential-stealing attacks that can bypass privacy settings altogether. Even encrypted devices are vulnerable if malicious software gains control before encryption comes into play.

    The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

    Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com

    An identity theft protection service helps when exposure turns into fraud

    If personal data is accessed, sold, or misused, identity protection services can monitor for suspicious activity, alert you early and help lock down accounts before damage spreads. Identity Theft companies can monitor personal information like your Social Security Number (SSN), phone number and email address, and alert you if it is being sold on the dark web or being used to open an account. They can also assist you in freezing your bank and credit card accounts to prevent further unauthorized use by criminals.

    See my tips and best picks on how to protect yourself from identity theft at Cyberguy.com.

    Kurt’s key takeaways

    Microsoft’s decision to comply with the BitLocker warrant may have been legal. That doesn’t make it harmless. This case exposes a hard truth about modern encryption. Privacy depends less on the math and more on how systems are built. When companies hold the keys, the risk falls on the rest of us.

    Do you trust tech companies to protect your encrypted data, or do you think that responsibility should fall entirely on you? Let us know by writing to us at Cyberguy.com.

    CLICK HERE TO DOWNLOAD THE FOX NEWS APP

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

    Copyright 2026 CyberGuy.com.  All rights reserved.

    [ad_2]

    Source link

  • UK government announces deepfake detection initiative with Microsoft – Tech Digest

    [ad_1]

    Share


    The UK government has announced a collaboration with Microsoft and top academics to build a robust defence against the skyrocketing threat of deepfakes.

    This new initiative centres on developing a standardized evaluation framework designed to identify critical gaps in deepfake detection.

    By testing current technologies against real-world threats – including fraud, impersonation, and non-consensual sexual abuse – the government aims to establish clear benchmarks for the tech industry to meet.

    The urgency of the project is underscored by staggering growth in synthetic media. Official figures reveal that an estimated eight million deepfakes were shared in 2025 alone, a massive jump from just 500,000 two years prior.

    Criminals are increasingly using these AI-generated images and audio to defraud the public, often targeting vulnerable individuals with sophisticated scams.

    Beyond individual fraud, the initiative seeks to protect national security and public trust. Last week, the Home Office funded a “Deepfake Detection Challenge” hosted by Microsoft, where over 350 experts from INTERPOL and the “Five Eyes” intelligence community were tasked with identifying manipulated media in high-pressure scenarios involving election security and organized crime.

    “Deepfakes are being weaponised by criminals to defraud the public, exploit women and girls, and undermine trust in what we see and hear,” says Tech Secretary Liz Kendall. “The UK is leading the global fight against deepfake abuse, and those who seek to deceive and harm others will have nowhere to hide.”

    Consumer advocates have welcomed the move but are calling for faster regulatory enforcement to protect people from financial ruin. Adds Rocio Concha, Which? Director of Policy and Advocacy:

    “The UK is in the grips of a scam epidemic – social media platforms are littered with convincing deepfakes designed to con people into parting with their hard-earned cash.

    “Under the Online Safety Act, platforms have duties to detect and remove fraudulent content, including deepfake scams, and the government’s plan to develop a standard for identifying deepfakes could help them do this.

    “For this new initiative to work, Ofcom should not hesitate to take action – including robust fines – against companies who aren’t playing their part. Many deepfakes feature in paid-for scam ads.”

    The framework is part of a broader legislative push that includes criminalizing the creation of non-consensual intimate deepfakes and banning the “nudification” tools that facilitate such abuse.

    https://www.gov.uk/government/news/government-leads-global-fight-against-deepfake-threats


    For latest tech stories go to TechDigest.tv


    Discover more from Tech Digest

    Subscribe to get the latest posts sent to your email.

    [ad_2]

    Chris Price

    Source link

  • Here’s Why You Should Care About the Next-Gen Xbox Launch

    [ad_1]

    Xbox is in freefall, but Microsoft’s gaming platform could redeem itself as soon as next year. During AMD’s Q4 2025 earnings call on Tuesday, the chipmaker’s CEO, Lisa Su, said that “development of Microsoft’s next-gen Xbox featuring an AMD semi-custom SoC is progressing well to support a launch in 2027.”

    Up until now, Xbox President Sarah Bond had only been ready to tell us the next Xbox console was indeed in development and would deliver a “premium experience.” The console likely won’t conform to any one game launcher. Instead, it could allow players to access platforms like Steam, Epic Games Store, and GOG as well as Xbox. Essentially, it will be a PC with a less-upgradable “semi-custom” AMD SoC (system on a chip). 

    Why is AMD talking about this console even before Xbox can start its next-gen media blitz? Likely because gaming hardware will be in the doldrums all throughout 2026. Su said the company expects a “decline by a significant double-digit percentage as we enter the seventh year of what has been a very strong console cycle.” 

    2026 may be an odd year for gaming hardware

    The Steam Machine should arrive early this year, according to AMD. © Valve

    The year will start off with the launch of Valve’s long-awaited Steam Machine. Su said the device will arrive early this year, confirming what Valve had already indicated to Gizmodo and others about release timing. 

    Valve’s PC/console hybrid will sport another “semi-custom” AMD chip, though in this case it will be based on older GPU microarchitecture, namely RDNA 3.5. Valve has claimed it will be powerful enough for 4K gaming with the help of upscaling, though judging purely by specs, it may not be as powerful as graphics-obsessed gamers may demand. Later this year, we may see new handheld gaming PCs sporting an Intel Panther Lake chip, but that may be it for new gaming hardware. 

    The difficulty will be getting gamers excited for new gaming hardware, especially if it costs anything more than the current generation of consoles. More than five years after launch, the PlayStation 5 and Xbox Series X cost more, not less, due to last year’s tariffs. A gaming-ready PC is now enormously expensive due to the ongoing memory shortage. We still don’t know the price of the Steam Machine, but based on Valve’s statements to this point, it likely won’t be cheap. 

    The next-gen Xbox may be even costlier. Numerous leaks from reliable sources like Moore’s Law is Dead on YouTube suggest that Microsoft’s PC-like console will use AMD’s newfangled RDNA 5 microarchitecture. The specs we’ve seen from leaks—including GPU core counts—support that this could indeed be a powerful machine for playing games at 4K with ray tracing enabled. 

    The industry may be even worse off without Xbox

    Microsoft Corp. Xbox Event Ahead Of 2019 E3 Electronic Entertainment Expo
    Xbox President Sarah Bond has made it seem like the next-gen console will be a premium device. Graphics alone may not be enough to move new hardware. © Patrick T. Fallon / Bloomberg via Getty Images

    Specs are one thing, but next-gen hardware success will depend on whether Microsoft can give gamers a real reason to care. If consoles become a “premium” device built only for the most-dedicated gamers with deep pockets, it will price out many more potential players. Judging by Xbox’s reported slowdown in hardware and services revenue, just because they can’t afford a new console doesn’t mean players will run out to buy an increasingly expensive Game Pass subscription. Xbox needs to offer gamers a whole new way to play, something that re-energizes Xbox as a lifestyle brand, rather than just another manufacturer of gaming hardware.

    The gaming industry needs a win. A total of 33% of U.S.-based game developers who responded to the annual State of the Games Industry Report said they were laid off in the last two years. Many of those were due to Microsoft’s own cuts. Among its many in-house and partnered studios, Microsoft is responsible for major developers from Blizzard to Bethesda down to former indie darlings like Double Fine and Obsidian. Xbox’s slow demise will make a bad time for the gaming industry worse.

    [ad_2]

    Kyle Barr

    Source link

  • AMD suggests the next-gen Xbox will arrive in 2027

    [ad_1]

    Microsoft could launch the next-generation Xbox console sometime in 2027, AMD CEO Lisa Su has revealed during the semiconductor company’s latest earnings call. Valve is on track to start shipping its AMD-powered Steam Machine early this year, she said, while Microsoft’s development of an Xbox with a semi-custom SOC from AMD is “progressing well to support a launch in 2027.” While it doesn’t necessarily mean Microsoft is releasing a new Xbox console next year, that seems to be the company’s current goal.

    Xbox president Sarah Bond announced Microsoft’s multi-year partnership with AMD for its consoles in mid-2025. Based on Bond’s statement back then, Microsoft is embracing the use of artificial intelligence and machine learning in future Xbox games. She also said that the companies are going to “co-engineer silicon” across devices, “in your living room and in your hands,” implying the development of future handheld consoles.

    Leaked documents from the FTC vs. Microsoft court battle revealed in the past that Microsoft was planning to make the next Xbox a “hybrid game platform,” which combines local hardware and cloud computing. The documents also said that Microsoft was planning to release the next Xbox in 2028. Whether the company has chosen to launch the new Xbox early remains to be seen, but it is possible when the Xbox X and S were released in 2020, and they haven’t sold as well as the Xbox One.

    [ad_2]

    Mariella Moon

    Source link

  • Apple revenues rocket to $144bn as iPhone 17 dominates China – Tech Digest

    [ad_1]

    Share

    Apple has smashed Wall Street expectations with a massive 16% revenue surge, propelled by record-breaking iPhone sales.

    The tech titan reported a staggering $143.8 billion in revenue for the first quarter, far exceeding the $138.4 billion forecast by analysts. This performance marks the company’s strongest growth since 2021, driven by what CEO Tim Cook described as “unprecedented demand” across every geographic segment.

    The star of the quarter was the new iPhone 17 lineup. Revenue from the smartphone segment shot up 23% compared to the same period last year, reinvigorating demand in key markets.

    In particular, Apple saw a dramatic turnaround in China, where sales climbed by 38%. Cook noted that it was the “best iPhone quarter in history in greater China,” with the company’s active device install base reaching an all-time high in the region.

    While the iPhone flourished, other divisions saw a slight dip. Sales of Mac computers fell by just over 7%, and the wearables category – including the Apple Watch and AirPods – slipped by 3%.

    Despite these minor declines, the overall hardware success has placed Apple in “supply chase mode.” Cook informed analysts that the company is currently constrained as it struggles to keep up with the overwhelming consumer appetite for the iPhone 17 and 17 Pro.

    Investors remain keenly focused on Apple’s long-term artificial intelligence strategy. While competitors such as Microsoft have seen their stocks punished for heavy AI spending without immediate payoffs, Apple’s hardware-first approach appears to be weathering the storm.

    The company recently confirmed a partnership with Google to power a “more personalized Siri” using Gemini AI models, a move intended to close the gap with rivals while maintaining Apple’s signature user experience.

    Analysts suggest that Apple’s financial discipline is currently its greatest strength. While Microsoft spent over $37 billion on AI infrastructure last quarter, Apple’s planned $16 billion in capital expenditure remains conservative.

    This focus on “execution and pricing discipline” over “incremental AI features” has helped the company hit a historic $4 trillion market value, as the broader tech industry faces questions about an AI bubble.


    For latest tech stories go to TechDigest.tv


    Discover more from Tech Digest

    Subscribe to get the latest posts sent to your email.

    [ad_2]

    Chris Price

    Source link

  • Microsoft releases second emergency Windows 11 update to fix Outlook crashes

    [ad_1]

    Microsoft issued another out-of-band update to fix a bug that caused Outlook to crash for Windows 11 users. This second emergency patch addresses issues seen with Outlook and files stored in the cloud following Microsoft’s January 2026 Windows security update.

    According to Microsoft, this update fixes a bug where some apps that “open or save files stored in cloud-backed locations” became unresponsive or displayed error messages. Some users also experienced Outlook crashing or not opening when PST files are stored in cloud-based options like OneDrive.

    This is the second time this year that Microsoft had to issue a last-minute fix for bugs related to its January security update. Last week, some Windows 11 devices couldn’t shut down or hibernate, while other devices running Windows 10 or 11 couldn’t log in through remote connections. For more context, Microsoft only issues out-of-band updates when there’s a serious issue that can’t wait until its regular update cycle. Fortunately, the latest out-of-band update is cumulative, so you only need to download and install this one to fix the issues seen with the January update.

    [ad_2]

    Jackson Chen

    Source link

  • Why clicking the wrong Copilot link could put your data at risk

    [ad_1]

    NEWYou can now listen to Fox News articles!

    AI assistants are supposed to make life easier. Tools like Microsoft Copilot can help you write emails, summarize documents and answer questions using information from your own account. But security researchers are now warning that a single bad link could quietly turn that convenience into a privacy risk. 

    A newly discovered attack method shows how attackers could hijack a Copilot session and siphon data without you seeing anything suspicious on screen.

    Sign up for my FREE CyberGuy Report 
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.     

    Because Copilot stays tied to your logged-in Microsoft account, attackers can quietly use your active session to access data in the background. (Photo by Donato Fasano/Getty Images)

    What researchers discovered about Copilot links

    ILLINOIS DHS DATA BREACH EXPOSES 700K RESIDENTS’ RECORDS

    Security researchers at Varonis uncovered a technique they call “Reprompt.” In simple terms, it shows how attackers could sneak instructions into a normal-looking Copilot link and make the AI do things on their behalf.

    Here’s the part that matters to you: Microsoft Copilot is connected to your Microsoft account. Depending on how you use it, Copilot can see your past conversations, things you’ve asked it and certain personal data tied to your account. Normally, Copilot has guardrails to prevent sensitive information from leaking. Reprompt showed a way around some of those protections.

    The attack starts with just one click. If you open a specially crafted Copilot link sent through email or a message, Copilot can automatically process hidden instructions embedded inside the link. You don’t need to install anything, and there are no pop-ups or warnings. After that single click, Copilot can keep responding to instructions in the background using your already logged-in session. Even closing the Copilot tab does not immediately stop the attack, because the session stays active for a while.

    How Reprompt works

    Varonis found that Copilot accepts questions through a parameter inside its web address. Attackers can hide instructions inside that address and make Copilot execute them as soon as the page loads.

    That alone would not be enough, because Copilot tries to block data leaks. The researchers combined several tricks to get around this. First, they injected instructions directly into Copilot through the link itself. This allowed Copilot to read information it normally shouldn’t share.

    Second, they used a “try twice” trick. Copilot applies stricter checks the first time it answers a request. By telling Copilot to repeat the action and double-check itself, the researchers found that those protections could fail on the second attempt.

    Third, they showed that Copilot could keep receiving follow-up instructions from a remote server controlled by the attacker. Each response from Copilot helped generate the next request, allowing data to be quietly sent out piece by piece. The result is an invisible back-and-forth where Copilot keeps working for the attacker using your session. From your perspective, nothing looks wrong.

    MICROSOFT SOUNDS ALARM AS HACKERS TURN TEAMS PLATFORM INTO ‘REAL-WORLD DANGERS’ FOR USERS

    Varonis responsibly reported the issue to Microsoft, and the company fixed it in the January 2026 Patch Tuesday updates. There is no evidence that Reprompt was used in real-world attacks before the fix. Still, this research is important because it shows a bigger problem. AI assistants have access, memory and the ability to act on your behalf. That combination makes them powerful, but also risky if protections fail. As researchers put it, the danger increases when autonomy and access come together.

    It’s also worth noting that this issue only affected Copilot Personal. Microsoft 365 Copilot, which businesses use, has extra security layers like auditing, data loss prevention and admin controls.

    “We appreciate Varonis Threat Labs for responsibly reporting this issue,” a Microsoft spokesperson told CyberGuy. “We have rolled out protections that address the scenario described and are implementing additional measures to strengthen safeguards against similar techniques as part of our defense-in-depth approach.”

    8 steps you can take to stay safe from AI attacks

    Even with the fix in place, these habits will help protect your data as AI tools become more common.

    1) Install Windows and browser updates immediately

    Security fixes only protect you if they’re installed. Attacks like Reprompt rely on flaws that already have patches available. Turn on automatic updates for Windows, Edge and other browsers so you don’t delay critical fixes. Waiting weeks or months leaves a window where attackers can still exploit known weaknesses.

    2) Treat Copilot and AI links like login links

    If you wouldn’t click a random password reset link, don’t click unexpected Copilot links either. Even links that look official can be weaponized. If someone sends you a Copilot link, pause and ask yourself whether you were expecting it. When in doubt, open Copilot manually instead.

    Corporate signage of Microsoft Corp at Microsoft India Development Center

    Even after Microsoft fixed the flaw, the research highlights why limiting data exposure and monitoring account activity still matters as AI tools evolve. (Photographer: Prakash Singh/Bloomberg via Getty Images)

    3) Use a password manager to protect your accounts

    A password manager creates and stores strong, unique passwords for every service you use. If attackers manage to access session data or steal credentials indirectly, unique passwords prevent one breach from unlocking your entire digital life. Many password managers also warn you if a site looks suspicious or fake.

    Next, see if your email has been exposed in past breaches. Our No. 1 password manager pick includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords, and secure those accounts with new, unique credentials.

    Check out the best expert-reviewed password managers of 2026 at Cyberguy.com.

    4) Enable two-factor authentication on your Microsoft account

    Two-factor authentication (2FA) adds a second layer of protection, even if attackers gain partial access to your session. It forces an extra verification step, usually through an app or device, making it much harder for someone else to act as you inside Copilot or other Microsoft services.

    5) Reduce how much personal data exists online

    Data broker sites collect and resell personal details like your email address, phone number, home address and even work history. If an AI tool or account session is abused, that publicly available data can make the damage worse. Using a data-removal service helps delete this information from broker databases, shrinking your digital footprint and limiting what attackers can piece together.

    Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.

    Get a free scan to find out if your personal information is already out on the web: Cyberguy.com.

    6) Run strong antivirus software on your device

    Modern antivirus tools do more than scan files. They help detect phishing links, malicious scripts and suspicious behavior tied to browser activity. Since Reprompt-style attacks start with a single click, having real-time protection can stop you before damage happens, especially when attacks look legitimate.

    The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

    Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.

    7) Regularly review your account activity and settings

    Check your Microsoft account activity for unfamiliar logins, locations, or actions. Review what services Copilot can access, and revoke anything you no longer need. These checks don’t take long, but they can reveal issues early, before attackers have time to do serious damage. Here’s how:

    Go to account.microsoft.com, and sign in to your Microsoft account.

    Select Security, then choose View my sign-in activity and verify your identity if prompted.

    Review each login for unfamiliar locations, devices or failed sign-in attempts.

    If you see anything suspicious, select This wasn’t me or Secure your account, then change your password immediately and enable two-step verification.

    Visit account.microsoft.com/devices, and remove any devices you no longer recognize or use.

    In Microsoft Edge, open Settings > Appearance > Copilot and Sidebar > Copilot, and turn off Allow Microsoft to access page content if you want to limit Copilot’s access.

    Review apps connected to your Microsoft account and revoke permissions you no longer need.

    close up of hands of business person working on computer, man using internet and social media

    A single Copilot link can carry hidden instructions that run the moment you click, without any warning or pop-ups.  (iStock)

    8) Be specific about what you ask AI tools to do

    Avoid giving AI assistants broad authority like “handle whatever is needed.” Wide permissions make it easier for hidden instructions to influence outcomes. Keep requests narrow and task-focused. The less freedom an AI has, the harder it is for malicious prompts to steer it silently.

    Kurt’s key takeaway

    Reprompt doesn’t mean Copilot is unsafe to use, but it does show how much trust these tools require. When an AI assistant can think, remember and act for you, even a single bad click can matter. Keeping your system updated and being selective about what you click remain just as important in the age of AI as it was before.

    Do you feel comfortable letting AI assistants access your personal data, or does this make you more cautious? Let us know by writing to us at Cyberguy.com.

    CLICK HERE TO DOWNLOAD THE FOX NEWS APP

    Sign up for my FREE CyberGuy Report 
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter. 

    Copyright 2026 CyberGuy.com. All rights reserved. 

    [ad_2]

    Source link

  • Microsoft says outage affecting Microsoft 365, Outlook, other services has been resolved

    [ad_1]

    Thousands of Microsoft customers reported difficulty Thursday accessing the technology company’s suite of Microsoft 365 services, including email platform Outlook, Teams and other tools. But the company said on social media early Friday that, “We’ve confirmed that impact has been resolved.”

    Users started reporting problems accessing Microsoft applications on Thursday afternoon, according to Downdetector, a site tracking website outages. Complaints spiked at around 3 p.m. ET, when 16,000 people said they were having trouble accessing Microsoft 365. 

    Microsoft acknowledged the problem, stating on its website that “users may be seeing degraded service functionality or be unable to access multiple Microsoft 365 services.”

    At 4:14 p.m. ET, Microsoft posted on X that it had “restored the affected infrastructure to a healthy state.” In a later post, however, the company said it was still “rebalancing traffic across all affected infrastructure to ensure the environment enters into a balanced state.”

    As of late Thursday afternoon, some social media users were still complaining that they were unable to access Microsoft 365 tools. “We cannot even email. This is not fixed,” one person said on X.

    Other users called on Microsoft to compensate customers for the outage, which they blamed for hampering their work.

    In a statement Thursday night, a Microsoft spokesperson told CBS News: “We are working to address a service functionality issue. A subset of customers may be intermittently impacted. For more information, please see updates via Microsoft 365 Status on X.”  

    Verizon last week offered affected customers a $20 credit after a major service outage limited subscribers’ ability to use their wireless devices.

    In 2024, a botched update of CrowdStrike antivirus software caused global outages for Microsoft 365 users. The disruptions led to thousands of flight delays and cancellations, while hospitals, banks and other businesses around the world were also affected. 

    [ad_2]

    Source link

  • Microsoft issues emergency fix after a security update left some Windows 11 devices unable to shut down

    [ad_1]

    If you weren’t able to shut down your Windows 11 device recently, Microsoft has rolled out an emergency fix addressing a couple of critical bugs that popped up with its latest January 2026 Windows security update. The latest “out-of-band” update repairs an issue for some Windows 11 devices that would only restart when users tried to shut down or hibernate. The same update restores the ability for Windows 10 and Windows 11 users to log into their devices via remote connection apps.

    Microsoft said the inability to shut down or hibernate affected Windows 11 devices using Secure Launch, a security feature that protects a computer from firmware-level attacks during startup. As for the remote connection issue, Microsoft explained in its Known issues page that credential prompt failures were responsible when users tried to log in remotely to affected Windows 10 and 11 devices.

    According to WindowsLatest, some lingering issues with the January 2026 Windows security update are still affecting users, like seeing blank screens or Outlook Classic crashing. Back in October, Microsoft had to issue another emergency fix for Windows 11 related to the Windows Recovery Environment. For those still hesitant to upgrade to Windows 11, Microsoft is allowing you to squeeze some more life out of Windows 10 by enrolling in Extended Security Updates.

    [ad_2]

    Jackson Chen

    Source link

  • WhatsApp Web malware spreads banking trojan automatically

    [ad_1]

    NEWYou can now listen to Fox News articles!

    A new malware campaign is turning WhatsApp Web into a weapon. Security researchers say a banking Trojan linked to Astaroth is now spreading automatically through chat messages, making the attack harder to stop once it starts. 

    The campaign is known as Boto Cor-de-Rosa. It shows how cybercriminals keep evolving, especially when they can abuse tools people trust every day. This attack focuses on Windows users and uses WhatsApp Web as both the delivery system and the engine that spreads the infection further.

    Sign up for my FREE CyberGuy Report

    Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.

    BROWSER EXTENSION MALWARE INFECTED 8.8M USERS IN DARKSPECTRE ATTACK

    Attackers abuse WhatsApp Web to spread malware through messages that appear to come from people you trust. (Kurt “CyberGuy” Knutsson)

    How this WhatsApp Web attack works

    The attack starts with a simple message. A contact sends what looks like a routine ZIP file through WhatsApp. The file name appears random and harmless, which lowers suspicion. Once opened, the ZIP contains a Visual Basic script disguised as a normal document. If the user runs it, the script quietly pulls in two more pieces of malware. Then the script downloads the Astaroth banking malware written in Delphi. It also installs a Python-based module designed to control WhatsApp Web. Both components run in the background without obvious warning signs. From there, the infection becomes self-sustaining.

    Malware that spreads itself through your contacts

    What makes this campaign especially dangerous is how it propagates. The Python module scans the victim’s WhatsApp contacts and sends the malicious ZIP file to every conversation automatically. Researchers at Acronis found that the malware adapts its messages based on the time of day. It sends friendly greetings, making the message feel normal and familiar. The text reads, “Here is the requested file. If you have any questions, I’m available!” Because the message appears to come from someone you know, many people open it without hesitation.

    NEW MALWARE CAN READ YOUR CHATS AND STEAL YOUR MONEY

    Person holds iPhone showing the Whatsapp logo

    A single ZIP file sent through chat can quietly install banking malware and begin spreading to every contact. (Kurt “CyberGuy” Knutsson)

    Built-in tracking keeps the attack efficient

    This malware is carefully designed to monitor its own performance in real time. The propagation tool tracks how many messages are successfully delivered, how many fail to send, and the overall sending speed measured per minute. After every 50 messages, it generates progress updates that show how many contacts have been reached. This feedback allows attackers to measure success quickly and make adjustments if something stops working.

    What happens after infection

    The initial script is heavily obfuscated to avoid detection by antivirus tools. Once it runs, it launches PowerShell commands that download more malware from compromised websites. One known domain used in this campaign is coffe-estilo.com. The malware installs itself inside a folder that mimics a Microsoft Edge cache directory. Inside are executable files and libraries that make up the full Astaroth banking payload. From there, the malware can steal credentials, monitor activity and potentially access financial accounts.

    Why WhatsApp Web is being abused

    WhatsApp Web is popular because it mirrors your phone conversations on a computer. That convenience makes it easy to send messages, share files and type faster, but it also introduces risk. When you use WhatsApp Web, you link your phone to a browser by scanning a QR code at web.whatsapp.com. Once connected, that browser session becomes a trusted extension of your account. Your chats appear on the screen, messages you send come from your real number and incoming messages sync across both devices.

    That setup is exactly what attackers take advantage of. If malware gains access to a computer with WhatsApp Web logged in, it can act as the user. It can read messages, access contact lists and send files or links that look completely legitimate. The messages do not raise alarms because they are coming from a real account, not a fake one.

    This is what turns WhatsApp Web into an effective delivery system for malware. Instead of breaking into WhatsApp itself, attackers simply abuse an open browser session to spread malicious files automatically. Many users do not realize the danger because WhatsApp Web feels harmless. It is often left signed in on work computers, shared devices or systems without strong security. In those situations, malware does not need advanced tricks. It only needs access to an already trusted session. That combination of convenience and trust is why WhatsApp Web has become such an attractive target.

    MALICIOUS MAC EXTENSIONS STEAL CRYPTO WALLETS AND PASSWORDS

    A person typing on a laptop. (Kurt "CyberGuy" Knutsson)  

    Once WhatsApp Web is compromised, malware can act like the user, sending messages and files that look completely legitimate.  (Kurt “CyberGuy” Knutsson)

    How to stay safe from WhatsApp Web malware

    Attacks like this WhatsApp Web malware are designed to spread fast through trusted conversations. A few smart habits can dramatically lower your risk.

    1) Be skeptical of unexpected attachments

    Messaging apps feel casual, which is exactly why attackers use them. Never open ZIP files sent through chat unless you confirm with the sender first. Watch for file names made of random numbers or unfamiliar names. Treat messages that create urgency or feel overly familiar as a warning sign. If a file arrives out of nowhere, pause before clicking.

    2) Lock down WhatsApp Web access

    This campaign abuses WhatsApp Web to spread automatically once a device is infected. Check active WhatsApp Web sessions and log out of any you do not recognize. Avoid leaving WhatsApp Web signed in on shared or public computers. Enable two-factor authentication (2FA) inside WhatsApp settings. Cutting off Web access helps limit how far malware can travel.

    3) Keep your Windows PC locked down and use strong antivirus software 

    This type of malware takes advantage of systems that fall behind on updates. Install Windows updates as soon as they are available. Also, keep your web browser fully updated. Staying current closes many of the doors attackers try to slip through. In addition, use strong antivirus software that watches for script abuse and PowerShell activity in real time.

    The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

    Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.

    4) Limit how much of your personal data is online

    Banking malware often pairs with identity theft and financial fraud. One way to reduce the fallout is by shrinking your digital footprint. A data removal service can help remove your personal information from data broker sites that attackers often search. With less information available, criminals have fewer details to exploit if malware reaches your device.

    While no service can guarantee the complete removal of your data from the internet, a data removal service is really a smart choice. They aren’t cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.

    Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.

    Get a free scan to find out if your personal information is already out on the web: Cyberguy.com

    5) Add identity theft protection for extra coverage

    Even with strong security habits, financial monitoring adds another layer of protection. An identity theft protection service can watch for suspicious activity tied to your credit and personal data. Identity theft companies can monitor personal information like your Social Security number (SSN), phone number, and email address, and alert you if it is being sold on the dark web or being used to open an account. They can also assist you in freezing your bank and credit card accounts to prevent further unauthorized use by criminals.

    You should also turn on alerts for bank and credit card transactions so you are notified quickly if something looks wrong. The less exposed your data is, the fewer opportunities attackers have to cause damage.

    See my tips and best picks on how to protect yourself from identity theft at Cyberguy.com.

    6) Slow down and trust your instincts

    Most malware infections happen because people act too quickly. If a message feels off, trust that instinct. Familiar names and friendly language can lower your guard, but they should never replace caution. Take a moment to verify the message or file before opening anything. Attackers rely on trust and urgency to succeed. Slowing down takes away their advantage.

    Kurt’s key takeaways

    This WhatsApp Web malware campaign is a reminder that cyberattacks no longer rely on obvious red flags. Instead, they blend into everyday conversations and use familiar tools to spread quietly and quickly. What makes this threat especially concerning is how little effort it takes for it to move from one device to dozens of others. A single click can turn a trusted chat into a delivery system for banking malware and identity theft. The good news is that small changes make a big difference. Paying attention to attachments, locking down WhatsApp Web access, keeping devices updated and slowing down before clicking can stop these attacks cold. As messaging platforms continue to play a bigger role in daily life, staying alert is no longer optional. Awareness and simple habits remain some of the strongest defenses you have.

    Do you think messaging apps are doing enough to protect users from malware that spreads through trusted conversations?  Let us know by writing to us at Cyberguy.com.

    CLICK HERE TO DOWNLOAD THE FOX NEWS APP

    Sign up for my FREE CyberGuy Report 

    Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter. 

    Copyright 2026 CyberGuy.com.  All rights reserved.

    [ad_2]

    Source link

  • Anthropic taps former Microsoft India MD to lead Bengaluru expansion | TechCrunch

    [ad_1]

    Anthropic has appointed Irina Ghose, a former Microsoft India managing director, to lead its India business as the U.S. AI startup prepares to open an office in Bengaluru. The move underscores how India is becoming a key battleground for AI companies looking to expand beyond the U.S. for major growth markets.

    Ghose brings deep big-tech operating experience to the role. She spent 24 years at Microsoft before stepping down in December 2025. Her appointment gives Anthropic a seasoned executive with local enterprise and government relationships as it gears up to establish an on-the-ground presence in one of the world’s fastest-growing AI markets.

    India has become one of Anthropic’s most strategically important markets, with the country already ranking as the second-largest user base for Claude and usage heavily skewing toward technical and work-related tasks, including software development. Arch-rival OpenAI is also sharpening its focus on the market with plans to open an office in New Delhi — a sign India is fast becoming one of the most contested arenas in the global race to commercialize generative AI.

    While India offers enormous scale — with more than a billion internet subscribers and over 700 million smartphone users — converting that reach into meaningful revenue has proven difficult, pushing AI companies to experiment with aggressive pricing and promotions. OpenAI last year introduced ChatGPT Go, its under-$5 plan aimed at attracting Indian users, and later made it available free for a year in the country.

    Similar dynamics are playing out for Anthropic: its Claude app recorded a 48% increase from the previous year in downloads in India in September, reaching about 767,000 installs, while consumer spending surged 572% to $195,000 for the month, per Appfigures — still modest compared with the U.S., where September spending hit $2.5 million.

    Anthropic has been stepping up its engagement in India at the highest levels. Chief executive Dario Amodei visited in October and met corporate executives and lawmakers, including Prime Minister Narendra Modi, to discuss the company’s expansion plans and growing adoption of its tools. Anthropic had also explored a potential partnership with billionaire Mukesh Ambani’s Reliance Industries to broaden access to Claude, as TechCrunch reported previously. Reliance, however, ultimately struck a deal with Google to offer its Gemini AI Pro plan free to Jio subscribers. That move came as rival Bharti Airtel partnered with Perplexity to bundle access to its premium subscription, underscoring how India’s telecom giants have become critical distribution gatekeepers in the race to scale consumer AI services.

    In a LinkedIn post announcing the move, Ghose said she would focus on working with Indian enterprises, developers and startups adopting Claude for “mission-critical” use cases, pointing to growing demand for what she described as “high-trust, enterprise-grade AI.” She added that AI tailored to local languages could be a “force multiplier” across sectors including education and healthcare — signaling Anthropic’s intent to deepen adoption beyond early tech users into larger institutions and the public sector.

    Techcrunch event

    San Francisco
    |
    October 13-15, 2026

    The push by Anthropic, OpenAI, and Perplexity comes as India’s homegrown GenAI ecosystem remains relatively early-stage. While the country has a deep pool of software talent and a fast-growing base of AI users, it has produced few startups building large foundation models, with investors instead largely backing application-layer companies rather than committing the scale of capital typically required to train frontier systems.

    The appointment also comes ahead of India’s AI Impact Summit 2026 in February, where the Indian government is expected to bring together AI startups, global CEOs, and industry experts to discuss the next phase of AI deployment in the country. The summit is part of New Delhi’s broader effort to signal support for domestic AI development and position India as a serious player in the global AI landscape, as competition intensifies across major markets.

    Anthropic is also building out its India team, with job listings for roles including startup and enterprise account executives as well as a partner sales manager, signaling a push to deepen its go-to-market efforts and tap Indian businesses and startups as customers as it expands its presence in the country.

    For Anthropic, the hire adds senior local leadership as it looks to turn India’s surging usage into a durable business, navigating a market where distribution partnerships, pricing pressure, and enterprise adoption will shape which AI players emerge as long-term winners.

    [ad_2]

    Jagmeet Singh

    Source link

  • At 25, Wikipedia Navigates a Quarter-Life Crisis in the Age of A.I.

    [ad_1]

    Turning 25 amid an A.I. boom, Wikipedia is racing to protect traffic, volunteers and revenue without losing its mission. Photo illustration by Nikolas Kokovlis/NurPhoto via Getty Images

    Traffic to Wikipedia, the world’s largest online encyclopedia, naturally ebbs and flows with the rhythms of daily life—rising and falling with the school calendar, the news cycle or even the day of the week—making routine fluctuations unremarkable for a site that draws roughly 15 billion page views a month. But sustained declines tell a different story. Last October, the Wikimedia Foundation, the nonprofit that oversees Wikipedia, disclosed that human traffic to the site had fallen 8 percent in recent months as a growing number of users turned to A.I. search engines and chatbots for answers.

    “I don’t think that we’ve seen something like this happen in the last seven to eight years or so,” Marshall Miller, senior director of product at the Wikimedia Foundation, told Observer.

    Launched on Jan. 15, 2001, Wikipedia turns 25 today. This milestone comes at a pivotal point for the online encyclopedia, which is straddling a delicate line between fending off existential risks posed by A.I. and avoiding irrelevance as the technology transforms how people find and consume information.

    “It’s really this question of long-term sustainability,” Lane Becker, senior director of earned revenue at the Wikimedia Foundation, told Observer. “We’d like to make it at least another 25 years—and ideally much longer.”

    While it’s difficult to pinpoint Wikipedia’s recent traffic declines on any single factor, it’s evident that the drop coincides with the emergence of A.I. search features, according to Miller. Chatbots such as ChatGPT and Perplexity often cite and link to Wikipedia, but because the information is already embedded in the A.I.-generated response, users are less likely to click through to the source, depriving the site of page views.

    Yet the spread of A.I.-generated content also underscores Wikipedia’s central role in the online information ecosystem. Wikipedia’s vast archive—more than 65 million articles across over 300 languages—plays a prominent role within A.I. tools, with the site’s data scraped by nearly all large language models (LLMs). “Yes, there is a decline in traffic to our sites, but there may well be more people getting Wikipedia knowledge than ever because of how much it’s being distributed through those platforms that are upstream of us,” said Miller.

    Surviving in the era of A.I.

    Wikipedia must find a way to stay financially and editorially viable as the internet changes. Declining page views not only mean that fewer visitors are likely to donate to the platform, threatening its main source of revenue, but also risk shrinking the community of volunteer editors who sustain it. Fewer contributors would mean slower content growth, ultimately leaving less material for LLMs to draw from.

    Metrics that track volunteer participation have already begun to slip, according to Miller. While noting that “it’s hard to parse out all the different reasons that this happens,” he conceded that the Foundation has “reason to believe that declines in page views will lead to declines in volunteer activity.”

    To maintain a steady pipeline of contributors, users must first become aware of the platform and understand its collaborative model. That makes proper attribution by A.I. tools essential, Miller said. Beyond simply linking to Wikipedia, surfacing metadata—such as when a page was last updated or how many editors contributed—could spur curiosity and encourage users to engage more deeply with the platform.

    Tech companies are becoming aware of the value of keeping Wikipedia relevant. Over the past year, Microsoft, Mistral AI, Perplexity AI, Ecosia, Pleias and ProRata have joined Wikimedia Enterprise, a commercial product that allows corporations to pay for large-scale access and distribution of Wikipedia content. Google and Amazon have long been partners of the platform, which was launched in 2021.

    The basic premise is that Wikimedia Enterprise customers can access content from Wikipedia at a higher volume and speed while helping sustain the platform’s mission. “I think there’s a growing understanding on the part of these A.I. companies about the significance of the Wikipedia dataset, both as it currently exists and also its need to exist in the future,” said Becker.

    Wikipedia is hardly alone in this shift. News organizations, including CNN, the Associated Press and The New York Times, have struck licensing deals with A.I. companies to supply editorial content in exchange for payment, while infrastructure providers like Cloudflare offer tools that allow websites to charge A.I. crawlers for access. Last month, the licensing nonprofit Creative Commons announced its support of a “pay-to-crawl” approach for managing A.I. bots.

    Preparing for an uncertain future

    Wikipedia itself is also adapting to a younger generation of internet users. In an effort to make editing Wikipedia more appealing, the platform is working to enhance its mobile edit features, reflecting the fact that younger audiences are far more likely to engage on smartphones than desktop computers.

    Younger users’ preference for social video platforms such as YouTube and TikTok has also pushed Wikipedia’s Future Audiences team—a division tasked with expanding readership—to experiment with video. The effort has already paid off, producing viral clips on topics ranging from Wikipedia’s most hotly disputed edits to the courtship dance of the black-footed albatross and Sino-Roman relations. The organization is also exploring a deeper presence on gaming platforms, another major draw for younger users.

    Evolving with the times also means integrating A.I. further within the platform. Wikipedia has introduced features such as Edit Check, which offers real-time feedback on whether a proposed edit fits a page, and is developing features like Tone Check to help ensure articles adhere to a neutral point of view.

    A.I.-generated content has also begun to seep onto the platform. As of August 2024, roughly 5 percent of newly created English articles on the site were produced with the help of A.I., according to a Princeton study. Seeing this as a problem, Wikipedia introduced a “speedy deletion” policy that allows editors to quickly remove content that shows clear signs of being A.I.-generated. Still, the community remains divided over whether using A.I. for tasks such as drafting articles is inherently problematic, said Miller. “There’s this active debate.”

    From streamlining editing to distributing its content ever more widely, Wikipedia is betting that A.I. can ultimately be an ally rather than an adversary. If managed carefully, the technology could help accelerate the encyclopedia’s mission over the next 25 years—as long as it doesn’t bring down the encyclopedia first.

    “Our whole thing is knowledge dissemination to anyone that wants it, anywhere that they want it,” said Becker. “If this is how people are going to learn things—and people are learning things and gaining value from the information that our community is able to bring forward—we absolutely want to find a way to be there and support it in ways that align with our values.”

    At 25, Wikipedia Navigates a Quarter-Life Crisis in the Age of A.I.

    [ad_2]

    Alexandra Tremayne-Pengelly

    Source link