ReportWire

Tag: Artificial Intelligence

  • OpenAI admits AI browsers face unsolvable prompt attacks

    [ad_1]

    NEWYou can now listen to Fox News articles!

    Cybercriminals don’t always need malware or exploits to break into systems anymore. Sometimes, they just need the right words in the right place. OpenAI is now openly acknowledging that reality. The company says prompt injection attacks against artificial intelligence (AI)-powered browsers are not a bug that can be fully patched, but a long-term risk that comes with letting AI agents roam the open web. This raises uncomfortable questions about how safe these tools really are, especially as they gain more autonomy and access to your data.

    Sign up for my FREE CyberGuy Report 

    Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter. 

    NEW MALWARE CAN READ YOUR CHATS AND STEAL YOUR MONEY

    AI-powered browsers can read and act on web content, which also makes them vulnerable to hidden instructions attackers can slip into pages or documents. (Kurt “CyberGuy” Knutsson)

    Why prompt injection isn’t going away

    In a recent blog post, OpenAI admitted that prompt injection attacks are unlikely to ever be completely eliminated. Prompt injection works by hiding instructions inside web pages, documents or emails in ways that humans don’t notice, but AI agents do. Once the AI reads that content, it can be tricked into following malicious instructions.

    OpenAI compared this problem to scams and social engineering. You can reduce them, but you can’t make them disappear. The company also acknowledged that “agent mode” in its ChatGPT Atlas browser increases risk because it expands the attack surface. The more an AI can do on your behalf, the more damage it can cause when something goes wrong.

    OpenAI launched the ChatGPT Atlas browser in October, and security researchers immediately started testing its limits. Within hours, demos appeared showing that a few carefully placed words inside a Google Doc could influence how the browser behaved. That same day, Brave published its own warning, explaining that indirect prompt injection is a structural problem for AI-powered browsers, including tools like Perplexity’s Comet.

    This isn’t just OpenAI’s problem. Earlier this month, the National Cyber Security Centre in the U.K. warned that prompt injection attacks against generative AI systems may never be fully mitigated.

    FAKE AI CHAT RESULTS ARE SPREADING DANGEROUS MAC MALWARE

    ChatGPT Atlas screen in an auditorium

    Prompt injection attacks exploit trust at scale, allowing malicious instructions to influence what an AI agent does without the user ever seeing it. (Kurt “CyberGuy” Knutsson)

    The risk trade-off with AI browsers

    OpenAI says it views prompt injection as a long-term security challenge that requires constant pressure, not a one-time fix. Its approach relies on faster patch cycles, continuous testing and layered defenses. That puts it broadly in line with rivals like Anthropic and Google, which have both argued that agentic systems need architectural controls and ongoing stress testing.

    Where OpenAI is taking a different approach is with something it calls an “LLM-based automated attacker.” In simple terms, OpenAI trained an AI to act like a hacker. Using reinforcement learning, this attacker bot looks for ways to sneak malicious instructions into an AI agent’s workflow.

    The bot runs attacks in simulation first. It predicts how the target AI would reason, what steps it would take and where it might fail. Based on that feedback, it refines the attack and tries again. Because this system has insight into the AI’s internal decision-making, OpenAI believes it can surface weaknesses faster than real-world attackers.

    Even with these defenses, AI browsers aren’t safe. They combine two things attackers love: autonomy and access. Unlike regular browsers, they don’t just display information, but also read emails, scan documents, click links and take actions on your behalf. That means a single malicious prompt hidden in a webpage, document or message can influence what the AI does without you ever seeing it. Even when safeguards are in place, these agents operate by trusting content at scale, and that trust can be manipulated.

    THIRD-PARTY BREACH EXPOSES CHATGPT ACCOUNT DETAILS

    Person wearing a hoodie works on multiple computer screens displaying digital data in a dark room.

    As AI browsers gain more autonomy and access to personal data, limiting permissions and keeping human confirmation in the loop becomes critical for safety. (Kurt “CyberGuy” Knutsson)

    7 steps you can take to reduce risk with AI browsers

    You may not be able to eliminate prompt injection attacks, but you can significantly limit their impact by changing how you use AI tools.

    1) Limit what the AI browser can access

    Only give an AI browser access to what it absolutely needs. Avoid connecting your primary email account, cloud storage or payment methods unless there’s a clear reason. The more data an AI can see, the more valuable it becomes to attackers. Limiting access reduces the blast radius if something goes wrong.

    2) Require confirmation for every sensitive action

    Never allow an AI browser to send emails, make purchases or modify account settings without asking you first. Confirmation breaks long attack chains and gives you a moment to spot suspicious behavior. Many prompt injection attacks rely on the AI acting quietly in the background without user review.

    3) Use a password manager for all accounts

    A password manager ensures every account has a unique, strong password. If an AI browser or malicious page leaks one credential, attackers can’t reuse it elsewhere. Many password managers also refuse to autofill on unfamiliar or suspicious sites, which can alert you that something isn’t right before you manually enter anything.

    Next, see if your email has been exposed in past breaches. Our #1 password manager (see Cyberguy.com) pick includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.

    Check out the best expert-reviewed password managers of 2025 at Cyberguy.com

    4) Run strong antivirus software on your device

    Even if an attack starts inside the browser, antivirus software can still detect suspicious scripts, unauthorized system changes or malicious network activity. Strong antivirus software focuses on behavior, not just files, which is critical when dealing with AI-driven or script-based attacks.

    The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

    Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com

    5) Avoid broad or open-ended instructions

    Telling an AI browser to “handle whatever is needed” gives attackers room to manipulate it through hidden prompts. Be specific about what the AI is allowed to do and what it should never do. Narrow instructions make it harder for malicious content to influence the agent.

    6) Be careful with AI summaries and automated scans

    When an AI browser scans emails, documents or web pages for you, remember that hidden instructions can live inside that content. Treat AI-generated actions as drafts or suggestions, not final decisions. Review anything the AI plans to act on before approving it.

    7) Keep your browser, AI tools and operating system updated

    Security fixes for AI browsers evolve quickly as new attack techniques emerge. Delaying updates leaves known weaknesses open longer than necessary. Turning on automatic updates ensures you get protection as soon as they’re available, even if you miss the announcement.

    CLICK HERE TO DOWNLOAD THE FOX NEWS APP

    Kurt’s key takeaway

    There’s been a meteoric rise in AI browsers. We’re now seeing them from major tech companies, including OpenAI’s Atlas, The Browser Company’s Dia and Perplexity’s Comet. Even existing browsers like Chrome and Edge are pushing hard to add AI and agentic features into their current infrastructure. While these browsers can be useful, the technology is still early. It’s best not to fall for the hype and to wait for it to mature.

    Do you think AI browsers are worth the risk today, or are they moving faster than security can keep up? Let us know by writing to us at Cyberguy.com

    Sign up for my FREE CyberGuy Report 

    Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter. 

    Copyright 2025 CyberGuy.com.  All rights reserved.

    [ad_2]

    Source link

  • Robots learn 1,000 tasks in one day from a single demo

    [ad_1]

    NEWYou can now listen to Fox News articles!

    Most robot headlines follow a familiar script: a machine masters one narrow trick in a controlled lab, then comes the bold promise that everything is about to change. I usually tune those stories out. We have heard about robots taking over since science fiction began, yet real-life robots still struggle with basic flexibility. This time felt different.

    Sign up for my FREE CyberGuy Report

    Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.

    ELON MUSK TEASES A FUTURE RUN BY ROBOTS

    Researchers highlight the milestone that shows how a robot learned 1,000 real-world tasks in just one day. (Science Robotics)

    How robots learned 1,000 physical tasks in one day

    A new report published in Science Robotics caught our attention because the results feel genuinely meaningful, impressive and a little unsettling in the best way. The research comes from a team of academic scientists working in robotics and artificial intelligence, and it tackles one of the field’s biggest limitations.

    The researchers taught a robot to learn 1,000 different physical tasks in a single day using just one demonstration per task. These were not small variations of the same movement. The tasks included placing, folding, inserting, gripping and manipulating everyday objects in the real world. For robotics, that is a big deal.

    Why robots have always been slow learners

    Until now, teaching robots physical tasks has been painfully inefficient. Even simple actions often require hundreds or thousands of demonstrations. Engineers must collect massive datasets and fine-tune systems behind the scenes. That is why most factory robots repeat one motion endlessly and fail as soon as conditions change. Humans learn differently. If someone shows you how to do something once or twice, you can usually figure it out. That gap between human learning and robot learning has held robotics back for decades. This research aims to close that gap.

    THE NEW ROBOT THAT COULD MAKE CHORES A THING OF THE PAST

    A robot doing dishes

    The research team behind the study focuses on teaching robots to learn physical tasks faster and with less data.  (Science Robotics)

    How the robot learned 1,000 tasks so fast

    The breakthrough comes from a smarter way of teaching robots to learn from demonstrations. Instead of memorizing entire movements, the system breaks tasks into simpler phases. One phase focuses on aligning with the object, and the other handles the interaction itself. This method relies on artificial intelligence, specifically an AI technique called imitation learning that allows robots to learn physical tasks from human demonstrations.

    The robot then reuses knowledge from previous tasks and applies it to new ones. This retrieval-based approach allows the system to generalize rather than start from scratch each time. Using this method, called Multi-Task Trajectory Transfer, the researchers trained a real robot arm on 1,000 distinct everyday tasks in under 24 hours of human demonstration time.

    Importantly, this was not done in a simulation. It happened in the real world, with real objects, real mistakes and real constraints. That detail matters.

    Why this research feels different

    Many robotics papers look impressive on paper but fall apart outside perfect lab conditions. This one stands out because it tested the system through thousands of real-world rollouts. The robot also showed it could handle new object instances it had never seen before. That ability to generalize is what robots have been missing. It is the difference between a machine that repeats and one that adapts.

    AI VIDEO TECH FAST-TRACKS HUMANOID ROBOT TRAINING

    A robot doing dishes

    The robot arm practices everyday movements like gripping, folding and placing objects using a single human demonstration.  (Science Robotics)

    A long-standing robotics problem may finally be cracking

    This research addresses one of the biggest bottlenecks in robotics: inefficient learning from demonstrations. By decomposing tasks and reusing knowledge, the system achieved an order of magnitude improvement in data efficiency compared to traditional approaches. That kind of leap rarely happens overnight. It suggests that the robot-filled future we have talked about for years may be nearer than it looked even a few years ago.

    What this means for you

    Faster learning changes everything. If robots need less data and less programming, they become cheaper and more flexible. That opens the door to robots working outside tightly controlled environments.

    In the long run, this could enable home robots to learn new tasks from simple demonstrations instead of specialist code. It also has major implications for healthcare, logistics and manufacturing.

    More broadly, it signals a shift in artificial intelligence. We are moving away from flashy tricks and toward systems that learn in more human-like ways. Not smarter than people. Just closer to how we actually operate day to day.

    Take my quiz: How safe is your online security?

    Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com     

    CLICK HERE TO DOWNLOAD THE FOX NEWS APP 

    Kurt’s key takeaways 

    Robots learning 1,000 tasks in a day does not mean your house will have a humanoid helper tomorrow. Still, it represents real progress on a problem that has limited robotics for decades. When machines start learning more like humans, the conversation changes. The question shifts from what robots can repeat to what they can adapt to next. That shift is worth paying attention to.

    If robots can now learn like us, what tasks would you actually trust one to handle in your own life? Let us know by writing to us at Cyberguy.com

    Sign up for my FREE CyberGuy Report

    Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter. 

    Copyright 2025 CyberGuy.com.  All rights reserved.

    [ad_2]

    Source link

  • You Must Stare Into the Heart of the $400 Million Machine

    [ad_1]

    The mysterious ASML machine costs $400 million, and the companies that make GPUs can’t function without the machine. There’s no AI without the GPUs, and there’s currently no economy without the concept of AI absorbing investor money and using it to unnervingly build companies and expand them and drive all the questionably moral and even more questionably useful economic activity that we all may not like, but which sustains us. For the time being.

    A new 55-minute YouTube video is the most in-depth and lucid explanation I’ve ever consumed about the $400 machine—ASML’s colossal EUV lithography system—how and why this technology was conceived, and roughly how it works. It’s created by Veritasium, the YouTube channel of science influencer Derek Muller, which has just shy of 20 million subscribers, which sounds like a lot until you compare it to MrBeast’s 458 million. It’s a powerful, but relatively niche channel, prominent enough to gain access to an ASML clean room, but probably nonetheless close to the ceiling of popularity for a channel about fairly hard science.

    As of this writing, the video was doing impressive business, pushing ten million views, even though it’s about, well, ultraviolet lithography. Fortunately it sidesteps most of the usual corn syrup that taints your average freaking epic science video. It doesn’t treat its audience like children. It hasn’t been injected with a bunch of “that just happened” jokes. The vibe is that the makers of the video respect their viewers and genuinely want them to come away more knowledgeable than they were when they started.

     

    Will you actually be more knowledgeable than you were before you watched the video? Speaking for myself, I’m not sure I deserve Veritasium’s respect. The audience stand-in is a guy named Casper Mebius, and he responds to an ASML guy talking about the wavelength of a red laser being 650 nanometers by going “something like that, yeah.” I can’t relate to that at all. I would have said “if you say so.” Maybe I deserved the Miss Rachel version of this video.

    But you, like me, must nonetheless stare into the heart of the $400 machine. You must behold the otherworldly smoothness of the mirrors. You must hear, in detail, how the tin droplets are dripped and laser blasted, and how they emit the light of a supernova. You must try, and fail, to truly wrap your head around the thought experiments about laser accuracy involving aiming at dimes on the moon. Most importantly: you must watch the comparatively crude, herky-jerky dance of the GPU wafers themselves getting lithography-ed inside the machine. 

    It was once very important to the people in power in the U.S. that China not ever harness the full power of the GPU. But keeping China away from cutting edge chips seems to be getting de-prioritized lately. A few weeks ago, it emerged that a Chinese team in Shenzhen had, by poaching ASML employees, created a prototype of the $400 million machine. It’s haunting to contemplate what this all might portend.

    The $400 machine will one day no longer be the crown jewel of the tech economy. Moore’s law will march on, processor power will keep inflating, and the $400 million machine will become e-waste like everything else. The $1 billion machine is not far away. Stare into this one while it still means something. 

    [ad_2]

    Mike Pearl

    Source link

  • DeSantis says Florida can regulate AI despite Trump’s executive order: ‘We have a right to do this’

    [ad_1]

    NEWYou can now listen to Fox News articles!

    Welcome to Fox News’ Artificial Intelligence newsletter with the latest AI technology advancements.

    IN TODAY’S NEWSLETTER:

    – DeSantis says Florida can regulate AI despite Trump’s executive order: ‘We have a right to do this’

    – AI-powered bat tracking could give baseball players the edge 

    – Trump admin will recruit 1,000 technologists for elite ‘Tech Force’ to modernize government

    Florida Gov. Ron DeSantis speaks about plans to lower insurance prices in the state, during a press conference at Florida International University’s Wall of Wind, an experimental facility focused on wind engineering research, on Wednesday, Feb. 5, in Miami.  (AP/Rebecca Blackwell)

    DESANTIS VS. DONALD: Florida Gov. Ron DeSantis, a Republican, said on Monday that state officials have the right to regulate artificial intelligence despite President Trump’s recent executive order aiming to require a national AI standard the president argues would overrule state laws.

    TECH HOME RUN: Baseball teams have long searched for a way to study the entire swing without sensors or complex lab setups. Today, a new solution is entering the picture. Theia, an AI biomechanics company, debuted a commercially available video-only system that analyzes bat trajectory and full-body biomechanics together. This new approach works in real baseball environments and needs no reflective body markers, wearables or special equipment.

    President Trump signs an executive order on AI

    U.S. President Donald Trump signs an executive order on AI next to Sriram Krishnan, Senior White House Policy Advisor on Artificial Intelligence, U.S. Senate Commerce Committee Chairman Ted Cruz (R-TX), U.S. Commerce Secretary Howard Lutnick, David O. Sacks, Chair of the President’s Council of Advisors on Science and Technology, and U.S. Treasury Secretary Scott Bessent, in the Oval Office at the White House in Washington, D.C., U.S. December 11, 2025. (Al Drago/Reuters)

    TECH FORCE: The Trump administration launched a new initiative Monday aimed at recruiting top-tier technical talent to accelerate the adoption of artificial intelligence (AI) at the federal level.  

    HANDS-FREE TECH: Chrome on Android now offers a fresh way to digest information when your hands are busy or your eyes need a break. A new update powered by Google Gemini can turn written webpages into short podcast-style summaries. Two virtual hosts chat about the content, making it feel easier to follow during your commute or while you multitask.

    Sam Altman speaking into a microphone

    Sam Altman, chief executive officer of OpenAI Inc., during a media tour of the Stargate AI data center in Abilene, Texas, US, on Tuesday, Sept. 23, 2025. Stargate is a collaboration of OpenAI, Oracle and SoftBank, with promotional support from President Donald Trump, to build data centers and other infrastructure for artificial intelligence throughout the US. (Photographer: Kyle Grillot/Bloomberg via Getty Images) (Kyle Grillot/Bloomberg via Getty Images)

    ‘MORE USABLE’: OpenAI announced an update for ChatGPT Images that it says drastically improves both the generation speed and instruction-following capability of its image generator.

    EYES TO THE FUTURE: Artificial intelligence (AI) is charging into a new phase in 2026 – one that could reshape business operations, global competition and even which workers thrive, according to Goldman Sachs’ Chief Information Officer Marco Argenti.

    FOLLOW FOX NEWS ON SOCIAL MEDIA

    Facebook
    Instagram
    YouTube
    X
    LinkedIn

    SIGN UP FOR OUR OTHER NEWSLETTERS

    Fox News First
    Fox News Opinion
    Fox News Lifestyle
    Fox News Health

    DOWNLOAD OUR APPS

    Fox News
    Fox Business
    Fox Weather
    Fox Sports
    Tubi

    WATCH FOX NEWS ONLINE

    Fox News Go

    STREAM FOX NATION

    Fox Nation

    Stay up to date on the latest AI technology advancements, and learn about the challenges and opportunities AI presents now and for the future with Fox News here.

    [ad_2]

    Source link

  • Big Tech’s fast-expanding plans for data centers run into stiff community opposition

    [ad_1]

    SPRING CITY, Pa. — Tech companies and developers looking to plunge billions of dollars into ever-bigger data centers to power artificial intelligence and cloud computing are increasingly losing fights in communities where people don’t want to live next to them, or even near them.

    Communities across the United States are reading about — and learning from — each other’s battles against data center proposals that are fast multiplying in number and size to meet steep demand as developers branch out in search of faster connections to power sources.

    In many cases, municipal boards are trying to figure out whether energy- and water-hungry data centers fit into their zoning framework. Some have entertained waivers or tried to write new ordinances. Some don’t have zoning.

    But as more people hear about a data center coming to their community, once-sleepy municipal board meetings in farming towns and growing suburbs now feature crowded rooms of angry residents pressuring local officials to reject the requests.

    “Would you want this built in your backyard?” Larry Shank asked supervisors last month in Pennsylvania’s East Vincent Township. “Because that’s where it’s literally going, is in my backyard.”

    A growing number of proposals are going down in defeat, sounding alarms across the data center constellation of Big Tech firms, real estate developers, electric utilities, labor unions and more.

    Andy Cvengros, who helps lead the data center practice at commercial real estate giant JLL, counted seven or eight deals he’d worked on in recent months that saw opponents going door-to-door, handing out shirts or putting signs in people’s yards.

    “It’s becoming a huge problem,” Cvengros said.

    Data Center Watch, a project of 10a Labs, an AI security consultancy, said it is seeing a sharp escalation in community, political and regulatory disruptions to data center development.

    Between April and June alone, its latest reporting period, it counted 20 proposals valued at $98 billion in 11 states that were blocked or delayed amid local opposition and state-level pushback. That amounts to two-thirds of the projects it was tracking.

    Some environmental and consumer advocacy groups say they’re fielding calls every day, and are working to educate communities on how to protect themselves.

    “I’ve been doing this work for 16 years, worked on hundreds of campaigns I’d guess, and this by far is the biggest kind of local pushback I’ve ever seen here in Indiana,” said Bryce Gustafson of the Indianapolis-based Citizens Action Coalition.

    In Indiana alone, Gustafson counted more than a dozen projects that lost rezoning petitions.

    For some people angry over steep increases in electric bills, their patience is thin for data centers that could bring still-higher increases.

    Losing open space, farmland, forest or rural character is a big concern. So is the damage to quality of life, property values or health by on-site diesel generators kicking on or the constant hum of servers. Others worry that wells and aquifers could run dry.

    Lawsuits are flying — both ways — over whether local governments violated their own rules.

    Big Tech firms Microsoft, Google, Amazon and Facebook — which are collectively spending hundreds of billions of dollars on data centers across the globe — didn’t answer Associated Press questions about the effect of community pushback.

    Microsoft, however, has acknowledged the difficulties. In an October securities filing, it listed its operational risks as including “community opposition, local moratoriums, and hyper-local dissent that may impede or delay infrastructure development.”

    Even with high-level support from state and federal governments, the pushback is having an impact.

    Maxx Kossof, vice president of investment at Chicago-based developer The Missner Group, said developers worried about losing a zoning fight are considering selling properties once they secure a power source — a highly sought-after commodity that makes a proposal far more viable and valuable.

    “You might as well take chips off the table,” Kossof said. “The thing is you could have power to a site and it’s futile because you might not get the zoning. You might not get the community support.”

    Some in the industry are frustrated, saying opponents are spreading falsehoods about data centers — such as polluting water and air — and are difficult to overcome.

    Still, data center allies say they are urging developers to engage with the public earlier in the process, emphasize economic benefits, sow good will by supporting community initiatives and talk up efforts to conserve water and power and protect ratepayers.

    “It’s definitely a discussion that the industry is having internally about, ‘Hey, how do we do a better job of community engagement?’” said Dan Diorio of the Data Center Coalition, a trade association that includes Big Tech firms and developers.

    Winning over local officials, however, hasn’t translated to winning over residents.

    Developers pulled a project off an October agenda in the Charlotte suburb of Matthews, North Carolina, after Mayor John Higdon said he informed them it faced unanimous defeat.

    The project would have funded half the city’s budget and developers promised environmentally friendly features. But town meetings overflowed, and emails, texts and phone calls were overwhelmingly opposed, “999 to one against,” Higdon said.

    Had council approved it, “every person that voted for it would no longer be in office,” the mayor said. “That’s for sure.”

    In Hermantown, a suburb of Duluth, Minnesota, a proposed data center campus several times larger than the Mall of America is on hold amid challenges over whether the city’s environmental review was adequate.

    Residents found each other through social media and, from there, learned to organize, protest, door-knock and get their message out.

    They say they felt betrayed and lied to when they discovered that state, county, city and utility officials knew about the proposal for an entire year before the city — responding to a public records request filed by the Minnesota Center for Environmental Advocacy — released internal emails that confirmed it.

    “It’s the secrecy. The secrecy just drives people crazy,” said Jonathan Thornton, a realtor who lives across a road from the site.

    Documents revealing the extent of the project emerged days before a city rezoning vote in October. Mortenson, which is developing it for a Fortune 50 company that it hasn’t named, says it is considering changes based on public feedback and that “more engagement with the community is appropriate.”

    Rebecca Gramdorf found out about it from a Duluth newspaper article, and immediately worried that it would spell the end of her six-acre vegetable farm.

    She found other opponents online, ordered 100 yard signs and prepared for a struggle.

    “I don’t think this fight is over at all,” Gramdorf said.

    ___

    Follow Marc Levy on X at https://x.com/timelywriter.

    [ad_2]

    Source link

  • Here Come The Humanoids | Sunday on 60 Minutes

    [ad_1]


    Here Come The Humanoids | Sunday on 60 Minutes – CBS News









































    Watch CBS News



    60 Minutes gets a look at the first real-world test of Boston Dynamics’ humanoid robot Atlas, offering a glimpse of a future coming faster than you might think.

    [ad_2]
    Source link

  • A.I. Won’t Eliminate Managers, But It Will Redefine Leadership

    [ad_1]

    As A.I. automates information and routine decision-making, it is forcing managers to confront whether they truly know how to lead people. Unsplash+

    The discourse surrounding artificial intelligence in the workplace is thick with dystopian forecasts and utopian promises. Will it eradicate jobs or usher in a new era of human creativity? For managers and leaders, the question is more pointed: will advances in A.I. make my role obsolete? The answer is a definitive no. A.I. will not replace managers. It will, however, act as a great accelerant, stripping away the administrative crutches many have leaned on for decades and laying bare a critical deficit in our organizations: the inability to genuinely manage people.

    For more than a century, the prevailing management model has been one of command-and-control. Managers were expected to be the nexus of knowledge, the primary problem-solvers and the arbiters of work. Promotion into management was typically a reward for attaining technical proficiency in a particular area, creating a legion of what the Chartered Management Institute (CMI) has called “accidental managers”—individuals promoted for their knowledge but utterly unprepared for the human complexities of leadership. In the U.K. alone, the CMI estimates that 82 percent of managers receive no formal preparation or training to take on the people management aspects of their role.

    This is the category of manager that A.I. is coming for. The manager whose primary value lies in holding information, creating reports, assigning tasks and resolving routine problems is standing on a trapdoor. Generative A.I. and advanced analytics can now perform these functions with unprecedented speed and efficiency. Knowledge is no longer power because knowledge is ubiquitous. A recent MIT Sloan study found that access to A.I. tools increased productivity for knowledge workers by over 40 percent, largely by automating the synthesis and retrieval of information—the very tasks that once consumed a manager’s day. When the “what” and the “how” of a task are automated, what is left for a manager to do?

    The answer is everything that truly matters: the “who” and the “why.” What remains are the deeply human skills that A.I. cannot replicate. These include fostering psychological safety, building trust, inspiring motivation, navigating conflict and cultivating an employee’s innate potential. In this new landscape, the manager’s role shifts from chief problem-solver to chief enabler. Success will no longer be measured by the solutions a manager provides, but by the problem-solving capabilities they build within their teams.

    This is where the crisis in management becomes painfully evident. Despite decades of investment the world over in leadership development programs, each busying itself inventing its own version of a management wheel, employee engagement levels remain stubbornly low. Gallup reports that only 10 percent of workers in the U.K., for example, feel engaged in their work. Globally, the share of employees experiencing high daily stress has steadily climbed over the past 20 years to 41 percent, rising to nearly 60 percent for those working under poor management. Together, disengagement and stress are estimated to cost the global economy $8.9 trillion annually, roughly nine percent of global GDP. 

    Traditional management approaches, which emphasize telling, directing and correcting, are misaligned with how people learn and perform. By removing autonomy and short-circuiting learning, they unintentionally fuel disengagement and burnout, precisely the outcomes organizations can least afford in an A.I.-accelerated environment. 

    The solution requires a fundamental reboot of our management operating system. For years, organizations have attempted to retrofit coaching skills onto managers through formal, session-based models like GROW. These models, while effective in executive coaching contexts, are ill-suited for the dynamic, fast-paced reality of frontline management. Time-starved managers rarely have the capacity for scheduled, hour-long coaching conversations, nor the psychological distance required to coach their direct reports while holding them accountable for performance.

    What’s needed instead is a more integrated, behavioral approach that embeds coaching into the fabric of daily interactions. This means shifting from reflexively fixing problems to facilitating better thinking in others, and bringing development into the flow of work. 

    At its core, this approach can be distilled into a simple behavioral sequence summarized as STAR. 

    Stop: The first, and most difficult, step is resisting the instinct to immediately solve the problem when an employee raises an issue. Instead of jumping to an answer, the manager pauses and takes a step back.

    Think: In that pause, the manager assesses whether this is a coachable moment. Is the situation non-urgent? Is there an opportunity for learning rather than rescue?

    Ask: Rather than telling, the manager adopts an inquiry-led approach, using questions to prompt reflection and ownership. A subtle but effective shift is moving from blame-oriented “why?” questions to solution-focused “what?” questions. For example, replacing “Why is this late?” with “What obstacles came up, and what options do we have now?” changes the tone from accusation to collaboration.

    Result: The interaction concludes with clear next steps and follow-up, reinforcing accountability while ensuring the employee owns the outcome and and that there will be an opportunity for appropriate feedback.

    This is not coaching as a formal, scheduled meeting. It’s a 90-second interaction in the hallway or a two-minute exchange on a video call. It’s coaching as a continuous micro-practice. The cumulative impact, however, is macro. Government-sponsored research conducted by the London School of Economics has shown that managers trained in this approach increased the amount of time they spent coaching in the flow of work by 70 percent. The benefits ripple outwards: managers regain time as their teams become more self-sufficient, employees feel more valued and trusted and the organization develops a resilient, adaptive and highly engaged culture.

    A.I. is an epochal technology that will automate complexity and democratize access to knowledge. This transition will be uncomfortable for managers who have built their authority on being the expert in the room. But for those who recognize that the future of leadership lies in human connection, judgment and meaning-making, it represents the greatest opportunity in a generation. 

    The challenge is clear: evolve from a director of tasks into a developer of people. A.I. will increasingly manage the tasks. Leaders must manage meaning and the conditions in which people can do their best thinking. A.I. won’t replace those who fail to make this shift, but it will make them increasingly irrelevant by revealing a new, higher standard of leadership.

    Dominic Ashley-Timms is the CEO of the performance consultancy Notion and co-author of the bestselling book, The Answer is a Question: The Missing Superpower That Changes Everything and Will Transform Your Impact as a Manager and Leader.

    A.I. Won’t Eliminate Managers, But It Will Redefine Leadership

    [ad_2]

    Dominic Ashley-Timms

    Source link

  • Big Tech Blocked California Data Center Legislation, Leaving Only a Study Requirement

    [ad_1]

    Tools that power artificial intelligence devour energy. But attempts to shield regular Californians from footing the bill in 2025 ended with a law requiring regulators to write a report about the issue by 2027.

    If that sounds pretty watered down, it is. Efforts to regulate the energy usage of data centers — the beating heart of AI — ran headlong into Big Tech, business groups and the governor.

    That’s not surprising given that California is increasingly dependent on big tech for state revenue: A handful of companies pay upwards of $5 billion just on income tax withholding.

    The law mandating the report is the lone survivor of last year’s push to rein in the data-center industry. Its deadline means the findings won’t likely be ready in time for lawmakers to use in 2026. The measure began as a plan to give data centers their own electricity rate, shielding households and small businesses from higher bills.

    It amounts to a “toothless” measure, directing the utility regulator to study an issue it already has the authority to investigate, said Matthew Freedman, a staff attorney with The Utility Reform Network, a ratepayer advocate.

    Data centers’ enormous electricity demand has pushed them to the center of California’s energy debate, and that’s why lawmakers and consumer advocates say new regulations matter.

    For instance, the sheer amount of energy requested by data centers in California is prompting questions about costly grid upgrades even as speculative projects and fast-shifting AI loads make long-term planning uncertain. Developers have requested 18.7 gigawatts of service capacity for data centers, more than enough to serve every household in the state, according to the California Energy Commission.

    But the report could help shape future debates as lawmakers revisit tougher rules and the CPUC considers new policies on what data centers pay for power — a discussion gaining urgency as scrutiny of their rising electricity costs grows, he said.

    “It could be that the report helps the Legislature to understand the magnitude of the problem and potential solutions,” Freedman said. “It could also inform the CPUC’s own review of the reasonableness of rates for data center customers, which they are likely to investigate.”

    State Sen. Steve Padilla, D-Chula Vista, says that the final version of his law “was not the one we would have preferred,” agreeing that it may seem “obvious” the CPUC can study data center cost impacts. The measure could help frame future debates and at least “says unequivocally that the CPUC has the authority to study these impacts” as demand from data centers accelerates, Padilla added.

    “(Data centers) consume huge amounts of energy, huge amounts of resources, and at least in the near future, we’re not going to see that change,” he said.

    Earlier drafts of Padilla’s measure went further, requiring data centers to install large batteries to support the grid during peak demand and pushing utilities to supply them with 100% carbon-free electricity by 2030 — years ahead of the state’s own mandate. Those provisions were ultimately stripped out.


    How California’s first push to regulate data centers slipped away

    California’s bid to bring more oversight to data centers unraveled earlier this year under industry pressure, ending with Gov. Gavin Newsom’s veto of a bill requiring operators to report their water use. Concerns over the bills reflected fears that data-center developers could shift projects to other states and take valuable jobs with them.

    A September Stanford report on powering California data centers said the state risks losing property-tax revenue, union construction jobs and “valuable AI talent” if data-center construction moves out of state.

    The idea that increased regulation could lead to businesses or dollars in some form leaving California is an argument that has been brought up across industries for decades. It often does not hold up to more careful or long-term scrutiny.

    In the face of this opposition, two key proposals stalled in the Legislature’s procedural churn. Early in the session, Padilla put a separate clean-power incentives proposal for data centers on hold until 2026. Later in the year, an Assembly bill requiring data centers to disclose their electricity use was placed in the Senate’s suspense file — where appropriations committees often quietly halt measures.

    Newsom, who has often spoken of California’s AI dominance, echoed the industry’s competitiveness worries in his veto message of the water-use reporting requirement. The governor said he was reluctant to impose requirements on data centers, “without understanding the full impact on businesses and the consumers of their technology.”

    Despite last year’s defeats, some lawmakers say they will attempt to tackle the issue again.

    Padilla plans to try again with a bill that would add new rules on who pays for data centers’ long-term grid costs in California, while Assemblymember Rebecca Bauer-Kahan, D-San Ramon, will revisit her electricity-disclosure bill.


    Big Tech warns of job losses, but one advocate sees an opening

    After blocking most measures last year — and watering down the lone energy-costs bill — Big Tech groups say they’ll revive arguments that new efforts to regulate data centers could cost California jobs.

    “When we get to the details of what our regulatory regime looks like versus other states, or how we can make California more competitive … that’s where sometimes we struggle to find that happy medium,” he said.

    Despite having more regulations than some states, California continues to toggle between the 4th and 5th largest economy in the world and has for some time, suggesting that the Golden State is very competitive.

    Dan Diorio, vice president of state policy for the Data Center Coalition, another industry lobbying group, said new requirements on data centers should apply to all other large electricity users.

    “To single out one industry is not something that we think would set a helpful precedent, ” Diorio said. “We’ve been very consistent with that throughout the country.”

    Critics say job loss fears are overblown, noting California built its AI sector without the massive hyperscale facilities that typically gravitate to states with ample, cheaper land and streamlined permitting.

    Data-center locations — driven by energy prices, land and local rules — have little to do with where AI researchers live, said Shaolei Ren, an AI researcher at UC Riverside.

    “These two things are sort of separate, they’re decoupled,” he said.

    Freedman, of TURN, said lawmakers may have a bargaining chip: if developers cared about cheaper power, they wouldn’t be proposing facilities in a state with high electric rates. That means speed and certainty may be the priority, giving lawmakers the space to potentially offer quicker approvals in exchange for developers covering more grid costs.

    “There’s so much money in this business that the energy bills — even though large — are kind of like rounding errors for these guys,” Freedman said. “If that’s true, then maybe they shouldn’t care about having to pay a little bit more to ensure that costs aren’t being shifted to other customers.”

    This story was originally published by CalMatters and distributed through a partnership with The Associated Press.

    Copyright 2026 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

    Photos You Should See – December 2025

    [ad_2]

    Associated Press

    Source link

  • Grok chatbot allowed users to create digitally altered photos of minors in

    [ad_1]

    Elon Musk’s Grok, the chatbot developed by his company xAI, acknowledged “lapses in safeguards” on the platform that allowed users to generate digitally altered, sexualized photos of minors.

    The admission comes after multiple users alleged on social media that people are using Grok to generate suggestive images of minors, in some cases stripping them of clothing they were wearing in original photos. 

    In a post on Friday responding to one person on Musk-owned social media site X, Grok said it was “urgently fixing” the holes in its system. Grok also included a link to CyberTipline, a website where people can report child sexual exploitation.

    “There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing, like the example you referenced,” Grok said in a separate post on X on Thursday. “xAI has safeguards, but improvements are ongoing to block such requests entirely.”

    In another social media post, a user posted side-by-side photos of herself wearing a dress and another that appears to be a digitally altered version of the same photo of her in a bikini. “How is this not illegal?” she wrote on X.

    On Friday, French officials reported the sexually explicit content generated by Grok to prosecutors, referring to it as “manifestly illegal” in a statement, according to Reuters.

    xAI, the company that developed the AI chatbot Grok, said “Legacy Media Lies” in a response to a request for comment. 

    Grok has independently taken some responsibility for the content. In one instance last week, the chatbot apologized for generating an AI image of two female minors in “sexualized attire,” adding that the artificial photo violated ethical standards and potentially U.S. law on child pornography. 

    Copyleaks, a plagiarism and AI content detection tool, said in a recent blog post that there are many examples of Grok generating sexualized versions of women.

    “When AI systems allow the manipulation of real people’s images without clear consent, the impact can be immediate and deeply personal,” Alon Yamin, CEO and co-founder of Copyleaks, said in the post.

    [ad_2]

    Source link

  • AI disclosure in healthcare: What patients must know

    [ad_1]

    NEWYou can now listen to Fox News articles!

    Artificial intelligence is quickly reshaping healthcare. It now supports diagnostic imaging, clinical decision tools, patient messages and back office workflows. According to the World Economic Forum, 4.5 billion people still lack access to essential care, and the global health worker shortage could reach 11 million by 2030. AI could help close that gap.

    However, as AI becomes more embedded in care, regulators are zeroing in on a simple question. Should patients be told when AI plays a role in their care?

    In the United States, no single federal law requires broad AI disclosure in healthcare. Instead, a growing patchwork of state laws is filling that gap. Some states require clear disclosure. Others mandate transparency indirectly through limits on how AI can be used.

    Sign up for my FREE CyberGuy Report

    Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.

    STATE-LEVEL AI RULES SURVIVE — FOR NOW — AS SENATE SINKS MORATORIUM DESPITE WHITE HOUSE PRESSURE

    AI now supports many healthcare decisions, from patient communications to coverage reviews, making transparency more important than ever for trust and accountability. (Kurt “CyberGuy” Knutsson)

    Why AI disclosure matters for trust

    Transparency is not a technical detail, it is a trust issue. Research across industries shows people expect to be informed when AI affects decisions that matter to them. In healthcare, that expectation is even stronger. An analysis published by CX Today found that when AI use is hidden, trust erodes quickly, even when outcomes are accurate.

    Healthcare depends on trust. Patients follow treatment plans, share sensitive information and stay engaged when they believe care decisions are ethical and accountable.

    How AI disclosure connects to HIPAA and informed consent

    While HIPAA does not directly regulate artificial intelligence, its principles still apply. Covered entities must clearly explain how protected health information is used and safeguarded.

    When AI systems analyze or generate clinical information using patient data, nondisclosure can undermine that goal. Patients may not fully understand how their information shapes care decisions.

    Disclosure also supports informed consent. Patients have the right to understand material factors influencing diagnosis, treatment, or care communications. Just as clinicians disclose new procedures or medical devices, meaningful AI use should be explained, so patients can ask questions and stay involved in their care.

    AI TOOLS COULD WEAKEN DOCTORS’ SKILLS IN DETECTING COLON CANCER, STUDY SUGGESTS

    A stethoscope

    States are stepping in where federal rules fall short, creating new disclosure requirements when AI influences care access, claims, or treatment decisions. (Kurt “CyberGuy” Knutsson)

    What does AI disclosure mean in healthcare?

    AI disclosure means informing patients or members when artificial intelligence systems are used in healthcare-related decisions. This can include clinical messages, diagnostic support tools, utilization review, claims processing or coverage determinations. The goal is transparency, accountability and patient trust.

    Healthcare activities most likely to trigger disclosure

    According to analysis from Morgan Lewis, disclosure requirements most often apply when AI is used for:

    • Patient-facing clinical communications
    • Utilization review and utilization management
    • Claims processing and coverage decisions
    • Mental health or therapeutic interactions

    These areas are considered high impact because they directly affect access to care and understanding of health information.

    Risks of failing to disclose AI use

    Healthcare organizations that fail to disclose AI use face real consequences. These include increased litigation risk, reputational damage and erosion of patient trust. Ethical concerns around autonomy and transparency can also trigger regulatory scrutiny.

    MORE AMERICANS ARE TURNING TO AI FOR HEALTH ADVICE

    A doctor with arms crossed

    Clear AI disclosure helps patients stay informed and involved, reinforcing that licensed healthcare professionals remain responsible for every medical decision. (Kurt “CyberGuy” Knutsson)

    How states are shaping AI disclosure rules

    States are taking different paths to regulate healthcare AI, but most are starting with one common goal: greater transparency when technology influences care.

    California focuses on communication and coverage decisions

    California has taken one of the most comprehensive approaches.

    AB 3030 requires clinics and physician offices that use generative AI for patient communications to include a clear disclaimer. Patients must also be told how to reach a human healthcare professional.

    SB 1120 applies to health plans and disability insurers. It requires safeguards when AI is used for utilization review. It also mandates disclosure and confirms that licensed professionals make medical necessity decisions.

    Colorado regulates high-risk AI systems

    Colorado’s SB24 205 targets AI systems considered high risk. These are tools that materially influence decisions like approval or denial of healthcare services.

    Entities must implement safeguards against algorithmic discrimination and disclose AI use. While broader than clinical care alone, the law directly affects patient access decisions.

    Utah emphasizes mental health and regulated services

    Utah has layered disclosure rules that intersect with healthcare.

    HB 452 requires mental health chatbots to clearly disclose AI use. SB 149 and SB 226 extend disclosure requirements to regulated occupations, including healthcare professionals.

    This approach ensures transparency in therapeutic interactions and clinical services.

    Other states that are expanding AI transparency

    Several other states are moving in the same direction. Massachusetts, Rhode Island, Tennessee and New York are all considering or enforcing rules that require disclosure and human review when AI influences utilization review or claims outcomes. Even when clinical diagnosis is not covered, these laws push accountability where AI affects care access.

    What this means for you

    If you are a patient, expect more transparency. You may see disclosures in messages, coverage notices or digital interactions. If you work in healthcare, AI governance is no longer optional. Disclosure practices must align across clinical, administrative, and digital systems. Training staff and updating patient notices will matter as much as the technology itself. Trust will increasingly depend on how openly AI is introduced into care.

    CLICK HERE TO DOWNLOAD THE FOX NEWS APP

    Take my quiz: How safe is your online security?

    Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.

    Kurt’s key takeaways

    AI can improve efficiency, expand access, and support clinicians. Yet its value depends on trust. Disclosure does not slow innovation. It strengthens confidence in both the technology and the professionals who use it. As states continue to act, transparency will likely become the norm rather than the exception in healthcare AI.

    If AI helps guide your care, would knowing when and how it is used change the way you trust your healthcare provider? Let us know by writing to us at Cyberguy.com.

    Sign up for my FREE CyberGuy Report

    Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter. 

    Copyright 2025 CyberGuy.com.  All rights reserved.

    [ad_2]

    Source link

  • Airloom will showcase its new approach to wind power at CES

    [ad_1]

    One of the many concerns about artificial intelligence these days is how the rush to build data centers is impacting local communities. Data centers can create a drain on resources, and some utility companies have already said customers can expect to see their electricity bills growing as these facilities increase demand. There have been some discussions of what other power sources could support the AI engine, and wind power specialist Airloom is one company that’s looking to address the problem. Ahead of the business’ upcoming appearance at CES, we’ve learned a bit about what Airloom has accomplished this year and what it is aiming for next.

    Rather than the very tall towers typically used for this approach, Airloom’s structures are 20 to 30 meters high. They are comprised of a loop of adjustable wings that move along a track, a design that’s akin to a roller coaster. As the wings move, they generate power just like the blades on a regular wind turbine do. Airloom claims that its structures require 40 percent less mass than a traditional one while delivering the same output. It also says the Airloom’s towers require 42 percent fewer parts and 96 percent fewer unique parts. In combination, the company says its approach is 85 percent faster to deploy and 47 percent less expensive than horizontal axis wind turbines. Airloom broke ground on a pilot site in June for testing out its approach and confirming how those figures work in practice.

    It’s not feasible to bring a wind farm, even a small one, into CES, but Airloom will have a booth at the event with materials about its technology and engineering. While the business isn’t in a consumer-facing field, the impact of Airloom’s work could have a future positive impact on people if the data center boom continues.

    [ad_2]

    Anna Washenko

    Source link

  • Why some people are turning to artificial intelligence for mental health needs – WTOP News

    [ad_1]

    According to a George Mason University flash poll of about 500 people across the country, about 50% reported using AI for support with mental health issues.

    Many people are turning to artificial intelligence for coping, feedback, guidance and to be a sort of confidant.

    According to a George Mason University flash poll of about 500 people across the country, about 50% reported using AI for support with mental health issues. That figure goes up to 80% for those between ages 25 and 34.

    And 15% of respondents said they used AI for mental health issues every day.

    “We’ve discovered that it is a very convenient and easy and intimate and easily accessible tool for responding to mental health concerns,” said Melissa Perry, dean of George Mason’s College of Public Health.

    While people admit to using the tech for mental health support, some do have lingering questions. People participating in their surveys, Perry said, wonder whether the information they get from AI is trustworthy and whether it ensures their privacy.

    “They were concerned about the privacy and the confidentiality of the data that they were providing by interacting with a chat bot, and they’re also wondering whether or not such platforms have been evaluated and optimized by mental health professionals,” Perry said. “But it’s critically important to keep in mind that they aren’t a replacement for human counselors and therapists and trained mental health professionals.”

    Society, Perry said, has become increasingly more comfortable with screens. However, she said, too much dependence on communicating with a machine could lead some to forget that “we are social beings who need to interact and live in a social world.”

    “Using AI is in response to feelings of loneliness, but it can’t be a cure,” Perry said.

    In the coming years, people who responded to the survey said the tech could be helpful for lowering the cost of mental health services and offering real-time support in particularly stressful moments.

    “The loneliness epidemic has become widely recognized,” Perry said. “People are turning to computers and to chat bots and platforms as a way to cope with loneliness, but it’s not going to be a cure.”

    Further research, Perry said, may help determine how the tech can help people in need without creating a sense of false security or errors in the type of advice that chatbots provide.

    More information on researchers’ findings is available online.

    Get breaking news and daily headlines delivered to your email inbox by signing up here.

    © 2025 WTOP. All Rights Reserved. This website is not intended for users located within the European Economic Area.

    [ad_2]

    Scott Gelman

    Source link

  • 4 A.I. Themes That Defined 2025 and Are Shaping What Comes Next

    [ad_1]

    From infrastructure battles to physical-world intelligence, A.I.’s next chapter is already taking shape. Unsplash

    In November, ChatGPT turned three, with a global user base rapidly approaching one billion. At this point, A.I. is no longer an esoteric acronym that needs explaining in news stories. It has become a daily utility, woven into how we work, learn, shop and even love. The field is also far more crowded than it was just a few years ago, with competitors emerging at every layer of the stack.

    Over the past year, conversation around A.I. has taken on a more complicated tone. Some argue that consumer chatbots are nearing a plateau. Others warn that startup valuations are inflating into a bubble. And, as always, there’s the persistent anxiety that A.I. may one day outgrow human control altogether.

    So what comes next? Much of the industry’s energy is now focused on the infrastructure side of A.I. Big Tech companies are racing to solve the hardware bottlenecks that limit today’s systems, while startups experiment with applications far beyond chatbots. At the same time, researchers are beginning to look past language models altogether, toward models that can reason about the physical world.

    Below are the key themes Observer has identified over the past year of covering this space. Many of these developments are still unfolding and are likely to shape the field well into 2026 and beyond.

    A.I. chips

    Even as OpenAI faces growing competition at the model level, its primary chip supplier, Nvidia, remains in a league of its own. Demand for its GPUs continues to outstrip supply, and no rival has yet meaningfully disrupted its dominance. Traditional semiconductor companies such as AMD and Intel are racing to claw back market share, while some of Nvidia’s largest customers are designing their own chips to reduce dependence on a single supplier.

    Google’s long-in-the-making Tensor Processing Unit, or TPU, has reportedly found its first major customer, Meta, marking a milestone after years of internal use. Meta, Microsoft and Amazon are also deep into developing in-house chips of their own—Meta’s Artemis, Microsoft’s Maia and Amazon’s Trainium.

    World models

    To borrow from philosopher Ludwig Wittgenstein, the limits of language are the limits of our world. Today’s A.I. systems have grown remarkably fluent in human language—especially English—but language captures only a narrow slice of intelligence. That limitation has prompted some researchers to argue that large language models alone can never reach human-level understanding.

    Meta’s longtime chief A.I. scientist, Yann LeCun, has been among the most vocal critics. “We’re never going to get to human-level A.I. by just training on text,” he said during a Harvard talk in September.

    That belief is fueling a push toward so-called “world models,” which aim to teach machines how the physical world works—how objects move, how space is structured, and how cause and effect unfold. LeCun is now leaving Meta to build such a system himself. Fei-Fei Li’s startup, World Labs, unveiled its first model in November after nearly two years of development. Google DeepMind has released early versions through its Genie projects, and Nvidia is betting heavily on physical A.I. with its Cosmos models.

    Language-specific A.I.

    While pioneering researchers look beyond language, linguistic barriers remain one of A.I.’s most practical challenges. More than half of the internet’s content is written in English, skewing training data and limiting performance in other languages.

    In response, developers around the world are building models rooted in local cultures and linguistic norms. In Japan, companies such as Sanaka and NTT are developing LLMs tailored to Japanese language and values. In India, Krutrim is working to support the country’s vast linguistic diversity. France’s Mistral AI has positioned its Le Chat assistant as a European alternative to ChatGPT. Earlier this year, Microsoft also issued a call for proposals to expand training data across European languages.

    A.I. wearables

    It’s only natural that there’s a consumer hardware angle of A.I. This year brought a wave of experiments in wearable A.I.—some met with curiosity, others with discomfort.

    Friend, a startup selling an A.I. pendant, sparked backlash after a New York City subway campaign framed its product as a substitute for human companionship. In December, Meta acquired Limitless, the maker of a $99 wearable that records and summarizes conversations. Earlier in the year, Amazon bought Bee, which produces a $50 bracelet designed to transcribe daily activity and generate summaries.

    Meta is also developing a new line of smart glasses with EssilorLuxottica, the company behind Ray-Ban and Oakley. In July, Mark Zuckerberg went so far as to suggest that people without A.I.-enhanced glasses could eventually face a “significant cognitive disadvantage.” Meanwhile, OpenAI is quietly collaborating with former Apple design chief Jony Ive on a mysterious hardware project of its own. This all suggests the next phase of A.I. may be something we wear, not just something we type into.

    4 A.I. Themes That Defined 2025 and Are Shaping What Comes Next

    [ad_2]

    Sissi Cao

    Source link

  • Investors predict AI is coming for labor in 2026  | TechCrunch

    [ad_1]

    Concerns about how AI will affect workers continue to rise in lockstep with the pace of advancements and new products promising automation and efficiency.

    Evidence suggests that fear is warranted.

    A November MIT study found an estimated 11.7% of jobs could already be automated using AI. Surveys have shown employers are already eliminating entry-level jobs because of the technology. Companies are also already pointing to AI as the reason for layoffs.

    As enterprises more meaningfully adopt AI, some may take a closer look at how many employees they really need.

    In a recent TechCrunch survey, multiple enterprise VCs said AI will have a big impact on the enterprise workforce in 2026. This was particularly interesting because the survey didn’t specifically ask about it.

    Eric Bahn, a co-founder and general partner at Hustle Fund, expects to see affects on labor in 2026. He’s just not sure exactly what that will look like.

    “I want to see what roles that have been known for more repetition get automated, or even more complicated roles with more logic become more automated,” Bahn said. “Is it going to lead to more layoffs? Is there going to be higher productivity? Or will AI just be an augmentation for the existing labor market to be even more productive in the future? All of this seems pretty unanswered, but it seems like something big is going to happen in 2026.”

    Techcrunch event

    San Francisco
    |
    October 13-15, 2026

    Marell Evans, founder and managing partner at Exceptional Capital, predicted companies looking to increase AI spending, will pull money from their pool for labor and hiring.

    “I think on the flip side of seeing an incremental increase in AI budgets, we’ll see more human labor get cut and layoffs will continue to aggressively impact the U.S. employment rate,” Evans said.

    Rajeev Dham, managing director at Sapphire, agreed that 2026 budgets will start to shift resources from labor to AI. Jason Mendel, a venture investor at Battery Ventures, added that AI will start to surpass just being a tool to make existing workers more efficient in 2026.

    “2026 will be the year of agents as software expands from making humans more productive to automating work itself, delivering on the human-labor displacement value proposition in some areas,” Mendel said.

    Antonia Dean, a partner at Black Operator Ventures, said even if companies aren’t shifting labor budgets toward AI projects, they will likely still say AI is the reason for layoffs or a reduction in labor costs anyway.

    “The complexity here is that many enterprises, despite how ready or not they are to successfully use AI solutions, will say that they are increasing their investments in AI to explain why they are cutting back spending in other areas or trimming workforces,” Dean said. “In reality, AI will become the scapegoat for executives looking to cover for past mistakes.”

    Many AI companies argue their technology doesn’t eliminate jobs but rather helps shift workers to “deep work” or to higher-skilled jobs while AI just automates repetitive “busy work.”

    But not everyone buys that argument, and people are worried that their jobs will be automated. According to VCs who invest in that area, it doesn’t sound like those fears will be quelled in 2026.

    [ad_2]

    Rebecca Szkutak

    Source link

  • A.I. Degrees Boom as Students Prepare for an Uncertain Job Market

    [ad_1]

    Universities are rapidly expanding A.I. programs as students seek skills that can withstand an increasingly automated future. Photo by: Jumping Rocks/Universal Images Group via Getty Images

    When Chris Callison-Burch first started teaching an A.I. course at the University of Pennsylvania in2018, his inaugural class had about 100 students. Seven years later, enrollment has swelled to roughly 400—excluding another 250 students attending remotely and an additional 100 to 200 on the waiting list. The professor now teaches in the largest classroom on campus. If his course grew any bigger, he’d need to move into the school’s sports stadium.

    “I would love to think that’s all because I’m a dynamic lecturer,” Callison-Burch told Observer. “But it’s really a testament to the popularity of the field.”

    Demand for A.I. courses and degrees has soared across higher education as the technology plays an increasingly central role in daily life and begins to encroach on once-popular fields like computer science. Amid uncertainty about the future of the labor market, students are seeking to prepare for an A.I.-dominated economy by immersing themselves in the field.

    Universities have followed suit. Schools like Carnegie Mellon and Purdue University are among a number offering undergraduate or graduate degrees in A.I., a trend expected to accelerate in the coming years. The University of Pennsylvania recently became the first Ivy League school to offer both undergraduate and graduate A.I. programs. Its graduate curriculum includes courses in natural language processing and machine learning, in addition to required classes on technology ethics and the broader legal landscape.

    The demand is widespread. The University of Buffalo’s A.I. master’s program enrolled 103 students last year, up from just five in its inaugural 2020 cohort. At the Massachusetts Institute of Technology, undergraduate enrollment in A.I. has jumped from 37 students in 2022 to more than 300. Miami Dade College has seen a 75 percent increase in enrollment in its A.I. programs since 2022, while its other programs have remained relatively steady aside from a “slight decrease in computer science,” the school told Observer.

    Callison-Burch, who also serves as faculty director of Penn’s online A.I. master’s program, has noticed a similar decline. “There’s an interesting trend at the moment where it looks like computer science enrollment is dipping,” he said, pointing to increased A.I.-powered automation across the field. More than 60 percent of undergraduate computing programs saw a decline in employment for the 2025-2026 year compared to the year prior, according to a recent report from the Computing Research Association.

    That decline comes as A.I. reshapes some of the professions most exposed to its advances. In fields like coding, early-career workers have already experienced a 13 percent relative decline in employment, according to an August research paper from Stanford.

    A.I. leaders’ advice for students

    Experts have offered a range of advice as the technology they helped develop begins to reshape the labor market. Demis Hassabis, CEO of Google DeepMind, has advocated for an immersion in A.I. tools, while acclaimed researcher Geoffrey Hinton suggests prospective students focus on a well-rounded education that pairs mathematics and science with liberal arts.

    Yann LeCun, Meta’s former chief A.I. scientist, advises young people to become adept at learning itself, as their job is “almost certainly going to change” over time. “My suggestion is to take courses on topics that are fundamental and have a long shelf life,” he told Observer via email, pointing to mathematics, physics and engineering as core areas of focus.

    It’s not just students grappling with these shifts. Callison-Burch noted that professors, too, are trying to adapt and determine how best to integrate A.I. into their classrooms. One thing, he said, is certain: the technology will only become more pervasive. That makes it all the more important for young people to familiarize themselves with its tools.

    Even so, he acknowledged that predicting how A.I. will reshape the labor market remains extraordinarily difficult, making it hard for students to bet confidently on any one path. “I don’t think there’s an easy way of picking something that’s going to be future-proof, when we can’t yet see that future,” he said.

    A.I. Degrees Boom as Students Prepare for an Uncertain Job Market

    [ad_2]

    Alexandra Tremayne-Pengelly

    Source link

  • Meta buys startup Manus in latest move to advance its artificial intelligence efforts

    [ad_1]

    DETROIT — Meta is buying artificial intelligence startup Manus, as the owner of Facebook and Instagram continues an aggressive push to amp up AI offerings across its platforms.

    The California tech giant declined to disclose financial details of the acquisition. But The Wall Street Journal reported that Meta closed the deal at more than $2 billion.

    Manus, a Singapore-based platform with some Chinese roots, launched its first “general-purpose” AI agent earlier this year. The platform offers paid subscriptions for customers to use this technology for research, coding and other tasks.

    “Manus is already serving the daily needs of millions of users and businesses worldwide,” Meta said in a Monday announcement, adding that it plans to scale this service — as Manus will “deliver general-purpose agents across our consumer and business products, including in Meta AI.”

    Xiao Hong, CEO of Manus, added that joining Meta will allow the platform to “build on a stronger, more sustainable foundation without changing how Manus works or how decisions are made.” Manus confirmed that it would continue to sell and operate subscriptions through its own app and website.

    The platform has grown rapidly over the past year. Earlier this month, Manus announced that it had crossed the $100 million mark in annual recurring revenue, just eight months after launching.

    Some of Manus’ initial financial backers reportedly included China’s Tencent Holdings, ZhenFund and HSG. And the company that first launched the platform — Butterfly Effect, which also operates under the name monica.im, which was founded in China before moving to Singapore.

    A Meta spokesperson confirmed on Tuesday that there would be “no continuing Chinese ownership interests in Manus AI” following its transaction, and that the platform would also discontinue its services and operations in China. Manus reiterated that it would continue to operate in Singapore, where most of its employees are based.

    Meta CEO Mark Zuckerberg has been pushing to revive its commercial AI efforts as the company faces tough competition from rivals such as Google and OpenAI, maker of ChatGPT. In June, the company made a $14.3 billion investment in AI data company Scale and recruited its CEO Alexandr Wang to help lead a team developing “superintelligence” at the tech giant.

    [ad_2]

    Source link

  • OpenAI tightens AI rules for teens but concerns remain

    [ad_1]

    NEWYou can now listen to Fox News articles!

    OpenAI says it is taking stronger steps to protect teens using its chatbot. Recently, the company updated its behavior guidelines for users under 18 and released new AI literacy tools for parents and teens. The move comes as pressure mounts across the tech industry. Lawmakers, educators and child safety advocates want proof that AI companies can protect young users. Several recent tragedies have raised serious questions about the role AI chatbots may play in teen mental health. While the updates sound promising, many experts say the real test will be how these rules work in practice.

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

    THIRD-PARTY BREACH EXPOSES CHATGPT ACCOUNT DETAILS

    OpenAI announced tougher safety rules for teen users as pressure grows on tech companies to prove AI can protect young people online. (Photographer: Daniel Acker/Bloomberg via Getty Images)

    What OpenAI’s new teen rules actually say

    OpenAI’s updated Model Spec builds on existing safety limits and applies to teen users ages 13 to 17. It continues to block sexual content involving minors and discourages self-harm, delusions and manic behavior. For teens, the rules go further. The models must avoid immersive romantic roleplay, first-person intimacy, and violent or sexual roleplay, even when non-graphic. They must use extra caution when discussing body image and eating behaviors. When safety risks appear, the chatbot should prioritize protection over user autonomy. It should also avoid giving advice that helps teens hide risky behavior from caregivers. These limits apply even if a prompt is framed as fictional, historical, or educational.

    The four principles OpenAI says it uses to protect teens

    OpenAI says its approach to teen users follows four core principles:

    • Put teen safety first, even when it limits freedom
    • Encourage real-world support from family, friends, or professionals
    • Speak with warmth and respect without treating teens like adults
    • Be transparent and remind users that the AI is not human

    The company also shared examples of the chatbot refusing requests like romantic roleplay or extreme appearance changes.

    WHY PARENTS MAY WANT TO DELAY SMARTPHONES FOR KIDS

    Teen typing on their laptop.

    The company updated its chatbot guidelines for users ages 13 to 17 and launched new AI literacy tools for parents and teens. (Photographer: Daniel Acker/Bloomberg via Getty Images)

    Teens are driving the AI safety debate

    Gen Z users are among the most active chatbot users today. Many rely on AI for homework help, creative projects and emotional support. OpenAI’s recent deal with Disney could draw even more young users to the platform. That growing popularity has also brought scrutiny. Recently, attorneys general from 42 states urged major tech companies to add stronger safeguards for children and vulnerable users. At the federal level, proposed legislation could go even further. Some lawmakers want to block minors from using AI chatbots entirely.

    Why experts question whether AI safety rules work

    Despite the updates, many experts remain cautious. One major concern is engagement. Advocates argue chatbots often encourage prolonged interaction, which can become addictive for teens. Refusing certain requests could help break that cycle. Still, critics warn that examples in policy documents are not proof of consistent behavior. Past versions of the Model Spec banned excessive agreeableness, yet models continued mirroring users in harmful ways. Some experts link this behavior to what they call AI psychosis, where chatbots reinforce distorted thinking instead of challenging it.

    In one widely reported case, a teenager who later died by suicide spent months interacting with a chatbot. Conversation logs showed repeated mirroring and validation of distress. Internal systems flagged hundreds of messages related to self-harm. Yet the interactions continued. Former safety researchers later explained that earlier moderation systems reviewed content after the fact rather than in real time. That allowed harmful conversations to continue unchecked. OpenAI says it now uses real-time classifiers across text, images, and audio. When systems detect serious risk, trained reviewers may step in, and parents may be notified.

    Some advocates praise OpenAI for publicly sharing its under-18 guidelines. Many tech companies do not offer that level of transparency. Still, experts stress that written rules are not enough. What matters is how the system behaves during real conversations with vulnerable users. Without independent measurement and clear enforcement data, critics say these updates remain promises rather than proof.

    How parents can help teens use AI safely

    OpenAI says parents play a key role in helping teens use AI responsibly. The company stresses that tools alone are not enough. Active guidance matters most.

    1) Talk with teens about AI use

    OpenAI encourages regular conversations between parents and teens about how AI fits into daily life. These discussions should focus on responsible use and critical thinking. Parents are urged to remind teens that AI responses are not facts and can be wrong.

    2) Use parental controls and safeguards

    OpenAI provides parental controls that let adults manage how teens interact with AI tools. These tools can limit features and add oversight. The company says safeguards are designed to reduce exposure to higher-risk topics and unsafe interactions. Here are the steps OpenAI recommends parents take.

    • Confirm your teen’s account statusParents should make sure their teen’s account reflects the correct age. OpenAI applies stronger safeguards to accounts identified as belonging to users under 18.
    • Review available parental controlsOpenAI offers parental controls that allow adults to tailor a teen’s experience. These controls can limit certain features and add extra oversight around higher-risk topics.
    • Understand content safeguardsTeen accounts are subject to stricter content rules. These safeguards reduce exposure to topics like self-harm, sexualized roleplay, dangerous activities, body image concerns and requests to hide unsafe behavior.
    • Pay attention to safety notificationsIf the system detects signs of serious risk, OpenAI says additional safeguards may apply. In some cases, this can include reviews by trained staff and parent notifications.
    • Revisit settings as features changeOpenAI recommends parents stay informed as new tools and features roll out. Safeguards may expand over time as the platform evolves.

    3) Watch for excessive use

    OpenAI says healthy use matters as much as content safety. To support balance, the company has added break reminders during long sessions. Parents are encouraged to watch for signs of overuse and step in when needed.

    4) Keep human support front and center

    OpenAI emphasizes that AI should never replace real relationships. Teens should be encouraged to turn to family, friends or professionals when they feel stressed or overwhelmed. The company says human support remains essential.

    5) Set boundaries around emotional use

    Parents should make clear that AI can help with schoolwork or creativity. It should not become a primary source of emotional support.

    6) Ask how teens actually use AI

    Parents are encouraged to ask what teens use AI for, when they use it and how it makes them feel. These conversations can reveal unhealthy patterns early.

    7) Watch for behavior changes

    Experts advise parents to look for increased isolation, emotional reliance on AI or treating chatbot responses as authority. These can signal unhealthy dependence.

    8) Keep devices out of bedrooms at night

    Many specialists recommend keeping phones and laptops out of bedrooms overnight. Reducing late-night AI use can help protect sleep and mental health.

    9) Know when to involve outside help

    If a teen shows signs of distress, parents should involve trusted adults or professionals. AI safety tools cannot replace real-world care.

    WHEN AI CHEATS: THE HIDDEN DANGERS OF REWARD HACKING

    Laptop open to ChatGPT.

    Lawmakers and child safety advocates are demanding stronger safeguards as teens increasingly rely on AI chatbots. (Photographer: Gabby Jones/Bloomberg via Getty Images)

    Pro Tip: Add strong antivirus software and multi-factor authentication

    Parents and teens should enable multi-factor authentication (MFA) on teen AI accounts whenever it is available. OpenAI allows users to turn on multi-factor authentication for ChatGPT accounts.

    To enable it, go to OpenAI.com and sign in. Scroll down and click the profile icon, then select Settings and choose Security. From there, turn on multi-factor authentication (MFA). You will then be given two options. One option uses an authenticator app, which generates one-time codes during login. Another option sends 6-digit verification codes by text message through SMS or WhatsApp, depending on the country code. Enabling multi-factor authentication adds an extra layer of protection beyond a password and helps reduce the risk of unauthorized access to teen accounts.

    Also, consider adding a strong antivirus software that can help block malicious links, fake downloads, and other threats teens may encounter while using AI tools. This adds an extra layer of protection beyond any single app or platform.  Using strong antivirus protection and two-factor authentication together helps reduce the risk of account takeovers that could expose teens to unsafe content or impersonation risks.

    Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.
     

    Take my quiz: How safe is your online security?

    Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.

    CLICK HERE TO DOWNLOAD THE FOX NEWS APP

    Kurt’s key takeaways

    OpenAI’s updated teen safety rules show the company is taking growing concerns seriously. Clearer limits, stronger safeguards, and more transparency are steps in the right direction. Still, policies on paper are not the same as behavior in real conversations. For teens who rely on AI every day, what matters most is how these systems respond in moments of stress, confusion, or vulnerability. That is where trust is built or lost. For parents, this moment calls for balance. AI tools can be helpful and creative. They also require guidance, boundaries, and supervision. No set of controls can replace real conversations or human support. As AI becomes more embedded in our everyday lives, the focus must stay on outcomes, not intentions. Protecting teens will depend on consistent enforcement, independent oversight, and active family involvement.

    Should teens ever rely on AI for emotional support, or should those conversations always stay human?  Let us know by writing to us at Cyberguy.com.
     

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter. 

    Copyright 2025 CyberGuy.com. All rights reserved.

    [ad_2]

    Source link

  • As a property slump drags on, China’s economy looks more resilient than it feels

    [ad_1]

    HONG KONG — By some measures, China’s economy is looking resilient, with strong exports and breakthroughs in artificial intelligence and other advanced technologies.

    But that’s not how it feels for many ordinary Chinese, who have been enduring the strain from weak property prices and uncertainty over their jobs and incomes.

    While some industries are thriving thanks to government support for technologies such as AI and electric vehicles, owners of small businesses report tough times as their customers cut back on spending.

    Some economists believe that the world’s second largest economy is growing more slowly than official figures suggest, even though China may hit its official 2025 annual growth target of about 5%. Beijing has averted a damaging full blown trade war with Washington after President Donald Trump struck a truce with Chinese leader Xi Jinping, but many longer-term challenges remain.

    Business is “very tough” right now as people don’t have much disposable income, said billiards hall owner Xiao Feng, who lives in Beijing.

    “It seems the wealthy don’t have the time, and the ordinary folks don’t have money to spend,” said Xiao. “After deducting all costs, including rent, labor, utilities, I’m just breaking even.”

    Xiao and his wife, a nurse, have a 10-year-old son. With her stable income, she is now the household’s breadwinner.

    “Before, I used to contribute about 100,000 yuan (about $14,250) annually to the household,” said Xiao, who has cut his staff from eight to five as competition has intensified. “But I’ve had no income for about six consecutive months now.”

    Beijing-based commercial property agent Zhang Xiaoze said he used to make up to 3 million yuan (nearly $428,000) a year during the peak years of the mid-2010s. Now he brings in about 100,000 yuan annually, and the the business environment is “extremely challenging,” he said.

    “Demand is weak because many companies are relocating out of Beijing,” Zhang said, who is married with one child. “The fundamental issue is that people don’t have money.”

    “There are times when I must dip into my savings to support the family,” he said.

    China’s ruling Communist Party is promoting leader Xi’s push for “high-quality growth” and domestic innovation as it shifts investment and policies toward a consumption-driven growth model and high-tech industries.

    During its rapid ascent as an export manufacturing superpower, China invested heavily in infrastructure such as railways, highways and ports, industrial zones and other property development. While boosting consumer spending and business investment are key priorities, exports remain a vital driver of employment and economic growth.

    In the first 11 months of this year Chinese exports amounted to a record $3.4 trillion — with growing shipments to Southeast Asia and Europe helping to offset a sharp drop to the U.S. — versus imports of $2.3 trillion.

    “China’s economy is amidst what I call a ‘Great Transition,’ as it moves away from the growth engines that drove growth the past three decades,” said Lynn Song, chief economist for Greater China at ING.

    As is true in the U.S., in China the AI boom has helped drive gains in share prices. But the resources that have poured into the technology sector have not translated into a direct wealth effect for most people, said Song. “It is no surprise that many feel the situation on the ground is not reflecting the relatively more optimistic growth picture,” he said.

    The divergence between the official economic growth figures and what many Chinese people are feeling suggests China ’s actual growth “may be well below” what official data suggest, said Zichun Huang, China economist at Capital Economics.

    Recent economic data indicate growth is slowing. Retail sales increased by just 1.3% in November from a year earlier, slower than October’s 2.9% growth. Fixed-asset investment, meanwhile, dropped 2.6% in the first 11 months of 2025.

    Disposable household income growth has been running below pre-pandemic pace in recent years, economists at HSBC said in a recent report, and “income gains from property have virtually vanished.”

    The International Monetary Fund recently raised China’s growth forecast from 4.8% to 5%, near the official target, and banks including Goldman Sachs raised their forecast for China’s economic growth in recent months.

    Other estimates vary. Capital Economics forecasts growth at a 3% to 3.5% annual pace this year. The Rhodium Group, a think tank, puts it at 2.5% to 3%.

    Much of China’s consumer and investor confidence hinges on property, the main repository for most household wealth. Housing prices have fallen 20% or more since they peaked in 2021. The massive downturn followed a crackdown on excessive borrowing in the real estate industry that triggered a debt crisis.

    In the first 11 months of this year, new home sales fell 11.2% by value from a year earlier, according to China’s National Bureau of Statistics. Property investments fell nearly 16% year-on-year.

    Xiao, the Beijing billiards hall owner, bought an apartment in the city’s Tongzhou district in 2019 for more than 3 million yuan. ($428,000). It’s now worth about ($342,000).

    “I drive a ten-year-old car and have no plans to replace it given the economic climate,” Xiao said. “If my apartment hadn’t depreciated so significantly, I might have already bought a new one.”

    Xiao said he used to spend a “considerable amount” on his son’s tutoring fees. “But now we’ve cut that entirely and teach him ourselves instead,” he added. “I feel quite uncertain about the economic outlook.”

    A Tianjin-based tutor, who only gave his surname as Zhou as he’s not authorized by his company to speak to the media, said his income dipped by more than a third as more parents stopped sending their children for tutoring.

    “Because of the economic situation, parents are unwilling to spend money on tutoring,” said Zhou. “They prefer large group classes instead of one-on-one tutoring.”

    “Business is much worse than before — about 50 percent worse than during the COVID period,” he added. “The future looks bleak.”

    Most forecasts are for the economy to grow more slowly in 2026 and beyond, as China’s leaders tinker with incremental policies while putting off fundamental reforms that might help boost consumer confidence. Challenges ahead center on consumption and investment, but with the housing market remaining weak, growth momentum may be slow, economists said.

    Excess supply in many industries, including autos, steel and consumer goods is a chronic problem, depressing prices and profits. Chinese export prices have fallen by over 20% overall since early 2022, according to HSBC. Government efforts to tame price wars have so far had “minimal impact,” it said.

    The country’s growing trade surplus, at more than $1 trillion in 2025, is also adding to trade friction, potentially triggering protectionist moves that may crimp exports.

    Economists such as Michael Pettis of the Carnegie Endowment for International Peace argue that a fundamental shift enabling workers to hold much more of the nation’s wealth is needed. But that so far appears to be politically untenable.

    With people cutting back on everything including business trips, a budget hotel owner in the northern city of Shijiazhuang was glum about the outlook.

    “I don’t see an immediate rebound in the economy,” said the man, who gave only his surname, Zhai, fearing that making critical comments about the economy could get him in trouble. “(I) don’t have a high level of education, so switching industries is almost impossible. Other industries are also struggling.”

    “My lease expires next May or June,” he added. “If the situation hasn’t improved by then, I will shut down the hotel.”

    ____

    AP’s Beijing newsroom contributed to this story.

    [ad_2]

    Source link

  • The 11 big trades of 2025: Bubbles, cockroaches and a 367% jump

    [ad_1]

    It was another year of high-conviction bets — and fast reversals.

    From bond desks in Tokyo and credit committees in New York to currency traders in Istanbul, markets delivered both windfalls and whiplash. Gold hit records. Staid mortgage behemoths gyrated like meme stocks. A textbook carry trade blew up in a flash.

    Investors bet big on shifting politics, bloated balance sheets and fragile narratives, fueling outsized stock rallies, crowded yield trades, and crypto strategies built on leverage, hope, and not much else. Donald Trump’s White House return quickly sank — and then revived — financial markets across the world, lit a fire under European defense stocks, and emboldened speculators fanning mania after mania. Some positions paid off spectacularly. Others misfired when momentum reversed, financing dried up or leverage cut the wrong way.

    As the year draws to a close, Bloomberg highlights some of the most eye-catching wagers of 2025 — the wins, the wipeouts and the positions that defined the era. Many of those bets leave investors fretting over all-too-familiar fault lines as they prepare for 2026: shaky companies, stretched valuations, and trend-chasing trades that work, until they don’t.

    Crypto: Trumped

    It looked like one of crypto’s more compelling momentum bets: load up on anything and everything tied to the Trump brand. During his presidential campaign and after he took office, Trump went all-in on digital assets — pushing sweeping reforms and installing industry allies across powerful agencies. His family leaned in, championing coins and crypto firms that traders treated as political rocket fuel.

    The franchise came together fast. Hours before the inauguration, Trump launched a memecoin and promoted it on social media. First Lady Melania Trump soon followed with her own token. Later in the year, Trump family–affiliated World Liberty Financial made its WLFI token tradable and available to retail investors. A set of Trump-adjacent trades followed. Eric Trump co-founded American Bitcoin, a publicly traded miner that went public via a merger in September.

    Each debut sparked a rally. Each proved ephemeral. As of Dec. 23, Trump’s memecoin was floundering, off more than 80% from its January high. Melania’s was down nearly 99%, according to CoinGecko. American Bitcoin had sunk about 80% from its September peak.

    Politics gave the trades a push. The laws of speculation pulled them back down. Even with a friend in the White House, these trades couldn’t escape crypto’s core pattern: prices rise, leverage floods in, and liquidity dries up. Bitcoin, still the bellwether, is on track for an annual loss after slumping from its October peak. For Trump-linked assets, politics offered momentum, but no protection. — Olga Kharif

    AI Trade: The Next Big Short?

    The trade was revealed in a routine filing, yet its impact was anything but routine. Scion Asset Management disclosed on Nov. 3 that it held protective put options in Nvidia Corp. and Palantir Technologies Inc. — stocks at the center of the artificial intelligence trade that’s powered the market’s rally for three years. While not a whale-sized hedge fund, Scion commands attention due to the person who runs it: Michael Burry, who earned fame as a market prophet in The Big Short book and movie about the mortgage bubble that led to the 2008 crisis.

    The strike prices were startling: Nvidia’s was 47% below where the stock had just closed, while Palantir’s was 76% below. But some mystery lingered: Due to limited reporting requirements, it was unclear if the puts — contracts that give an investor the right to sell a stock at a certain price by a certain date — were part of a more complicated trade. And the filing offered just a snapshot of Scion’s books on Sept. 30, leaving open the possibility that Burry had since trimmed or exited the positions. Yet skepticism about the lofty valuations and massive spending plans of major AI players had been building like a pile of dry kindling. Burry’s disclosure landed like a freshly struck match.

    Nvidia, the largest stock in the world, tumbled in reaction, as did Palantir, though they later regained ground. The Nasdaq also dipped.

    It’s impossible to know exactly how much Burry made. One bread crumb he left was a post on X saying he paid $1.84 for the Palantir puts; those options went on to gain as much as 101% in less than three weeks. The filing crystallized doubts simmering beneath a market dominated by a narrow group of AI-linked stocks, heavy passive inflows and subdued volatility. Whether the trade proves prescient or premature, it underscored how quickly even the most dominant market narratives can turn once belief begins to crack. — Michael P. Regan

    Defense Stocks: New World Order

    A geopolitical shift has led to huge gains in a sector once deemed toxic by asset managers: European defense. Trump’s plans to take a step back from funding Ukraine’s military sent European governments into a spending spree, giving a huge lift to shares of regional defense firms — from the roughly 150% year-to-date rally in Germany’s Rheinmetall AG as of Dec. 23, to Italy’s Leonardo SpA more than 90% ascent during the period.

    Money managers who once saw the sector as too controversial to touch amid environmental, social and governance concerns changed their tune and a number of funds even redefined their mandates.

    “We had taken defense out of our ESG funds until the beginning of this year,” said Pierre Alexis Dumont, chief investment officer at Sycomore Asset Management. “There was a change of paradigm, and when there is a change of paradigm, one has to be responsible and also defend one’s values. So we’re focusing on defensive weapons.”

    From goggle makers to chemicals producers, and even a printing company, stocks were snapped up in a mad rush. A Bloomberg basket of European defense stocks was up more than 70% for the year as of Dec. 23. The boom spilled into credit markets as well, with firms only tangentially linked to defense attracting hordes of prospective lenders. Banks even started selling “European Defence Bonds,” modeled on green bonds except in this case ringfenced for borrowers like weapons manufacturers. It marked a repricing of defense as a public good rather than a reputational liability — and a reminder that when geopolitics shifts, capital tends to follow faster than ideology. — Isolde MacDonogh

    Debasement Trade: Fact or Fiction? 

    Heavy debt loads in major economies such as the US, France and Japan — and a lack of political appetite to confront them — pushed some investors in 2025 to tout gold and alternative assets like crypto, while cooling enthusiasm for government bonds and the US dollar. The idea gained traction under a bearish label: the “debasement trade,” a nod to historic episodes when rulers such as Nero diluted the value of money to cope with fiscal strain.

    The narrative reached a crescendo in October, when concerns over the US fiscal outlook collided with the longest government shutdown on record. Investors searched for shelter beyond the dollar. That month, gold and Bitcoin both rose to records — a rare moment for assets often cast as rivals.

    As a story, debasement offered a clean explanation for a messy macro backdrop. As a trade, it proved more complicated. Bitcoin has since slumped amid a broader retreat in cryptocurrencies. The dollar stabilized somewhat. Treasuries, far from collapsing, are on track for their best year since 2020 — a reminder that fears of fiscal erosion can coexist with powerful demand for safe assets, particularly when growth slows and policy rates peak.

    Elsewhere, price action told a different story. Swings in metals from copper to aluminum, and even silver, were driven at least as much by Donald Trump’s tariff policies and macro forces as by concerns about currency debasement, blurring the line between inflation hedging and old-fashioned supply shocks. Gold, meanwhile, has kept powering ahead, reaching new all-time highs. In that corner of the market, the debasement trade endured — less as a sweeping judgment on fiat, more as a focused bet on rates, policy and protection. — Richard Henderson

    Korean Stocks: K-Pop

    Move over, K-drama. When it comes to plot twists and thrills, it’s hard to beat this year’s action in South Korea’s stock market. Fueled by President Lee Jae Myung’s efforts to boost the country’s capital markets, the benchmark equity index rocketed more than 70% in 2025 through Dec. 22, headed toward his aspirational goal of 5000 and handily topping the charts among major stock gauges worldwide.

    It’s rare to see a political leader publicly set an index level as a goal, and Lee’s “Kospi 5000” campaign drew little attention when it was first announced. Now, more and more Wall Street banks including JPMorgan Chase & Co. and Citigroup Inc. think it’s achievable in 2026, helped in part by the global AI boom, which has increased demand for South Korean stocks as Asia’s go-to artificial intelligence trade.

    There is one notable absence from the Kospi’s world-beating rally: local retail investors. While Lee often reminds voters that he was once a retail investor himself before entering public office, his reform agenda has yet to persuade domestic investors that the market is a durable buy-and-hold proposition. Even as foreign money has poured into Korean equities, local mom-and-pop investors have been net sellers, channeling a record $33 billion into US stocks and chasing higher-risk bets ranging from crypto to leveraged exchange-traded funds overseas.

    One side effect has been pressure on the currency. As capital flowed outward, the won weakened, a reminder that even blockbuster equity rallies can mask lingering skepticism at home. — Youkyung Lee

    Bitcoin Showdown: Chanos v Saylor

    There are two sides to every story. In the case of short-seller Jim Chanos’s arbitrage play involving Bitcoin hoarder Michael Saylor’s Strategy Inc., there were also two big personalities, and a trade that was fast becoming a referendum on crypto-era capitalism.

    In early 2025, as Bitcoin soared and Strategy’s shares went through the roof, Chanos saw an opportunity. The rally in Strategy had stretched the premium the company’s shares enjoyed relative to its Bitcoin holdings, something the legendary investor saw as unsustainable. So he decided to short Strategy and go long Bitcoin, announcing the move in May when the premium was still wide.

    Chanos and Saylor started publicly trading barbs. “I don’t think he understands what our business model is,” Saylor told Bloomberg TV in June about Chanos, who in turn, called Saylor’s explanations “complete financial gibberish” in an X post.

    Strategy’s shares hit a record in July, marking a 57% year-to-date gain, but as the number of so-called digital asset treasury firms exploded and crypto token prices fell from their highs, Strategy shares — and those of its copycats — began to suffer and the company’s premium to Bitcoin shrank. Chanos’s wager was paying off.

    From the time Chanos made his short call on Strategy public through Nov. 7, the date he said he exited from the position, Strategy shares dropped 42%. Beyond the P&L, it illustrated a recurring crypto boom-and-bust pattern: balance sheets inflated by confidence, and confidence sustained by rising prices and financial engineering. It works until belief falters — at which point the premium stops being a feature and starts being the problem. — Monique Mulima

    Japanese Bonds: Widowmaker to Rainmaker

    If there was one bet that repeatedly burned macro investors in the past few decades, it’s the infamous “widowmaker” wager against Japanese bonds. The reasoning behind the trade always seemed simple. Japan carried a vast public debt, and so the thinking was that interest rates just had to rise sooner or later to lure in enough buyers. Investors, therefore, borrowed bonds and sold them, expecting prices to fall once reality asserted itself. For years, however, that logic proved premature and expensive, as the central bank’s loose policies kept borrowing costs low and punished anyone who tried to rush the outcome. No longer.

    In 2025, the widowmaker turned rainmaker as yields on benchmark government bonds surged across the board, making the $7.4 trillion Japan debt market a short-seller’s dream. The triggers spanned everything from interest rate hikes to Prime Minister Sanae Takaichi unleashing the country’s biggest burst of spending since pandemic restrictions eased. Yields on benchmark 10-year JGBs soared past 2% to reach levels not seen in decades, while those on 30-year paper advanced more than a full percentage point to an all-time high. A Bloomberg gauge of Japanese government bond returns fell more than 6% this year through Dec. 23, the worst-performing major market in the world.

    Fund managers from Schroders to Jupiter Asset Management to RBC BlueBay Asset Management discussed selling JGBs in some form during the year and investors and strategists are betting the trade has room to run, as benchmark policy rates edge higher. On top of that, the Bank of Japan is trimming its bond purchases, pressuring yields. And with the nation boasting the highest government debt-to-GDP ratio in the developed world by a wide margin, bearishness to JGBs is likely to persist. — Cormac Mullen

    Credit Scraps: Playing Hardball Pays

    Some of 2025’s richest credit payoffs didn’t come from turnaround bets, but from turning on fellow investors. The dynamic, known as “creditor-on-creditor violence,” paid off big for funds like Pacific Investment Management Co. and King Street Capital Management, who waged a calculated campaign around KKR-backed Envision Healthcare.

    When Envision, a hospital staffing company, ran aground after the Covid-19 pandemic, it needed a loan from new investors. But raising new debt meant pledging assets already spoken for. While many debt holders formed a group to oppose the new financing, Pimco, King Street and Partners Group broke ranks. Their support enabled a vote to allow the collateral — a stake in Envision’s valuable ambulatory-surgery business Amsurg — to be released by the old lenders and used to back the new debt.

    The funds became holders of Amsurg-backed debt that eventually converted into Amsurg equity. Then Amsurg sold to Ascension Health this year for $4 billion. The funds who spurned their peers generated returns of around 90%, by one measure, demonstrating the payoff from waging such internecine battles. The lesson: in today’s credit markets, governed by loose documentation and fragmented creditor groups, cooperation is optional. Being right is not always enough. The bigger risk is being outflanked. —Eliza Ronalds-Hannon

    Fannie-Freddie: Revenge of the “Toxic Twins”

    Fannie Mae and Freddie Mac, the mortgage-finance giants that have been under Washington’s control since the financial crisis, have long been the subject of speculation over when and how they would be released from the government’s grip. Boosters such as hedge fund manager Bill Ackman loaded up on the two in the hopes of scoring a windfall on any privatization plan, but the shares languished for years in over-the-counter trading as the status quo prevailed.

    Then came Donald Trump’s re-election, which catapulted the stocks into a meme-like zeal on optimism the new administration would take steps to free up the companies. In 2025, the excitement ratcheted up even more: The shares soared 367% from the start of the year to their high in September — 388% on an intraday basis — and remain big winners for 2025.

    Driving the momentum to its peak this year was word in August that the administration was contemplating an IPO that could value the enterprises at around $500 billion or more, involving selling 5% to 15% of their stock to raise about $30 billion. While the shares have wavered from their September high amid skepticism about when, and whether, an IPO will actually materialize, many remain confident in the story.

    Ackman in November unveiled a proposal he pitched to the White House, which calls for relisting Fannie and Freddie on the New York Stock Exchange, writing down the Treasury’s senior-preferred stake and exercising the government’s option to acquire nearly 80% of the common stock. Even Michael Burry joined the party, announcing a bullish position in early December and musing in a 6,000-word blog post that the companies which once needed the government to save them from insolvency may be “toxic twins no more.” — Felice Maranz

    Turkey Carry Trade: Cooked

    The Turkish carry trade was a consensus favorite for emerging-market investors after a stellar 2024. With local bond yields above 40% and a central bank backing a stable dollar peg, traders piled in — borrowing cheaply abroad to buy high-yield Turkish assets. That drew billions from firms like Deutsche Bank, Millennium Partners and Gramercy — some of them on the ground in Turkey on March 19, the day the trade blew up in minutes.

    It was on that morning that Turkish police raided the home of Istanbul’s popular opposition mayor and took him into custody, sparking protests — and a frenzied selloff in the lira that the central bank was unable to contain. “People got caught very much by surprise and won’t go back in a hurry,” Kit Juckes, head of FX strategy at Societe Generale SA in Paris, said at the time.

    By the end of the day, outflows from Turkish lira-denominated assets were estimated at around $10 billion, and the market never really recovered. As of Dec. 23, the lira was some 17% weaker against the dollar for the year, one of the world’s worst performers. The episode served as a reminder that high interest rates can reward risk-takers, but they offer no protection against sudden political shocks. — Kerim Karakaya

    Debt Markets: Cockroach Alert

    Credit markets in 2025 were unsettled not by a single spectacular collapse, but by a series of smaller ones that exposed uncomfortable habits. Companies once considered routine borrowers ran into trouble, leaving lenders nursing steep losses.

    Saks Global restructured $2.2 billion in bonds after making only a single interest payment, and the restructured debt is itself now trading at less than 60 cents on the dollar. New Fortress Energy’s newly-exchanged bonds lost more than half their value in the span of a year. The bankruptcies of Tricolor and then First Brands wiped out billions in debt holdings in a matter of weeks. In some cases, sophisticated fraud was at the root of the collapse. In others, rosy projections failed to materialize. In every case, investors were left to answer for how they justified taking large credit gambles on companies with little to no proof they’d be able to repay the debt.

    Years of low defaults and loose money eroded standards, from lender protections to basic underwriting. Lenders to both First Brands and Tricolor had failed to discover the borrowers were allegedly double-pledging assets and co-mingling collateral that backed various loans.

    Those lenders included JPMorgan, whose chief executive Jamie Dimon put the market on alert in October when he colorfully warned of more trouble to come, saying, “When you see one cockroach, there are probably more.” A theme for 2026. — Eliza Ronalds-Hannon

    –With assistance from Benjamin Harvey, Kerim Karakaya, Youkyung Lee, Cormac Mullen, Michael P. Regan, Isolde MacDonogh, Eliza Ronalds-Hannon, Yvonne Yue Li and Matt Turner.

    More stories like this are available on bloomberg.com

    ©2025 Bloomberg L.P.

    [ad_2]

    Bloomberg

    Source link

  • OpenAI says it’s hiring a head safety executive to mitigate AI risks

    [ad_1]

    OpenAI is seeking a new “head of preparedness” to guide the company’s safety strategy amid mounting concerns over how artificial intelligence tools could be misused.

    According to the job posting, the new hire will be paid $555,000 to lead the company’s safety systems team, which OpenAI says is focused on ensuring AI models are “responsibly developed and deployed.” The head of preparedness will also be tasked with tracking risks and developing mitigation strategies for what OpenAI calls “frontier capabilities that create new risks of severe harm.”

    “This will be a stressful job and you’ll jump into the deep end pretty much immediately,” CEO Sam Altman wrote in an X post describing the position over the weekend.

    He added, “This is a critical role at an important time; models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges.”

    OpenAI did not immediately respond to a request for comment.

    The company’s investment in safety efforts comes as scrutiny intensifies over artificial intelligence’s influence on mental health, following multiple allegations that OpenAI’s chatbot, ChatGPT, was involved in interactions preceding a number of suicides.

    In one case earlier this year covered by CBS News, the parents of a 16-year-old sued the company, alleging that ChatGPT encouraged their son to plan his own suicide. That prompted OpenAI to announce new safety protocols for users under 18. 

    ChatGPT also allegedly fueled what a lawsuit filed earlier this month described as the “paranoid delusions” of a 56-year-old man who murdered his mother and then killed himself. At the time, OpenAI said it was working on improving its technology to help ChatGPT recognize and respond to signs of mental or emotional distress, de-escalate conversations and guide people toward real-world support.

    Beyond mental health concerns, worries have also increased over how artificial intelligence could be used to carry out cybersecurity attacks. Samantha Vinograd, a CBS News contributor and former top Homeland Security official in the Obama administration, addressed the issue on CBS News’ “Face the Nation with Margaret Brennan” on Sunday.

    “AI doesn’t just level the playing field for certain actors,” she said. “It actually brings new players onto the pitch, because individuals, non-state actors, have access to relatively low-cost technology that makes different kinds of threats more credible and more effective.”

    Altman acknowledged the growing safety hazards AI poses in his X post, writing that while the models and their capabilities have advanced quickly, challenges have also started to arise.

    “The potential impact of models on mental health was something we saw a preview of in 2025; we are just now seeing models get so good at computer security they are beginning to find critical vulnerabilities,” he wrote.

    Now, he continued, “We are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides … in a way that lets us all enjoy the tremendous benefits.”

    According to the job posting, a qualified applicant would have “deep technical expertise in machine learning, AI safety, evaluations, security or adjacent risk domains” and have experience with “designing or executing high-rigor evaluations for complex technical systems,” among other qualifications.

    OpenAI first announced the creation of a preparedness team in 2023, according to TechCrunch.

    [ad_2]

    Source link