ReportWire

Tag: Artificial Intelligence

  • Spanish Feminist Targeted by AI Fakes Wants Stricter Online Regulations


    MADRID, Feb 27 (Reuters) – A Spanish women’s rights activist who suffered online abuse, including AI-generated fake nude images, said the government’s pledge to regulate social media does not go far enough, calling for anonymous accounts to be made traceable to end impunity for digital violence.

    As Europe’s push to rein in U.S.-based tech giants is shifting from fines and takedown notices to stiffer measures, Madrid wants to impose a ban on under-16s accessing social media and criminal liability for platform executives who fail to remove illegal or hateful content.

    France, Greece and Poland are weighing similar measures after Australia became the first country to block social media for children under 16 in December.

    Carla Galeote, a 25-year-old lawyer and prominent online feminist commentator, told Reuters governments were reacting only now because digital violence had become impossible to ignore, although the problem predated AI.

    “Social media isn’t new – and the violence is brutal, systematic, 24/7,” Galeote said. “What hit me hardest wasn’t the deepfake, it was going to the police and being told it wasn’t even a crime.”

    She dismissed plans to ban children from social media as “paternalistic”, arguing all users, regardless of age, need protection from digital abuse.

    Spain’s proposed law has sparked backlash from tech company executives, who accuse Prime Minister Pedro Sanchez of threatening free speech. Galeote, however, believes regulation and freedom of expression can coexist.

    “It’s impossible to think that a man on the street could shout that they’ll rape you and nothing happens, but that’s what we’re seeing online,” she said. 

    Instead of imposing easily absorbable fines, Galeote advocated barring platforms from major markets, like the European Union, for repeated violations.

    While defending pseudonymous online use, Galeote emphasized the need for traceable identities behind all accounts. 

    “Call yourself ‘PeppaPig88’ if you want – fine. But there has to be a real identity behind that account,” she said.

    (Reporting by David Latona; Editing by Aislinn Laing and Andrei Khalip)

    Copyright 2026 Thomson Reuters.


  • Anthropic refuses to bend to Pentagon on AI safeguards as dispute nears deadline


    A public showdown between the Trump administration and Anthropic is hitting an impasse as military officials demand the artificial intelligence company bend its ethical policies by Friday or risk damaging its business.

    Anthropic CEO Dario Amodei drew a sharp red line 24 hours before the deadline, declaring his company “cannot in good conscience accede” to the Pentagon’s final demand to allow unrestricted use of its technology.

    Anthropic, maker of the chatbot Claude, can afford to lose a defense contract. But the ultimatum this week from Defense Secretary Pete Hegseth posed broader risks at the peak of the company’s meteoric rise from a little-known computer science research lab in San Francisco to one of the world’s most valuable startups.

    If Amodei doesn’t budge, military officials have warned they will not just pull Anthropic’s contract but also “deem them a supply chain risk,” a designation typically stamped on foreign adversaries that could derail the company’s critical partnerships with other businesses.

    And if Amodei were to cave, he could lose trust within the booming AI industry, particularly among top talent drawn to the company for its promises of responsibly building better-than-human AI that, without safeguards, could pose catastrophic risks.

    Anthropic said it sought narrow assurances from the Pentagon that Claude won’t be used for mass surveillance of Americans or in fully autonomous weapons. But after months of private talks exploded into public debate, it said in a Thursday statement that new contract language “framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will.”

    That was after Sean Parnell, the Pentagon’s top spokesman, posted on social media that “we will not let ANY company dictate the terms regarding how we make operational decisions” and added the company has “until 5:01 p.m. ET on Friday to decide” if it would meet the demands or face consequences.

    Emil Michael, the defense undersecretary for research and engineering, later lashed out at Amodei, alleging on X that he “has a God-complex” and “wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk.”

    That message hasn’t resonated in much of Silicon Valley, where a growing number of tech workers from Anthropic’s top rivals, OpenAI and Google, voiced support for Amodei’s stand late Thursday in an open letter.

    OpenAI and Google, along with Elon Musk’s xAI, also have contracts to supply their AI models to the military.

    “The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused,” the open letter says. “They’re trying to divide each company with fear that the other will give in.”

    Also raising concerns about the Pentagon’s approach were Republican and Democratic lawmakers and a former leader of the Defense Department’s AI initiatives.

    “Painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end,” wrote retired Air Force Gen. Jack Shanahan in a social media post.

    Shanahan faced a different wave of tech worker opposition during the first Trump administration when he led Maven, a project to use AI technology to analyze drone footage and target weapons. So many Google employees protested its participation in Project Maven at the time that the tech giant declined to renew the contract and then pledged not to use AI in weaponry.

    “Since I was square in the middle of Project Maven & Google, it’s reasonable to assume I would take the Pentagon’s side here,” Shanahan wrote Thursday on social media. “Yet I’m sympathetic to Anthropic’s position. More so than I was to Google’s in 2018.”

    He said Claude is already being widely used across the government, including in classified settings, and Anthropic’s red lines are “reasonable.” He said the AI large language models that power chatbots like Claude are also “not ready for prime time in national security settings,” particularly not for fully autonomous weapons.

    “They’re not trying to play cute here,” he wrote.

    Parnell asserted Thursday that the Pentagon wants to “use Anthropic’s model for all lawful purposes” and said opening up use of the technology would prevent the company from “jeopardizing critical military operations,” though neither he nor other officials have detailed how they want to use the technology.

    The military “has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement,” Parnell wrote.

    When Hegseth and Amodei met Tuesday, military officials warned that they could designate Anthropic as a supply chain risk, cancel its contract or invoke a Cold War-era law called the Defense Production Act to give the military more sweeping authority to use its products, even if the company doesn’t approve.

    Amodei said Thursday that “those latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.” He said he hopes the Pentagon will reconsider given Claude’s value to the military, but, if not, Anthropic “will work to enable a smooth transition to another provider.”

    ___

    AP reporter Konstantin Toropin contributed to this report.


  • Fintech company Block lays off 4,000 of its 10,000 staff, citing gains from AI


    BANGKOK — Shares in the financial technology company Block soared more than 20% in premarket trading Friday after its CEO announced it was laying off more than 4,000 of its 10,000-plus employees, reconfiguring to capitalize on its use of artificial intelligence.

    “The core thesis is simple. Intelligence tools have changed what it means to build and run a company,” Jack Dorsey said in a letter to shareholders in Block, the parent company to online payment platforms such as Square and Cash App. “A significantly smaller team, using the tools we’re building, can do more and do it better,” he said.

    Dorsey’s comments explicitly naming AI as a key driver behind the move were also posted on X, formerly Twitter, a company he co-founded. The assertion that the job cuts will add to Block’s profitability and efficiency led investors to jump in and buy, analysts said.

    Block’s shares gained 5% Thursday to $54.53, before it reported its earnings. They shot up to nearly $69 in after-hours trading. The mobile payments services provider reported its fourth quarter gross profit jumped 24% from a year earlier.

    “For years, we have debated whether AI would dent jobs at the margin. Now we have a public case study in which the CEO explicitly says that intelligence tools have changed what it means to build and run a company,” Stephen Innes of SPI Asset Management said in a commentary.

    “Other large employers have announced tens of thousands of cuts in recent months. Some have downplayed the AI link. Block did not,” he said.

    A global technology company founded in 2009, San Francisco-based Block operates in the United States, Canada, parts of Europe, Australia and Japan.

    In a post on X, Dorsey outlined various ways the company will support those laid off. For employees overseas, the terms might differ, he said.

    It was unclear which employees would be laid off where.

    Layoffs by American companies remain at relatively healthy levels, but the job cuts at Block are the latest among thousands announced in recent months.

    A number of other high-profile companies have announced layoffs recently, including UPS, Amazon, Dow and the Washington Post.


  • Growing more complex by the day: How should journalists govern use of AI in their products?


    Like so many sectors of the economy, the news industry is hurtling toward a future where artificial intelligence plays a major role — grappling with questions about how much the technology is used, what consumers should be told about it, and whether anything can be done for the journalists who will be left behind.

    These issues were on the minds of reporters for the independent outlet ProPublica as they walked picket lines earlier this month. They’re inching toward a potential strike, in what is believed to be the first such job action in the news business in which how to handle AI is the chief sticking point.

    Few expect this dispute will be the last.

    AI has undeniably helped journalists, simplifying complex tasks and saving time, particularly with data-focused stories. News organizations are using it to help sift through the Epstein files. AI suggests headlines and summarizes stories. Transcription technology has largely eliminated the need for a human to type up interviews. These days, even a simple Google search frequently involves AI.

    Yet rushing to see how AI can help a financially troubled industry has resulted in several cases of publications owning up to errors.

    Within the past year, Bloomberg issued several corrections for mistakes in AI-generated news summaries. Business Insider and Wired were forced to remove articles by a fake author named Margaux Blanchard. The Los Angeles Times had trouble with AI and opinion pieces. Ars Technica said AI fabricated quotes, and the publication, which has frequently reported on the risks of overreliance on AI tools, embarrassed itself further by failing to follow its own policy of telling readers when the tools are used.

    The ProPublica dispute is noteworthy for how it touches on issues that are frequently cause for debate. The union representing ProPublica’s journalists, negotiating its first contract with the outlet known for investigative reporting, says it wants commitments that mirror those sought elsewhere in the industry about disclosure and the role of humans in the use of AI.

    Along with holding informational pickets, union members pledged overwhelmingly that they would be willing to strike without a satisfactory agreement, said Jen Sheehan, spokeswoman for the New York Guild, the union that represents many journalists in the city.

    “It feels to me pretty monumental when we think about the trajectory of AI and journalism,” said Alex Mahadevan, an expert on the topic at the Poynter Institute journalism think tank.

    ProPublica has rejected its requests, the union said. Insight into why can be found in an essay, “Something Big is Happening,” that circulated widely this month. Author and investor Matt Shumer, who said he’s spent six years building an AI startup, wrote that the technology is advancing so quickly that “if you haven’t tried AI in the last few months, what exists today would be unrecognizable to you.”

    Small wonder, then, that news executives are reluctant to put guarantees in writing that could quickly become outdated.

    Rather than make promises that can’t be kept, ProPublica is exploring how technology can create more space for investigative reporting, company spokesman Tyson Evans said. In the “unlikely event” of AI-related layoffs, ProPublica is proposing expanded severance packages for those affected, he said.

    “We’re approaching AI with both curiosity and skepticism,” Evans said. “It would be a mistake to freeze editorial decisions in a contract that will last years.”

    Fifty-seven of 283 contracts at U.S. news organizations negotiated by the NewsGuild-CWA contain language related to artificial intelligence, said Jon Schleuss, president of the union, which represents more journalists than any other in the country. The first such deals happened in 2023, and The Associated Press was one pioneer. He wants provisions in more contracts.

    It won’t be easy, judging by the reluctance of many outlets to be tied down. The organization Trusting News, which encourages news organizations to develop and make public their policies on AI use, estimates that less than half of U.S. outlets have done so.

    “I think it is becoming harder,” Schleuss said, “because too many newsrooms are being run by the greedy side of the organization and not by the journalism side of the organization.”

    The guild is pushing for contracts that guarantee AI won’t eliminate jobs. That’s no surprise; unions exist to protect jobs. Schleuss characterized a proposal that ensures an actual journalist is involved when AI is used as a way to prevent errors and help an outlet build trust with its readers.

    “Humans are actually so much better at going out, finding the story, interviewing sources, bringing back the relevant pieces, asking the hard follow-up questions and putting that in a way that people can understand and see, whether it’s a news story or a video,” he said. “Humans are way better at doing that than AI ever will be.”

    Apparently, not everyone in journalism agrees. Chris Quinn, editor of The Plain Dealer in Cleveland, Ohio, wrote this month of his disgust with a recent college graduate who turned down a job offer because the person had been taught that AI was bad for journalism.

    Quinn’s newspaper has been sending some of its journalists out to cover stories by interviewing people, collecting quotes and information, then feeding it to a computer to write. While a human will edit what the computer spits out, an integral part of the process — a reporter using his or her judgment about how to tell a story — has been stripped from their hands. Quinn defended it as the best use of limited resources.

    Research shows that a vast majority of American consumers believe that it’s very important that newsrooms tell the public when AI is used to write stories or edit photographs, said Benjamin Toff, director of the Minnesota Journalism Center at the University of Minnesota. But here’s the rub: Such disclosure makes them trust the outlet’s stories less, not more.

    A significant minority — 30% in a study Toff conducted last year — doesn’t want AI used in journalism at all.

    Telling a reader that AI was used is not as simple as it sounds. “There are just so many, many uses of AI in journalism, from the very beginning of the reporting process to when you hit publish, that just broadly declaring that when AI is used in the newsgathering process that you have to disclose it, just seems like it is actually a disservice to the reader in some cases,” Poynter’s Mahadevan said.

    Two lawmakers in New York state — the nation’s publishing capital — introduced legislation this month requiring clear disclaimers when artificial intelligence is used in published content. There’s no immediate word on its chances for passage, but both sponsors are Democrats in a legislature controlled by that party.

    Mahadevan believes it’s fair to have policies that require human involvement — editing to prevent slip-ups, for example. But even these declarations are open to interpretation, he said. If an outlet uses chatbots to answer reader questions, are they being edited by a human being?

    “Speaking realistically, the newsroom of the future is going to look completely different than it does today,” he said. “Which means people will lose jobs. There will be new jobs. So I think it’s important that we are having these conversations right now because audiences do not want a newsroom completely taken over by AI.”

    ___

    David Bauder writes about the intersection of media and entertainment for the AP. Follow him at http://x.com/dbauder and https://bsky.app/profile/dbauder.bsky.social.


  • Pentagon official lashes out at Anthropic as talks break down


    The U.S. military’s partnership with artificial intelligence firm Anthropic is teetering on the edge of collapse as the company and a top Pentagon official trade barbs on the eve of a deadline to reach a deal.

    The Pentagon has given Anthropic until Friday at 5:01 p.m. to either let the military use the company’s AI model for “all lawful purposes” or risk losing a lucrative Pentagon contract. The AI startup has sought guardrails that explicitly bar its powerful Claude model from being used to conduct mass surveillance of Americans or carry out military operations on its own. 

    The Pentagon’s chief technology officer Emil Michael told CBS News on Thursday that the military has “made some very good concessions” in order to make a deal. Anthropic quickly suggested the military’s concessions were inadequate, leading Michael to call the company’s chief executive a “liar.”

    In response to Anthropic’s concerns, Michael told CBS News the Defense Department had offered to “put it in writing that we’re specifically acknowledging” federal laws that restrict the military from surveilling Americans. He also said the military offered language “specifically acknowledging these policies that have been in place for years at the Pentagon regarding autonomous weapons.” And he said the military invited Anthropic to participate in its AI ethics board.

    Asked why the military will not specifically put in writing that Anthropic’s model can’t be used for mass surveillance of Americans or to make final targeting decisions without human involvement, Michael said those uses of AI are already barred by the law and by Pentagon policies. He also said the military does not use AI to power fully autonomous weapons.

    “At some level, you have to trust your military to do the right thing,” said Michael.

    “But we do have to be prepared for the future. We do have to be prepared for what China is doing,” Michael said, referring to how U.S. adversaries use AI. “So we’ll never say that we’re not going to be able to defend ourselves in writing to a company.” 

    An Anthropic spokesperson said Thursday that new contract language it received overnight from the Pentagon “made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons.”

    “New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will,” the company said.

    Anthropic CEO Dario Amodei said in a separate statement Thursday that the Pentagon’s threats to cut off its contracts “do not change our position: we cannot in good conscience accede to their request.” He added that “we hope they reconsider.”

    Late Thursday, Michael responded to Anthropic’s statement with a post on X calling Amodei a “liar” with a “God-complex.” 

    “He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk,” Michael wrote.

    If the military and Anthropic do not reach a deal by Friday’s deadline, the military plans to cut off its partnership with the company and designate it a supply chain risk, Pentagon spokesman Sean Parnell said earlier Thursday. Officials are also considering invoking the Defense Production Act to make Anthropic adhere to the military’s requests, sources told CBS News. 

    Michael did not confirm to CBS News that the Defense Production Act could be used, but he said that “no company is going to take out any software that’s being used in this department until we have an alternative.” Michael added that he’s working on partnerships with alternative AI firms.

    At risk for Anthropic is its status as the only AI company to have its model deployed on the Pentagon’s classified networks, through a partnership with data analytics giant Palantir. Anthropic was awarded a $200 million contract with the Defense Department last summer to deploy its AI capabilities to advance national security.

    The feud has highlighted a broader disagreement among policymakers and tech firms over how best to mitigate the potential risks posed by AI.

    Amodei has long been vocal about the potential dangers of unconstrained AI, and has made a focus on safety and transparency a core part of his company’s identity. He’s also backed what he calls “sensible AI regulation.”

    In the case of Anthropic’s Pentagon contract, Amodei said Thursday that “frontier AI systems are simply not reliable enough to power fully autonomous weapons,” and that autonomous weapons “cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day.” 

    He also said he’s concerned AI systems could pose a surveillance risk by piecing together “scattered, individually innocuous data into a comprehensive picture of any person’s life.”

    The Trump administration, meanwhile, has argued that stringent AI regulations could stifle innovation and make it harder for the American AI industry to compete, and has warned against what it calls “woke” AI models. In a speech last month, Defense Secretary Pete Hegseth pledged, “we will not employ AI models that won’t allow you to fight wars.”

    Michael told CBS News that the disagreement is partially ideological, “and the way I describe that ideology is: they’re afraid of the power of AI.” 

    He said that the military is only interested in using AI lawfully, and is looking to “treat it like any other technology” — which means that if it isn’t used for lawful purposes, “that’s on us.”

    “You can’t put the rules and the policies of the United States military and the government in the hands of one private company,” said Michael.


  • Ruoming Pang, Meta’s $200M Superintelligence Hire, Jumps to OpenAI After Just 7 Months


    Sam Altman reportedly courted Pang for months. Andrew Harnik/Getty Images

    Ruoming Pang, a prominent A.I. researcher recruited by Meta last year with a pay package reportedly worth more than $200 million, has left the company to join OpenAI, The Information reported yesterday (Feb. 25). His departure marks another setback for Mark Zuckerberg’s elite A.I. team and underscores the escalating A.I. talent war. Pang joined Meta Superintelligence Labs (MSL) in July after being poached from Apple. He remained at Meta for only seven months.

    Zuckerberg unveiled MSL in July 2025 as the centerpiece of Meta’s push to develop advanced A.I. systems. The lab quickly became the focus of an aggressive—and costly—hiring spree. Alexandr Wang, founder of Scale AI, now leads the group as Meta’s A.I. chief after Meta acquired 49 percent of his startup. Within MSL, a smaller, more secretive unit known as TBD Lab is tasked with building next-generation foundation models.

    Pang is originally from Shanghai and earned his undergraduate degree from Shanghai Jiao Tong University. He holds a master’s in computer science from the University of Southern California and earned a Ph.D. from Princeton University in 2006. Over the course of his career, Pang has worked on some of the most consequential A.I. systems in the industry, making him one of the more sought-after engineers in the field.

    At Apple, he spent nearly four years as a “senior distinguished engineer,” leading development of the foundation models behind Apple Intelligence. Before Apple, Pang spent roughly 15 years at Google DeepMind as a principal software engineer, where he worked on large-scale machine learning systems, including privacy-preserving technologies and speech recognition.

    OpenAI has not disclosed Pang’s title, scope of responsibilities or the terms of his compensation. The Sam Altman-led company reportedly courted him for months, so the package is likely substantial. OpenAI employees earn roughly $1.5 million in annual salary and equity, according to the Wall Street Journal. Pang is widely expected to continue working on foundation models and superintelligence research.

    For Meta, Pang’s exit complicates Zuckerberg’s ambition to dominate the superintelligence race. The company has successfully recruited high-profile researchers from OpenAI, Google and Anthropic. However, MSL has also seen a steady stream of departures in recent months.

    Among the most prominent was Yann LeCun, Meta’s chief A.I. scientist, who exited at the end of last year after more than a decade at the company. LeCun publicly criticized MSL chief Wang’s lack of experience with A.I. research.

    Other departures have been quieter but telling. Ethan Knight joined MSL for only a few weeks before moving to OpenAI last August—a stint so brief it never appeared on his LinkedIn profile. Bert Maher, a software engineer, left after 12 years at Meta to join Anthropic. Avi Verma, who had been expected to join Meta from OpenAI, ultimately backed out.

    Pang’s move is the latest signal that Silicon Valley’s A.I. talent war is intensifying. Even as talk of an A.I. bubble grows louder and tech companies rely on increasingly complex financial structures to sustain lofty valuations, leaders like Zuckerberg, Altman and Anthropic’s Dario Amodei show little sign of restraint. Instead, they are offering compensation packages worth tens or even hundreds of millions of dollars to persuade top researchers that their vision for superintelligence will prevail.


    Rachel Curry

  • Amazon shelves Blue Jay warehouse robot



    Amazon made a lot of noise in October when it unveiled Blue Jay, a multi-armed warehouse robot built to speed up same-day deliveries. Just months later, the company quietly ended the program.

    The robot’s core technology will live on in other projects. Still, Blue Jay itself is done.

    That sudden shift raises an important question. If one of the world’s most advanced logistics companies cannot make a high-profile robot work at scale, what does that say about the future of artificial intelligence (AI) in the real world?


    Blue Jay was designed as a ceiling-mounted robot that could sort and handle multiple packages at once to speed up same-day delivery. (Amazon)

    What Blue Jay was supposed to do

    Blue Jay was not a simple conveyor belt upgrade. It was a ceiling-mounted system designed to recognize and sort multiple packages at once. Using AI-powered perception models, the robot could:

    • Identify packages in motion
    • Coordinate several arms at the same time
    • Manipulate items with speed and precision

    Amazon said it developed the system in under a year. That pace alone was impressive. The goal was clear: move more packages faster while reducing strain on workers in same-day fulfillment centers. On paper, that sounds like a win for everyone.

    Why Blue Jay ran into trouble

    Despite the hype, Blue Jay faced steep engineering and cost challenges. First, the robot was mounted to the ceiling. That design required complex installation and tight integration into Amazon’s Local Vending Machine warehouses. Those facilities operate as massive, single structures with automation baked into the building itself.

    There was little room to reconfigure hardware once installed. That rigidity likely became a liability. In software, AI can pivot overnight with a code update. In the physical world, changing course means retooling steel beams, motors and entire layouts. That takes time and serious money. Several employees who worked on Blue Jay have already moved to other robotics projects.

    The company reportedly continues to experiment with and improve its warehouse systems. The technology behind Blue Jay will, in fact, inform future designs. In other words, the robot failed. The ideas did not.


    Engineering complexity and high installation costs limited how easily Blue Jay could scale inside Amazon’s tightly integrated warehouse system. (Amazon)

    From LVM to Orbital: A strategic shift

    Amazon’s next move centers on a new warehouse architecture called Orbital. Unlike the older Local Vending Machine model, Orbital is modular. It can be built from smaller units and deployed faster in different layouts.

    That flexibility matters. Retail is fragmenting. Customers expect same-day delivery from urban hubs, local stores and even grocery locations. Orbital could allow Amazon to place micro-fulfillment centers behind retail stores, including Whole Foods locations. That would help it compete more directly with Walmart, which already has a strong grocery footprint.

    Alongside Orbital, Amazon is developing a new robotics system called Flex Cell. Unlike Blue Jay’s ceiling mount, Flex Cell is expected to sit on the floor.

    That small design change signals something bigger. Amazon appears to be moving from massive centralized automation to smaller, adaptable systems built for the unpredictable realities of local retail.

    What this means for your deliveries

    If you order from Amazon regularly, you might wonder whether this affects you. In the short term, probably not. Your packages will still show up. Same-day and next-day delivery remain core priorities. However, the long-term story is more interesting. Amazon’s robotics strategy shapes how fast your order arrives, how much you pay and how local warehouses operate in your community.

    If Orbital works, you could see:

    • Faster delivery from smaller neighborhood hubs
    • Better handling of chilled and perishable items
    • More automation in retail backrooms

    If it struggles, same-day expansion could slow or become more expensive. That tension reflects a broader truth about AI. Writing code is one thing. Teaching a robot to lift boxes in a real warehouse without breaking down is another.


    After only a few months, Amazon discontinued the Blue Jay program while continuing to reuse parts of its underlying robotics technology. (Amazon)

    The gap between AI hype and hardware reality

    Blue Jay highlights a growing divide in the tech world. AI in software is moving at lightning speed. Chatbots, image tools and predictive systems evolve weekly.

    Hardware is different. Robots must deal with gravity, friction, heat and unpredictable human environments. Every mistake has a physical cost.

    Amazon’s course correction shows that even tech giants hit limits when translating AI breakthroughs into moving metal. That does not mean automation is slowing down. It means the path is bumpier than the headlines suggest.


    Kurt’s key takeaways

    Amazon shelving Blue Jay is not a retreat from robotics. It is a recalibration. The company is betting that modular, flexible systems will win over massive, tightly integrated machines. That shift could define the next era of e-commerce logistics. For you, the promise remains the same: faster delivery, better availability and more local convenience. But behind that promise is a complicated dance between AI ambition and real-world constraints.

    If even Amazon struggles to make advanced robots work at scale, how much of the AI revolution is still more vision than reality? Let us know by writing to us at Cyberguy.com


    Copyright 2026 CyberGuy.com. All rights reserved.


  • Video: The A.I. Videos on Kids’ YouTube Feeds



    The YouTube algorithm is pushing bizarre, often nonsensical A.I.-generated videos targeting children. Our video journalist Arijeta Lajka explains why experts say these videos could affect children’s cognitive development, and how parents can identify this type of content.

    By Arijeta Lajka, Christina Shaman, Melanie Bencosme, June Kim and Luke Piotrowski

    February 26, 2026


  • 300,000 Chrome users hit by fake AI extensions



    Your web browser may feel like a safe place, especially when you install helpful tools that promise to make your life easier. But security researchers have uncovered a dangerous campaign in which more than 300,000 people installed Chrome extensions pretending to be artificial intelligence (AI) assistants. Instead of helping, these fake tools secretly collect sensitive information like your emails, passwords and browsing activity.

    They used familiar names like ChatGPT, Gemini and AI Assistant. If you use Chrome and have installed any AI-related extension, your personal information may already be exposed. Even worse, some of these malicious extensions are still available today, putting more people at risk without their knowledge.


    More than 300,000 Chrome users installed fake AI extensions that secretly harvested sensitive data. (Kurt “CyberGuy” Knutsson)

    What you need to know about fake AI extensions

    Security researchers at browser security company LayerX discovered a large campaign involving 30 malicious Chrome extensions disguised as AI-powered assistants (via BleepingComputer). Together, these extensions were installed more than 300,000 times by unsuspecting users.

    Some of the most popular extensions included names like AI Sidebar with 70,000 users, AI Assistant with 60,000 users, ChatGPT Translate with 30,000 users, and Google Gemini with 10,000 users. Another extension called Gemini AI Sidebar had 80,000 users before it was removed.

    These extensions were distributed through the official Chrome Web Store, which made them appear legitimate and trustworthy. Even more concerning, researchers found that many of these extensions were connected to the same malicious server, showing they were part of a coordinated effort.

    While some extensions have since been removed, others remain available. This means new users could still unknowingly install them and expose their personal data. Here’s the list of the affected extensions:

    • AI Assistant
    • Llama
    • Gemini AI Sidebar
    • AI Sidebar
    • ChatGPT Sidebar
    • Grok
    • Asking ChatGPT
    • ChatGBT
    • Chat Bot GPT
    • Grok Chatbot
    • Chat With Gemini
    • XAI
    • Google Gemini
    • Ask Gemini
    • AI Letter Generator
    • AI Message Generator
    • AI Translator
    • AI For Translation
    • AI Cover Letter Generator
    • AI Image Generator ChatGPT
    • Ai Wallpaper Generator
    • Ai Picture Generator
    • DeepSeek Download
    • AI Email Writer
    • Email Generator AI
    • DeepSeek Chat
    • ChatGPT Picture Generator
    • ChatGPT Translate
    • AI GPT
    • ChatGPT Translation
    • ChatGPT for Gmail


    These malicious tools were listed in the official Chrome Web Store, making them appear legitimate and trustworthy. (LayerX)

    How the fake AI Chrome extension attack works

    These fake extensions pretend to offer helpful AI features, such as translating text, summarizing emails, or acting as an AI assistant. But behind the scenes, they quietly monitor what you are doing online.

    Once installed, the extension gains permission to view and interact with the websites you visit. This allows it to read the contents of web pages, including login screens where you enter your username and password.
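
    The breadth of that access comes from the permissions an extension requests at install time. As a rough illustration, the short Python sketch below scans a Chrome profile’s extension manifests and flags broad grants; the profile path and the watchlist of “risky” permissions are assumptions chosen for illustration, not taken from LayerX’s report.

        import json
        from pathlib import Path

        # Illustrative default for Linux; adjust for your OS, e.g. on macOS:
        # ~/Library/Application Support/Google/Chrome/Default/Extensions
        EXT_DIR = Path.home() / ".config/google-chrome/Default/Extensions"

        # Hypothetical watchlist: grants that let an extension read or script
        # every page you visit, including webmail and login screens.
        RISKY = {"<all_urls>", "tabs", "scripting", "webRequest", "cookies", "history"}

        # Chrome stores each extension as <id>/<version>/manifest.json.
        for manifest in EXT_DIR.glob("*/*/manifest.json"):
            data = json.loads(manifest.read_text(encoding="utf-8-sig"))
            grants = set(data.get("permissions", [])) | set(data.get("host_permissions", []))
            flagged = (grants & RISKY) | {g for g in grants if g.endswith("://*/*")}
            if flagged:
                print(f"{data.get('name', '(unnamed)')}: {sorted(flagged)}")

    None of these grants is malicious on its own, but an “AI sidebar” that requests all of them deserves a second look before you install it.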

    In some cases, the extensions specifically targeted Gmail. They could read your email messages directly from your browser, including emails you received and even drafts you were still writing. This means attackers could access private conversations, financial information and sensitive personal details.

    The extensions then sent this information to servers controlled by the attackers. Because they loaded content remotely, the attackers could change their behavior at any time without needing to update the extension.

    Some versions could also activate voice features through your browser. This could potentially capture spoken conversations near your device and send transcripts back to the attackers.

    If you installed one of these extensions, attackers may already have access to extremely sensitive information. This includes your email content, login credentials, browsing habits and possibly even voice recordings.

    We reached out to Google for comment, and a spokesperson told CyberGuy that the company “can confirm that the extensions from this report have all been removed from the Google Web Store.”


    Once installed, the extensions could read emails, capture passwords, monitor browsing activity and send the data to attacker-controlled servers. (Bildquelle/ullstein bild via Getty Images)

    7 ways you can protect yourself from malicious Chrome extensions

    If you have ever installed an AI-related Chrome extension, taking a few simple precautions now can help protect your accounts and prevent further damage.

    1) Remove any suspicious or unused browser extensions

    On a Windows PC or Mac, open Chrome and type chrome://extensions into the address bar. Review every extension listed. If you see anything unfamiliar, especially AI assistants you don’t remember installing, click “Remove” immediately. Malicious extensions depend on going unnoticed. Removing them stops further data collection and cuts off the attacker’s access to your information.

    2) Change your passwords

    If you installed any suspicious extension, assume your passwords may be compromised. Start by changing your email password first, since email controls access to most other accounts. Then update passwords for banking, shopping and social media accounts. This prevents attackers from using stolen credentials to break into your accounts.

    3) Use a password manager to create and protect strong passwords

    A password manager generates unique, complex passwords for each account and stores them securely. This prevents attackers from accessing multiple accounts if one password is stolen. Password managers also alert you if your login credentials appear in known data breaches, helping you respond quickly and protect your identity. Check out the best expert-reviewed password managers of 2026 at Cyberguy.com.

    4) Install strong antivirus software and keep it active

    Good antivirus software can detect malicious browser extensions, spyware, and other hidden threats. It scans your system for suspicious activity and blocks harmful programs before they can steal your information. This adds an important layer of protection that works continuously in the background to keep your device safe. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.

    5) Use an identity theft protection service

    Identity theft protection services monitor your personal data, including email addresses, financial accounts, and Social Security numbers, for signs of misuse. If criminals try to open accounts or commit fraud using your information, you receive alerts quickly. Early detection allows you to act fast and limit financial and personal damage. See my tips and best picks on how to protect yourself from identity theft at Cyberguy.com.

    6) Keep your browser and computer fully updated

    Software updates fix security vulnerabilities that attackers exploit. Enable automatic updates for Chrome and your operating system so you always have the latest protections. These updates strengthen your defenses against malicious extensions and prevent attackers from taking advantage of known weaknesses.

    7) Use a personal data removal service

    Personal data removal services scan data broker websites that collect and sell your personal information. They help remove your data from these sites, reducing what attackers can find and use against you. Less exposed information means fewer opportunities for criminals to target you with scams, identity theft or phishing attacks.

    Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.


    Kurt’s key takeaway

    Even tools designed to make your life easier can become tools for cybercriminals. Malicious extensions often hide behind trusted names and convincing features, making them difficult to spot. You can significantly reduce your risk by reviewing your browser extensions regularly, removing anything suspicious and using protective tools like password managers and strong antivirus software.

    Have you checked your browser extensions recently? Let us know your thoughts by writing to us at Cyberguy.com.


    Copyright 2026 CyberGuy.com. All rights reserved.


  • AI song generator startups angered the music industry. Now they’re hoping to join it


    CAMBRIDGE, Mass. — Suno CEO Mikey Shulman pulls up a chair to the recording studio desk where a research scientist at his artificial intelligence company is creating a new song.

    The flute line sounds promising.

    The percussion needs work.

    Neither of them is playing an instrument. They type some descriptive words – Afrobeat, flute, drums, 90 beats per minute – and out comes an infectious rhythm that livens up the 19th-century office building where Suno is headquartered in Cambridge, Massachusetts. They toggle some editing tools to refine the new track.

    Much like early experiences with ChatGPT or AI text-to-image generators, trying to make an AI-generated song on platforms like Suno or its rival, Udio, can seem a little like magic. It takes no musical skills, practice or emotional wellspring to conjure up a new tune inspired by almost any of the world’s musical traditions.

    But the process of training AI on beloved musicians of the past and present to produce synthetic approximations of their work has angered the music industry and brought much of its legal power against the two startups.

    Now, after their users have flooded the internet with millions of AI-generated songs, some of which have found their way onto streaming services like Spotify, the leaders of Suno and New York-based Udio are trying to negotiate with record labels to secure a foothold in an industry that shunned them.

    “We have always thought that working together with the music industry instead of against the music industry is the only way that this works,” said Shulman, who co-founded Suno in 2022. “Music is so culturally important that it doesn’t make sense to have an AI world and a non-AI world of music.”

    Sony Music, Universal Music and Warner Records sued the two startups for copyright infringement in 2024, alleging that they were exploiting the recorded works of their artists.

    Since then, the pair have strived to make peace with the industry. Suno, now valued at $2.45 billion, last year struck a settlement with Warner, and Udio has signed licensing agreements with Warner, Universal and independent label Merlin. Only one major label, Sony, has not settled with either startup as the lawsuits move forward in Boston and New York federal courts.

    The first of the settlement deals, between Udio and Universal, led to an exodus of frustrated Udio users who were blocked from downloading their own AI-generated tracks. But Udio CEO Andrew Sanchez said he’s optimistic about what the future will bring as his company adapts its business model to let fans of willing artists use AI to play with and potentially alter their works.

    “Having a close relationship with the music industry is elemental to us,” Sanchez said in an interview. “Users really want to have an anchor to their favorite artists. They want to have an anchor to their favorite songs.”

    Many professional musicians are skeptical. Singer-songwriter Tift Merritt, co-chair of the Artists Rights Alliance, recently helped organize a “Stealing Isn’t Innovation” campaign by artists — including Cyndi Lauper and Bonnie Raitt — to urge AI companies to pursue licensing deals and partnerships rather than build platforms without regard for copyright law.

    “The economy of AI music is built totally on the intellectual property, globally, of musicians everywhere without transparency, consent, or payment. So, I know they value their intellectual property, but ours has been consumed in order to replace us,” Merritt said in an interview in Raleigh, North Carolina.

    Shulman contends technology “evolves very often faster than the law,” and says his company tries to be thoughtful about “not breaking the law” while also working to “deliver products that the world really wants.”

    When the music industry first confronted Suno over alleged copyright infringement, the company’s antagonistic response alienated professionals like Merritt.

    Symbolizing the divide was a clip last year in which Shulman was quoted as saying, “it’s not really enjoyable” to make music most of the time. Shulman started learning piano at age 4 but later dropped it. He took up bass guitar at 12, playing in rock bands in high school and college. He said that experience gave him some of the best moments of his life.

    “You need to get really good at an instrument or really good at a piece of production software,” Shulman said on the “The Twenty Minute VC” podcast. “I think the majority of people don’t enjoy the majority of the time they spend making music.”

    “Clearly, I wish I had said different words,” Shulman told the AP. The context, he added, was that “to produce perfect music takes a lot of repetitions and not all of those minutes are the most enjoyable bits of making music. On the whole, obviously, music is amazing. I play music every day for fun.”

    Sanchez, the Udio CEO, also would like people to know he loves making music. He’s an opera-loving tenor who’s sung in choirs and grew up crooning Luciano Pavarotti in his family’s home in Buffalo, New York.

    Founded in 2023 by a group that included several AI researchers from Google, the startup now employs about 25 people. It has fewer users and has raised less capital than Suno, reducing its leverage in negotiations with record labels.

    But like ride-hailing company Lyft, which pitched itself as the friendly alternative to Uber’s aggressive expansion tactics more than a decade ago, Udio embraces its underdog status.

    “So many tech companies actively cultivate this I-am-a-tech-company-crusader and that’s part of their identity,” Sanchez said. “That alienates people who are creative and I am uniformly opposed to that.”

    Sanchez said he knows not every artist is going to embrace AI, but he hopes those who leave the room after talking with him realize he’s not imposing a kind of “AI bravado.”

    “If you took what we’re doing and pretended that the word AI wasn’t a part of it, people would be like, ‘Oh my gosh. This is so cool.’”

    In the basement office of his Philadelphia, Mississippi home, Christopher “Topher” Townsend is a one-man band, making and marketing Billboard-chart-topping gospel music — none of which he sings himself — and doing it in record time.

    The rapper, whose lyrics reflect his political conservatism, downloaded Suno in October and, within days, created Solomon Ray, a fictional singer that Townsend calls an extension of himself.

    Townsend uses ChatGPT to write lyrics, Suno to generate songs and other AI tools to create cover art and promotional videos under the Solomon Ray name.

    “I can see why artists would be afraid,” Townsend said. “(Solomon Ray) has an immaculate voice. He doesn’t get sick. You know, he doesn’t have to take leave, he doesn’t get injured and he can work faster than I can work.”

    Trying to dispel that fear for aspiring artists is Jonathan Wyner, a professor of music production and engineering at the Berklee College of Music in Boston, who sees generative AI as just another tool.

    “To the creative musician, AI represents both enormous potential benefits in terms of streamlining things and frankly making kinds of music-making possible that weren’t possible before, and making it more accessible to people who want to make music,” he said.

    Such a vision remains a tough sell for artists who feel their work has already been exploited. Merritt says she’s particularly concerned about labels making deals with AI companies that leave out independent artists.

    Neither Sanchez nor Shulman was invited to the Grammy Awards in February, but both spent time schmoozing at the sidelines of the event.

    “I think AI music is still officially not allowed, and my hope is that some of these rules change over the next year, and then maybe the 2027 Grammys, I’ll get an invite,” Shulman said.

    ___

    O’Brien reported from Cambridge, Massachusetts and New York. Ngowi reported from Cambridge and Somerville, Massachusetts. AP journalists Sophie Bates in Philadelphia, Mississippi and Allen G. Breed in Raleigh, North Carolina, contributed to this report.


  • ‘Compute Equals Revenues’: Nvidia Needs Jensen Huang’s New Catchphrase to Be True


    Nvidia reported earnings on Wednesday, and as expected, the numbers were good. Really good. The company gets more than 91% of its sales from its data center unit, which generated revenue of $193.7 billion, up 68% year-over-year.

    “We have now scaled our data center business by nearly 13x since the emergence of ChatGPT in fiscal 2023,” Nvidia CFO Colette Kress said in the company’s earnings call on Wednesday.

    While very impressive, the number is not all that surprising given that global AI spending is expected to reach $2.5 trillion this year, and Nvidia’s largest customers, the major AI hyperscalers Amazon, Alphabet, Meta, and Microsoft, all reported record capex figures earlier this month.

    The hyperscalers also made eye-watering financial commitments for 2026 totaling nearly $700 billion, to the dismay of many investors who have been growing wary of AI spending.

    Earlier this month, Evercore analysts warned that the huge capex could turn the hyperscalers’ cash flow negative.

    And despite record after record of multibillion-dollar commitments made to scale AI infrastructure and grow the technology’s adoption across the American economy, the results have yet to fully materialize. A Goldman Sachs analyst recently said that AI contributed “basically zero” to U.S. GDP in 2025.

    Nvidia CEO Jensen Huang spent most of his time in the investor call trying to justify that capex growth.

    “I am confident in their cash flow growing, and the reason for that is very simple: we have now seen the inflection of agentic AI and the usefulness of agents across the world in enterprises everywhere,” Huang said.

    AI adoption by enterprises beyond the tech world, and whether these companies actually see real productivity gains and revenue returns from AI integration, is really important to Nvidia, because that is exactly what the AI industry currently lacks to quell worries over an AI bubble.

    A recent survey found that despite 70% of firms employing AI, over 80% reported no impact on employment or productivity.

    Last week, OpenAI COO Brad Lightcap told TechCrunch that his company had “not really seen enterprise AI penetrate enterprise business process.”

    Some experts believe that Anthropic’s Claude Cowork unveiled earlier this month is going to be a turning point in AI’s penetration into the workforce, so much so that they believe it will lead to a mass extinction-level event for software companies, and maybe even white-collar work. Huang gave a special shout-out to Claude Cowork in the call as well.

    Huang also had a technical explanation to justify the capex commitments.

    “In this new world of AI, compute equals revenues,” Huang said, a phrase that he repeated many times throughout the call. Huang argues that tokens, aka the chunks of data that AI models process, are the most important part of a new AI economy. The more tokens a model uses, the more computing power and time it requires. So, as models are getting more complex, the demand for computing is also going up “exponentially,” Huang said. He argued that the capex commitments will go towards building this compute capacity, which will thus power higher-level models and translate to revenue.

    “The amount of token generation capability that the world needs is a lot, more than $700 billion, and I’m fairly confident that we’re going to continue to generate tokens…fundamentally because every single company depends on software, every software will depend on AI, and so every company will produce tokens,” Huang said. “If the new software requires tokens to be generated and the tokens are monetized, then it stands to reason that their data center build-out directly drives their revenues.”
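    The arithmetic behind “compute equals revenues” is easy to sketch. The following Python sketch is purely illustrative: the throughput, price and utilization figures are hypothetical placeholders, not numbers from Nvidia or the call; the point is the shape of the chain from compute capacity to tokens to dollars.

    ```python
    # Illustrative token economics: every figure below is a hypothetical
    # placeholder, chosen only to show the compute -> tokens -> revenue chain.

    TOKENS_PER_SEC_PER_GPU = 1_500    # assumed serving throughput of one GPU
    PRICE_PER_MILLION_TOKENS = 5.00   # assumed blended price in dollars
    UTILIZATION = 0.5                 # assumed share of time serving paid traffic

    SECONDS_PER_YEAR = 365 * 24 * 3600

    tokens_per_year = TOKENS_PER_SEC_PER_GPU * UTILIZATION * SECONDS_PER_YEAR
    revenue_per_gpu = tokens_per_year / 1e6 * PRICE_PER_MILLION_TOKENS

    print(f"Tokens served per GPU per year: {tokens_per_year:.2e}")     # ~2.4e10
    print(f"Implied revenue per GPU per year: ${revenue_per_gpu:,.0f}")  # ~$118,260
    ```

    Under these assumptions, one GPU serves roughly 24 billion tokens and earns roughly $118,000 a year, which is the sense in which more compute “equals” more revenue. The fragile link, as the enterprise survey above suggests, is the final assumption: that every token the new capacity can generate finds a paying customer.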

    Huang’s justifications may not have immediately convinced the market. Shares rose at first in response to the report, but after the call the gains pulled back to less than 1%, even though revenue exceeded market expectations.

    OpenAI and China are still blind spots

    Throughout the call, Huang also tried to address rumors of a falling-out with OpenAI, first spurred by reports that a $100 billion Nvidia investment announced in September 2025 had stalled in its early stages for months. Then, two back-to-back reports claimed that Huang was privately criticizing OpenAI’s business approach while OpenAI was unhappy with the inference speed of Nvidia’s chips.

    In the call on Wednesday, Huang repeatedly praised the AI giant’s offerings but revealed that the investment was still not finalized.

    “We continue to work with OpenAI toward a partnership agreement, and believe we are close,” Huang said on the call. Nvidia’s quarterly filing, meanwhile, stops short of assuring investors that “a transaction will be completed.”

    Another piece of uncertainty weighing on Nvidia is China. The company said that, as of this month, the Trump administration has finally allowed it to start shipping small quantities of its H200 chips to China, where Nvidia once held 95% of the market before Trump first banned the chipmaker’s sales there, sparking a dizzying saga of trade tit-for-tat between the two global superpowers. But executives still don’t know whether China will let the chips in, and they are not factoring China into the revenue they expect this year.

    [ad_2]

    Ece Yildirim

    Source link

  • What’s behind the Anthropic-Pentagon feud

    [ad_1]

    Washington — The Pentagon gave Anthropic an ultimatum this week: Give the U.S. military unrestricted use of its AI technology or face a ban from all government contracts. 

    At the center of the issue is a question of who controls how artificial intelligence models are used, the Pentagon or the company’s CEO.

    The Pentagon’s AI contracts 

    The Pentagon awarded Anthropic a $200 million contract in July to develop AI capabilities that would advance U.S. national security. 

    Anthropic’s rivals, including OpenAI, Google and xAI, were also awarded $200 million contracts by the Pentagon last year. 

    Anthropic is currently the only AI company to have its model deployed on the Pentagon’s classified networks, through a partnership with data analytics giant Palantir.

    A senior Pentagon official told CBS News that Grok, which is owned by Elon Musk’s xAI, is on board with being used in a classified setting, and other AI companies are close. 

    The Pentagon announced last month that it’s looking to accelerate its uses of AI, saying the technology could help the military “rapidly convert intelligence data” and “make our Warfighters more lethal and efficient.”

    Clash over the guardrails 

    The standoff between the Pentagon and Anthropic was reportedly set off by the U.S. military’s use of the company’s AI model, known as Claude, during the operation to capture former Venezuelan President Nicolás Maduro in January. 

    An Anthropic spokesperson said in a statement that the company “has not discussed the use of Claude for specific operations with the Department of War.”

    Anthropic has repeatedly asked the Pentagon to agree to certain guardrails, among them a restriction on using Claude to conduct mass surveillance of Americans, sources told CBS News. 

    The company also wants to ensure Claude is not used by the Pentagon for final targeting decisions in military operations without any human involvement, one source familiar with the matter said. Claude is not immune from hallucinations and is not reliable enough, without human judgment, to avoid potentially lethal mistakes like unintended escalation or mission failure, the source said.

    When asked for comment, a senior Pentagon official said: “This has nothing to do with mass surveillance and autonomous weapons being used. The Pentagon has only given out lawful orders.”

    Pentagon officials have expressed concerns to Anthropic that the company’s guardrails could stand in the way of critical actions, such as responding to an intercontinental ballistic missile launched toward the United States.

    Any company-imposed restrictions “could create a dynamic where we start using them and get used to how those models work, and when it comes that we need to use it in an urgent situation, we’re prevented from using it,” Emil Michael, the undersecretary of defense for research, said at an event in February.

    As for who is liable, the military or the AI company, when AI used to strike or kill military targets makes a mistake, a defense official said legality is the Pentagon’s responsibility as the end user.

    What top leaders are saying  

    Anthropic CEO Dario Amodei has been vocal in expressing his concerns about the potential dangers of AI and has centered the company’s brand around safety and transparency. 

    In a lengthy essay last month, Amodei warned of the potential for abuse of the technologies, writing that “a powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow.” 

    “Democracies normally have safeguards that prevent their military and intelligence apparatus from being turned inwards against their own population, but because AI tools require so few people to operate, there is potential for them to circumvent these safeguards and the norms that support them. It is also worth noting that some of these safeguards are already gradually eroding in some democracies,” he wrote. 

    Amodei has long backed what he describes as “sensible AI regulation,” including rules that would require AI companies to be transparent about the risks posed by their models and any steps taken to mitigate them.

    The Trump administration, meanwhile, has favored a lighter touch, and has argued that stringent AI regulations could stifle innovation and make it harder for the American AI industry to compete. The administration has sought to block what it calls “excessive” state-level regulations. At one point last year, venture capitalist and White House AI and crypto adviser David Sacks accused Anthropic of “fear-mongering” and suggested its interest in AI regulations is self-serving.

    In a January speech, Defense Secretary Pete Hegseth derided what he views as “social justice infusions that constrain and confuse our employment of this technology.” 

    “We will not employ AI models that won’t allow you to fight wars,” Hegseth declared. “We will judge AI models on this standard alone; factually accurate, mission relevant, without ideological constraints that limit lawful military applications. Department of War AI will not be woke. It will work for us. We’re building war-ready weapons and systems, not chatbots for an Ivy League faculty lounge.” 

    What’s next in the Anthropic v. Pentagon saga

    Hegseth gave Anthropic until Friday to agree to give the U.S. military unrestricted use of its technology or risk being blacklisted, sources familiar with the situation told CBS News. 

    Pentagon officials are considering invoking the Defense Production Act to compel Anthropic to comply on national security grounds.

    Or, if an agreement can’t be reached, defense officials have discussed declaring the company a “supply chain risk” to push it out of government, according to the sources. 

    [ad_2]

    Source link

  • Feds give record $27B in loans for utility expansion in Georgia and Alabama

    [ad_1]

    ATLANTA — Federal energy officials on Wednesday announced a record $27 billion loan to electric utilities in Georgia and Alabama, saying the loan will save customers money as the companies undertake a huge expansion driven by demand from computer data centers.

    A total of $22.4 billion will go to Georgia Power and $4.1 billion to Alabama Power. Both are subsidiaries of Atlanta-based Southern Company, one of the nation’s largest utilities. The companies plan to use the cash to build new natural gas-fueled power plants, construct new transmission lines and upgrade existing power plants.

    Energy Secretary Chris Wright said the loan will result in more than $7 billion in savings over decades from a lower, federally subsidized interest rate.

    “We’re focused on driving down costs,” Wright said. He added that the loan would help ensure Southern customers “have access to affordable, reliable and secure energy for decades to come.”

    Wright and President Donald Trump have frequently made the case for their fossil fuel-friendly policies — including orders over the past nine months to keep some coal-fired plants open past planned retirement dates — as necessary to ensure reliability of the nation’s electric grid.

    Wright says the orders have saved utility customers millions of dollars and helped keep lights on during last month’s winter storm. Critics say the orders are unnecessary and have raised electric bills as utilities keep older, more expensive plants operating.

    “These loans will help lower the cost of investments in our grid that will enhance reliability and resilience for the benefit of our customers,” said Chris Womack, Southern’s chairman, president and CEO.

    The new loan comes amid scrutiny on rising utility bills, with electricity prices increasing faster than inflation in many states. There is also widespread opposition to new data centers for artificial intelligence.

    Trump in his State of the Union address Tuesday announced a “ratepayer protection pledge” against higher utility bills tied to AI. He said tech companies will provide their own power as they build data centers. Trump didn’t provide details but claimed prices will go down.

    It is unclear whether any tech companies have signed pledges to build their own power plants, but Wright said on a call with reporters Wednesday that “every name you know that’s developing a data center has been in dialogue with us.”

    He cited “cooperation” from giants such as Microsoft, Google and Meta, but he didn’t specify any written agreements.

    Federal officials have long given utility loans, including $12 billion in loans that the first Trump administration and President Barack Obama’s administration guaranteed for two costly nuclear reactors at Georgia’s Plant Vogtle, partially owned by Georgia Power.

    Trump’s tax and budget bill last year reshaped the loan program to focus on increasing capacity to generate and transmit electricity. Loan guarantees under President Joe Biden focused on green energy goals.

    Gregory Beard, who directs the newly renamed Office of Energy Dominance Financing, said Wednesday that cutting interest rates and discarding Biden’s policy “will get us back on the right track in terms of affordability.”

    The loan office will review individual projects to ensure they’re financially viable, he said. “We’re not going to build this plant or deploy this capital until we are sure that it’s the right thing to do for the local community, for the local ratepayer,” Beard said in an interview.

    Those requirements don’t seem to be laid out in loan agreements that Southern released Wednesday. Jennifer Whitfield, an attorney for the Southern Environmental Law Center who represented Georgia Power expansion opponents, said the loans will save money for Georgians, but questioned their wisdom.

    “As a taxpayer, it’s hard to avoid the fact that this is a bailout paid for by every taxpaying citizen of the United States,” she said.

    Any savings for customers must be approved by the elected Public Service Commissions in Alabama and Georgia. Commissioners in Georgia approved a three-year rate freeze requested by Georgia Power last July, while commissioners in Alabama approved a two-year rate freeze in December. Company officials tout the freezes at a time when utilities nationwide have been seeking record increases. But opponents complain that company-friendly regulators locked in high prices and high utility profits.

    Voters booted two Republican incumbents off the Georgia commission in November amid complaints about rising bills.

    Commissioner Peter Hubbard, one of two new Democrats, unsuccessfully tried to roll back approval for Georgia Power’s expansion in recent weeks. He said Wednesday that the declining costs of solar, wind and battery power could make new natural gas plants uneconomic over time.

    “It’s locking us into a costlier option,” he said of the federal loan. “And so I think it just is not meeting the moment of affordability.”

    ___

    Daly reported from Washington.

    [ad_2]

    Source link

  • Asian stocks gain after optimism about AI sends Wall Street higher

    [ad_1]

    TOKYO — U.S. futures were flat after President Donald Trump’s State of the Union speech, while Asian shares were mostly higher.

    Japan’s benchmark briefly hit a record high as investors were cheered by an overnight Wall Street rally driven by optimism about the artificial-intelligence boom.

    Tokyo’s Nikkei 225 surged 2.2% to 58,583.12.

    Shares also rose in China. Hong Kong’s Hang Seng rose 0.5% to 26,735.22, while the Shanghai Composite added 0.6% to 4,142.17.

    South Korea’s Kospi surged 2.1% to 6,093.33, as the benchmark continued to benefit from the global demand for computer chips.

    In Taiwan, the Taiex jumped 2.1% as shares in TSMC, the world’s largest contract manufacturer of computer chips, surged 2.5%.

    Australia’s S&P/ASX 200 jumped 1.2% to 9,128.30.

    In his speech, Trump focused on jobs, manufacturing and an economy he says is stronger than many Americans believe. He didn’t dwell on efforts to lower the cost of living — despite polling showing that his handling of the economy and kitchen-table issues has increasingly become a liability.

    The futures for the S&P 500 and the Dow Jones Industrial Average were nearly unchanged.

    On Tuesday, before the speech, the S&P 500 climbed 0.8% to 6,890.07. The Dow industrials added 0.8% to 49,174.50, and the Nasdaq composite climbed 1% to 22,863.68.

    Advanced Micro Devices helped lead the market and rallied 8.8% after announcing a multiyear deal where it will supply chips to Meta Platforms to help power its AI ambitions. Meta also got the right to buy up to 160 million shares of AMD stock for 1 cent each, depending in part on how many chips Meta ultimately buys.
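    For a sense of scale, the warrant’s intrinsic value is a one-line calculation. The 160 million shares and 1-cent strike come from the deal terms reported above; the share price in the sketch below is a hypothetical input, not AMD’s actual quote.

    ```python
    # Rough intrinsic value of Meta's AMD warrants. The share count and strike
    # are the reported deal terms; the share price is a hypothetical input.

    SHARES = 160_000_000   # maximum shares Meta may buy (reported)
    STRIKE = 0.01          # exercise price per share in dollars (reported)

    def warrant_intrinsic_value(share_price: float, shares_vested: int = SHARES) -> float:
        """Value of exercising `shares_vested` warrants at `share_price`."""
        return max(share_price - STRIKE, 0.0) * shares_vested

    # At a hypothetical $200 share price, full vesting would be worth about $32 billion:
    print(f"${warrant_intrinsic_value(200.0):,.0f}")  # $31,998,400,000
    ```

    Because vesting depends in part on how many chips Meta ultimately buys, the structure effectively converts Meta’s chip purchases into equity upside in its supplier.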

    It was a reminder of the excitement that has built in recent years around the billions of dollars pouring into AI, and it produced a sharp turnaround from the prior day, when worries about the potential downsides of AI shook Wall Street. IBM rose 2.7% to recover some of its 13.1% drop from Monday, which was its worst since 2000.

    Chipmaking giant Nvidia is due to report its earnings later Wednesday in a quarterly report likely to sway a jittery stock market as investors weigh whether the massive bets riding on technology’s latest craze will pay off.

    As has been the case since Nvidia’s chips emerged as AI’s essential building blocks, expectations are sky-high for the results, which cover the company’s fiscal quarter running from November through January.

    Big U.S. companies have reported mostly better profits for the end of 2025 than analysts expected. Keysight Technologies rallied 23.1% for the biggest gain in the S&P 500 after delivering stronger profit and revenue than analysts expected, while Home Depot rose 2% after likewise beating forecasts.

    In the bond market, Treasury yields held relatively steady after a report said that confidence among U.S. consumers improved by more than economists expected. The yield on the 10-year Treasury held at 4.03%, where it was late Monday.

    In other dealings early Wednesday, benchmark U.S. crude oil added 48 cents to $66.11 a barrel. Brent crude, the international standard, rose 48 cents to $71.06 a barrel.

    The U.S. dollar slipped to 155.82 Japanese yen from 155.91 yen; it had traded close to 160 yen several months ago. The euro cost $1.1803, up from $1.1774.

    ___

    Yuri Kageyama is on Threads: https://www.threads.com/@yurikageyama

    [ad_2]

    Source link

  • Hegseth demands full military access to Anthropic’s AI model Claude and sets deadline for end of week

    [ad_1]

    Trust is breaking down between the Pentagon and Anthropic over the use of its AI model, sources familiar with the situation told CBS News. 

    In a meeting at the Pentagon on Tuesday morning, Defense Secretary Pete Hegseth gave Anthropic’s CEO Dario Amodei until the end of this week to give the military a signed document that would grant full access to its artificial intelligence model, the sources said. 

    Officials are considering invoking the Defense Production Act to make Anthropic adhere to what the military is seeking, they said. Axios reported earlier on some of what transpired in the meeting.

    Defense officials want full control of Anthropic’s AI technology for use in its military operations, sources told CBS News. The company was awarded a $200 million contract by the Pentagon in July to develop AI capabilities that would advance U.S. national security.

    Anthropic has repeatedly asked the Defense Department to agree to guardrails that would restrict the AI model, called Claude, from conducting mass surveillance of Americans, sources said. Defense officials noted that such surveillance is illegal and said the military is simply asking for a license to use the AI strictly for lawful activities.


    Amodei also wants to ensure Claude is not used by the Pentagon for final targeting decisions in military operations without any human involvement, one source familiar with the meeting said. Claude is not immune from hallucinations and is not reliable enough, without human judgment, to avoid potentially lethal mistakes like unintended escalation or mission failure, the person said. 

    But when asked for comment, a senior Pentagon official said: “This has nothing to do with mass surveillance and autonomous weapons being used. The Pentagon has only given out lawful orders.”

    The official said Grok, which is owned by Elon Musk’s xAI, is on board with being used in a classified setting, and other AI companies are close.

    In Tuesday’s meeting, Hegseth told Amodei that when the government purchases Boeing planes, the aerospace company has no say in how the Pentagon uses the planes. He argued the same should be true for the military’s use of Claude.

    After Amodei left, officials discussed whether to use the Defense Production Act in this situation, which enables the government to exert control over domestic industries. 

    But because officials say they aren’t sure the government can trust Anthropic at this point, the Pentagon may decide to officially designate the company as a “supply chain risk” to push it out of government, two sources said. Anthropic was the first tech company authorized to work on the military’s classified networks. 

    An Anthropic spokesperson said in a statement, “We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do.”

    Hegseth gave Anthropic a deadline of 5 p.m. Friday.

    [ad_2]

    Source link

  • China vs SpaceX in race for space AI data centers

    [ad_1]

    If your phone heats up while running AI, imagine what happens inside a massive data center. Now imagine moving that data center into orbit.

    That is exactly what China and Elon Musk are planning. It is a serious race to build space-based AI data centers powered by sunlight in space.

    At stake? The future of artificial intelligence, energy dominance and who controls the next layer of digital infrastructure.

    China and Elon Musk are racing to build solar-powered AI data centers in orbit, aiming to ease Earth’s growing energy strain. (Paul Hennesy/Anadolu via Getty Images)

    China’s plan: Gigawatt-class space computing

    China’s main space contractor, the China Aerospace Science and Technology Corporation, outlined a five-year plan to build what it calls “gigawatt-class space digital-intelligence infrastructure,” according to reporting cited by CCTV. That phrase may sound bureaucratic. It is not.

    Gigawatt-class means massive energy output. Think industrial scale. These proposed orbital hubs would integrate cloud, edge and device-level computing. In simple terms, data collected on Earth could be processed in space instead of inside giant warehouses in Arizona or Inner Mongolia.

    The vision goes even further. A December policy document describes an industrial-scale “Space Cloud” by 2030. The goal is deep integration of computing power, storage and transmission bandwidth, all powered by solar energy in orbit. China also signaled that space-based solar power tied to AI computing will be a core pillar of its upcoming 15th Five-Year Plan. It’s all part of its national strategy.

    Elon Musk says the lowest-cost AI will be in space

    Meanwhile, Elon Musk is making a similar bet. At the World Economic Forum in Davos, Musk said SpaceX plans to launch solar-powered AI data center satellites within two to three years. He argued that space is the “lowest-cost place to put AI” and predicted that will be true within a few years.

    Why? Solar power in orbit can generate far more energy than panels on the ground. Musk said orbital solar generation can produce roughly five times more power because there are no clouds and no night cycles in the same way as on Earth. SpaceX reportedly expects to use funds from a planned $25 billion IPO to help develop these orbital AI systems.
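    How plausible is that “roughly five times” figure? Here is a back-of-the-envelope check: the solar constant and typical surface peak irradiance below are standard textbook values, while the ground capacity factor is our assumption, not a number from Musk or SpaceX.

    ```python
    # Average power per square meter of panel, orbit vs. ground.
    # Physical constants are standard; the ground capacity factor is assumed.

    SOLAR_CONSTANT = 1361          # W/m^2 above the atmosphere
    GROUND_PEAK = 1000             # W/m^2 typical peak at Earth's surface
    GROUND_CAPACITY_FACTOR = 0.22  # assumed: night, clouds, sun angle, weather

    orbital_avg = SOLAR_CONSTANT   # near-continuous sun in a suitable orbit,
                                   # e.g. a dawn-dusk sun-synchronous orbit
    ground_avg = GROUND_PEAK * GROUND_CAPACITY_FACTOR

    print(f"Orbit-to-ground ratio: {orbital_avg / ground_avg:.1f}x")  # ~6.2x
    ```

    With assumed ground capacity factors between 20% and 28%, the ratio lands roughly between 5x and 7x, bracketing Musk’s claim. Real systems would give some of that back to eclipse periods, radiation-hardening and thermal constraints, so treat the ratio as an upper bound rather than a design number.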

    This makes sense when you consider that AI is devouring electricity. Training and running large models require enormous computing clusters. Power grids are straining in places like Texas and Northern Virginia. So the thinking is simple: if Earth runs short on clean energy for AI, move the servers closer to the sun.

    The real bottleneck: Reusable rockets

    There is only one problem. Getting hardware into space is expensive. SpaceX solved part of that with its Falcon 9 reusable rocket. Reusability dramatically lowers launch costs. It also enabled SpaceX’s Starlink satellite network to dominate low Earth orbit.

    China, on the other hand, has not yet completed a fully successful reusable rocket program capable of repeated, reliable flights. That is a major bottleneck. Without reusability, the cost of launching and maintaining space-based AI infrastructure remains high.

    Still, China achieved a record 93 space launches last year, according to official announcements. Its commercial space startups are maturing quickly. And Beijing has made it clear it wants to become a “world-leading space power” by 2045. In other words, this is a long game.

    Beijing plans a “gigawatt-class” space computing network as part of its long-term strategy for digital and space dominance. (Gabriel V. Cardenas/AFP via Getty Images)

    It is not just about data centers

    China’s five-year plan also includes suborbital space tourism and the gradual development of orbital tourism. That signals a broader push to commercialize space in a way similar to civil aviation.

    At the same time, both the U.S. and China see strategic and military advantages in dominating orbit. China recently inaugurated its first School of Interstellar Navigation within the Chinese Academy of Sciences. The goal is to move from near-Earth orbit to deep space exploration. State media described the next 10 to 20 years as a window for leapfrog development in interstellar navigation.

    Meanwhile, the U.S. is racing to return astronauts to the moon for the first time since the Apollo era. The competition is heating up on multiple fronts. AI infrastructure in space is just one piece of a much larger chessboard.

    Why this matters to you

    You might be thinking, “Great. Billionaires and governments are fighting over satellites. Why should I care?” Here is why. AI is becoming embedded in everything. Search results. Customer service. Medical imaging. Financial systems. Smart homes. All of that runs on computing power. And that computing power runs on energy. If the cheapest and most abundant energy for AI ends up being in orbit, the balance of tech power could shift dramatically. Countries that control space-based AI infrastructure could gain economic leverage, military advantages and technological dominance. This is the next layer of the cloud. Not in a warehouse. Not in a desert. But circling above your head.

    Musk says space will soon be the lowest-cost place to power artificial intelligence, citing constant solar energy in orbit. (Aubrey Gemignani/NASA via Getty Images)

    Kurt’s key takeaways

    For decades, space was about flags and footprints. Today, the focus is shifting toward servers and solar arrays as governments and private companies rethink where the world’s most powerful computers should operate. China is pursuing a “Space Cloud,” while Elon Musk argues that AI belongs in orbit. Both are racing toward a future where advanced computing systems are powered by uninterrupted sunlight above Earth. That shift sounds bold and carries real risk. However, if AI continues to accelerate and energy demand keeps climbing, moving computing infrastructure into space may start to look less radical and more inevitable.

    If the infrastructure powering AI moves into orbit, who should control it? Let us know by writing to us at Cyberguy.com.

    Copyright 2026 CyberGuy.com. All rights reserved.

    [ad_2]

    Source link

  • Hegseth and Anthropic CEO set to meet as debate intensifies over the military’s use of AI

    [ad_1]

    WASHINGTON — Defense Secretary Pete Hegseth plans to meet Tuesday with the CEO of Anthropic, the only one of the leading artificial intelligence companies not to supply its technology to a new U.S. military internal network.

    Anthropic, maker of the chatbot Claude, declined to comment on the meeting but CEO Dario Amodei has made clear his ethical concerns about unchecked government use of AI, including the dangers of fully autonomous armed drones and of AI-assisted mass surveillance that could track dissent.

    The meeting between Hegseth and Amodei was confirmed by a defense official who was not authorized to comment publicly and spoke on condition of anonymity.

    It underscores the debate over AI’s role in national security and concerns about how the technology could be used in high-stakes situations involving lethal force, sensitive information or government surveillance. It also comes as Hegseth has vowed to root out what he calls a “woke culture” in the armed forces.

    “A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow,” Amodei wrote in an essay last month.

    The Pentagon announced last summer that it was awarding defense contracts to four AI companies — Anthropic, Google, OpenAI and Elon Musk’s xAI. Each contract is worth up to $200 million.

    Anthropic was the first AI company to get approved for classified military networks, where it works with partners like Palantir. The other three companies, for now, are only operating in unclassified environments.

    By early this year, Hegseth was highlighting only two of them: xAI and Google.

    The defense secretary said in a January speech at Musk’s space flight company, SpaceX, in South Texas that he was shrugging off any AI models “that won’t allow you to fight wars.”

    Hegseth said his vision for military AI systems means that they operate “without ideological constraints that limit lawful military applications,” before adding that the Pentagon’s “AI will not be woke.”

    In January, Hegseth said Musk’s artificial intelligence chatbot Grok would join the Pentagon network, called GenAI.mil. The announcement came days after Grok — which is embedded into X, the social media network owned by Musk — drew global scrutiny for generating highly sexualized deepfake images of people without their consent.

    OpenAI announced in early February that it, too, would join the military’s secure AI platform, enabling service members to use a custom version of ChatGPT for unclassified tasks.

    Anthropic has long pitched itself as the more responsible and safety-minded of the leading AI companies, ever since its founders quit OpenAI to form the startup in 2021.

    The uncertainty with the Pentagon is putting those intentions to the test, according to Owen Daniels, associate director of analysis and fellow at Georgetown University’s Center for Security and Emerging Technology.

    “Anthropic’s peers, including Meta, Google and xAI, have been willing to comply with the department’s policy on using models for all lawful applications,” Daniels said. “So the company’s bargaining power here is limited, and it risks losing influence in the department’s push to adopt AI.”

    In the AI craze that followed the release of ChatGPT, Anthropic closely aligned with President Joe Biden’s administration in volunteering to subject its AI systems to third-party scrutiny to guard against national security risks.

    Amodei, the CEO, has warned of AI’s potentially catastrophic dangers while rejecting the label that he’s an AI “doomer.” He argued in the January essay that “we are considerably closer to real danger in 2026 than we were in 2023” but that those risks should be managed in a “realistic, pragmatic manner.”

    This would not be the first time Anthropic’s advocacy for stricter AI safeguards has put it at odds with the Trump administration. Anthropic needled chipmaker Nvidia publicly, criticizing Trump’s proposals to loosen export controls to enable some AI computer chips to be sold in China. The AI company, however, remains a close partner with Nvidia.

    The Trump administration and Anthropic also have been on opposite sides of a lobbying push to regulate AI in U.S. states.

    Trump’s top AI adviser, David Sacks, accused Anthropic in October of “running a sophisticated regulatory capture strategy based on fear-mongering.”

    Sacks made the remarks on X in response to an Anthropic co-founder, Jack Clark, writing about his attempt to balance technological optimism with “appropriate fear” about the steady march toward more capable AI systems.

    Anthropic hired a number of ex-Biden officials soon after Trump’s return to the White House, but it’s also tried to signal a bipartisan approach. The company recently added Chris Liddell, a former White House official from Trump’s first term, to its board of directors.

    The Pentagon-Anthropic debate is reminiscent of an uproar several years ago when some tech workers objected to their companies’ participation in Project Maven, a Pentagon drone surveillance program. While some workers quit over the project and Google itself dropped out, the Pentagon’s reliance on drone surveillance has only increased.

    Similarly, “the use of AI in military contexts is already a reality and it is not going away,” Daniels said.

    “Some contexts are lower stakes, including for back-office work, but battlefield deployments of AI entail different, higher-stakes risks,” he said, referring to the use of lethal force or weapons like nuclear arms. “Military users are aware of these risks and have been thinking about mitigation for almost a decade.”

    ___

    O’Brien reported from Providence, Rhode Island.

    [ad_2]

    Source link

  • Bloomberg Philanthropies Mayors Challenge winners

    [ad_1]

    The winners of this year’s Bloomberg Philanthropies Mayors Challenge created innovative projects to improve their cities’ core services – many using some combination of artificial intelligence and the wisdom of their residents.

    That’s what South Bend, Indiana, Mayor James Mueller did with his initiative that uses AI to interpret data about residents, like a family falling behind on paying its water bill, and to help offer them services and support that could prevent larger issues.

    “Technology is not necessarily good or bad – it’s how it’s used and how you protect against abuses,” said Mueller, a Democrat who has been mayor since 2020. “We’re trying to use cutting-edge tools to deliver city services in a proactive way that meets our residents’ needs.”

    The 24 winners announced Tuesday range from Boise, Idaho, which is using geothermal energy to lower residents’ heating bills, to Beira, Mozambique, which is relocating fishermen and their families from flood-prone coastal homes to safer inland houses. Each will receive $1 million to implement its program, as well as support from Bloomberg Philanthropies experts to help the new initiative succeed.

    The hope, says former New York City Mayor Michael R. Bloomberg, founder of Bloomberg Philanthropies and Bloomberg L.P., is that successful programs from Mayors Challenge winners can be used in other cities.

    “The most effective city halls are bold, creative, and proactive in solving problems and meeting residents’ needs – and we launched the Mayors Challenge to help more of them succeed,” Bloomberg said in a statement.

    James Anderson, head of government innovation programs at Bloomberg Philanthropies, said many of this year’s winners are integrating AI technology into their work in sophisticated ways, bringing municipal governments closer to the residents they serve.

    “Testing and learning and adapting new ideas don’t generally get funded with public dollars,” Anderson said. “It is up to philanthropy to support experimentation.”

    Vico Sotto, mayor of Pasig City in the Philippines, said becoming one of this year’s Mayors Challenge winners will speed up his project to build floating parks in the Pasig River, creating new community space and reducing flooding threats in the area. Without the support of Bloomberg Philanthropies, Sotto said, the initiative wouldn’t have been able to start for another year or two.

    “The government doesn’t have a great reputation when it comes to maintaining infrastructure,” Sotto said. “So we will be creating a governance council, including people who live in the area, so definitely they’re not going to abandon these parks. They’re going to take care of them because they’re using them as well.”

    In Lafayette, Louisiana, the city-parish had the opposite problem. Lafayette wanted to update parts of its sewer system, but because some parts were on homeowners’ property the city wasn’t allowed to pay for it.

    Mayor-President Monique Blanco Boulet said the Mayors Challenge encouraged her administration to figure out a solution that will now allow Lafayette to make the repairs and, as a result, encourage development in the city. The plan was also named a Mayors Challenge winner.

    “Bloomberg Philanthropies, the staff, Michael Bloomberg – all of them – have such a global impact in ways that most people will never know,” said Boulet, a Republican elected in 2023. “They bring in a level of capacity and give you the space to really be creative and to come up with solutions that can change lives.”

    South Bend’s Mueller said that the Mayors Challenge comes at a time when more and more global problems need to be solved at a local level.

    “Trust in government is at an all-time low, but local governments consistently perform better in surveys about trust from their residents,” Mueller said. “It is critical for us to maintain that level of trust with our residents and build it even further. So that’s why we’re always looking at innovative ways of doing things better and making the city a better place to live.”

    The winners of the 2026 Bloomberg Philanthropies Mayors Challenge are: As-Salt, Jordan; Barcelona, Spain; Beira, Mozambique; Belfast, Northern Ireland; Benin City, Nigeria; Boise, Idaho, United States; Budapest, Hungary; Cape Town, South Africa; Cartagena, Colombia; Fez, Morocco; Fukuoka, Japan; Ghaziabad, India; Ghent, Belgium; Kanifing, The Gambia; Lafayette, Louisiana, United States; Medellín, Colombia; Netanya, Israel; Pasig, Philippines; Rio de Janeiro, Brazil; South Bend, Indiana, United States; Surabaya, Indonesia; Toronto, Canada; Turku, Finland; Visakhapatnam, India.

    _____

    Associated Press coverage of philanthropy and nonprofits receives support through the AP’s collaboration with The Conversation US, with funding from Lilly Endowment Inc. The AP is solely responsible for this content. For all of AP’s philanthropy coverage, visit https://apnews.com/hub/philanthropy.

    [ad_2]

    Source link

  • ChatGPT-maker OpenAI safety representatives summoned to Canada after school shooting

    [ad_1]

    TORONTO — Representatives of ChatGPT-maker OpenAI have been summoned to Ottawa after the company said last week that it had considered, but decided against, alerting Canadian police to the activities of a person who months later committed one of the worst school shootings in the country’s history.

    Artificial Intelligence Minister Evan Solomon said Monday that he expects the company’s top safety representatives to explain its protocols and how it decides to forward cases to law enforcement when he meets with them on Tuesday.

    OpenAI said it identified the account of Jesse Van Rootselaar last June via its abuse detection efforts, flagging it for “furtherance of violent activities.”

    The San Francisco technology company said that it considered whether to refer the account to the Royal Canadian Mounted Police, or RCMP, but determined at the time that the account activity didn’t meet a threshold for referral to law enforcement. OpenAI banned the account in June for violating its usage policy.

    The 18-year-old killed eight people in a remote part of British Columbia this month and died from a self-inflicted gunshot wound.

    OpenAI said that the threshold for referring a user to law enforcement is whether the case involves an imminent and credible risk of serious physical harm to others. The company said that it didn’t identify credible or imminent planning. The Wall Street Journal, which first reported OpenAI’s revelation, said about a dozen employees debated informing Canadian police.

    OpenAI said that it wasn’t until after learning of the school shooting that employees reached out to the RCMP with information on the individual and their use of ChatGPT.

    Solomon said he contacted OpenAI immediately upon reading reports that the company had not contacted law enforcement in a timely manner.

    “I have summoned the senior safety team from OpenAI to come here to Ottawa from the United States,” Solomon said. “Canadians expect, first of all, that their children particularly are kept safe and these organizations act in a responsible manner.”

    Solomon said that some of his representatives had already met with OpenAI officials on Sunday. He wouldn’t say whether the Canadian government intends to regulate AI chatbots like ChatGPT, but insisted that all options are on the table.

    Police said Van Rootselaar first killed her mother and stepbrother at the family home before attacking the nearby school. Van Rootselaar had a history of mental health contacts with police.

    The motive for the shooting remains unclear.

    The town of Tumbler Ridge in the Canadian Rockies is more than 1,000 kilometers (600 miles) northeast of Vancouver, near the provincial border with Alberta. Police said the victims included a 39-year-old teaching assistant and five students, ages 12 to 13.

    The attack was Canada’s deadliest rampage since 2020, when a gunman in Nova Scotia killed 13 people and set fires that left another nine dead.

    [ad_2]

    Source link