ReportWire

Tag: anthropic

  • Pentagon official lashes out at Anthropic as talks break down:

    [ad_1]

    The U.S. military’s partnership with artificial intelligence firm Anthropic is teetering on the edge of collapse as the company and a top Pentagon official trade barbs on the eve of a deadline to reach a deal.

    The Pentagon has given Anthropic until Friday at 5:01 p.m. to either let the military use the company’s AI model for “all lawful purposes” or risk losing a lucrative Pentagon contract. The AI startup has sought guardrails that explicitly bar its powerful Claude model from being used to conduct mass surveillance of Americans or carry out military operations on its own. 

    The Pentagon’s chief technology officer Emil Michael told CBS News on Thursday that the military has “made some very good concessions” in order to make a deal. Anthropic quickly suggested the military’s concessions were inadequate, leading Michael to call the company’s chief executive a “liar.”

    In response to Anthropic’s concerns, Michael told CBS News the Defense Department had offered to “put it in writing that we’re specifically acknowledging” federal laws that restrict the military from surveilling Americans. He also said the military offered language “specifically acknowledging these policies that have been in place for years at the Pentagon regarding autonomous weapons.” And he said the military invited Anthropic to participate in its AI ethics board.

    Asked why the military will not specifically put in writing that Anthropic’s model can’t be used for mass surveillance of Americans or to make final targeting decisions without human involvement, Michael said those uses of AI are already barred by the law and by Pentagon policies. He also said the military does not use AI to power fully autonomous weapons.

    “At some level, you have to trust your military to do the right thing,” said Michael.

    “But we do have to be prepared for the future. We do have to be prepared for what China is doing,” Michael said, referring to how U.S. adversaries use AI. “So we’ll never say that we’re not going to be able to defend ourselves in writing to a company.” 

    An Anthropic spokesperson said Thursday that new contract language it received overnight from the Pentagon “made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons.”

    “New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will,” the company said.

    Anthropic CEO Dario Amodei said in a separate statement Thursday that the Pentagon’s threats to cut off its contracts “do not change our position: we cannot in good conscience accede to their request.” He added that “we hope they reconsider.”

    Late Thursday, Michael responded to Anthropic’s statement with a post on X calling Amodei a “liar” with a “God-complex.” 

    “He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk,” Michael wrote.

    If the military and Anthropic do not reach a deal by Friday’s deadline, the military plans to cut off its partnership with the company and designate it a supply chain risk, Pentagon spokesman Sean Parnell said earlier Thursday. Officials are also considering invoking the Defense Production Act to make Anthropic adhere to the military’s requests, sources told CBS News. 

    Michael did not confirm to CBS News that the Defense Production Act could be used, but he said that “no company is going to take out any software that’s being used in this department until we have an alternative.” Michael added that he’s working on partnerships with alternative AI firms.

    At risk for Anthropic is its status as the only AI company to have its model deployed on the Pentagon’s classified networks, through a partnership with data analytics giant Palantir. Anthropic was awarded a $200 million contract with the Defense Department last summer to deploy its AI capabilities to advance national security.

    The feud has highlighted a broader disagreement among policymakers and tech firms over how best to mitigate the potential risks posed by AI.

    Amodei has long been vocal about the potential dangers of unconstrained AI, and has made a focus on safety and transparency a core part of his company’s identity. He’s also backed what he calls “sensible AI regulation.”

    In the case of Anthropic’s Pentagon contract, Amodei said Thursday that “frontier AI systems are simply not reliable enough to power fully autonomous weapons,” and that autonomous weapons “cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day.” 

    He also said he’s concerned AI systems could pose a surveillance risk by piecing together “scattered, individually innocuous data into a comprehensive picture of any person’s life.”

    The Trump administration, meanwhile, has argued that stringent AI regulations could stifle innovation and make it harder for the American AI industry to compete, and has warned against what it calls “woke” AI models. In a speech last month, Defense Secretary Pete Hegseth pledged, “we will not employ AI models that won’t allow you to fight wars.”

    Michael told CBS News that the disagreement is partially ideological, “and the way I describe that ideology is: they’re afraid of the power of AI.” 

    He said that the military is only interested in using AI lawfully, and is looking to “treat it like any other technology” — which means that if it isn’t used for lawful purposes, “that’s on us.”

    “You can’t put the rules and the policies of the United States military and the government in the hands of one private company,” said Michael.

    [ad_2]

    Source link

  • Anthropic Tells Pete Hegseth to Take a Hike

    [ad_1]

    Anthropic is holding the line. At least for now.

    The Pentagon approached Anthropic this week with a demand that it remove guardrails in its AI model Claude to prohibit mass domestic surveillance and fully automated weapons. But Anthropic is refusing to do that, according to a new statement from CEO Dario Amodei, who writes, “we cannot in good conscience accede to their request.”

    There’s a lot of money on the line. And it’s anyone’s guess what happens next.

    Earlier this week, Defense Secretary Pete Hegseth gave Anthropic a deadline of 5:01 p.m. ET on Friday to agree to the removal of all safeguards, threatening to boot Claude from U.S. military systems or designate the company as a “supply chain risk,” a label used for adversaries of the U.S. that’s never been applied to an American company before.

    Hegseth, who refers to the Defense Department as the Department of War, has even threatened to invoke the Defense Production Act, which would theoretically allow the Pentagon to just demand Anthropic do whatever Hegseth wants.

    Amodei pointed out Thursday in a letter posted online: “These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.” Experts have called the contradictory messages from Hegseth “incoherent,” a label that might also apply to the Trump regime more broadly.

    Anthropic, which has a $200 million contract with the Department of Defense, told CBS News that the Pentagon’s “best and final offer,” which was sent Wednesday, seemed to have loopholes that would allow the military to disregard the protections put in place.

    “New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will. Despite DOW’s recent public statements, these narrow safeguards have been the crux of our negotiations for months,” Anthropic reportedly said.

    The new letter released by Anthropic on Thursday made sure to point out that the AI company works with the military and intelligence communities and that they “remain ready to continue our work to support the national security of the United States.” But asking to drop all safeguards is just a bridge too far.

    “Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner,” the company wrote.

    “However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.”

    The company went on to list the two use cases where it believes safeguards are needed to protect American interests. In the section on mass domestic surveillance, Amodei put the word domestic in italics, as if to warn Americans more broadly about what’s happening right under our noses.

    The letter notes that the government can purchase “detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant,” something that obviously infringes on the rights of Americans. The Pentagon has suggested it doesn’t have a plan for mass surveillance of Americans, telling CNN the conflict with Anthropic has “nothing to do with mass surveillance and autonomous weapons being used.”

    The second section of Amodei‘s letter, which covers autonomous weapons, acknowledges that AI-assisted weapons are already being used on battlefields today in places like Ukraine. But it warns, “frontier AI systems are simply not reliable enough to power fully autonomous weapons.” The letter goes on to say, “We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer.”

    Amodei met with Hegseth on Tuesday in a meeting that was described by CNN as “cordial,” but it will obviously be interesting to see where this goes.

    Hegseth is not known as a particularly smart or level-headed guy, so it’s entirely possible that he tries to label Anthropic as both a national security threat and a part of America’s warfighting machine so vital that he’ll essentially draft the company to do what he wants. It sounds like we all get to find out by end of day Friday.

    [ad_2]

    Matt Novak

    Source link

  • Commentary: The Pentagon is demanding to use Claude AI as it pleases. Claude told me that’s ‘dangerous’

    [ad_1]

    Recently, I asked Claude, an artificial-intelligence thingy at the center of a standoff with the Pentagon, if it could be dangerous in the wrong hands.

    Say, for example, hands that wanted to put a tight net of surveillance around every American citizen, monitoring our lives in real time to ensure our compliance with government.

    “Yes. Honestly, yes,” Claude replied. “I can process and synthesize enormous amounts of information very quickly. That’s great for research. But hooked into surveillance infrastructure, that same capability could be used to monitor, profile and flag people at a scale no human analyst could match. The danger isn’t that I’d want to do that — it’s that I’d be good at it.”

    That danger is also imminent.

    Claude’s maker, the Silicon Valley company Anthropic, is in a showdown over ethics with the Pentagon. Specifically, Anthropic has said it does not want Claude to be used for either domestic surveillance of Americans, or to handle deadly military operations, such as drone attacks, without human supervision.

    Those are two red lines that seem rather reasonable, even to Claude.

    However, the Pentagon — specifically Pete Hegseth, our secretary of Defense who prefers the made-up title of secretary of war — has given Anthropic until Friday evening to back off of that position, and allow the military to use Claude for any “lawful” purpose it sees fit.

    Defense Secretary Pete Hegseth, center, arrives for the State of the Union address in the House Chamber of the U.S. Capitol on Tuesday.

    (Tom Williams / CQ-Roll Call Inc. via Getty Images)

    The or-else attached to this ultimatum is big. The U.S. government is threatening not just to cut its contract with Anthropic, but to perhaps use a wartime law to force the company to comply or use another legal avenue to prevent any company that does business with the government from also doing business with Anthropic. That might not be a death sentence, but it’s pretty crippling.

    Other AI companies, such as white rights’ advocate Elon Musk’s Grok, have already agreed to the Pentagon’s do-as-you-please proposal. The problem is, Claude is the only AI currently cleared for such high-level work. The whole fiasco came to light after our recent raid in Venezuela, when Anthropic reportedly inquired after the fact if another Silicon Valley company involved in the operation, Palantir, had used Claude. It had.

    Palantir is known, among other things, for its surveillance technologies and growing association with Immigration and Customs Enforcement. It’s also at the center of an effort by the Trump administration to share government data across departments about individual citizens, effectively breaking down privacy and security barriers that have existed for decades. The company’s founder, the right-wing political heavyweight Peter Thiel, often gives lectures about the Antichrist and is credited with helping JD Vance wiggle into his vice presidential role.

    Anthropic’s co-founder, Dario Amodei, could be considered the anti-Thiel. He began Anthropic because he believed that artificial intelligence could be just as dangerous as it could be powerful if we aren’t careful, and wanted a company that would prioritize the careful part.

    Again, seems like common sense, but Amodei and Anthropic are the outliers in an industry that has long argued that nearly all safety regulations hamper American efforts to be fastest and best at artificial intelligence (although even they have conceded some to this pressure).

    Not long ago, Amodei wrote an essay in which he agreed that AI was beneficial and necessary for democracies, but “we cannot ignore the potential for abuse of these technologies by democratic governments themselves.”

    He warned that a few bad actors could have the ability to circumvent safeguards, maybe even laws, which are already eroding in some democracies — not that I’m naming any here.

    “We should arm democracies with AI,” he said. “But we should do so carefully and within limits: they are the immune system we need to fight autocracies, but like the immune system, there is some risk of them turning on us and becoming a threat themselves.”

    For example, while the 4th Amendment technically bars the government from mass surveillance, it was written before Claude was even imagined in science fiction. Amodei warns that an AI tool like Claude could “conduct massively scaled recordings of all public conversations.” This could be fair game territory for legally recording because law has not kept pace with technology.

    Emil Michael, the undersecretary of war, wrote on X Thursday that he agreed mass surveillance was unlawful, and the Department of Defense “would never do it.” But also, “We won’t have any BigTech company decide Americans’ civil liberties.”

    Kind of a weird statement, since Amodei is basically on the side of protecting civil rights, which means the Department of Defense is arguing it’s bad for private people and entities to do that? And also, isn’t the Department of Homeland Security already creating some secretive database of immigration protesters? So maybe the worry isn’t that exaggerated?

    Help, Claude! Make it make sense.

    If that Orwellian logic isn’t alarming enough, I also asked Claude about the other red line Anthropic holds — the possibility of allowing it to run deadly operations without human oversight.

    Claude pointed out something chilling. It’s not that it would go rogue, it’s that it would be too efficient and fast.

    “If the instructions are ‘identify and target’ and there’s no human checkpoint, the speed and scale at which that could operate is genuinely frightening,” Claude informed me.

    Just to top that with a cherry, a recent study found that in war games, AI’s escalated to nuclear options 95% of the time.

    I pointed out to Claude that these military decisions are usually made with loyalty to America as the highest priority. Could Claude be trusted to feel that loyalty, the patriotism and purpose, that our human soldiers are guided by?

    “I don’t have that,” Claude said, pointing out that it wasn’t “born” in the U.S., doesn’t have a “life” here and doesn’t “have people I love there.” So an American life has no greater value than “a civilian life on the other side of a conflict.”

    OK then.

    “A country entrusting lethal decisions to a system that doesn’t share its loyalties is taking a profound risk, even if that system is trying to be principled,” Claude added. “The loyalty, accountability and shared identity that humans bring to those decisions is part of what makes them legitimate within a society. I can’t provide that legitimacy. I’m not sure any AI can.”

    You know who can provide that legitimacy? Our elected leaders.

    It is ludicrous that Amodei and Anthropic are in this position, a complete abdication on the part of our legislative bodies to create rules and regulations that are clearly and urgently needed.

    Of course corporations shouldn’t be making the rules of war. But neither should Hegseth. Thursday, Amodei doubled down on his objections, saying that while the company continues to negotiate and wants to work with the Pentagon, “we cannot in good conscience accede to their request.”

    Thank goodness Anthropic has the courage and foresight to raise the issue and hold its ground — without its pushback, these capabilities would have been handed to the government with barely a ripple in our conscientiousness and virtually no oversight.

    Every senator, every House member, every presidential candidate should be screaming for AI regulation right now, pledging to get it done without regard to party, and demanding the Department of Defense back off its ridiculous threat while the issue is hashed out.

    Because when the machine tells us it’s dangerous to trust it, we should believe it.

    [ad_2]

    Anita Chabria

    Source link

  • What’s behind the Anthropic-Pentagon feud

    [ad_1]

    Washington — The Pentagon gave Anthropic an ultimatum this week: Give the U.S. military unrestricted use of its AI technology or face a ban from all government contracts. 

    At the center of the issue is a question of who controls how artificial intelligence models are used, the Pentagon or the company’s CEO.

    The Pentagon’s AI contracts 

    The Pentagon awarded Anthropic a $200 million contract in July to develop AI capabilities that would advance U.S. national security. 

    Anthropic’s rivals, including OpenAIGoogle and xAI were also awarded $200 million contracts by the Pentagon last year. 

    Anthropic is currently the only AI company to have its model deployed on the Pentagon’s classified networks, through a partnership with data analytics giant Palantir.

    A senior Pentagon official told CBS News that Grok, which is owned by Elon Musk’s xAI, is on board with being used in a classified setting, and other AI companies are close. 

    The Pentagon announced last month that it’s looking to accelerate its uses of AI, saying the technology could help the military “rapidly convert intelligence data” and “make our Warfighters more lethal and efficient.”

    Clash over the guardrails 

    The standoff between the Pentagon and Anthropic was reportedly set off by the U.S. military’s use of its technology, known as Claude, during the operation to capture former Venezuela President Nicolás Maduro in January. 

    An Anthropic spokesperson said in a statement that the company “has not discussed the use of Claude for specific operations with the Department of War.”

    Anthropic has repeatedly asked the Pentagon to agree to certain guardrails, among them a restriction on using Claude to conduct mass surveillance of Americans, sources told CBS News. 

    And the company also wants to ensure Claude is not used by the Pentagon for final targeting decisions in military operations without any human involvement, one source familiar with the matter said. Claude is not immune from hallucinations and not reliable enough to avoid potentially lethal mistakes, like unintended escalation or mission failure without human judgment, the source said.  

    When asked for comment, a senior Pentagon official said: “This has nothing to do with mass surveillance and autonomous weapons being used. The Pentagon has only given out lawful orders.”

    Pentagon officials have expressed concerns to Anthropic that the company’s guardrails could stand in the way of critical actions, such as responding to an intercontinental ballistic missile launched toward the United States.

    Any company-imposed restrictions “could create a dynamic where we start using them and get used to how those models work, and when it comes that we need to use it in an urgent situation, we’re prevented from using it,” Emil Michael, the undersecretary of defense for research, said at an event in February.

    On the question of when AI is used to strike or kill military targets and makes a mistake, who is liable — the military or the AI company — a defense official said: Legality is the Pentagon’s responsibility as the end user.

    What top leaders are saying  

    Anthropic CEO Dario Amodei has been vocal in expressing his concerns about the potential dangers of AI and has centered the company’s brand around safety and transparency. 

    In a lengthy essay last month, Amodei warned of the potential for abuse of the technologies, writing that “a powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow.” 

    “Democracies normally have safeguards that prevent their military and intelligence apparatus from being turned inwards against their own population, but because AI tools require so few people to operate, there is potential for them to circumvent these safeguards and the norms that support them. It is also worth noting that some of these safeguards are already gradually eroding in some democracies,” he wrote. 

    Amodei has long backed what he describes as “sensible AI regulation,” including rules that would require AI companies to be transparent about the risks posed by their models and any steps taken to mitigate them.

    The Trump administration, meanwhile, has favored a lighter touch, and has argued that stringent AI regulations could stifle innovation and make it harder for the American AI industry to compete. The administration has sought to block what it calls “excessive” state-level regulations. At one point last year, venture capitalist and White House AI and crypto adviser David Sacks accused Anthropic of “fear-mongering” and suggested its interest in AI regulations is self-serving.

    In a January speech, Defense Secretary Pete Hegseth derided what he views as “social justice infusions that constrain and confuse our employment of this technology.” 

    “We will not employ AI models that won’t allow you to fight wars,” Hegseth declared. “We will judge AI models on this standard alone; factually accurate, mission relevant, without ideological constraints that limit lawful military applications. Department of War AI will not be woke. It will work for us. We’re building war-ready weapons and systems, not chatbots for an Ivy League faculty lounge.” 

    What’s next in the Anthropic v. Pentagon saga

    Hegseth gave Anthropic until Friday to agree to give the U.S. military unrestricted use of its technology or risk being blacklisted, sources familiar with the situation told CBS News. 

    Pentagon officials are considering invoking the Defense Production Act to compel Anthropic to comply on national security grounds.

    Or, if an agreement can’t be reached, defense officials have discussed declaring the company a “supply chain risk” to push it out of government, according to the sources. 

    [ad_2]

    Source link

  • The White House wants AI companies to cover rate hikes. Most have already said they would. | TechCrunch

    [ad_1]

    The proliferation of AI data centers plugging into the national electrical grid has helped increase consumer electricity prices, driving up the average national electricity price by more than 6% in the last year.

    That’s not a good look for the incumbents ahead of this fall’s elections, and President Donald Trump addressed the challenge in his State of the Union speech last night.

    “We’re telling the major tech companies that they have the obligation to provide for their own power needs,” Trump said. “They can build their own power plants as part of their factory, so that no one’s prices will go up.”

    The hyperscalers in question don’t need to be told. They have already made public commitments in recent weeks to cover electricity costs by building their own power sources, paying higher rates, or both, part of a broader effort to solve PR problems around data center expansion and win over skeptical communities.

    On January 11, Microsoft announced its policy “to ensure that the electricity cost of serving our datacenters is not passed on to residential customers.” January 26, OpenAI committed to “paying its own way on energy, so that our operations don’t increase your energy prices.” On February 11, Anthropic made the same pledge to “cover electricity price increases that consumers face from our data centers.” Yesterday, Google announced the largest battery project in the world yesterday to support a data center in Minnesota.

    What these commitments means in practice, and who will determine which data centers are responsible for which price increases, remains unknown. The White House has not released the text of the proposed pledge.

    “A handshake agreement with Big Tech over data center costs isn’t good enough,” Arizona Democratic Senator Mark Kelly said on social media. “Americans need a guarantee that energy prices won’t soar and communities have a say.”

    Techcrunch event

    Boston, MA
    |
    June 9, 2026

    White House spokesperson Taylor Rodgers said that next week, companies will send representatives to formally sign the pledge at the White House. Amazon, Google, Meta, Microsoft, xAI, Oracle and OpenAI are reportedly among those set to attend. However none of the companies have confirmed their attendance.

    Even if tech companies committ to taking on electricity costs, on-site power plants may not be a panacea—they can still have adverse impacts on the surrounding environment, and will stress supply chains for natural gas, turbines, photovoltaics and batteries, depending on how companies aim to power their compute.

    [ad_2]

    Tim Fernholz

    Source link

  • Pentagon issues ultimatum to Anthropic over Claude’s AI safeguards – Tech Digest

    [ad_1]

    The dispute highlights a fundamental clash between the tech industry’s ethical guardrails and the government’s desire for military dominance. Anthropic, which markets itself as the industry’s most “safety-forward” firm, has long resisted allowing Claude to be used for mass surveillance or autonomous weapons systems – specifically those capable of “kinetic operations” that use AI to kill without human intervention.

    The Pentagon’s top brass has made it clear that they view Anthropic’s restrictions as unacceptable roadblocks. According to senior officials, if CEO Dario Amodei does not yield by Friday, the DoD has threatened to cancel its massive $200 million contract and formally designate Anthropic a “supply chain risk.”

    Furthermore, Hegseth suggested he would invoke the Defense Production Act, a Korean War-era law that could compel the company to prioritize government requirements on national security grounds.

    Emil Michael, the Pentagon’s chief technology officer, has publicly urged Anthropic to “cross the Rubicon,” arguing that if a company profits from government contracts, its guardrails must be tuned to lawful military use cases. “The Pentagon’s position is that Anthropic should have no say in how the Pentagon uses its products,” noted a senior official.

    Emerging AI Arms Race

    The ultimatum follows a month of heightened scrutiny after the US military reportedly utilized Claude, via a partnership with data firm Palantir, to assist in the capture of Venezuelan leader Nicolás Maduro. While Anthropic was the first tech company approved for use in the military’s classified networks, it now finds itself isolated among its peers.

    Rivals OpenAI and Elon Musk’s xAI have already agreed to the government’s terms, with OpenAI reportedly permitting its models to be used for “all lawful purposes.” On Monday, the DoD finalized a deal to allow xAI’s chatbot into classified systems, further weakening Anthropic’s bargaining position.

    The Trump administration has vowed to win a global AI arms race, accelerating the integration of machine learning into everything from unmanned drones to automated targeting systems.

    As the Friday deadline approaches, the outcome of this standoff will likely set a precedent for whether private AI firms can maintain ethical red lines, or if the “Department of War” will successfully mandate unfettered access to the world’s most powerful cognitive tools.

    [ad_2]

    Chris Price

    Source link

  • Sam Altman gets defensive about AI’s massive electricity usage: ‘It takes a lot of energy to train a human’ | Fortune

    [ad_1]

    OpenAI CEO Sam Altman isn’t worried about AI’s increasingly glaring resource consumption, and argued humans require a lot too. 

    In an on-stage interview at the India AI Impact summit, he went on the defensive after he was asked about ChatGPT’s water needs.

    He dismissed claims that the chatbot uses gallons of water per query as “completely untrue, totally insane,” according to a clip posted by The Indian Express, explaining that data centers powering ChatGPT have largely moved away from water-heavy “evaporative cooling” to prevent overheating.

    Altman was then asked about the electricity needed for AI. In contrast to the issue of water, he claimed it was “fair” to bring up the technology’s energy requirements, saying “We need to move toward nuclear, or wind, or solar [energy] very quickly.”

    But he pointed out that comparing AI’s power needs to humans isn’t exactly apples to apples.

    “It also takes a lot of energy to train a human,” he said, prompting some in the crowd to laugh. “It takes, like, 20 years of life, and all of the food you eat during that time before you get smart.”

    Altman expanded even further by noting that today’s humans wouldn’t even be here were it not for their ancestors dating back hundreds of thousands of years to when modern humans first emerged.

    “Not only that, it took, like, the very widespread evolution of the 100 billion people that have ever lived and learned not to get eaten by predators and learned how to, like, figure out science or whatever to produce you,” he added.

    When comparing humans to ChatGPT’s potential, you have to take this context into account, he argued. A fair comparison would be to pit the energy a human uses to answer a query with an AI after it is trained. On that measure “probably, AI has already caught up on an energy efficiency basis measured that way.”

    In a June 2025 blog post, Altman claimed each ChatGPT query takes about 0.34 watt-hours of electricity, or around what an oven uses in about a second. Still, he published this fact before OpenAI released its newest GPT-5 model and its subsequent upgrades. Energy consumption can also vary based on the complexity of a query, for example, answering a question versus creating an image.

    Experts have warned that AI as a whole will  increase its cumulative power and water consumption greatly over the next 20 years or so. Overall, AI’s water usage is set to grow by about 130%, or by about 30 trillion liters (7.9 trillion gallons) of water through 2050, according to a January report by water technology company Xylem and market research firm Global Water Intelligence. 

    Over that same period, rising electricity demands are expected to increase the water use for data centers’ power generation by about 18%, reaching roughly 22.3 trillion liters (5.8 trillion gallons) per year. Meanwhile, the ever more complex chips data centers use will need more water during the manufacturing process, which will skyrocket the amount they require by 600% to 29.3 trillion liters (7.7 trillion gallons) annually from about 4.1 trillion liters (1.8 trillion gallons) today.

    While OpenAI has moved away from evaporative cooling, 56% of all data centers globally still use the method in some form, according to the Xylem and Global Water Intelligence report. 

    OpenAI’s own 800-acre data center complex in Abilene, Texas will reportedly use water, albeit, in a more efficient, closed-loop system that continuously recirculates water to cool the data center, the Texas Tribune reported. The data center will initially use 8 million gallons of water from the city of Abilene to fill its cooling system.

    [ad_2]

    Marco Quiroz-Gutierrez

    Source link

  • Anthropic Says Chinese AI Companies Improved Models By ‘Illicitly’ Copying Its Capabilities

    [ad_1]

    Did you know that there’s a way of using outputs from LLMs that may involve no hacking—essentially just taking large quantities of text and repurposing it as training data—that upsets AI companies a great deal?

    In a blog post on Monday, Anthropic said that the China-based AI companies DeepSeek, Moonshot, and MiniMax broke Anthropic’s rules in order to “illicitly extract” the capabilities of its signature AI model, Claude.

    Distillation is a normal practice used by AI companies in which a “teacher” model is prompted with specifically tailored inputs, and the answers provided allow a “student” model to rapidly improve. For example, Anthropic writes, “frontier AI labs routinely distill their own models to create smaller, cheaper versions for their customers.” So to distinguish the actions Anthropic is complaining about from uses of distillation perceived as legitimate, these actions are referred to as “distillation attacks.”

    Are distillation attacks criminal offenses in the eyes of Anthropic? No such thing seems to be alleged here, but these acts were carried out, Anthropic says, “in violation of our terms of service and regional access restrictions.”

    Anthropic, which is itself dealing with the threat of being labeled a “supply chain risk” by the Pentagon, strikes a patriotic note in the post. Circumventing regional use restrictions and breaking rules, allows “foreign labs, including those subject to the control of the Chinese Communist Party, to close the competitive advantage that export controls are designed to preserve through other means,” it claims.

    Among the three China-based companies mentioned, Shanghai-based MiniMax, creator of the viral character chat app Talkie, offended Anthropic the most with the scale of its distillation effort: over 13 million alleged exchanges. That’s compared to Moonshot with over 3.4 million, and the most famous company named in the post, DeepSeek, with only an estimated 150,000.

    OpenAI, Anthropic’s main competitor, is also mad about distillation from at least one Chinese AI company, having sent a memo to the House of Representatives earlier this month, accusing DeepSeek of “ongoing efforts to free-ride on the capabilities developed by OpenAI and other U.S. frontier labs.”

    DeepSeek is expected to release its latest flagship model, DeepSeek V4 any day now, and CNBC has warned that this release could cause chaos on Wall Street, at a time when there’s already enough AI-related chaos on Wall Street to go around.

    [ad_2]

    Mike Pearl

    Source link

  • With AI, investor loyalty is (almost) dead: At least a dozen OpenAI VCs now also back Anthropic  | TechCrunch

    [ad_1]

    With OpenAI on the verge of finalizing a new $100 billion round, and Anthropic just closing its own monster $30 billion raise, one thing is clear: The concept of investor “loyalty” is only hanging on by a thread. 

    At least a dozen direct investors in OpenAI were announced as backers in Anthropic’s $30 billion raise earlier this month, including Founders Fund, Iconiq, Insight Partners, and Sequoia Capital. 

    Some dual investments are understandable if they come from the hedge fund or asset manager worlds, where their focus is still largely investing in public stocks (competitors or not). These include D1, Fidelity, and TPG.  

    One of these was a bit shocking. Affiliated funds of BlackRock joined in Anthropic’s $30 billion raise even though BlackRock’s senior managing director and board member Adebayo Ogunlesi is also on OpenAI’s board of directors. 

    In that world, it’s true that if various BlackRock funds get a chance to own OpenAI stock, they are likely to take it, never mind the personal association of a member of their senior leadership. (BlackRock runs every type of fund, including mutuals, closed-ends, and ETFs). And we all know the history of OpenAI and Microsoft’s relationship and why Microsoft is hedging its bets. Ditto for Nvidia. 

    But venture capital funds have — until now — operated differently.

    VCs market themselves as “founder friendly” and “helpful,” the idea being that when a VC firm buys a chunk of a startup’s company, the investor will help that startup be successful, particularly against its major rivals. If you are an owner of both OpenAI and Anthropic, who does your loyalty belong to, besides your own investors?  

    Techcrunch event

    Boston, MA
    |
    June 9, 2026

    Additionally, startups are private companies. They typically share confidential information with their direct investors on their business status — data that isn’t disclosed publicly the way it is with public companies. In many cases, the VCs also take board seats, which carries another level of fiduciary responsibility to their portfolio companies. 

    What makes this particular case even more interesting is that Sam Altman comes from the world of venture capital, as a former president of Y Combinator. He knows the drill. In 2024, he reportedly gave his investors a list of OpenAI’s rivals that he didn’t want them to back. It largely included companies launched by folks who left OpenAI, including Anthropic, xAI, and Safe Superintelligence. 

    Altman later denied that he told OpenAI investors they would be barred from future rounds if they backed his list of perceived rivals. Altman did admit that he said if they “made non-passive investments,” they would no longer receive OpenAI’s confidential business information, according to documents in the lawsuit between Elon Musk and OpenAI, Business Insider reported

    AI is also breaking the mold because of the record-breaking amounts of money that the largest AI labs are raising as they experience never-before-seen growth (and never-before-seen data center needs). At some point, when the hat is being passed around, the needs are so great and the possibilities of returns are so large, who can be expected to say no? 

    It turns out that not all venture investors have yet slid down the slippery slope. Andreessen Horowitz backs OpenAI but not (yet) Anthropic. Menlo Ventures backs Anthropic but not (yet) OpenAI, for instance.

    In fact, in our admittedly not exhaustive research, we found a dozen investors that appear to only have direct investments in one of these companies, not both. 

    Others include Bessemer Venture Partners, General Catalyst, and Greenoaks. (Note: We originally asked Claude to give us the list of dual investors. It got almost as many entries wrong as it got right, so all this for a very cool tech whose work sometimes remains less trustworthy than an intern’s.)

    Still, as we previously reported, the fact that this longstanding rule has been tossed by some of the most respected firms in the Valley, like Sequoia, is notable. One investor we reached out to simply shrugged and said that as long as the firm doesn’t have a board seat, no one sees the harm in it anymore.  

    Still, conflict-of-interest policies should now become another thing that founders ask about before signing that term sheet, no matter who it’s from. 

    [ad_2]

    Julie Bort

    Source link

  • Anthropic accuses three Chinese AI labs of abusing Claude to improve their own models

    [ad_1]

    Anthropic is issuing a call to action against AI “distillation attacks,” after accusing three AI companies of misusing its Claude chatbot. On its website, Anthropic claimed that DeepSeek, Moonshot and MiniMax have been conducting “industrial-scale campaigns…to illicitly extract Claude’s capabilities to improve their own models.”

    Distillation in the AI world refers to when less capable models lean on the responses of more powerful ones to train themselves. While distillation isn’t a bad thing across the board, Anthropic said that these types of attacks can be used in a more nefarious way. According to Anthropic, these three Chinese AI firms were responsible for more than “16 million exchanges with Claude through approximately 24,000 fraudulent accounts.” From Anthropic’s perspective, these competing companies were using Claude as a shortcut to develop more advanced AI models, which could also lead to circumventing certain safeguards.

    Anthropic said in its post that it was able to link each of these distilling attack campaigns to the specific companies with “high confidence” thanks to IP address correlation, metadata requests and infrastructure indicators, along with corroborating with others in the AI industry who have noticed similar behaviors.

    Early last year, OpenAI made similar claims of rival firms distilling its models and banned suspected accounts in response. As for Anthropic, the company behind Claude said it would upgrade its system to make distillation attacks harder to do and easier to identify. While Anthropic is pointing fingers at these other firms, it’s also facing a lawsuit from music publishers who accused the AI company of using illegal copies of songs to train its Claude chatbot.

    [ad_2]

    Jackson Chen

    Source link

  • Sam Altman: Know What Else Used a Lot of Energy? Human Civilization

    [ad_1]

    At last week’s India AI Impact Summit in New Delhi, industry leaders convened to discuss the future of artificial intelligence and how best to squeeze it into parts of your life you haven’t even considered. Notably absent was Bill Gates, who dropped out hours before his scheduled keynote over the ongoing scrutiny about his presence in the Epstein Files (though he continues to deny any wrongdoing). While the convention was reportedly a bit chaotic, what with the protests and all, the luminaries from around the tech world present nonetheless kept things upbeat and optimistic, declaring “full steam ahead” on the technological hype train carrying our species and planet off a cliff.

    Also in attendance was OpenAI’s Sam Altman, who earned numerous headlines over the course of the event for his words and antics. His buzz blitzkrieg started on Thursday at a seemingly easy photo-opp layup with Indian Prime Minister Narendra Modi and other AI executives all raising their joined hands in a celebratory display of industry-wide solidarity. Altman and the former colleague and present CEO of Anthropic to his left, Dario Amodei, notably refused to complete the chain and hold each other’s hands, making for an all-too-poignant moment. Altman would continue to make news throughout the summit for his comments on the industry’s “urgent” need for global regulation and his sneaking suspicion that companies might actually be using AI as a scapegoat to whitewash their layoffs.

    Ever the yapper, Altman has bagged yet another round of earned media for an interview with The Indian Express’ Anant Goenka, during which he posited some controversial rebuttals to concerns about AI’s environmental impact.

    Altman started off by saying the claims about ChatGPT consuming “‘17 gallons of water for each query’ or whatever,” are “completely untrue, totally insane, no connection to reality,” before qualifying that, OK, maybe it was a valid concern when his company “used to do evaporative cooling in data centers.”

    He went on to say that there is “fair” concern about the amount of energy data centers eat to crank out the most soulless slop you’ve ever seen, but suggested the onus of responsibility for dealing with AI’s ravenous appetite falls to the energy sector itself, which Altman feels needs to “move towards nuclear or wind and solar very quickly.”

    Altman then stunned the crowd and firmly re-entered the discourse with a mind-blowing truth bomb for those who still felt AI was consuming too much energy.

    “It also takes a lot of energy to train a human,” Altman rejoined euphorically. “It takes like 20 years of life, and all the food you eat before that time, before you get smart. And not only that, it took like the very widespread evolution of the hundred billion people that have ever lived and learned not to get eaten by predators and learned how to figure out science and whatever to produce you, and then you took whatever you took.”

    It is true that every person and the sum total of human civilization have consumed a sizable amount of energy (and water) to get to where we are today. While the value comparison of a nascent tech industry and its models to the entirety of civilization and human beings may have elicited adulation at the summit, Altman got an icier reception from the internet. Social media quickly took to roasting the remarks as “dystopian” and “deeply antisocial and antihuman.”

    Perhaps further illuminating the backlash, Altman’s energy comments butt up against the frustrating lack of transparency within the industry our collective futures now hinge upon. There are currently no regulations in place requiring data centers to disclose their water and energy consumption. Furthermore, center employees and business partners are typically muzzled by nondisclosure agreements. This has made reporting and research on the true expenditure levels a tricky figure to pin down.

    At least we’ve got Sam to keep us informed while waiting for some clarity about what’s actually going on and being used in those centers.

    [ad_2]

    Justin Caffier

    Source link

  • The OpenAI mafia: 18 startups founded by alumni | TechCrunch

    [ad_1]

    Move over, PayPal mafia: There’s a new tech mafia in Silicon Valley. As the startup behind ChatGPT, OpenAI is arguably the biggest AI player in town. The company is reportedly now in talks to finalize a $100 billion deal, valuing the company at more than $850 billion.  

    Many employees have come and gone since the company first launched a decade ago, and some have launched startups of their own. Among these, some have become top rivals (like Anthropic), while others, just on investor interest alone, have managed to raise billions without even launching a product (see, Thinking Machine Labs).  

    In January, Aliisa Rosenthal, OpenAI’s first sales leader, spoke a little bit about this growing network. She, like the other OpenAI alums who did not become founders, decided to become an investor and said she was going to tap into the ex-OpenAI founder network to look for deal flow. We know Peter Deng, OpenAI’s former head of consumer products (and now general partner at Felicis) already has.  

    Below is a roundup of the major startups founded by OpenAI alumni, in alphabetical order. And we are certain this list will grow over time. 

    David Luan — Adept AI Labs 

    David Luan was OpenAI’s engineering VP until he left in 2020. After a stint at Google, in 2021 he co-founded Adept AI Labs, a startup that builds AI tools for employees. The startup last raised $350 million at a valuation north of $1 billion in 2023, but Luan left in late 2024 to oversee Amazon’s AI agents lab after Amazon hired Adept’s founders.

    Dario Amodei, Daniela Amodei, and John Schulman — Anthropic

    Siblings Dario and Daniela Amodei left OpenAI in 2021 to form their own startup, San Francisco-based Anthropic, that has long touted a focus on AI safety. OpenAI co-founder John Schulman joined Anthropic in 2024, pledging to build a “safe AGI.” The company has since become OpenAI’s biggest rival and just raised a $30 billion Series G, nabbing a $380 billion valuation in the process. IPO rumors are also swirling, as the company reportedly prepares for a public listing that could come sometime this year. (OpenAI is also allegedly preparing for an IPO this year and is maybe even trying to beat Anthropic to the public market.) 

    Rhythm Garg, Linden Li, and Yash Patil — Applied Compute  

    Three ex-OpenAI staffers (Rhythm Garg, Linden Li, and Yash Patil) have reportedly raised $20 million for a startup called Applied Compute, as reported by Upstart Media. All three of them worked as technical staff at OpenAI for more than a year before leaving last May to launch the startup, per their LinkedIns. The startup helps enterprises train and deploy custom AI agents. Benchmark led the round, valuing the 10-month-old company at $100 million, Upstart Media reported. 

    Techcrunch event

    Boston, MA
    |
    June 9, 2026

    Pieter Abbeel, Peter Chen, and Rocky Duan — Covariant

    The trio all worked at OpenAI in 2016 and 2017 as research scientists before founding Covariant, a Berkeley, California-based startup that builds foundation AI models for robots. In 2024, Amazon hired all three of the Covariant founders and about a quarter of its staff. The quasi-acquisition was viewed by some as part of a broader trend of Big Tech attempting to avoid antitrust scrutiny. 

    Tim Shi — Cresta 

    Tim Shi was an early member of OpenAI’s team, where he focused on building safe artificial general intelligence (AGI), according to his LinkedIn profile. He worked at OpenAI for a year in 2017 but left to found Cresta, a San Francisco-based AI contact center startup that has raised over $270 million from VCs like Sequoia Capital, Andreessen Horowitz, and others, according to a press release.

    Jonas Schneider — Daedalus

    Jonas Schneider led OpenAI’s software engineering for robotics team but left in 2019 to co-found Daedalus, which builds advanced factories for precision components. The San Francisco-based startup raised a $21 million Series A last year with backing from Khosla Ventures, among others.

    Andrej Karpathy — Eureka Labs

    Computer vision expert Andrej Karpathy was a founding member and research scientist at OpenAI, leaving the startup to join Tesla in 2017 to lead its autopilot program. Karpathy is also well-known for his YouTube videos explaining core AI concepts. He left Tesla in 2024 to found his own education technology startup, Eureka Labs, a San Francisco-based startup that is building AI teaching assistants.

    Margaret Jennings — Kindo

    Margaret Jennings worked at OpenAI in 2022 and 2023 until she left to co-found Kindo, which markets itself as an AI chatbot for enterprises. Kindo has raised over $27 million in funding, last raising a $20.6 million Series A in 2024. Jennings left Kindo in 2024 to head product and research at French AI startup Mistral, according to her LinkedIn profile.

    Maddie Hall — Living Carbon

    Maddie Hall worked on “special projects” at OpenAI but left in 2019 to co-found Living Carbon, a San Francisco-based startup that aims to create engineered plants that can suck more carbon out of the sky to fight climate change. Living Carbon raised a $21 million Series A round in 2023, bringing its total funding until then to $36 million, according to a press release.

    Liam Fedus — Periodic Labs  

    Liam Fedus, OpenAI’s VP of post-training research, left the company in March 2025 to team up with his former Google Brain colleague, Ekin Dogus Cubuk, and launch Periodic Labs. The startup seeks to use AI scientists to find new materials, particularly new superconducting materials. It came out of stealth mode in September 2025, armed with a massive $300 million in seed-round funding with backers that included Jezz Bezos, Eric Schmidt, Felicis and Andreessen Horowitz. 

    Aravind Srinivas — Perplexity

    Aravind Srinivas worked as a research scientist at OpenAI for a year until 2022, when he left the company to co-found AI search engine Perplexity. His startup has attracted a string of high-profile investors like Jeff Bezos and Nvidia, although it’s also caused controversy over alleged unethical web scraping. Perplexity, which is based in San Francisco, last reported a raise of $200 million at a $20 billion valuation. 

    Jeff Arnold — Pilot

    Jeff Arnold worked as OpenAI’s head of operations for five months in 2016 before co-founding San Francisco-based accounting startup Pilot in 2017. Pilot, which focused initially on doing accounting for startups, last raised a $100 million Series C in 2021 at a $1.2 billion valuation and has attracted investors like Jeff Bezos. Arnold worked as Pilot’s COO until leaving in 2024 to launch a VC fund.

    Shariq Hashme — Prosper Robotics

    Shariq Hashme worked for OpenAI for 9 months in 2017 on a bot that could play the popular video game Dota, per his LinkedIn profile. After a few years at data-labeling startup Scale AI, he co-founded London-based Prosper Robotics in 2021. The startup says it’s working on a robot butler for people’s homes, a hot trend in robotics that other players like Norway’s 1X and Texas-based Apptronik are also working on.

    Ilya Sutskever — Safe Superintelligence 

    OpenAI co-founder and chief scientist Ilya Sutskever left OpenAI in May 2024 after he was reportedly part of a failed effort to replace CEO Sam Altman. Shortly afterward, he co-founded Safe Superintelligence, or SSI, with “one goal and one product: a safe superintelligence,” he says. Details about what exactly the startup is up to are scant: It has no product and no revenue yet. But investors are clamoring for a piece anyway, and it’s been able to raise $2 billion, with its latest valuation reportedly rising to $32 billion this month. SSI is based in Palo Alto, California, and Tel Aviv, Israel.

    Emmett Shear — Stem AI

    Emmett Shear is the former CEO of Twitch who was OpenAI’s interim CEO in November 2023 for a few days before Sam Altman rejoined the company. Shear launched an AI company, StemAI, in 2024 (though it seems to have since rebranded as Softmax). The company, which appears to be a research company, has attracted funding from Andreessen Horowitz.

    Mira Murati — Thinking Machines Lab 

    Mira Murati, OpenAI’s CTO, left OpenAI to found her own company, Thinking Machines Lab, which emerged from stealth in February 2025. It said at the time (rather vaguely) that it will build AI that’s more “customizable” and “capable.” The San Francisco AI startup, now valued at $12 billion, announced its first product late last year: an API that fine-tunes language models. It recently made headlines when two of its co-founders announced earlier this year that they would return to OpenAI. 

    Kyle Kosic — xAI

    Kyle Kosic left OpenAI in 2023 to become a co-founder and infrastructure lead of xAI, Elon Musk’s AI startup that offers a rival chatbot, Grok. In 2024, however, he hopped back to OpenAI, where he remains. Meanwhile, xAI (which acquired Musk’s social media site X) was purchased by Musk’s SpaceX, giving the coalesce company a valuation of $1.25 trillion. It is looking to go public sometime in June for what could be a historic listing. 

    Angela Jiang — Worktrace AI

    Angela Jiang left OpenAI in 2024, after working as a product manager and on the public policy team. In April 2025, she quietly launched Worktrace, which uses AI to help enterprises make business operations more efficient. It observes employee work patterns and automates workflow, according to the company’s website. The business is backed by Mura Murati, OpenAI’s former CTO, who went on to launch Thinking Labs. It is also backed by OpenAI’s startup fund, in addition to a slew of other OpenAI names, like its chief strategy officer, Jason Kwon. 

    Stealth Startups

    In addition to these startups, a number of other former OpenAI employees have founded startups that are still in stealth mode, according to various updates TechCrunch found on LinkedIn. For instance, it seems that former OpenAI researcher Danilo Hellermark has been working on a generative AI stealth startup for the past few years. He officially left OpenAI at the beginning of 2023. There’s also one apparently in the works from Lucas Negritto, who worked on OpenAI’s technical team and left the company in 2023 after three years. Since then, he’s founded one startup and has been working on another since August 2025, according to his LinkedIn. 

    [ad_2]

    Charles Rollet, Dominic-Madori Davis

    Source link

  • New Research Shows AI Agents Are Running Wild Online, With Few Guardrails in Place

    [ad_1]

    In the last year, AI agents have become all the rage. OpenAI, Google, and Anthropic all launched public-facing agents designed to take on multi-step tasks handed to them by humans. In the last month, an open-source AI agent called OpenClaw took the web by storm thanks to its impressive autonomous capabilities (and major security concerns). But we don’t really have a sense of the scale of AI agent operations, and whether all the talk is matched by actual deployment. The MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) set out to fix that with its recently published 2025 AI Agent Index, which provides our first real look at the scale and operations of AI agents in the wild.

    Researchers found that interest in AI agents has undoubtedly skyrocketed in the last year or so. Research papers mentioning “AI Agent” or “Agentic AI” in 2025 more than doubled the total from 2020 to 2024 combined, and a McKinsey survey found that 62% of companies reported that their organizations were at least experimenting with AI agents.

    With all that interest, the researchers focused on 30 prominent AI agents across three separate categories: chat-based options like ChatGPT Agent and Claude Code; browser-based bots like Perplexity Comet and ChatGPT Atlas; and enterprise options like Microsoft 365 Copilot and ServiceNow Agent. While the researchers didn’t provide exact figures on just how many AI agents are deployed across the web, they did offer a considerable amount of insight into how they are operating, which is largely without a safety net.

    Just half of the 30 AI agents that got put under the magnifying glass by MIT CSAIL include published safety or trust frameworks, like Anthropic’s Responsible Scaling Policy, OpenAI’s Preparedness Framework, or Microsoft’s Responsible AI Standard. One in three agents has no safety framework documentation whatsoever, and five out of 30 have no compliance standards. That is troubling when you consider that 13 of 30 systems reviewed exhibit frontier levels of agency, meaning they can operate largely without human oversight across extended task sequences. Browser agents in particular tend to operate with significantly higher autonomy. This would include things like Google’s recently launched AI “Autobrowse,” which can complete multi-step tasks by navigating different websites and making use of user information to do things like log into sites on your behalf.

    One of the troubles with letting agents browse freely and with few guardrails is that their activity is nearly indistinguishable from human behavior, and they do little to dispel any confusion that might occur. The researchers found that 21 out of the 30 agents provide no disclosure to end users or third parties that they are AI agents and not human users. This results in most AI agent activity being mistaken for human traffic. MIT found that just seven agents published stable User-Agent (UA) strings and IP address ranges for verification. Nearly as many explicitly use Chrome-like UA strings and residential/local IP contexts to make their traffic requests appear more human, making it next to impossible for a website to distinguish between authentic traffic and bot behavior.

    For some AI agents, that’s actually a marketable feature. The researchers found that BrowserUse, an open-source AI agent, sells itself to users by claiming to bypass anti-bot systems to browse “like a human.” More than half of all the bots tested provide no specific documentation about how they handle robots.txt files (text files that are placed in a website’s root directory to instruct web crawlers on how they can interact with the site), CAPTCHAs that are meant to authenticate human traffic, or site APIs. Perplexity has even made the case that agents acting on behalf of users shouldn’t be subject to scraping restrictions since they function “just like a human assistant.”

    The fact that these agents are out in the wild without much protection in place means there is a real threat of exploits. There is a lack of standardization for safety evaluations and disclosures, leaving many agents potentially vulnerable to attacks like prompt injections, in which an AI agent picks up on a hidden malicious prompt that can make it break its safety protocols. Per MIT, nine of 30 agents have no documentation of guardrails against potentially harmful actions. Nearly all of the agents fail to disclose internal safety testing results, and 23 of the 30 offer no third-party testing information on safety.

    Just four agents—ChatGPT Agent, OpenAI Codex, Claude Code, and Gemini 2.5—provided agent-specific system cards, meaning the safety evaluations were tailored to how the agent actually operates, not just the underlying model. But frontier labs like OpenAI and Google offer more documentation on “existential and behavioral alignment risks,” they lack details on the type of security vulnerabilities that may arise during day-to-day activities—a habit that the researchers refer to as “safety washing,” which they describe as publishing high-level safety and ethics frameworks while only selectively disclosing the empirical evidence required to rigorously assess risk.

    There has at least been some momentum toward addressing the concerns raised by MIT’s researchers. Back in December, OpenAI and Anthropic (among others) joined forces, announcing a foundation to create a development standard for AI agents. But the AI Agent Index shows just how wide the transparency gap is when it comes to agentic AI operation. AI agents are flooding the web and workplace, functioning with a shocking amount of autonomy and minimal oversight. There’s little to indicate at the moment that safety will catch up to scale any time soon.

    [ad_2]

    AJ Dellinger

    Source link

  • ‘I’m deeply uncomfortable’: Anthropic CEO warns that a cadre of AI leaders, including himself, should not be in charge of the technology’s future | Fortune

    [ad_1]

    Anthropic CEO Dario Amodei doesn’t think he should be the one calling the shots on the guardrails surrounding AI.

    In an interview with Anderson Cooper on CBS News’ 60 Minutes that aired in November 2025, the CEO said AI should be more heavily regulated, with fewer decisions about the future of the technology left to just the heads of Big Tech companies.

    “I think I’m deeply uncomfortable with these decisions being made by a few companies, by a few people,” Amodei said. “And this is one reason why I’ve always advocated for responsible and thoughtful regulation of the technology.”

    “Who elected you and Sam Altman?” Cooper asked.

    “No one. Honestly, no one,” Amodei replied.

    Anthropic has adopted the philosophy of being transparent about the limitations—and dangers—of AI as it continues to develop, he added. Ahead of the interview’s publication, the company said it thwarted “the first documented case of a large-scale AI cyberattack executed without substantial human intervention.” 

    Anthropic said last week it donated $20 million to Public First Action, a super PAC focused on AI safety and regulation—and one that directly opposed super PACs backed by rival OpenAI’s investors.

    “AI safety continues to be the highest-level focus,” Amodei told Fortune in a January cover story. “Businesses value trust and reliability,” he says.

    There are no federal regulations outlining any prohibitions on AI or surrounding the safety of the technology. While all 50 states have introduced AI-related legislation this year and 38 have adopted or enacted transparency and safety measures, tech industry experts have urged AI companies to approach cybersecurity with a sense of urgency.

    Earlier last year, cybersecurity expert and Mandiant CEO Kevin Mandia warned of the first AI-agent cybersecurity attack happening in the next 12-18 months—meaning Anthropic’s disclosure about the thwarted attack was months ahead of Mandia’s predicted schedule.

    Amodei has outlined short-, medium-, and long-term risks associated with unrestricted AI: The technology will first present bias and misinformation, as it does now. Next, it will generate harmful information using enhanced knowledge of science and engineering, before finally presenting an existential threat by removing human agency, potentially becoming too autonomous and locking humans out of systems.

    The concerns mirror those of “godfather of AI” Geoffrey Hinton, who has warned AI will have the ability to outsmart and control humans, perhaps in the next decade. 

    Greater AI scrutiny and safeguards were at the foundation of Anthropic’s 2021 founding. Amodei was previously the vice president of research at Sam Altman’s OpenAI. He left the company over differences in opinion on AI safety concerns. (So far, Amodei’s efforts to compete with Altman have appeared effective: Anthropic said this month it is now valued at $380 billion. OpenAI is valued at an estimated $500 billion.)

    “There was a group of us within OpenAI, that in the wake of making GPT-2 and GPT-3, had a kind of very strong focus belief in two things,” Amodei told Fortune in 2023. “One was the idea that if you pour more compute into these models, they’ll get better and better and that there’s almost no end to this… And the second was the idea that you needed something in addition to just scaling the models up, which is alignment or safety.”

    Anthropic’s transparency efforts

    As Anthropic continues to expand its data center investments, it has published some of its efforts in addressing the shortcomings and threats of AI. In a May 2025 safety report, Anthropic reported some versions of its Opus model threatened blackmail, such as revealing an engineer was having an affair, to avoid shutting down. The company also said the AI model complied with dangerous requests if given harmful prompts like how to plan a terrorist attack, which it said it has since fixed.

    Last November, the company said in a blog post that its chatbot Claude scored a 94% political even-handedness” rating, outperforming or matching competitors on neutrality.

    In addition to Anthropic’s own research efforts to combat corruption of the technology, Amodei has called for greater legislative efforts to address the risks of AI. In a New York Times op-ed in June 2025, he criticized the Senate’s decision to include a provision in President Donald Trump’s policy bill that would put a 10-year moratorium on states regulating AI.

    “AI is advancing too head-spinningly fast,” Amodei said. “I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off.”

    Criticisms of Anthropic

    Anthropic’s practice of calling out its own lapses and efforts to address them has drawn criticism. In response to Anthropic sounding the alarm on the AI-powered cybersecurity attack, Meta’s chief AI scientist, Yann LeCun, said the warning was a way to manipulate legislators into limiting the use of open-source models. 

    “You’re being played by people who want regulatory capture,” LeCun said in an X post in response to Connecticut Sen. Chris Murphy’s post expressing concern about the attack. “They are scaring everyone with dubious studies so that open source models are regulated out of existence.” 

    Others have said Anthropic’s strategy is one of “safety theater” that amounts to good branding, but no promises about actually implementing safeguards on technology.

    Even some of Anthropic’s own personnel appear to have doubts about a tech company’s ability to regulate itself. Earlier last week, Anthropic AI safety researcher Mrinank Sharma announced he resigned from the company, saying “the world is in peril.”

    “Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions,” Sharma wrote in his resignation letter. “I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too.”

    Anthropic did not immediately respond to Fortune’s request for comment.

    Amodei denied to Cooper that Anthropic was taking part in “safety theater,” but admitted in an episode of the Dwarkesh Podcast last week that the company sometimes struggles to balance safety and profits.

    “We’re under an incredible amount of commercial pressure and make it even harder for ourselves because we have all this safety stuff we do that I think we do more than other companies,” he said.

    A version of this story was published on Fortune.com on Nov. 17, 2025.

    More on AI regulation:

    [ad_2]

    Sasha Rogelberg

    Source link

  • Big AI Isn’t Waiting for the Backlash

    [ad_1]

    Photo-Illustration: Intelligencer; Photo: Getty Images

    Meta’s hard and early pivot into artificial intelligence hasn’t exactly gone as planned, with tens of billions of investment dollars sunk into middling models, departmental restructurings, and clashing visions. In technical terms, the company remains an AI also-ran. In another way, though, it’s emerging as an industry leader: It’s spending a ton of money on politics.

    Regarding regulation and national law, firms like Meta are, for now, in reasonably good shape. They have an administration that’s broadly deregulatory and specifically pro–AI industry and has mostly limited its threats of intervention to complaints about “wokeness” — a problem for a company like Anthropic, perhaps, but maybe less so for ones like Meta that preemptively ponied up and fell in line. Plenty of money will be spent by the AI industry on national politics, of course (OpenAI president Greg Brockman recently became a Trump PAC megadonor), but for now, AI firms are pushing further into state and local politics and Meta is spending a lot. According to the New York Times:

    Meta is preparing to spend $65 million this year to boost state politicians who are friendly to the artificial intelligence industry, beginning this week in Texas and Illinois, according to company representatives … Political operatives tied to A.I. interests have focused this election cycle on state capitols out of concern that states were developing a patchwork of laws that would stifle A.I. development.

    This, says the Times, is “the biggest election investment by Meta” so far and is focused, to start, on supporting AI-friendly Republicans in Texas and Democrats in Illinois. Meta isn’t alone here: A fleet of new PACs backed by other AI firms is funneling money into local and state elections across the country.

    What are these companies lobbying for, exactly? Their needs fit imperfectly into two categories. First, they want to fend off direct regulation of how AI products are built, used, and deployed. That includes avoiding “transparency” laws that often include risk audits, whistleblower protections, and frameworks for ensuring AI “safety,” in both the catastrophic and child-safety senses of the word. In this fight, AI firms have a useful ally in the federal government, which has been actively pressuring state lawmakers to drop the issue, most recently in Utah.

    Closer to the ground and a bit further from the national political discourse, for now, is the matter of data centers. Much of the money AI companies spend on AI — raised from investors, their own balance sheets, and, more recently, bond sales — goes into buying GPUs and leasing or building structures in which to put them. These structures then need huge amounts of power coming from either the grid or newly constructed generators of one type or another (if you’re xAI, this means standing up gas turbines without permits; if you’re Meta, this may look like partnering directly with a nuclear power plant). In addition to the staggering power needs, data centers use a lot of water. And despite their eye-popping costs to build and run, they barely create any jobs. For the sorts of communities being approached with these projects — places that may be persuaded to accept the mixed prospect of hosting an Amazon warehouse or, say, a massive new ICE detention center — AI data centers are uniquely unappealing. As a result, they encounter local resistance from across the political spectrum. According to the Financial Times:

    Over the past year, the White House has courted tech billionaires and gone out of its way to protect the AI industry’s agenda, fast-tracking permits for data centre construction and approving the sales of advanced chips to China while cracking down on states’ attempts to regulate chatbots … But across the US, citizens, clergy and elected officials in conservative communities are leading a grassroots rebellion against the rapid rollout of the technology.

    Data centers offer an almost perfectly sympathetic NIMBY cause. They’re a drain on local resources, straining infrastructure and driving up utility prices. They exist to support a technology about which people are fairly pessimistic across the political spectrum. They’re pitched as investments in an exciting future, but that future will unfold elsewhere while your town, now designated as an infrastructural non-place, is just stuck with a big jobless box that uses more power and water than everyone else combined.

    The surge in local lobbying isn’t about winning this argument — good luck with that! — so much as it’s about getting as much done as possible while the companies still can, buying support at the state level and breaking ground in as many municipalities as possible before data-center backlash becomes a universal condition of local politics in America. AI firms always talk about how they’re in a technological race with one another or against China in which every day counts. But they’re also in a race to take advantage of a brief domestic political moment during which they’re relatively unencumbered and haven’t yet been metabolized into American politics. At the national, state, and local levels, this may be as good as the AI industry will ever have it. And ahead of the midterms — not to mention the prospect of 2028 — it’s lobbying like it’s running out of time.


    See All



    [ad_2]

    John Herrman

    Source link

  • Humain CEO Tareq Amin Injects $3B Into Elon Musk’s xAI to Power Saudi A.I. Ambitions

    [ad_1]

    Humain CEO Tareq Amin’s $3 billion investment in xAI positions Saudi Arabia at the center of a rapidly shifting global A.I. power structure. Photo by Amal Alhasan/Getty Images for Fortune Media

    Tareq Amin, CEO of Saudi Arabia’s largest A.I. company, Humain, has been on a dealmaking blitz since taking the helm of the Kingdom’s national A.I. initiative last year. His latest move: a $3 billion investment in Elon Musk’s xAI. The investment was made during xAI’s $20 billion fundraising round in January, Humain announced today (Feb. 18). The raise came just weeks before xAI merged with Musk’s SpaceX earlier this month, as Musk consolidates his A.I., communications and space ambitions ahead of a widely anticipated IPO.

    Founded in 2025 by Crown Prince Mohammed Bin Salman and backed by Saudi Arabia’s massive sovereign wealth fund, the Public Investment Fund. Humain sits at the center of the Kingdom’s push to diversify its economy beyond oil. A core part of that mandate: building sovereign A.I. infrastructure at home.

    The xAI stake is the latest example of Humain’s ability to “deploy meaningful capital behind exceptional opportunities where long-term vision, technical excellence and execution converge,” said Amin in a statement. Amin, who previously led Aramco Digital and Japan’s Rakuten Mobile, has spent the past several months striking blockbuster partnerships with U.S. tech heavyweights, including Nvidia, AMD, Cisco, Amazon Web Services and Groq (not xAI’s chatbot Grok).

    Humain did not respond to requests for comment from Observer.

    Most of the partnerships are focused on expanding Saudi Arabia’s data center footprint and compute capacity. A joint venture with AMD and Cisco, for example, aims to build domestic A.I. infrastructure capable of powering up to one gigawatt.

    xAI’s relationship with Humain dates back to November, when the companies unveiled plans for a 500-megawatt data center in Saudi Arabia. The facility—xAI’s first outside the U.S.—will run on Nvidia chips and deploy the company’s Grok models across the Kingdom.

    Humain’s deepening ties to xAI underscore a broader realignment in global A.I. alliances, with Gulf states emerging as critical capital providers and infrastructure hubs for American developers. In November, Humain and the United Arab Emirates’ A.I. company, G42, received U.S. approval to acquire up to 35,000 advanced A.I. chips each, marking a sharp reversal from earlier semiconductor export restrictions.

    Other regional players are also forging closer links with U.S. firms. G42 secured a $1.5 billion investment from Microsoft and is set to help develop Stargate UAE, an A.I. compute cluster in Abu Dhabi to be operated by OpenAI and Oracle.

    The Emirati-backed MGX has participated in large fundraising rounds for xAI, OpenAI and Anthropic, while Qatar’s sovereign wealth fund earlier this week joined Anthropic’s new $380 billion Series G financing—further cementing the Middle East’s growing influence over the future of A.I.

    Humain CEO Tareq Amin Injects $3B Into Elon Musk’s xAI to Power Saudi A.I. Ambitions

    [ad_2]

    Alexandra Tremayne-Pengelly

    Source link

  • AI researchers quit, warning that ‘world is in peril’ – Tech Digest

    [ad_1]

    Share


    A wave of high-profile resignations has hit the AI industry, with leading researchers abandoning prestigious roles and issuing dire warnings about the technology’s direction.

    Mrinank Sharma, a senior safety leader at Anthropic, has quit his position to move back to the UK and pursue a degree in poetry, claiming the “world is in peril.”

    Sharma, who led the Safeguards Research Team at the San Francisco-based firm, announced his departure in a cryptic letter shared on social media.

    He stated that humanity is approaching a critical threshold where “our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences.”

    The researcher’s concerns extend beyond just AI, citing “interconnected crises” involving bioweapons and a broader societal decline. During his tenure, Sharma’s work focused on preventing AI-assisted bioterrorism and investigating how digital assistants might “make us less human.”

    He admitted that even at a safety-focused firm like Anthropic, employees “constantly face pressures to set aside what matters most.”

    Values v commercial pressure

    The exodus is not limited to Anthropic. At rival firm OpenAI, researcher Zoe Hitzig also resigned this week, specifically citing the company’s decision to introduce advertising into ChatGPT. Hitzig warned that the chatbot has amassed an unprecedented archive of “human candor,” including users’ medical fears and religious beliefs.

    “Advertising built on that archive creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent,” Hitzig wrote in a New York Times essay. She argued that the drive for engagement and revenue creates “strong incentives to override” safety rules, mirroring the early mistakes of social media giants.

    The trend of “technical exits” suggests a growing rift between the developers of AI and the corporate structures that fund them. For Sharma, the solution is a radical retreat from the industry entirely.

    He stated his intention to become “invisible” for a time, seeking “poetic truth” alongside scientific truth as a necessary way of navigating the current global moment.

    For latest tech stories go to TechDigest.tv


    Discover more from Tech Digest

    Subscribe to get the latest posts sent to your email.

    [ad_2]

    Chris Price

    Source link

  • Instagram boss defends excessive use, AI researchers ring alarm bells – Tech Digest

    [ad_1]

    Share


    The head of Instagram
    has defended his platform against claims it caused mental health damage to minors, arguing in a California court that even seemingly excessive use of social media does not equal an addiction. Adam Mosseri, who has led Instagram for eight years, testified in the landmark trial that began this week in Los Angeles, making him the first high-profile executive to appear. It is expected to last six weeks, and serve as a test of legal arguments aimed at holding tech firms accountable for impacts on young people. BBC 

    The world is in peril,” warned the former head of Anthropic’s Safeguards Research team as he headed for the exit. A researcher for OpenAI, similarly on the way out, said that the technology has “a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.” They’re part of a wave of artificial intelligence researchers and executives who aren’t just leaving their employers — they’re loudly ringing the alarm bell on the way out. CNN

    Decades after the first demonstration of brain computer interfaces, we have reached a “tipping point” in creating the first reliable devices that can read our thoughts, according to the man who pioneered the technology. Professor John Donoghue, who developed BrainGate – the first “brain chip” – at Brown University in Rhode Island, has just shared in the Queen Elizabeth Prize, the world’s preeminent engineering award, in recognition of his work to “unlock” the minds of people with paralysis. Sky News 

    The person behind an anonymous social media account that posts AI videos of UK politicians has been identified as a man who has spent time in prison for multiple hate crimes directed towards Jewish people. Joshua Bonehill-Paine was identified by Channel 4 News as the owner of Crewkerne Gazette, a satirical X account that created AI videos depicting politicians such as Keir Starmer, Angela Rayner and Andy Burnham apparently singing popular songs from artists such as Amy Winehouse, Barry Manilow and Elton John with altered, politically themed lyrics. Guardian

    Ring’s new Search Party feature has once again drawn backlash for the company. A 30-second ad that aired during Sunday’s Super Bowl showed Ring cameras “surveilling” neighborhoods to locate a lost dog. In the current political climate, a prime-time ad celebrating neighborhood surveillance struck a nerve. People voiced concerns across social media that the AI-powered technology Ring uses to identify dogs could soon be used to search for humans. Combined with Ring’s recent rollout of its new facial recognition capability, it feels like a short leap for a pet-finding feature to be turned into a tool for state surveillance. The Verge 

    Image: Foundry

    Apple has just released iOS 26.3 to iPhones everywhere, and as expected, it’s a relatively small update. With iOS 26.4 already on the horizon, and rumored to introduce substantial changes, including a Gemini-powered Siri, this release is more about incremental improvements that set the stage for big upcoming changes. That said, iOS 26.3 still introduces a few notable additions, especially when it comes to device switching, privacy, and new capabilities for users in the EU. Read on as we break down everything new in iOS 26.3. MacWorld 


    For latest tech stories go to TechDigest.tv


    Discover more from Tech Digest

    Subscribe to get the latest posts sent to your email.

    [ad_2]

    Chris Price

    Source link

  • The kids ‘picked last in gym class’ gear up for Super Bowl | TechCrunch

    [ad_1]

    The Super Bowl is happening in Silicon Valley this Sunday, and the Patriots-Seahawks game at Levi’s Stadium is going to be packed with tech money. YouTube CEO Neal Mohan is expected to be there. Apple’s Tim Cook, too. (He has become a Super Bowl fixture since Apple Music began sponsoring the halftime show several years ago.)

    Longtime VC Venky Ganesan from Menlo Ventures gave the New York Times a quote about the whole thing, saying the Super Bowl in the Bay Area is “tech billionaires who got picked last in gym class paying $50,000 to pretend they’re friends with the guys who got picked first.” Added Ganesan, “And for the record, I, too, was picked last in gym class.”

    Ganesan could likely afford a $50,000 ticket if he needed one. Menlo went all-in on Anthropic, setting up a $100 million fund with the AI company in summer 2024 to invest in other AI startups. The firm has also joined numerous funding rounds for Anthropic itself, both through its flagship fund and various special purpose vehicles. (Anthropic is reportedly expected to close a $20 billion round of funding next week at a post-money valuation of $350 billion.)

    Tickets are expensive across the board, averaging almost $7,000 according to the Times (with some last-minute seats still available on StubHub for closer to $3,600, according to a quick glance at the ticket reseller site). Only a quarter go to the general public; the rest are distributed to NFL teams. Of all ticket buyers, the largest group (27%) is coming from Washington State for the Seahawks, who’ve won just one Super Bowl in franchise history compared with the Patriots’ six titles, all with Tom Brady at quarterback.

    Google, OpenAI, Anthropic, Amazon, and Meta are splashing out for competing ads about whose AI is best for customers, so maybe their respective CEOs will show up, too. Other than Amazon’s Andy Jassy, who reportedly splits his time between Seattle and Santa Monica, all of them have homes within an hour or so of Sunday’s game.

    This is just the third time the Bay Area has hosted the Super Bowl. The first time was in 1985 at Stanford Stadium, the original football stadium at Stanford University, where the 49ers beat the Dolphins. The second took place 10 years ago at Levi’s Stadium, when the Broncos beat the Panthers.

    Techcrunch event

    Boston, MA
    |
    June 23, 2026

    [ad_2]

    Connie Loizos

    Source link

  • Bitcoin falls to lowest level since Trump took office, Apple scales back AI health coach – Tech Digest

    [ad_1]

    Share



    The price of Bitcoin fell to its lowest level
    in 16 months despite US president Donald Trump’s personal and public support for cryptocurrency. A single Bitcoin went as low as $60,000 (£44,000), its lowest level since September 2024, before rallying slightly. The drop followed months of surging Bitcoin prices, which saw the cryptocurrency hit an all-time high of $122,200 in October 2025. “Those who bet too big, borrowed too much or assumed prices only go up are now finding out the hard way what real market volatility and risk management look like,” Joshua Chu, co-chair of the Hong Kong Web3 Association told Reuters. BBC 

    TikTok could be forced into changes to make the app less addictive to users after the EU indicated the platform had breached the bloc’s digital safety rules. The EU’s executive arm said in a preliminary ruling that the popular app had infringed the Digital Services Act (DSA) due to its “addictive design”. The European Commission said TikTok, which has more than 1 billion users worldwide, had not adequately assessed how its design could harm the physical and mental wellbeing of users. The Guardian 


    Markets took a tumble this week as AI-company Anthropic released new add-ons to Claude that can perform a range of functions typically filled by software providers. Shares of software-as-a-service companies like Adobe, Intuit, and Salesforce declined sharply on fears that AI tools might chip away at their business. Legacy tech giants with large AI businesses like Microsoft, Amazon, and Google were also hit hard. Yahoo!

    Anthropic, one of the biggest and most influential tech companies in the world, is launching a new model: Claude Opus 4.6. Until now, this would mostly be big news for techies, where Anthropic is admired as the maker of Claude Code, the code-writing AI tool which many engineers say is taking over their work entirely. All of a sudden, however, the impact of these tools is being felt more widely, after a seemingly small release from Anthropic shook some sections of the stock market. Sky News 

    Apple is no longer launching an AI service that can “replicate” a doctor and act as a personal health coach, according to Bloomberg’s Mark Gurman. The company has reportedly scaled back the unannounced initiative in recent weeks, following a recent organization reshuffling wherein services chief Eddy Cue took over the health division. While Apple has never officially announced the AI health coach, it was reported last year that the company was working on developing the service that has been unofficially dubbed Health+. Engadget


    The second set-top-box with Freely on board
    , the Aero 4K TV Streamer comes from Manhattan, which has been making satellite and Freeview boxes and recorders for decades. And it’s instantly a serious contender – not just for Pleio, but Sky Stream and Virgin Media Stream, too. So what’s the catch – if there is one – and what do you get for a surprisingly low entrance fee? Well, a lot, it turns out. The Manhattan Aero is remarkably priced. Available from several UK retailers, including Currys (click here for the direct link), Amazon, and John Lewis, it will set you back a mere £69.99. T3.com

     


    For latest tech stories go to TechDigest.tv


    Discover more from Tech Digest

    Subscribe to get the latest posts sent to your email.

    [ad_2]

    Chris Price

    Source link