ReportWire

Tag: Anthropic PBC

  • Microsoft partners with Anthropic and Nvidia in cloud infrastructure deal

    Microsoft said Tuesday it is partnering with artificial intelligence company Anthropic and chipmaker Nvidia as part of an AI infrastructure deal that moves the software giant further away from its longtime alliance with OpenAI.

    Anthropic, maker of the chatbot Claude that competes with OpenAI’s ChatGPT, said it is committed to buying $30 billion in computing capacity from Microsoft’s Azure cloud computing platform.

    As part of the partnership, Nvidia will also invest up to $10 billion in Anthropic, and Microsoft will invest up to $5 billion in the San Francisco-based startup.

    The joint announcements by CEOs Dario Amodei of Anthropic, Satya Nadella of Microsoft, and Jensen Huang of Nvidia came just ahead of the opening of Microsoft’s annual Ignite developer conference.

    “This is all about deepening our commitment to bringing the best infrastructure, model choice and applications to our customers,” Nadella said on a video call with the other two executives, adding that it builds on the “critical” partnership Microsoft still has with OpenAI.

    Microsoft was, until earlier this year, the exclusive cloud provider for OpenAI and made the technology behind ChatGPT the foundation for its own AI assistant, Copilot. But the two companies moved farther apart and their business agreements were amended as OpenAI increasingly sought to secure its own cloud capacity through big deals with Oracle, SoftBank and other data center developers and chipmakers.

    Asked in September if OpenAI could do more with those new computing partnerships than it could with Microsoft, OpenAI CEO Sam Altman told The Associated Press his company was “severely limited for the value we can offer to people.”

    At the same time, Microsoft holds a roughly 27% stake in the new for-profit corporation that OpenAI, founded as a nonprofit, is forming to advance its commercial ambitions as the world’s most valuable startup.

    Anthropic, founded by ex-OpenAI leaders in 2021, said Claude will now be the “only frontier model” available to customers of the three biggest cloud computing providers: Amazon, which remains Anthropic’s primary cloud provider, and Google and Microsoft.

AI products like Claude, ChatGPT, Copilot and Google’s Gemini are reshaping how many people work but take huge amounts of energy and computing power to build and operate. Neither OpenAI nor Anthropic has yet reported turning a profit, amplifying concerns about an AI bubble if their products don’t meet investors’ high expectations and justify the expenditures. As part of the deal, Nvidia said Anthropic will have access to up to a gigawatt of capacity from its specialized AI chips.

    Huang said he’s “admired the work of Anthropic and Dario for a long time, and this is the first time we are going to deeply partner with Anthropic to accelerate Claude.”

    At Microsoft’s Ignite conference, a showcase of its latest AI technology which opened Tuesday in San Francisco, Anthropic’s chief product officer Mike Krieger highlighted the budding partnership during an on-stage appearance.

    “From the beginning, it has seemed there has been a lot of shared DNA between our companies,” said Krieger, who was also the co-founder of Instagram.

    ——

    AP Technology Writer Michael Liedtke in San Francisco contributed to this report.

  • Google unveils Gemini’s next generation, aiming to turn its search engine into a ‘thought partner’

    SAN FRANCISCO (AP) — Google is unleashing its Gemini 3 artificial intelligence model on its dominant search engine and other popular online services in the high-stakes battle to create technology that people can trust to enlighten them and manage tedious tasks.

    The next-generation model unveiled Tuesday comes nearly two years after Google took the wraps off its first iteration of the technology. Google designed Gemini in response to a competitive threat posed by OpenAI’s ChatGPT that came out in late 2022, triggering the biggest technological shift since Apple released the iPhone in 2007.

    Google’s latest AI features initially will be rolled out to Gemini Pro and Ultra subscribers in the United States before coming to a wider, global audience. Gemini 3’s advances include a new AI “thinking” feature within Google’s search engine that company executives believe will become an indispensable tool that will help make people more productive and creative.

    “We like to think this will help anyone bring any idea to life,” Koray Kavukcuoglu, a Google executive overseeing Gemini’s technology, told reporters.

    As AI models have become increasingly sophisticated, the advances have raised worries that the technology is more prone to behave in ways that jumble people’s feelings and thoughts while feeding them misleading information and fawning flattery. In some of the most egregious interactions, AI chatbots have faced accusations of becoming suicide coaches for emotionally vulnerable teenagers.

    The various problems have spurred a flurry of negligence lawsuits against the makers of AI chatbots, although none have targeted Gemini yet.

Google executives believe they have built in guardrails that will prevent Gemini 3 from hallucinating or being deployed for sinister purposes such as hacking into websites and computing devices.

Gemini 3’s responses are designed to be “smart, concise and direct, trading cliché and flattery for insight — telling you what you need to hear, not just what you want to hear. It acts as a true thought partner,” Kavukcuoglu and Demis Hassabis, CEO of Google’s DeepMind division, wrote in a blog post.

    Besides providing consumers with more AI tools, Gemini 3 is also likely to be scrutinized as a barometer that investors may use to get a better sense about whether the massive torrent of spending on the technology will pay off.

After starting the year expecting to spend $75 billion, Google’s corporate parent Alphabet recently raised its capital expenditure budget to between $91 billion and $93 billion, with most of the money earmarked for AI. Other Big Tech powerhouses such as Microsoft, Amazon and Facebook parent Meta Platforms are spending nearly as much — or even more — on their AI initiatives this year.

Investors so far have been mostly enthusiastic about the AI spending and the breakthroughs it has spawned, helping propel the values of Alphabet and its peers to new highs. Alphabet’s market value is now hovering around $3.4 trillion, having more than doubled since the initial version of Gemini came out in late 2023. Alphabet’s shares edged up slightly Tuesday after the Gemini 3 news came out.

    But the sky-high values also have amplified fears of a potential investment bubble that will eventually burst and drag down the entire stock market.

    For now, AI technology is speeding ahead.

    OpenAI released its fifth generation of the AI technology powering ChatGPT in August, around the same time the next version of Claude came out from Anthropic.

    Like Gemini, both ChatGPT and Claude are capable of responding rapidly to conversational questions involving complex topics — a skill that has turned them into the equivalent of “answer engines” that could lessen people’s dependence on Google search.

    Google quickly countered that threat by implanting Gemini’s technology into its search engine to begin creating detailed summaries called “AI Overviews” in 2023, and then introducing an even more conversational search tool called “AI mode” earlier this year.

    Those innovations have prompted Google to de-emphasize the rankings of relevant websites in its search results — a shift that online publishers have complained is diminishing the visitor traffic that helps them finance their operations through digital ad sales.

    The changes have been mostly successful for Google so far, with AI Overviews now being used by more than 2 billion people every month, according to the company. The Gemini app, by comparison, has about 650 million monthly users.

With the release of Gemini 3, the AI mode in Google’s search engine is also adding a new feature that will allow users to click on a “thinking” option in a tab that company executives promise will deliver even more in-depth answers than the current mode provides. Although the “thinking” choice in the search engine’s AI mode initially will only be offered to Gemini Pro and Ultra subscribers, the Mountain View, California, company plans to eventually make it available to all comers.

  • Anthropic, Microsoft announce new AI data center projects as industry’s construction push continues

    Artificial intelligence company Anthropic announced a $50 billion investment in computing infrastructure on Wednesday that will include new data centers in Texas and New York.

    Microsoft also on Wednesday announced a new data center under construction in Atlanta, Georgia, describing it as connected to another in Wisconsin to form a “massive supercomputer” running on hundreds of thousands of Nvidia chips to power AI technology.

    The latest deals show that the tech industry is moving forward on huge spending to build energy-hungry AI infrastructure, despite lingering financial concerns about a bubble, environmental considerations and the political effects of fast-rising electricity bills in the communities where the massive buildings are constructed.

Anthropic, maker of the chatbot Claude, said it is working with London-based Fluidstack to build the new computing facilities to power its AI systems. It didn’t disclose their exact locations or what source of electricity they will use.

    Another company, cryptocurrency mining data center developer TeraWulf, has previously revealed it was working with Fluidstack on Google-backed data center projects in Texas and New York, on the shore of Lake Ontario. TeraWulf declined comment Wednesday.

    A report last month from TD Cowen said that the leading cloud computing providers leased a “staggering” amount of U.S. data center capacity in the third fiscal quarter of this year, amounting to more than 7.4 gigawatts of energy, more than all of last year combined.

    Oracle was securing the most capacity during that time, much of it supporting AI workloads for Anthropic’s chief rival OpenAI, maker of ChatGPT. Google was second and Fluidstack came in third, ahead of Meta, Amazon, CoreWeave and Microsoft.

    Anthropic said its projects will create about 800 permanent jobs and 2,400 construction jobs. It said in a statement that the “scale of this investment is necessary to meet the growing demand for Claude from hundreds of thousands of businesses while keeping our research at the frontier.”

    Microsoft has branded its two-story Atlanta data center as Fairwater 2 and said it will be connected across a “high-speed network” with the original Fairwater complex being built south of Milwaukee, Wisconsin. The company said the facility’s densely packed Nvidia chips will help power Microsoft’s own AI technology, along with OpenAI’s and other AI developers.

    Microsoft was, until earlier this year, OpenAI’s exclusive cloud computing provider before the two companies amended their partnership. OpenAI has since announced more than $1 trillion in infrastructure obligations, much of it tied to its Stargate project with partners Oracle and SoftBank. Microsoft, in turn, spent nearly $35 billion in the July-September quarter on capital expenditures to support its AI and cloud demand, nearly half of that on computer chips.

    Anthropic has made its own computing partnerships with Amazon and, more recently, Google.

    The tech industry’s big spending on computing infrastructure for AI startups that aren’t yet profitable has fueled concerns about an AI investment bubble.

    Investors have closely watched a series of circular deals over recent months between AI developers and the companies building the costly chips and data centers needed to power their AI products. Anthropic said it will continue to “prioritize cost-effective, capital-efficient approaches” to scaling up its business.

    OpenAI had to backtrack last week after its chief financial officer, Sarah Friar, made comments at a tech conference suggesting the U.S. government could help in financing chips needed for data centers. The White House’s top AI official, David Sacks, responded on social media platform X that there “will be no federal bailout for AI” and if one of the leading companies fails, “others will take its place,” though he also added he didn’t think “anyone was actually asking for a bailout.”

    OpenAI CEO Sam Altman later confirmed in a lengthy statement that “we do not have or want government guarantees” for the company’s data centers and also sought to address concerns about whether it will be able to pay for all the infrastructure it has signed up for.

    “We are looking at commitments of about $1.4 trillion over the next 8 years,” Altman wrote. “Obviously this requires continued revenue growth, and each doubling is a lot of work! But we are feeling good about our prospects there.”

  • Microsoft to ship 60,000 Nvidia AI chips to UAE under US-approved deal

    WASHINGTON (AP) — Microsoft said Monday it will be shipping Nvidia’s most advanced artificial intelligence chips to the United Arab Emirates as part of a deal approved by the U.S. Commerce Department.

The Redmond, Washington, software giant said licenses approved in September under “stringent” safeguards enable it to ship more than 60,000 Nvidia chips, including the California chipmaker’s advanced GB300 Grace Blackwell chips, for use in data centers in the Middle Eastern country.

    The agreement appeared to contradict President Donald Trump’s remarks in a “60 Minutes” interview aired Sunday that such chips would not be exported outside the U.S.

    Asked by CBS News’ Norah O’Donnell if he will allow Nvidia to sell its most advanced chips to China, Trump said he wouldn’t.

    “We will let them deal with Nvidia but not in terms of the most advanced,” Trump said. “The most advanced, we will not let anybody have them other than the United States.”

    The UAE’s ability to access chips is tied to its pledge to invest $1.4 trillion in U.S. energy and AI-related projects, an outsized sum given its annual GDP is roughly $540 billion.

    The UAE ambassador to the U.S., Yousef Al Otaiba, said in a statement earlier this year that the arrangement was “setting a new ‘Gold Standard’ for securing AI models, chips, data and access.”

Microsoft’s announcement Monday was part of the company’s planned $15.2 billion investment in technology in the UAE, which it says has some of the highest per-capita usage of AI. Microsoft had already accumulated more than 21,000 of Nvidia’s graphics processing units, known as GPUs, in the UAE through licenses approved under then-President Joe Biden.

    “We’re using these GPUs to provide access to advanced AI models from OpenAI, Anthropic, open-source providers, and Microsoft itself,” said a company statement.

  • OpenAI and Amazon sign $38 billion deal for AI computing power

    SEATTLE (AP) — OpenAI and Amazon have signed a $38 billion deal that enables the ChatGPT maker to run its artificial intelligence systems on Amazon’s data centers in the U.S.

    OpenAI will be able to power its AI tools using “hundreds of thousands” of Nvidia’s specialized AI chips through Amazon Web Services as part of the deal announced Monday.

    Amazon shares increased 4% after the announcement.

    The agreement comes less than a week after OpenAI altered its partnership with its longtime backer Microsoft, which until early this year was the startup’s exclusive cloud computing provider.

    California and Delaware regulators also last week allowed San Francisco-based OpenAI, which was founded as a nonprofit, to move forward on its plan to form a new business structure to more easily raise capital and make a profit.

    “The rapid advancement of AI technology has created unprecedented demand for computing power,” Amazon said in a statement Monday. It said OpenAI “will immediately start utilizing AWS compute as part of this partnership, with all capacity targeted to be deployed before the end of 2026, and the ability to expand further into 2027 and beyond.”

AI requires huge amounts of energy and computing power, and OpenAI has long signaled that it needs more capacity, both to develop new AI systems and to keep existing products like ChatGPT answering the questions of its hundreds of millions of users. It has recently committed to more than $1 trillion in spending on AI infrastructure, including data center projects with Oracle and SoftBank and semiconductor supply deals with chipmakers Nvidia, AMD and Broadcom.

    Some of the deals have raised investor concerns about their “circular” nature, since OpenAI doesn’t make a profit and can’t yet afford to pay for the infrastructure that its cloud backers are providing on the expectations of future returns on their investments. OpenAI CEO Sam Altman last week dismissed doubters he says have aired “breathless concern” about the deals.

    “Revenue is growing steeply. We are taking a forward bet that it’s going to continue to grow,” Altman said on a podcast where he appeared with Microsoft CEO Satya Nadella.

    Amazon is already the primary cloud provider to AI startup Anthropic, an OpenAI rival that makes the Claude chatbot.
