Nvidia Corp. will invest as much as $100 billion in OpenAI to support the construction of new data centers and other artificial intelligence infrastructure, a blockbuster deal that underscores booming demand for AI tools like ChatGPT and the computing power to make them run. The investment is intended to help OpenAI build data centers with a […]
It takes a lot of computing power to run an AI product – and as the tech industry races to tap the power of AI models, there’s a parallel race underway to build the infrastructure that will power them. On a recent earnings call, Nvidia CEO Jensen Huang estimated that between $3 trillion and $4 trillion will be spent on AI infrastructure by the end of the decade – with much of that money coming from AI companies themselves. Along the way, those companies are placing immense strain on power grids and pushing the industry’s building capacity to its limit.
Below, we’ve laid out everything we know about the biggest AI infrastructure projects, including major spending from Meta, Oracle, Microsoft, Google, and OpenAI. We’ll keep it updated as the boom continues, and the numbers climb even higher.
Microsoft’s $1 billion investment in OpenAI
This is arguably the deal that kicked off the whole contemporary AI boom: in 2019, Microsoft made a $1 billion investment in a buzzy non-profit called OpenAI, known mostly for its association with Elon Musk. Crucially, the deal made Microsoft the exclusive cloud provider for OpenAI – and as the demands of model-training became more intense, more of Microsoft’s investment started to come in the form of Azure cloud credit rather than cash. It was a great deal for both sides: Microsoft was able to claim more Azure sales, and OpenAI got more money for its biggest single expense. In the years that followed, Microsoft would build its investment up to nearly $14 billion – a move that is set to pay off enormously when OpenAI converts into a for-profit company.
The partnership between the two companies has unwound more recently. In January, OpenAI announced it would no longer be using Microsoft’s cloud exclusively, instead giving the company a right of first refusal on future infrastructure demands but pursuing others if Azure couldn’t meet their needs. More recently, Microsoft began exploring other foundation models to power its AI products, establishing even more independence from the AI giant.
OpenAI’s arrangement with Microsoft was so successful that it’s become common practice for AI companies to sign on with a particular cloud provider. Anthropic has received $8 billion in investment from Amazon, while making kernel-level modifications to the company’s hardware to make it better suited for AI training. Google Cloud has also signed on smaller AI companies like Lovable and Windsurf as “primary computing partners,” although those deals did not involve any investment. And even OpenAI has gone back to the well, receiving a $100 billion investment from Nvidia in September, giving it capacity to buy even more of the company’s GPUs.
The rise of Oracle
On June 30th, 2025, Oracle revealed in an SEC filing that it had signed a $30 billion cloud services deal with an unnamed partner – more than the company’s cloud revenue for all of the previous fiscal year. OpenAI was eventually revealed as the partner, securing Oracle a spot alongside Google in OpenAI’s string of post-Microsoft hosting partners. Unsurprisingly, the company’s stock went shooting up.
A few months later, it happened again. On September 10th, Oracle revealed a five-year, $300 billion deal for compute power, set to begin in 2027. Oracle’s stock climbed even higher, briefly making founder Larry Ellison the richest man in the world. The sheer scale of the deal is stunning: OpenAI does not have $300 billion to spend, so the figure presumes immense growth for both companies, and more than a little faith. But before a single dollar is spent, the deal has already cemented Oracle as one of the leading AI infrastructure providers – and a financial force to be reckoned with.
Building tomorrow’s hyperscale data centers
For companies like Meta that already have significant legacy infrastructure, the story is more complicated – although equally expensive. Mark Zuckerberg has said that Meta plans to spend $600 billion on US infrastructure through the end of 2028. In just the first half of 2025, the company spent $30 billion more than it had in the same period a year earlier, driven largely by its growing AI ambitions. Some of that spending goes toward big-ticket cloud contracts, like a recent $10 billion deal with Google Cloud, but even more resources are being poured into two massive new data centers. A new 2,250-acre site in Louisiana, dubbed Hyperion, will cost an estimated $10 billion to build out and provide an estimated 5 gigawatts of compute power. Notably, the site includes an arrangement with a local nuclear power plant to handle the increased energy load. A smaller site in Ohio, called Prometheus, is expected to come online in 2026, powered by natural gas.
That kind of buildout comes with real environmental costs. Elon Musk’s xAI built its own hybrid data center and power-generation plant in South Memphis, Tennessee. The plant has quickly become one of the county’s largest emitters of smog-producing chemicals, thanks to a string of natural gas turbines that experts say violate the Clean Air Act.
The Stargate moonshot
Just two days after his second inauguration, President Trump announced a joint venture between SoftBank, OpenAI and Oracle, meant to spend $500 billion building AI infrastructure in the United States. Named “Stargate” after the 1994 film, the project arrived with incredible amounts of hype, with Trump calling it “the largest AI infrastructure project in history.” Sam Altman seemed to agree, saying, “I think this will be the most important project of this era.”
In broad strokes, the plan was for SoftBank to provide the funding, with Oracle handling the buildout with input from OpenAI. Overseeing it all was Trump, who promised to clear away any regulatory hurdles that might slow down the build. But there were doubts from the beginning, including from Elon Musk, Altman’s business rival, who claimed the project did not have the available funds.
As the hype has died down, the project has lost some momentum. In August, Bloomberg reported that the partners were failing to reach consensus. Nonetheless, the project has moved forward with the construction of eight data centers in Abilene, Texas, with the final building set to be completed by the end of 2026.
Nvidia has paid nearly a billion dollars to bring in fresh talent and technology from an AI hardware startup.
According to a CNBC report on Thursday, Nvidia spent more than $900 million in cash and stock to hire Rochan Sankar, the CEO of AI chip startup Enfabrica, as well as several other employees at the company. Additionally, as part of the deal, Nvidia is allowed to license Enfabrica technology. The deal closed last week, and Sankar has already begun working at Nvidia, per CNBC’s sources.
Enfabrica’s chips use specialized software to keep data center speeds up and costs down. The startup’s standout feature is a system that incorporates cheaper memory, noticeably reducing the cost of operating AI.
The deal, which involves bringing in new talent, is similar to those conducted recently by Google and Meta. In June, Meta invested $14.3 billion in AI training-data startup Scale AI. The deal involved Scale AI CEO Alexandr Wang leaving the startup to join Meta’s superintelligence team.
Meanwhile, in July, Google signed a $2.4 billion agreement with AI coding startup Windsurf to hire the startup’s CEO, Varun Mohan, as well as other employees. Google also obtained a nonexclusive license to Windsurf’s technology.
Nvidia CEO Jensen Huang. Photo by Chesnot/Getty Images
The advantage of trading money for new talent is that tech giants can circumvent the complex regulatory hurdles that come with acquisitions — and still poach top talent from other companies.
Nvidia first began its involvement with Enfabrica in 2023, as one of the backers in a $125 million Series B funding round for the startup. Enfabrica was last valued at around $600 million in November, following a $115 million Series C round, according to PitchBook.
Nvidia has also made or considered a few other high-profile deals lately. Earlier this week, the AI chipmaker announced that it would be investing $5 billion into Intel to develop advanced technology, a deal that Nvidia CEO Jensen Huang called “an incredible investment.” On Friday, Nvidia signed a letter of intent to evaluate a $500 million investment in self-driving car startup Wayve.
Nvidia is the world’s most valuable company, a spot it claimed in June. One month later, Nvidia became the world’s first company to exceed $4 trillion in market value. The AI chipmaker is worth $4.32 trillion at the time of writing.
(AP) – Nvidia, the world’s leading chipmaker, announced on Thursday that it’s investing $5 billion in Intel and will collaborate with the struggling semiconductor company.
Nvidia said it will spend $5 billion to buy Intel common stock at $23.28 a share. The investment, which is subject to regulatory approvals, comes a month after the U.S. government took a 10% stake in Intel.
Nvidia CEO Jensen Huang called it “a fusion of two world-class platforms” that combines Intel’s strength in making conventional computer chips, known as CPUs, that power most laptops, with Nvidia’s focus on the specialized graphics chips that are critical for artificial intelligence.
“This partnership is a recognition that computing has fundamentally changed,” Huang told reporters Thursday. “The era of accelerated and AI computing has arrived.”
Intel shares jumped nearly 23%, its biggest one-day percentage gain since 1987. Nvidia shares added more than 3%.
For data centers, Intel will make custom chips that Nvidia will use in its AI infrastructure platforms. For personal computer products, Intel will build chips that integrate Nvidia technology.
The agreement provides a lifeline for Intel, which was a Silicon Valley pioneer that enjoyed decades of growth as its processors powered the personal computer boom, but fell into a slump after missing the shift to the mobile computing era unleashed by the iPhone’s 2007 debut.
Intel fell even farther behind in recent years amid the AI boom that’s propelled Nvidia into the world’s most valuable company. Intel lost nearly $19 billion last year and another $3.7 billion in the first six months of this year, and expects to slash its workforce by a quarter by the end of 2025.
The U.S. government stepped in last month to secure a 10% stake — 433.3 million shares of non-voting stock priced at $20.47 apiece — making it one of Intel’s biggest shareholders. Federal officials said they invested in Intel in order to bolster U.S. technology and domestic manufacturing. The total value of the U.S. government’s stake in Intel now stands at $13.2 billion, a $2.5 billion increase from where it stood before the Nvidia investment was announced.
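As a rough cross-check, those stake figures are internally consistent; a back-of-the-envelope sketch using only the numbers reported above:

```python
# Back-of-the-envelope check on the U.S. government's Intel stake,
# using only the figures reported above.
shares = 433.3e6           # non-voting shares acquired by the government
purchase_price = 20.47     # dollars per share at purchase
reported_value = 13.2e9    # reported stake value after the Nvidia news

cost_basis = shares * purchase_price      # what the government paid
implied_price = reported_value / shares   # Intel share price implied today

print(f"cost basis: ${cost_basis / 1e9:.2f}B")        # ~$8.87B
print(f"implied share price: ${implied_price:.2f}")   # ~$30.46
```

The implied current share price of roughly $30 versus a $20.47 purchase price lines up with the roughly $2.5 billion paper gain the article describes.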
Huang said Nvidia has been in talks with Intel for about a year. Intel CEO Lip-Bu Tan, who joined the press call with Huang on Thursday, said he’s been talking to Nvidia since he was named Intel’s new leader in March.
“This is a very big, important milestone,” Tan said. “I call it a game-changing opportunity that we can work together.”
The deal is “bullish for U.S. tech,” Wedbush Securities analyst Daniel Ives said in a client note.
Ives said it brings Intel “front and center into the AI game” and, combined with the U.S. government stake, adds to “a golden few weeks for Intel after years of pain and frustration for investors.”
Nvidia, meanwhile, has soared because its specialized chips are underpinning the AI boom. The chips, known as graphics processing units, or GPUs, are highly effective at developing powerful AI systems.
The deal between the two chipmakers comes as China moves to be less dependent on U.S. semiconductor technology. This week, Chinese officials reportedly forbade several large domestic technology companies from purchasing Nvidia chips, and China-based Huawei announced that it was expanding its development of AI chips and manufacturing.
While Nvidia and Intel, both headquartered in Santa Clara, California, will work together to develop new chips, a manufacturing deal has yet to be struck between the two. The potential access to Intel’s chip foundries by Nvidia poses a risk to Taiwan Semiconductor Manufacturing Company, which currently manufactures the tech giant’s flagship processors. Huang emphasized Thursday that both his company and Intel remain “very successful customers” of TSMC.
Of Nvidia’s own Intel stake, Huang said “the Trump administration had no involvement in this partnership at all,” though it “would have been very supportive, of course.”
Huang has been in Britain on a visit that coincides with Trump’s trip to the country, and he has been attending events with the president along with other Silicon Valley bigwigs.
At a signing ceremony for a trans-Atlantic tech partnership on Thursday with British Prime Minister Keir Starmer, Trump mused that AI was “taking over the world.”
“I’m looking at you guys. You’re taking over the world, Jensen,” Trump said.
Huang and Trump also both attended a royal banquet, prompting the tech mogul to dish about the Windsor Castle event to Intel’s CEO in the seconds before their press event.
“The cognac was excellent, but just not enough of it,” Huang told Tan. “I guess the cognac was from 1912.”
China is ending its antitrust probe into Google, which had centered on Android’s ubiquity in the mobile world and what impact, if any, it was having on Chinese phone makers like Oppo and Xiaomi that use the software. As reported by the Financial Times, the move comes amid ongoing discussions between the US and Chinese governments over tariffs and the broader trading relationship between the world’s two largest economies.
Google’s search engine remains blocked in China, along with many of its other core products like Gmail, YouTube and Google Maps. Despite this, the tech giant still generates substantial revenue in the country through cloud services and ad sales to Chinese companies targeting overseas audiences.
According to the Financial Times, the decision by Beijing to ease up on Google is a tactical move, as China increasingly flexes its regulatory scrutiny on NVIDIA as a negotiating tool during trade talks with the US.
Earlier this summer NVIDIA struck a deal with the Trump administration to sell its pared-back H20 GPUs in China on the condition that it gives the US government 15 percent of the sales. Shortly thereafter, however, China began discouraging local companies from buying the H20 chips. Recently, the government reportedly barred Chinese tech companies from buying NVIDIA’s newest AI chip made specifically for the region, the RTX Pro 6000D.
In yet another move to exert control and flex power, Chinese regulators have accused NVIDIA of violating the country’s anti-monopoly law with its acquisition of chipmaker Mellanox. Were the chipmaker found to be in violation, the company could owe fines of between 1 percent and 10 percent of its 2024 sales.
US and Chinese officials just wrapped three days of trade talks in Madrid, with President Donald Trump and President Xi Jinping set to speak on Friday. The leaders are expected to discuss a proposed framework for a TikTok deal that would cede control of the company’s US business to American companies, resulting in a roughly 80 percent domestic stake in the entity.
A regulator has accused NVIDIA of violating China’s antitrust laws over its acquisition of chipmaker Mellanox. In the preliminary findings of its investigation, the State Administration for Market Regulation (SAMR) claimed that the company breached both national regulations and the conditional terms China outlined when it rubberstamped the $6.9 billion takeover. The SAMR hasn’t announced any penalties yet, as the investigation will continue.
The SAMR is said to have determined its preliminary findings several weeks ago. According to sources, the regulator held off from releasing its statement until now, as trade talks with the US take place in Madrid, with the idea of giving Chinese officials more leverage.
NVIDIA announced its acquisition of Mellanox back in 2019. China approved it in April the following year on the condition that NVIDIA continued to supply GPUs and interconnect products to the country and adhered to “fair, reasonable, and non-discriminatory principles.”
Last month, it was reported that China was discouraging companies in the country from buying NVIDIA’s H20 chips pending a national security review. Officials were said to have taken offense at remarks from Howard Lutnick, the US commerce secretary. After the US allowed NVIDIA to start offering chips to China again in July, Lutnick said the company wasn’t going to be selling its most cutting-edge tech there.
“We don’t sell them our best stuff, not our second best stuff, not even our third best. The fourth one down, we want to keep China using it,” he told CNBC. “The idea is the Chinese are more than capable of building their own. You want to keep one step ahead of what they can build, so they keep buying our chips. You want to sell the Chinese enough that their developers get addicted to the American technology stack.”
Google’s parent company, Alphabet, is now worth $3 trillion, a feat only achieved by three other tech giants: Nvidia, Microsoft, and Apple.
Alphabet shares gained more than 4% in value on Monday, allowing the company to achieve a historic market capitalization of $3.03 trillion at the time of writing. Market capitalization measures the total value of a company by multiplying its share price by the number of outstanding shares.
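The definition above reduces to a single multiplication; a minimal sketch, using hypothetical round numbers rather than Alphabet’s actual share price or share count:

```python
def market_cap(share_price: float, shares_outstanding: float) -> float:
    """Total company value: price per share times shares outstanding."""
    return share_price * shares_outstanding

# Hypothetical round numbers (not Alphabet's actual figures):
# a $300 share price and 10 billion shares outstanding.
value = market_cap(300.0, 10e9)
print(f"${value / 1e12:.2f} trillion")  # prints "$3.00 trillion"
```

By the same arithmetic, a 4% one-day rise in share price moves the market cap by 4% as well, which is how a single trading day can add over $100 billion to a $3 trillion company.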
Alphabet hit the $3 trillion mark just over two decades after Google first went public in 2004, and more than 10 years after its own creation as Google’s parent company.
Alphabet’s market cap has grown tremendously, more than 70%, from a low of $1.8 trillion in April. The recent surge in value is partially due to an antitrust ruling earlier this month in the case Department of Justice (DOJ) v. Google, which resulted in lighter penalties than initially suggested by the DOJ. The ruling caused Alphabet shares to rise by over 20% over the past month.
Alphabet CEO Sundar Pichai. Photographer: David Paul Morris/Bloomberg via Getty Images
In the week following the ruling, Alphabet gained $234 billion in market cap. The company’s stock is up more than 30% year-to-date. For context, the Nasdaq as a whole is up 15% for the year, per CNBC.
Wall Street generally views Alphabet stock favorably. More than 80% of Wall Street analysts recommend buying the stock as of Monday, per Bloomberg.
Alphabet joins other tech giants that have made it into the $3 trillion club — and beyond. Apple achieved the $3 trillion milestone in June 2023, while Nvidia and Microsoft have taken it a step further by passing the $4 trillion mark.
Alphabet’s focus in recent years has been on artificial intelligence, as the company strives to compete with Meta, OpenAI, and other key players in the AI race. While announcing its second-quarter earnings in July, Alphabet mentioned that it was increasing its AI expenditures from $75 billion to $85 billion amid growing demand for its cloud and AI services.
“AI is positively impacting every part of the business, driving strong momentum,” Alphabet and Google CEO Sundar Pichai stated in the earnings report.
Trade tensions between China and the U.S. regarding semiconductors just got even more strained.
On Monday, China’s State Administration for Market Regulation ruled that semiconductor giant Nvidia was in violation of the country’s antitrust regulations, as first reported by Bloomberg. The ruling was in reference to Nvidia’s 2020 acquisition of Mellanox Technologies, a computer networking supplier, for $7 billion.
An Nvidia spokesperson supplied the following statement, “We comply with the law in all respects. We will continue to cooperate with all relevant government agencies as they evaluate the impact of export controls on competition in the commercial markets.”
China didn’t announce any consequences tied to its findings and will continue to investigate. Still, the ruling is likely to cast a pall over ongoing tariff negotiations between the U.S. and China, currently taking place in Madrid. While these trade discussions aren’t specifically about semiconductors, the question of Chinese access to Nvidia chips is a major point of contention between the two regimes.
The outgoing Biden administration announced its AI Diffusion Rule back in January, which was meant to restrict exports of U.S.-made AI chips to many countries, with further restrictions specifically for China and other adversaries.
While the U.S. Department of Commerce formally repealed Biden’s AI rule in May, the future of AI chip exports to China remains in flux. The Trump administration imposed licensing requirements on chips heading to China in April. A few months later, in July, chipmakers were given the green light to start selling these chips again.
Just a few weeks after that, the U.S. struck a deal requiring companies selling chips to China to give the government a 15% cut of the revenue made on those sales. China has since discouraged firms from buying Nvidia chips, and, as of a recent earnings call, none of the company’s chips have made it through the new export process.
In a move drawing considerable attention across the tech industry, Nvidia Corporation has publicly critiqued the recently proposed GAIN AI Act, emphasizing its potential to stifle competition in the rapidly evolving artificial intelligence sector.
The GAIN AI Act, which stands for Guaranteeing Access and Innovation for National Artificial Intelligence Act, was introduced as part of the U.S. National Defense Authorization Act, with the goal of ensuring that the United States is the dominant market force for AI.
It has not yet passed and remains a hotly debated policy topic both here and abroad because of the restrictions it looks to enact.
Backers say it aims to protect American market interests by prioritizing domestic orders for advanced AI chips and processors, securing supply chains for critical AI hardware, and, theoretically, reducing reliance on foreign manufacturers.
So it’s no huge surprise that Nvidia, currently the world’s biggest company and one that sells heavily abroad, would take aim at a law that might restrict exports of its technology.
The company said as much during a recent industry forum.
“We never deprive American customers in order to serve the rest of the world. In trying to solve a problem that does not exist, the proposed bill would restrict competition worldwide in any industry that uses mainstream computing chips,” an Nvidia spokesperson said.
Is the GAIN AI Act a good idea for innovation?
It depends on who you ask.
Essentially, the law seeks to strengthen national security and economic competitiveness by ensuring that key AI components remain accessible to American companies and government agencies before they are supplied abroad.
Its language takes a hard line on what the priority should be for the United States government.
“It should be the policy of the United States and the Department of Commerce to deny licenses for the export of the most powerful AI chips, including such chips with total processing power of 4,800 or above and to restrict the export of advanced artificial intelligence chips to foreign entities so long as United States entities are waiting and unable to acquire those same chips,” the legislation reads.
Nvidia’s critique reflects broader industry anxieties about regulatory environments that might hinder innovation. As global competition intensifies, particularly with formidable advances in AI from regions such as China, firms like Nvidia are closely watching how regulatory frameworks are taking shape abroad.
But it’s not just foreign companies. American market players, too, have said it could hit many domestic operations hard.
“Advanced AI chips are the jet engine that is going to enable the U.S. AI industry to lead for the next decade,” Brad Carson, president of Americans for Responsible Innovation (ARI), a lobbying group for the AI industry, said in a widely distributed statement.
“Globally, these chips are currently supply-constrained, which means that every advanced chip sold abroad is a chip the U.S. cannot use to accelerate American R&D and economic growth,” Carson said. “As we compete to lead on this dual-use technology, including the GAIN AI Act in the NDAA would be a major win for U.S. economic competitiveness and national security.”
‘Doomer science fiction’
Nvidia didn’t stop there. It also took aim at an earlier attempt to make the U.S. more competitive in the chip market: a policy called the AI Diffusion Rule, which ultimately failed.
The company minced no words in a follow-up statement, saying that past attempts by legislators to control market forces through protectionist policies were ultimately a bad idea.
“The AI Diffusion Rule was a self-defeating policy, based on doomer science fiction, and should not be revived,” it read.
“Our sales to customers worldwide do not deprive U.S. customers of anything—and in fact expand the market for many U.S. businesses and industries,” it said. “The pundits feeding fake news to Congress about chip supply are attempting to overturn President Trump’s AI Action Plan and surrender America’s chance to lead in AI and computing worldwide.”
The challenge will be creating laws that are as dynamic as the technologies they aim to govern, fostering a climate where innovation and ethical accountability are not mutually exclusive, but rather mutually reinforcing.
We’ve tried this before
Nvidia’s mention of the AI Diffusion rule was no accident. That ill-fated policy had many of the same political goals but ultimately stumbled at the finish line and was a relatively toothless attempt to rein in some of the world’s most competitive companies.
The Biden administration’s AI Diffusion Rule, announced in January 2025, represented a significant shift in U.S. export controls targeting cutting-edge artificial intelligence technology.
Designed to curb the spread of advanced AI tools to rival nations, the regulation mandated licensing for the sale of high-end AI chips and imposed strict caps on computing power accessible to foreign recipients. Its goal was to slow the diffusion of sensitive AI capabilities that could enhance military or strategic applications abroad.
However, the Trump-era approach to export controls, which focused on a more targeted, bilateral framework, was poised to replace the Biden administration’s broader strategy.
President Trump had announced plans to rescind the AI Diffusion rule, criticizing it as overly bureaucratic and potentially hindering U.S. innovation. Instead, his administration favored engaging in country-specific agreements to control export practices, aiming for a more adaptable, case-by-case approach.
Though the AI Diffusion rule was ultimately rolled back, the Bureau of Industry and Security (BIS) signaled a renewed emphasis on enforcing existing regulations. The agency issued a notice reinforcing actions against companies with a “high probability” of violations, warning that increased scrutiny would be applied to entities with knowledge of potential breaches.
Whether this latest attempt to advance American interests meets a similar fate remains to be seen.
CEO Arthur Mensch is steering Mistral away from the AGI hype and toward Europe’s A.I. sovereignty. Photo by Ludovic Marin/AFP via Getty Images
Paris-based Mistral AI is on track for a new funding round that would value the A.I. startup at 12 billion euros ($14 billion), Bloomberg reports. The investment, expected to total around 2 billion euros ($2.3 billion), would solidify the company’s position at the center of Europe’s sovereign A.I. strategy and bring it closer to its goal of challenging dominant U.S. rivals.
Founded in 2023, Mistral has already raised some 1.1 billion euros ($1.3 billion) over the past two years. Its upcoming valuation would more than double the 5.8 billion euros ($6.8 billion) figure it reached last June following a 468 million euro ($550 million) round that drew backers such as Andreessen Horowitz, Salesforce and Nvidia.
Mistral did not respond to requests for comment from Observer.
For now, the startup still pales in size compared to its Silicon Valley competitors. Anthropic closed a round earlier this month at a staggering $183 billion valuation, while OpenAI is reportedly eyeing $500 billion. Still, Mistral is eager to compete. Its products include an A.I. assistant called “Le Chat,” designed for European customers and positioned as an alternative to OpenAI’s ChatGPT and Anthropic’s Claude chatbots.
Mistral was co-founded by Arthur Mensch, a former researcher at Google DeepMind, along with former Meta researchers Timothée Lacroix and Guillaume Lample. Mistral has tried to distinguish itself by emphasizing open access. It has released several open-source language models. Unlike American A.I. giants, Mistral has also rejected pursuing AGI. Mensch, who serves as CEO, has said his firm is more focused on ensuring U.S. startups don’t dominate how the technology shapes global culture.
Mistral is central to Europe’s A.I. playbook
Mistral is part of a broader surge in European A.I. investment. In 2024, venture capital rounds involving A.I. and machine learning companies across the continent were expected to reach 13.2 billion euros ($15.5 billion), a 20 percent increase from the year before, according to PitchBook.
As one of Europe’s leading startups, Mistral is central to the region’s goal of building an A.I. ecosystem independent of technology from America or China. Earlier this year, the company partnered with Nvidia to launch a European A.I. platform that will allow companies to develop applications and strengthen domestic infrastructure. French President Emmanuel Macron hailed the initiative as “a game changer, because it will increase our sovereignty and it will allow us to do much more.”
Mistral’s rapid ascent is tied to broader efforts to bolster A.I. across Europe and France. Its Nvidia partnership followed Macron’s announcement at Paris’ global A.I. summit in February, where he pledged more than 100 billion euros ($117 billion) to support France’s A.I. industry. European players must move quickly, Macron stressed at the time: “We are committed to going faster and faster.”
OpenAI is gearing up to start the mass production of its own AI chips next year to be able to provide the massive computing power its users need and to lessen its reliance on NVIDIA, according to the Financial Times. The company reportedly designed the custom AI chip with US semiconductor maker Broadcom, whose CEO recently announced that it has a new client that put in a whopping $10 billion in orders. It didn’t name the client, but the Times‘ sources confirmed that it was OpenAI, which apparently doesn’t have plans to sell the chips and will only be using them internally.
Reuters reported back in 2023 that OpenAI was already exploring the possibility of making its own AI chips after Sam Altman blamed GPU shortages for the speed and reliability issues of the company’s API. The news organization also previously reported that OpenAI was working with both Broadcom and Taiwan Semiconductor Manufacturing Co. (TSMC) to create its own product. The Times didn’t say whether OpenAI still has an ongoing partnership with TSMC.
After GPT-5 came out, Altman announced the changes OpenAI is implementing in order to keep up with “increased demand.” In addition to prioritizing paid ChatGPT users, he said that OpenAI was going to double its compute fleet “over the next 5 months.” Making its own chips will address any potential GPU shortages the company may encounter in doubling its fleet, and it could also save the company money. The Times says custom AI chips called “XPUs” like the one OpenAI is reportedly developing will eventually take a big share of the AI market. At the moment, NVIDIA is still the leading name in the industry. It recently revealed that its revenue for the second quarter ending on July 27 rose 56 percent compared to the same period last year, and it didn’t even have to ship any H20 chips to China.
Some top investors are eyeing AI opportunities outside of the Magnificent 7.
Nvidia and other major AI firms face high expectations and a potential growth slowdown.
Experts suggest investing in data storage and hardware providers for AI opportunities.
The first wave of AI stocks — including chipmakers like Nvidia and Broadcom, as well as hyperscalers like Microsoft, Meta, and Amazon — may all still be great buys.
But there’s no mistaking the high expectations that investors are placing on these household names going forward. Plus, the uber-explosive earnings growth for names like Nvidia may soon start to slow, as the company told investors in its recent earnings call.
So if you missed their run-ups in the market, you might be feeling like you missed the boat on the AI trade. (To be fair, you probably didn’t miss out entirely — these companies have huge exposures in the S&P 500 and Nasdaq 100, meaning most index investors already have some fairly sizeable positions in the AI giants.)
But if you’re looking for some overlooked AI firms, there are still plenty of opportunities out there, according to Que Nguyen, the CIO at Research Affiliates, and Brian Mulberry, a senior portfolio manager at Zacks Investment Management.
In recent interviews with Business Insider, the duo shared some of their preferred non-mega-cap AI stocks right now.
For Nguyen’s part, she said to look to stocks like Western Digital (WDC), Seagate Technologies (STX), Hewlett Packard Enterprise (HPE), and Micron Technology (MU) — all data storage providers.
“One of the things that I see is AI is spreading and benefiting an entire ecosystem of technology companies,” Nguyen said. “So you look at even boring companies like hard disk companies — in order to have AI you need to be able to store a lot of data and get it quickly, right?”
“None of these companies is nearly as expensive as the Mag 7,” she continued. “Don’t just stick with the Mag 7, or Nvidia and AMD. Look more broadly — own something diversified. You have no idea where the next killer app is going to come, or where the next big investment theme is going to be.”
Some examples of diversified products offering exposure to specific AI and tech stocks include the Global X Artificial Intelligence & Technology ETF (AIQ) and the iShares AI Adopters & Applications UCITS ETF (AIAA).
Meanwhile, Mulberry said he likes stocks like Amphenol (APH) and Emcor (EME).
Both are hardware providers, and are raking in money from hyperscalers as they spend hundreds of billions to build out their AI data centers, Mulberry said. Consensus earnings estimates for both firms show growth in the next couple of years, he said.
“They’re simply benefiting from the actual dollars being spent without having to increase their own capex,” he said of the stocks.
He continued: “They’re very specialized electrical connectors, and they don’t have to do anything other than just show up and start helping build out data centers with their expertise.”
One of the trickiest parts of any new computer build or upgrade is finding the right video card. In a gaming PC, the GPU is easily the most important component, and you can hamstring your experience by buying the wrong model. The buying process can be frustrating, with many manufacturers selling their models above their suggested retail price. In this guide, we’ll help you navigate the market and find the right GPU for your needs.
How to buy a GPU
There are a lot of things to consider before buying a graphics card. We’ll go through everything in depth below, but here’s a TL;DR list of what you should consider: the types of games you play, the amount of VRAM in the graphics cards you’re considering, the physical size of the card and how much power it requires, the manufacturers that make the GPUs on your shortlist and, finally, your budget for a new GPU. We have some of our favorites recommended at the end of this guide, but it’s important to remember that there isn’t one best graphics card for everyone — the best GPU for you will depend largely on how you plan on using it, with what frequency and how much you’re willing to spend.
It’s all about the games
The first question to ask yourself is what kind of games do you want to play. Competitive shooters like Valorant, Overwatch and Marvel Rivals were designed to run on older hardware. As such, even entry-level GPUs like the GeForce RTX 5060 can push those games at 120 frames per second and above at 1080p (more on why that’s important in a moment).
By contrast, if you want to play modern, single-player games with ray tracing and other graphical extras, you’ll need a more powerful GPU. Just how much more powerful will depend on the resolution of your monitor.
A 1440p monitor has 78 percent more pixels than a 1080p screen, and a 4K display has more than twice as many pixels as a QHD panel. In short, running a game at 4K, especially at anything above 60 frames per second, is demanding, and most GPUs will need to use upscaling techniques like NVIDIA’s Deep Learning Super Sampling (DLSS) and AMD’s FidelityFX Super Resolution (FSR) to push new games at high refresh rates.
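Those percentages come straight from the pixel counts of each resolution. Here’s a quick back-of-the-envelope sketch (in Python, using the standard dimensions for each resolution) that shows where the numbers come from:

```python
# Pixel counts for the three common gaming resolutions.
resolutions = {
    "1080p": (1920, 1080),   # Full HD
    "1440p": (2560, 1440),   # QHD
    "4K":    (3840, 2160),   # UHD
}
pixels = {name: w * h for name, (w, h) in resolutions.items()}

# 1440p vs 1080p: 3,686,400 / 2,073,600 ≈ 1.78, i.e. about 78% more pixels.
qhd_vs_fhd = pixels["1440p"] / pixels["1080p"]

# 4K vs 1440p: 8,294,400 / 3,686,400 = 2.25, i.e. more than twice as many.
uhd_vs_qhd = pixels["4K"] / pixels["1440p"]
```

Since a GPU’s workload scales roughly with the number of pixels it has to shade each frame, these ratios are a decent first approximation of how much harder each step up in resolution is to drive.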
While we’re on the subject of resolution, it doesn’t make sense to spend a lot of money on a 4K monitor only to pair it with an inexpensive GPU. That’s a recipe for a bad experience. As you’re shopping for a new video card, you should think about the resolution and frame rate you want to play your games. If you’re in the market for both a GPU and display, be sure to check out our guide to the best gaming monitors.
If your budget allows, a good bet is to buy a midrange card that can comfortably render all but the most demanding games at 1440p and at least 144 frames per second. Put another way, you want a GPU that can saturate a monitor at its native resolution and refresh rate in as many games as possible. That will give you the smoothest possible experience in terms of motion clarity, and allow you to dabble in both competitive shooters and the latest single-player games as the mood strikes you.
NVIDIA vs AMD and Intel
Photo by Devindra Hardawar/Engadget
One of the confusing aspects of the GPU industry is the number of players involved. What you need to know is that there are three main players: AMD, Intel and NVIDIA. They design the cards you can buy, but delegate the manufacturing of them to so-called add-in board (AIB) partners like ASUS, XFX, Gigabyte and others.
As you can probably imagine, this creates some headaches, the most annoying of which is that AMD, Intel and NVIDIA will often set recommended prices for their graphics cards, only for their partners to sell their versions of those GPUs above the manufacturer’s suggested retail price (MSRP). For example, NVIDIA’s website lists the RTX 5070 with a starting price of $549. On Newegg, there are no 5070s listed at that price. The only models anywhere close to $549 are open box specials. If you want one that comes sealed, that will cost you at least $600.
As for what company you should buy your new GPU from, before 2025, NVIDIA was the undisputed king of the market. Specific GeForce cards may not have offered the best rasterization performance in their price range, but between their performance in games with ray tracing and the fact NVIDIA was ahead on features like DLSS, an RTX GPU was a safe bet.
However, with this year’s RTX 50 series release, other than models like the RTX 5080 and 5090 where there’s no competition, it’s safe to say NVIDIA missed the mark this generation. If you’re in the market for an entry- or mid-level GPU, AMD and Intel offer better value, with cards that come with enough VRAM for now and into the future. That said, there are still a few reasons you might consider an NVIDIA GPU, starting with ray tracing.
Ray tracing
For decades, developers have used rasterization techniques to approximate how light behaves in the real world, and the results have been commendable. But if you know what to look for, it’s easy to see where the illusion falls apart. For that reason, real-time ray tracing has been a goal of the industry for years, and in 2018 it became a reality with NVIDIA’s first RTX cards.
In some games, effects like ray-traced reflections and global illumination are transformational. Unfortunately, those features are expensive to run, often incurring a significant frame-rate drop without upscaling. Since ray tracing was optional in many games before 2025, you could save money by buying an AMD GPU. For example, even if the RX 7800 XT was worse at ray tracing than the RTX 4070, the former was often cheaper to buy, had more onboard VRAM and offered as good or better rasterization performance in many games.
However, you can’t ignore ray tracing performance anymore. We’re starting to see releases like Doom: The Dark Ages where the tech is an integral part of a game’s rendering pipeline, and more are likely to follow in the future. Thankfully, AMD’s newest cards are much better in that regard, though you’ll still get an edge running an NVIDIA model. For that reason, if ray tracing is important to you, NVIDIA cards are still the way to go.
Refresh rates and frame rates
If you’re new to the world of PC gaming, it can be tricky to wrap your head around refresh rates. In short, the higher the refresh rate of a monitor, the more times it can update the image it displays on screen every second, thereby producing a smoother moving picture.
For example, moving elements on a monitor with a 240Hz refresh rate will look better than on one with a 120Hz refresh rate. However, that’s all contingent on your GPU being able to consistently render a game at the appropriate frame rates. In the case of a 120Hz monitor, you want a GPU with enough headroom to drive most games at 120 fps. Realistically, most video cards won’t be able to achieve that in every game, but it’s a good baseline to aim for when shopping for a new GPU.
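Another way to think about this pairing is as a per-frame time budget: to keep a monitor saturated, the GPU has to finish rendering each frame within 1000 / refresh-rate milliseconds. A minimal sketch of that arithmetic:

```python
def frame_budget_ms(refresh_hz: float) -> float:
    """Milliseconds the GPU has to render one frame to match a given refresh rate."""
    return 1000 / refresh_hz

# A 60Hz display gives the GPU ~16.67 ms per frame; a 120Hz display
# halves that to ~8.33 ms, and a 240Hz display leaves only ~4.17 ms.
for hz in (60, 120, 240):
    print(f"{hz}Hz -> {frame_budget_ms(hz):.2f} ms per frame")
```

That shrinking budget is why driving a high-refresh monitor demands so much more GPU headroom: every doubling of the refresh rate halves the time available to render each frame.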
Upscaling and latency
I’ve mentioned DLSS a few times already. Alongside FSR and Intel XeSS, DLSS is an example of what’s known as an image reconstruction technology. More and more, native rendering is going out of fashion in game design. With ray tracing and other modern effects enabled, even the most powerful GPUs can struggle to render a game at 1440p or 4K and a playable framerate. That’s why many developers will turn to DLSS, FSR or XeSS to eke out additional performance by upscaling a lower resolution image to QHD or UHD.
Upscaling in games is nothing new. For example, the PS4 Pro used a checkerboard technique to output games in 4K. What is different now is how modern GPUs go about it. With DLSS, NVIDIA pioneered an approach that uses machine learning to recreate an image at a higher resolution, and in the process, addressed some of the pitfalls of past upscaling methods. If you’re sensitive to these sorts of things, there’s still blur and shimmer with DLSS, FSR and XeSS, but it’s much less pronounced and can lead to significant performance gains.
To DLSS, NVIDIA later added single- and multi-frame generation. DLSS is only available on NVIDIA cards and, following the recent release of DLSS 4, is widely considered to offer the best image quality. That’s another reason why you might choose an NVIDIA card over one of its competitors. However, if you decide to go with an AMD GPU, don’t feel like you’re missing out. The company recently released FSR 4. While it’s not quite on par with DLSS 4 in terms of support and image quality, it’s a major leap over FSR 3 and FSR 2.
While on the subject of DLSS, I’ll also mention NVIDIA Reflex. It’s a latency-reducing technology NVIDIA introduced in 2020. AMD has its own version called Radeon Anti-Lag, but here again Team Green has a slight edge thanks to the recent release of Reflex 2. If you’re serious about competitive games, Reflex 2 can significantly reduce input lag, which will make it easier to nail your shots in Counter-Strike 2, Valorant and other shooters.
Driver support
Previously, one of the reasons to pick an NVIDIA GPU over the competition was the company’s solid track record of driver support. With one of the company’s video cards, you were less likely to run into stability issues and games failing to launch. In 2025, NVIDIA’s drivers have been abysmal, with people reporting frequent issues and bugs. So if you care about stability, AMD has a slight edge right now.
VRAM
As you’re comparing different GPUs, especially those in the same tier, pay close attention to the amount of VRAM they offer. Modern games will eat up as much VRAM as a GPU can offer, and if your card has a low amount, such as 8GB, you’re likely to run into a performance bottleneck.
If your budget allows for it, always go for the model with more VRAM. Consider, for instance, the difference between the $299 RTX 5060 and $429 RTX 5060 Ti. I know spending an extra $130 — close to 50 percent more — on the 5060 Ti is going to be a lot for some people, but it’s the difference between a card that is barely adequate for any recent release and one that will last you for a few years, and it all comes down to the amount of VRAM offered in each. Simply put, more is better.
A slight caveat to this is when comparing models that have different memory bandwidths. A GPU that can access more of its memory faster can outperform one with more memory, even if it has less of it outright. Here, you’ll want to read reviews of the models you’re comparing to see how they perform in different games.
Size and power draw
Modern GPUs are big. Most new cards will take up at least two PCI slots on the back of your motherboard. They can also vary dramatically in length, depending on the number of fans the AIB has added to cool the PCB. To be safe, be sure to check the length of the card you want to buy against the maximum clearance listed by your case manufacturer. If you have a radiator at the front of your case, you will also need to factor its size into your measurements. The last thing you want is to buy a card that doesn’t fit in your case.
Lastly, be sure to check the recommended power supply for the card you want. As a rule of thumb, unless you know what you’re doing, it’s best to just stick with the manufacturer’s recommendation. For instance, NVIDIA suggests pairing the RTX 5070 with a 750 watt PSU. So if you’re currently running a 650 watt unit, you’ll need to factor in the price of a PSU upgrade with your new GPU.
Should you buy a used GPU?
Devindra Hardawar for Engadget
It depends. If you can find a deal on an old RTX 40 series GPU, then yes. NVIDIA’s RTX 50 series cards don’t offer greatly improved performance over their predecessors, and with most models selling for more than their suggested retail price, it’s not a great time to buy a new NVIDIA card.
That said, I suspect finding a good deal on a used GPU will be difficult. Most people will know the value of what they have, and considering the current market, will probably try to get as much as they can for their old card.
You may find better deals on older AMD and Intel GPUs, but I think you’re better off spending more now on a new model from one of those companies since the generational gains offered by their latest cards are much more impressive. Simply put, the 9070 XT and B580 are two of the best cards you can buy right now.
Anything older than a card from NVIDIA’s 40 series or AMD’s RX 6000 family is not worth considering. Unless your budget is extremely tight or you mostly play older games, you’re much better off spending more to buy a new card that will last you longer.
When is a good time to buy a new GPU?
If you’ve read up to this point, you’re probably wondering if it’s even worth buying a GPU right now. The answer is (unsurprisingly) complicated. There are a handful of great cards like the Intel B580 and Radeon 9070 XT that are absolutely worth buying. The problem is that finding any GPU at prices approaching those set by AMD, Intel or NVIDIA is really tough. To make things worse, uncertainty around President Trump’s tariff policies is likely to push prices even higher. If you own a relatively recent GPU, you’re probably best off trying to hold onto your current card until things settle down.
However, if your GPU isn’t cutting it anymore, you face a difficult decision: overpay now, or wait and potentially pay even more later. As much as I’m reluctant to recommend a prebuilt PC, if you’re already planning to build a new computer, it’s worth exploring your options there since you might end up saving money on a video card when it’s bundled together with all the other components you need.
Best GPUs for 2025: Engadget recommendations
Entry-level (1080p) GPUs
As we mentioned above, if you’re only aiming to play basic competitive shooters like Valorant and Overwatch 2 in 1080p, an entry-level GPU may be all you need. While 1080p isn’t an ideal resolution when it comes to sharpness, many gamers prefer it since it’s easier to reach higher framerates. And it also helps that 1080p gaming monitors, like the AOC 24G15N 24-inch we recommend, tend to offer speedy refresh rates for between $100 and $200. When you’re zipping through matches, you likely won’t have time to take a breath and appreciate the detail from higher resolutions.
Here are our recommendations for entry-level video cards.
Surprisingly enough, you can actually find this modern NVIDIA GPU for $300. While you’ll have to live with 8GB of VRAM, that’s more than enough for 1080p gaming, and it also has the benefit of DLSS 4 upscaling.
With a $250 list price and 12GB of VRAM, it’s hard to go wrong with the B580 on paper. Unfortunately, its price has shot up significantly, and it’s often hard to find it in stock. Still, it delivers excellent 1080p performance, and it can also play some games in 1440p well. (Check out our Intel Arc B580 review.)
Midrange (1440p) GPUs
While entry-level cards can dabble with 1440p gaming, it’s worth stepping up to something a bit more powerful if you actually want to achieve higher refresh rates. For most gamers, 1440p is the best balance between sharpness and high framerates. It looks noticeably better than 1080p, and doesn’t require the horsepower overhead of 4K. (And there’s a good chance you won’t really see a visual difference with the jump to 4K.)
Here are our recommendations for midrange GPUs.
AMD surprised us all with the Radeon RX 9070 and 9070 XT, two midrange cards that offered similar power to and more VRAM than NVIDIA’s more expensive cards. While you won’t see the RX 9070 for its $550 launch price today, you can still snag one for a slight premium. (Check out our AMD Radeon RX 9070 and 9070 XT review.)
Premium (4K) GPUs
If you want the most of what modern PC games have to offer, including 4K and all of the benefits of ray tracing, then be ready to spend big bucks on a high-end GPU. If you’re going this route, though, be sure you’re also gaming on a high-end monitor that befits these powerful GPUs.
Here are our recommendations for premium GPUs.
The RTX 5070 Ti surprised me with excellent 4K gaming performance for a launch price that was well below the RTX 5080. While its price has jumped significantly since then, it’s still the best overall NVIDIA card if you want to play in 4K at 120Hz or beyond. (Check out our NVIDIA RTX 5070 Ti review.)
If the RTX 5070 Ti isn’t enough for you, the RTX 5080’s additional power and 16GB of VRAM should suit your fancy. Just be prepared to pay around $1,500 for it, a 50 percent jump from its $999 launch price.
Listen, there’s only one choice here and it’s NVIDIA’s enormously powerful and fantastically expensive RTX 5090. It’s an absolute beast, with 32GB of VRAM and the most hardware NVIDIA has ever stuffed into a consumer GeForce GPU. The RTX 5090 doesn’t make sense for 99 percent of gamers — especially since it’s now going for $3,000, up from its $2,000 launch price — but if you have the cash to spare, it’ll certainly earn you bragging rights. (Check out our NVIDIA RTX 5090 review.)
Nearly 40% of Nvidia’s second quarter revenue came from just two customers, according to a filing with the Securities and Exchange Commission.
On Wednesday, the chipmaker reported record revenue of $46.7 billion during the quarter that ended on July 27 — a 56% year-over-year increase largely driven by the AI data center boom. However, subsequent reporting highlighted how much of that growth seems to be coming from just a handful of customers.
Specifically, Nvidia said that a single customer represented 23% of total Q2 revenue, while sales to another customer represented 16% of Q2 revenue. The filing does not identify either of these customers, only referring to them as “Customer A” and “Customer B.”
During the first half of the fiscal year, Nvidia says Customer A and Customer B accounted for 20% and 15% of total revenue, respectively. Four other customers accounted for 14%, 11%, another 11%, and 10% of Q2 revenue, the company says.
In its filing, the company says these are all “direct” customers — such as original equipment manufacturers (OEMs), system integrators, or distributors — who purchase their chips directly from Nvidia. Indirect customers, such as cloud service providers and consumer internet companies, purchase Nvidia chips from these direct customers.
In other words, it sounds unlikely that a big cloud provider like Microsoft, Oracle, Amazon, or Google might secretly be Customer A or Customer B — though those companies may be indirectly responsible for that massive spending.
In fact, Nvidia’s Chief Financial Officer Colette Kress said that “large cloud service providers” accounted for 50% of Nvidia’s data center revenue, which in turn represented 88% of the company’s total revenue, according to CNBC.
What does this mean for Nvidia’s future prospects? Gimme Credit analyst Dave Novosel told Fortune that while “concentration of revenue among such a small group of customers does present a significant risk,” the good news is that “these customers have bountiful cash on hand, generate massive amounts of free cash flow, and are expected to spend lavishly on data centers over the next couple of years.”
Washington — The Trump administration’s 10% stake in Intel, announced not long after President Trump had called on the chip maker’s CEO to resign, is being criticized by conservatives and some economic policy experts alike, who worry such extensive government intervention undermines free enterprise.
Kevin Hassett, director of the White House National Economic Council, may have fueled those misgivings, telling CNBC this week that although Intel is a “very, very special” circumstance, that “there’ll be more transactions, if not in this industry, then other industries.” The possibility of the U.S. acquiring stakes in additional U.S. companies was immediately met with criticism.
Adam Posen, president of the Peterson Institute for International Economics, responded immediately to Hassett’s comment, posting on X, “ARE you effing kidding me? We are going past 1984 into Animal Farm territory at this point,” referring to the George Orwell satirical novel critiquing totalitarianism. “Did anyone vote for this? Anyone?”
Daniel Di Martino, a fellow at the right-of-center Manhattan Institute, predicted that if that happens, the U.S. would see more cronyism, with the result that “companies will underperform because they know they will be bailed out,” and “taxpayers will lose billions.”
“You can’t just be against socialism when the left does it,” conservative talk show host Erick Erickson said of the Intel agreement. “If you’re not against socialism overall, guess what? You’re going to get socialism. So if you support socialism, apparently Donald Trump is your guy.”
Why did the U.S. invest in Intel?
Mr. Trump says he’d like to increase chip production in the U.S. and reduce the nation’s dependence on chips manufactured overseas. He believes that the investment in Intel will help the U.S. better position itself to maintain its technological edge over China in the artificial intelligence race. But the U.S. had already invested in Intel through the Biden-era CHIPS and Science Act, and Mr. Trump and his top administration officials said the U.S. government is owed a return on its investment.
White House press secretary Karoline Leavitt on Thursday said the U.S. is taking a stake “to ensure that the United States government is making our country wealthy again and is benefitting from some of these deals.”
“We should get an equity stake for our money,” Commerce Secretary Howard Lutnick told CNBC. “So we’ll deliver the money, which was already committed under the Biden administration. We’ll get equity in return for it.”
But Intel has been struggling — not just for a couple of years, but for decades, said Scott Lincicome, a leading economic and trade policy expert who is a vice president at the libertarian Cato Institute and has criticized the Intel deal.
Intel prospered in the 1990s and early 2000s, when most personal computers relied on the company’s processors. The emergence of competitors like AMD and Intel’s own failure to adapt to mobile computing after the 2007 advent of the iPhone clobbered the chipmaker.
And now, as Nvidia and AMD vie for dominance in the AI chip race, Intel has been lagging.
“Even if you think government should be investing in companies, Intel is not a lean, mean, innovating machine,” Lincicome said.
The company lost nearly $19 billion last year and another $3.7 billion in the first six months of this year, prompting company plans to reduce its workforce by 25% by the end of the year. The company said the administration made the $8.9 billion investment in Intel common stock because of the government’s confidence in the role Intel plays in “expanding the domestic semiconductor industry.”
The Biden administration originally said Intel had to meet certain benchmarks to get the taxpayer money, but Mr. Trump removed those goals to buy the stake in Intel.
Peril in partial government ownership
Economic policy experts fear the U.S. stake in Intel will throw open the door to political pressure and cronyism.
Intel warned in a federal filing this week that there “could be adverse reactions, immediately or over time, from investors, employees, customers, suppliers, other business or commercial partners, foreign governments or competitors.”
Lincicome maintains that Intel was able to obtain a government infusion of cash not on the strength of its operations, but rather because it has the best lobbyists. And this will just lead more companies to vie for investment in the same way, he said.
“This is one of the problems with government picking winners and losers in industrial policy in general,” Lincicome said.
He outlined his concerns in an op-ed for the Washington Post this week.
“With the U.S. government as its largest shareholder, Intel will face constant pressure to align corporate decisions with the goals of whatever political party is in power,” Lincicome wrote. “Will Intel locate or continue facilities — such as its long-delayed ‘megafab’ in Ohio — based on economic efficiency or government priorities? Will it hire and fire based on merit or political connections?”
Lincicome isn’t the only analyst to point out that the uncomfortable decisions CEOs must make could come into conflict with government priorities if the U.S. holds a stake in their companies.
“There are major risks to these companies,” said Michael Strain, director of economic policy studies at the American Enterprise Institute, while acknowledging it isn’t yet clear what the Trump administration is planning for any future investments. “A lot of the things that companies need to do in order to stay competitive in the market are politically unpopular,” like layoffs. “It’s going to be a lot harder for these companies to engage in those painful but necessary moves if the president feels like they would create a political vulnerability for him.”
Companies without U.S. investment will feel pressure, too, said Di Martino. A company that needs semiconductor chips may decide to buy from Intel because it doesn’t want to lose government contracts.
The Trump administration has shown a willingness to use industrial policy in other ways that depart from free market economic principles long favored by conservatives and corporate America. Most notably, Mr. Trump’s aggressive — and sometimes punitive — use of tariffs, which he has said will reduce the country’s trade deficit, revive American manufacturing and generate federal revenue, hearkens back to the mercantilism of centuries past and contrasts with the laissez-faire ideas that have shaped the American economy.
How is the U.S. paying for the Intel stake?
Much of the cash for the stake is coming from the Biden-era CHIPS and Science Act, which is intended to boost America’s competitiveness in the chip industry.
Intel has already received $2.2 billion from the CHIPS Act, and is on track to receive another $5.7 billion injection from the law. A different federal program gave Intel $3.2 billion, for a grand total of $11.1 billion, according to a release from Intel. Intel and the federal government say the ownership will be passive, and have not said how long the U.S. intends to hold onto its stake, although there is a provision for the government to expand its stake further.
How Intel plans to use the U.S. investment
The chip maker says it’s planning to use the money to expand its chip-making capacity by modernizing and increasing the size of U.S. sites in Arizona and elsewhere.
Hassett has defended the U.S. stake, referring to the process of partial ownership of Intel as “very, very special circumstances” because of the funding made available by the CHIPS Act. When he was asked about the U.S. bar for acquiring equity stakes in companies, Hassett told CNBC, “If we are adding fundamental value to your business, I think it’s fair for Donald Trump to think about the American people.”
Strain said a government stake in U.S. companies poses a big risk for taxpayers, too.
“This is also going to accrue to the detriment of the American people, because you’re going to see a lot of good taxpayer money chasing bad investments because the government’s not going to extricate itself quickly or easily from these arrangements, and more generally, countries that have gone down this route have had slower productivity growth, slower increases in living standards, and companies that are less likely to be industry leaders,” Strain said.
Past U.S. stakes in big banks and automakers
One reason economists are uncomfortable with the government’s stake in Intel stems from the message it may send about the U.S. economy. The most prominent modern example of a similar U.S. investment took place during the 2008 financial crisis, when the U.S. sank $700 billion into a big bank bailout and over $17 billion into two of the big three U.S. automakers. It did so because the banks were considered “too big to fail” and the potential collapse of the auto companies could cost millions of jobs.
Experts now are raising questions about the wisdom of buying a stake in a company when the economy isn’t in crisis.
Lincicome said the administration is sending a contradictory message by highlighting the struggles China is having with its economy while at the same time saying “we want to be more like China” by having the federal government more involved in U.S. companies.
“There is no crisis, there’s certainly no war, so this is a big break from what we’ve done before,” Lincicome said.
Although economists and politicians differ on the success of the General Motors and Chrysler bailouts, Lincicome said there was undoubtedly a crisis. The federal government took ownership stakes in the two automakers to stabilize them but within a few years had sold the stakes, after the companies were on firmer financial footing.
Not socialism, but maybe a step in that direction?
While partial ownership of Intel or other companies isn’t exactly socialism, Di Martino said it “absolutely” blurs the lines between the private sector and the public sector.
“Socialism and free enterprise are not a switch, they are a continuum,” Di Martino said, adding that partial ownership of U.S. companies would be “definitely a step toward socialism, there is no doubt about that.”
Di Martino said the U.S. ownership stake in Intel “certainly gets us closer [to socialism] and makes us less prosperous.”
“I think the right way to describe it is a move toward state capitalism,” Strain said. “I don’t think I would describe it as socialism.”
Lutnick put it this way: “Intel agreed to give us 10% of their company, which, of course, was worth $11 billion.”
“So, it’s not socialism,” he said at a Trump Cabinet meeting Tuesday. “This is capitalism.”
Di Martino is dubious about whether that’s true. “We are intervening in the capital markets in a way that is going to lead to inefficiencies,” he said, adding, “And it’s going to shift capital away from other companies.”
Nvidia, the world’s most valuable company with a market capitalization of $4.39 trillion at the time of writing, beat revenue expectations for its fiscal second quarter, reporting sales of $46.74 billion on Wednesday after market close.
Nvidia posted that data center revenue was up 56% from a year prior, reaching $41.1 billion.
The company’s longtime CEO, Jensen Huang, told Fox Business Network’s The Claman Countdown on Thursday that AI, which Nvidia is advancing, would cause “some jobs” to disappear but result in new jobs becoming “invented.”
“One thing for sure, every job will be changed as a result of AI,” Huang said.
Huang also told Fox Business that he expects the economy to be doing “very well” in the future due to AI and automation, and stated that the quality of life for humanity would improve.
Huang’s remarks add to what he said last month on an episode of The All-In Podcast. On the podcast, Huang stated that the “one thing we know for certain” is that people who use AI will replace those who don’t. He predicted that AI use will lead to more millionaires in the next five years than the Internet produced in two decades.
Huang also called AI the “greatest technology equalizer of all time” because it allows anyone to program by simply using plain English prompts (a practice known as “vibe coding,” which even Google CEO Sundar Pichai has dabbled in).
“AI in my case is creating jobs,” Huang said on the podcast, adding that the technology enables people to “create things that other people would like to buy.”
AI allows creative people to act on their ideas by providing technical services. In turn, it enables technical people to use it for creative endeavors, Huang pointed out.
Nvidia’s stock was up over 30% year-to-date at the time of writing.