Nvidia’s data center revenue skyrocketed during its fiscal first quarter of 2025 as companies look to the chip-making giant for AI infrastructure. Data center revenue reached $22.6 billion, up 427% year over year, according to the company’s earnings report for the quarter ended April 28.
The artificial intelligence (AI) market has continued gaining traction in 2024 as companies spend huge amounts of money on building up infrastructure so that they don’t fall behind in the race to deploy and integrate AI applications.
According to one estimate, global spending on AI is expected to cross a whopping $200 billion this year, and chipmakers such as Nvidia (NASDAQ: NVDA) have allowed investors to get rich from this massive splurge. Looking ahead, the market for semiconductors powering AI applications is expected to generate $341 billion in annual revenue by 2033. The latest developments in the AI chip market signal that Nvidia remains the best bet for investors looking to capitalize on this tremendous opportunity.
AMD’s and Intel’s earnings reports make it clear that they are far behind Nvidia
Nvidia enjoyed an early start in the AI chip market. Its A100 processors were used for training ChatGPT, the chatbot that kicked off the AI revolution toward the end of 2022. The company’s AI GPUs (graphics processing units) gained immense popularity and its H100 processor became a runaway success.
Rivals such as Advanced Micro Devices and Intel were left to play catch-up, as they didn’t have a chip powerful enough to compete with Nvidia’s H100. Both companies were behind Nvidia by at least a year on the AI chip development curve. This is evident from the fact that AMD’s rival to the H100, the MI300X accelerator, wasn’t launched until December 2023, while Intel’s H100 competitor, the Gaudi 3, was announced only last month and will start shipping later this year.
Nvidia’s H100 went into full production in September 2022. This lead has allowed Nvidia to exercise a solid grip over the AI chip market and also explains why its competitors’ latest offerings aren’t gaining much traction. For instance, AMD sees its AI GPU sales hitting at least $4 billion in 2024. Intel is further behind and expects the Gaudi 3 launch to help it generate $500 million in AI chip sales in the second half of 2024.
Nvidia is leagues ahead of both Intel and AMD, considering that it sold $47.5 billion worth of data center chips in fiscal 2024, an increase of 217% over the previous year. This also indicates that AMD’s and Intel’s new chips, which were supposed to help them cut into Nvidia’s 90%-plus market share, aren’t making much of a dent in the latter’s dominant position.
One of the reasons why that’s the case is because Nvidia has cornered a huge chunk of the supply of AI chips from its foundry partner, Taiwan Semiconductor Manufacturing (popularly known as TSMC). More specifically, Nvidia reportedly commands half of TSMC’s advanced chip packaging capacity that’s deployed for manufacturing AI chips.
What’s more, Nvidia is all set to widen the technology gap with its rivals with the launch of new AI GPUs based on the Blackwell architecture later this year. Market research company TrendForce expects Nvidia to secure a dedicated chip supply from TSMC for its next-generation chips.
TSMC’s capacity to make these advanced chips is expected to increase 150% this year, to 40,000 wafers a month, and the foundry is expected to double that capacity again next year. The key thing to note here is that Nvidia is expected to consume more than half of TSMC’s advanced chip packaging capacity, and that tight grip on supply should help it keep the likes of Intel and AMD at bay.
Nvidia’s AI lead is set to translate into terrific growth
Investment bank UBS recently increased its Nvidia price target to $1,150 from $1,100 citing the impending arrival of its next-generation AI GPUs. UBS is expecting the company to deliver $175 billion in revenue in 2025 (which will coincide with its fiscal 2026), along with earnings of $41 per share. Those estimates point toward a massive jump compared to Nvidia’s fiscal 2024 revenue of $60.9 billion and $12.96 per share in earnings.
Assuming Nvidia does hit $41 per share in earnings this fiscal year and trades at 30 times earnings, in line with the Nasdaq-100 index’s earnings multiple (using the index as a proxy for tech stocks), its stock price could hit $1,230 within a couple of years. That would be a 36% jump from current levels. However, Nvidia currently trades at 74 times earnings, and it is likely to trade at a premium valuation in the future as well thanks to its AI chip dominance.
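The back-of-the-envelope valuation above can be checked directly: multiply the earnings estimate by the assumed multiple. In the short Python sketch below, the $41 EPS and 30x multiple come from the article, while the current share price used for the upside calculation is an assumption inferred from the article's "36% jump" figure, not a quoted number.

```python
def implied_price(eps: float, pe_multiple: float) -> float:
    """Implied share price under a given earnings multiple."""
    return eps * pe_multiple

# UBS's fiscal 2026 EPS estimate times a Nasdaq-100-style 30x multiple:
target = implied_price(41.0, 30.0)
print(target)  # 1230.0

# Assumed current price (hypothetical, back-solved from the article's 36% figure):
current_price = 905.0
upside = target / current_price - 1  # roughly 36% upside
```

The same function also shows why the premium matters: at Nvidia's current 74x multiple, the identical $41 EPS would imply a far higher price, which is the article's point about the stock potentially beating the conservative 30x scenario.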
So, it won’t be surprising to see this AI stock delivering much stronger gains than what analysts are expecting, which is why it would be a good idea to buy Nvidia following the latest earnings reports from its peers.
Should you invest $1,000 in Nvidia right now?
Before you buy stock in Nvidia, consider this:
The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now… and Nvidia wasn’t one of them. The 10 stocks that made the cut could produce monster returns in the coming years.
Consider when Nvidia made this list on April 15, 2005… if you invested $1,000 at the time of our recommendation, you’d have $550,688!*
Stock Advisor provides investors with an easy-to-follow blueprint for success, including guidance on building a portfolio, regular updates from analysts, and two new stock picks each month. The Stock Advisor service has more than quadrupled the return of the S&P 500 since 2002*.
Harsh Chauhan has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Advanced Micro Devices, Nvidia, and Taiwan Semiconductor Manufacturing. The Motley Fool recommends Intel and recommends the following options: long January 2025 $45 calls on Intel and short May 2024 $47 calls on Intel. The Motley Fool has a disclosure policy.
Shares of Nvidia (NASDAQ: NVDA) briefly tumbled today, falling as much as 10.7% early in the trading session, but recovered those losses throughout the morning to finish the session down just 1.7%.
The catalyst for the movement was comments from famed billionaire investor Stanley Druckenmiller.
Image source: Getty Images.
What did Druckenmiller say about Nvidia?
In an interview on CNBC this morning, Druckenmiller, who had an impeccable record running the Duquesne Capital Management fund for nearly 30 years, said he’d cut his Nvidia stake in late March.
Druckenmiller argued that it was time to take profits in the stock, saying, “A lot of what we recognized has become recognized by the marketplace now.”
The former hedge fund titan was early to recognize Nvidia’s potential in the generative AI boom as his Duquesne Family Office moved aggressively into Nvidia stock in the fourth quarter of 2022 when ChatGPT launched.
Druckenmiller began selling his Nvidia stake in Q4, though that looks premature in retrospect as the AI stock surged again in this year’s Q1.
We’ll learn how much of his Nvidia stake he cut when 13F reports come in over the next week.
Why Nvidia stock bounced back
There was no particular catalyst for the recovery in Nvidia stock. Shares seemed to gain as some investors believed the sharp decline to be a buying opportunity.
Druckenmiller wasn’t particularly bearish on Nvidia and expressed long-term optimism for AI, but he seemed content to take profits as the stock has surged since he first began buying.
While we’re unlikely to see more multibagging returns from Nvidia as the company is now worth more than $2 trillion, the stock could continue to outperform. We’ll learn more when Nvidia reports Q1 earnings later this month.
Should you invest $1,000 in Nvidia right now?
Before you buy stock in Nvidia, consider this:
The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now… and Nvidia wasn’t one of them. The 10 stocks that made the cut could produce monster returns in the coming years.
Consider when Nvidia made this list on April 15, 2005… if you invested $1,000 at the time of our recommendation, you’d have $564,547!*
Stock Advisor provides investors with an easy-to-follow blueprint for success, including guidance on building a portfolio, regular updates from analysts, and two new stock picks each month. The Stock Advisor service has more than quadrupled the return of the S&P 500 since 2002*.
Jeremy Bowman has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Nvidia. The Motley Fool has a disclosure policy.
Intel (NASDAQ: INTC) was once the world’s leading chipmaker. The company dominated the central processing unit (CPU) market and held lucrative partnerships with tech titans like Apple. However, a series of missteps and market changes have seen Intel’s stock tumble 46% in the last three years.
In recent years, the company has posted multiple quarters of revenue declines alongside dwindling market share in the chip sector. Intel has finally returned to revenue growth, posting gains in its fourth quarter of 2023 and the first quarter of 2024. Yet, investors remain wary of its stock. Rising competitors like Nvidia and Advanced Micro Devices threaten Intel’s future in budding markets like AI.
However, the company seeks to separate itself from these chipmakers by adopting an internal foundry model similar to how Taiwan Semiconductor Manufacturing Company operates. It’ll take time for Intel to see significant earnings growth from this transition, but it could be worth investing in the company at one of its lowest positions to profit from its potential comeback.
Here’s why Intel’s stock is a buy for investors willing to hold for the very long term.
A dismal quarterly release
On April 25, Intel released its Q1 2024 earnings. During the quarter, revenue rose 9% year over year to $13 billion yet missed analysts’ forecasts by about $80 million. Non-GAAP (adjusted) earnings per share (EPS) came to $0.18, beating expectations by $0.04.
However, growth and better-than-expected EPS weren’t enough to rally investors, with Intel’s stock dipping 14% since posting its Q1 earnings.
Intel disappointed with weak guidance for its current quarter (Q2 2024). For the second quarter, the company expects to achieve earnings of $0.10 per share on sales of $13 billion. Comparatively, Wall Street expected EPS of $0.25 and $13.6 billion in revenue.
Intel’s CEO Pat Gelsinger reiterated the company’s long-term potential, saying, “We are one of two, maybe three, companies in the world that can continue to enable next-generation chip technologies.” Taiwan Semi is the other obvious name on his list, followed by a “perhaps” for Samsung.
Q1 2024 was the first to include income from Intel Foundry, its new manufacturing division. The segment posted revenue of just over $4 billion, down 10% year over year. Meanwhile, operating losses came to $2.5 billion, compared to the $7 billion in losses reported in 2023.
However, Counterpoint analyst Akshara Bassi points out, “Running a foundry is a capital-intensive business. That’s why most of the competitors are fabless; they are more than happy to outsource it to TSMC.” Intel expects foundry losses to peak in 2024 and break even by the end of 2030.
Intel is moving in the right direction, but its stock will require patience
Intel’s transition to a foundry-first company will be costly. However, the company expects the change to reduce costs, saving between $8 billion and $10 billion by 2025. Meanwhile, increased efficiencies are projected to help Intel achieve non-GAAP gross margins of 60%.
Furthermore, Intel won’t be alone in covering the costs of its foundry business. The company is a leading recipient of President Biden’s CHIPS Act, an initiative to expand the United States’ semiconductor manufacturing capabilities. The program has earmarked $8.5 billion for Intel to build at least four facilities across the country.
Government funds have already begun to be disbursed, with the Biden administration announcing in April that Samsung would receive just over $6 billion to build out its chip manufacturing capacity. It’s still early days for the initiative, with experts projecting it will be 2027 or 2028 before U.S.-built chips make their way into mainstream consumer use. However, it’s a promising direction for Intel.
INTC PE Ratio (Forward) Chart
Perhaps the only positive aspect of Intel’s tumbling stock price is that its shares are now potentially the best-valued option in the chip market. Nvidia’s and AMD’s recent rallies have seen their stock prices skyrocket over the last year.
The chart above shows Intel has the lowest forward price-to-earnings ratio and price-to-sales ratio out of these three chipmakers by a significant margin. These figures suggest Intel’s stock is trading at a bargain compared to AMD or Nvidia.
As a result, anyone looking to add a chip stock to their portfolio would probably do well to make a long-term investment in Intel now while they wait for Nvidia and AMD to come down to a more attractive price point.
Should you invest $1,000 in Intel right now?
Before you buy stock in Intel, consider this:
The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now… and Intel wasn’t one of them. The 10 stocks that made the cut could produce monster returns in the coming years.
Consider when Nvidia made this list on April 15, 2005… if you invested $1,000 at the time of our recommendation, you’d have $544,015!*
Stock Advisor provides investors with an easy-to-follow blueprint for success, including guidance on building a portfolio, regular updates from analysts, and two new stock picks each month. The Stock Advisor service has more than quadrupled the return of the S&P 500 since 2002*.
Dani Cook has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Advanced Micro Devices, Apple, Nvidia, and Taiwan Semiconductor Manufacturing. The Motley Fool recommends Intel and recommends the following options: long January 2025 $45 calls on Intel and short May 2024 $47 calls on Intel. The Motley Fool has a disclosure policy.
Only four companies in the world are worth more than $2 trillion: Microsoft, Apple, Alphabet — parent company of Google — and computer chip maker Nvidia. The California-based company saw its stock market value soar from $1 trillion to $2 trillion in just eight months this past year, fueled by the insatiable demand for its cutting-edge technology — the hardware and software that make today’s artificial intelligence possible. We wondered how a company founded in 1993 to improve video game graphics turned into a titan of 21st-century AI. So, we went to Silicon Valley to meet Nvidia’s 61-year-old co-founder and CEO Jensen Huang, who has no doubt AI is about to change everything.
At Nvidia’s annual developers conference this past March, the mood wasn’t just upbeat … it was downright giddy. More than 11,000 enthusiasts — software developers, tech moguls, and happy shareholders — filed into San Jose’s pro hockey arena to kick off a four-day AI extravaganza. They came to see this man: Jensen Huang, CEO of Nvidia.
Jensen Huang: Welcome to GTC!
Bill Whitaker: What was that like for you to walk out on that stage and see that?
Jensen Huang: You know, Bill, I’m an engineer. I’m not a performer. When I walked out there– and all of the people going crazy, it took the breath outta me. And so I was the scariest I’ve ever been. I’m still scared. (laughter)
You’d never know it. Clad in his signature cool, black outfit, Jensen shared the stage with Nvidia-powered robots
Jensen Huang: Let me finish up real quick
And shared his vision of an AI future …
Jensen Huang: A new Industrial Revolution
It reminded us of the transformational moment when Apple’s Steve Jobs unveiled the iPhone. Jensen Huang unveiled Nvidia’s latest graphics processing unit, or GPU.
Nvidia CEO Jensen Huang
60 Minutes
Jensen Huang: This is Blackwell.
Designed in America, but made in Taiwan like most advanced semiconductors, Blackwell, he says, is the fastest chip ever.
Jensen Huang: Google is gearing up for Blackwell. The whole industry is gearing up for Blackwell.
Nvidia ushered in the AI revolution with its game-changing GPU – a single chip able to process a myriad of calculations all at once – not sequentially like more standard chips. The GPU is the engine of Nvidia’s AI computer, enabling it to rapidly absorb a fire hose of information.
Jensen Huang: It does quadrillions of calculations a second, it’s just insane numbers.
Bill Whitaker: Is it doing things now that surprise you?
Jensen Huang: We’re hoping that it does things that surprise us. That’s the whole point. In some areas like drug discovery designing better materials that are lighter, stronger. We need artificial intelligence to help us explore the universe in places that we could’ve never done ourselves.
Jensen Huang: Let me show you. Here, Bill, look at this.
Jensen took us around the GTC Convention Hall to show us what AI has made possible in just the past few years.
Robot: I’m making your drink, now.
Some creations were dazzling
Jensen Huang: This is a digital twin of the earth. Once it learns how to calculate weather, it can calculate and predict weather 3,000 times faster than a supercomputer, and 1,000 times less energy.
But Nvidia’s AI revolution extends far beyond this hall.
Pinar Seyhan Demirdag: Blue metallic spaceship, and let’s generate something.
Pinar Seyhan Demirdag and Bill Whitaker
60 Minutes
Pinar Seyhan Demirdag is originally from Istanbul, but co-founded Cuebric near Boston. Her AI application uses Nvidia’s GPUs to instantly turn a simple text prompt into a virtual movie set for a fraction of the cost of today’s backdrops.
Bill Whitaker: This isn’t something that’s already planned and in there?
Pinar Seyhan Demirdag: No, we’re doing it in real time. It’s live.
Bill Whitaker: Is Hollywood knocking at your door?
Pinar Seyhan Demirdag: We’re- we’re getting a lot of love.
Nearby at Generate:Biomedicines, Dr. Alex Snyder, head of research and development, is using Nvidia’s technology to create protein-based drugs. She was surprised at first to see they showed promise in the lab.
Alex Snyder: Initially when I was told about the application of AI to drug development, I sort of rolled my eyes and said yeah, you know, show me the data. And then I looked at the data, and it was very compelling.
Dr. Snyder’s team asks its AI models to create new proteins to fight specific diseases like cancer and asthma. A new way to defeat the coronavirus is now in clinical trials.
Bill Whitaker: You’re now working with proteins that do not exist in nature? That you’re coming up with by way of AI?
Alex Snyder: Yes, we are actually generating what we call de novo. Completely new structures that have not existed before.
Bill Whitaker: Do you trust it?
Alex Snyder: As scientists we can’t trust. We have to test. We’re not putting Frankensteins into people. We’re taking what’s known and we’re really pushing the field, we’re pushing the biology to make drugs that look like regular drugs, but function even better.
Brett Adcock: This is a technology that will only get better from here.
Brett Adcock is CEO of Figure, a Silicon Valley startup with funding from Nvidia. Look at his answer to labor shortages: an Nvidia GPU-driven prototype called Figure 01.
Brett Adcock: I think what’s been really extraordinary is the pace of progress we’ve made in 21 months.
Bill Whitaker: From zero to this in 21 months–
Figure has developed an Nvidia GPU-driven humanoid robot.
60 Minutes
Brett Adcock: Ze– zero to this, yeah. We– we were walking this robot in under a year since I incorporated the company.
Bill Whitaker: Could you do this without– NVIDIA’s technology?
Brett Adcock: We think they’re arguably the best in the world at this. I don’t know if this would be possible without them.
Figure 01: I’m here to assist with tasks as requested.
We were amazed that Figure 01 is not just walking, but seemed to reason.
Bill Whitaker: Hand me something healthy.
Figure 01: On it.
Figure 01 was able to understand I wanted the orange, not the packaged snack.
Bill Whitaker: Thank you.
It’s not yet perfected…
Bill Whitaker: You’re gonna get it.
But the early results are so promising, German automaker BMW plans to start testing the robot in its South Carolina factory this year.
Brett Adcock: I think there’s an opportunity to ship billions of robots in the coming decades onto the planet.
Bill Whitaker: Billions? I would think that a lot of workers would look at that as, “This robot is taking my job.”
Brett Adcock: I think over time, AI and robotics will start doing more and more of what humans can and better.
Bill Whitaker: But what about the worker?
Jensen Huang: The workers work for companies. And so companies, when they become more productive, earnings increase. I’ve never seen one company that had earnings increase and not hire more people.
Bill Whitaker: There are some jobs that are going to become obsolete.
Jensen Huang: Well, lemme offer it this way. I believe that you still want human in the loop, because we have good judgment, because there are circumstances that the machines are not– just not going to understand.
The futuristic Nvidia campus sits just down the road from its modest birthplace … this Denny’s in San Jose …
Bill Whitaker: Good morning.
…where 31 years ago, Nvidia was just an idea.
Jensen Huang: My goodness.
Bill Whitaker and Jensen Huang in a Denny’s
60 Minutes
When he was 15, Jensen Huang worked as a dishwasher at Denny’s. As a 30-year-old electrical engineer, married with two children, he and two friends, Nvidia co-founders Chris Malachowsky and Curtis Priem, envisioned a whole new way of processing video game graphics.
Jensen Huang: And so we came here, right here to this Denny’s, sat right back there, and the three of us decided to start the company. Frankly I- I had no idea how to do it. And nor did they. None of us knew how to do anything.
Their big idea: accelerate the processing power of computers with a new graphics chip. Their initial attempt flopped and nearly bankrupted the company in 1996.
Jensen Huang: And the genius of the engineers, and Chris, and Curtis, um, we pivoted to the right way of doing things.
And created their groundbreaking GPU. The chip took video games from this, to this today.
Jensen Huang: Completely changed computer graphics– saved the company– launched us into– into the stratosphere.
Just eight years after Denny’s, Nvidia earned a spot in the S&P 500. Jensen then set his sights on developing the software and hardware for a revolutionary, GPU-driven supercomputer, which would take the company far beyond video games. To Wall Street it was a risky bet. To early developers of AI, it was a revelation.
Bill Whitaker: Was that luck or was that vision?
Jensen Huang: That was– luck founded by vision. We invented this capability and then, one day, the researchers that were– creating deep learning, discovered this architecture, because this architecture turns out to have been perfect for them.
Bill Whitaker: Perfect for AI?
Jensen Huang: Perfect for AI.
Jensen Huang: This is the first one we’ve ever shipped.
In 2016, Jensen delivered Nvidia’s AI supercomputer, the first of its kind, to Elon Musk, then a board member of OpenAI, which used it to create the building blocks of ChatGPT.
Jensen Huang: How are you?
When AI took off…
Jensen Huang: Hey guys.
So did Jensen Huang’s reputation.
Man: Can we get a picture?
Jensen Huang: Yeah, yeah.
He’s now a Silicon Valley celebrity. He told us the boy who immigrated from Taiwan at age 9 could never have conceived of this.
Nvidia CEO Jensen Huang
60 Minutes
Jensen Huang: It is the most extraordinary thing, Bill, that a normal dishwasher-busboy could grow up to be this. There’s no magic– it’s just 61 years of hard work every single day. I don’t think there’s anything more than that.
We met a humble Jensen at Denny’s. Back at Nvidia’s headquarters in Santa Clara, we saw he can be intense.
Bill Whitaker: Let me tell you what some of the people who you work with said about you: Demanding. Perfectionist. Not easy to work for. All that sound right?
Jensen Huang: Perfectly, yeah. It should be like that. If you want to do extraordinary things, it shouldn’t be easy.
Jensen Huang: All right, guys. Keep up the good work.
Nvidia has never done better. Investors are bullish – but last year more than 600 top AI scientists, ethicists, and others signed this statement urging caution, warning of AI’s risk to humanity.
Bill Whitaker: When I talk to you and I hear you speak, part of me goes, “Gee whiz.” And the other part of me goes, “Oh my God. What are we in for?”
Jensen Huang: Yeah, yeah.
Bill Whitaker: Which one is it?
Jensen Huang: It’s both. It’s both. Yeah. You’re feeling all the right feelings. I feel both.
Bill Whitaker: You feel both?
Jensen Huang: Sure. Sure.
Pinar Seyhan Demirdag: Humanity will have the choice to see themselves inferior to machines or superior to machines.
Pinar Seyhan Demirdag is an AI optimist, though she named her company Cuebric, an homage to Stanley Kubrick…the director of “2001: A Space Odyssey.”
In that film, Hal, the AI computer goes rogue.
Bill Whitaker: I think that’s what worries people about AI, that we will lose control of it.
Pinar Seyhan Demirdag: Just because a machine can do faster calculations, comparisons, and analytical solution creation, that doesn’t make it smarter than you. It simply computates faster. In my world, in my belief, smarts have to do with your capacity to love, create, expand, transcend. These are qualities that no machine can ever bear, that are reserved to only humans.
Jensen Huang: There is something going on.
Jensen Huang sees an AI future of progress and prosperity … not one with machines as our masters. We can only hope he’s right.
Jensen Huang: Thank you all for coming! Thank you.
Produced by Marc Lieberman and Cassidy McDonald. Broadcast associate, Mariah B. Campbell. Edited by Peter M. Berman.
Bill Whitaker is an award-winning journalist and 60 Minutes correspondent who has covered major news stories, domestically and across the globe, for more than four decades with CBS News.
In today’s video, I discuss recent updates impacting Nvidia (NASDAQ: NVDA), Meta Platforms (NASDAQ: META), and other semiconductor companies. Check out the short video to learn more, consider subscribing, and click the special offer link below.
*Stock prices used were the after-market prices of April 24, 2024. The video was published on April 24, 2024.
Should you invest $1,000 in Nvidia right now?
Before you buy stock in Nvidia, consider this:
The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now… and Nvidia wasn’t one of them. The 10 stocks that made the cut could produce monster returns in the coming years.
Consider when Nvidia made this list on April 15, 2005… if you invested $1,000 at the time of our recommendation, you’d have $488,186!*
Stock Advisor provides investors with an easy-to-follow blueprint for success, including guidance on building a portfolio, regular updates from analysts, and two new stock picks each month. The Stock Advisor service has more than quadrupled the return of the S&P 500 since 2002*.
Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool’s board of directors. Jose Najarro has positions in Meta Platforms, Microsoft, and Nvidia. The Motley Fool has positions in and recommends Meta Platforms, Microsoft, and Nvidia. The Motley Fool recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy. Jose Najarro is an affiliate of The Motley Fool and may be compensated for promoting its services. If you choose to subscribe through their link they will earn some extra money that supports their channel. Their opinions remain their own and are unaffected by The Motley Fool.
A French startup has raised a hefty seed investment to “rearchitect compute infrastructure” for developers wanting to build and train AI applications more efficiently.
FlexAI, as the company is called, has been operating in stealth since October 2023, but the Paris-based company is formally launching Wednesday with €28.5 million ($30 million) in funding, while teasing its first product: an on-demand cloud service for AI training.
This is a chunky bit of change for a seed round, which normally signals substantial founder pedigree, and that is the case here. FlexAI co-founder and CEO Brijesh Tripathi was previously a senior design engineer at GPU giant and now AI darling Nvidia before landing in senior engineering and architecture roles at Apple, Tesla (working directly under Elon Musk), and Zoox (before Amazon acquired the autonomous driving startup). Most recently, he was VP of AXG, Intel’s AI and supercompute platform offshoot.
FlexAI co-founder and CTO Dali Kilani has an impressive CV, too, serving in various technical roles at companies including Nvidia and Zynga, while most recently filling the CTO role at French startup Lifen, which develops digital infrastructure for the healthcare industry.
The seed round was led by Alpha Intelligence Capital (AIC), Elaia Partners and Heartcore Capital, with participation from Frst Capital, Motier Ventures, Partech and InstaDeep CEO Karim Beguir.
FlexAI team in Paris
The compute conundrum
To grasp what Tripathi and Kilani are attempting with FlexAI, it’s first worth understanding what developers and AI practitioners are up against in terms of accessing “compute”; this refers to the processing power, infrastructure and resources needed to carry out computational tasks such as processing data, running algorithms, and executing machine learning models.
“Using any infrastructure in the AI space is complex; it’s not for the faint-of-heart, and it’s not for the inexperienced,” Tripathi told TechCrunch. “It requires you to know too much about how to build infrastructure before you can use it.”
By contrast, the public cloud ecosystem that has evolved these past couple of decades serves as a fine example of how an industry has emerged from developers’ need to build applications without worrying too much about the back end.
“If you are a small developer and want to write an application, you don’t need to know where it’s being run, or what the back end is — you just need to spin up an EC2 (Amazon Elastic Compute cloud) instance and you’re done,” Tripathi said. “You can’t do that with AI compute today.”
In the AI sphere, developers must figure out how many GPUs (graphics processing units) they need to interconnect over what type of network, managed through a software ecosystem that they are entirely responsible for setting up. If a GPU or network fails, or if anything in that chain goes awry, the onus is on the developer to sort it.
“We want to bring AI compute infrastructure to the same level of simplicity that the general purpose cloud has gotten to — after 20 years, yes, but there is no reason why AI compute can’t see the same benefits,” Tripathi said. “We want to get to a point where running AI workloads doesn’t require you to become data centre experts.”
With the current iteration of its product going through its paces with a handful of beta customers, FlexAI will launch its first commercial product later this year. It’s basically a cloud service that connects developers to “virtual heterogeneous compute,” meaning that they can run their workloads and deploy AI models across multiple architectures, paying on a usage basis rather than renting GPUs on a dollars-per-hour basis.
GPUs are vital cogs in AI development, serving to train and run large language models (LLMs), for example. Nvidia is one of the preeminent players in the GPU space, and one of the main beneficiaries of the AI revolution sparked by OpenAI and ChatGPT. In the 12 months since OpenAI launched an API for ChatGPT in March 2023, allowing developers to bake ChatGPT functionality into their own apps, Nvidia’s market capitalization ballooned from around $500 billion to more than $2 trillion.
LLMs are pouring out of the technology industry, with demand for GPUs skyrocketing in tandem. But GPUs are expensive to run, and renting them from a cloud provider for smaller jobs or ad-hoc use-cases doesn’t always make sense and can be prohibitively expensive; this is why AWS has been dabbling with time-limited rentals for smaller AI projects. But renting is still renting, which is why FlexAI wants to abstract away the underlying complexities and let customers access AI compute on an as-needed basis.
“Multicloud for AI”
FlexAI’s starting point is that most developers don’t much care whose GPUs or chips they use, whether it’s Nvidia, AMD, Intel, Graphcore or Cerebras. Their main concern is being able to develop their AI and build applications within their budgetary constraints.
This is where FlexAI’s concept of “universal AI compute” comes in: FlexAI takes the user’s requirements and allocates the workload to whatever architecture makes sense for that particular job, taking care of all the necessary conversions across the different platforms, whether that’s Intel’s Gaudi infrastructure, AMD’s ROCm or Nvidia’s CUDA.
“What this means is that the developer is only focused on building, training and using models,” Tripathi said. “We take care of everything underneath. The failures, recovery, reliability, are all managed by us, and you pay for what you use.”
In many ways, FlexAI is setting out to fast-track for AI what has already been happening in the cloud, meaning more than replicating the pay-per-usage model: It means the ability to go “multicloud” by leaning on the different benefits of different GPU and chip infrastructures.
For example, FlexAI will channel a customer’s specific workload depending on what their priorities are. If a company has a limited budget for training and fine-tuning its AI models, it can set that constraint within the FlexAI platform to get the maximum compute bang for its buck. This might mean going through Intel for cheaper (but slower) compute, but if a developer has a small run that requires the fastest possible output, it can be channeled through Nvidia instead.
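As a rough illustration of the routing idea described above, here is a minimal sketch of priority-based backend selection. The backend names, prices and throughput figures are entirely hypothetical, not FlexAI’s actual catalog or pricing:

```python
# Hypothetical sketch of priority-based workload routing, in the spirit of
# FlexAI's "universal AI compute" idea. All backends and numbers are invented.

BACKENDS = {
    # name: (cost per GPU-hour in dollars, relative throughput)
    "intel-gaudi": (2.0, 0.7),
    "amd-mi300x": (3.5, 0.9),
    "nvidia-h100": (5.0, 1.0),
}

def route(priority: str) -> str:
    """Pick a backend: minimize $/hour for 'cost', maximize speed for 'speed'."""
    if priority == "cost":
        return min(BACKENDS, key=lambda b: BACKENDS[b][0])
    if priority == "speed":
        return max(BACKENDS, key=lambda b: BACKENDS[b][1])
    raise ValueError(f"unknown priority: {priority}")

print(route("cost"))   # cheapest backend wins for budget-bound jobs
print(route("speed"))  # fastest backend wins for latency-critical runs
```

A real scheduler would also weigh availability, model-conversion costs and interconnect topology, but the budget-versus-speed trade-off above is the core of what a customer would be setting.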
Under the hood, FlexAI is essentially an “aggregator of demand”: it rents the hardware itself through traditional means and, using its “strong connections” with the folks at Intel and AMD, secures preferential prices that it spreads across its own customer base. This doesn’t necessarily mean side-stepping the kingpin Nvidia, but it does mean that, with Intel and AMD fighting for the GPU scraps left in Nvidia’s wake, there is a huge incentive for them to play ball with aggregators such as FlexAI.
“If I can make it work for customers and bring tens to hundreds of customers onto their infrastructure, they [Intel and AMD] will be very happy,” Tripathi said.
“I want to get AI compute to the point where the current general purpose cloud computing is,” Tripathi noted. “You can’t do multicloud on AI. You have to select specific hardware, number of GPUs, infrastructure, connectivity, and then maintain it yourself. Today, that’s the only way to actually get AI compute.”
When asked who the exact launch partners are, Tripathi said that he was unable to name all of them due to a lack of “formal commitments” from some of them.
“Intel is a strong partner, they are definitely providing infrastructure, and AMD is a partner that’s providing infrastructure,” he said. “But there is a second layer of partnerships that are happening with Nvidia and a couple of other silicon companies that we are not yet ready to share, but they are all in the mix and MOUs [memorandums of understanding] are being signed right now.”
The Elon effect
Tripathi is more than equipped to deal with the challenges ahead, having worked in some of the world’s largest tech companies.
“I know enough about GPUs; I used to build GPUs,” Tripathi said of his seven-year stint at Nvidia, ending in 2007 when he jumped ship for Apple as it was launching the first iPhone. “At Apple, I became focused on solving real customer problems. I was there when Apple started building their first SoCs [system on chips] for phones.”
Tripathi also spent two years at Tesla from 2016 to 2018 as hardware engineering lead, where he ended up working directly under Elon Musk for his last six months after two people above him abruptly left the company.
“At Tesla, the thing that I learned and I’m taking into my startup is that there are no constraints other than science and physics,” he said. “How things are done today is not how it should be or needs to be done. You should go after what the right thing to do is from first principles, and to do that, remove every black box.”
“One of the first things I did at Tesla was to figure out how many microcontrollers there are in a car, and to do that, we literally had to sort through a bunch of those big black boxes with metal shielding and casing around it, to find these really tiny small microcontrollers in there,” Tripathi said. “And we ended up putting that on a table, laid it out and said, ‘Elon, there are 50 microcontrollers in a car. And we pay sometimes 1,000 times margins on them because they are shielded and protected in a big metal casing.’ And he’s like, ‘let’s go make our own.’ And we did that.”
GPUs as collateral
Looking further into the future, FlexAI has aspirations to build out its own infrastructure, too, including data centers. This, Tripathi said, will be funded by debt financing, building on a recent trend that has seen rivals in the space including CoreWeave and Lambda Labs use Nvidia chips as collateral to secure loans — rather than giving more equity away.
“Bankers now know how to use GPUs as collaterals,” Tripathi said. “Why give away equity? Until we become a real compute provider, our company’s value is not enough to get us the hundreds of millions of dollars needed to invest in building data centres. If we did only equity, we disappear when the money is gone. But if we actually bank it on GPUs as collateral, they can take the GPUs away and put it in some other data center.”
Markets are still just in the first phase of an AI-led upsurge, Goldman Sachs wrote in a recent research note.
Nvidia is the central piece in the opening inning, but analysts still see more upside in further phases of the AI story.
The firm says eventually, AI will broaden to benefit other sectors, such as computer services.
Artificial intelligence has already propelled markets into overdrive, and yet this equity power fuel is nowhere close to running low, Goldman Sachs said.
Instead, stocks are just in the first phase of the AI-led upsurge, which will broaden out to uplift more and more sectors, the bank said in a Tuesday post.
“If Nvidia represents the first phase of the AI trade, Phase 2 will be about other companies that are helping to build AI-related infrastructure,” it said. “Phase 3 deals with companies incorporating AI into their products to boost revenue, while Phase 4 is about the AI-related productivity gains that should be possible across many businesses.”
Here’s a deeper rundown of Goldman’s AI timeline:
First phase
Ever since ChatGPT sparked the AI race in late 2022, chipmaker Nvidia’s stock has catapulted higher. Given that its semiconductors are the basis for this emerging software, the company has made itself a cornerstone of the technological transition, climbing as much as 590% in this period.
“Remarkably, though, those gains have been entirely driven by earnings growth: The company’s price-to-earnings ratio is barely higher than it was at the start of last year,” Goldman said.
Supporting the view that the first phase is not over, analysts see even more gains ahead. Recently, Evercore ISI put out a bull target of $1,540, representing 81% upside from Friday’s stock price.
“We think investors underestimate the importance of the chip+hardware+software ecosystem that Nvidia has created,” analysts said.
Second phase
Eventually, Goldman expects other firms to benefit from the AI buildout, though this isn’t limited to just semiconductor producers and designers. Cloud providers, computer equipment makers and security software developers will all have a part to play.
That also extends to real world infrastructure, as AI will need expansive data centers to run it — a boost to everything from real estate to the utility sector.
Third phase
As generative AI advances, firms that can integrate the technology into their offerings will win out, Goldman said.
Already, top tech firms are racing to implement services that lean on AI, and investors have rewarded those that best do so. For instance, Wedbush Securities’ Dan Ives has long celebrated Microsoft’s Copilot tool as one example, calling it an “iPhone moment” for the company.
Its stock has gained as much as 14.4% this year.
Fourth phase
With infrastructure and services established to support AI over the long haul, the technology will have free rein to maximize company productivity across the economy, Goldman said.
“Software and services companies and commercial and professional services firms appear to have the biggest potential for earnings gains from AI, because they have a combination of relatively high labor costs overall and a high share of their labor bill that may be exposed to AI automation,” it wrote.
Hardware is emerging as a key AI growth area. For Big Tech companies with the money and talent to do so, developing in-house chips helps reduce dependence on outside designers such as Nvidia and Intel while also allowing firms to tailor their hardware specifically to their own AI models, boosting performance and saving on energy costs.
“From Meta’s point of view … it gives them a bargaining tool with Nvidia,” Edward Wilford, an analyst at tech consultancy Omdia, told Fortune. “It lets Nvidia know that they’re not exclusive, [and] that they have other options. It’s hardware optimized for the AI that they are developing.”
Why does AI need new chips?
AI models require massive amounts of computing power because of the huge amount of data required to train the large language models behind them. Conventional computer chips simply aren’t capable of processing the trillions of data points AI models are built upon, which has spawned a market for AI-specific computer chips, often called “cutting-edge” chips because they’re the most powerful devices on the market.
Semiconductor giant Nvidia has dominated this nascent market: The wait list for Nvidia’s $30,000 flagship AI chip is months long, and demand has pushed the firm’s share price up almost 90% in the past six months.
And rival chipmaker Intel is fighting to stay competitive. It just released its Gaudi 3 AI chip to compete directly with Nvidia. AI developers—from Google and Microsoft down to small startups—are all competing for scarce AI chips, limited by manufacturing capacity.
Why are tech companies starting to make their own chips?
Both Nvidia and Intel can produce only a limited number of chips because they and the rest of the industry rely on Taiwanese manufacturer TSMC to actually assemble their chip designs. With only one manufacturer solidly in the game, the manufacturing lead time for these cutting-edge chips is multiple months. That’s a key factor that led major players in the AI space, such as Google and Meta, to resort to designing their own chips. Alvin Nguyen, a senior analyst at consulting firm Forrester, told Fortune that chips designed by the likes of Google, Meta, and Amazon won’t be as powerful as Nvidia’s top-of-the-line offerings—but that could benefit the companies in terms of speed. They’ll be able to produce them on less specialized assembly lines with shorter wait times, he said.
“If you have something that’s 10% less powerful but you can get it now, I’m buying that every day,” Nguyen said.
Even if the native AI chips Meta and Google are developing are less powerful than Nvidia’s cutting-edge AI chips, they could be better tailored to each company’s specific AI platforms. Nguyen said that in-house chips designed for a company’s own AI platform could be more efficient and save on costs by eliminating unnecessary functions.
“It’s like buying a car. Okay, you need an automatic transmission. But do you need the leather seats, or the heated massage seats?” Nguyen said.
“The benefit for us is that we can build a chip that can handle our specific workloads more efficiently,” Melanie Roe, a Meta spokesperson, wrote in an email to Fortune.
Nvidia’s top-of-the-line chips sell for about $25,000 apiece. They’re extremely powerful tools, and they’re designed to be good at a wide range of applications, from training AI chatbots to generating images to developing recommendation algorithms such as the ones on TikTok and Instagram. That means a slightly less powerful, but more tailored chip could be a better fit for a company such as Meta, for example—which has invested in AI primarily for its recommendation algorithms, not consumer-facing chatbots.
“The Nvidia GPUs are excellent in AI data centers, but they are general purpose,” Brian Colello, equity research lead at investment research firm Morningstar, told Fortune. “There are likely certain workloads and certain models where a custom chip might be even better.”
The trillion-dollar question
Nguyen said that more specialized in-house chips could have added benefits by virtue of their ability to integrate into existing data centers. Nvidia chips consume a lot of power, and they give off a lot of heat and noise—so much so that tech companies may be forced to redesign or move their data centers to integrate soundproofing and liquid cooling. Less powerful native chips, which consume less energy and release less heat, could solve that problem.
AI chips developed by Meta and Google are long-term bets. Nguyen estimated that these chips took roughly a year and a half to develop, and it’ll likely be months before they’re implemented at a large scale. For the foreseeable future, the entire AI world will continue to depend heavily on Nvidia (and, to a lesser extent, Intel) for its computing hardware needs. Indeed, Mark Zuckerberg recently announced that Meta was on track to own 350,000 Nvidia chips by the end of this year (the company is set to spend around $18 billion on chips by then). But movement away from outsourcing computing power and toward native chip design could loosen Nvidia’s chokehold on the market.
“The trillion-dollar question for Nvidia’s valuation is the threat of these in-house chips,” Colello said. “If these in-house chips significantly reduce the reliance on Nvidia, there’s probably downside to Nvidia’s stock from here. This development is not surprising, but the execution of it over the next few years is the key valuation question in our mind.”
GlobalFoundries, a company that makes chips for others, including AMD and General Motors, previously announced a partnership with Lightmatter. Harris says his company is “working with the largest semiconductor companies in the world as well as the hyperscalers,” referring to the largest cloud companies like Microsoft, Amazon, and Google.
If Lightmatter or another company can reinvent the wiring of giant AI projects, a key bottleneck in the development of smarter algorithms might fall away. The use of more computation was fundamental to the advances that led to ChatGPT, and many AI researchers see the further scaling-up of hardware as crucial to future advances in the field—and to hopes of ever reaching the vaguely specified goal of artificial general intelligence, or AGI, meaning programs that can match or exceed biological intelligence in every way.
Linking a million chips together with light might allow for algorithms several generations beyond today’s cutting edge, says Lightmatter’s CEO Nick Harris. “Passage is going to enable AGI algorithms,” he confidently suggests.
The large data centers that are needed to train giant AI algorithms typically consist of racks filled with tens of thousands of computers running specialized silicon chips and a spaghetti of mostly electrical connections between them. Maintaining training runs for AI across so many systems—all connected by wires and switches—is a huge engineering undertaking. Converting between electronic and optical signals also places fundamental limits on chips’ abilities to run computations as one.
Lightmatter’s approach is designed to simplify the tricky traffic inside AI data centers. “Normally you have a bunch of GPUs, and then a layer of switches, and a layer of switches, and a layer of switches, and you have to traverse that tree” to communicate between two GPUs, Harris says. In a data center connected by Passage, Harris says, every GPU would have a high-speed connection to every other chip.
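The scale of the problem Harris describes is easy to quantify: directly wiring every GPU to every other one requires a quadratically growing number of links, which is why electrically wired data centers fall back on layers of switches. A quick back-of-the-envelope sketch:

```python
# Why data centers use switch hierarchies instead of direct wiring:
# a full mesh of n endpoints needs n*(n-1)/2 point-to-point links.
def full_mesh_links(n: int) -> int:
    return n * (n - 1) // 2

for n in (8, 1024, 10_000):
    print(f"{n} GPUs -> {full_mesh_links(n):,} direct links")
```

At eight GPUs a full mesh is trivial; at ten thousand it needs tens of millions of links, which is the combinatorial wall an optical interconnect like Passage is trying to get around.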
Lightmatter’s work on Passage is an example of how AI’s recent flourishing has inspired companies large and small to try to reinvent key hardware behind advances like OpenAI’s ChatGPT. Nvidia, the leading supplier of GPUs for AI projects, held its annual conference last month, where CEO Jensen Huang unveiled the company’s latest chip for training AI: a GPU called Blackwell. Nvidia will sell the GPU in a “superchip” consisting of two Blackwell GPUs and a conventional CPU processor, all connected using the company’s new high-speed communications technology called NVLink-C2C.
The chip industry is famous for finding ways to wring more computing power from chips without making them larger, but Nvidia chose to buck that trend. The Blackwell GPUs inside the company’s superchip are twice as powerful as their predecessors but are made by bolting two chips together, meaning they consume much more power. That trade-off, in addition to Nvidia’s efforts to glue its chips together with high-speed links, suggests that upgrades to other key components for AI supercomputers, like that proposed by Lightmatter, could become more important.
Nvidia and Amazon Web Services, the lucrative cloud arm of Amazon, have a surprising amount in common. For starters, their core businesses emerged from a happy accident. For AWS, it was realizing that it could sell the internal services — storage, compute and memory — that it had created for itself in-house. For Nvidia, it was the fact that the GPU, created for gaming purposes, was also well suited to processing AI workloads.
That eventually led to some explosively growing revenue in recent quarters. Nvidia’s revenue has been growing at triple digits, moving from $7.1 billion in Q1 2024 to $22.1 billion in Q4 2024. That’s a pretty amazing trajectory, although the vast majority of that growth was in the company’s data center business.
While AWS never experienced that kind of intense growth spurt, it has consistently been a big revenue driver for the e-commerce giant, and both companies have enjoyed first-mover advantage. Over the years, though, Microsoft and Google have joined the market, creating the Big Three cloud vendors, and it is expected that other chip makers will eventually begin to gain meaningful market share, too, even as the revenue pie continues to grow over the next several years.
Both companies were clearly in the right place at the right time. As web apps and mobile began emerging around 2010, the cloud provided the on-demand resources. Enterprises soon began to see the value of moving workloads or building applications in the cloud, rather than running their own data centers. Similarly, as AI took off over the last decade, and large language models more recently, it coincided with the explosion in the use of GPUs to process these workloads.
Over the years, AWS has grown into a tremendously profitable business, currently on a run rate close to $100 billion, one that, even separated from Amazon, would be a highly successful company in its own right. But AWS growth has begun to slow down, even as Nvidia’s takes off. It’s partly the law of large numbers, something that will eventually affect Nvidia, too.
The question is whether Nvidia can sustain that growth to become a long-term revenue powerhouse like AWS has become for Amazon. If the GPU market begins to tighten, Nvidia does have other businesses, but as this chart shows, these are much smaller revenue generators that are growing much more slowly than the GPU data center business currently is.
Image Credits: Nvidia
The short-term financial outlook
As the above chart notes, Nvidia’s revenue growth has been astronomical in recent quarters. And according to both Nvidia and Wall Street analysts, it’s set to continue.
In its recent earnings report covering the fourth quarter of its fiscal 2024 (the three months ending January 31, 2024), Nvidia told its investors that it anticipates $24 billion worth of revenue in its current quarter (Q1 FY25). Compared to its year-ago first quarter, Nvidia expects to post growth of around 234%.
That is simply not a number we often see from mature public companies. However, given the company’s massive revenue ramp in recent quarters, its growth rate is expected to decline. From a 22% sequential revenue gain between the third and fourth quarters of its recently concluded fiscal year, Nvidia anticipates a more modest 8.6% growth rate from the final quarter of its fiscal 2024 to the first quarter of its fiscal 2025. Certainly, on a year-over-year basis rather than a three-month look back, Nvidia’s growth rate for the current period remains incredible. But there are other growth declines on the horizon.
For example, analysts expect Nvidia to generate $110.5 billion worth of revenue in its current fiscal year, up just over 81% from its year-ago results. That’s dramatically lower than the 126% gain it posted in its recently concluded fiscal 2024.
To which we ask: So what? For at least the next several quarters, Nvidia is expected to continue scaling its revenue past the $100 billion annual run rate mark, impressive for a company whose year-ago quarter saw total revenue of just $7.19 billion.
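Those growth figures are easy to sanity-check from the revenue numbers quoted in this piece:

```python
# Sanity-checking the growth rates cited above from the revenue figures
# in this piece (all in billions of dollars).
q1_fy24 = 7.19   # Nvidia's year-ago first quarter
q4_fy24 = 22.1   # final quarter of fiscal 2024
q1_fy25 = 24.0   # guided revenue for the current quarter

yoy = (q1_fy25 / q1_fy24 - 1) * 100   # year-over-year growth
qoq = (q1_fy25 / q4_fy24 - 1) * 100   # sequential growth

print(f"YoY: {yoy:.0f}%")  # roughly 234%
print(f"QoQ: {qoq:.1f}%")  # roughly 8.6%
```

Both results line up with the company's guidance: a year-over-year gain of about 234% alongside a far more modest 8.6% sequential gain.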
In short, analysts, and to a more modest degree Nvidia, see huge buckets of growth ahead for the company, even if some of the eye-popping revenue growth figures will slow this calendar year. It’s unclear what happens on a slightly longer timeframe.
Momentum ahead
It seems that AI could be the gift that keeps on giving for Nvidia for the next several years, even as more competition from AMD, Intel and other chipmakers begins to emerge. Much like AWS, Nvidia will face stiffer competition eventually, but it controls so much of the market right now, it can afford to cede some.
Looking purely at the chip level, not at boards or other adjacencies, IDC shows Nvidia firmly in control:
Image Credits: IDC
If you look at the board level with these market share numbers from Jon Peddie Research (JPR), a firm that tracks the GPU market, while Nvidia still dominates, AMD is coming on stronger:
Image Credits: Jon Peddie Research
C. Robert Dow, an analyst at JPR, says some of these fluctuations have to do with when new products are introduced. “AMD gains percentage points here and there depending on cycles in the market — when new cards are introduced — and inventory levels, but Nvidia has been in a dominant position for years, and that will continue,” Dow told TechCrunch.
Shane Rau, an IDC analyst who follows the silicon market, also expects the dominance to continue, even as trends shift and change. “There are trends and countertrends, the markets in which Nvidia participates are big and getting bigger, and growth will continue, at least for another five years,” Rau said.
Part of the reason for that is Nvidia is selling more than just the chip itself. “They’ll sell you boards, systems, software, services and time on one of their own supercomputers. So any of those markets are big and growing and Nvidia is attached to all of them,” he said.
But not everyone sees Nvidia as an unstoppable force. David Linthicum, a longtime cloud consultant and author, says that you don’t always need GPUs, and companies are beginning to realize that. “They say they need GPUs. I look at it, do some of the back of the envelope math, and they don’t need them. CPUs are perfectly fine,” he said.
As this happens, he thinks Nvidia will begin to slow down and competition will loosen its stronghold on the market. “I think that we’re going to see Nvidia morph into a weaker player over the next couple of years. And we’re going to see that because there’s too many substitutes that are being built out there.”
Rau says other vendors will also benefit as companies expand AI use cases with Nvidia products. “What I think you’ll see going forward is growing markets that’ll create tailwinds for Nvidia. But then there’ll be other companies that also follow in those tailwinds that will benefit from AI particularly.”
It’s also possible that some disruptive force will come into play and that would be a positive outcome to keep one company from becoming too dominant. “You almost hope disruption will happen because that’s the way markets and capitalism work best, right? Someone gets an early lead, other suppliers follow, the market grows. You get established players, who are eventually disrupted by a better way to do the same thing within their market or within adjacent markets that are crossing into theirs,” Rau said.
In fact, we are beginning to see that happening at Amazon as Microsoft gains ground via its relationship with OpenAI and Amazon is forced to play catch-up when it comes to AI. Whatever happens to Nvidia in the long run, it’s firmly in the driver’s seat right now, making money hand over fist, dominating a growing market and having just about everything going its way. But that doesn’t mean it will always be this way or that there won’t be more competitive pressure down the road.
In less than two years, NVIDIA’s H100 chips, which are used by nearly every AI company in the world to train the large language models that power services like ChatGPT, have made NVIDIA one of the world’s most valuable companies. On Monday, NVIDIA announced a next-generation platform called Blackwell, whose chips are between seven and 30 times faster than the H100 and use 25 times less power.
“Blackwell GPUs are the engine to power this new Industrial Revolution,” said NVIDIA CEO Jensen Huang at the company’s annual GTC event in San Jose attended by thousands of developers, and which some compared to a Taylor Swift concert. “Generative AI is the defining technology of our time. Working with the most dynamic companies in the world, we will realize the promise of AI for every industry,” Huang added in a press release.
NVIDIA’s Blackwell chips are named in honor of David Harold Blackwell, a mathematician who specialized in game theory and statistics. NVIDIA claims that Blackwell is the world’s most powerful chip. It offers a significant performance upgrade to AI companies, with speeds of 20 petaflops compared with the 4 petaflops the H100 provided. Much of this speed is made possible thanks to the 208 billion transistors in Blackwell chips, compared with 80 billion in the H100. To achieve this, NVIDIA connected two large chip dies that can talk to each other at speeds of up to 10 terabytes per second.
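Taking the quoted figures at face value, the generational jumps reduce to simple ratios:

```python
# Generational ratios from the figures quoted above, taken at face value
# (petaflops comparisons follow NVIDIA's own marketing framing).
h100_pflops, blackwell_pflops = 4, 20
h100_transistors_b, blackwell_transistors_b = 80, 208  # billions

compute_ratio = blackwell_pflops / h100_pflops                    # raw compute
transistor_ratio = blackwell_transistors_b / h100_transistors_b   # transistor count

print(f"compute: {compute_ratio:.1f}x, transistors: {transistor_ratio:.1f}x")
```

In other words, the claimed 5x compute jump comes from only 2.6x the transistors, with the two-die design and faster interconnect making up the difference.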
In a sign of just how dependent our modern AI revolution is on NVIDIA’s chips, the company’s press release includes testimonials from tech leaders who collectively run companies worth trillions of dollars. They include OpenAI CEO Sam Altman, Microsoft CEO Satya Nadella, Alphabet CEO Sundar Pichai, Meta CEO Mark Zuckerberg, Google DeepMind CEO Demis Hassabis, Oracle chairman Larry Ellison, Dell CEO Michael Dell, and Tesla CEO Elon Musk.
“There is currently nothing better than NVIDIA hardware for AI,” Musk says in the statement. “Blackwell offers massive performance leaps, and will accelerate our ability to deliver leading-edge models. We’re excited to continue working with NVIDIA to enhance AI compute,” Altman says.
NVIDIA did not disclose how much Blackwell chips would cost. Its H100 chips currently run between $25,000 and $40,000 per chip, according to CNBC, and entire systems powered by these chips can cost as much as $200,000.
Despite their costs, NVIDIA’s chips are in high demand. Last year, delivery wait times were as high as 11 months. And having access to NVIDIA’s AI chips is increasingly seen as a status symbol for tech companies looking to attract AI talent. Earlier this year, Zuckerberg touted the company’s efforts to build “a massive amount of infrastructure” to power Meta’s AI efforts. “At the end of this year,” Zuckerberg wrote, “we will have ~350k Nvidia H100s — and overall ~600k H100 equivalents of compute if you include other GPUs.”
(Bloomberg) — Two Nvidia Corp. directors sold about $180 million in shares of the chipmaker in recent days, becoming the latest insiders to cash in as the stock continues to push deeper into record territory.
Tench Coxe, a former managing director at venture capital firm Sutter Hill Ventures who has been on Nvidia’s board since 1993, sold 200,000 shares on March 5 at $850.03 to $852.50, according to a filing. Coxe still holds more than 3.7 million shares.
Mark Stevens, a director since 2008, sold 12,000 shares on March 4 at $852.06 to $855.02.
The sales come amid a blistering rally for Nvidia that’s seen the stock soar 79% this year on optimism that brisk sales of its chips used for artificial intelligence computing will continue unabated. Nvidia closed at another record on Wednesday as it extended its win streak to a fifth day, and now boasts a market value of $2.2 trillion, trailing only Microsoft Corp. and Apple Inc. in the S&P 500 Index.
Last month, other directors unloaded 99,000 shares after Nvidia’s blowout earnings report. The shares sold were worth about $80 million at the time.
Nvidia corporate offices on Meridian Parkway in Durham.
Brian Gordon
Remember last spring when Nvidia surpassed a trillion-dollar market cap? Well, the California chipmaker is now worth nearly twice that. With its dizzying stock ascent, it is the world’s fourth-most valuable company behind Microsoft, Apple and Saudi Aramco (the Saudi Arabian Oil Co.).
Microsoft has offices in the Triangle in North Carolina. Apple does, too (though we’re waiting on the big promised campus). The Saudi oil group, to my knowledge, does not.
But Nvidia is here, in Durham, and it seems to be hiring. On LinkedIn, the company lists 51 openings with the Bull City as a possible location, alongside cities like Austin, Santa Clara, and Redmond, Washington. Most of the roles are for various types of senior engineers. Several of the openings were posted just this week.
The company makes processing chips called graphics processing units, or GPUs, which work with other chips, called CPUs, to perform more complex computations. For most of its history, Nvidia GPUs were used to power video games; the original Xbox, for example, exclusively used the company’s chips.
Today, the market has an insatiable appetite for Nvidia chips to run its expanding AI platforms. I hope employees have been getting paid in stock options: since Jan. 1, Nvidia stock is up 65%. Since this time last year, it’s risen 252%.
Competitors are coming; Microsoft, Meta, and Google are among the giants investing in manufacturing their own chips. Yet AI-driven demand has for now made Nvidia the undisputed sector king. Last week, its market cap briefly eclipsed $2 trillion. Companywide, its headcount has swelled, too. At the start of 2020, Nvidia had 13,775 employees. As of late January, it employed 29,600.
How many of those workers are in the Triangle? In May, Nvidia told me it had around 300 workers based at its Meridian Parkway office near Research Triangle Park. Last week, I asked for an updated area headcount and initially got rejected by a company spokesperson who said, in an apparent change in policy, that Nvidia doesn’t “disclose numbers by location.”
I pushed to ask if 300 was still an accurate local employment figure, and the spokesperson said yes.
“We’re a growing company and will continue to expand our presence in various industries including health care, automotive, and more,” she said.
So Nvidia’s Durham footprint isn’t massive yet. It lists more than 1,500 positions on LinkedIn overall, which puts the 51 openings listing Durham in context. But the company is here. And it is posting jobs. It stands to reason its Durham workforce will grow. Whether Nvidia tells us about it in the future is less certain.
On to the rest of this week’s news.
Job deals come, job deals go
Every other Tuesday often turns into a busy day for North Carolina business reporters. That is when the state’s Economic Investment Committee generally meets to approve new economic incentives or terminate previous deals. This week was no exception, with North Carolina doing both. First, the good news: the EIC approved a grant for the Japanese pharma company Kyowa Kirin to bring 102 jobs to the Lee County city of Sanford.
Japanese businesses and the Tar Heel State have had a strong relationship dating back to the 1970s. Gov. Roy Cooper’s team boasts his trip last fall to the East Asian country has helped North Carolina land multiple recent economic projects.
Now the less positive news. On Tuesday, the EIC canceled incentive grants for Clorox in Durham and Syneos Health in Wake County. Both companies cited the rise of remote work among the reasons they weren’t going to reach their initial hiring commitments. Clorox appears to also be divesting from its vitamin/supplements division.
Honestly, both Clorox and Syneos have larger area workforces than I realized. Syneos, a contract research organization, has its global headquarters in Morrisville, where it employs around 2,000. Clorox says it still has more than 500 employees in the Durham area, including at the Burt’s Bees headquarters.
The Burt’s Bees headquarters at the American Tobacco Campus in Durham.
Election years are good years for Bandwidth
It’s an election year, which means Raleigh’s Bandwidth will be busy. The telecommunications software company provides mass messaging services — text and voice — for candidates. On an earnings call Wednesday, Bandwidth’s chief financial officer Daryl Raiford said the company projects “political campaign messaging and associated surcharges” to contribute $40 million in revenue.
And Bandwidth had a good week overall, with earnings beating expectations, sending its stock up 65%.
“In 2024, we expect our growth in commercial messaging to be joined by further benefit from the U.S. election season, where our capabilities uniquely serve many longstanding customers,” company CEO David Morken told investors.
Bandwidth’s sprawling new 533,000-square-foot campus off Edward Mills Road in west Raleigh.
Tillis talks taxes in WSJ
In a Wall Street Journal op-ed, North Carolina Sen. Thom Tillis said he would reject the Wyden-Smith Tax Bill that passed the House a month ago. “House Republicans let themselves get played by Democrats seeking to expand the welfare state,” he wrote. Tillis objects to how the legislation approaches the child tax credit — who would be eligible for the program and how Congress would fund it.
In his opinion piece, Tillis didn’t mention the North Carolina startups that are staring down bankruptcy due to a recent accounting change to Section 174 of the federal tax code, which Wyden-Smith would address. This is disappointing, says Evan Garland, a business consultant in Raleigh who has led a lobbying effort to reform Section 174.
“I feel it remains important for these companies’ voices be heard, as a counterpoint,” she told me in an email.
Short Stuff: Robots, haircuts and sports betting
Bits and bytes. A handful of Triangle restaurants have debuted robots to bring food to tables. They’re hits at a pho place in Morrisville, a seafood spot in Rocky Mount, and a Mexican restaurant in Wake Forest.
Over/under 7,000 commercials. Sports betting goes statewide March 11. I’ve been seeing a lot of North Carolina-specific ads from FanDuel and DraftKings spokespeople like comedian Kevin Hart and former Duke star JJ Redick.
City accommodations. Boxyard RTP, the Research Triangle’s de facto downtown, has welcomed several new businesses, including a barber shop, New Wave Capitol Suites, which opened mid-February. A nice addition for the Park’s employees — and its future residents.
A robot delivers an order to lunchtime guests at Pho 919 in Morrisville.
National Tech Happenings
The Federal Trade Commission is challenging Kroger’s attempt to acquire fellow giant supermarket chain Albertsons, arguing customers will suffer. The two companies disagree, arguing the competitive market now encompasses nontraditional grocers like Amazon. A decade ago, Kroger purchased Harris Teeter and later closed its Triangle-area Kroger stores.
The Supreme Court is hearing arguments around Florida and Texas laws that limit how social media companies moderate their platforms. The states say they want to ensure political fairness. The Florida law prevents a platform from banning a candidate while the Texas law restricts the companies from removing political content.
Can thermal batteries halve U.S. industrial heating costs? The startup Antora Energy raised $150 million to ramp up production.
Thanks for reading!
This story was originally published March 1, 2024, 9:02 AM.
Brian Gordon is the Technology & Innovation reporter for The News & Observer and The Herald-Sun. He writes about jobs, start-ups and all the big tech things transforming the Triangle. Brian previously worked as a senior statewide reporter for the USA Today Network and covered education for the Asheville Citizen-Times.
Without a doubt, many artificial intelligence (AI) investors are kicking themselves for missing out on Nvidia’s big gains. With the stock up 280% in the last year and over 1,800% in five years, it is one of the major beneficiaries in the AI space.
Nonetheless, Nvidia is not the only AI stock in the chip industry, and AI is so much more than semiconductors. With the breadth of AI investing options, the industry should continue to bring opportunity. Three Fool.com contributors have ideas on where AI investors can look next: Amazon (NASDAQ: AMZN), The Trade Desk (NASDAQ: TTD), and Tesla (NASDAQ: TSLA).
Amazon has many ways to win when it comes to AI
Jake Lerch (Amazon): There are hotter AI stocks out there, but Amazon remains one worth watching, and buying. Here’s why:
First, the company is the largest cloud services provider. Amazon Web Services (AWS) is estimated to have about 31% of the worldwide cloud services market. That’s important because new generative AI tools and applications often utilize cloud services like AWS. As the AI revolution rolls on, Amazon is poised to profit thanks to its lead in the cloud infrastructure market.
Second, Amazon’s massive e-commerce business dovetails nicely with many different AI applications. For example, the company has already introduced Rufus, a new AI-powered shopping assistant designed to help people by answering questions, making pricing comparisons, and generating product recommendations.
In addition, Amazon is using AI in many other areas of its operations, such as:
Streamlining prescription drug delivery time and cost through Amazon Pharmacy.
Lowering the company’s environmental impact through AI-generated recommendations to reduce packaging use.
Improving shopping recommendations via Amazon Fashion.
Updating Alexa-enabled devices to enhance conversation and dialogue between users and Alexa.
On top of all of that, Amazon remains one of the world’s best-run companies. Shares are up 73% over the last 12 months, while revenue growth has bounced back to a solid 13%.
In short, Amazon remains a smart choice for AI-focused investors.
The Trade Desk benefits from AI and digital advertising tailwinds
Justin Pope (The Trade Desk): Artificial intelligence is a hot topic today, but it began disrupting the advertising business several years back when The Trade Desk was in its infancy. Brands and other companies can buy advertising on The Trade Desk’s platform, which uses AI and user data to match ads to potential customers. This is far more effective than traditional advertising, which would broadcast to broad audiences on television, radio, or in print.
The Trade Desk has thrived, growing profitably since its 2016 initial public offering. The reason? The Trade Desk sits in an ideal spot in the industry. Advertising dollars are shifting to digital mediums, and while competitors like Meta Platforms and Alphabet operate with limited transparency, The Trade Desk offers more information to its clients, which is winning over customers.
TTD Revenue (TTM) Chart
Total worldwide ad spending in 2023 was an estimated $830 billion, which means that The Trade Desk’s $9.6 billion in gross ad spending translates to just over 1% of market share. That leaves a tremendous growth runway for this company operating outside the closed ecosystems of big technology companies.
The Trade Desk’s long-term growth opportunities and profitable business model make the stock a no-brainer AI investment you can hold for the long term.
Tesla likely has some AI-driven surprises under the hood
Will Healy (Tesla): Investors may tend to look at Tesla as an automaker, but it’s actually a diversified business that is also developing battery technology, solar energy solutions, and AI breakthroughs.
Instead of relying on chip companies like Nvidia for its technology, Tesla has developed its own semiconductor and robotics solutions. Among these are the Dojo chip, designed to power neural networks, and the FSD (full self-driving) chip, which would power fully autonomous vehicles.
CEO Elon Musk wants to launch a robotaxi business based on Tesla technology. With robotaxis, analysts at Cathie Wood’s Ark Invest believe Tesla’s revenue could reach a minimum of $600 billion by 2027, over seven times the 2023 level of $82 billion.
Wood believes that growth would take Tesla’s stock price to $2,000 per share, a more than tenfold gain from today’s levels.
While that may seem outrageous, and Musk has a track record of being overly ambitious in his promises, Wood predicted a split-adjusted price target of $267 per Tesla share in 2018. Within less than three years, Wood’s prediction came to pass, so she could be right again.
Tesla’s stock price has pulled back as the company has cut prices on electric vehicles (EVs) to boost sales and stay competitive with emerging rivals. That pessimism has taken its P/E ratio down to 45, a low valuation rarely seen in the stock’s history.
Although profits are expected to fall 1% this year, analysts predict a 36% increase in 2025. These earnings forecasts give some validation to Wood’s thesis. Some of that optimism may be related to the release of the lower-cost, compact Model 2 EV expected for 2025, and investors are also likely to jump in as the company improves its AI and self-driving capabilities.
Should you invest $1,000 in Amazon right now?
Before you buy stock in Amazon, consider this:
The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now… and Amazon wasn’t one of them. The 10 stocks that made the cut could produce monster returns in the coming years.
Stock Advisor provides investors with an easy-to-follow blueprint for success, including guidance on building a portfolio, regular updates from analysts, and two new stock picks each month. The Stock Advisor service has more than tripled the return of the S&P 500 since 2002*.
Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool’s board of directors. John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool’s board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool’s board of directors. Jake Lerch has positions in Alphabet, Amazon, Nvidia, and Tesla. Justin Pope has no position in any of the stocks mentioned. Will Healy has positions in The Trade Desk. The Motley Fool has positions in and recommends Alphabet, Amazon, Meta Platforms, Nvidia, Tesla, and The Trade Desk. The Motley Fool has a disclosure policy.
Envestnet data and analytics revenue fell during its fiscal fourth quarter amid rumblings in December of a Yodlee sale. The wealthtech giant’s data and analytics revenue fell 7% year over year to $38.6 million during the quarter, according to its earnings presentation. WHY IT MATTERS: In December, there were talks of a potential sale […]
History is proof the U.S. stock market always climbs to new highs given enough time. But the stocks that lead the charge higher aren’t always the same. To help find the new leaders, Wall Street often groups them together to separate them from the rest of the market. For example, CNBC financial analyst Jim Cramer coined the FAANG acronym in 2017 to describe five of the largest technology companies at the time:
Facebook, which now trades as Meta Platforms
Apple
Amazon
Netflix
Google, which now trades as Alphabet
That leadership shifted in 2023 when a group of seven stocks drove the S&P 500 index to an annual return of twice its historical average. Bank of America analyst Michael Hartnett dubbed those stocks the “Magnificent Seven,” and they include:
Meta Platforms
Apple
Amazon
Alphabet
Microsoft
Nvidia (NASDAQ: NVDA)
Tesla
It’s time for the “AI Five,” according to one analyst
With Tesla stock sinking 22% so far this year, Jim Cramer thinks it should be booted from the Magnificent Seven entirely. The company is facing sluggish electric vehicle sales in 2024, which could keep a lid on its stock price and weaken the power of the Magnificent Seven as a group.
It prompted one analyst — Glen Kacher from Light Street Capital — to rethink the stock market’s leadership altogether. He thinks investors should be focused on artificial intelligence (AI), so he has identified a new group of stocks and called it the “AI Five.” It includes:
Nvidia
Microsoft
Taiwan Semiconductor Manufacturing
Advanced Micro Devices(NASDAQ: AMD)
Broadcom (NASDAQ: AVGO)
Each company has a hand in developing the hardware and software necessary to bring AI to life. Here are two AI Five stocks investors should consider buying right now.
1. Advanced Micro Devices (AMD)
Advanced Micro Devices might be one of the best semiconductor stocks to own in 2024. Its new MI300 data center chips are designed to process AI workloads, and they are shaping up to be the main rivals to Nvidia’s industry-leading H100.
The MI300 comes in two configurations. The MI300X is a pure graphics processing unit (GPU) like the H100, whereas the MI300A combines GPU and central processing unit (CPU) hardware to create the world’s first accelerated processing unit (APU) for data centers. The MI300A will power the El Capitan supercomputer at the Lawrence Livermore National Laboratory, and it’s expected to be the most powerful in the world when it comes online later this year.
Some of the world’s largest data center operators, companies like Meta Platforms, Microsoft, and Oracle, are also racing to get their hands on MI300 chips. They have relied almost entirely on Nvidia up until now, but supply constraints are pushing them to look for viable alternatives, and AMD is ready.
In the fourth quarter of 2023, AMD issued a bullish forecast for the MI300. The company originally expected the GPU to pull in $2 billion worth of sales in 2024, but it raised that number to $3.5 billion, much to the delight of investors.
AI is also coming to personal computers, where users can process AI on-device for a faster experience, which reduces the reliance on external data centers. AMD’s Ryzen AI series of neural processing units (NPUs) already power more than 50 notebook designs, and the company is working with Microsoft to develop a new version of Windows that will run AI workloads more efficiently.
Millions of personal computers have already shipped with Ryzen AI chips, giving AMD a 90% market share in the segment. Ryzen AI drove the company’s Client segment revenue to $1.5 billion in the fourth quarter, representing a whopping 62% year-over-year increase. AMD expects that momentum to continue, especially because it’s preparing to launch a next-generation chip that could be more than three times faster.
Simply put, 2024 is set to be incredibly exciting for AMD, and the company could be on the cusp of a multiyear growth cycle on the back of its new hardware slate.
2. Broadcom
As an AI stock, Broadcom lives in the shadow of glamorous names like AMD and Nvidia. However, Broadcom is developing AI on multiple fronts, and its stock has delivered a 343% return over the last five years, so it definitely warrants some attention. Despite being founded in 1991, the company took a real leap forward when it merged with semiconductor giant Avago Technologies in 2016.
Broadcom is now a conglomerate that includes not only Avago but also several acquired companies: enterprise software maker CA Technologies, cybersecurity giant Symantec, and cloud software developer VMware. Broadcom has spent a whopping $98.6 billion on those three acquisitions since 2018.
VMware, which had a price tag of $69 billion alone, is an increasingly important company in the context of the AI boom. Its software allows users to run virtual machines to distribute cloud infrastructure more efficiently. For example, one user on one server might only utilize 10% of its capacity, but virtual machines allow multiple users to plug into that server so it operates at capacity. Considering so many companies are racing to access AI data center infrastructure, optimization is one way they can squeeze the most value out of what they have.
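The consolidation arithmetic behind that claim can be sketched in a few lines. This is a hypothetical illustration, not VMware’s actual scheduling logic; the 10% utilization figure comes from the example above, and the function name is mine:

```python
import math

def servers_needed(tenants: int, utilization_per_tenant: float) -> int:
    """Servers required when virtualization packs tenants onto shared hardware."""
    return math.ceil(tenants * utilization_per_tenant)

# 100 tenants, each using ~10% of a machine:
dedicated = 100                     # one lightly used server per tenant
shared = servers_needed(100, 0.10)  # consolidated onto fully used servers
print(dedicated, "->", shared)      # 100 -> 10
```

Running the numbers this way shows why optimization matters when data center capacity is scarce: the same workloads fit on a tenth of the hardware.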
Broadcom itself is also considered a leader in networking and server connectivity solutions for the data center. It developed a high-bandwidth switch called Tomahawk 5, which is designed to accelerate AI and machine learning workloads. A switch regulates how fast data travels from one point to another, and considering developers are feeding billions of data points to powerful GPUs to train AI models, it has become an important piece of the infrastructure puzzle.
Broadcom generated a record-high $35.8 billion in revenue during fiscal 2023 (ended Oct. 29), which was an increase of 8% compared to fiscal 2022. However, Broadcom’s revenue is expected to grow by 40% in fiscal 2024 to $50 billion, thanks to the inclusion of VMware’s financial results for the first time.
Based on Broadcom’s $42.25 in non-GAAP (adjusted) earnings per share in fiscal 2023 and its current stock price of $1,226.55, it trades at a price-to-earnings (P/E) ratio of 29.1. That’s a 9% discount to the 32.1 P/E of the Nasdaq-100 index, which implies Broadcom is still cheap relative to its peers in the tech sector.
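The valuation math above is just two divisions; spelled out with the article’s own figures, the quotient lands at roughly 29, in line with the 29.1 cited (small rounding differences aside):

```python
eps = 42.25        # Broadcom fiscal 2023 non-GAAP EPS (from the article)
price = 1226.55    # stock price quoted in the article
nasdaq_pe = 32.1   # Nasdaq-100 P/E quoted in the article

pe = price / eps               # trailing P/E, roughly 29
discount = 1 - pe / nasdaq_pe  # discount to the index, roughly 9-10%
print(f"P/E {pe:.1f}, discount to Nasdaq-100 {discount:.1%}")
```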
Given the company’s growing presence in AI through acquisitions and in-house development, Broadcom looks like a great AI Five stock to buy now and hold — especially at this price.
Where to invest $1,000 right now
When our analyst team has a stock tip, it can pay to listen. After all, the newsletter they have run for two decades, Motley Fool Stock Advisor, has more than tripled the market.*
They just revealed what they believe are the 10 best stocks for investors to buy right now… and Advanced Micro Devices made the list — but there are 9 other stocks you may be overlooking.
Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool’s board of directors. Bank of America is an advertising partner of The Ascent, a Motley Fool company. John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool’s board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool’s board of directors. Anthony Di Pizio has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Advanced Micro Devices, Alphabet, Amazon, Apple, Bank of America, Meta Platforms, Microsoft, Netflix, Nvidia, Oracle, Taiwan Semiconductor Manufacturing, and Tesla. The Motley Fool recommends Broadcom and recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.
Chip manufacturing behemoth Nvidia posted record revenue in the fourth quarter as companies across industries look to develop and deploy generative AI. The Santa Clara, Calif.-based company posted data center revenue of $18.4 billion, up 409% year over year, according to the company’s earnings report. “Fourth-quarter data center growth was driven by both training and […]
Nvidia crushed expectations with a bumper quarterly earnings report on Wednesday, reporting a 265% increase in revenue from the same period a year ago, sending shares up over 9% in extended trading. CEO Jensen Huang said Nvidia now has to “allocate [chips] fairly” as customers flock to its processors, key to the AI boom. “Accelerated computing and generative AI have hit the tipping point,” Huang said.
But amid the blowout quarter, Nvidia also acknowledged how tensions between the U.S. and China, particularly over semiconductors, are affecting its business. China now represents a “mid-single digit percentage” of Nvidia’s data center revenue, chief financial officer Colette Kress said on Wednesday. She suggested that China would make up a similar percentage of revenue for the current quarter as well. (Data center revenue aligns with Nvidia’s AI chip business.)
It’s a significant drop: Nvidia has previously noted that China made up as much as a quarter of the company’s data center revenue.
The U.S. first announced controls on the sales of advanced semiconductors to China in October 2022. Companies like Nvidia then developed chips that complied with the restrictions yet still offered the same advanced capabilities. The Biden administration updated its restrictions last October to close that loophole.
On Thursday, Kress admitted that the U.S. government has not granted a license to Nvidia to ship restricted products to China. Nvidia has started shipping alternative products to China that don’t require a license, she continued.
Huang said Nvidia has “immediately paused” and “reset” its product offerings in China, which he blamed for the drop in data center revenue from China. The company would do its best to succeed in the Chinese market “within the specifications of U.S. restrictions,” he said.
Nvidia is again trying to develop chips for the Chinese market that comply with U.S. restrictions, but Chinese customers are reportedly turning to domestic alternatives instead. Chinese tech companies are less interested in buying Nvidia’s downgraded products, which are now closer in performance to cheaper Chinese options, the Wall Street Journal reports. Chinese chipmakers are pitching their own chips as a safer option due to the possibility of new controls from the U.S., Reuters reported in December.