ReportWire

Tag: Mistral AI

  • At 25, Wikipedia Navigates a Quarter-Life Crisis in the Age of A.I.

    Turning 25 amid an A.I. boom, Wikipedia is racing to protect traffic, volunteers and revenue without losing its mission. Photo illustration by Nikolas Kokovlis/NurPhoto via Getty Images

    Traffic to Wikipedia, the world’s largest online encyclopedia, naturally ebbs and flows with the rhythms of daily life—rising and falling with the school calendar, the news cycle or even the day of the week—making routine fluctuations unremarkable for a site that draws roughly 15 billion page views a month. But sustained declines tell a different story. Last October, the Wikimedia Foundation, the nonprofit that oversees Wikipedia, disclosed that human traffic to the site had fallen 8 percent in recent months as a growing number of users turned to A.I. search engines and chatbots for answers.

    “I don’t think that we’ve seen something like this happen in the last seven to eight years or so,” Marshall Miller, senior director of product at the Wikimedia Foundation, told Observer.

    Launched on Jan. 15, 2001, Wikipedia turns 25 today. This milestone comes at a pivotal point for the online encyclopedia, which is straddling a delicate line between fending off existential risks posed by A.I. and avoiding irrelevance as the technology transforms how people find and consume information.

    “It’s really this question of long-term sustainability,” Lane Becker, senior director of earned revenue at the Wikimedia Foundation, told Observer. “We’d like to make it at least another 25 years—and ideally much longer.”

While it’s difficult to attribute Wikipedia’s recent traffic declines to any single factor, the drop clearly coincides with the emergence of A.I. search features, according to Miller. Chatbots such as ChatGPT and Perplexity often cite and link to Wikipedia, but because the information is already embedded in the A.I.-generated response, users are less likely to click through to the source, depriving the site of page views.

    Yet the spread of A.I.-generated content also underscores Wikipedia’s central role in the online information ecosystem. Wikipedia’s vast archive—more than 65 million articles across over 300 languages—plays a prominent role within A.I. tools, with the site’s data scraped by nearly all large language models (LLMs). “Yes, there is a decline in traffic to our sites, but there may well be more people getting Wikipedia knowledge than ever because of how much it’s being distributed through those platforms that are upstream of us,” said Miller.

    Surviving in the era of A.I.

    Wikipedia must find a way to stay financially and editorially viable as the internet changes. Declining page views not only mean that fewer visitors are likely to donate to the platform, threatening its main source of revenue, but also risk shrinking the community of volunteer editors who sustain it. Fewer contributors would mean slower content growth, ultimately leaving less material for LLMs to draw from.

    Metrics that track volunteer participation have already begun to slip, according to Miller. While noting that “it’s hard to parse out all the different reasons that this happens,” he conceded that the Foundation has “reason to believe that declines in page views will lead to declines in volunteer activity.”

    To maintain a steady pipeline of contributors, users must first become aware of the platform and understand its collaborative model. That makes proper attribution by A.I. tools essential, Miller said. Beyond simply linking to Wikipedia, surfacing metadata—such as when a page was last updated or how many editors contributed—could spur curiosity and encourage users to engage more deeply with the platform.

Tech companies are becoming aware of the value of keeping Wikipedia relevant. Over the past year, Microsoft, Mistral AI, Perplexity AI, Ecosia, Pleias and ProRata have joined Wikimedia Enterprise, a commercial product that allows corporations to pay for large-scale access and distribution of Wikipedia content. Google and Amazon have long been partners of Wikimedia Enterprise, which launched in 2021.

    The basic premise is that Wikimedia Enterprise customers can access content from Wikipedia at a higher volume and speed while helping sustain the platform’s mission. “I think there’s a growing understanding on the part of these A.I. companies about the significance of the Wikipedia dataset, both as it currently exists and also its need to exist in the future,” said Becker.

    Wikipedia is hardly alone in this shift. News organizations, including CNN, the Associated Press and The New York Times, have struck licensing deals with A.I. companies to supply editorial content in exchange for payment, while infrastructure providers like Cloudflare offer tools that allow websites to charge A.I. crawlers for access. Last month, the licensing nonprofit Creative Commons announced its support of a “pay-to-crawl” approach for managing A.I. bots.

    Preparing for an uncertain future

    Wikipedia itself is also adapting to a younger generation of internet users. In an effort to make editing Wikipedia more appealing, the platform is working to enhance its mobile edit features, reflecting the fact that younger audiences are far more likely to engage on smartphones than desktop computers.

    Younger users’ preference for social video platforms such as YouTube and TikTok has also pushed Wikipedia’s Future Audiences team—a division tasked with expanding readership—to experiment with video. The effort has already paid off, producing viral clips on topics ranging from Wikipedia’s most hotly disputed edits to the courtship dance of the black-footed albatross and Sino-Roman relations. The organization is also exploring a deeper presence on gaming platforms, another major draw for younger users.

    Evolving with the times also means integrating A.I. further within the platform. Wikipedia has introduced features such as Edit Check, which offers real-time feedback on whether a proposed edit fits a page, and is developing features like Tone Check to help ensure articles adhere to a neutral point of view.

    A.I.-generated content has also begun to seep onto the platform. As of August 2024, roughly 5 percent of newly created English articles on the site were produced with the help of A.I., according to a Princeton study. Seeing this as a problem, Wikipedia introduced a “speedy deletion” policy that allows editors to quickly remove content that shows clear signs of being A.I.-generated. Still, the community remains divided over whether using A.I. for tasks such as drafting articles is inherently problematic, said Miller. “There’s this active debate.”

    From streamlining editing to distributing its content ever more widely, Wikipedia is betting that A.I. can ultimately be an ally rather than an adversary. If managed carefully, the technology could help accelerate the encyclopedia’s mission over the next 25 years—as long as it doesn’t bring down the encyclopedia first.

    “Our whole thing is knowledge dissemination to anyone that wants it, anywhere that they want it,” said Becker. “If this is how people are going to learn things—and people are learning things and gaining value from the information that our community is able to bring forward—we absolutely want to find a way to be there and support it in ways that align with our values.”

    Alexandra Tremayne-Pengelly

  • 4 A.I. Themes That Defined 2025 and Are Shaping What Comes Next

    From infrastructure battles to physical-world intelligence, A.I.’s next chapter is already taking shape. Unsplash

    In November, ChatGPT turned three, with a global user base rapidly approaching one billion. At this point, A.I. is no longer an esoteric acronym that needs explaining in news stories. It has become a daily utility, woven into how we work, learn, shop and even love. The field is also far more crowded than it was just a few years ago, with competitors emerging at every layer of the stack.

    Over the past year, conversation around A.I. has taken on a more complicated tone. Some argue that consumer chatbots are nearing a plateau. Others warn that startup valuations are inflating into a bubble. And, as always, there’s the persistent anxiety that A.I. may one day outgrow human control altogether.

    So what comes next? Much of the industry’s energy is now focused on the infrastructure side of A.I. Big Tech companies are racing to solve the hardware bottlenecks that limit today’s systems, while startups experiment with applications far beyond chatbots. At the same time, researchers are beginning to look past language models altogether, toward models that can reason about the physical world.

    Below are the key themes Observer has identified over the past year of covering this space. Many of these developments are still unfolding and are likely to shape the field well into 2026 and beyond.

    A.I. chips

    Even as OpenAI faces growing competition at the model level, its primary chip supplier, Nvidia, remains in a league of its own. Demand for its GPUs continues to outstrip supply, and no rival has yet meaningfully disrupted its dominance. Traditional semiconductor companies such as AMD and Intel are racing to claw back market share, while some of Nvidia’s largest customers are designing their own chips to reduce dependence on a single supplier.

    Google’s long-in-the-making Tensor Processing Unit, or TPU, has reportedly found its first major customer, Meta, marking a milestone after years of internal use. Meta, Microsoft and Amazon are also deep into developing in-house chips of their own—Meta’s Artemis, Microsoft’s Maia and Amazon’s Trainium.

    World models

    To borrow from philosopher Ludwig Wittgenstein, the limits of language are the limits of our world. Today’s A.I. systems have grown remarkably fluent in human language—especially English—but language captures only a narrow slice of intelligence. That limitation has prompted some researchers to argue that large language models alone can never reach human-level understanding.

    Meta’s longtime chief A.I. scientist, Yann LeCun, has been among the most vocal critics. “We’re never going to get to human-level A.I. by just training on text,” he said during a Harvard talk in September.

    That belief is fueling a push toward so-called “world models,” which aim to teach machines how the physical world works—how objects move, how space is structured, and how cause and effect unfold. LeCun is now leaving Meta to build such a system himself. Fei-Fei Li’s startup, World Labs, unveiled its first model in November after nearly two years of development. Google DeepMind has released early versions through its Genie projects, and Nvidia is betting heavily on physical A.I. with its Cosmos models.

    Language-specific A.I.

    While pioneering researchers look beyond language, linguistic barriers remain one of A.I.’s most practical challenges. More than half of the internet’s content is written in English, skewing training data and limiting performance in other languages.

In response, developers around the world are building models rooted in local cultures and linguistic norms. In Japan, companies such as Sakana AI and NTT are developing LLMs tailored to Japanese language and values. In India, Krutrim is working to support the country’s vast linguistic diversity. France’s Mistral AI has positioned its Le Chat assistant as a European alternative to ChatGPT. Earlier this year, Microsoft also issued a call for proposals to expand training data across European languages.

    A.I. wearables

It’s only natural that there’s a consumer hardware angle to A.I. This year brought a wave of experiments in wearable A.I.—some met with curiosity, others with discomfort.

    Friend, a startup selling an A.I. pendant, sparked backlash after a New York City subway campaign framed its product as a substitute for human companionship. In December, Meta acquired Limitless, the maker of a $99 wearable that records and summarizes conversations. Earlier in the year, Amazon bought Bee, which produces a $50 bracelet designed to transcribe daily activity and generate summaries.

    Meta is also developing a new line of smart glasses with EssilorLuxottica, the company behind Ray-Ban and Oakley. In July, Mark Zuckerberg went so far as to suggest that people without A.I.-enhanced glasses could eventually face a “significant cognitive disadvantage.” Meanwhile, OpenAI is quietly collaborating with former Apple design chief Jony Ive on a mysterious hardware project of its own. This all suggests the next phase of A.I. may be something we wear, not just something we type into.

    Sissi Cao

  • What is Mistral AI? Everything to know about the OpenAI competitor | TechCrunch

Mistral AI, the French company behind AI assistant Le Chat and several foundational models, is widely regarded as one of France’s most promising tech startups and is arguably the only European company that could compete with OpenAI.

It is reportedly in the process of raising another round that would value it at $14 billion, up from about $6 billion in June 2024. While Mistral AI describes itself as “the world’s greenest and leading independent AI lab,” it is still not as well known as its biggest competitors.

    “Go and download Le Chat, which is made by Mistral, rather than ChatGPT by OpenAI — or something else,” French president Emmanuel Macron said in a TV interview ahead of the AI Action Summit in Paris in February 2025.

    What is Mistral AI?

    Mistral AI, which offers open-source AI models, has raised significant amounts of funding since its creation in 2023 with the ambition to “put frontier AI in the hands of everyone.” While this isn’t a direct jab at OpenAI, the slogan is meant to highlight the company’s openness versus OpenAI’s typically closed approach.

    Its alternative to ChatGPT, chat assistant Le Chat, is available on iOS and Android. It reached 1 million downloads in the two weeks following its mobile release, even grabbing France’s top spot for free downloads on the iOS App Store.

    In July 2025, Mistral AI updated Le Chat with new features that bring it closer to rival full-stack AI chatbots: a new “deep research” mode, native multilingual reasoning, and advanced image editing. This update also includes the addition of Projects, which lets users group chats, documents, and ideas into focused spaces.

    As of September 2025, Le Chat also has the ability to remember previous conversations with the introduction of Memories.

This comes in addition to Mistral AI’s expanding suite of models and developer tools.

    In March 2025, the company introduced Mistral OCR, an optical character recognition (OCR) API that can turn any PDF into a text file to make it easier for AI models to ingest.

    In June 2025, Mistral AI also released a vibe coding client, Mistral Code, to compete with incumbents like Windsurf, Anysphere’s Cursor, and GitHub Copilot.

    Who are Mistral AI’s founders?

Mistral AI’s three founders share a background in AI research at major U.S. tech companies with significant operations in Paris. CEO Arthur Mensch used to work at Google DeepMind, while CTO Timothée Lacroix and chief science officer Guillaume Lample are former Meta staffers.

Co-founding advisers also include Jean-Charles Samuelian-Werve (also a board member) and Charles Gorintin from health insurance startup Alan, as well as former digital minister Cédric O, whose involvement has caused persistent controversy due to his previous role.

    Are Mistral AI’s models open source?

    Not all of them. Mistral AI differentiates its premier models, whose weights are not available for commercial purposes, from its free models, for which it provides weight access under the Apache 2.0 license.

Free models include research models such as Mistral NeMo, which was built in collaboration with Nvidia and open sourced by the startup in July 2024.

    How does Mistral AI make money?

    While many of Mistral AI’s offerings are free or now have free tiers, Le Chat also has paid tiers. Introduced in February 2025, Le Chat’s Pro plan is priced at $14.99 a month.

    On the purely B2B side, Mistral AI monetizes its premier models through APIs with usage-based pricing. Enterprises can also license these models, and the company likely also generates a significant share of its revenue from its strategic partnerships, some of which it highlighted during the Paris AI Summit.

    Overall, however, Mistral AI’s revenue is reportedly still in the eight-digit range, according to multiple sources.

    What partnerships has Mistral AI closed?

    In 2024, Mistral AI entered a deal with Microsoft that included a strategic partnership for distributing its AI models through Microsoft’s Azure platform and a €15 million investment. The U.K.’s Competition and Markets Authority (CMA) swiftly concluded that the deal didn’t qualify for investigation due to its small size. However, it also sparked some criticism in the EU. 

In January 2025, Mistral AI signed a deal with press agency Agence France-Presse (AFP) to let Le Chat query the AFP’s entire text archive dating back to 1983.

Mistral AI also secured strategic partnerships with France’s army and job agency, Luxembourg, shipping giant CMA CGM, German defense tech startup Helsing, IBM, Orange, and Stellantis.

In May 2025, Mistral AI announced it would participate in the creation of an AI Campus in the Paris region, as part of a joint venture with UAE investment firm MGX, Nvidia, and France’s state-owned investment bank Bpifrance.

In June 2025, it was announced that, beginning in 2026, Mistral will launch Mistral Compute, a European platform dedicated to AI and powered by Nvidia processors. The initiative was hailed as “historic” by Macron, who shared the stage with Mensch and Nvidia CEO Jensen Huang at the VivaTech conference shortly after the announcement.

    In July 2025, it announced AI for Citizens, “a collaborative initiative to help States and public institutions strategically harness AI for their people by transforming public services, catalyzing innovation, and ensuring competitiveness.”

    What enterprise features has Mistral AI developed?

    In May 2025, Mistral AI released the Mistral Agents API to “empower enterprises to use AI in more practical and impactful ways,” according to its Head of Developer Relations, Sophia Yang.

    In September 2025, the company unveiled a revamped Connectors directory, showcasing Le Chat’s integrations with some 20 enterprise tools including Asana, Atlassian, Box, Google Drive, Notion, Zapier, as well as emails and calendars; and soon, Databricks and Snowflake.

    How much funding has Mistral AI raised to date?

As of February 2025, Mistral AI had raised around €1 billion in capital, approximately $1.04 billion at the current exchange rate. This includes some debt financing, as well as several equity financing rounds raised in close succession.

    In June 2023, and before it even released its first models, Mistral AI raised a record $112 million seed round led by Lightspeed Venture Partners. Sources at the time said the seed round — Europe’s largest ever — valued the then-one-month-old startup at $260 million. 

    Other investors in this seed round included Bpifrance, Eric Schmidt, Exor Ventures, First Minute Capital, Headline, JCDecaux Holding, La Famiglia, LocalGlobe, Motier Ventures, Rodolphe Saadé, Sofina, and Xavier Niel.

    Only six months later, it closed a Series A of €385 million ($415 million at the time), at a reported valuation of $2 billion. The round was led by Andreessen Horowitz (a16z), with participation from existing backer Lightspeed, as well as BNP Paribas, CMA-CGM, Conviction, Elad Gil, General Catalyst, and Salesforce.

    The $16.3 million convertible investment that Microsoft made in Mistral AI as part of their partnership announced in February 2024 was presented as a Series A extension, implying an unchanged valuation.

In June 2024, Mistral AI then raised €600 million in a mix of equity and debt (around $640 million at the exchange rate at the time). The long-rumored round was led by General Catalyst at a $6 billion valuation, with notable investors including Cisco, IBM, Nvidia, Samsung Venture Investment Corporation, and others.

According to Bloomberg, Mistral AI is now finalizing a €2 billion investment at a post-money valuation of $14 billion. This follows earlier reports that the company was in talks to raise $1 billion in equity from investors including Abu Dhabi’s MGX fund, as well as hundreds of millions of euros in debt.

    How is Mistral AI approaching AI regulation?

Mensch was part of a group of European CEOs who signed an open letter in July 2025 urging Brussels to “stop the clock” for two years before key obligations of the EU Artificial Intelligence Act enter into force. The European Commission is sticking to its original timeline.

    What could a Mistral AI exit look like?

    Mistral is “not for sale,” Mensch said in January 2025 at the World Economic Forum in Davos. “Of course, [an IPO is] the plan.” 

    This makes sense, given how much the startup has raised so far: Even a large sale may not provide high enough multiples for its investors, not to mention sovereignty concerns depending on the acquirer. 

However, the only way to definitively quash persistent acquisition rumors — lately naming Apple — is to scale its revenue to levels that could even remotely justify its valuation. Either way, stay tuned.

    This story was originally published on February 28, 2025 and will be regularly updated.

    Anna Heim

  • Paris-Based Mistral AI Seeks $14B Valuation as Europe Charts Its Own A.I. Path

    CEO Arthur Mensch is steering Mistral away from the AGI hype and toward Europe’s A.I. sovereignty. Photo by Ludovic Marin/AFP via Getty Images

    Paris-based Mistral AI is on track for a new funding round that would value the A.I. startup at 12 billion euros ($14 billion), Bloomberg reports. The investment, expected to total around 2 billion euros ($2.3 billion), would solidify the company’s position at the center of Europe’s sovereign A.I. strategy and bring it closer to its goal of challenging dominant U.S. rivals.

    Founded in 2023, Mistral has already raised some 1.1 billion euros ($1.3 billion) over the past two years. Its upcoming valuation would more than double the 5.8 billion euros ($6.8 billion) figure it reached last June following a 468 million euro ($550 million) round that drew backers such as Andreessen Horowitz, Salesforce and Nvidia.

    Mistral did not respond to requests for comment from Observer.

    For now, the startup still pales in size compared to its Silicon Valley competitors. Anthropic closed a round earlier this month at a staggering $183 billion valuation, while OpenAI is reportedly eyeing $500 billion. Still, Mistral is eager to compete. Its products include an A.I. assistant called “Le Chat,” designed for European customers and positioned as an alternative to OpenAI’s ChatGPT and Anthropic’s Claude chatbots.

    Mistral was co-founded by Arthur Mensch, a former researcher at Google DeepMind, along with former Meta researchers Timothée Lacroix and Guillaume Lample. Mistral has tried to distinguish itself by emphasizing open access. It has released several open-source language models. Unlike American A.I. giants, Mistral has also rejected pursuing AGI. Mensch, who serves as CEO, has said his firm is more focused on ensuring U.S. startups don’t dominate how the technology shapes global culture.

    Mistral is central to Europe’s A.I. playbook

    Mistral is part of a broader surge in European A.I. investment. In 2024, venture capital rounds involving A.I. and machine learning companies based in Europe were estimated to have reached 13.2 billion euros ($15.5 billion), up 20 percent from 2023, according to data from Pitchbook.

    As one of Europe’s leading startups, Mistral is central to the region’s goal of building an A.I. ecosystem independent of technology from America or China. Earlier this year, the company partnered with Nvidia to launch a European A.I. platform that will allow companies to develop applications and strengthen domestic infrastructure. French President Emmanuel Macron hailed the initiative as “a game changer, because it will increase our sovereignty and it will allow us to do much more.”

    Mistral’s rapid ascent is tied to broader efforts to bolster A.I. across Europe and France. Its Nvidia partnership followed Macron’s announcement at Paris’ global A.I. summit in February, where he pledged more than 100 billion euros ($117 billion) to support France’s A.I. industry. European players must move quickly, Macron stressed at the time: “We are committed to going faster and faster.”

    Alexandra Tremayne-Pengelly

  • Andreessen Horowitz Founders Notice A.I. Models Are Hitting a Ceiling

    The investment firm was founded by Ben Horowitz and Marc Andreessen in 2009. Photos by Phillip Faraone/Getty Images for WIRED and Paul Chinn/The San Francisco Chronicle via Getty Images

Despite continuing to bet big on A.I. startups and chip programs, the founders of the venture capital firm Andreessen Horowitz say they’ve noticed a drop-off in A.I. model capability improvements in recent years. Two years ago, OpenAI’s GPT-3.5 model was “way ahead of everybody else’s,” said Marc Andreessen, who co-founded Andreessen Horowitz alongside Ben Horowitz in 2009, on a podcast released yesterday (Nov. 5). “Sitting here today, there’s six that are on par with that. They’re sort of hitting the same ceiling on capabilities,” he added.

    That’s not to say the investment firm doesn’t have faith in the new technology. One of the most aggressive investors in the A.I. space, Andreessen Horowitz earlier this year earmarked $2.25 billion in funding for A.I.-focused applications and infrastructure and has led investments in notable companies including Mistral AI, a French startup founded by former DeepMind and Meta (META) researchers, and Air Space Intelligence, an aerospace company using A.I. to enhance air travel.

Despite their embrace of the new technology, Andreessen and Horowitz concede there are growth limitations. In the case of OpenAI’s models, the difference in capability growth between its GPT-2, GPT-3 and GPT-3.5 models, compared to the difference between GPT-3.5 and GPT-4, shows that “we’ve really slowed down in terms of the amount of improvement,” said Horowitz.

One of the primary challenges for A.I. developers has been a global shortage of graphics processing units (GPUs), the chips that power A.I. models. OpenAI CEO Sam Altman last week cited the need to allocate compute as forcing the company to “face a lot of limitations and hard decisions” about which projects it focuses on. Nvidia, the leading GPU maker, has previously described the shortage as making clients “tense” and “emotional.”

In response to this demand, Andreessen Horowitz recently established a chip-lending program that provides GPUs to its portfolio companies in exchange for equity. The firm reportedly has been building a stockpile of 20,000 GPUs, including Nvidia chips. However, chips aren’t the only aspect of compute that is of concern, according to Horowitz, who pointed to the need for more power and cooling across the data centers housing GPUs. “Once they get chips we’re not going to have enough power, and once we have the power we’re not going to have enough cooling,” he said on yesterday’s podcast.

    But compute needs might not actually be the largest barrier when it comes to improving A.I. model capabilities, according to the venture capital firm. It’s the availability of training data needed to teach A.I. models how to behave that is increasingly becoming a problem. “The big models are trained by scraping the internet and pulling in all human-generated training data, all-human generated text and increasingly video and audio and everything else, and there’s just literally only so much of that,” said Andreessen.

Between April 2023 and April 2024, 5 percent of all data and 25 percent of data from the highest-quality sources was restricted by websites cracking down on the use of their text, images and videos in training A.I., according to a recent study from the Data Provenance Initiative.

    The issue has become so large that major A.I. labs are “hiring thousands of programmers and doctors and lawyers to actually handwrite answers to questions for the purpose of being able to train their A.I.’s—it’s at that level of constraint,” added Andreessen. OpenAI, for example, has a “Human Data Team” that works with A.I. trainers on gathering specialized data to train and evaluate models. And numerous A.I. companies have begun working with startups like Scale AI and Invisible Tech that hire human experts with specialized knowledge across medicine, law and other areas to help fine-tune A.I. model answers.

    Such practices fly in the face of fears relating to A.I.-driven unemployment, according to Andreessen, who noted that the dwindling supply of data has led to an unexpected A.I. hiring boom to help train models. “There’s an irony to this.”

    Alexandra Tremayne-Pengelly

  • Nvidia’s Billion-Dollar A.I. Pitch: How the Chip Giant Ramps Up Startup Bets

    Jensen Huang prepares to throw out the ceremonial first pitch before the game between the San Francisco Giants and the Arizona Diamondbacks at Oracle Park on Sept. 03, 2024 in San Francisco. Lachlan Cunningham/Getty Images

There’s no question that Nvidia (NVDA) is one of the biggest winners of the A.I. boom so far. Fueled by insatiable demand for its graphics processing units (GPUs), the chipmaker’s stock has skyrocketed by more than 450 percent since early 2023. As Nvidia’s market cap and revenue soar, so does the pace of its investing in A.I. startups. More than half of the company’s startup investments since 2005 have taken place in the past two years.

    The value of the company’s startup investments reportedly totaled more than $1.5 billion at the beginning of 2024, a significant jump from the $300 million a year prior. The chipmaker has participated in more than ten $100 million-plus funding rounds for A.I. startups in 2024 alone, according to data from Crunchbase, and has backed more than 50 startups since 2023. That’s not to mention a flurry of activity from the company’s venture capital arm NVentures, which separately made 26 investments in 2023 and 2024.

    Nvidia’s seemingly unflappable upward trajectory took a hit yesterday (Sept. 3) after reports surfaced that it had received a subpoena from the U.S. Department of Justice as part of an antitrust probe. The company’s stock dropped nearly 10 percent, shaving $279 billion off its market cap, which currently stands at $2.6 trillion.

    But its falling stock price doesn’t mean the company is slowing down in its startup department. In addition to eyeing an investment in an upcoming funding round in ChatGPT-maker OpenAI, Nvidia yesterday unveiled its participation in a more than $100 million funding round for the Tokyo-based Sakana AI, a company that specializes in accessible A.I. models trained on small datasets.

    “We invest in these companies because they’re incredible at what they do,” Nvidia founder and CEO Jensen Huang told Wired earlier this year. “These are some of the best minds in the world.”

    From companies specializing in humanoid robots to autonomous vehicles, here’s a look at some of Nvidia’s most significant startup investments:

    Perplexity AI

    Huang hasn’t been shy about his love for Perplexity AI, the A.I.-powered search engine positioned as a competitor to the likes of Google. The Nvidia CEO uses the startup’s tool nearly every day for research, according to Huang’s interview with Wired.

    He has also put his money where his mouth is, with Nvidia partaking in a $62.7 million funding round for Perplexity AI in April that valued the startup at $1 billion. Led by investor Daniel Gross, the round included participants like Amazon (AMZN)’s Jeff Bezos. It wasn’t the first time Nvidia has backed the company—the chipmaker also invested in Perplexity AI during a $73.6 million funding round in January.

    Hugging Face

    Hugging Face, a startup providing open-source A.I. developer platforms, has long had close ties to Nvidia. The chipmaker participated in a $235 million funding round in Hugging Face in August 2023 that valued the company at $4.5 billion. Other corporate investors participating in the round included Google, Amazon, Intel, AMD and Salesforce.

    Hugging Face has previously included Nvidia hardware among its shared resources. In May, it launched a new program that donated $10 million worth of free, shared Nvidia GPUs to be used by A.I. developers.

    Adept AI

    Unlike better-known A.I. assistants from companies such as OpenAI and Anthropic, Adept AI’s primary product doesn’t center on text or image generation. Instead, the startup is focused on building an assistant that can complete tasks on a computer, such as generating a report or navigating the web, and is able to use software tools. Nvidia is on board, having participated in a $350 million funding round in March 2023.

    Databricks

    After receiving a giant valuation of $43 billion last fall, Databricks became one of the world’s most valuable A.I. companies. The data analytics software provider unsurprisingly uses Nvidia’s GPUs and has been backed by the chipmaker alongside other investors like Andreessen Horowitz and Capital One Ventures, all of whom participated in a $500 million funding round in September 2023. “Databricks is doing incredible work with Nvidia technology to accelerate data processing and generative A.I. models,” said Huang in a statement at the time.

    Cohere

    A formidable rival to OpenAI and Anthropic, the Canadian startup Cohere specializes in A.I. models for enterprises. The company’s growth over the past five years has attracted backers such as Nvidia, Salesforce and Cisco, which backed Cohere in a funding round held in July. Nvidia also took part in a May 2023 funding round that brought in some $270 million for the startup.

    Mistral AI

    Mistral AI is a French startup, founded by former Google DeepMind and Meta employees in April 2023, that develops open-source A.I. models. Nvidia has participated in two of the startup’s fundraising rounds: a $518 million round in June and a $426 million round in December 2023. The collaboration between the two companies doesn’t end there—in July, Nvidia and Mistral AI jointly released a small and accessible language model for developers.

    Figure

    Huang has long reiterated his belief that A.I.-powered robots able to work among humans will constitute the next wave of technology. It is, therefore, no surprise that Nvidia is a backer of Figure, a startup developing humanoid robots for use in warehouses, transportation and retail. Nvidia reportedly funneled $50 million towards the company during a February funding round that raised a total of $675 million and included participants like Bezos and Microsoft.

    Scale AI

    To properly train A.I. tools like OpenAI’s ChatGPT, tech companies need vast amounts of data. This is where A.I. startups like Scale AI, which provides troves of accurately labeled data and is headed by billionaire Alexandr Wang, come in. Nvidia participated in a $1 billion funding round for the company in May alongside Big Tech players like Amazon and Meta.

    Wayve

    Autonomous driving is another area of interest for A.I. leaders across the tech world. Huang himself said that “every single car, someday, will have to have autonomous capability” in a recent interview with Yahoo Finance. One of the startups at the forefront of this wave is the U.K.-based Wayve, for which Nvidia participated in a $1 billion funding round in May.

    Inflection AI

    Of the 92 startups Nvidia has backed over the decades, Huang’s company has been a lead investor in only 20 rounds. One of these came in June 2023, when Nvidia co-led a staggering $1.3 billion round for Inflection AI alongside Microsoft, Bill Gates and former Google CEO Eric Schmidt.

    The A.I. startup, which was co-founded by LinkedIn co-founder Reid Hoffman and Google DeepMind co-founder Mustafa Suleyman and most recently valued at $4 billion, produces a chatbot known as Pi. Much of the round’s funding went towards bolstering Inflection AI’s computing cluster of 22,000 Nvidia H100 GPUs.

    Alexandra Tremayne-Pengelly
