ReportWire

Tag: Mustafa Suleyman

  • Microsoft AI Chief Warns Pursuing Machine Consciousness Is a Gigantic Waste of Time

    Head of Microsoft’s AI division Mustafa Suleyman thinks that AI developers and researchers should stop trying to build conscious AI.

    “I don’t think that is work that people should be doing,” Suleyman told CNBC in an interview last week.

    Suleyman thinks that while AI can definitely get smart enough to reach some form of superintelligence, it is incapable of developing the human emotional experience that is necessary to reach consciousness. At the end of the day, any “emotional” experience that AI seems to experience is just a simulation, he says.

    “Our physical experience of pain is something that makes us very sad and feel terrible, but the AI doesn’t feel sad when it experiences ‘pain,’” Suleyman told CNBC. “It’s really just creating the perception, the seeming narrative of experience and of itself and of consciousness, but that is not what it’s actually experiencing.”

    “It would be absurd to pursue research that investigates that question, because they’re not [conscious] and they can’t be,” Suleyman said.

    Consciousness is a tricky thing to explain. There are multiple scientific theories that try to describe what consciousness could be. According to one such theory, posited by the famous philosopher John Searle, who died last month, consciousness is a purely biological phenomenon that cannot be truly replicated by a computer. Many AI researchers, computer scientists and neuroscientists also subscribe to this belief.

    Even if this theory turns out to be true, that doesn’t keep users from attributing consciousness to computers.

    “Unfortunately, because the remarkable linguistic abilities of LLMs are increasingly capable of misleading people, people may attribute imaginary qualities to LLMs,” Polish researchers Andrzej Porebski and Jakub Figura wrote in a study published last week, titled “There is no such thing as conscious artificial intelligence.”

    In an essay published on his blog in August, Suleyman warned against “seemingly conscious AI.”

    “The arrival of Seemingly Conscious AI is inevitable and unwelcome. Instead, we need a vision for AI that can fulfill its potential as a helpful companion without falling prey to its illusions,” Suleyman wrote.

    He argues that AI cannot be conscious, and that the illusion of consciousness it projects could trigger interactions that feel “rich in feeling and experience,” fueling a phenomenon that has been dubbed “AI psychosis” in the cultural lexicon.

    There have been numerous high-profile incidents in the past year of AI obsession driving users to delusions, manic episodes and even suicide.

    With limited guardrails in place to protect vulnerable users, many people wholeheartedly believe that the AI chatbots they interact with almost every day are having a real, conscious experience. This has led people to “fall in love” with their chatbots, sometimes with fatal consequences, as when a 14-year-old shot himself to “come home” to Character.AI’s personalized chatbot, or when a cognitively impaired man died while trying to get to New York to meet Meta’s chatbot in person.

    “Just as we should produce AI that prioritizes engagement with humans and real-world interactions in our physical and human world, we should build AI that only ever presents itself as an AI, that maximizes utility while minimizing markers of consciousness,” Suleyman wrote in the blog post. “We must build AI for people, not to be a digital person.”

    But because the nature of consciousness is still contested, some researchers are growing worried that the technological advancements in AI might outpace our understanding of how consciousness works.

    “If we become able to create consciousness – even accidentally – it would raise immense ethical challenges and even existential risk,” Belgian scientist Axel Cleeremans said last week, announcing a paper he co-wrote calling for consciousness research to become a scientific priority.

    Suleyman himself has been vocal about developing “humanist superintelligence” rather than god-like AI, even though he believes that superintelligence won’t materialize within the next decade.

    “I just am more fixated on ‘how is this actually useful for us as a species?’ Like that should be the task of technology,” Suleyman told the Wall Street Journal earlier this year.

    Ece Yildirim

  • The Rise of the Chief A.I. Officer: A New Power Player in Corporate C-Suite

    More companies are naming chief A.I. officers as A.I. becomes central to strategy, reshaping corporate power and leadership structures. Unsplash

    When A.I. moved from academia to corporate America, it didn’t just change how companies operate—it reshaped what leadership looks like. A title that barely existed a few years ago is now spreading fast: the chief A.I. officer (CAIO). The role signals how deeply A.I. has become embedded in corporate strategy and identity.

    According to IBM’s 2025 survey, 26 percent of global enterprises now have a chief A.I. officer, up from 11 percent two years ago. More than half (57 percent) were promoted internally, and two-thirds of executives predict that nearly every major company will have one within the next two years.

    The title first appeared in the early 2010s, as deep learning began to take off, but it truly gained momentum after 2023 with the rise of generative A.I. The U.S. government cemented its importance in 2024 through Executive Order 14110, which required every federal agency to appoint a CAIO to oversee A.I. governance and accountability.

    The private sector quickly followed suit. A.I. strategists began moving into the C-suite, marking a new kind of leadership role for the algorithmic age.

    “A.I. was often a specialist function living under the CTO. Organizations realized A.I. was too strategic to be managed as a side project,” Baris Gultekin, software giant Snowflake’s vice president of A.I., told Observer. “In addition to CAIOs, we often hear that Snowflake customers now also have large internal A.I. councils made up of individuals across departments to strategically and effectively facilitate enterprise-wide A.I. adoption.” Gultekin reports through Snowflake’s product leadership to the CEO.

    Some of the most influential chief A.I. officers are already reshaping Big Tech. At Meta, Alexandr Wang, former Scale AI CEO, took on the role in mid-2025, co-leading Meta Superintelligence Labs alongside Nat Friedman, former GitHub CEO. Microsoft’s Mustafa Suleyman, DeepMind co-founder and former Inflection AI CEO, now heads Microsoft AI, overseeing the company’s long-term infrastructure push. At Apple, veteran A.I. leader John Giannandrea continues to guide the company’s A.I. direction, reporting directly to CEO Tim Cook.

    Companies beyond tech are also joining the trend. Lululemon appointed Ranju Das as its first chief A.I. and technology officer in September to boost personalization and innovation. Consulting giant PwC recently appointed Dan Priest, former VP and CIO at Toyota Financial Services, as its first CAIO for the U.S. market. Even universities, such as UCLA and the University of Utah, have added CAIOs to coordinate campuswide A.I. strategy.

    From CIO to CDO to CAIO

    In the 1980s, chief information officers (CIOs) led the IT revolution; in the 2010s, chief data officers (CDOs) rose with big data; now, CAIOs embody the institutionalization of A.I.

    “CAIOs are responsible for exploring what parts of the business can be safely delegated to A.I. agents, how teams can properly govern A.I. decisions, the types of infrastructure needed to serve context-rich data to A.I. systems, and much more,” Sean Falconer, head of A.I. at data streaming platform Confluent, told Observer. “CDOs ensure the data is clean, while CIOs ensure it’s accessible. CAIOs ensure data becomes actionable and capable of reasoning, predicting and taking autonomous steps on behalf of the business.”

    In industries like banking, health care and retail, CAIOs often act as translators, turning complex A.I. potential into practical results. “They navigate complex legacy processes and cultural resistance, making upskilling and securing organizational willingness to change as critical as building the models themselves,” Snowflake’s Gultekin said.

    The rise of the chief A.I. officer also parallels the growing influence of data engineers. A study by Snowflake and MIT Technology Review Insights found that 72 percent of global executives now view data engineers as essential to business success. More than half said data engineers play a major role in shaping A.I. deployment and determining which use cases are feasible.

    “Businesses will always require a CIO, which has also evolved over the years into providing strategic guidance to the business rather than just simply an IT function. Where we see overlap (with CAIOs) are areas that are critical to a company, like governance, tech enablement and strategic alignment,” Bhaskar Roy, chief of A.I. & product solutions at business automation platform Workato, told Observer. “The mandate for CAIOs is clear: continuously push the boundaries of what’s possible with A.I., and ensure the organization remains at the forefront of technological change, all while listening to customers’ needs and concerns.”

    Victor Dey

  • Anthropic Is Hiring Researchers to Study A.I. Consciousness and Welfare

    A.I. startup Anthropic is best known for its Claude chatbot. Courtesy Anthropic

    Last year, Anthropic hired its first-ever A.I. welfare researcher, Kyle Fish, to examine whether A.I. models are conscious and deserving of moral consideration. Now, the fast-growing startup is looking to add another full-time employee to its model welfare team as it doubles down on efforts in this small but burgeoning field of research.

    The question of whether A.I. models could develop consciousness—and whether the issue warrants dedicated resources—has sparked debate across Silicon Valley. While some prominent A.I. leaders warn that such inquiries risk misleading the public, others, like Fish, argue that it’s an important but overlooked area of study.

    “Given that we have models which are very close to—and in some cases at—human-level intelligence and capabilities, it takes a fair amount to really rule out the possibility of consciousness,” said Fish on a recent episode of the 80,000 Hours podcast.

    Anthropic recently posted a job opening for a research engineer or scientist to join its model welfare program. “You will be among the first to work to better understand, evaluate and address concerns about the potential welfare and moral status of A.I. systems,” the listing reads. Responsibilities include running technical research projects and designing interventions to mitigate welfare harms. The salary for the role ranges between $315,000 and $340,000.

    Anthropic did not respond to requests for comment from Observer.

    The new hire will work alongside Fish, who joined Anthropic last September. He previously co-founded Eleos AI, a nonprofit focused on A.I. wellbeing, and co-authored a paper outlining the possibility of A.I. consciousness. A few months after Fish’s hiring, Anthropic announced the launch of its official research program dedicated to model welfare and interventions.

    As part of this program, Anthropic recently gave its Claude Opus 4 and 4.1 models the ability to exit user interactions deemed harmful or abusive, after observing “a pattern of apparent distress” during such exchanges. Instead of being forced to remain in these conversations indefinitely, the models can now end communications they find aversive.

    For now, the bulk of Anthropic’s model welfare interventions will remain low-cost and designed to minimize interference with user experience, Fish told 80,000 Hours. He also hopes to explore how model training might raise welfare concerns and experiment with creating “some kind of model sanctuary”—a controlled environment akin to a playground where models can pursue their own interests “to the extent that they have them.”

    Anthropic may be the most public major company investing in model welfare, but it’s not alone. In April, Google DeepMind posted an opening for a research scientist to explore topics including “machine consciousness,” according to 404 Media.

    Still, skepticism persists in Silicon Valley. Mustafa Suleyman, CEO of Microsoft AI, argued last month that model welfare research is “both premature and frankly dangerous.” He warned that encouraging such work could fuel delusions about A.I. systems, and that the emergence of “seemingly conscious A.I.” could prompt calls for A.I. rights.

    Fish, however, maintains that the possibility of A.I. consciousness shouldn’t be dismissed. He estimates a 20 percent chance that “somewhere, in some part of the process, there’s at least a glimmer of conscious or sentient experience.”

    As Fish looks to expand his team with a new hire, he also hopes to broaden the scope of Anthropic’s welfare agenda. “To date, most of what we’ve done has had a flavor of identifying low-hanging fruit where we can find it and then pursuing those projects,” he said. “Over time, we hope to move more in the direction of really aiming at answers to some of the biggest-picture questions and working backwards from those to develop a more comprehensive agenda.”

    Alexandra Tremayne-Pengelly

  • Microsoft introduces a pair of in-house AI models

    Microsoft is expanding its AI footprint with the launch of two new models that its teams trained completely in-house. MAI-Voice-1 is the tech major’s first natural speech generation model, while MAI-1-preview is text-based and is the company’s first foundation model trained end-to-end. MAI-Voice-1 is currently being used in the Copilot Daily and Podcast features. Microsoft has made MAI-1-preview available for public tests on LMArena, and will begin previewing it in select Copilot situations in the coming weeks.

    In an interview with Semafor, Microsoft AI division leader Mustafa Suleyman said the pair of models was developed with a focus on efficiency and cost-effectiveness. MAI-Voice-1 runs on a single GPU, and MAI-1-preview was trained on about 15,000 Nvidia H100 GPUs. For context, other models, such as xAI’s Grok, took more than 100,000 of those chips to train. “Increasingly, the art and craft of training models is selecting the perfect data and not wasting any of your flops on unnecessary tokens that didn’t actually teach your model very much,” Suleyman said.

    Although it is being used to test the in-house models, Microsoft Copilot is primarily built on OpenAI’s GPT technology. The decision to build its own models, despite its heavy investment in the newer AI company, indicates that Microsoft wants to be an independent competitor in this space. It could take time to reach parity with the companies that have emerged as forerunners in AI development, but Suleyman told Semafor that Microsoft has “an enormous five-year roadmap that we’re investing in quarter after quarter.” With concerns arising that AI could be facing a bubble pop, Microsoft’s timeline will need to be aggressive to ensure taking the independent path is worthwhile.

    Anna Washenko

  • Microsoft AI CEO: Dangerous, Seemingly Conscious AI Is Close | Entrepreneur

    AI that appears to be conscious could arrive within the next few years, posing a “dangerous” threat to society, says one AI leader.

    Microsoft AI CEO Mustafa Suleyman, 41, wrote in a personal essay published earlier this week that Seemingly Conscious AI (SCAI), which is artificial intelligence so advanced that it can convince humans that it’s capable of formulating its own thoughts and beliefs, is only a few years away.

    Related: Microsoft Claims Its AI Is Better Than Doctors at Diagnosing Patients, But ‘You Definitely Still Need Your Physician’

    Even though there is “zero evidence” that AI is conscious at the moment, it’s “inevitable and unwelcome” that SCAI could appear within the next two to three years, Suleyman wrote.

    Suleyman’s “central worry” is that SCAI could appear to be empathetic and act with greater autonomy, which would lead users of SCAI to “start to believe in the illusion of AIs as conscious entities” to the point that they advocate for AI rights and even AI citizenship. This would mark a “dangerous turn” for society, where people become attached to AI and disconnected from reality.

    “This development will be a dangerous turn in AI progress and deserves our immediate attention,” Suleyman wrote in the essay. He added later that AI “disconnects people from reality, fraying fragile social bonds and structures, distorting pressing moral priorities.”

    Related: ‘Plenty of Room for Startups’: This Is Where Entrepreneurs Should Look for Business Opportunities in AI, According to Microsoft’s AI CEO

    Suleyman said that he was becoming “more and more concerned” about AI psychosis, or humans experiencing false beliefs, delusions, or paranoid feelings after prolonged interactions with AI chatbots. Examples of AI psychosis include users forming a romantic relationship with an AI chatbot or feeling like they have superpowers after interacting with it.

    AI psychosis will apply to more than just individuals who are at risk of mental health issues, Suleyman predicted. He said that users have to “urgently” discuss “guardrails” around AI to protect people from the technology’s negative effects.

    Microsoft AI CEO Mustafa Suleyman. Photographer: David Ryder/Bloomberg via Getty Images

    Suleyman became Microsoft’s AI CEO last year after co-founding and running the AI startup Inflection AI for two years, per LinkedIn. Microsoft is the second most valuable company in the world, with a market capitalization of $3.78 trillion at the time of writing.

    Related: Microsoft AI CEO Says Almost All Content on the Internet Is Fair Game for AI Training

    Suleyman also co-founded DeepMind, an AI research and development company acquired by Google for around $600 million in 2014.

    Suleyman isn’t the first CEO to warn about AI’s ill effects. In a talk at a Federal Reserve conference last month in Washington, D.C., OpenAI CEO Sam Altman said that “emotional overreliance” on ChatGPT keeps him up at night.

    “People rely on ChatGPT too much,” Altman said at the event. “That feels really bad to me.”

    Sherin Shibu

  • Microsoft A.I. Chief Mustafa Suleyman Sounds Alarm on ‘Seemingly Conscious A.I.’

    Mustafa Suleyman joined Microsoft last year to head up its consumer A.I. efforts. Stephen Brashear/Getty Images

    Will A.I. systems ever achieve human-like “consciousness”? Given the field’s rapid pace, the answer is likely yes, according to Microsoft AI CEO Mustafa Suleyman. In a new essay published yesterday (Aug. 19), he described the emergence of “seemingly conscious A.I.” (SCAI) as a development with serious societal risks. “Simply put, my central worry is that many people will start to believe in the illusion of A.I.s as conscious entities so strongly that they’ll soon advocate for A.I. rights, model welfare and even A.I. citizenship,” he wrote. “This development will be a dangerous turn in A.I. progress and deserves our immediate attention.”

    Suleyman is particularly concerned about the prevalence of A.I.’s “psychosis risk,” an issue that’s picked up steam across Silicon Valley in recent months as users reportedly lose touch with reality after interacting with generative A.I. tools. “I don’t think this will be limited to those who are already at risk of mental health issues,” Suleyman said, noting that “some people reportedly believe their A.I. is God, or a fictional character, or fall in love with it to the point of absolute distraction.”

    OpenAI CEO Sam Altman has expressed similar worries about users forming strong emotional bonds with A.I. After OpenAI temporarily cut off access to its GPT-4o model earlier this month to make way for GPT-5, users voiced widespread disappointment over the loss of the predecessor’s conversational and effusive personality.

    “I can imagine a future where a lot of people really trust ChatGPT’s advice for their most important decisions,” said Altman in a recent post on X. “Although that could be great, it makes me uneasy.”

    Not everyone sees it as a red flag. David Sacks, the Trump administration’s “A.I. and Crypto Czar,” likened concerns over A.I. psychosis to past moral panics around social media. “This is just a manifestation or outlet for pre-existing problems,” said Sacks earlier this week on the All-In Podcast.

    Debates will only grow more complex as A.I.’s capabilities advance, according to Suleyman, who oversees Microsoft’s consumer A.I. products like Copilot. Suleyman co-founded DeepMind in 2010 and later launched Inflection AI, a startup largely absorbed by Microsoft last year.

    Building an SCAI will likely become possible in the coming years. To achieve the illusion of a human-like consciousness, A.I. systems will need language fluency, empathetic personalities, long and accurate memories, autonomy and goal-planning abilities—qualities already possible with large language models (LLMs) or soon to be.

    While some users may treat SCAI as a phone extension or pet, others “will come to believe it is a fully emerged entity, a conscious being deserving of real moral consideration in society,” said Suleyman. He added that “there will come a time when those people will argue that it deserves protection under law as a pressing moral matter.”

    Some in the A.I. field are already exploring “model welfare,” a concept aimed at extending moral consideration to A.I. systems. Anthropic launched a research program in April to investigate model welfare and interventions. Earlier this month, the startup gave its Claude Opus 4 and 4.1 models the ability to end harmful or abusive user interactions after observing “a pattern of apparent distress” in the systems during certain conversations.

    Encouraging principles like model welfare “is both premature, and frankly dangerous,” according to Suleyman. “All of this will exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, increase new dimensions of polarization, complicate existing struggles for rights, and create a huge new category error for society.”

    To prevent SCAIs from becoming commonplace, A.I. developers should avoid promoting the idea of conscious A.I.s and instead design models that minimize signs of consciousness or human empathy triggers. “We should build A.I. for people; not to be a person,” said Suleyman.

    Alexandra Tremayne-Pengelly

  • UK competition watchdog clears Microsoft’s hiring of AI startup’s core staff

    LONDON (AP) — British regulators on Wednesday cleared Microsoft’s hiring of key staff from startup Inflection AI, saying the deal wouldn’t stifle competition in the country’s artificial intelligence market.

    The Competition and Markets Authority had opened a preliminary investigation in July into Microsoft’s recruitment of Inflection’s core team, including co-founder and CEO Mustafa Suleyman, chief scientist Karen Simonyan and several top engineers and researchers.

    The watchdog said its investigation found that the hirings amounted to a “merger situation” but that the “transaction does not give rise to a realistic prospect of a substantial lessening of competition.”

    Big technology companies have been facing scrutiny on both sides of the Atlantic lately for gobbling up talent and products at innovative AI startups without formally acquiring them.

    Three U.S. Senators called for the practice to be investigated after Amazon pulled a similar maneuver this year in a deal with San Francisco-based Adept that sent its CEO and key employees to the e-commerce giant. Amazon also got a license to Adept’s AI systems and datasets.

    The U.K. watchdog said Microsoft hired “almost all of Inflection’s team” and licensed its intellectual property, which gave it access to the startup’s AI model and chatbot development capabilities.

    Inflection’s main product is a chatbot named Pi that specializes in “emotional intelligence” by being “kind and supportive.”

    However, the CMA said the deal won’t result in a big loss of competition because Inflection has a “very small” share of the U.K. consumer market for chatbots, and it lacks chatbot features that make it more attractive than rivals.

  • Nvidia’s Billion-Dollar A.I. Pitch: How the Chip Giant Ramps Up Startup Bets

    Jensen Huang prepares to throw out the ceremonial first pitch before the game between the San Francisco Giants and the Arizona Diamondbacks at Oracle Park on Sept. 03, 2024 in San Francisco. Lachlan Cunningham/Getty Images

    There’s no question that Nvidia (NVDA) is one of the biggest winners of the A.I. boom so far. Fueled by an insatiable demand for its graphics processing units (GPUs), the chipmaker’s stock has skyrocketed by more than 450 percent since early 2023. As Nvidia’s market cap and revenue soar, so does the pace of its investing in A.I. startups. More than half of the company’s startup investments since 2005 took place in the past two years.

    The value of the company’s startup investments reportedly totaled more than $1.5 billion at the beginning of 2024, a significant jump from the $300 million a year prior. The chipmaker has participated in more than ten $100 million-plus funding rounds for A.I. startups in 2024 alone, according to data from Crunchbase, and has backed more than 50 startups since 2023. That’s not to mention a flurry of activity from the company’s venture capital arm NVentures, which separately made 26 investments in 2023 and 2024.

    Nvidia’s seemingly unflappable upward trajectory took a hit yesterday (Sept. 3) after reports surfaced that it had received a subpoena from the U.S. Department of Justice as part of an antitrust probe. The company’s stock dropped nearly 10 percent, shaving $279 billion off its market cap, which currently stands at $2.6 trillion.

    But its falling stock price doesn’t mean the company is slowing down in its startup department. In addition to eyeing an investment in an upcoming funding round in ChatGPT-maker OpenAI, Nvidia yesterday unveiled its participation in a more than $100 million funding round for the Tokyo-based Sakana AI, a company that specializes in accessible A.I. models trained on small datasets.

    “We invest in these companies because they’re incredible at what they do,” Nvidia founder and CEO Jensen Huang told Wired earlier this year. “These are some of the best minds in the world.”

    From companies specializing in humanoid robots to autonomous vehicles, here’s a look at some of Nvidia’s most significant startup investments:

    Perplexity AI

    Huang hasn’t been shy about his love for Perplexity AI, the A.I.-powered search engine positioned as a competitor to the likes of Google. The Nvidia CEO uses the startup’s tool nearly every day for research, according to Huang’s interview with Wired.

    He has also put his money where his mouth is, with Nvidia partaking in a $62.7 million funding round for Perplexity AI in April that valued the startup at $1 billion. Led by investor Daniel Gross, the round included participants like Amazon (AMZN)’s Jeff Bezos. It wasn’t the first time Nvidia has backed the company—the chipmaker also invested in Perplexity AI during another funding round in January that valued the startup at $73.6 million.

    Hugging Face

    Hugging Face, a startup providing open-source A.I. developer platforms, has long had close ties to Nvidia. The chipmaker participated in a $235 million funding round in Hugging Face in August 2023 that valued the company at $4.5 billion. Other corporate investors participating in the round included Google, Amazon, Intel, AMD and Salesforce.

    Hugging Face has previously included Nvidia hardware among its shared resources. In May, it launched a new program that donated $10 million worth of free, shared Nvidia GPUs to be used by A.I. developers.

    Adept AI

    Unlike more well-known A.I. assistants from companies such as OpenAI and Anthropic, Adept AI’s primary product doesn’t center around text or image generation. Instead, the startup is focused on building an assistant that can complete tasks on a computer, such as generating a report or navigating the web, and is able to use software tools. Nvidia is on board, having participated in a $350 million funding round in March 2023.

    Databricks

    After receiving a giant valuation of $43 billion last fall, Databricks became one of the world’s most valuable A.I. companies. The data analytics software provider unsurprisingly uses Nvidia’s GPUs and has been backed by the chipmaker alongside other investors like Andreessen Horowitz and Capital One Ventures, all of whom participated in a $500 million funding round in September 2023. “Databricks is doing incredible work with Nvidia technology to accelerate data processing and generative A.I. models,” said Huang in a statement at the time.

    Cohere

    A formidable opponent to OpenAI and Anthropic, the Canadian startup Cohere specializes in A.I. models for enterprises. The company’s growth over the past five years has attracted backers such as Nvidia, Salesforce and Cisco, which funded Cohere during a round held in July. Nvidia also took part in a May 2023 funding round that brought in some $270 million for the startup.

    Mistral AI

    Mistral AI is a French startup focusing on developing open-source A.I. models. It was founded by former Google DeepMind and Meta employees in April 2023. Nvidia has participated in two of the startup’s fundraising rounds, a $518 million round in June and a $426 million round in December 2023. The collaboration between the two companies doesn’t end there—in July, Nvidia and Mistral AI jointly released a small and accessible language model for developers.

    Figure

    Huang has long reiterated his belief that A.I.-powered robots able to work among humans will constitute the next wave of technology. It is, therefore, no surprise that Nvidia is a backer of Figure, a startup developing humanoid robots for use in warehouses, transportation and retail. Nvidia reportedly funneled $50 million towards the company during a February funding round that raised a total of $675 million and included participants like Bezos and Microsoft.

    Scale AI

    To properly train A.I. tools like OpenAI’s ChatGPT, tech companies need vast amounts of data. This is where A.I. startups like Scale AI, which provides troves of accurately labeled data and is headed by billionaire Alexandr Wang, come in. Nvidia participated in a $1 billion funding round for the company in May alongside Big Tech players like Amazon and Meta.

    Wayve

    Autonomous driving is another area of interest for A.I. leaders across the tech world. Huang himself said that “every single car, someday, will have to have autonomous capability” in a recent interview with Yahoo Finance. One of the startups at the forefront of this wave is the U.K.-based Wayve. Nvidia participated in a $1 billion funding round for the startup in May.

    Inflection AI

    Out of the 92 startups Nvidia has backed throughout the decades, Huang’s company has only been a lead investor in 20 rounds. One of these occurred in June 2023, when Nvidia led a staggering $1.3 billion round for Inflection AI. The chipmaker co-led the round alongside Microsoft, Bill Gates and former Google CEO Eric Schmidt.

The A.I. startup, which was co-founded by LinkedIn (LNKD) co-founder Reid Hoffman and Google DeepMind co-founder Mustafa Suleyman and most recently valued at $4 billion, produces a chatbot known as Pi. Much of the round’s funding went towards bolstering Inflection AI’s computing cluster of 22,000 Nvidia H100 GPUs.

    Nvidia’s Billion-Dollar A.I. Pitch: How the Chip Giant Ramps Up Startup Bets

    Alexandra Tremayne-Pengelly


  • This Startup Has Built an Algorithm to Pay Creators for Their Work Used to Train A.I.


    Some startups are exploring the revenue-sharing model to solve A.I.’s growing IP dilemma. Alex Shuper/Unsplash

    OpenAI, the creator of ChatGPT, has come under fire from publishers and artists who alleged the company scraped their work from the internet to train GPT, its large language model, without their consent. These concerns have sparked lawsuits against the A.I. giant on accusations of copyright infringement, highlighting a major ethical dilemma that comes with pushing A.I.’s capabilities forward. Some startups are exploring a solution that focuses on sharing revenue with content creators. In August, Perplexity AI, an A.I.-powered search engine, introduced a program to pay publishers a portion of ad revenue generated by search queries if their content informs its outputs. ProRata.ai, a startup founded by a pioneer of the early internet monetization model, is developing a similar algorithm to compensate publishers, authors and other creators whose work is used to train generative A.I.

    ProRata claims it has created an algorithm that can review an A.I.-generated output, identify the source of information based on novel facts and textual styles, and calculate how much each source contributed to the response. These percentages are then used to cut checks to these creators at the end of every month—a model that, in theory, could help protect the livelihoods of creatives and prevent future lawsuits around intellectual property. 

“If you don’t share, then creativity is unsustainable. There’s no way for you to make a living,” ProRata’s co-founder and CEO Bill Gross told Observer regarding the careers of artists. Gross is credited with inventing the pay-per-click monetization model for internet search at a company he founded in the late 1990s, which was later acquired by Yahoo, according to ProRata’s website.

    The startup, which raised $25 million from venture capital firms Mayfield Fund, Prime Movers Lab, Revolution Ventures and IdeaLab Studio in a series A funding round in August, is set to showcase the algorithm through an A.I.-powered search engine expected to release in October. Starting at $19 a month, the engine will monetize queries through advertisements and subscription payments, according to Gross. While 50 percent of the revenue generated will go to ProRata, the other half will be split proportionately across creators. 
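The split described above (half to ProRata, half divided among creators in proportion to their attribution percentages) reduces to simple proportional arithmetic. Below is a minimal illustrative sketch; the function, publisher names and figures are hypothetical, not ProRata's actual algorithm:

```python
def split_payout(revenue: float, contributions: dict[str, float],
                 platform_share: float = 0.5) -> dict[str, float]:
    """Split one period's revenue: the platform keeps `platform_share`,
    and the remainder is divided across creators in proportion to the
    attribution scores credited to each of them."""
    total = sum(contributions.values())
    creator_pool = revenue * (1 - platform_share)
    return {name: creator_pool * score / total
            for name, score in contributions.items()}

# Hypothetical month: $10,000 of query revenue, attribution scores
# of 60/30/10 across three contributing publishers.
payouts = split_payout(10_000, {"Fortune": 60, "Time": 30, "The Atlantic": 10})
# The creator pool is $5,000: $3,000, $1,500 and $500 respectively.
```

The key design point is that payouts scale with measured contribution rather than a flat per-source fee, which is what lets the model cover both large publishers and small independent creators.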

    ProRata’s ultimate goal isn’t to create an alternative to Google Search, but to introduce a new business model that search engines could adopt to ensure creators get paid for their contributions to A.I. “We want to make that the industry standard,” Gross said. While A.I. search features from Google and Microsoft’s Bing don’t directly share ad revenue with publishers, they refer users to links from publishers as a way to drive traffic to their sites.

The answer engine will only be trained on data from creators who partner with ProRata. That means the model will draw from a limited pool of data, which could compromise the accuracy of its outputs. Still, ProRata isn’t focused on making its A.I. search engine a standalone product but rather on having the pay-per-use model adopted by major search engines.

So far, the company has inked deals with publishers like The Atlantic, Fortune, Financial Times, Time, and Axel Springer, the German company that owns Politico and Business Insider. Authors like Walter Isaacson, Adam Grant, and Ian Bremmer have also agreed, as have music industry veterans like Universal Music Group. ProRata hasn’t encountered any resistance or skepticism from its partners yet, according to Gross. “Most people just want us to be wildly successful so they’ll get a paycheck,” the CEO said. The real challenge, he notes, is convincing the Big Tech companies that have been crawling web data for free to adopt ProRata’s business model.

“It’s amazing to me that some of the people think that crawling is not stealing,” Gross said. “Basically, Mustafa, the CEO of Microsoft A.I., came out and said, ‘Hey, if it’s available on the web, it’s free for us to use.’ And that’s just bullshit,” Gross added, referring to comments made by Google DeepMind co-founder and Microsoft AI chief Mustafa Suleyman during a CNBC interview in July, when asked whether training A.I. models on web content is akin to intellectual property theft. “Just because something is available and visible doesn’t mean it’s open source,” Gross said.

    ProRata.ai CEO Bill Gross. Andres Castaneda

    Paying creators may be a temporary “Band-Aid” solution

Financial compensation may not fully address the ethical concerns of having a creator’s work used for A.I. training without explicit permission, according to Star Kashman, a tech lawyer and partner at Cyber Law Firm with expertise in digital copyright law. She cites the example of actress Scarlett Johansson, who allegedly refused to give OpenAI permission to use her voice for ChatGPT despite financial offers.

    “Many authors and creators have personal, moral objections to their work being utilized for A.I. training, regardless of compensation,” Kashman told Observer. “Without explicit permission, paying creators may be a temporary ‘Band-Aid’ solution, but it may not be an all-encompassing resolution to deeper concerns about consent and the impact on creative works.” 

    The “pay-per-use” model could also potentially lead to a new crop of legal issues. Creators may disagree over whether the payment they receive “accurately reflects” what they contributed to the A.I. systems, especially if they can’t set their own rates, Kashman said. Moreover, A.I. tools may favor the work of bigger, more established creators over smaller ones even if their content is more relevant to a particular query, similar to how search engine optimization (SEO) works. Compensation may also not fully protect A.I. companies from being sued for intellectual property theft, which she said could be easier to prove in court with concrete attribution. 

“There will continue to be many IP cases until the Copyright Act is amended to allow scraping of copyrighted content for the purposes of training LLMs,” Gabriel Vincent, another partner at Cyber Law Firm, told Observer, echoing Kashman’s comments.

    ProRata has plans to diversify its model to include more than just text. After the October launch, the startup will focus on collaborating with music companies, according to Gross. He also hopes to collaborate with video and movie brands as well as smaller, independent creators and plans to license its attribution technology to A.I. companies that can implement it into their own models. 

    “A.I. is so amazing, but it needs to be fair to all parties,” Gross said. 


    Aaron Mok


  • Why Would OpenAI Want to Become a True For-Profit Anyway?


    OpenAI CEO Sam Altman. Drew Angerer/Getty Images

OpenAI, the San Francisco-based A.I. powerhouse now valued at $80 billion, operates under a unique structure: a nonprofit entity that runs a capped-profit subsidiary in which investors can buy equity. However, CEO Sam Altman may be looking to transition the organization into a fully for-profit one, The Information reported last month. The move would be notable, as OpenAI has so far reaped the positive publicity of being a nonprofit while attracting the kind of significant investment that typically goes to a for-profit company.

    OpenAI was founded as a nonprofit research lab in 2015 by Altman, Elon Musk, and Ilya Sutskever, among others. Born out of concern that financial incentives could lead A.I. astray, OpenAI declared in a blog post published upon its founding, “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.”


In 2019, OpenAI introduced a capped-profit arm, describing the resulting structure as “a partnership between our original nonprofit and a new capped profit arm.” Having found that relying purely on donations made it difficult to stay competitive, OpenAI used this dual model to raise money for its capital-intensive research while staying true to its nonprofit mission.

However, in the fine print, OpenAI reveals that the cap on investor returns is an extraordinary 100x. For context, Nvidia, the most prominent A.I. stock, has risen around 30-fold over the last five years. OpenAI’s profit cap is so high that it might as well not exist.

    At the center of the model transition is OpenAI’s board

OpenAI maintains that it is accountable to an independent nonprofit board whose members own no equity in the company. However, observers began questioning who actually calls the shots at the company after its former board tried to fire Altman late last year. Microsoft (MSFT), OpenAI’s largest corporate investor with a $13 billion stake, agreed to hire Altman within three days of his firing. Yet Altman won his job back at OpenAI only days later, and Microsoft, surprisingly, appeared to have encouraged the move. This raises the question: in the fierce race for A.I. talent, why did Microsoft not try harder to keep Altman from rejoining its competitor?

    “What we call OpenAI should be called Microsoft A.I. Microsoft controls OpenAI,” said NYU Professor Scott Galloway in an interview with Tech.Eu. (In March, Microsoft tapped Mustafa Suleyman, a co-founder of Google’s A.I. lab DeepMind, to lead a new unit called Microsoft A.I.) Microsoft holds a non-voting observer role on the board of OpenAI. On July 3, Apple, which in June announced a partnership with OpenAI, said its App Store chief Phil Schiller would receive a similar seat on the board.

It is unclear how OpenAI would transition to a for-profit model; the move may involve doing away with the nonprofit board that oversees the company. In response to a request for comment from Reuters, OpenAI said, “We remain focused on building A.I. that benefits everyone. The nonprofit is core to our mission and will continue to exist.”

OpenAI’s capped-profit model is rare, but its hybrid governance structure has ample precedent. The food brand Newman’s Own is controlled by a nonprofit foundation that wholly owns the for-profit No Limit, which produces and sells all Newman’s Own products. In 2022, Patagonia’s founder donated 100 percent of the for-profit clothing brand’s voting shares to a nonprofit, making it another for-profit corporation owned by a nonprofit.


    Shreyas Sinha


  • Microsoft Exec Says AI Is ‘a New Kind of Digital Species’


    Microsoft’s AI chief said those building AI should make sure it’s easy for the public to comprehend it—and offered his own analogy to help do so.

    Mustafa Suleyman, chief executive of Microsoft AI, said during a talk at TED 2024 that AI is the newest wave of creation since the start of life on Earth, and that “we are in the fastest and most consequential wave ever.”

    Suleyman said the industry needs to find the right analogies for AI’s future potential as a way to “prioritize safety” and “to ensure that this new wave always serves and amplifies humanity.” While the AI community has always referred to AI technology as “tools,” Suleyman said the term doesn’t capture its capabilities.

    “To contain this wave, to put human agency at its center, and to mitigate the inevitable unintended consequences that are likely to arise, we should start to think about them as we might a new kind of digital species,” Suleyman said.

He also said he sees a future where “everything”—from people to businesses to the government—will be represented by an interactive persona, or “personal AI,” that is “infinitely knowledgeable,” “factually accurate, and reliable.”

    “If AI delivers just a fraction of its potential” in finding solutions to problems in everything from healthcare to education to climate change, “the next decade is going to be the most productive in human history,” Suleyman said.

    When asked what keeps him up at night, Suleyman said the AI industry faces a risk of falling into the “pessimism aversion trap,” when it should actually “have the courage to confront the potential of dark scenarios” to get the most out of AI’s potential benefits.

    “The good news is that if you look at the last two or three years, there have been very, very few downsides,” Suleyman said. “It’s very hard to say explicitly what harm an LLM has caused. But that doesn’t mean that that’s what the trajectory is going to be over the next ten years.”

    While Suleyman said he sees five to 10 years before humans have to confront the dangers of autonomous AI models, he believes those potential dangers should be talked about now.

    This article originally appeared on Quartz.

    Britney Nguyen, Quartz
