ReportWire

Tag: anthropic

  • Anthropic Agrees to Pay Authors at Least $1.5 Billion in AI Copyright Settlement

    Anthropic has agreed to pay at least $1.5 billion to settle a lawsuit brought by a group of book authors alleging copyright infringement, an estimated $3,000 per work. In a court motion on Friday, the plaintiffs emphasized that the terms of the settlement are “critical victories” and that going to trial would have been an “enormous” risk.

    This is the first class action settlement centered on AI and copyright in the United States, and the outcome may shape how regulators and creative industries approach the legal debate over generative AI and intellectual property. According to the settlement agreement, the class action will apply to approximately 500,000 works, but that number may go up once the list of pirated materials is finalized. For every additional work, the artificial intelligence company will pay an extra $3,000. Plaintiffs plan to deliver a final list of works to the court by October.
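
    As a rough illustration of the arithmetic described above, here is a minimal sketch using the reported figures; the function and variable names are illustrative, not taken from any court document.

        # Back-of-envelope sketch of the reported settlement arithmetic (illustrative only).
        BASE_SETTLEMENT = 1_500_000_000   # minimum payout in USD
        ESTIMATED_WORKS = 500_000         # approximate number of covered works
        PER_EXTRA_WORK = 3_000            # USD owed for each work beyond the estimate

        def estimated_payout(final_work_count: int) -> int:
            """Estimate the total payout once the final list of works is submitted."""
            extra = max(0, final_work_count - ESTIMATED_WORKS)
            return BASE_SETTLEMENT + extra * PER_EXTRA_WORK

        print(estimated_payout(500_000))  # 1,500,000,000 -> roughly $3,000 per work
        print(estimated_payout(520_000))  # 1,560,000,000 after 20,000 additional works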

    “This landmark settlement far surpasses any other known copyright recovery. It is the first of its kind in the AI era. It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners. This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong,” says colead plaintiffs’ counsel Justin Nelson of Susman Godfrey LLP.

    Anthropic is not admitting any wrongdoing or liability. “Today’s settlement, if approved, will resolve the plaintiffs’ remaining legacy claims. We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems,” Anthropic deputy general counsel Aparna Sridhar said in a statement.

    The lawsuit, which was originally filed in 2024 in the US District Court for the Northern District of California, was part of a larger ongoing wave of copyright litigation brought against tech companies over the data they used to train artificial intelligence programs. Authors Andrea Bartz, Kirk Wallace Johnson, and Charles Graeber alleged that Anthropic trained its large language models on their work without permission, violating copyright law.

    This June, senior district judge William Alsup ruled that Anthropic’s AI training was shielded by the “fair use” doctrine, which allows unauthorized use of copyrighted works under certain conditions. It was a win for the tech company but came with a major caveat. As it gathered materials to train its AI tools, Anthropic had relied on a corpus of books pirated from so-called “shadow libraries,” including the notorious site LibGen, and Alsup determined that the authors should still be able to bring Anthropic to trial in a class action over pirating their work. (Anthropic maintains that it did not actually train its products on the pirated works, instead opting to purchase copies of books.)

    “Anthropic downloaded over seven million pirated copies of books, paid nothing, and kept these pirated copies in its library even after deciding it would not use them to train its AI (at all or ever again). Authors argue Anthropic should have paid for these pirated library copies. This order agrees,” Alsup wrote in his summary judgment.

    Kate Knibbs

    Source link

  • Paris-Based Mistral AI Seeks $14B Valuation as Europe Charts Its Own A.I. Path

    CEO Arthur Mensch is steering Mistral away from the AGI hype and toward Europe’s A.I. sovereignty. Photo by Ludovic Marin/AFP via Getty Images

    Paris-based Mistral AI is on track for a new funding round that would value the A.I. startup at 12 billion euros ($14 billion), Bloomberg reports. The investment, expected to total around 2 billion euros ($2.3 billion), would solidify the company’s position at the center of Europe’s sovereign A.I. strategy and bring it closer to its goal of challenging dominant U.S. rivals.

    Founded in 2023, Mistral has already raised some 1.1 billion euros ($1.3 billion) over the past two years. Its upcoming valuation would more than double the 5.8 billion euros ($6.8 billion) figure it reached last June following a 468 million euro ($550 million) round that drew backers such as Andreessen Horowitz, Salesforce and Nvidia.

    Mistral did not respond to requests for comment from Observer.

    For now, the startup still pales in size compared to its Silicon Valley competitors. Anthropic closed a round earlier this month at a staggering $183 billion valuation, while OpenAI is reportedly eyeing $500 billion. Still, Mistral is eager to compete. Its products include an A.I. assistant called “Le Chat,” designed for European customers and positioned as an alternative to OpenAI’s ChatGPT and Anthropic’s Claude chatbots.

    Mistral was co-founded by Arthur Mensch, a former researcher at Google DeepMind, along with former Meta researchers Timothée Lacroix and Guillaume Lample. Mistral has tried to distinguish itself by emphasizing open access. It has released several open-source language models. Unlike American A.I. giants, Mistral has also rejected pursuing AGI. Mensch, who serves as CEO, has said his firm is more focused on ensuring U.S. startups don’t dominate how the technology shapes global culture.

    Mistral is central to Europe’s A.I. playbook

    Mistral is part of a broader surge in European A.I. investment. In 2024, venture capital rounds involving A.I. and machine learning companies across the continent were expected to reach 13.2 billion euros ($15.5 billion), a 20 percent increase from the year before, according to PitchBook.

    As one of Europe’s leading startups, Mistral is central to the region’s goal of building an A.I. ecosystem independent of technology from America or China. Earlier this year, the company partnered with Nvidia to launch a European A.I. platform that will allow companies to develop applications and strengthen domestic infrastructure. French President Emmanuel Macron hailed the initiative as “a game changer, because it will increase our sovereignty and it will allow us to do much more.”

    Mistral’s rapid ascent is tied to broader efforts to bolster A.I. across Europe and France. Its Nvidia partnership followed Macron’s announcement at Paris’ global A.I. summit in February, where he pledged more than 100 billion euros ($117 billion) to support France’s A.I. industry. European players must move quickly, Macron stressed at the time: “We are committed to going faster and faster.”

    Alexandra Tremayne-Pengelly

    Source link

  • Should AI Get Legal Rights?

    In one paper Eleos AI published, the nonprofit argues for evaluating AI consciousness using a “computational functionalism” approach. A similar idea was once championed by none other than the philosopher Hilary Putnam, though he criticized it later in his career. The theory suggests that human minds can be thought of as specific kinds of computational systems. From there, you can then figure out if other computational systems, such as a chatbot, have indicators of sentience similar to those of a human.

    Eleos AI said in the paper that “a major challenge in applying” this approach “is that it involves significant judgment calls, both in formulating the indicators and in evaluating their presence or absence in AI systems.”

    Model welfare is, of course, a nascent and still evolving field. It’s got plenty of critics, including Mustafa Suleyman, the CEO of Microsoft AI, who recently published a blog about “seemingly conscious AI.”

    “This is both premature, and frankly dangerous,” Suleyman wrote, referring generally to the field of model welfare research. “All of this will exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, introduce new dimensions of polarization, complicate existing struggles for rights, and create a huge new category error for society.”

    Suleyman wrote that “there is zero evidence” today that conscious AI exists. He included a link to a paper that Long coauthored in 2023 that proposed a new framework for evaluating whether an AI system has “indicator properties” of consciousness. (Suleyman did not respond to a request for comment from WIRED.)

    I chatted with Long and Campbell shortly after Suleyman published his blog. They told me that, while they agreed with much of what he said, they don’t believe model welfare research should cease to exist. Rather, they argue that the harms Suleyman referenced are the exact reasons why they want to study the topic in the first place.

    “When you have a big, confusing problem or question, the one way to guarantee you’re not going to solve it is to throw your hands up and be like ‘Oh wow, this is too complicated,’” Campbell says. “I think we should at least try.”

    Testing Consciousness

    Model welfare researchers primarily concern themselves with questions of consciousness. If we can prove that you and I are conscious, they argue, then the same logic could be applied to large language models. To be clear, neither Long nor Campbell think that AI is conscious today, and they also aren’t sure it ever will be. But they want to develop tests that would allow us to prove it.

    “The delusions are from people who are concerned with the actual question, ‘Is this AI conscious?’ and having a scientific framework for thinking about that, I think, is just robustly good,” Long says.

    But in a world where AI research can be packaged into sensational headlines and social media videos, heady philosophical questions and mind-bending experiments can easily be misconstrued. Take what happened when Anthropic published a safety report that showed Claude Opus 4 may take “harmful actions” in extreme circumstances, like blackmailing a fictional engineer to prevent it from being shut off.

    Kylie Robison

    Source link

  • Anthropic Is One of the Most Valuable Startups Ever | Entrepreneur

    Anthropic, the AI startup behind the chatbot Claude, finalized a deal on Tuesday for a new, $13 billion Series F funding round that catapults its valuation from $61.5 billion to $183 billion, making it one of the most valuable startups ever.

    Anthropic has more than 300,000 business customers and has seen a sevenfold increase in its number of large clients with projects above $100,000 in the past year, the company said in a statement.

    “We are seeing exponential growth in demand across our entire customer base,” Anthropic CFO Krishna Rao said.

    The funding round, which was led by investment firm Iconiq Capital with participation from Fidelity Management and Lightspeed Venture Partners, was one of the largest financing rounds so far for an AI startup, Bloomberg notes.

    Anthropic was initially planning to raise $5 billion, but raised the target to $10 billion following strong demand. The final $13 billion figure arose from still more investors wanting a stake in the popular startup.

    In the statement, Anthropic noted that its run-rate revenue makes it “one of the fastest-growing technology companies in history,” skyrocketing from $1 billion at the start of the year to more than $5 billion in August. (Run-rate revenue refers to a company’s future annual revenue based on a shorter period of current performance.)
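
    Run rate is simply a recent period annualized. Here is a minimal sketch of that calculation, with a monthly figure back-solved from the roughly $5 billion cited above purely for illustration:

        # Illustrative annualization; the monthly revenue figure is an assumption,
        # inferred from the ~$5 billion run rate reported above, not a disclosed number.
        def run_rate(period_revenue: float, periods_per_year: int = 12) -> float:
            """Project annual revenue from a shorter measurement window."""
            return period_revenue * periods_per_year

        print(run_rate(417_000_000))  # ~5.0 billion per year from ~$417M in a month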

    Anthropic CEO and co-founder Dario Amodei. Photo by Chesnot/Getty Images

    Anthropic joins startups like SpaceX (valued at $350 billion in December) and TikTok’s parent company, ByteDance (valued at $300 billion in November), in the high valuation club.

    While Anthropic may be raising ample funds, its main competitor is further ahead. OpenAI, the creator of ChatGPT, announced in March that it had raised $40 billion in the biggest tech funding round for a private company, elevating its valuation to $300 billion.

    Anthropic was founded four years ago by former OpenAI staff and has since differentiated itself from its competitors with an emphasis on AI safety. It launched its chatbot Claude in March 2023 and Claude Code, an AI coding tool that enables users to generate, edit, and debug code, in February.

    Creating functional AI is a costly endeavor, requiring startups like Anthropic to raise as much funding as possible. In July 2024, Anthropic CEO Dario Amodei told Norges Bank CEO Nicolai Tangen in an “In Good Company” podcast episode that training an AI model costs around $100 million, but there are models today that cost “more like a billion.”

    “I think there is a good chance that by [2027] we’ll be able to get models that are better than most humans at most things,” Amodei said in the podcast.

    Sherin Shibu

    Source link

  • Anthropic users face a new choice – opt out or share your chats for AI training | TechCrunch

    Anthropic is making some big changes to how it handles user data, requiring all Claude users to decide by September 28 whether they want their conversations used to train AI models. While the company directed us to its blog post on the policy changes when asked about what prompted the move, we’ve formed some theories of our own.

    But first, what’s changing: Previously, Anthropic didn’t use consumer chat data for model training. Now, the company wants to train its AI systems on user conversations and coding sessions, and it said it’s extending data retention to five years for those who don’t opt out.

    That is a massive update. Previously, users of Anthropic’s consumer products were told that their prompts and conversation outputs would be automatically deleted from Anthropic’s back end within 30 days “unless legally or policy‑required to keep them longer” or their input was flagged as violating its policies, in which case a user’s inputs and outputs might be retained for up to two years.

    By consumer, we mean the new policies apply to Claude Free, Pro, and Max users, including those using Claude Code. Business customers using Claude Gov, Claude for Work, Claude for Education, or API access will be unaffected, mirroring how OpenAI shields enterprise customers from data training policies.

    So why is this happening? In that post about the update, Anthropic frames the changes around user choice, saying that by not opting out, users will “help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations.” Users will “also help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users.”

    In short, help us help you. But the full truth is probably a little less selfless.

    Like every other large language model company, Anthropic needs data more than it needs people to have fuzzy feelings about its brand. Training AI models requires vast amounts of high-quality conversational data, and accessing millions of Claude interactions should provide exactly the kind of real-world content that can improve Anthropic’s competitive positioning against rivals like OpenAI and Google.

    Beyond the competitive pressures of AI development, the changes would also seem to reflect broader industry shifts in data policies, as companies like Anthropic and OpenAI face increasing scrutiny over their data retention practices. OpenAI, for instance, is currently fighting a court order that forces the company to retain all consumer ChatGPT conversations indefinitely, including deleted chats, because of a lawsuit filed by The New York Times and other publishers.

    In June, OpenAI COO Brad Lightcap called this “a sweeping and unnecessary demand” that “fundamentally conflicts with the privacy commitments we have made to our users.” The court order affects ChatGPT Free, Plus, Pro, and Team users, though enterprise customers and those with Zero Data Retention agreements are still protected.

    What’s alarming is how much confusion all of these changing usage policies are creating for users, many of whom remain oblivious to them.

    In fairness, everything is moving quickly now, so as the tech changes, privacy policies are bound to change. But many of these changes are fairly sweeping and mentioned only fleetingly amid the companies’ other news. (You wouldn’t think Tuesday’s policy changes for Anthropic users were very big news based on where the company placed this update on its press page.)

    But many users don’t realize the guidelines to which they’ve agreed have changed because the design practically guarantees it. Most ChatGPT users keep clicking on “delete” toggles that aren’t technically deleting anything. Meanwhile, Anthropic’s implementation of its new policy follows a familiar pattern.

    How so? New users will choose their preference during signup, but existing users face a pop-up with “Updates to Consumer Terms and Policies” in large text and a prominent black “Accept” button with a much tinier toggle switch for training permissions below in smaller print — and automatically set to “On.”

    As observed earlier today by The Verge, the design raises concerns that users might quickly click “Accept” without noticing they’re agreeing to data sharing.

    Meanwhile, the stakes for user awareness couldn’t be higher. Privacy experts have long warned that the complexity surrounding AI makes meaningful user consent nearly unattainable. Under the Biden administration, the Federal Trade Commission even stepped in, warning that AI companies risk enforcement action if they engage in “surreptitiously changing its terms of service or privacy policy, or burying a disclosure behind hyperlinks, in legalese, or in fine print.”

    Whether the commission — now operating with just three of its five commissioners — still has its eye on these practices today is an open question, one we’ve put directly to the FTC.

    Connie Loizos

    Source link

  • OpenAI and Anthropic conducted safety evaluations of each other’s AI systems

    Most of the time, AI companies are locked in a race to the top, treating each other as rivals and competitors. Today, OpenAI and Anthropic revealed that they agreed to evaluate the alignment of each other’s publicly available systems and shared the results of their analyses. The full reports get pretty technical, but are worth a read for anyone who’s following the nuts and bolts of AI development. A broad summary showed some flaws with each company’s offerings, as well as revealing pointers for how to improve future safety tests.

    Anthropic said it evaluated OpenAI’s models for “sycophancy, whistleblowing, self-preservation, and supporting human misuse, as well as capabilities related to undermining AI safety evaluations and oversight.” Its review found that the o3 and o4-mini models from OpenAI fell in line with results for its own models, but raised concerns about possible misuse with the GPT-4o and GPT-4.1 general-purpose models. The company also said sycophancy was an issue to some degree with all tested models except for o3.

    Anthropic’s tests did not include OpenAI’s most recent release, GPT-5, which has a feature called Safe Completions that is meant to protect users and the public against potentially dangerous queries. OpenAI recently faced its first wrongful-death lawsuit after a tragic case where a teenager discussed attempts and plans for suicide with ChatGPT for months before taking his own life.

    On the flip side, OpenAI evaluated Anthropic’s models for instruction hierarchy, jailbreaking, hallucinations and scheming. The Claude models generally performed well in instruction hierarchy tests, and had a high refusal rate in hallucination tests, meaning they were less likely to offer answers in cases where uncertainty meant their responses could be wrong.

    The move for these companies to conduct a joint assessment is intriguing, particularly since OpenAI allegedly violated Anthropic’s terms of service by having programmers use Claude in the process of building new GPT models, which led to Anthropic revoking OpenAI’s access to its tools earlier this month. But safety with AI tools has become a bigger issue as more critics and legal experts seek guidelines to protect users, particularly minors.

    Anna Washenko

    Source link

  • Anthropic reaches a settlement over authors’ class-action piracy lawsuit

    Anthropic has settled a class-action lawsuit brought by a group of authors for an undisclosed sum. The move means the company will avoid the potentially more costly ruling it could have faced had the case over its use of copyrighted materials to train artificial intelligence tools moved forward.

    “This historic settlement will benefit all class members,” said Justin Nelson, a lawyer for the authors. “We look forward to announcing details of the settlement in the coming weeks.”

    In June, Judge William Alsup handed down a mixed result in the case, ruling that Anthropic’s move to train LLMs on copyrighted materials constituted fair use. However, he left the company’s illegal and unpaid acquisition of those copyrighted materials open for the authors to pursue as a piracy claim. With statutory damages for piracy beginning at $750 per infringed work and a library of pirated works estimated to number about 7 million, Anthropic could have been on the hook for billions of dollars.

    Litigation around AI and copyright is still shaking out, with no clear precedents emerging yet. This also isn’t Anthropic’s first foray into negotiating with creatives after using their work; it was sued by members of the music industry in 2023 and reached a partial resolution earlier this year. Plus, the details of Anthropic’s settlement also have yet to be revealed. Depending on the number of authors who make a claim and the amount Anthropic agreed to pay out, either side could wind up feeling like the winner after the dust settles.

    Update, August 26, 2025: Added statement from authors’ lawyer.

    Anna Washenko

    Source link

  • Anthropic launches a Claude AI agent that lives in Chrome | TechCrunch

    Anthropic is launching a research preview of a browser-based AI agent powered by its Claude AI models, the company announced on Tuesday. The agent, Claude for Chrome, is rolling out to a group of 1,000 subscribers on Anthropic’s Max plan, which costs between $100 and $200 per month. The company is also opening a waitlist for other interested users.

    By adding an extension to Chrome, select users can now chat with Claude in a sidecar window that maintains context of everything happening in their browser. Users can also give the Claude agent permission to take actions in their browser and complete some tasks on their behalf.

    The browser is quickly becoming the next battleground for AI labs, which aim to use browser integrations to offer more seamless connections between AI systems and their users. Perplexity recently launched its own browser, Comet, which features an AI agent that can offload tasks for users. OpenAI is reportedly close to launching its own AI-powered browser, which is rumored to have similar features to Comet. Meanwhile, Google has launched Gemini integrations with Chrome in recent months.

    The race to develop AI-powered browsers is especially pressing given Google’s looming antitrust case, in which a final decision is expected any day now. The federal judge in the case has suggested he may force Google to sell its Chrome browser. Perplexity submitted an unsolicited $34.5 billion offer for Chrome, and OpenAI CEO Sam Altman suggested his company would be willing to buy it as well.

    In the Tuesday blog post, Anthropic warned that the rise of AI agents with browser access poses new safety risks. Last week, Brave’s security team said it found that Comet’s browser agent could be vulnerable to indirect prompt-injection attacks, where hidden code on a website could trick the agent into executing malicious instructions when it processed the page.

    (Perplexity’s head of communications, Jesse Dwyer, told TechCrunch in an email that the vulnerability Brave raised has been fixed.)

    Anthropic says it hopes to use this research preview as a chance to catch and address novel safety risks; however, the company has already introduced several defenses against prompt injection attacks. The company says its interventions reduced the success rate of prompt injection attacks from 23.6% to 11.2%.

    For example, Anthropic says users can limit Claude’s browser agent from accessing certain sites in the app’s settings, and the company has, by default, blocked Claude from accessing websites that offer financial services, adult content, and pirated content. The company also says that Claude’s browser agent will ask for user permission before “taking high-risk actions like publishing, purchasing, or sharing personal data.”

    This isn’t Anthropic’s first foray into AI models that can control your computer screen. In October 2024, the company launched an AI agent that could control your PC — however, testing at the time revealed that the model was quite slow and unreliable.

    The capabilities of agentic AI models have improved quite a bit since then. TechCrunch has found that modern browser-using AI agents, such as Comet and ChatGPT Agent, are fairly reliable at offloading simple tasks for users. However, many of these agentic systems still struggle with more complex problems.

    Maxwell Zeff

    Source link

  • Anthropic Settles High-Profile AI Copyright Lawsuit Brought by Book Authors

    Anthropic has reached a preliminary settlement in a class action lawsuit brought by a group of prominent authors, marking a major turn in one of the most significant ongoing AI copyright lawsuits in history. The move will allow Anthropic to avoid what could have been a financially devastating outcome in court.

    The settlement agreement is expected to be finalized September 3, with more details to follow, according to a legal filing published on Tuesday. Lawyers for the plaintiffs did not immediately respond to requests for comment. Anthropic declined to comment.

    In 2024, three book writers, Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, sued Anthropic, alleging that the startup illegally used their work to train its artificial intelligence models. In June, California district court judge William Alsup issued a summary judgment in Bartz v. Anthropic that largely sided with Anthropic, finding that the company’s usage of the books was “fair use” and thus legal.

    But the judge ruled that the manner in which Anthropic had acquired some of the works, by downloading them through so-called shadow libraries, including a notorious site called LibGen, constituted piracy. Alsup ruled that the book authors could still take Anthropic to trial in a class action for pirating their works; the legal showdown was slated to begin in December.

    Statutory damages for this kind of piracy start at $750 per infringed work, according to US copyright law. Because the library of books amassed by Anthropic was thought to contain approximately 7 million works, the AI company was potentially facing court-imposed penalties amounting to billions of dollars, possibly more than $1 trillion.
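
    Those figures follow from the statutory damages schedule in US copyright law (17 U.S.C. § 504), which ranges from $750 per infringed work to $150,000 for willful infringement. A rough sketch of the exposure, for illustration only:

        # Illustration of the exposure range described above; actual awards are set by courts.
        PIRATED_WORKS = 7_000_000      # approximate size of the pirated-book corpus
        STATUTORY_MIN = 750            # minimum statutory damages per infringed work
        WILLFUL_MAX = 150_000          # statutory maximum for willful infringement

        print(PIRATED_WORKS * STATUTORY_MIN)  # 5,250,000,000 -> billions at the minimum
        print(PIRATED_WORKS * WILLFUL_MAX)    # 1,050,000,000,000 -> over $1 trillion at the top end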

    “It’s a stunning turn of events, given how Anthropic was fighting tooth and nail in two courts in this case. And the company recently hired a new trial team,” says Edward Lee, a law professor at Santa Clara University who closely follows AI copyright litigation. “But they had few defenses at trial, given how Judge Alsup ruled. So Anthropic was staring at the risk of statutory damages in ‘doomsday’ amounts.”

    Most authors who may have been part of the class action were just starting to receive notice that they qualified to participate. The Authors Guild, a trade group representing professional writers, sent out a notice alerting authors that they might be eligible earlier this month, and lawyers for the plaintiffs were scheduled to submit a “list of affected works” to the court on September 1. This means that many of these writers were not privy to the negotiations that took place.

    “The big question is whether there is a significant revolt from within the author class after the settlement terms are unveiled,” says James Grimmelmann, a professor of digital and internet law at Cornell University. “That will be a very important barometer of where copyright owner sentiment stands.”

    Anthropic is still facing a number of other copyright-related legal challenges. One of the most high-profile disputes involves a group of major record labels, including Universal Music Group, which allege that the company illegally trained its AI programs on copyrighted lyrics. The plaintiffs recently filed to amend their case to allege that Anthropic had used the peer-to-peer file sharing service BitTorrent to download songs illegally.

    Settlements don’t set legal precedent, but the details of this case will likely be watched closely as dozens of other high-profile AI copyright cases continue to wind through the courts.

    Kate Knibbs

    Source link

  • Sam Altman to Visit India Next Month, Announces OpenAI’s First New Delhi Office

    India has emerged as OpenAI’s second largest market, just behind the U.S. Alex Wong/Getty Images

    After a cooler-than-expected reception to GPT-5 and mounting pressure from rising training, compute and infrastructure costs, OpenAI is looking to India as a cornerstone of its global expansion strategy. On Friday, CEO Sam Altman announced on X that the company will open its first office in New Delhi later this year. He also said he plans to visit the country next month, writing, “A.I. adoption in India has been amazing to watch—ChatGPT users grew 4x in the past year—and we are excited to invest much more in India!”

    India has become OpenAI’s second largest market for ChatGPT, trailing only the U.S., according to Altman. To appeal to local users, the company has rolled out ChatGPT Go, a $5 per month subscription pitched as a budget-friendly alternative to the Plus and Pro tiers ($20 and $200 per month, respectively). Marketed toward students and enterprises, ChatGPT Go promises access to premium features such as longer context memory, higher usage limits and advanced tools like editing custom GPTs to build A.I. tools tailored to specific user needs.

    Altman has visited India multiple times in recent years, including a 2023 meeting with Prime Minister Narendra Modi, where he praised the country’s rapid adoption of A.I., saying it has “all the ingredients to become a global A.I. leader.” In June, OpenAI deepened its ties to the country by partnering with the Indian government’s IndiaAI Mission, an initiative to expand A.I. access nationwide.

    But rivals are also circling the market. Google and Meta already operate major A.I. products and R&D hubs in India, while Perplexity AI, founded by Indian entrepreneur Aravind Srinivas, is seeing explosive growth. Perplexity’s monthly active users in India jumped 640 percent year-over-year in the second quarter of 2025, far outpacing ChatGPT’s 350 percent growth in the same period. While ChatGPT positions itself as a conversational assistant, Perplexity markets its tool as an A.I.-powered search engine that delivers cited answers, blending its own retrieval-augmented system with models from OpenAI and Anthropic.

    In April, both OpenAI and Perplexity launched WhatsApp bots globally, aiming to integrate A.I.-powered chat and search into everyday messaging. Given WhatsApp’s ubiquity in India, the move could prove pivotal. “Perplexity on WhatsApp is super convenient way to use A.I. when in a flight. Flight WiFi supports messaging apps the best. And WhatsApp has been heavily optimized for this because it grew to support countries where connectivity wasn’t the best,” Srinivas wrote on LinkedIn in May.

    OpenAI has been steadily expanding its global footprint, adding offices in London, Dublin, Paris, Brussels, Munich, Tokyo and Singapore over the past year. The company is headquartered in San Francisco and also maintains U.S. offices in New York and Seattle.

    Victor Dey

    Source link

  • Microsoft A.I. Chief Mustafa Suleyman Sounds Alarm on ‘Seemingly Conscious A.I.’

    Mustafa Suleyman joined Microsoft last year to head up its consumer A.I. efforts. Stephen Brashear/Getty Images

    Will A.I. systems ever achieve human-like “consciousness?” Given the field’s rapid pace, the answer is likely yes, according to Microsoft AI CEO Mustafa Suleyman. In a new essay published yesterday (Aug. 19), he described the emergence of “seemingly conscious A.I.” (SCAI) as a development with serious societal risks. “Simply put, my central worry is that many people will start to believe in the illusion of A.I.s as conscious entities so strongly that they’ll soon advocate for A.I. rights, model welfare and even A.I. citizenship,” he wrote. “This development will be a dangerous turn in A.I. progress and deserves our immediate attention.”

    Suleyman is particularly concerned about the prevalence of A.I.’s “psychosis risk,” an issue that’s picked up steam across Silicon Valley in recent months as users reportedly lose touch with reality after interacting with generative A.I. tools. “I don’t think this will be limited to those who are already at risk of mental health issues,” Suleyman said, noting that “some people reportedly believe their A.I. is God, or a fictional character, or fall in love with it to the point of absolute distraction.”

    OpenAI CEO Sam Altman has expressed similar worries about users forming strong emotional bonds with A.I. After OpenAI temporarily cut off access to its GPT-4o model earlier this month to make way for GPT-5, users voiced widespread disappointment over the loss of the predecessor’s conversational and effusive personality.

    “I can imagine a future where a lot of people really trust ChatGPT’s advice for their most important decisions,” said Altman in a recent post on X. “Although that could be great, it makes me uneasy.”

    Not everyone sees it as a red flag. David Sacks, the Trump administration’s “A.I. and Crypto Czar,” likened concerns over A.I. psychosis to past moral panics around social media. “This is just a manifestation or outlet for pre-existing problems,” said Sacks earlier this week on the All-In Podcast.

    Debates will only grow more complex as A.I.’s capabilities advance, according to Suleyman, who oversees Microsoft’s consumer A.I. products like Copilot. Suleyman co-founded DeepMind in 2010 and later launched Inflection AI, a startup largely absorbed by Microsoft last year.

    According to Suleyman, building an SCAI will likely be possible in the coming years. To achieve the illusion of a human-like consciousness, A.I. systems will need language fluency, empathetic personalities, long and accurate memories, autonomy and goal-planning abilities—qualities already possible with large language models (LLMs) or soon to be.

    While some users may treat SCAI as a phone extension or pet, others “will come to believe it is a fully emerged entity, a conscious being deserving of real moral consideration in society,” said Suleyman. He added that “there will come a time when those people will argue that it deserves protection under law as a pressing moral matter.”

    Some in the A.I. field are already exploring “model welfare,” a concept aimed at extending moral consideration to A.I. systems. Anthropic launched a research program in April to investigate model welfare and interventions. Earlier this month, the startup gave its Claude Opus 4 and 4.1 models the ability to end harmful or abusive user interactions after observing “a pattern of apparent distress” in the systems during certain conversations.

    Encouraging principles like model welfare “is both premature, and frankly dangerous,” according to Suleyman. “All of this will exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, introduce new dimensions of polarization, complicate existing struggles for rights, and create a huge new category error for society.”

    To prevent SCAIs from becoming commonplace, A.I. developers should avoid promoting the idea of conscious A.I.s and instead design models that minimize signs of consciousness or human empathy triggers. “We should build A.I. for people; not to be a person,” said Suleyman.

    Alexandra Tremayne-Pengelly

    Source link

  • The UK’s antitrust regulator will formally investigate Alphabet’s $2.3 billion Anthropic investment

    The UK’s competition regulator is probing Alphabet’s investment in AI startup Anthropic. After opening public comments this summer, the Competition and Markets Authority (CMA) said on Thursday it has “sufficient information” to begin an initial investigation into whether Alphabet’s reported $2.3 billion investment in the Claude AI chatbot maker harms competition in UK markets.

    The CMA breaks its merger probes into two stages: a preliminary scan to determine whether there’s enough evidence to dig deeper and an optional second phase where the government gathers as much evidence as possible. After the second stage, it ultimately decides on a regulatory outcome.

    The probe will formally kick off on Friday. By December 19, the CMA will choose whether to move to a phase 2 investigation.

    Google told Engadget that Anthropic isn’t locked into its cloud services. “Google is committed to building the most open and innovative AI ecosystem in the world,” a company spokesperson wrote in an email. “Anthropic is free to use multiple cloud providers and does, and we don’t demand exclusive tech rights.” Engadget also reached out to the CMA for comment, and we’ll update this story if we hear back.

    TechCrunch notes that Alphabet reportedly invested $300 million in Anthropic in early 2023. Later that year, it was said to back the AI startup with an additional $2 billion. Situations like this can be classified as a “quasi-merger,” where deep-pocketed tech companies essentially take control of emerging startups through strategic investments and hiring founders and technical workers.

    Amazon has invested even more in Anthropic: a whopping $4 billion. After an initial public comment period, the CMA declined to investigate that investment last month. The CMA said Amazon avoided Alphabet’s fate at least in part because of its current rules: Anthropic’s UK turnover didn’t exceed £70 million, and the two parties didn’t combine to account for 25 percent or more of the region’s supply (in this case, AI LLMs and chatbots).

    Although the CMA hasn’t specified what tipped the balance, something about Alphabet’s $2.3 billion Anthropic investment warranted a deeper dive. Of course, Google’s Gemini competes with Claude, and both companies make large language models they provide to small businesses and enterprise customers.

    Update, October 25, 2024, 11:10AM ET: This story has been updated to add a quote from a Google representative.

    Will Shanklin

    Source link

  • OpenAI’s Leadership Exodus: 9 Key Execs Who Left the A.I. Giant This Year

    Mira Murati, Ilya Sutskever, Greg Brockman and Andrej Karpathy (clockwise, starting at top left). Photos by Slaven Vlasic/Getty Images, JACK GUEZ/AFP via Getty Images, Anna Moneymaker/Getty Images and Michael Macor/The San Francisco Chronicle via Getty Images

    Since ChatGPT took the world by storm in late 2022, OpenAI’s revenue and market value have skyrocketed. But internally, the company hasn’t necessarily had the smoothest ride. The A.I. giant, valued at $150 billion, lost a slew of top executives this year. On Wednesday (Sept. 25) alone, three leaders announced their departures: chief technology officer Mira Murati, chief research officer Bob McGrew and VP of research Barret Zoph. They join a larger group of former OpenAI employees who have left for rival A.I. developers and startups. As of now, CEO Sam Altman is one of only two active remaining members of the company’s original 11-person founding team.

    OpenAI hasn’t just lost employees—it has also rehired some familiar faces. In May, OpenAI welcomed back Kyle Kosic, who worked at the company between 2021 and 2023 on its technical staff. Kosic left last year to join Elon Musk’s xAI. Several other outgoing OpenAI employees have taken similar routes and gone on to work for competing A.I. companies, showing just how competitive the industry is at the moment.

    Here’s a look at some of the top leaders OpenAI has lost in 2024 thus far:

    Andrej Karpathy, research scientist

    Andrej Karpathy has left OpenAI not once but twice. One of OpenAI’s 11 founders, Karpathy helped build the company’s team on computer vision, generative modeling and reinforcement learning. He first departed in 2017 to lead Tesla’s Autopilot effort. Returning to OpenAI in 2023, Karpathy left once again in February this year to focus on “personal projects.” He subsequently established Eureka Labs, an A.I. education startup.

    Ilya Sutskever, chief scientist and co-head of the super alignment team

    A renowned machine learning researcher, Ilya Sutskever helped co-found OpenAI nearly a decade ago and served as the company’s chief scientist. He was also notably a member of the four-person board that temporarily ousted Altman last year before reinstating him. Sutskever, who was subsequently removed from the board, later said he regretted his involvement in the brief ouster. In May, he announced his departure from OpenAI and said he was leaving for a venture that is “very personally meaningful.”

    This project was revealed to be Safe Superintelligence, a startup focused on developing a safe form of artificial general intelligence (AGI), a type of A.I. that can think and learn on par with humans. Earlier this month, the company was valued at $5 billion after raising $1 billion from investors, including Andreessen Horowitz and Sequoia Capital.

    Jan Leike, co-head of the super alignment team

    Just days after Sutskever left, OpenAI executive Jan Leike announced his resignation as well. Sutskever and Leike co-ran the company’s safety team, which has since been disbanded. Leike said he decided to leave in part due to disagreements with OpenAI leadership “about the company’s core priorities,” citing a lack of focus on safety processes around developing AGI. Leike has since taken up a new role as head of alignment science at Anthropic, an OpenAI rival founded by former OpenAI employees Dario Amodei and Daniela Amodei.

    John Schulman, head of alignment science

    John Schulman, another OpenAI co-founder, made significant contributions to the creation of ChatGPT. After Leike’s departure, Schulman became head of OpenAI’s alignment science efforts and was appointed to its new safety committee in May. That’s why Schulman’s decision in August to step away from the company came as a surprise—especially when he revealed that he would be joining Anthropic. “This choice stems from my desire to deepen my focus on A.I. alignment and to start a new chapter of my career where I can return to hands-on technical work,” said Schulman on X, where he also clarified that his decision to step away from OpenAI wasn’t connected to a lack of support for alignment research.

    Peter Deng, vice president of consumer product

    Peter Deng, a top OpenAI product executive, also decided to step away from the company earlier this year. Having first joined OpenAI last year, he ended his tenure as vice president of product in July, according to his LinkedIn. Deng, who also previously held product leader positions at companies like Uber (UBER) and Meta (META), has not publicly revealed his next steps.

    Greg Brockman, president

    Greg Brockman, often seen as Altman’s right-hand man, hasn’t technically left the company but is instead taking a sabbatical through the end of 2024. In August, he announced his time off and described it as the “first time to relax since co-founding OpenAI nine years ago.” Brockman started off as OpenAI’s chief technology officer before becoming the company’s president in 2022. He indicated that he plans to return to OpenAI, noting that “the mission is far from complete; we still have a safe AGI to build.”

    Mira Murati, chief technology officer

    Mira Murati, one of OpenAI’s most public-facing figures, resigned earlier this week after more than six years with the company. “I’m stepping away because I want to create the time and space to do my own exploration,” said Murati, who notably served as interim CEO during Altman’s brief ousting last year, on X. Adding that she will “still be rooting” for OpenAI, Murati said her primary focus currently is “doing everything in my power to ensure a smooth transition, maintaining the momentum we’ve built.” Altman praised her leadership in a statement on X, describing Murati as instrumental to OpenAI’s “development from an unknown research lab to an important company.”

    Bob McGrew, chief research officer

    Shortly after Murati’s resignation, Bob McGrew, OpenAI’s chief research officer, also announced plans to leave the company. He simply said on X, “It is time for me to take a break.” Having previously worked at PayPal (PYPL) and Palantir, McGrew started off as a member of OpenAI’s technical staff and has been serving as OpenAI’s chief research officer since August.

    Barret Zoph, vice president of research

    Barret Zoph is the third executive who announced his resignation this week. Like his two colleagues, Zoph said it’s a “personal decision based on how I want to evolve the next phase of my career.” Zoph, a former research scientist at Google (GOOGL), joined OpenAI in 2022 and played a large role in overseeing OpenAI’s post-training team.

    Murati, McGrew and Zoph made their decisions independently of each other, according to Altman, but decided to depart simultaneously “so that we can work together for a smooth handover to the next generation of leadership.” The CEO conceded that, while the abruptness of the leadership changes isn’t the most natural, “we are not a normal company.”

    Alexandra Tremayne-Pengelly

    Source link

  • OpenAI and Anthropic agree to share their models with the US AI Safety Institute

    OpenAI and Anthropic have agreed to share AI models — before and after release — with the US AI Safety Institute. The agency, established through an executive order by President Biden in 2023, will offer safety feedback to the companies to improve their models. OpenAI CEO Sam Altman hinted at the agreement earlier this month.

    The US AI Safety Institute didn’t mention other companies tackling AI. But in a statement to Engadget, a Google spokesperson said the company is in discussions with the agency and will share more info when it’s available. This week, Google began rolling out updated chatbot and image generator models for Gemini.

    “Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” Elizabeth Kelly, director of the US AI Safety Institute, wrote in a statement. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”

    The US AI Safety Institute is part of the National Institute of Standards and Technology (NIST). It creates and publishes guidelines, benchmark tests and best practices for testing and evaluating potentially dangerous AI systems. “Just as AI has the potential to do profound good, it also has the potential to cause profound harm, from AI-enabled cyber-attacks at a scale beyond anything we have seen before to AI-formulated bioweapons that could endanger the lives of millions,” Vice President Kamala Harris said in late 2023 after the agency was established.

    The first-of-its-kind agreement is through a (formal but non-binding) Memorandum of Understanding. The agency will receive access to each company’s “major new models” ahead of and following their public release. The agency describes the agreements as collaborative, risk-mitigating research that will evaluate capabilities and safety. The US AI Safety Institute will also collaborate with the UK AI Safety Institute.

    It comes as federal and state regulators try to establish AI guardrails while the rapidly advancing technology is still nascent. On Wednesday, the California state assembly approved an AI safety bill (SB 1047) that mandates safety testing for AI models that cost more than $100 million to develop or require a set amount of computing power. The bill requires AI companies to have kill switches that can shut down the models if they become “unwieldy or uncontrollable.”

    Unlike the non-binding agreement with the federal government, the California bill would have some teeth for enforcement. It gives the state’s attorney general license to sue if AI developers don’t comply, especially during threat-level events. However, it still requires one more process vote — and the signature of Governor Gavin Newsom, who will have until September 30 to decide whether to give it the green light.

    Update, August 29, 2024, 4:53 PM ET: This story has been updated to add a response from a Google spokesperson.

    Will Shanklin

    Source link

  • This controversial California AI bill was amended to quell Silicon Valley fears. Here’s what changed

    A controversial bill that seeks to protect Californians from artificial intelligence-driven catastrophes has caused uproar in the tech industry. This week, the legislation passed a key committee but with amendments to make it more palatable to Silicon Valley.

    SB 1047, from state Sen. Scott Wiener (D-San Francisco), is set to go to the state Assembly floor later this month. If it passes the Legislature, Gov. Gavin Newsom will have to decide whether to sign or veto the groundbreaking legislation.

    The bill’s backers say it will create guardrails to prevent rapidly advancing AI models from causing disastrous incidents, such as shutting down the power grid without warning. They worry that the technology is developing faster than its human creators can control.

    Lawmakers aim to incentivize developers to handle the technology responsibly and empower the state’s attorney general to impose penalties in the event of imminent threat or harm. The legislation also requires developers to be able to turn off the AI models they control directly if things go awry.

    But some tech companies, such as Facebook owner Meta Platforms, and politicians including influential U.S. Rep. Ro Khanna (D-Fremont), say the bill would stifle innovation. Some critics say it focuses on apocalyptic, far-off scenarios, rather than the more immediate concerns such as privacy and misinformation, though there are other bills that address these matters.

    SB 1047 is one of roughly 50 AI-related bills that have been brought up in the state Legislature, as worries have grown about the technology’s effects on jobs, disinformation and public safety. As politicians work to create new laws to put guardrails on the fast-growing industry, some companies and talent are suing AI companies in hopes that courts can set ground rules.

    Wiener, who represents San Francisco — the home of AI startups OpenAI and Anthropic — has been in the middle of the debate.

    On Thursday, he made significant changes to his bill that some believe weaken the legislation while making it more likely for the Assembly to pass.

    The amendments removed a perjury penalty from the bill and changed the legal standard for developers regarding the safety of their advanced AI models.

    Additionally, a plan to create a new government entity, which would have been called the Frontier Model Division, is no longer in the works. Under the original text, the bill would have required developers to submit their safety measures to the newly created division. In the new version, developers would submit those safety measures to the attorney general.

    “I do think some of those changes might make it more likely to pass,” said Christian Grose, a USC political science and public policy professor.

    Some tech players support the bill, including the Center for AI Safety and Geoffrey Hinton, who is considered a “godfather of AI.” Others, though, worry that it could damage a booming California industry.

    Eight California House members — Khanna, Zoe Lofgren (D-San Jose), Anna G. Eshoo (D-Menlo Park), Scott Peters (D-San Diego), Tony Cárdenas (D-Pacoima), Ami Bera (D-Elk Grove), Nanette Diaz Barragan (D-San Pedro) and Lou Correa (D-Santa Ana) — wrote a letter to Newsom on Thursday encouraging him to veto the bill if it passes the state Assembly.

    “[Wiener] really is cross pressured in San Francisco between people who are experts in this area, who have been telling him and others in California that AI can be dangerous if we don’t regulate it and then those whose paychecks, their cutting edge research, is from AI,” Grose said. “This could be a real flash point for him, both pro and con, for his career.”

    Some tech giants say they are open to regulation but disagree with Wiener’s approach.

    “We are aligned with the way (Wiener) describes the bill and the goals that he has, but we remain concerned about the impact of the bill on AI innovation, particularly in California, and particularly on open source innovation,” Kevin McKinley, Meta’s state policy manager, said in a meeting with L.A. Times editorial board members last week.

    Meta is one of the companies with a collection of open source AI models called Llama, which allows developers to build on top of it for their own products. Meta released Llama 3 in April and there have already been 20 million downloads, the tech giant said.

    Meta declined to discuss the new amendments. Last week, McKinley said SB 1047 is “actually a really hard bill to red line and fix.”

    A spokesperson for Newsom said his office does not typically comment on pending legislation.

    “The Governor will evaluate this bill on its merits should it reach his desk,” spokesperson Izzy Gardon wrote in an email.

    San Francisco AI startup Anthropic, which is known for its AI assistant Claude, signaled it could support the bill if it was amended. In a July 23 letter to Assemblymember Buffy Wicks (D-Oakland), Anthropic’s state and local policy lead Hank Dempsey proposed changes including shifting the bill to focus on holding companies responsible for causing catastrophes rather than pre-harm enforcement.

    Wiener said the amendments took Anthropic’s concerns into account.

    “We can advance both innovation and safety,” Wiener said in a statement. “The two are not mutually exclusive.”

    It is unclear whether the amendments will change Anthropic’s position on the bill. On Thursday, Anthropic said in a statement that it would review the new “bill language as it becomes available.”

    Russell Wald, deputy director at Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI), which aims to advance AI research and policy, said he still opposes the bill.

    “Recent amendments appear to be more about optics than substance,” Wald said in a statement. “It looks less controversial to appease a couple of leading AI companies but does little to address real concerns from academic institutions and open-source communities.”

    It is a fine balance for lawmakers who are trying to weigh concerns about AI while also supporting the state’s tech sector.

    “What a lot of us are trying to do is figure out a regulatory environment that allows for some of those guardrails to exist while not stifling innovation and the economic growth that comes with AI,” Wicks said after Thursday’s committee meeting.

    Times staff writer Anabel Sosa contributed to this report.

    [ad_2]

    Wendy Lee

    Source link

  • Websites accuse AI startup Anthropic of bypassing their anti-scraping rules and protocol

    [ad_1]

    Freelancer has accused Anthropic, the AI startup behind the Claude large language models, of ignoring the “do not crawl” instructions in its robots.txt file to scrape its website’s data. Meanwhile, iFixit CEO Kyle Wiens said Anthropic has ignored the website’s policy prohibiting the use of its content for AI model training. Matt Barrie, the chief executive of Freelancer, told The Information that Anthropic’s ClaudeBot is “the most aggressive scraper by far.” His website allegedly got 3.5 million visits from the company’s crawler within a span of four hours, which is “probably about five times the volume of the number two” AI crawler. Similarly, Wiens posted on X/Twitter that Anthropic’s bot hit iFixit’s servers a million times in 24 hours. “You’re not only taking our content without paying, you’re tying up our devops resources,” he wrote.

    Back in June, Wired accused another AI company, Perplexity, of crawling its website despite the presence of the Robots Exclusion Protocol, or robots.txt. A robots.txt file typically contains instructions for web crawlers on which pages they can and can’t access. Compliance is voluntary, though, and bad bots have mostly just ignored it. After Wired’s piece came out, a startup called TollBit that connects AI firms with content publishers reported that it’s not just Perplexity that’s bypassing robots.txt signals. While it didn’t name names, Business Insider said it learned that OpenAI and Anthropic were ignoring the protocol, as well.
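
    For readers unfamiliar with the mechanics, here is a minimal sketch of how a compliant crawler checks those rules, using Python’s standard-library robots.txt parser. The domain, the example rules and the assumption that Anthropic’s crawler identifies itself as “ClaudeBot” in a publisher’s file are illustrative, not taken from any site’s actual configuration.

        # Minimal sketch: how a well-behaved crawler could honor robots.txt.
        # The rules below are a hypothetical example of a publisher opting out
        # of one AI crawler while allowing everything else; they are not any
        # site's real file.
        from urllib.robotparser import RobotFileParser

        EXAMPLE_ROBOTS_TXT = [
            "User-agent: ClaudeBot",
            "Disallow: /",
            "",
            "User-agent: *",
            "Allow: /",
        ]

        parser = RobotFileParser()
        parser.parse(EXAMPLE_ROBOTS_TXT)

        # A compliant crawler runs this check before every fetch; nothing
        # enforces it, which is why publishers say the protocol is easy to ignore.
        print(parser.can_fetch("ClaudeBot", "https://example.com/guides/1"))      # False
        print(parser.can_fetch("SomeOtherBot", "https://example.com/guides/1"))   # True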

    Barrie said Freelancer tried to refuse the bot’s access requests at first, but it ultimately had to block Anthropic’s crawler entirely. “This is egregious scraping [which] makes the site slower for everyone operating on it and ultimately affects our revenue,” he added. As for iFixit, Wiens said the website has set alarms for high traffic, and his people got woken up at 3AM due to Anthropic’s activities. The company’s crawler stopped scraping iFixit after it added a line in its robots.txt file that disallows Anthropic’s bot, in particular.

    The AI startup told The Information that it respects robots.txt and that its crawler “respected that signal when iFixit implemented it.” It also said that it aims “for minimal disruption by being thoughtful about how quickly [it crawls] the same domains,” which is why it’s now investigating the case.

    AI firms use crawlers to collect content from websites that they can use to train their generative AI technologies. They’ve been the target of multiple lawsuits as a result, with publishers accusing them of copyright infringement. To prevent more lawsuits from being filed, companies like OpenAI have been striking deals with publishers and websites. OpenAI’s content partners, so far, include News Corp, Vox Media, the Financial Times and Reddit. iFixit’s Wiens seems open to the idea of signing a deal for the how-to repair website’s articles, as well, telling Anthropic in a tweet he’s willing to have a conversation about licensing content for commercial use.

    [ad_2]

    Mariella Moon

    Source link

  • Anthropic looks to fund a new, more comprehensive generation of AI benchmarks | TechCrunch

    [ad_1]

    Anthropic is launching a program to fund the development of new types of benchmarks capable of evaluating the performance and impact of AI models, including generative models like its own Claude.

    Unveiled on Monday, Anthropic’s program will dole out payments to third-party organizations that can, as the company puts it in a blog post, “effectively measure advanced capabilities in AI models.” Those interested can submit applications to be evaluated on a rolling basis.

    “Our investment in these evaluations is intended to elevate the entire field of AI safety, providing valuable tools that benefit the whole ecosystem,” Anthropic wrote on its official blog. “Developing high-quality, safety-relevant evaluations remains challenging, and the demand is outpacing the supply.”

    As we’ve highlighted before, AI has a benchmarking problem. The most commonly cited benchmarks for AI today do a poor job of capturing how the average person actually uses the systems being tested. There are also questions as to whether some benchmarks, particularly those released before the dawn of modern generative AI, even measure what they purport to measure, given their age.

    The very-high-level, harder-than-it-sounds solution Anthropic is proposing is to create challenging benchmarks focused on AI security and societal implications, using new tools, infrastructure and methods.

    The company calls specifically for tests that assess a model’s ability to accomplish tasks like carrying out cyberattacks, “enhancing” weapons of mass destruction (e.g. nuclear weapons) and manipulating or deceiving people (e.g. through deepfakes or misinformation). For AI risks pertaining to national security and defense, Anthropic says it’s committed to developing an “early warning system” of sorts for identifying and assessing risks, although it doesn’t reveal in the blog post what such a system might entail.

    Anthropic also says it intends its new program to support research into benchmarks and “end-to-end” tasks that probe AI’s potential for aiding in scientific study, conversing in multiple languages and mitigating ingrained biases, as well as self-censoring toxicity.

    To achieve all this, Anthropic envisions new platforms that allow subject-matter experts to develop their own evaluations and large-scale trials of models involving “thousands” of users. The company says it’s hired a full-time coordinator for the program and that it might purchase or expand projects it believes have the potential to scale.

    “We offer a range of funding options tailored to the needs and stage of each project,” Anthropic writes in the post, though an Anthropic spokesperson declined to provide any further details about those options. “Teams will have the opportunity to interact directly with Anthropic’s domain experts from the frontier red team, fine-tuning, trust and safety and other relevant teams.”

    Anthropic’s effort to support new AI benchmarks is a laudable one — assuming, of course, there’s sufficient cash and manpower behind it. But given the company’s commercial ambitions in the AI race, it might be a tough one to completely trust.

    In the blog post, Anthropic is rather transparent about the fact that it wants certain evaluations it funds to align with the AI safety classifications it developed (with some input from third parties like the nonprofit AI research org METR). That’s well within the company’s prerogative. But it may also force applicants to the program into accepting definitions of “safe” or “risky” AI that they might not agree with.

    A portion of the AI community is also likely to take issue with Anthropic’s references to “catastrophic” and “deceptive” AI risks, like nuclear weapons risks. Many experts say there’s little evidence to suggest AI as we know it will gain world-ending, human-outsmarting capabilities anytime soon, if ever. Claims of imminent “superintelligence” serve only to draw attention away from the pressing AI regulatory issues of the day, like AI’s hallucinatory tendencies, these experts add.

    In its post, Anthropic writes that it hopes its program will serve as “a catalyst for progress towards a future where comprehensive AI evaluation is an industry standard.” That’s a mission the many open, corporate-unaffiliated efforts to create better AI benchmarks can identify with. But it remains to be seen whether those efforts are willing to join forces with an AI vendor whose loyalty ultimately lies with shareholders.

    [ad_2]

    Kyle Wiggers

    Source link

  • Inside OpenAI’s 9-Person Safety Committee Led by All-Powerful Sam Altman

    [ad_1]

    Sam Altman will have a key role in OpenAI’s new safety committee. Justin Sullivan/Getty Images

    Following the dissolution of an OpenAI team focused on artificial intelligence safety, the company has formed a new safety and security committee that will be led by CEO Sam Altman and other board members to guide its safety recommendations going forward, as revealed by the startup in a blog post yesterday (May 28). The announcement also noted that OpenAI has begun training a new A.I. model to succeed GPT-4, the one currently powering its ChatGPT chatbot.

    The committee’s formation comes shortly after OpenAI’s “Superalignment” team, which worked on preparations regarding the long-term risks of A.I., was disbanded, with its members dispersed across different areas of the company. Key employees overseeing the safety team left OpenAI earlier this month, with some citing concerns about the company’s current trajectory.

    The “Superalignment” team was led by Ilya Sutskever, OpenAI’s co-founder and former chief scientist who played a lead role in the unsuccessful ousting of Altman last November. Sutskever announced his resignation on May 14, ending his almost decade-long tenure at the company. Jan Leike, who co-ran the Superalignment team alongside Sutskever, left the startup shortly afterwards and in an X post claimed that “safety culture and processes have taken a backseat to shiny products” at OpenAI. He recently joined Anthropic, a rival A.I. startup founded by former OpenAI employees Dario and Daniela Amodei.

    “It’s pretty clear that there were these different camps within OpenAI that were leading to friction,” Sarah Kreps, a professor of government and director of the Tech Policy Institute at Cornell University, told Observer. “It seems that the people who were not aligned with Sam Altman’s vision have off-ramped either forcibly or by their own volition, and what’s left now is that they’re all speaking with one voice and that voice is Sam Altman.”

    Members of the new safety and security committee will be responsible for advising OpenAI’s board on recommendations regarding company projects and operations. But with its CEO leading the group, “I would not anticipate that these other committee members would have anywhere close to an equal voice in any decisions,” said Kreps. In addition to Altman, it will be headed by OpenAI chairman and former Salesforce co-CEO Bret Taylor alongside board members Nicole Seligman, a former Sony Entertainment executive, and Adam D’Angelo, a co-founder of Quora. D’Angelo notably was the only member of the original OpenAI board to stay on as a director after its failed firing of Altman.

    Meanwhile, former board members Helen Toner and Tasha McCauley recently urged for increased A.I. regulation in an Economist article that described Altman as having “undermined the board’s oversight of key decisions and internal safety protocols.”

    The new committee is filled with OpenAI insiders

    OpenAI’s technical and policy experts who have previously expressed their support for Altman will make up the rest of the committee. These include Jakub Pachocki, who recently filled Sutskever’s role as chief scientist, and Aleksander Madry, who oversees OpenAI’s preparedness team. Both researchers publicly resigned amid Altman’s brief removal last year and returned following his reinstatement. The committee is rounded out by Lilian Weng, John Schulman and Matt Knight, who respectively oversee the safety systems, alignment science and security teams at OpenAI and in November were among the more than 700 employees who signed a letter threatening to quit unless Altman was reinstated.

    OpenAI also revealed plans to consult cybersecurity officials like John Carlin, a former Justice Department official, and Rob Joyce, previously a cybersecurity director for the National Security Agency. “Happy to be able to support the important security and safety efforts of OpenAI!” said Joyce in an X post announcing the news. The company’s newly formed committee will spend the next 90 days developing processes and safeguards, which will be subsequently given to the board and shared in a public update describing adopted recommendations.

    While OpenAI didn’t provide a timeline for its new A.I. model, its blog post described it as one that will “bring us to the next level of capabilities” on its path to artificial general intelligence, or A.G.I., a term used for A.I. systems matching the capabilities of humans. Earlier this month, the company unveiled an updated version of ChatGPT based on a new A.I. model known as GPT-4o that showcased enhanced capabilities across audio, image and video.

    “We’ve seen in the last several months and last few days more indications that OpenAI is going in an accelerated direction toward artificial general intelligence,” said Kreps, adding that the company “seems to be signaling that there’s less interest in the safety and alignment principles that had been part of its focus earlier.”

    [ad_2]

    Alexandra Tremayne-Pengelly

    Source link

  • Anthropic’s Sibling Founders On Leaving OpenAI to Start a $15B Startup

    [ad_1]

    Anthropic Co-Founder & CEO Dario Amodei speaks onstage during TechCrunch Disrupt 2023 at Moscone Center on September 20, 2023 in San Francisco, California. Kimberly White/Getty Images for TechCrunch

    The Bloomberg Tech Summit yesterday (May 9) opened with brother and sister technologists Dario and Daniela Amodei, former OpenAI executives who stepped away to found their own A.I. company, Anthropic, now valued at $15 billion. The entrepreneur duo are now engaged in “scaling up” Anthropic by creating models and relationships to serve emerging markets.

    Dario and Daniela left OpenAI in late 2020 to start their own company, with the goal of building A.I. systems that are not just powerful and intelligent but are also aligned with human values. “We left OpenAI because of concerns around the direction,” Daniela Amodei, who serves as president of Anthropic, said during an onstage interview yesterday. “We wanted to be sure the tools were being used reliably and responsibly…We want to be the most responsible A.I. we can, always asking the question, ‘What could go wrong here?’”

    “Our focus is on scaling with more data, along with models, and creating the relationships necessary to scale up the company in a more enterprise direction,” said Dario Amodei, the company’s CEO.

    Asked why users should trust them after last year’s debacle between OpenAI’s board and the company’s CEO Sam Altman, Dario said, “You shouldn’t. Look at all the companies out there. Who can you trust? It’s a very good question. We believe in doing what you say, and saying what you do. The broader societal question is, is A.I. so big that there needs to be some kind of democratic mandate on the technology?…We need to put positive pressure on this industry to always do the right thing for our users.”

    Asked how a brother-and-sister duo both ended up in the tech world, Dario and Daniela said it was a natural result of growing up in San Francisco. “Ever since the time we were kids, we always had a desire to make things better. It may sound corny, but it was a really deep thing with us,” Daniela said. “Growing up in San Francisco in the 1990s, we saw that things were happening but we didn’t yet have the language for what that was. We just saw a lot of well-dressed people going into swanky offices and we wondered, what are all these people doing? What are they working on? They were all young people with good jobs and that was attractive.”

    “For me, in the 90s, my fascination was with theories of the early universe more than business,” said Dario. “But over time we began to realize that if you wanted to make science or anything else and be socially responsible you had to be involved and, later on, joining one of these A.I. companies.”

    Dario noted that the cost of creating a new A.I. model is rising so quickly that it is becoming a barrier to entry. The current generation of A.I. models cost about $100 million to make, he said. “In the next few years it’s going to grow to the $100 billion range. And the models will look very different.

    “Plus, you have to start thinking about the larger ecosystem, carbon offsets for large data centers, so we are looking into that as well,” he added.

    [ad_2]

    Dan Holden

    Source link

  • The Case for Investing in Responsible A.I.

    [ad_1]

    Ford Foundation and Omidyar Network recognize Anthropic’s groundbreaking generative language A.I.—which incorporates and prioritizes humanity—as an alignment with their missions to make investments that generate positive financial returns while benefiting society at large. Unsplash+

    Artificial intelligence (A.I.) is having a very real impact on our politics, our workforce and our world. Chatbots and other large language models, text-to-image programs and video generators are changing how we learn, challenging who we trust and intensifying debates over intellectual property and content ownership. Generative A.I. has the potential to supercharge solutions to some of society’s most pressing problems, from previously incurable diseases to our global climate crisis and more. But without clear intent and proper guardrails, A.I. has the capacity to do great harm. Rampant bias and disinformation threaten democracy; Big Tech’s dominance, if further consolidated, has the potential to crush innovation. Workers are rapidly displaced when they don’t have a voice in how technology is used on the job.  

    As philanthropic leaders who manage both our grants and our capital for social good, we invest in generative A.I. that protects, promotes and prioritizes public interest and the long-term benefit of humanity. With partners at the Nathan Cummings Foundation, we recently acquired shares in Anthropic, a leading generative A.I. company founded by two former OpenAI executives. Other investors in the company—which is recognized for its commitment to transparency, accountability and safety—include Amazon (AMZN) ($4 billion) and Google (GOOGL) ($2 billion).

    We understand both the promise and the peril of A.I. The funds we steward are themselves the product of profound technological transformation: the revolutionary horseless carriage at the beginning of the last century and an e-commerce platform made possible by the fledgling internet at the end. Innovation is coded in our DNA, and we feel a profound responsibility to do all we can to steer the next paradigm-shifting technology toward its highest ideals and away from its worst impulses. 

    Every harbinger of progress carries with it new risks—a Pandora’s box of intended and unintended consequences. Indeed, as French philosopher Paul Virilio famously observed, “The invention of the ship was also the invention of the shipwreck.” Today’s leaders would do well to heed Tim Cook’s charge to graduates in his 2019 Stanford commencement speech: “If you want credit for the good, take responsibility for the bad.”

    We are doing exactly this. At the Ford Foundation, we invest in organizations that help companies scale responsibly by developing frameworks for ethical technology innovation. We’re backing public-interest venture capital that funds companies like Reality Defender, which works to detect deep fakes before they become a larger problem. And we’re betting big on the emerging field of public interest technology. From organizations like the Algorithmic Justice League, which recently pressed the IRS to stop forcing taxpayers to use facial recognition software to log into their IRS accounts, ultimately leading to the end of that practice, to initiatives like the Disability and Tech Fund, which advances the leadership of people with disabilities in tech development, civil society is walking in lockstep with tech leaders to ensure that the public interest remains front and center. 

    Similarly, Omidyar Network aims to build a more inclusive infrastructure that explicitly addresses the social impact of generative A.I., elevating diversity in A.I. development and governance and promoting innovation and competition to democratize and maximize generative A.I.’s promise. It’s why, for example, Omidyar Network funds Humane Intelligence, an organization that works with companies to ensure their products are developed and deployed safely and ethically. 

    And now, Ford Foundation and Omidyar Network recognize Anthropic’s groundbreaking generative language A.I.—which incorporates and prioritizes humanity—as an alignment with our own missions to make investments that generate positive financial returns while benefiting society at large. Anthropic is a Public Benefit Corporation with a charter and governance structure that mandates balancing social and financial interests, underscoring a responsibility to develop and maintain A.I. for human benefit. Founders Dario and Daniela Amodei started the company with trust and safety at its core, pioneering technology that guards against implicit bias.

    Their pioneering chatbot, Claude, distinguishes itself from competitors with its adherence to “Constitutional A.I.,” Anthropic’s method of training a language model not just on human interaction but also on adherence to ethical rules and normative principles. For instance, Claude’s constitution incorporates the UN’s Universal Declaration of Human Rights, as well as a democratically designed set of rules based on public input.
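
    In practice, the published Constitutional A.I. recipe works roughly like this: the model drafts an answer, critiques its own draft against a written principle, rewrites it, and the revised answers are then used as fine-tuning data. The sketch below is a heavily simplified illustration of that critique-and-revise loop; the generate function and the sample principle are hypothetical stand-ins, not Anthropic’s actual code or constitution.

        # Heavily simplified sketch of the critique-and-revise step described
        # in the Constitutional AI approach. `generate` stands in for any
        # text-generation call; it is not Anthropic's API or training code.
        from typing import Callable, List

        def critique_and_revise(prompt: str,
                                principles: List[str],
                                generate: Callable[[str], str]) -> str:
            """Draft a response, then critique and revise it once per principle."""
            response = generate(prompt)
            for principle in principles:
                critique = generate(
                    f"Principle: {principle}\n"
                    f"Prompt: {prompt}\n"
                    f"Response: {response}\n"
                    "Critique the response according to the principle:"
                )
                response = generate(
                    f"Prompt: {prompt}\n"
                    f"Response: {response}\n"
                    f"Critique: {critique}\n"
                    "Rewrite the response so it addresses the critique "
                    "while still answering the prompt:"
                )
            # In the published recipe, (prompt, revised response) pairs are then
            # used as fine-tuning data, so the principles shape the model itself
            # rather than being applied only at run time.
            return response

        # A hypothetical principle loosely in the spirit of the sources the
        # company cites, such as the UN Universal Declaration of Human Rights:
        EXAMPLE_PRINCIPLES = [
            "Choose the response that most respects everyone's rights and dignity.",
        ]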

    Today, we see a unique opportunity for our colleagues in business and philanthropy to lay an early stake in a rapidly evolving field, putting the public interest front and center. According to Bloomberg, the generative A.I. market is poised to become a $1.3 trillion industry over the next decade. Investors who recognize this growing field as an opportunity to do well must also prioritize the public good and consider the full range of stakeholders who are implicated in the advent of this technology. 

    Ultimately, everyone with an interest in preserving democracy, strengthening the economy, and securing a more just and equal future for all has a responsibility to ensure that this emerging technology helps, rather than harms, people, communities and society in the years and generations to come.

    [ad_2]

    Roy Swan and Mike Kubzansky

    Source link