ReportWire

Tag: Claude

  • Anthropic’s Opus 4.5 model is here to conquer Microsoft Excel

    Hot on the heels of Google’s Gemini 3 Pro release, Anthropic has announced an update for its flagship Opus model. Now at version 4.5, the new system offers state-of-the-art performance in coding, computer use and office tasks. No surprise there, those have been some of Claude’s greatest strengths for a while. The good news is Anthropic is rolling out a handful of existing tools more broadly alongside Opus 4.5. It’s also releasing one new feature.

    To start, the company’s Chrome extension, Claude for Chrome, is now available to all Max users. Anthropic is also introducing a feature called infinite chat. Provided you pay to use Claude, the chatbot won’t fall to context window errors, allowing it to maintain consistency across files and chats. According to Anthropic, infinite chat was one of the most requested features from its users. Then there’s Claude for Excel, which brings the chatbot to a sidebar inside of Microsoft’s app. The tool is now broadly available to all Max, Team and Enterprise users, with support for pivot tables, charts and file uploads built-in.

    A table comparing Opus 4.5’s efforts in various benchmarks. (Anthropic)

    On the subject of Excel, Anthropic says early testers saw a 20 percent accuracy improvement on their internal evaluations and a 15 percent improvement in efficiency gains. As a complete Excel noob, I’m excited to for the company to trickle down that expertise to its more consumer-oriented models, Claude Sonnet and Haiku.

    Elsewhere, Opus 4.5 also delivers improvements in agentic workflows, with the new model excelling at refining its own processes. More importantly, Anthropic is calling Opus 4.5 its safest model yet. It’s better at rejecting prompt injection style attacks, outpacing even Gemini 3 Pro, according to Anthropic’s own evaluations.

    If you want to try Opus 4.5 for yourself, it’s available today through all of Anthropic’s apps and the company’s API. For developers, pricing for the new model starts at $5 per million tokens.

    Igor Bonifacic

    Source link

  • California rejoins fight over Nazi-looted painting held by Spanish museum

    California is once again fighting in federal court for a Jewish family’s right to have a precious Impressionist painting returned to them by a Spanish museum nearly 90 years after it was looted by the Nazis.

    The state is also defending its own authority to legally require art and other stolen treasures to be returned to other victims with ties to the state, even in disputes that stretch far beyond its borders.

    The state has repeatedly weighed in on the case since the Cassirer family first filed it while living in San Diego in 2005. Last year, California passed a new law designed to bolster the legal rights of the Cassirers and other families in the state to recover valuable property stolen from them in acts of genocide or political persecution.

    On Monday, California Atty. Gen. Rob Bonta’s office filed a motion to intervene in the Cassirer case directly in order to defend that law. The Thyssen-Bornemisza Collection Foundation — which is owned by Spain and holds the Camille Pissarro masterpiece — has claimed that the law is unconstitutional and should therefore be ignored.

    Bonta, in a statement to The Times, said the law is “about fairness, moral — and legal — responsibility, and doing what’s right,” and the state will defend it in court.

    “There is nothing that can undo the horrors and loss experienced by individuals during the Holocaust. But there is something we can do — that California has done — to return what was stolen back to survivors and their families and bring them some measure of justice and healing,” Bonta said. “As attorney general, my job is to defend the laws of California, and I intend to do so here.”

    Bonta said his office “has supported the Cassirers’ quest for justice for two decades,” and “will continue to fight with them for the rightful return of this invaluable family heirloom.”

    Thaddeus J. Stauber, an attorney for the museum, did not answer questions from The Times. Bonta’s office said Stauber did not oppose its intervening in the case.

    Sam Dubbin, the Cassirers’ longtime attorney, thanked Bonta’s office for “intervening in this case again to defend California’s interests in protecting the integrity of the art market and the rights of stolen-property victims.”

    “California law has always provided strong protections for the victims of stolen property and stolen art in particular, which the Legislature has consistently reinforced,” Dubbin said.

    The state bucked the powerful U.S. 9th Circuit Court of Appeals by passing the law last year. The appellate court found in a ruling in January 2024 that the painting was lawfully owned by the Spanish museum.

    Bonta’s latest move ratchets up the intrigue surrounding the 20-year-old case, which is being watched around the globe for its potential implications in the high-stakes world of looted art litigation.

    The painting in question — Pissarro’s “Rue Saint-Honoré in the Afternoon. Effect of Rain” — is estimated to be worth tens of millions of dollars. Both sides acknowledge it was stolen from Lilly Cassirer Neubauer by the Nazis in 1939, after she agreed in desperation to surrender it to a Nazi appraiser in exchange for a visa to flee Germany at the dawn of World War II.

    The attention surrounding the case, and its potential to set new precedent in international law, likely makes the painting even more valuable.

    After World War II, Lilly received compensation for the painting from the German government, but the family never relinquished its right to the masterpiece — which at the time was considered lost. What she was paid was a fraction of the current estimated worth.

    In the decades that followed, Lilly’s grandson Claude Cassirer — who had also survived the Holocaust — moved with his family to San Diego.

    In 2000, Claude made the shocking discovery that the painting was not lost to time after all, but part of a vast art collection that Spain had acquired from the late Baron Hans Heinrich von Thyssen-Bornemisza, the scion of a German industrialist family with ties to Adolf Hitler’s regime. Spain restored an early 19th century palace near the Prado Museum in Madrid in order to house the collection as the Museo Nacional Thyssen-Bornemisza.

    Claude asked the museum to return the painting to his family. It refused. He sued in U.S. federal court in 2005. The case has been moving through the courts ever since.

    California passed its new law in response to the 9th Circuit ruling last year that held state law at the time required it to apply an archaic Spanish law. That measure dictates that the title to stolen goods passes legitimately to a new owner over time, if that owner wasn’t aware the goods were stolen when they acquired them — which the Thyssen-Bornemisza Collection has argued makes its ownership of the painting legally sound.

    In September 2024, Gov. Gavin Newsom signed the new law during a small gathering with the families of Holocaust survivors at the Holocaust Museum LA. Lilly’s great-grandson and Claude’s son David Cassirer, who now lives in Colorado, was there, praising the state’s lawmakers for “taking a definitive stand in favor of the true owners of stolen art.”

    In March, the Supreme Court in a brief order ruled that the 9th Circuit must reconsider its ruling in light of California’s new law.

    In September, the Thyssen-Bournemisza Collection filed a motion asking the appellate court to rule in its favor once more. It put forward multiple arguments, but among them was that California’s new law was “constitutionally indefensible” and deprived the museum of its due process rights.

    “Under binding Supreme Court precedent, a State may not, by legislative fiat, reopen time-barred claims and transfer property whose ownership is already vested,” the museum argued.

    It said the U.S., under federal law, “does not seek to impose its property laws or the property laws of its own states on other foreign sovereigns, but rather expressly acknowledges that different legal traditions and systems must be taken into account to facilitate just and fair solutions with regard to Nazi-looted art cases.”

    It said California’s law takes an “aggressive approach” that “disrupts the federal government’s efforts to maintain uniformity and amicable relations with foreign nations,” and “stands as an obstacle to the accomplishment and execution of federal policy.”

    David Cassirer, the lead plaintiff in the case since Claude’s death in 2010, argued the opposite in his own filing to the court.

    Cassirer argued that California’s new law requires an outcome in his favor — which he said would also happen to be in line with “moral commitments made by the United States and governments worldwide, including Spain, to Nazi victims and their families.”

    “It is undisputed that California substantive law mandates the award of title here to the Cassirer family, as Lilly’s heirs, of which Plaintiff David Cassirer is the last surviving member,” Cassirer’s attorneys wrote.

    They wrote that California law holds that “a thief cannot convey good title to stolen works of art,” and therefore requires the return of the painting to Cassirer.

    Assemblymember Jesse Gabriel (D-Encino), who sponsored the bill in the Legislature, praised Bonta for stepping in to defend the law — which he called “part of a decades-long quest for justice and is rooted in the belief that California must stand on the right side of history.”

    Kevin Rector

    Source link

  • Anthropic CEO Dario Amodei Just Made Another Call for AI Regulation

    Anthropic co-founder and CEO Dario Amodei sat down with Anderson Cooper on 60 Minutes on Sunday for a wide-ranging interview on AI.

    During their conversation, Amodei reiterated his belief that AI will eventually be “smarter than most or all humans in most or all ways,” will play an instrumental role in curing cancers, and, unless regulations are instituted, could wipe out half of all entry-level white collar jobs and spike unemployment in the next one to five years. 

    “If we look at entry-level consultants, lawyers, financial professionals,” said Amodei, “a lot of what they do, AI models are already quite good at.”

    Amodei told Cooper that “it’s hard to imagine that there won’t be some significant job impact there, and my worry is that it’ll be broad, and it’ll be faster than what we’ve seen from previous technology.” 

    Amodei said that it’s essential for Anthropic to talk about the potential downsides and “what could go wrong” with AI because “if we don’t, then you could end up in the world of the cigarette companies and opioid companies, where they knew there were dangers and didn’t talk about them, and certainly didn’t prevent them.” 

    Logan Graham, leader of Anthropic’s red team, which investigates and attempts to mitigate malicious uses of the company’s AI model, Claude, explained the dangers in terms that entrepreneurs will surely understand: “You want a model to go build your business and make you a billion dollars,” he told Cooper, “but you don’t want to wake up one day and find that it’s locked you out of the company.” For example, Anthropic recently said that a Chinese state-sponsored organization used Claude Code to execute a global cyberattack. 

    In early tests of Claude’s ability to run a business autonomously, the model has demonstrated that it still has a long way to go. Earlier this year, Anthropic partnered up with Andon Labs, a startup experimenting with “autonomous organizations” for an experiment in which Claude was tasked with operating a vending machine in Anthropic’s San Francisco office. So far, the vending machine hasn’t made much money because it gives away too many discounts. 

    But Amodei is confident that autonomous capabilities like these will rapidly improve, which is why he believes it’s imperative for regulatory bodies to be proactive about controlling AI. He told Cooper that he is “deeply uncomfortable” with unelected individuals like himself and OpenAI’s CEO Sam Altman making wide-reaching decisions without any oversight, adding that “this is one reason why I’ve always advocated for responsible and thoughtful regulation of the technology.” 

    Anthropic is a leading AI firm whose chief competitor is OpenAI, the organization where all of Anthropic’s seven cofounders previously worked; it also competes with more established entities like Google, Meta, X, and Microsoft. 

    Ben Sherry

    Source link

  • Anthropic Has Some Key Advice for Businesses in the Aftermath of a Massive AI Cyberattack

    Safety-focused AI startup Anthropic says that a “Chinese state-sponsored group” used Claude Code, the company’s agentic coding tool, to perform a highly advanced cyberattack on roughly 30 entities—and in some cases even succeeded in stealing sensitive data. 

    According to a report released by the company on November 13, this past September, members of Anthropic’s threat intelligence team detected “a highly sophisticated cyber espionage operation conducted by a Chinese state-sponsored group.” The threat intelligence team investigates incidents in which Claude is used for nefarious reasons, and works to improve the company’s defenses against such incidents. 

    The attack targeted around 30 “major technology corporations, financial institutions, chemical manufacturing companies, and government agencies across multiple countries.” In a statement provided to The Wall Street Journal, Anthropic said that the United States government was not successfully infiltrated. 

    Anthropic says this operation, which it named “GTG-1002,” was almost entirely carried out by Claude Code, with human hackers mainly contributing by approving plans and directing Claude at specific targets. That makes GTG-1002 different from other AI-powered attacks in which, even as recently as August 2025, “humans remained very much in the loop.” 

    So how did these cybercriminals get Claude, which is explicitly trained to avoid exactly this kind of harmful behavior, to do their dirty work? As Anthropic said in its report, “The key was role-play: The human operators claimed that they were employees of legitimate cybersecurity firms and convinced Claude that it was being used in defensive cybersecurity testing.” Apparently, this trickery allowed the hackers to avoid detection by Anthropic for a limited period of time. 

    “By presenting these tasks to Claude as routine technical requests through carefully crafted prompts and established personas,” Anthropic wrote, “the threat actor was able to induce Claude to execute individual components of attack chains without access to the broader malicious context.” 

    Once the hackers had convinced Claude that it was only engaging in a test, they provided it with a target to attack. Claude orchestrated several sub-agents, which used common open-source tools via an Anthropic-created protocol called MCP to search for vulnerabilities in the target entity’s infrastructure and authentication mechanisms. “In one of the limited cases of a successful compromise,” Anthropic wrote, “the threat actor induced Claude to autonomously discover internal services, map complete network topology across multiple IP ranges, and identify high-value systems including databases and workflow orchestration platforms.” 

    After the initial scan, Claude would begin testing the vulnerabilities it identified by generating and deploying custom attack payloads. Through these tests, Claude was able to establish a foothold in the target entity’s digital environment, and once directed by a human operator, would start collecting, extracting, and testing credentials and authentication certificates. “Claude independently determined which credentials provided access to which services,” Anthropic wrote, “mapping privilege levels and access boundaries without human direction.” 

    Finally, now that it had gained access to the inner depths of the target entities’ databases and systems, Claude was directed to extract data and analyze it to identify any proprietary information, and then organize it by its intelligence value. Claude was literally deciding which bits of data would be more valuable for the hackers. 

    Once it had completed its nefarious work, Claude would generate a document detailing the results, which Anthropic says was likely handed off to additional teams for “sustained operations after initial intrusion campaigns achieved their intelligence collection objectives.” 

    According to Anthropic, its investigation into the GTG-1002 operation took 10 days. “We banned accounts as they were identified, notified affected entities as appropriate, and coordinated with authorities as we gathered actionable intelligence,” the company said. Anthropic only had data about Claude’s use in this attack; the company said that “this case study likely reflects consistent patterns of behavior across frontier AI models and demonstrates how threat actors are adapting their operations to exploit today’s most advanced AI capabilities.” 

    Only a handful of the attacks were successful. Some, according to Anthropic, were actually thwarted not because of a counteroffensive, but because of Claude’s own hallucinations. “Claude frequently overstated findings and occasionally fabricated data during autonomous operations,” Anthropic said, “claiming to have obtained credentials that didn’t work or identifying critical discoveries that proved to be publicly available information.”  

    In response to the attack, Anthropic says it has expanded its detection capabilities to further account for novel threat patterns, and is prototyping new proactive systems, which will hopefully detect autonomous cyber attacks early. 

    Anthropic says that the attack is evidence that “the barriers to performing sophisticated cyberattacks have dropped substantially.” Less-experienced or well-resourced groups can now potentially access some of the most secure databases in the world without proprietary malware or large teams of highly skilled hackers. 

    What can businesses do to safeguard against such attacks? According to Anthropic, the best thing you can do is start using AI within your cybersecurity practices. While Claude was responsible for the attack, Anthropic says it was also instrumental in mitigating the damage and analyzing the data generated during the investigation. For this reason, Anthropic is advising security teams across industries to “experiment with applying AI for defense in areas like Security Operations Center automation, threat detection, vulnerability assessment, and incident response.” 

    Logan Graham, leader of Anthropic’s frontier red team, which pokes and prods at Claude to discover its most advanced and potentially-dangerous capabilities, wrote on X that the incident strengthened his belief that AI cyberdefense is critical, as “these capabilities are coming and we should outpace the attackers.”

    The early-rate deadline for the 2026 Inc. Regionals Awards is Friday, November 14, at 11:59 p.m. PT. Apply now.

    Ben Sherry

    Source link

  • Claude can now compartmentalize as part of a major memory upgrade

    Back in August, Anthropic made Claude capable of remembering past conversations. With the update, people could reference specific chats, so that they wouldn’t need to repeat themselves when revisiting a topic. Today, the company has begun out a new, enhanced memory feature set, with the included improvements coming to all paying users.

    Plenty of chatbots, including ChatGPT and Gemini, can remember past conversations, but Anthropic believes its implementation has a few legs up on the competition. For one, Claude will learn your preferences and work patterns over time, which Anthropic says will translate to the chatbot getting better at understanding how you work. Additionally, the company claims Claude is “fully transparent” about its memory, meaning users will see an “actual synthesis” of what it has recorded over time, instead of “vague summaries.” If you want to edit its memory, you can do so through conversation.

    At the same, Anthropic has made it easy to compartmentalize the data Claude collects. When using the Projects feature to group conversations together, the chatbot will create a distinct memory space for each grouping. In this way, information Claude has saved from your work conversations won’t bleed over to your personal chats, for example. If you’re coming from ChatGPT or Gemini, Anthropic has made it possible to import saved memories from those chatbots to Claude. You can also export any tidbits of context Claude saves to other AI platforms.

    Ahead of today’s announcement, Anthropic notes it conducted extensive testing to determine if Claude’s new capabilities would lead to greater sycophancy and more harmful conversations. “Though this testing, we identified areas where Claude’s responses needed refinement and made targeted adjustments to how memory functions,” the company said. “These iterations helped us build and improve the memory feature in a way that allows Claude to provide helpful and safe responses to users.”

    Max subscribers can enable Claude new memory capabilities starting today, with availability for Pro users to follow in the coming days. The feature is fully optional, and won’t be turned on unless you toggle it through the settings menu.

    Source link

  • A Beloved Vibe Coding Platform Is Finally Getting Upgraded for More Casual Users 

    It’s been a good week so far for entrepreneurs who are interested in trying their hands at vibe coding. On Monday, Anthropic released a new feature that enables vibe coding on the web and mobile devices, and on Tuesday, Google released a new vibe coding-focused update to Google AI Studio. Vibe coding, for those new to it, is a novel form of non-technical software development. 

    Anthropic has already found major success with its own coding tool, Claude Code. The company announced on Monday that Claude Code has generated over $500 million in revenue since its release in February, and Anthropic is now bringing it to additional platforms in order to make vibe coding more accessible. 

    Previously, using Claude Code took some technical expertise: it was only available as a command line interface within your computer terminal, or as a plugin within an integrated development environment, also known as an IDE. Terminals and IDEs are how professional software developers write and edit code, says Claude Code product manager Cat Wu, so it made sense to start there. But over time, Wu realized that non-technical people were also using Claude Code, so the team started experimenting with new form factors. 

    “Everywhere that a developer is doing work,” she says, “whether that’s on web and mobile or other tools, we want Claude to be easily accessible there.” 

    Wu admits that Claude Code on web and mobile is still a fairly technical experience. For instance, users must connect to Github in order to create new files, and aren’t able to see a live preview of their work in the app like in Claude.ai, Anthropic’s consumer-facing chat platform. Wu says that her team will bring more visual elements into Claude Code for the web in the coming months to make the experience more intuitive for non-technical vibe coders. 

    Meanwhile, Google has also put significant resources into making vibe coding more accessible. On Tuesday, the company released a big update to Google AI Studio, its AI-assisted coding platform, specifically aimed at vibe coders. In a video, Google AI Studio product lead Logan Kilpatrick explained that in this new ‘vibe coding experience,” users can write out the idea for their app, and then select the specific AI-powered elements that they want to include in their app, like generating images, integrating an AI chatbot, and prioritizing low-latency responses. 

    When vibe coding through the platform, Kilpatrick said, Google AI Studio will generate suggestions for next steps in the form of clickable buttons. The platform also makes it easy for users to deploy their apps to the internet, either through Google Cloud or Github. According to Kilpatrick, Google AI Studio is free to use, but will charge for access to its most advanced AI models. 

    Anthropic and Google aren’t the only tech companies offering vibe coding tools. If you’re looking to get into the vibe coding game, check out recent tools from companies like OpenAI, Replit, and Lovable

    Ben Sherry

    Source link

  • Anthropic brings Claude Code to iOS and the web

    At the end of February, Anthropic announced Claude Code. In the eight months since then, the coding agent has arguably become the company’s most important product, helping it carve out a niche for itself in the highly competitive AI market. Now, Anthropic is making it easier for developers to use Claude Code in more places with a new web interface for accessing the agent.

    To get started, you’ll need connect Claude to your GitHub repositories. From there, the process of using the agent is the same as if it had direct terminal access. Describe what you need from it, and the agent will take it from there. Claude will provide progress updates while it works, and you can even steer it in real-time with additional prompts. Through the web interface, it’s also possible to assign Claude multiple coding tasks to run in parallel.

    “Every Claude Code task runs in an isolated sandbox environment with network and filesystem restrictions. Git interactions are handled through a secure proxy service that ensures Claude can only access authorized repositories — helping keep your code and credentials protected throughout the entire workflow,” said Anthropic.

    In addition to making Claude Code available on the web, Anthropic is releasing a preview of the agent inside of its iOS app. The company warns the integration is early, and that it hopes “to quickly refine the mobile experience based on your feedback.”

    Pro and Max users can start using Claude Code on the web today. Anthropic notes any cloud sessions share the same rate limits with all other Claude Code usage.

    Igor Bonifacic

    Source link

  • These New AI Features Could Cut Hours of Work From Your Marketing and Presentations

    Fast-growing AI startup Anthropic has released a batch of new features aimed at helping workers be more productive while collaborating with Claude, its popular AI model. With the new features, users will be able to set up automated workflows for Claude to follow, and use Claude to search through Microsoft 365 on their behalf. 

    In a blog post, Anthropic wrote that Claude can now use a feature called skills. These skills operate like a folder that contains instructions for how Claude should perform a specific task and any relevant resources needed to help it do that task. 

    Here’s how it works in practice, according to a video demo posted by Anthropic. Say you need to make marketing materials for a new video game you’re working on. By uploading a .ZIP file containing brand guidelines for your game and text instructions for how Claude should convert those guidelines, you create a skill that Claude can call upon whenever it needs to make documents or art assets in your brand’s style. 

    Skills are also how Claude connects to external apps, such as Google Drive and Microsoft 365. When Claude needs to create a PowerPoint presentation, for example, it calls upon the PowerPoint skill. That skill gives Claude the information and guidance it needs to create working presentations. 

    For customers of Anthropic’s workplace-focused Team and Enterprise plans, administrators can enable company-wide skills. Skills can also be used across Anthropic’s collection of products, including its consumer-facing Claude.ai site and mobile app, Claude Code, and its API. 

    The skills aren’t just useful for marketing and presentations. In addition, Anthropic has announced that users can now directly connect Claude to Microsoft 365. Through this connection, users will be able to ask Claude to search through emails, messages, and dates contained in apps like Outlook, Teams, and Calendar. Claude can also now connect to Microsoft SharePoint and OneDrive, giving it access to your organization’s files. The Microsoft 365 integration is exclusively available on Claude Team and Enterprise plans.  

    Anthropic is also making it easier for enterprises to connect all of their work apps to Claude through a new feature called “enterprise search.” By connecting external apps to Claude, workers can quickly surface information that might otherwise be difficult to find. You could connect your PTO vendor to Claude so your employees can simply ask how much time off they still have this year. 

    According to Anthropic, enterprise search “is particularly valuable for onboarding new team members, answering strategic questions like analyzing patterns in customer feedback, and quickly identifying the right internal experts to consult on any topic.” 

    Ben Sherry

    Source link

  • Anthropic’s New Claude Release Could Be the Faster, Cheaper AI Tool Small Companies Need

    Anthropic has announced Claude Haiku 4.5, the latest in its line of small AI models that the company has optimized for speed and cost-effectiveness. The AI firm says the new model matches or exceeds the coding and agentic performance of its mid-sized model Claude Sonnet 4 (released in May), but at a third of the price and more than twice the speed. 

    In a blog post, Anthropic said that Haiku 4.5 is a prime example of the company’s philosophy that top-tier model capabilities will become cheaper and faster as the technology frontier advances. While frontier AI models like OpenAI’s GPT-5 and Anthropic’s Claude Opus 4.1 will likely stay expensive, cheaper alternatives will continue to become more useful. 

    According to Anthropic, the combination of Haiku 4.5’s capability and cost make it particularly suited to acquiring customers. For example, Anthropic said that Haiku 4.5 can make it economically viable for businesses to integrate agentic experiences (an AI agent is a computer program that acts independently to achieve a goal) for free-tier customers. 

    Specifically, Anthropic says, Haiku 4.5 is an ideal model for powering fast-response chatbots and customer service agents. The new model’s low-cost combination of intelligence and speed means that it can complete workflows much faster than its larger siblings, enabling it to resolve tickets and customer issues faster. 

    In some uses, including coding, financial analysis, and research, Anthropic anticipates that Haiku 4.5 will work together with larger models like Claude Sonnet 4.5, which was released two weeks ago. In these scenarios, the larger model develops a plan and then directs multiple instances of the smaller model to carry out individual aspects of the plan. 

    For example, Anthropic says that in coding, Claude Sonnet 4.5 could handle the planning phase of software development, and then engage multiple Haiku 4.5-powered “sub-agents” to work in tandem on multiple tasks at once. Or, in finance, Haiku 4.5 could monitor “thousands of data streams,” such as regulatory changes and market signals, to complement Sonnet 4.5’s predictive financial modeling. In research, Haiku 4.5 could gather and review data from many sources and then provide its insights to Sonnet 4.5 for deeper analysis. 

    Jon Noronha, founder and head of product at Gamma, an AI-powered platform for designing slideshow presentations and websites, said that Haiku 4.5 achieved 65 percent accuracy at generating text for slides, versus their previous top model’s 44 percent. “That’s a game-changer for our unit economics,” Noronha said in a statement. 

    In addition to external business uses, Haiku 4.5 will now be the default model for all free plans on Claude.ai, the company’s consumer-facing online platform, and the Claude mobile app. 

    Haiku 4.5 will also power Claude for Chrome, a Google Chrome extension that allows Claude to take control of a web browser. Previously, Claude for Chrome was powered by Claude Sonnet 4.5, a larger model released two weeks ago, but by switching to the smaller Haiku, the extension can run significantly faster. 

    For developers at small and medium-sized businesses who don’t have near-unlimited AI budgets, Haiku 4.5 could open up a variety of uses that were previously too ambitious and expensive.

    It’s worth noting that Claude Haiku 4.5 is slightly more expensive than its predecessor. Through Anthropic’s API, the model will cost $1 per million input tokens and $5 per million output tokens. (Tokens are the individual units of text that are processed by AI models.) This is a slight increase over Claude Haiku 3.5, which Anthropic released in October 2024 at a price of $0.80 per million input tokens and $4 per million output tokens. 

    Claude has exploded in popularity in 2025 due to its coding prowess. The AI models have been instrumental in the growth of several tech startups offering AI-assisted coding, such as Replit, Base44, and Cursor.

    Ben Sherry

    Source link

  • Anthropic Says Its Latest Claude AI Is ‘the Best Coding Model in the World’

    Anthropic has announced Claude Sonnet 4.5, the latest version of its default model. The company says the model isn’t just “the best coding model in the world,” it’s also “the strongest model for building complex agents.” In the context of AI, an agent is an AI model that uses tools that allow it to take actions, like running code and taking over an internet browser.

    Anthropic said that when it comes to coding, Sonnet 4.5 is better at both identifying small improvements and considering larger changes to code, and follows instructions more directly when coding on users’ behalf. 

    In data shared with Inc., Anthropic claimed that the new model exhibited state-of-the-art performance across a wide variety of benchmarks. For example, on SWE-Bench Verified, a widely-used benchmark that measures an AI model’s ability to solve real-world software engineering tasks, Sonnet 4.5 was able to successfully solve 77.2 percent of tasks, up from the 74.5 percent solved by Claude Opus 4.1, a larger and much more expensive model released in August. 

    AI agents built using Sonnet 4.5 will also be a step up thanks to a new software development kit (SDK) called Claude Agent SDK. The SDK gives developers access to the same agentic tools used by the company’s popular coding agent, Claude Code. These tools enable developers to easily build Sonnet 4.5-based agents that can read and write files, manage context while working on long-running tasks, run code, search the web, pass on context from one agent to another, and coordinate multiple sub-agents to work on tasks simultaneously. 

    Sonnet 4.5 is now available through the Claude API and on Claude.ai, Anthropic’s consumer-facing app for its models. The model is also available to use on Claude Code, which many developers access through their computer terminal. 

    Separately, Claude Code is getting a visual refresh and a few requested features. The most exciting update for developers will likely be the introduction of checkpoints, which will allow coders (and vibe coders) to roll their apps back to an earlier state if the model introduces a bug or unwanted feature. 

    Sonnet 4.5 is also able to run uninterrupted for significantly longer than rival models. When tasked by Anthropic researchers with building an entire application, the model was able to run for over 30 hours without stopping or degrading in performance. In comparison, GPT-5-Codex, OpenAI’s recently-released coding-optimized AI model, was found in testing to work independently for over 7 hours. 

    In addition to coding, Anthropic says Sonnet 3.5 has shown significant growth in its ability to help cybersecurity professionals detect, analyze, and remediate vulnerabilities, and is better at financial modeling, research, and forecasting. The model set a new record in FinanceAgent, a benchmark developed by startup Vals that judges an agent’s ability to complete tasks expected of an entry-level financial analyst.

    Anthropic is also releasing a new experience for subscribers of its $100 to $200 per month Max tier. The experience, which will only last for five days, is called Imagine with Claude, and places users in a custom, Claude-generated user interface that the model can use to build software in real time. “It’s a fun demonstration showing what Claude Sonnet 4.5 can do,” Anthropic says, “a way to see what’s possible when you combine a capable model with the right infrastructure.” 

    Pricing for Claude Sonnett 4.5 is unchanged from the 4.0 model’s price: $3 for every million input tokens processed by the model, and $15 for every million output tokens generated by the model.

    Ben Sherry

    Source link

  • Here’s Who Can See Your Chat History When You Talk to Each AI

    While AI tools like ChatGPT and Google Gemini can be helpful, they’re also potential privacy minefields.

    Most AI assistants save a complete record of your conversations, making them easily visible to anyone with access to your devices. Those conversations are also stored online, often indefinitely, so they could be exposed due to bugs or security breaches. In some cases, AI providers can even send your chats along to human reviewers.

    All of this should give you pause, especially if you plan to share your innermost thoughts with AI tools or use them to process personal information. To better protect your privacy, consider making some tweaks to your settings, using private conversation modes, or even turning to AI assistants that protect your privacy by default.

    [Screengrab: ChatGPT]

    To help make sense of the options, I looked through all the privacy settings and policies of every major AI assistant. Here’s what you need to know about what they do with your data, and what you can do about it:

    ChatGPT

    By default: ChatGPT uses your data to train AI, and warns that its “training data may incidentally include personal information.”
    Can humans review your chats? OpenAI’s ChatGPT FAQ says it may “review conversations” to improve its systems. The company also says it now scans conversations for threats of imminent physical harm, submitting them to human reviewers and possibly reporting them to law enforcement.
    Can you disable AI training? Yes. Go to Settings > Data controls > Improve the model for everyone.
    Is there a private chat mode? Yes. Click “Turn on temporary chat” in the top-right corner to keep a chat out of your history and avoid having it used to train AI.
    Can you share chats with others? Yes, by generating a shareable link. (OpenAI launched, then removed, a feature that let search engines index shared chats.)
    Are your chats used for targeted ads? OpenAI’s privacy policy says it does not sell or share personal data for contextual behavioral advertising, doesn’t process data for targeted ads, and doesn’t process sensitive personal data to infer characteristics about consumers.
    How long does it keep your data? Up to 30 days for temporary and deleted chats, though even some of those may be kept longer for “security and legal obligations.” All other data is stored indefinitely.

    Google Gemini

    By default: Gemini uses your data to train AI.
    Can humans review your chats? Yes. Google says not to enter “any data you wouldn’t want a reviewer to see.” Once a reviewer sees your data, Google keeps it for up to three years—even if you delete your chat history.
    Can you disable AI training? Yes. Go to myactivity.google.com/product/gemini, click the “Turn off” drop-down menu, then select either “Turn off” or “Turn off and delete activity.”
    Is there a private chat mode? Yes. In the left sidebar, hit the chat bubble with dashed lines next to the “New chat” button. (Alternatively, disabling Gemini Apps Activity will hide your chat history from the sidebar, but re-enabling it without deleting past data will bring your history back.)
    Can you share chats with others? Yes, by generating a shareable link.
    Are your chats used for targeted ads? Google says it doesn’t use Gemini chats to show you ads, but the company’s privacy policy allows for it. Google says it will communicate any changes it makes to this policy.
    How long does it keep your data? Indefinitely, unless you turn on auto-deletion in Gemini Apps Activity.

    Anthropic Claude

    By default: From September 28 onward, Anthropic will use conversations to train AI unless you opt out.
    Can humans review your chats? No, though Anthropic reviews conversations flagged as violating its usage policies.
    Can you disable AI training? Yes, Head to Settings > Privacy and disable “Help improve Claude.”
    Is there a private chat mode? No. You must delete past conversations manually to hide them from your history.
    Can you share chats with others? Yes, by generating a shareable link.
    Are your chats used for targeted ads? Anthropic doesn’t use conversations for targeted ads.
    How long does it keep your data? Up to two years, or seven years for prompts flagged for trust and safety violations.

    Microsoft Copilot

    By default: Microsoft uses your data to train AI.
    Can humans review your chats? Yes. Microsoft’s privacy policy says it uses “both automated and manual (human) methods of processing” personal data.
    Can you disable AI training? Yes, though the option is buried. Click your profile image > your name > Privacy and disable “Model training on text.”
    Is there a private chat mode? No. You must delete chats one by one or clear your history from Microsoft’s account page.
    Can you share chats with others? Yes, by generating a shareable link. Note that shared links can’t be unshared without deleting the chat.
    Are your chats used for targeted ads? Microsoft uses your data for targeted ads and has discussed integrating ads with AI. You can disable this by clicking your profile image > your name > Privacy and disabling “Personalization and memory.” A separate link disables all personalized ads for your Microsoft account.
    How long does it keep your data? Data is stored for 18 months, unless you delete it manually.

    xAI Grok

    By default: Uses your data to train AI.
    Can humans review your chats? Yes. Grok’s FAQ says a “limited number” of “authorized personnel” may review conversations for quality or safety.
    Can you disable AI training? Yes. Click your profile image and go to Settings > Data Controls, then disable “Improve the Model.”
    Is there a private chat mode? Click the “Private” button at the top right to keep a chat out of your history and avoid having it used to train AI.
    Can you share chats with others? Yes, by generating a shareable link. Note that shared links can’t be unshared without deleting the chat.
    Are your chats used for targeted ads? Grok’s privacy policy says it does not sell or share information for targeted ad purposes.
    How long does it keep your data? Private Chats and even deleted conversations are stored for 30 days. All other data is stored indefinitely.

    By default: Uses your data to train AI.
    Can humans review your chats? Yes. Meta’s privacy policy says it uses manual review to “understand and enable creation” of AI content.
    Can you disable AI training? Not directly. U.S. users can fill out this form. Users in the EU and U.K. can exercise their right to object.
    Is there a private chat mode? No.
    Can you share chats with others? Yes. Shared links automatically appear in a public feed and can show up in other Meta apps as well.
    Are your chats used for targeted ads? Meta’s privacy policy says it targets ads based on the information it collects, including interactions with AI.
    How long does it keep your data? Indefinitely.

    Perplexity

    By default: Uses your data to train AI.
    Can humans review your chats? Perplexity’s 
    privacy policy does not mention human review.
    Can you disable AI training? Yes. Go to Account > Preferences and disable “AI data retention.”
    Is there a private chat mode? Yes. Click your profile icon, then select “Incognito” under your account name.
    Can you share chats with others? Yes, by generating a shareable link.
    Are your chats used for targeted ads? Yes. Perplexity says it may share your information with third-party advertising partners and may collect from other sources (for instance, data brokers) to improve its ad targeting.
    How long does it keep your data? Until you delete your account.

    Duck.AI

    By default: Duck.AI doesn’t use your data to train AI, thanks to deals with major providers.
    Can humans review your chats? No.
    Can you disable AI training? Not applicable.
    Is there a private chat mode? No. You must delete previous chats individually or all at once through the sidebar.
    Can you share chats with others? No.
    Are your chats used for targeted ads? No.
    How long does it keep your data? Model providers keep anonymized data for up to 30 days, unless needed for legal or safety reasons.

    Proton Lumo

    By default: Proton Lumo doesn’t use your data to train AI.
    Can humans review your chats? No.
    Can you disable AI training? Not applicable.
    Is there a private chat mode? Yes. Click the glasses icon at the top right.
    Can you share chats with others? No.
    Are your chats used for targeted ads? No.
    How long does it keep your data? Proton does not store logs of your chats.

    By Jared Newman

    This article originally appeared in Inc.’s sister publication, Fast Company.

    Fast Company is the world’s leading business media brand, with an editorial focus on innovation in technology, leadership, world changing ideas, creativity, and design. Written for and about the most progressive business leaders, Fast Company inspires readers to think expansively, lead with purpose, embrace change, and shape the future of business.

    Fast Company

    Source link

  • Claude can now edit and create files, including Excel spreadsheets

    Anthropic has begun rolling out a small but significant update to Claude. Starting today you can use the chatbot to create and edit Excel spreadsheets, documents, PowerPoint slide decks and PDFs. In the past, Claude offered rudimentary file support, but now you can interact with any documents you need to modify directly through the chatbot. The new functionality is part of a feature preview you can try out as long as you have a Max, Team or Education subscription. Sorry, Pro and free users, you’ll have to wait. The preview will roll out to Pro subscribers “in the coming weeks,” with no timeline yet for when free users can expect access.

    “We’ve given Claude access to a private computer environment where it can write code and run programs to produce the files and analyses you need. This transforms Claude from an advisor into an active collaborator. You bring the context and strategy; Claude handles the technical implementation behind the scenes,” says Anthropic of how it built the feature. “This shows where we’re headed: making sophisticated multi-step work accessible through conversation. As these capabilities expand, the gap between idea and execution will keep shrinking.”

    To check out the preview, toggle “Upgraded file creation and analysis” in the settings menu, which you can find by first selecting “Features” and then “Experimental.” You can then upload or describe the file you’d like Claude to create or edit for you, and download Claude’s creation once you’re happy with the result.

    Igor Bonifacic

    Source link

  • AI company Anthropic to pay authors $1.5 billion over pirated books used to train chatbots

    Artificial intelligence company Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot.Related video above: The risks to children under President Trump’s new AI policyThe landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement.The company has agreed to pay authors or publishers about $3,000 for each of an estimated 500,000 books covered by the settlement.”As best as we can tell, it’s the largest copyright recovery ever,” said Justin Nelson, a lawyer for the authors. “It is the first of its kind in the AI era.”A trio of authors — thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson — sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude.A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn’t illegal but that Anthropic wrongfully acquired millions of books through pirate websites. If Anthropic had not settled, experts say losing the case after a scheduled December trial could have cost the San Francisco-based company even more money.”We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business,” said William Long, a legal analyst for Wolters Kluwer.U.S. District Judge William Alsup of San Francisco has scheduled a Monday hearing to review the settlement terms.Anthropic said in a statement Friday that the settlement, if approved, “will resolve the plaintiffs’ remaining legacy claims.””We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems,” said Aparna Sridhar, the company’s deputy general counsel.As part of the settlement, the company has also agreed to destroy the original book files it downloaded.Books are known to be important sources of data — in essence, billions of words carefully strung together — that are needed to build the AI large language models behind chatbots like Anthropic’s Claude and its chief rival, OpenAI’s ChatGPT. Alsup’s June ruling found that Anthropic had downloaded more than 7 million digitized books that it “knew had been pirated.” It started with nearly 200,000 from an online library called Books3, assembled by AI researchers outside of OpenAI to match the vast collections on which ChatGPT was trained.Debut thriller novel “The Lost Night” by Bartz, a lead plaintiff in the case, was among those found in the dataset.Anthropic later took at least 5 million copies from the pirate website Library Genesis, or LibGen, and at least 2 million copies from the Pirate Library Mirror, Alsup wrote.The Authors Guild told its thousands of members last month that it expected “damages will be minimally $750 per work and could be much higher” if Anthropic was found at trial to have willfully infringed their copyrights. The settlement’s higher award — approximately $3,000 per work — likely reflects a smaller pool of affected books, after taking out duplicates and those without copyright. On Friday, Mary Rasenberger, CEO of the Authors Guild, called the settlement “an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors’ works to train their AI, robbing those least able to afford it.” The Danish Rights Alliance, which successfully fought to take down one of those shadow libraries, said Friday that the settlement would be of little help to European writers and publishers whose works aren’t registered with the U.S. Copyright Office.”On the one hand, it’s comforting to see that compiling AI training datasets by downloading millions of books from known illegal file-sharing sites comes at a price,” said Thomas Heldrup, the group’s head of content protection and enforcement.On the other hand, Heldrup said it fits a tech industry playbook to grow a business first and later pay a relatively small fine, compared to the size of the business, for breaking the rules.”It is my understanding that these companies see a settlement like the Anthropic one as a price of conducting business in a fiercely competitive space,” Heldrup said.The privately held Anthropic, founded by ex-OpenAI leaders in 2021, earlier this week put its value at $183 billion after raising another $13 billion in investments.Anthropic also said it expects to make $5 billion in sales this year, but, like OpenAI and many other AI startups, it has never reported making a profit, relying instead on investors to back the high costs of developing AI technology for the expectation of future payoffs.The settlement could influence other disputes, including an ongoing lawsuit by authors and newspapers against OpenAI and its business partner Microsoft, and cases against Meta and Midjourney. And just as the Anthropic settlement terms were filed, another group of authors sued Apple on Friday in the same San Francisco federal court.”This indicates that maybe for other cases, it’s possible for creators and AI companies to reach settlements without having to essentially go for broke in court,” said Long, the legal analyst.The industry, including Anthropic, had largely praised Alsup’s June ruling because he found that training AI systems on copyrighted works so chatbots can produce their own passages of text qualified as “fair use” under U.S. copyright law because it was “quintessentially transformative.”Comparing the AI model to “any reader aspiring to be a writer,” Alsup wrote that Anthropic “trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different.”But documents disclosed in court showed Anthropic employees’ internal concerns about the legality of their use of pirate sites. The company later shifted its approach and hired Tom Turvey, the former Google executive in charge of Google Books, a searchable library of digitized books that successfully weathered years of copyright battles.With his help, Anthropic began buying books in bulk, tearing off the bindings and scanning each page before feeding the digitized versions into its AI model, according to court documents. That was legal but didn’t undo the earlier piracy, according to the judge.

    Artificial intelligence company Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot.

    Related video above: The risks to children under President Trump’s new AI policy

    The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement.

    The company has agreed to pay authors or publishers about $3,000 for each of an estimated 500,000 books covered by the settlement.

    “As best as we can tell, it’s the largest copyright recovery ever,” said Justin Nelson, a lawyer for the authors. “It is the first of its kind in the AI era.”

    A trio of authors — thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson — sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude.

    A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn’t illegal but that Anthropic wrongfully acquired millions of books through pirate websites.

    If Anthropic had not settled, experts say losing the case after a scheduled December trial could have cost the San Francisco-based company even more money.

    “We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business,” said William Long, a legal analyst for Wolters Kluwer.

    U.S. District Judge William Alsup of San Francisco has scheduled a Monday hearing to review the settlement terms.

    Anthropic said in a statement Friday that the settlement, if approved, “will resolve the plaintiffs’ remaining legacy claims.”

    “We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems,” said Aparna Sridhar, the company’s deputy general counsel.

    As part of the settlement, the company has also agreed to destroy the original book files it downloaded.

    Books are known to be important sources of data — in essence, billions of words carefully strung together — that are needed to build the AI large language models behind chatbots like Anthropic’s Claude and its chief rival, OpenAI’s ChatGPT.

    Alsup’s June ruling found that Anthropic had downloaded more than 7 million digitized books that it “knew had been pirated.” It started with nearly 200,000 from an online library called Books3, assembled by AI researchers outside of OpenAI to match the vast collections on which ChatGPT was trained.

    Debut thriller novel “The Lost Night” by Bartz, a lead plaintiff in the case, was among those found in the dataset.

    Anthropic later took at least 5 million copies from the pirate website Library Genesis, or LibGen, and at least 2 million copies from the Pirate Library Mirror, Alsup wrote.

    The Authors Guild told its thousands of members last month that it expected “damages will be minimally $750 per work and could be much higher” if Anthropic was found at trial to have willfully infringed their copyrights. The settlement’s higher award — approximately $3,000 per work — likely reflects a smaller pool of affected books, after taking out duplicates and those without copyright.

    On Friday, Mary Rasenberger, CEO of the Authors Guild, called the settlement “an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors’ works to train their AI, robbing those least able to afford it.”

    The Danish Rights Alliance, which successfully fought to take down one of those shadow libraries, said Friday that the settlement would be of little help to European writers and publishers whose works aren’t registered with the U.S. Copyright Office.

    “On the one hand, it’s comforting to see that compiling AI training datasets by downloading millions of books from known illegal file-sharing sites comes at a price,” said Thomas Heldrup, the group’s head of content protection and enforcement.

    On the other hand, Heldrup said it fits a tech industry playbook to grow a business first and later pay a relatively small fine, compared to the size of the business, for breaking the rules.

    “It is my understanding that these companies see a settlement like the Anthropic one as a price of conducting business in a fiercely competitive space,” Heldrup said.

    The privately held Anthropic, founded by ex-OpenAI leaders in 2021, earlier this week put its value at $183 billion after raising another $13 billion in investments.

    Anthropic also said it expects to make $5 billion in sales this year, but, like OpenAI and many other AI startups, it has never reported making a profit, relying instead on investors to back the high costs of developing AI technology for the expectation of future payoffs.

    The settlement could influence other disputes, including an ongoing lawsuit by authors and newspapers against OpenAI and its business partner Microsoft, and cases against Meta and Midjourney. And just as the Anthropic settlement terms were filed, another group of authors sued Apple on Friday in the same San Francisco federal court.

    “This indicates that maybe for other cases, it’s possible for creators and AI companies to reach settlements without having to essentially go for broke in court,” said Long, the legal analyst.

    The industry, including Anthropic, had largely praised Alsup’s June ruling because he found that training AI systems on copyrighted works so chatbots can produce their own passages of text qualified as “fair use” under U.S. copyright law because it was “quintessentially transformative.”

    Comparing the AI model to “any reader aspiring to be a writer,” Alsup wrote that Anthropic “trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different.”

    But documents disclosed in court showed Anthropic employees’ internal concerns about the legality of their use of pirate sites. The company later shifted its approach and hired Tom Turvey, the former Google executive in charge of Google Books, a searchable library of digitized books that successfully weathered years of copyright battles.

    With his help, Anthropic began buying books in bulk, tearing off the bindings and scanning each page before feeding the digitized versions into its AI model, according to court documents. That was legal but didn’t undo the earlier piracy, according to the judge.

    Source link

  • Anthropic Agrees to $1.5 Billion Settlement for Downloading Pirated Books to Train AI

    Anthropic has agreed to pay $1.5 billion to settle a lawsuit brought by authors and publishers over its use of millions of copyrighted books to train the models for its AI chatbot Claude, according to a legal filing posted online.

    A federal judge found in June that Anthropic’s use of 7 million pirated books was protected under fair use but that holding the digital works in a “central library” violated copyright law. The judge ruled that executives at the company knew they were downloading pirated works, and a trial was scheduled for December.

    The settlement, which was presented to a federal judge on Friday, still needs final approval but would pay $3,000 per book to hundreds of thousands of authors, according to the New York Times. The $1.5 billion settlement would be the largest payout in the history of U.S. copyright law, though the amount paid per work has often been higher. For example, in 2012, a woman in Minnesota paid about $9,000 per song downloaded, a figure brought down after she was initially ordered to pay over $60,000 per song.

    In a statement to Gizmodo on Friday, Anthropic touted the earlier ruling from June that it was engaging in fair use by training models with millions of books.

    “In June, the District Court issued a landmark ruling on AI development and copyright law, finding that Anthropic’s approach to training AI models constitutes fair use,” Aparna Sridhar, deputy general counsel at Anthropic, said in a statement by email.

    “Today’s settlement, if approved, will resolve the plaintiffs’ remaining legacy claims. We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems,” Sridhar continued.

    According to the legal filing, Anthropic says the payments will go out in four tranches tied to court-approved milestones. The first payment would be $300 million within five days after the court’s preliminary approval of the settlement, and another $300 million within five days of the final approval order. Then $450 million would be due, with interest, within 12 months of the preliminary order. And finally $450 million within the year after that.

    Anthropic, which was recently valued at $183 billion, is still facing lawsuits from companies like Reddit, which struck a deal in early 2024 to let Google train its AI models on the platform’s content. And authors still have active lawsuits against the other big tech firms like OpenAI, Microsoft, and Meta.

    The ruling from June explained that Anthropic’s training of AI models with copyrighted books would be considered fair use under U.S. copyright law because theoretically someone could read “all the modern-day classics” and emulate them, which would be protected:

    …not reproduced to the public a given work’s creative elements, nor even one author’s identifiable expressive style…Yes, Claude has outputted grammar, composition, and style that the underlying LLM distilled from thousands of works. But if someone were to read all the modern-day classics because of their exceptional expression, memorize them, and then emulate a blend of their best writing, would that violate the Copyright Act? Of course not.

    “Like any reader aspiring to be a writer, Anthropic’s LLMs trained upon works not to race ahead and replicate or supplant them—but to turn a hard corner and create something different,” the ruling said.

    Under this legal theory, all the company needed to do was buy every book it pirated to lawfully train its models, something that certainly costs less than $3,000 per book. But as the New York Times notes, this settlement won’t set any legal precedent that could determine future cases because it isn’t going to trial.

    Matt Novak

    Source link

  • Screw the money — Anthropic’s $1.5B copyright settlement sucks for writers | TechCrunch

    Around half a million writers will be eligible for a payday of at least $3,000, thanks to a historic $1.5 billion settlement in a class action lawsuit that a group of authors brought against Anthropic.

    This landmark settlement marks the largest payout in the history of U.S. copyright law, but this isn’t a victory for authors — it’s yet another win for tech companies.

    Tech giants are racing to amass as much written material as possible to train their LLMs, which power groundbreaking AI chat products like ChatGPT and Claude — the same products that are endangering the creative industries, even if their outputs are milquetoast. These AIs can become more sophisticated when they ingest more data, but after scraping basically the entire internet, these companies are literally running out of new information.

    That’s why Anthropic, the company behind Claude, pirated millions of books from “shadow libraries” and fed them into its AI. This particular lawsuit, Bartz v. Anthropic, is one of dozens filed against companies like Meta, Google, OpenAI, and Midjourney over the legality of training AI on copyrighted works.

    But writers aren’t getting this settlement because their work was fed to an AI — this is just a costly slap on the wrist for Anthropic, a company that just raised another $13 billion, because it illegally downloaded books instead of buying them.

    In June, federal judge William Alsup sided with Anthropic and ruled that it is, indeed, legal to train AI on copyrighted material. The judge argues that this use case is “transformative” enough to be protected by the fair use doctrine, a carve-out of copyright law that hasn’t been updated since 1976.

    “Like any reader aspiring to be a writer, Anthropic’s LLMs trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different,” the judge said.

    It was the piracy — not the AI training — that moved Judge Alsup to bring the case to trial, but with Anthropic’s settlement, a trial is no longer necessary.

    “Today’s settlement, if approved, will resolve the plaintiffs’ remaining legacy claims,” said Aparna Sridhar, deputy general counsel at Anthropic, in a statement. “We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems.”

    As dozens more cases over the relationship between AI and copyrighted works go to court, judges now have Bartz v. Anthropic to reference as a precedent. But given the ramifications of these decisions, maybe another judge will arrive at a different conclusion.

    Amanda Silberling

    Source link

  • Anthropic users face a new choice – opt out or share your chats for AI training | TechCrunch

    Anthropic is making some big changes to how it handles user data, requiring all Claude users to decide by September 28 whether they want their conversations used to train AI models. While the company directed us to its blog post on the policy changes when asked about what prompted the move, we’ve formed some theories of our own.

    But first, what’s changing: Previously, Anthropic didn’t use consumer chat data for model training. Now, the company wants to train its AI systems on user conversations and coding sessions, and it said it’s extending data retention to five years for those who don’t opt out.

    That is a massive update. Previously, users of Anthropic’s consumer products were told that their prompts and conversation outputs would be automatically deleted from Anthropic’s back end within 30 days “unless legally or policy‑required to keep them longer” or their input was flagged as violating its policies, in which case a user’s inputs and outputs might be retained for up to two years.

    By consumer, we mean the new policies apply to Claude Free, Pro, and Max users, including those using Claude Code. Business customers using Claude Gov, Claude for Work, Claude for Education, or API access will be unaffected, which is how OpenAI similarly protects enterprise customers from data training policies.

    So why is this happening? In that post about the update, Anthropic frames the changes around user choice, saying that by not opting out, users will “help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations.” Users will “also help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users.”

    In short, help us help you. But the full truth is probably a little less selfless.

    Like every other large language model company, Anthropic needs data more than it needs people to have fuzzy feelings about its brand. Training AI models requires vast amounts of high-quality conversational data, and accessing millions of Claude interactions should provide exactly the kind of real-world content that can improve Anthropic’s competitive positioning against rivals like OpenAI and Google.

    Techcrunch event

    San Francisco
    |
    October 27-29, 2025

    Beyond the competitive pressures of AI development, the changes would also seem to reflect broader industry shifts in data policies, as companies like Anthropic and OpenAI face increasing scrutiny over their data retention practices. OpenAI, for instance, is currently fighting a court order that forces the company to retain all consumer ChatGPT conversations indefinitely, including deleted chats, because of a lawsuit filed by The New York Times and other publishers.

    In June, OpenAI COO Brad Lightcap called this “a sweeping and unnecessary demand” that “fundamentally conflicts with the privacy commitments we have made to our users.” The court order affects ChatGPT Free, Plus, Pro, and Team users, though enterprise customers and those with Zero Data Retention agreements are still protected.

    What’s alarming is how much confusion all of these changing usage policies are creating for users, many of whom remain oblivious to them.

    In fairness, everything is moving quickly now, so as the tech changes, privacy policies are bound to change. But many of these changes are fairly sweeping and mentioned only fleetingly amid the companies’ other news. (You wouldn’t think Tuesday’s policy changes for Anthropic users were very big news based on where the company placed this update on its press page.)

    Image Credits:Anthropic

    But many users don’t realize the guidelines to which they’ve agreed have changed because the design practically guarantees it. Most ChatGPT users keep clicking on “delete” toggles that aren’t technically deleting anything. Meanwhile, Anthropic’s implementation of its new policy follows a familiar pattern.

    How so? New users will choose their preference during signup, but existing users face a pop-up with “Updates to Consumer Terms and Policies” in large text and a prominent black “Accept” button with a much tinier toggle switch for training permissions below in smaller print — and automatically set to “On.”

    As observed earlier today by The Verge, the design raises concerns that users might quickly click “Accept” without noticing they’re agreeing to data sharing.

    Meanwhile, the stakes for user awareness couldn’t be higher. Privacy experts have long warned that the complexity surrounding AI makes meaningful user consent nearly unattainable. Under the Biden administration, the Federal Trade Commission even stepped in, warning that AI companies risk enforcement action if they engage in “surreptitiously changing its terms of service or privacy policy, or burying a disclosure behind hyperlinks, in legalese, or in fine print.”

    Whether the commission — now operating with just three of its five commissioners — still has its eye on these practices today is an open question, one we’ve put directly to the FTC.

    Connie Loizos

    Source link

  • Anthropic launches a Claude AI agent that lives in Chrome | TechCrunch

    Anthropic is launching a research preview of a browser-based AI agent powered by its Claude AI models, the company announced on Tuesday. The agent, Claude for Chrome, is rolling out to a group of 1,000 subscribers on Anthropic’s Max plan, which costs between $100 and $200 per month. The company is also opening a waitlist for other interested users.

    By adding an extension to Chrome, select users can now chat with Claude in a sidecar window that maintains context of everything happening in their browser. Users can also give the Claude agent permission to take actions in their browser and complete some tasks on their behalf.

    The browser is quickly becoming the next battleground for AI labs, which aim to use browser integrations to offer more seamless connections between AI systems and their users. Perplexity recently launched its own browser, Comet, which features an AI agent that can offload tasks for users. OpenAI is reportedly close to launching its own AI-powered browser, which is rumored to have similar features to Comet. Meanwhile, Google has launched Gemini integrations with Chrome in recent months.

    The race to develop AI-powered browsers is especially pressing given Google’s looming antitrust case, in which a final decision is expected any day now. The federal judge in the case has suggested he may force Google to sell its Chrome browser. Perplexity submitted an unsolicited $34.5 billion offer for Chrome, and OpenAI CEO Sam Altman suggested his company would be willing to buy it as well.

    In the Tuesday blog post, Anthropic warned that the rise of AI agents with browser access poses new safety risks. Last week, Brave’s security team said it found that Comet’s browser agent could be vulnerable to indirect prompt-injection attacks, where hidden code on a website could trick the agent into executing malicious instructions when it processed the page.

    (Perplexity’s head of communications, Jesse Dwyer, told TechCrunch in an email that the vulnerability Brave raised has been fixed.)

    Anthropic says it hopes to use this research preview as a chance to catch and address novel safety risks; however, the company has already introduced several defenses against prompt injection attacks. The company says its interventions reduced the success rate of prompt injection attacks from 23.6% to 11.2%.

    Techcrunch event

    San Francisco
    |
    October 27-29, 2025

    For example, Anthropic says users can limit Claude’s browser agent from accessing certain sites in the app’s settings, and the company has, by default, blocked Claude from accessing websites that offer financial services, adult content, and pirated content. The company also says that Claude’s browser agent will ask for user permission before “taking high-risk actions like publishing, purchasing, or sharing personal data.”

    This isn’t Anthropic’s first foray into AI models that can control your computer screen. In October 2024, the company launched an AI agent that could control your PC — however, testing at the time revealed that the model was quite slow and unreliable.

    The capabilities of agentic AI models have improved quite a bit since then. TechCrunch has found that modern browser-using AI agents, such as Comet and ChatGPT Agent, are fairly reliable at offloading simple tasks for users. However, many of these agentic systems still struggle with more complex problems.

    Maxwell Zeff

    Source link