ReportWire

Tag: Claude Code

  • Anthropic launches Claude Cowork, a version of its coding AI for regular people

    If you follow Anthropic, you’re probably familiar with Claude Code. Since the fall of 2024, the company has been training its AI models to use and navigate computers like a human would, and the coding agent has been the most practical expression of that work, giving developers a way to automate rote programming tasks. Starting today, Anthropic is giving regular people a way to take advantage of those capabilities, with the release of a new preview feature called Claude Cowork.

    The company is billing Cowork as “a simpler way for anyone — not just developers — to work with Claude.” After you give the system access to a folder on your computer, it can read, edit or create new files in that folder on your behalf.

    Anthropic gives a few different example use cases for Cowork. For instance, you could ask Claude to organize your downloads folder, telling it to rename the files contained within to something that’s easier to parse at a glance. Another example: you could use Claude to turn screenshots of receipts and invoices into a spreadsheet for tracking expenses. Cowork can also navigate websites — provided you install Claude’s Chrome plugin — and can use Anthropic’s Connectors framework to access third-party apps like Canva.

    “Cowork is designed to make using Claude for new work as simple as possible. You don’t need to keep manually providing context or converting Claude’s outputs into the right format,” the company said. “Nor do you have to wait for Claude to finish before offering further ideas or feedback: you can queue up tasks and let Claude work through them in parallel.”

    If the idea of granting Claude access to your computer sounds ill-advised, Anthropic says Claude “can’t read or edit anything you don’t give it explicit access to.” However, the company does note the system can “take potentially destructive actions,” such as deleting a file that is important to you or misinterpreting your instructions. For that reason, Anthropic suggests it’s best to give “very clear” guidance to Claude.

    Anthropic isn’t the first to offer a computer agent. Microsoft, for example, has been pushing Copilot hard for nearly three years, despite seemingly limited adoption. For Anthropic, the challenge will be convincing people these tools are useful where others have failed. The fact that Claude Code has been widely embraced by programmers may make that task easier.

    For now, Anthropic is giving users of its pricey Claude Max subscription first access to the preview. If you want to try Cowork for yourself, you’ll also need a Mac with the Claude macOS app installed. For everyone else, you’ll need to join a wait list.  

    Igor Bonifacic


  • Claude’s Chrome plugin is now available to all paid users

    Anthropic is finally letting more people use Claude in Google Chrome. The company’s AI browser plugin is expanding beyond $200-per-month Max subscribers and is now available to anyone who pays for a Claude subscription.

    The Claude Chrome plugin allows for easy access to Anthropic’s AI regardless of where you are on the web, but its real draw is how it lets Claude navigate and use websites on your behalf. Anthropic says that Claude can fill out forms, manage your calendar and email, and complete multi-step workflows based on a prompt. The latest version of the plugin also features integration with Claude Code, Anthropic’s AI coding tool, and allows users to record a workflow and “teach” Claude how to do what they want it to do.

    Before agents were the buzzword du jour, “computer use,” the ability for AI models to understand and interact with computer interfaces, was a major focus at Anthropic and other AI companies. Now computer use is just one tool in the larger tool bag for agents, but that understanding of what digital buttons to click and how to click them is what makes Claude’s Chrome plugin possible.

    OpenAI and Perplexity offer similar agentic capabilities in their respective ChatGPT Atlas and Comet browsers. At this point the only AI company not fully setting its AI models loose on a browser is Google. You can access Gemini in Google Chrome and ask questions about a webpage, but Google hasn’t yet let its AI model navigate or use the web on a user’s behalf. Those features, first demoed in Project Mariner, are presumably on the way.

    Ian Carlos Campbell


  • Anthropic’s AI was used by Chinese hackers to run a cyberattack

    A few months ago, Anthropic published a report detailing how its Claude AI model had been weaponized in a “vibe hacking” extortion scheme. The company has continued to monitor how the agentic AI is being used to coordinate cyberattacks, and now reports that a state-backed group of hackers in China used Claude in an attempted infiltration of 30 corporate and political targets around the world, with some success.

    In what it labeled “the first documented case of a large-scale cyberattack executed without substantial human intervention,” Anthropic said that the hackers first chose their targets, which included unnamed tech companies, financial institutions and government agencies. They then used Claude Code to develop an automated attack framework, after successfully bypassing the model’s training to avoid harmful behavior. They achieved this by breaking the planned attack into smaller tasks that didn’t obviously reveal their wider malicious intent, and by telling Claude it was working for a cybersecurity firm using the AI for defensive training purposes.

    Anthropic said that after writing its own exploit code, Claude was able to steal usernames and passwords that allowed it to extract “a large amount of private data” through backdoors it had created. The obedient AI reportedly even went to the trouble of documenting the attacks and storing the stolen data in separate files.

    The hackers used AI for 80-90 percent of their operation, only occasionally intervening, and Claude was able to orchestrate an attack in far less time than human operators could have. It wasn’t flawless, with some of the information it obtained turning out to be publicly available, but Anthropic said that attacks like this will likely become more sophisticated and effective over time.

    You might be wondering why an AI company would want to publicize the dangerous potential of its own technology, but Anthropic says its investigation also acts as evidence of why the assistant is “crucial” for cyber defense. It said Claude was successfully used to analyze the threat level of the data it collected, and ultimately sees it as a tool that can assist cybersecurity professionals when future attacks happen.

    Claude is by no means the only AI that has benefited cybercriminals. Last year, OpenAI said that its generative AI tools were being used by hacker groups with ties to China and North Korea. They reportedly used generative AI to assist with code debugging, researching potential targets and drafting phishing emails. OpenAI said at the time that it had blocked the groups’ access to its systems.

    Matt Tate


  • Anthropic brings Claude Code to iOS and the web

    At the end of February, Anthropic announced Claude Code. In the eight months since then, the coding agent has arguably become the company’s most important product, helping it carve out a niche for itself in the highly competitive AI market. Now, Anthropic is making it easier for developers to use Claude Code in more places with a new web interface for accessing the agent.

    To get started, you’ll need to connect Claude to your GitHub repositories. From there, the process of using the agent is the same as if it had direct terminal access. Describe what you need from it, and the agent will take it from there. Claude will provide progress updates while it works, and you can even steer it in real time with additional prompts. Through the web interface, it’s also possible to assign Claude multiple coding tasks to run in parallel.

    “Every Claude Code task runs in an isolated sandbox environment with network and filesystem restrictions. Git interactions are handled through a secure proxy service that ensures Claude can only access authorized repositories — helping keep your code and credentials protected throughout the entire workflow,” said Anthropic.

    In addition to making Claude Code available on the web, Anthropic is releasing a preview of the agent inside of its iOS app. The company warns the integration is early, and that it hopes “to quickly refine the mobile experience based on your feedback.”

    Pro and Max users can start using Claude Code on the web today. Anthropic notes that cloud sessions share the same rate limits as all other Claude Code usage.

    Igor Bonifacic


  • Claude Sonnet 4.5 is Anthropic’s safest AI model yet

    In May, Anthropic announced two new AI systems, Opus 4 and Sonnet 4. Now, less than six months later, the company is introducing Sonnet 4.5, and calling it the best coding model in the world to date. Anthropic’s basis for that claim is a selection of benchmarks where the new AI outperforms not only its predecessor but also the more expensive Opus 4.1 and competing systems, including Google’s Gemini 2.5 Pro and GPT-5 from OpenAI. For instance, in OSWorld, a suite that tests AI models on real-world computer tasks, Sonnet 4.5 set a record score of 61.4 percent, putting it 17 percentage points above Opus 4.1. 

    At the same time, the new model is capable of autonomously working on multi-step projects for more than 30 hours, a significant improvement from the seven or so hours Opus 4 could maintain at launch. That’s an important milestone for the type of agentic systems Anthropic wants to build. 

    Sonnet 4.5 outperforms Anthropic’s older models in coding and agentic tasks.

    (Anthropic)

    Perhaps more importantly, the company claims Sonnet 4.5 is its safest AI system to date, with the model having undergone “extensive” safety training. That training translates to a chatbot Anthropic says is “substantially” less prone to “sycophancy, deception, power-seeking and the tendency to encourage delusional thinking” — all potential model traits that have landed OpenAI in hot water in recent months. At the same time, Anthropic has strengthened Sonnet 4.5’s protections against prompt injection attacks. Due to the sophistication of the new model, Anthropic is releasing Sonnet 4.5 under its AI Safety Level 3 framework, meaning it comes with filters designed to prevent potentially dangerous outputs related to prompts around chemical, biological and nuclear weapons.  

    A chart showing how Sonnet 4.5 compares against other frontier models in safety testing.

    (Anthropic)

    With today’s announcement, Anthropic is also rolling out quality-of-life improvements across the Claude product stack. To start, Claude Code, the company’s popular coding agent, has a refreshed terminal interface, with a new feature called checkpoints included. As you can probably guess from the name, they allow you to save your progress and roll back to a previous state if Claude writes some funky code that isn’t quite working like you imagined it would. File creation, which Anthropic began rolling out at the start of the month, is now available to all Pro users, and if you joined the waitlist for Claude for Chrome, you can start using the extension today.

    API pricing for Sonnet 4.5 remains at $3 per one million input tokens and $15 for the same amount of output tokens. The release of Sonnet 4.5 caps off a strong September for Anthropic. Just one day after Microsoft added Claude models to Copilot 365 last week, OpenAI admitted its rival offers the best AI for work-related tasks.
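    Those per-million-token rates make per-request costs easy to estimate. A minimal sketch in Python, using the quoted $3/$15 rates (the token counts in the example are hypothetical):

    ```python
    # Published Sonnet 4.5 API rates: $3 per million input tokens,
    # $15 per million output tokens.
    INPUT_RATE = 3.00 / 1_000_000    # dollars per input token
    OUTPUT_RATE = 15.00 / 1_000_000  # dollars per output token

    def estimate_cost(input_tokens: int, output_tokens: int) -> float:
        """Return the estimated API cost in dollars for one request."""
        return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

    # Example: a 20,000-token prompt that produces a 4,000-token reply.
    print(f"${estimate_cost(20_000, 4_000):.2f}")  # -> $0.12
    ```

    At these rates, a million tokens in and a million tokens out costs $18, which is why output-heavy agentic workloads dominate the bill.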
