ReportWire

Tag: anthropic

  • Goldman Sachs’ Information Chief Marco Argenti Deepens A.I. Push with Anthropic


    Marco Argenti says A.I. agents are becoming “digital co-workers” across Goldman’s operations. Courtesy Goldman Sachs

    Marco Argenti, chief information officer at Goldman Sachs, is leading one of Wall Street’s most aggressive integrations of A.I. He has made a name for himself as an early adopter of A.I. in finance through initiatives like the GS AI Assistant platform, which is offered to Goldman Sachs employees for tasks such as coding and translation, and last year’s pilot of A.I. software engineer Devin, made by Cognition Labs. More recently, the investment bank has been collaborating with Anthropic, using its Claude model primarily in its accounting and compliance departments, Argenti said in an interview with CNBC published today (Feb. 6).

    The goal is to speed up tasks that involve massive amounts of data without investing in more manpower. “Think of it as a digital co-worker for many of the professions within the firm that are scaled, complex and very process-intensive,” Argenti said.

    Argenti spent much of his career in the tech and cloud computing industries before joining Goldman Sachs in 2019. He previously served as vice president of technology at Amazon Web Services, overseeing serverless computing and virtual reality. Earlier in his career, he led developer experiences at Nokia.

    Anthropic is known for its A.I. coding assistant, which is widely used by engineers. Goldman Sachs quickly realized that the traits that make a good coder—such as applying logic and working with large volumes of complex data—could be applied to tasks across accounting and compliance, Argenti said. Outside those departments, Claude agents could also be used for employee surveillance and creating investment banking pitchbooks for clients, he revealed.

    Goldman Sachs and Anthropic did not respond to requests from Observer to comment on those efforts.

    A collaboration with Goldman Sachs is the latest win for Anthropic, which has positioned itself as an enterprise-focused A.I. company. Earlier this week, the startup’s release of coworking software with various industry plug-ins triggered a panic selloff in enterprise software stocks, as investors worried such tools could make existing products obsolete.

    Other Wall Street giants are also embracing A.I. agents. JPMorgan Chase currently has more than 500 A.I. use cases, ranging from customer service to idea generation and marketing, and draws upon models from both Anthropic and OpenAI to power its internal LLM Suite program. Morgan Stanley was an early client of OpenAI, using its tech to distill meeting notes, aid financial research and boost coding productivity.

    A.I.’s use in financial services has grown each year since 2022, according to a recent Nvidia survey, in which 100 percent of industry professionals said A.I. spending will either stay the same or increase in 2026. A.I. agents, in particular, are being used or assessed by 42 percent of respondents. Top workflows include knowledge management and retrieval, internal process optimization and customer support automation.

    Such widespread adoption will inevitably lead to industry-wide labor shifts. A.I. leaders and studies alike have warned that the technology could reshape or eliminate entry-level white-collar roles. It’s unclear how the use of A.I. would affect Goldman Sachs’ employees. But Argenti conceded that A.I.’s advancements could eliminate the need for third-party providers.



    Alexandra Tremayne-Pengelly


  • Explainer: Just why have Anthropic’s latest Claude plugins spooked everyone? – Tech Digest




    Depending on your point of view (glass half full, glass half empty), AI either promises a golden age of limitless productivity or a desolate future of professional obsolescence.

    This week, as Anthropic unleashed the latest “plugins” for its Claude chatbot, the stock market – and even many Anthropic employees – appeared to cast their vote for the latter.

    In what traders are already calling the “SaaS apocalypse,” the release of these semi-autonomous agents triggered a seismic shift in global finance, wiping nearly $300 billion off software stocks in a single day and fuelling fears that AI’s apocalyptic jobs prophecy is moving from theory to reality.

    Anthropic, the $350 billion Silicon Valley titan, launched the plugins with uncharacteristic subtlety. Billed as specialist assistants for legal, marketing, finance, and data analysis, the tools claim to represent the evolution of AI from a conversational engine into a “digital co-worker.”

    Importantly, unlike previous bots that required human prompting at every turn, these “agentic” AIs are designed to carry out complex business tasks autonomously, operating across a company’s entire digital infrastructure without needing any of that arcane coding nonsense.

    Rightmove: one of the companies affected by the sell-off

    Massive software sell-off

    However, the reaction from Wall Street and the City of London was swift. Investors, fearing that these agents will render traditional software platforms redundant, staged a massive sell-off. Legacy giants that provide mundane but essential tools for accounting and data entry, such as SAP, Sage, and Relx, saw their valuations crater.

    Advertising powerhouse WPP fell 15%, while the property portal Rightmove dropped 10% on fears that AI could soon automate the entire house-hunting process. Specialist investors, including the Finsbury Growth and Income Trust, have been caught in the crosshairs, as the “boring-but-reliable” software stocks that anchor many pension funds were unceremoniously dumped into the “AI loser” bucket.

    While billionaires like Nvidia’s Jensen Huang have dismissed the sell-off as “illogical,” the anxiety on the ground is palpable, even within Anthropic itself.

    In response to an internal survey in December, reports The Telegraph, one Anthropic employee frets: “In the long term, I think AI will end up doing everything and make me and many others irrelevant.” Another says: “It kind of feels like I’m coming to work every day to put myself out of a job.”

    Indeed, this sentiment is echoed by Dario Amodei, Anthropic’s CEO, who has warned that AI could eliminate up to half of all entry-level white-collar roles in the coming years.

    White-collar wipe-out

    The long-term effect on the labour market is already becoming visible. In the UK, administrative vacancies have fallen by 36,000 over the last six months and entry-level job postings have plummeted by 35% since the launch of ChatGPT. A JP Morgan study suggests a grimmer reality: AI is currently destroying more jobs in Britain than it is creating, resulting in a net loss of 8%.

    For the “Gen Z” workforce, the threat is twofold. Not only are entry-level roles being squeezed, but there is a growing fear of “deskilling.” As junior associates in law and finance find themselves relying on bots that can perform due diligence in a tenth of the time of a human, they risk becoming obsolete before they have even mastered their craft. Technology Secretary Liz Kendall recently levelled with the public, admitting that “some jobs will go.” That’s almost certainly an understatement. 

    As Anthropic’s agents begin to fill the desks of traditional white-collar workers, the era of AI as a mere novelty is over. The stock market has signalled that the displacement of human labour is no longer a distant warning: it is a priced-in reality.

    Indeed, it seems that for the modern office worker, the arrival of the AI colleague may well mean the departure of the human one. What a depressing thought as I sit here in my home office on a dreary Friday morning contemplating my journalistic future!


    For latest tech stories go to TechDigest.tv




    Chris Price


  • OpenAI boss attacks rival’s Super Bowl ads, Anthropic’s plugins wipes billions off software stocks – Tech Digest



    OpenAI boss Sam Altman (right) with Jony Ive. Altman has criticised the ads that rival Anthropic is planning to show during the Super Bowl

    The boss of ChatGPT-maker OpenAI is being ridiculed for launching a lengthy attack on a rival chatbot firm over the adverts it intends to run during the Super Bowl. Anthropic is using the ads to criticise commercials being introduced to ChatGPT, describing the move as a “betrayal”. In a 420-word post on X, OpenAI CEO Sam Altman hit back, calling Anthropic “dishonest” and “deceptive” – and even accusing the firm of using “doublespeak”. BBC

    Deepfake fraud has gone “industrial”, an analysis published by AI experts has said. Tools to create tailored, even personalised, scams – leveraging, for example, deepfake videos of Swedish journalists or the president of Cyprus – are no longer niche, but inexpensive and easy to deploy at scale, said the analysis from the AI Incident Database. It catalogued more than a dozen recent examples of “impersonation for profit”, including a deepfake video of Western Australia’s premier, Robert Cook, hawking an investment scheme, and deepfake doctors promoting skin creams.


    Anthropic, one of the biggest and most influential tech companies in the world, is launching a new model: Claude Opus 4.6. Until now, this would mostly be big news for techies, where Anthropic is admired as the maker of Claude Code, the code-writing AI tool which many engineers say is taking over their work entirely. All of a sudden, however, the impact of these tools is being felt more widely, after a seemingly small release from Anthropic shook some sections of the stock market. Sky News

    At Anthropic, the artificial intelligence (AI) business behind the Claude co-working bot, staff are increasingly uneasy about the power of their own creation. In response to an internal survey in December, one Anthropic employee frets: “In the long term, I think AI will end up doing everything and make me and many others irrelevant.” Another says: “It kind of feels like I’m coming to work every day to put myself out of a job.” Telegraph

    The UK government claims it will develop a “world-first” framework to evaluate deepfake detection technologies as AI-generated content proliferates. The Home Office is working with Microsoft, other tech corporations and academics to assess methods for identifying harmful forgeries. It estimates eight million deepfakes were shared in 2025, up from half a million in 2023. Nik Adams, Deputy Commissioner for City of London Police, called the framework “a strong and timely addition to the UK’s response to the rapidly evolving threat posed by AI and deepfake technologies.” The Register 

    The affordable iPhone 17e was earlier rumored to launch in Spring this year, but a new report now suggests the device could arrive later this month. Meanwhile, a separate report claims the phone will bring three key upgrades. According to Macwelt, citing industry sources, the iPhone 17e will be unveiled via a press release on February 19. This wouldn’t be surprising, as Apple also announced the iPhone 16e in February last year.

    iPhone 16e gets a 48MP single rear camera

    The report adds that the upcoming iPhone will support MagSafe, offering wireless charging speeds of up to 25W. It is also said to retain the notch from the iPhone 16e. GSMArena






    Chris Price


  • Anthropic Launches New Model That Spots Zero Days, Makes Wall Street Traders Lose Their Minds


    Anthropic, the makers of the popular and code-competent chatbot Claude, released a new model Thursday called Claude Opus 4.6. The company is doubling down on coding capabilities, claiming that the new model “plans more carefully, sustains agentic tasks for longer, can operate more reliably in larger codebases, and has better code review and debugging skills to catch its own mistakes.”

    It seems the model is also pretty good at catching other people’s mistakes. According to a report from Axios, Opus 4.6 was able to spot more than 500 previously undisclosed zero-day security vulnerabilities in open-source libraries during its testing period. It also reportedly did so without receiving specific prompting to go hunting for flaws—it just spotted and reported them.

    That’s a nice change of pace from all of the many developments that have been happening around OpenClaw, an open-source AI agent that most users have been running with Claude Opus 4.5. A number of vibe-coded projects that have come out of the community have had some pretty major security flaws. Maybe Anthropic’s upgrade will be able to catch those issues before they become everyone else’s problem.

    Claude’s calling card has been coding for some time now, but it seems Anthropic is looking to make a splash elsewhere with this update. The company said Opus 4.6 will be better at other work tasks like creating PowerPoint presentations and navigating documents in Excel. Seems those features will be key to Cowork, Anthropic’s recent project that it is touting as “Claude Code” for non-technical workers.

    It’s also boasting that the model will have potential use in financial analysis, and it sure seems like the folks on Wall Street could use some help there. The general consensus among financial analysts this week is that Anthropic’s Cowork models are spooking the stock market and are a major factor in sending software stocks into a spiral. It’s possible that this is what the market has been responding to—after all, the initial release of DeepSeek, the open-source AI model out of China, tanked the AI sector for a day or so, so it’s not like these markets aren’t overly sensitive.

    But it seems unlikely that Opus 4.6 will fundamentally upend the market. Anthropic already holds a solid lead on the plurality of the enterprise market, according to a recent report from Menlo Ventures, and is well ahead of its top (publicly traded) competitors in the space—though OpenAI made its own play to cut into some market share earlier today with the launch of its Frontier platform for managing AI agents. If anything, Anthropic’s new model seems like it’ll help the company maintain its top spot for the time being. But if the stock market shock is any indication, one thing is for sure: the entire economy is completely pot-committed to the developments in AI. Surely that won’t have any repercussions.


    AJ Dellinger


  • Sam Altman got exceptionally testy over Claude Super Bowl ads | TechCrunch


    Anthropic’s Super Bowl commercial, one of four ads the AI lab dropped on Wednesday, begins with the word “BETRAYAL” splashed boldly across the screen. The camera pans to a man earnestly asking a chatbot (obviously intended to depict ChatGPT) for advice on how to talk to his mom.

    The bot, portrayed by a blonde woman, offers some classic bits of advice. Start by listening. Try a nature walk! Then the spot twists into an ad for a fictitious (we hope!) cougar-dating site called Golden Encounters. Anthropic finishes the spot by saying that while ads are coming to AI, they won’t be coming to its own chatbot, Claude.

    Another one features a slight young man looking for advice on building a six-pack. After he offers his height, age, and weight, the bot serves him an ad for height-boosting insoles.

    The Anthropic commercials are cleverly aimed at OpenAI’s users, after that company’s recent announcement that ads will be coming to ChatGPT’s free tier. And they caused an immediate stir, spawning headlines that Anthropic “mocks,” “skewers” and “dunks” on OpenAI.

    They are funny enough that even Sam Altman admitted on X that he laughed at them. But he clearly didn’t really find them funny. They inspired him to write a novella-sized rant that devolved into calling his rival “dishonest” and “authoritarian.”

    In that post, Altman explains that an ad-supported tier is intended to shoulder the burden of offering free ChatGPT to many of its millions of users. ChatGPT is still the most popular chatbot by a large margin.

    But the OpenAI CEO insisted they were “dishonest” in implying that ChatGPT will twist a conversation to insert an ad (and possibly for an off-color product, to boot). “We would obviously never run ads in the way Anthropic depicts them,” Altman wrote in the social media post. “We are not stupid and we know our users would reject that.”


    Indeed, OpenAI has promised ads will be separate, labeled, and will never influence a chat. But the company has also said it is planning on making them conversation-specific — which is the central allegation of Anthropic’s ads. As OpenAI explained in its blog: “We plan to test ads at the bottom of answers in ChatGPT when there’s a relevant sponsored product or service based on your current conversation.”

    Altman then went on to fling some equally questionable assertions at his rival. “Anthropic serves an expensive product to rich people,” he wrote. “We also feel strongly that we need to bring AI to billions of people who can’t pay for subscriptions.”

    But Claude has a free chat tier, too, with subscriptions at $0, $17, $100, $200. ChatGPT’s tiers are $0, $8, $20, $200. One could argue the subscription tiers are fairly equivalent.

    Altman also alleged in his post that “Anthropic wants to control what people do with AI.” He argues it blocks usage of Claude Code from “companies they don’t like” like OpenAI, and said Anthropic tells people what they can and can’t use AI for.

    True, Anthropic’s whole marketing deal since day one has been “responsible AI.” The company was founded by OpenAI alums, after all, who claimed they grew alarmed about AI safety when they worked there.

    Still, both chatbot companies have usage policies, AI guardrails, and talk about AI safety. And, while OpenAI allows ChatGPT to be used for erotica while Anthropic does not, it, too, has determined some content should be blocked, particularly in regards to mental health.

    Yet Altman took this Anthropic-tells-you-what-to-do argument to an extreme level when he accused Anthropic of being “authoritarian.”

    “One authoritarian company won’t get us there on their own, to say nothing of the other obvious risks. It is a dark path,” he wrote.

    Using “authoritarian” in a rant over a cheeky Super Bowl ad is misplaced, at best. It’s particularly tactless when considering the current geopolitical environment in which protesters around the world have been killed by agents of their own government. While business rivals have been duking it out in ads since the beginning of time, clearly Anthropic hit a nerve.


    Julie Bort


  • Anthropic’s Claude AI Strikes F1 Partnership With Atlassian Williams


    Even AI companies see the value in sports alliances.

    Anthropic, the company behind the AI interface known as Claude, has struck a multi-year partnership deal with the Atlassian Williams F1 team, making one of the first big tie-ins between a growing artificial-intelligence sector and sports teams. Claude will be known as Williams’ “official thinking partner.”

    Financial terms were not disclosed.

    Under the agreement, Claude will be integrated across the entire Williams organization, and will support how the team plans and considers race strategy, car development and other operations. Teams can use Claude to analyze research and, potentially, build new products. Claude branding will appear on Williams cars, drivers, and team kit starting with the 2026 car reveal on February 3, ahead of the season opener in Melbourne.

    The deal has been unveiled just as F1 has changed its rules to allow for the construction of lighter, smaller and narrower vehicles.

    “Formula 1 is ultimately about the pairing of human endeavor and technical excellence,” said Mike Krieger, co-lead of Anthropic Labs, in a statement. “I’ve watched Atlassian Williams F1 Team find ways to punch above their weight for years; that’s exactly the kind of team Claude is built for.”

    Williams finished fifth last season. The alliance will draw new attention to the team, with fans and executives scrutinizing whether Claude can help boost performance and results.

    “At a time when our team is on a journey to the front, this partnership is an opportunity for us to show what’s possible when you combine elite human talent with the right frontier models,” said James Vowles, team principal at Atlassian Williams F1 Team, in a statement. “We know that there are no shortcuts to success, and look forward to working with Anthropic to continue building long-lasting performance.”

    Anthropic was encouraged by the fact that Williams is not backed by an auto company, said Andrew Sirk, Anthropic’s head of brand marketing. “They are world class problem solvers, focused on the smallest details, that’s the same drive that animates Anthropic. It’s why this partnership felt right from the first conversation,” he said.

    Founded in 1977 by Sir Frank Williams and Sir Patrick Head, the team has won nine Constructors’ World Championships, seven Drivers’ World Championships and 114 Grand Prix races.


    Brian Steinberg


  • Tech CEOs boast and bicker about AI at Davos | TechCrunch


    There were times at this week’s meeting of the World Economic Forum when Davos seemed transformed into a high-powered tech conference, with on-stage appearances by Tesla CEO Elon Musk, Nvidia CEO Jensen Huang, Anthropic CEO Dario Amodei, Microsoft CEO Satya Nadella, and even more industry executives.

    The big topic, unsurprisingly, was AI, with CEOs laying out a vision for the technology’s transformative potential while also acknowledging ongoing concerns that they’re inflating a massive bubble. Amidst all that big-picture prognostication, they also found time to take swipes at their competitors, and even at their ostensible partners.

    On the latest episode of TechCrunch’s Equity podcast, I discussed all things Davos with TechCrunch’s Kirsten Korosec and Sean O’Kane.

    Kirsten noted that the conference seemed transformed from past years, with tech companies like Meta and Salesforce taking over the main promenade, while important topics like climate change failed to draw crowds. And Sean said that even if AI execs weren’t quite “panhandling for usage and more customers,” it could sometimes feel that way.

    Read a preview of our full conversation, edited for length and clarity, below.

    Kirsten: Some of the discussions around, let’s say, climate change or poverty and big global problems, [are] not really attracting the crowds. Meanwhile, on the main promenade in Davos, Switzerland, some of the biggest storefronts have been converted and taken over by companies like Meta and Salesforce, Tata, also a lot of Middle East countries. And I think the largest was the USA House, which was sponsored by McKinsey and Microsoft. It really felt visually different.

    And then Elon Musk being there — Sean, you and I both listened to it. There wasn’t a lot of there there, but I will say that it was interesting that he showed up, because in the past he has avoided Davos.


    Anthony: We were trying to pull out the tech content of Davos, [and] there are absolutely things worth highlighting here, but it’s also striking how, especially as AI has become such a big business story, it’s hard to fully separate that from all the other threads going on in terms of bigger questions about international trade, about world politics.

    One of the big headlines coming out of [Davos], for us at least, was the remarks by the CEO of Anthropic, where he basically attacked this Trump administration decision to allow Nvidia to send chips to China. It’s a story that is a tech story, but it’s also a trade story, it’s a politics story.

    I think in terms of the substance of what he said, it felt consistent to me in the sense that he’s generally comfortable shooting his mouth off, and also that it’s this interesting line [in AI discourse] where there’s an element of criticism, but it also ties into this really intense AI hype. One of the phrases he used was that an AI data center is like a country full of geniuses. I have questions about that — but he’s like, “How could we possibly send all these chips to China if we’re worried about China? Because essentially we’re sending a country full of geniuses over to China and letting them control it.”

    Sean: You could probably fill a notebook with all the different weird phrases that these CEOs use this week. The other one that has been stuck in my mind is that Satya Nadella kept calling the data centers token factories, which is a wonderful abstraction of what he thinks they’re there for.

    You know, there were two things that really stuck out to me about all the different things that were said by these CEOs in different parts of the week. One is that they are definitely all sort of sniping at each other — not just Anthropic with Nvidia, which is interesting in its own right, because Anthropic is a huge Nvidia customer and uses Nvidia GPUs, and there’s an interesting tension there. But also just seeing them sitting them next to each other and really kind of pulling, know, putting the knives out a little bit more than we’re used to seeing.

    We know that they’re all jockeying to be the lead and that they’re also trying to hold on to talent without overspending themselves to death. And this was one of the first times where it really felt like that tension was palpable and that they were present for it. Those two things are not often true at the same time.

    The other thing, to your point about a lot of the geopolitics of it and the business of it — this was the most blatant that I feel like we’ve gotten these CEOs on record as far as what they think they need to continue succeeding.

    Satya Nadella — I think you could maybe unfavorably read it this way, but I don’t think it’s that unfavorable — more or less was like, “More people need to be using this or else it’s going to be a bubble and a popped bubble.” He took a much different position in some ways from Dario Amadei of Anthropic, because Nadella’s focus is really about trying to broadly scoop up as much usage as possible [and] how do we make sure that AI is equitable across all these different communities and throughout the globe, versus concentrated in one place, like only the wealthy places, which I thought was an interesting tension. But there is an element of him giving away the game of not really panhandling for usage and more customers … but kind of.

    And to that point, Jensen Huang of Nvidia did something similar, where he was more or less saying, “We’re not investing enough in this and we need more investment to be able to make this work.”

    Kirsten: Jensen’s comments were interesting because he really talked about it in terms of job creation, and one could give the counterpoint of, there will be a moment where the build out slows, but no one’s really talking about that right now.

    The other thing, I think, was a good point that you made, which is we’ve never really seen them all sort of together in a room sniping at each other. Oftentimes you’ll have like Sam Altman at a conference or Satya [Nadella], but here they are all together. So you’re hearing it in real time.

    [ad_2]

    Anthony Ha

    Source link

  • Claude Code gives Anthropic its viral moment | Fortune

    [ad_1]

    It’s been a good few weeks for Anthropic. The lab is reportedly planning a $10 billion fundraising that would value the company at $350 billion, its CEO caused headlines in Davos by criticizing the White House, and it’s also having a viral product launch that most AI labs can only dream of.

    Claude Code, the company’s surprisingly popular hit, is a coding tool that has captured the attention of users far beyond the software engineers it was built for. First released in February 2024 as a developer assistant, the coding tool has become increasingly sophisticated and sparked a level of excitement rarely seen since ChatGPT’s debut. Jensen Huang called it “incredible” and urged companies to adopt it for coding. A senior Google engineer said it recreated a year’s worth of work in an hour. And users without any programming background have deployed it to book theater tickets, file taxes, and even monitor tomato plants.

    Even at Microsoft, which sells GitHub Copilot, Claude Code has been widely adopted internally across its major engineering teams, with even non-developers reportedly being encouraged to use it.

    Anthropic’s products have long been popular with software developers, but after users pointed out that Claude Code was more of a general-purpose AI agent, Anthropic created a version of the product for non-coders. Last week, the company launched Cowork, a file management agent that is essentially a user-friendly version of the coding product. Boris Cherny, head of Claude Code at Anthropic, said his team built Cowork in approximately a week and a half, largely using Claude Code itself to do the legwork.

    “It was just kind of obvious that Cowork is the next step,” Cherny told Fortune. “We just want to make it much easier for non-programmers.”

    What separates Cowork from earlier general use AI tools from Anthropic is its ability to take autonomous action rather than simply provide advice. The products can access files, control browsers through the “Claude in Chrome” extension, and manipulate applications—executing tasks rather than just suggesting how to do them. For some general users, it’s the first taste of what the promise of agentic AI really is.

    Many of the uses aren’t especially sexy, but they do save users hours. Cherny says he uses Cowork for project management, automatically messaging team members on Slack when they haven’t updated shared spreadsheets, and had heard of use cases including one researcher deploying it to comb through museum archives for basketry collections.

    “Engineers just feel unshackled, that they don’t have to work on all the tedious stuff anymore,” Cherny told Fortune. “We’re starting to hear this for Cowork also, where people are saying all this tedious stuff—shuffling data between spreadsheets, integrating Slack and Salesforce, organizing your emails—it just does it so you can focus on the work you actually want to do.”

    Enterprise first, consumer second

    Despite the consumer buzz, Anthropic is positioning both products squarely in the enterprise market, where the company reportedly already leads OpenAI in adoption.

    “For Anthropic, we’re an enterprise AI company,” Cherny said. “We build consumer products, but for us, really, the focus is enterprise.”

    Cherny said this strategy is also guided by Anthropic’s founding mission around AI safety, which resonates with corporate customers concerned about security and compliance. In this case, the company’s roadmap with general-use products was to first develop strong coding capabilities to enable sophisticated tool use and ‘test’ products with technical customers. By providing capabilities to technical users through Claude Code before extending them to broader audiences, Cherny said the company builds on a tested foundation rather than starting from scratch with consumer tools.

    Claude Code is now used by Uber, Netflix, Spotify, Salesforce, Accenture, and Snowflake, among others, according to Cherny. The product has found “a very intense product market fit across the different enterprise spaces,” he told Fortune.

    Anthropic has also seen a traffic uplift as a result of Claude Code’s viral moment. Claude’s total web audience has more than doubled since December 2024, and its daily unique visitors on desktop are up 12% globally year-to-date, according to data from Similarweb and Sensor Tower published by The Wall Street Journal.

    At the same time, the company is facing the challenges that come with AI agents capable of autonomous action. Both products are vulnerable to “prompt injection” attacks, in which attackers hide malicious instructions in web content to manipulate the AI’s behavior.

    To tackle this, Anthropic has implemented multiple security layers, including running Cowork in a virtual machine and recently adding deletion protection after a user accidentally removed files, a feature Cherny called “quite innovative.”

    But the company does acknowledge the limitations of its approach. “Agent safety—that is, the task of securing Claude’s real-world actions—is still an active area of development in the industry,” Anthropic warned in its announcement.

    The future of software engineering

    With the rise of increasingly sophisticated autonomous coding tools, some are concerned that software engineering roles, especially entry-level ones, could dry up. Even within Anthropic, some engineers have stopped writing code altogether, according to CEO Dario Amodei.

    “I have engineers within Anthropic who say ‘I don’t write any code anymore. I just let the model write the code, I edit it,’” Amodei said at the World Economic Forum in Davos. “We might be six to 12 months away from when the model is doing most, maybe all of what software engineers do end-to-end.”

    Tech companies argue that these tools will democratize coding, allowing those with little to no technical skill to build products by prompting AI systems in natural language. But while it’s not definitive that the two are causally linked, and other factors are weighing on the job market, open roles for entry-level software engineers have declined as the amount of code written by generative AI has ramped up.

    Time will tell whether this heralds a democratization of software development or the slow erosion of a once-stable profession, but by bringing autonomous AI agents out of the lab and into everyday work, Claude Code may speed up how quickly we find out.

    This story was originally featured on Fortune.com


    Beatrice Nolan


  • A timeline of the US semiconductor market in 2025 | TechCrunch


    Last year was a tumultuous one for the U.S. semiconductor industry.  

    From leadership changes at legacy companies to continuously shifting dialogue around AI chip export controls, a lot has happened. If the first few weeks of 2026, which saw new chip tariffs and international semiconductor deals, are any indicator, this year will be as unexpected as the last.

    But before we get too deep into 2026, here is a final look at everything that happened in the U.S. semiconductor industry in 2025:  

    December

    Nvidia finds gold with Groq 

    December 24: Nvidia announced that it struck a non-exclusive licensing deal with chip maker Groq. While this wasn’t an acquisition, Nvidia hired Groq’s founder and president, in addition to other employees. The company also bought $20 billion worth of Groq’s assets.  

    Chips to China 

    December 8: The U.S. Department of Commerce decided that Nvidia and AMD can send AI chips to China after all, a stark reversal of past messaging. The U.S. government specifically said Nvidia could sell its H200 chips, which are much more advanced than its H20 chips, to approved customers.

    November 

    Nvidia keeps climbing 

    November 19: Nvidia reported record results in its third-quarter earnings report. The company racked up $57 billion in revenue in Q3, a 66% increase over the same quarter in 2024. A large portion of that revenue came from Nvidia’s data center business.  

    October

    Intel makes processor progress 

    October 9: Intel announced a new processor, dubbed Panther Lake, as part of the company’s Intel Core Ultra processor family. It will be the first chip built on the company’s 18A semiconductor process and will be made exclusively at Intel’s Arizona fab.


    September

    A taste of tariffs 

    September 26: We got the first inkling of what the Trump administration’s semiconductor tariffs could look like at the end of September. Rumors started swirling that the administration would require semiconductor companies to produce the same volume of chips domestically as they do internationally, or they would otherwise be subject to tariffs.  

    China shuts out Nvidia 

    September 17: China’s campaign against Nvidia continued when the country told its domestic companies not to buy Nvidia’s chips. The Cyberspace Administration of China banned local companies from buying Nvidia’s chips in an effort to boost domestic chip sales.  

    China calls out Nvidia

    September 15: Despite being given a loose green light to start selling chips again in China, the process was not going to be smooth sailing for Nvidia. China’s State Administration for Market Regulation ruled that Nvidia violated the country’s antitrust regulations regarding the company’s 2020 acquisition of Mellanox Technologies.  

    A leadership shakeup

    September 9: Just a few short weeks after the U.S. government took an equity stake in Intel, the company made some notable leadership changes. Michelle Johnston Holthaus, the chief executive officer of Intel products, departed after three decades. The company also created a central engineering group.  

    August

    Nvidia reports record quarter

    August 27: The turmoil in the semiconductor market over the year had clearly not hurt Nvidia. On August 27, the company reported record sales in the second quarter. The highlight was the growth of its data center business, whose revenue grew 56% year over year.

    U.S. Government takes equity stake in Intel

    August 22: The U.S. government announced it was converting existing government grants into a 10% stake in Intel. The deal was structured to penalize Intel if the company’s ownership in its foundry program dropped below 50%.

    SoftBank takes a stake in Intel

    August 18: Japanese conglomerate SoftBank announced it was taking a $2 billion stake in Intel. SoftBank CEO Masayoshi Son called the deal “strategic.” The transaction was announced as rumors were swirling that the U.S. was going to take a stake in the company.

    Chip companies strike a deal to sell in China

    August 12: Nvidia and AMD announced that they struck a deal with the U.S. government to gain the necessary license to sell their AI chips in China. Both companies agreed to pay the U.S. government 15% of revenue from their chip sales in China.

    Trump and Lip-Bu Tan meet

    August 11: Intel CEO Lip-Bu Tan went to the White House to meet with President Trump. The pair talked about Tan’s past and how Intel can help the U.S. with its goal of bringing semiconductor manufacturing back to the U.S. Both called the conversation productive.

    Trump comes for Lip-Bu Tan

    August 7: President Donald Trump demanded that Intel CEO Lip-Bu Tan “resign immediately” due to “conflicts of interest” in a Truth Social post. While Trump didn’t clarify what the conflicts of interest were, this came the day after Republican Senator Tom Cotton sent a letter to Intel’s board of directors inquiring about Tan’s ties to China.

    Trump says tariffs are coming for the industry

    August 5: President Donald Trump told CNBC’s Squawk Box that he was planning to announce tariffs on the semiconductor industry as soon as the following week. At the time, he didn’t mention specifics on what these tariffs could look like. As of September 5, no tariffs have been announced for this industry.

    July

    Intel spins out business unit

    July 25: Just one day after its second-quarter earnings call, Intel confirmed that it was spinning out its Network and Edge group, which is responsible for making chips for the telecom industry. The business unit produced $5.8 billion in revenue for the semiconductor company in 2024.

    Intel continues to look for efficiency

    July 24: Intel announced that it was pulling back on some of its manufacturing operations. The company said it will no longer pursue its previously announced projects in Germany and Poland and that it was consolidating its test operations. Intel also announced it plans to end this year with around 75,000 employees.

    Trump’s AI Action Plan

    July 23: The Trump administration unveiled its much-anticipated AI Action Plan alongside multiple related executive orders. While the plan included a lot regarding the need for U.S. chip export controls and for the U.S. to coordinate with its allies on this effort, it didn’t provide concrete information on what those restrictions would look like.

    Groundbreaking UAE AI deal reportedly on hold

    July 17: The Trump administration helped foster a groundbreaking deal in May that resulted in a commitment from the United Arab Emirates to buy billions of dollars’ worth of AI chips from Nvidia. But now that deal was reportedly on hold as the U.S. worked through national security concerns and fears that those chips could be smuggled from the Middle East to China.

    Nvidia is a bargaining chip

    July 16: A day after semiconductor firms like Nvidia and AMD got the green light to resume selling certain AI chips to China, we found out why. U.S. Commerce Secretary Howard Lutnick said the plans to allow U.S. companies to start selling AI chips in China were tied to ongoing trade discussions between the U.S. and China regarding rare earth elements.

    U.S. chips head back to China

    July 14: Nvidia said it was filing an application to restart sales of H20 AI chips in China, confirming rumors from a few weeks prior. The company also announced that it would be selling a new chip, the RTX Pro, which was designed specifically for the Chinese market.

    Malaysia fights chip smuggling

    July 14: Malaysia announced that it was launching trade permits for U.S.-made AI chips. Under this new restriction, any individual or business would need to give the Malaysian government 30 days’ notice before exporting any U.S. AI chips.

    June

    Intel appoints new leadership

    June 18: Intel announced four new leadership appointments that it said would help it move toward its goal of becoming an engineering-first company again. Intel named a new chief revenue officer in addition to multiple high-profile engineering hires.

    Intel began layoffs

    June 17: Intel began laying off a significant chunk of its Intel Foundry staff in July, according to various media reports. The company later confirmed it was restructuring. Reports said it planned to eliminate 15% to 20% of workers in that business unit. These layoffs weren’t a shock: Layoffs were rumored back in April, and Intel’s CEO Lip-Bu Tan had said he wanted to flatten the organization.

    Nvidia won’t report on China

    June 13: Nvidia wasn’t counting on the U.S. backing off from its AI chip export restrictions. After the company took a financial hit from the newly imposed licensing requirements on its H20 AI chips, Nvidia CEO Jensen Huang said the company will no longer include the Chinese market in future revenue and profit forecasts.

    AMD acquired the team behind Untether AI

    June 6: AMD made another acquisition — this time focused on talent. The company acqui-hired the team behind Untether AI, which develops AI inference chips, as the semiconductor giant continues to round out its AI offerings.

    AMD is coming for Nvidia’s AI hardware dominance

    June 4: AMD continued its shopping spree. The company acquired AI software optimization startup Brium, which helps companies retrofit AI software to work with different AI hardware. With a lot of AI software being designed with Nvidia hardware in mind, this acquisition isn’t surprising.

    May

    Nvidia laid out the impact of chip export restrictions

    May 28: Nvidia reported that U.S. licensing requirements on its H20 AI chips cost the company $4.5 billion in charges during Q1. The company expected these requirements to result in an $8 billion hit to Nvidia’s revenue in Q2.

    AMD acquired Enosemi

    May 28: AMD kicked off its acquisition spree. The semiconductor company announced that it acquired Enosemi, a silicon photonics startup. Enosemi’s tech, which uses photons of light to transmit data, is an area of increasing interest for semiconductor companies.

    Tensions started to flare between China and the U.S.

    May 21: China’s Commerce Secretary didn’t like the U.S. guidance, issued on May 13, that warned U.S. companies that using Huawei’s AI chips “anywhere in the world” was a U.S. chip export violation. The commerce secretary issued a statement that threatened legal action against anyone caught enforcing that export restriction.

    Intel began the process to offload units

    May 20: Intel CEO Lip-Bu Tan seemingly got right to work on his plan to spin out Intel’s non-core business units. Back in May, the semiconductor giant was reportedly looking to offload its Networking and Edge unit, which makes chips for telecom equipment and was responsible for $5.4 billion of the company’s 2024 revenue.

    The Biden administration’s AI Diffusion rule was officially dead

    May 13: Just days before the Biden administration’s Artificial Intelligence Diffusion Rule was set to go into place, the U.S. Department of Commerce formally rescinded it. The DOC said that it plans to issue new guidance in the future, and in the meantime, companies should remember that using Huawei’s Ascend AI chips anywhere in the world is a violation of U.S. export rules.

    A last-minute reversal

    May 7: Just a week before the “Framework for Artificial Intelligence Diffusion” was set to go into place, the Trump administration planned on taking a different path. According to multiple media outlets, including Axios and Bloomberg, the administration wouldn’t enforce the restrictions when they were supposed to start on May 15 and was instead working on its own framework.

    April

    Anthropic doubles down on its support of chip export restrictions

    April 30: Anthropic doubled down on its support for restricting U.S.-made chip exports, including some tweaks to the Framework for Artificial Intelligence Diffusion, like imposing further restrictions on Tier 2 countries and dedicating resources to enforcement. An Nvidia spokesperson shot back, saying, “American firms should focus on innovation and rise to the challenge, rather than tell tall tales that large, heavy, and sensitive electronics are somehow smuggled in ‘baby bumps’ or ‘alongside live lobsters.’” 

    Planned layoffs at Intel

    April 22: Ahead of its Q1 earnings call, Intel said it was planning to lay off more than 21,000 employees. The layoffs were meant to streamline management, something CEO Lip-Bu Tan has long said Intel needed to do, and help rebuild the company’s engineering focus. 

    The Trump administration further restricts chip exports

    April 15: Nvidia’s H20 AI chip got hit with an export licensing requirement, the company disclosed in an SEC filing. The company added that it expected $5.5 billion in charges related to the new requirement in the first quarter of its 2026 fiscal year. The H20 was the most advanced AI chip Nvidia could still export to China in some fashion. TSMC and Intel reported similar expenses the same week.

    Nvidia appears to talk its way out of further chip exports

    April 9: Nvidia’s CEO Jensen Huang was spotted attending dinner at Donald Trump’s Mar-a-Lago resort, according to reports. At the time, NPR reported Huang may have been able to spare Nvidia’s H20 AI chips from export restrictions upon agreeing to invest in AI data centers in the U.S. 

    An alleged agreement between Intel and TSMC

    April 3: Intel and TSMC allegedly reached a tentative agreement to launch a joint chipmaking venture. The joint venture would operate Intel’s chipmaking facilities, with TSMC taking a 20% stake in the new entity. Both companies declined to comment or confirm. Even if the deal doesn’t come to fruition, it’s likely a decent preview of potential deals to come in the industry.

    Intel warned it will spin off non-core assets

    April 1: CEO Lip-Bu Tan got to work right away. Just weeks after he joined Intel, the company announced that it would spin off non-core assets so it could focus on its core business. He also said the company would launch new products, including custom semiconductors for customers.

    March

    Intel names a new CEO 

    March 12:  Intel announced that industry veteran and former board member Lip-Bu Tan would return to the company as CEO on March 18. At the time of his appointment, Tan said Intel would be an “engineering-focused company” under his leadership. 

    February

    Intel’s Ohio chip plant gets delayed again

    February 28: Intel was supposed to start operating its first chip fabrication plant in Ohio this year. Instead, the company slowed down construction on the plant for the second time in February. Now the $28 billion semiconductor project won’t wrap up construction until 2030 and may not even open until 2031.

    Senators call for more chip export restrictions

    February 3: U.S. senators, including Elizabeth Warren (D-Mass.) and Josh Hawley (R-Mo.), wrote a letter to commerce secretary nominee Howard Lutnick, urging the Trump administration to further restrict AI chip exports. The letter specifically referred to Nvidia’s H20 AI chips, which were used in the training of DeepSeek’s R1 “reasoning” model.

    January 

    DeepSeek releases its open “reasoning” model

    January 27: Chinese AI startup DeepSeek caused quite the stir in Silicon Valley when it released the open version of its R1 “reasoning” model. While this isn’t semiconductor news specifically, the sheer alarm DeepSeek caused in the AI and semiconductor industries continues to have ripple effects on the chip industry.

    Joe Biden’s executive order on chip exports

    January 13: With just a week left in office, former president Joe Biden proposed sweeping new export restrictions on U.S.-made AI chips. This order created a three-tier structure that determined how many U.S. chips can be exported to each country. Under this proposal, Tier 1 countries faced no restrictions; Tier 2 countries had a chip purchase limit for the first time; and Tier 3 countries got additional restrictions. 

    Anthropic’s Dario Amodei weighs in on chip export restrictions

    January 6: Anthropic co-founder and CEO Dario Amodei co-wrote an op-ed in The Wall Street Journal endorsing existing AI chip export controls and pointing to them as a reason why China’s AI market was behind the U.S. He also called on incoming president Donald Trump to impose further restrictions and to close loopholes that have allowed AI companies in China to still get their hands on these chips.

    This story was originally published on May 9, 2025, and is regularly updated with new information.


    Rebecca Szkutak


  • Sequoia to invest in Anthropic, breaking VC taboo on backing rivals: FT | TechCrunch


    Sequoia Capital is reportedly joining a blockbuster funding round for Anthropic, the AI startup behind Claude, according to the Financial Times. It’s a move sure to turn heads in Silicon Valley.

    Why? Because venture capital firms have historically avoided backing competing companies in the same sector, preferring to place their bets on a single winner. Yet here’s Sequoia, already invested in both OpenAI and Elon Musk’s xAI, now throwing its weight behind Anthropic, too.

    The timing is particularly surprising given what OpenAI CEO Sam Altman said under oath last year. As part of OpenAI’s defense against Musk’s lawsuit, Altman addressed rumors about restrictions in OpenAI’s 2024 funding round. While he denied that OpenAI investors were broadly prohibited from backing rivals, he did acknowledge that investors with ongoing access to OpenAI’s confidential information were told that access would be terminated “if they made non-passive investments in OpenAI’s competitors.” Altman called this “industry standard” protection (which it is) against misuse of competitively sensitive information.

    According to the FT, Sequoia is joining a funding round led by Singapore’s GIC and U.S. investor Coatue, which are each contributing $1.5 billion. Anthropic is aiming to raise $25 billion or more at a $350 billion valuation — more than double its $170 billion valuation from just four months ago. The WSJ and Bloomberg had earlier reported the round at $10 billion. Microsoft and Nvidia have committed up to $15 billion combined, with VCs and other investors said to be contributing another $10 billion or more.

    The Sequoia connection with Altman runs deep. When Altman dropped out of Stanford to start Loopt, Sequoia backed him. He later became a “scout” for Sequoia, introducing the firm to Stripe, which became one of the firm’s most valuable portfolio companies. Sequoia’s new co-leader Alfred Lin and Altman also appear comparatively close. Lin has interviewed Altman numerous times at Sequoia events, and when Altman was briefly ousted from OpenAI in November 2023, Lin publicly said he’d eagerly back Altman’s “next world-changing company.”

    While Sequoia’s investment in xAI might seem to have already contradicted the traditional VC approach of picking winners, that bet is widely viewed as less about backing an OpenAI competitor and more about deepening the firm’s extensive ties to Elon Musk. Sequoia invested in X when Musk bought Twitter and rebranded it, is an investor in SpaceX and The Boring Company, and is a major backer of Neuralink, Musk’s brain-computer interface company. Former longtime Sequoia leader Michael Moritz was even an early investor in Musk’s X.com, which became part of PayPal.

    Sequoia’s apparent reversal on portfolio conflicts is especially glaring given its historical stance. As we reported in 2020, the firm took the extraordinary step of walking away from its investment in payments company Finix after determining the startup competed with Stripe. Sequoia forfeited its $21 million investment, letting Finix keep the money while giving up its board seat, information rights, and shares, marking the first time in the firm’s history it had severed ties with a newly funded company over a conflict of interest. (Sequoia had led Finix’s $35 million Series B round just months earlier.)


    The reported Anthropic investment comes after dramatic leadership changes at Sequoia. The firm’s global steward, Roelof Botha, was pushed out in a surprise vote this fall, just days after sitting down with this editor at TechCrunch Disrupt, with Lin and Pat Grady — who’d led that Finix deal — taking over.

    Anthropic is reportedly preparing for an IPO that could come as soon as this year. We’ve reached out to Sequoia Capital for comment.


    Connie Loizos


  • What Doctors Really Think of ChatGPT Health and A.I. Medical Advice


    The rush to deploy A.I. in health care raises hard questions about accuracy and trust. Unsplash

    Each week, more than 230 million people globally ask ChatGPT questions about health and wellness, according to OpenAI. Seeing a vast, untapped demand, OpenAI earlier this month launched ChatGPT Health and made a swift $60 million acquisition of the health care tech startup Torch to turbocharge the effort. Anthropic soon followed suit, announcing Claude for Healthcare last week. The move from general-purpose chatbot to health care advisor is well underway.

    For a world rife with health care inequities—whether skyrocketing insurance costs in the U.S. or care deserts in remote regions around the globe—democratized information and advice about one’s health is, at least in theory, a positive development. But the intricacies of how large A.I. companies operate raise questions that health tech experts are eager to interrogate.

    “What I am worried about as a clinician is that there is still a high level of hallucinations and erroneous information that sometimes makes it out of these general-purpose LLMs to the end user,” said Saurabh Gombar, a clinical instructor at Stanford Health Care and the chief medical officer and co-founder of Atropos Health, an A.I. clinical decision support platform.

    “It’s one thing if you’re asking for a spaghetti recipe and it’s telling you to add 10 times the amount [of an ingredient] that you should. But it’s a totally different thing if it’s fundamentally missing something about the health care of the individual,” he told Observer.

    For example, a doctor might see left shoulder pain as a non-traditional sign of a heart attack in certain patients, whereas a chatbot might only suggest taking an over-the-counter pain medication. The reverse can also happen. If a patient comes to a provider convinced they have a rare disorder based on a simple symptom after chatting with A.I., it can erode trust when a human doctor seeks to rule out more common explanations first.

    Google is already under fire for its AI Overviews providing inaccurate and false health information. ChatGPT, Claude and other chatbots have faced similar criticism for hallucinations and misinformation, even as they attempt to limit liability in health-related conversations by noting that they are “not intended for diagnosis or treatment.”

    Gombar argues that A.I. companies must do more to publicly emphasize how often an answer may be hallucinated and clearly flag when information is poorly grounded in evidence or entirely fabricated. This is particularly important given that extensive chatbot disclaimers serve to prevent legal recourse, whereas human health care models allow individuals to sue for malpractice.

    The primary care provider workforce in the U.S. has shrunk by 11 percent annually over the past seven years, especially in rural areas. Gombar suggests that physicians may no longer control how they fit into the global health care landscape. “If the whole world is moving away from going to physicians first, then physicians are going to be utilized more as an expert second opinion, as opposed to the primary opinion,” he said.

    The inevitable question of data privacy

    OpenAI and Anthropic have been explicit that their health tools are secure and compliant, including with the Health Insurance Portability and Accountability Act (HIPAA) in the U.S., which protects sensitive patient health information from unauthorized use and disclosure. But for Alexander Tsiaras, founder and CEO of the A.I.-driven medical record platform StoryMD, there is more to consider.

    “It’s not the protection from being hacked. It’s the protection of what they will do with [the data] after,” Tsiaras told Observer. “In the back end, their encryption algorithms are as good as anyone in HIPAA. But once you have the data, can you trust them? And that’s where I think it’s going to be a real problem, because I certainly would not trust them.”

    Tsiaras points to the persistent techno-optimism of Silicon Valley elites like OpenAI CEO Sam Altman, arguing that they live in a bubble and have “proven themselves to not care.”

    On a more tangible level, chatbots tend to be overly agreeable. xAI’s Grok recently drew criticism for agreeing to generate nearly nude photos of real women and children, though the company blocked this capability this week following public outcry. Chatbots can also reinforce delusions and harmful thought patterns in people with mental illness, triggering crises such as psychosis or even suicide.

    Andrew Crawford, senior counsel for privacy and data at the nonpartisan think tank Center for Democracy and Technology, said an A.I. company prioritizing profit through personalization over data protection can put sensitive health information at serious risk.

    “Especially as OpenAI moves to explore advertising as a business model, it’s crucial that the separation between this sort of health data and memories that ChatGPT captures from other conversations is airtight,” Crawford said in a statement to Observer.

    Then there is the question of non-protected health data that users voluntarily input. Personal wellness companies such as MyFitnessPal and Oura already pose data privacy risks. “It’s amplifying the inherent risk by making that data more available and accessible,” Gombar said.

    For people like Tsiaras, profit-driven A.I. giants have tainted the health tech space. “The trust is eroded so significantly that anyone [else] who builds a system has to go in the opposite direction of spending a lot of time proving that we’re there for you and not about abusing what we can get from you,” he said.

    Nasim Afsar, a physician, former chief health officer at Oracle and advisor to the White House and global health agencies, views ChatGPT Health as an early step toward what she calls intelligent health, but far from a complete solution.

    “A.I. can now explain data and prepare patients for visits,” Afsar said in a statement to Observer. “That’s meaningful progress. But transformation happens when intelligence drives prevention, coordinated action and measurable health outcomes, not just better answers inside a broken system.”



    Rachel Curry


  • Anthropic taps former Microsoft India MD to lead Bengaluru expansion | TechCrunch


    Anthropic has appointed Irina Ghose, a former Microsoft India managing director, to lead its India business as the U.S. AI startup prepares to open an office in Bengaluru. The move underscores how India is becoming a key battleground for AI companies looking beyond the U.S. for major growth markets.

    Ghose brings deep big-tech operating experience to the role. She spent 24 years at Microsoft before stepping down in December 2025. Her appointment gives Anthropic a seasoned executive with local enterprise and government relationships as it gears up to establish an on-the-ground presence in one of the world’s fastest-growing AI markets.

    India has become one of Anthropic’s most strategically important markets, with the country already ranking as the second-largest user base for Claude and usage heavily skewing toward technical and work-related tasks, including software development. Arch-rival OpenAI is also sharpening its focus on the market with plans to open an office in New Delhi — a sign India is fast becoming one of the most contested arenas in the global race to commercialize generative AI.

    While India offers enormous scale — with more than a billion internet subscribers and over 700 million smartphone users — converting that reach into meaningful revenue has proven difficult, pushing AI companies to experiment with aggressive pricing and promotions. OpenAI last year introduced ChatGPT Go, its under-$5 plan aimed at attracting Indian users, and later made it available free for a year in the country.

    Similar dynamics are playing out for Anthropic: its Claude app recorded a 48% year-over-year increase in downloads in India in September, reaching about 767,000 installs, while consumer spending surged 572% to $195,000 for the month, per Appfigures — still modest compared with the U.S., where September spending hit $2.5 million.

    Anthropic has been stepping up its engagement in India at the highest levels. Chief executive Dario Amodei visited in October and met corporate executives and lawmakers, including Prime Minister Narendra Modi, to discuss the company’s expansion plans and growing adoption of its tools. Anthropic had also explored a potential partnership with billionaire Mukesh Ambani’s Reliance Industries to broaden access to Claude, as TechCrunch reported previously. Reliance, however, ultimately struck a deal with Google to offer its Gemini AI Pro plan free to Jio subscribers. That move came as rival Bharti Airtel partnered with Perplexity to bundle access to its premium subscription, underscoring how India’s telecom giants have become critical distribution gatekeepers in the race to scale consumer AI services.

    In a LinkedIn post announcing the move, Ghose said she would focus on working with Indian enterprises, developers and startups adopting Claude for “mission-critical” use cases, pointing to growing demand for what she described as “high-trust, enterprise-grade AI.” She added that AI tailored to local languages could be a “force multiplier” across sectors including education and healthcare — signaling Anthropic’s intent to deepen adoption beyond early tech users into larger institutions and the public sector.


    The push by Anthropic, OpenAI, and Perplexity comes as India’s homegrown GenAI ecosystem remains relatively early-stage. While the country has a deep pool of software talent and a fast-growing base of AI users, it has produced few startups building large foundation models, with investors instead largely backing application-layer companies rather than committing the scale of capital typically required to train frontier systems.

    The appointment also comes ahead of India’s AI Impact Summit 2026 in February, where the Indian government is expected to bring together AI startups, global CEOs, and industry experts to discuss the next phase of AI deployment in the country. The summit is part of New Delhi’s broader effort to signal support for domestic AI development and position India as a serious player in the global AI landscape, as competition intensifies across major markets.

    Anthropic is also building out its India team, with job listings for roles including startup and enterprise account executives as well as a partner sales manager, signaling a push to deepen its go-to-market efforts and tap Indian businesses and startups as customers as it expands its presence in the country.

    For Anthropic, the hire adds senior local leadership as it looks to turn India’s surging usage into a durable business, navigating a market where distribution partnerships, pricing pressure, and enterprise adoption will shape which AI players emerge as long-term winners.


    Jagmeet Singh

    Source link

  • Anthropic launches Claude Cowork, a version of its coding AI for regular people


    If you follow Anthropic, you’re probably familiar with Claude Code. Since the fall of 2024, the company has been training its AI models to use and navigate computers like a human would, and the coding agent has been the most practical expression of that work, giving developers a way to automate rote programming tasks. Starting today, Anthropic is giving regular people a way to take advantage of those capabilities, with the release of a new preview feature called Claude Cowork.

    The company is billing Cowork as “a simpler way for anyone — not just developers — to work with Claude.” After you give the system access to a folder on your computer, it can read, edit or create new files in that folder on your behalf.

    Anthropic gives a few different example use cases for Cowork. For instance, you could ask Claude to organize your downloads folder, telling it to rename the files contained within to something that’s easier to parse at a glance. Another example: you could use Claude to turn screenshots of receipts and invoices into a spreadsheet for tracking expenses. Cowork can also navigate websites — provided you install Claude’s Chrome plugin — and can use Anthropic’s Connectors framework to access third-party apps like Canva.

    “Cowork is designed to make using Claude for new work as simple as possible. You don’t need to keep manually providing context or converting Claude’s outputs into the right format,” the company said. “Nor do you have to wait for Claude to finish before offering further ideas or feedback: you can queue up tasks and let Claude work through them in parallel.”

    If the idea of granting Claude access to your computer sounds ill-advised, Anthropic says Claude “can’t read or edit anything you don’t give it explicit access to.” However, the company does note the system can “take potentially destructive actions,” such as deleting a file that is important to you or misinterpreting your instructions. For that reason, Anthropic suggests it’s best to give “very clear” guidance to Claude.

    Anthropic isn’t the first to offer a computer agent. Microsoft, for example, has been pushing Copilot hard for nearly three years, despite seemingly limited adoption. For Anthropic, the challenge will be convincing people these tools are useful where others have failed. The fact that Claude Code has been universally loved by programmers may make that task easier.

    For now, Anthropic is giving users of its pricey Claude Max subscription first access to the preview. If you want to try Cowork for yourself, you’ll also need a Mac with the Claude macOS app installed. For everyone else, you’ll need to join a wait list.  


    Igor Bonifacic

    Source link

  • Anthropic announces Claude for Healthcare following OpenAI’s ChatGPT Health reveal | TechCrunch


    On the heels of OpenAI’s ChatGPT Health reveal, Anthropic announced on Sunday that it’s introducing Claude for Healthcare, a set of tools for providers, payers, and patients.

    Like ChatGPT Health, Claude for Healthcare will allow users to sync health data from their phones, smartwatches, and other platforms (both OpenAI and Anthropic have said that their models won’t use this data for training). But Anthropic’s product promises more sophistication than ChatGPT Health, which seems as though it will be more focused on a patient-side chat experience as it rolls out gradually.

    Though some industry professionals are concerned about the role of hallucination-prone LLMs in offering clients medical advice, Anthropic’s “agent skills” seem promising.

    Anthropic has added what it calls “connectors” to give the AI access to platforms and databases that can speed up research processes and report generation for payers and providers, including the Centers for Medicare and Medicaid Services (CMS) Coverage Database; the International Classification of Diseases, 10th Revision (ICD-10); the National Provider Identifier Standard; and PubMed.

    Anthropic explained in a blog post that Claude for Healthcare could use its connectors to speed up prior authorization review, the process in which a doctor must submit additional information to an insurance provider to see if it will cover a medication or treatment.

    “Clinicians often report spending more time on documentation and paperwork than actually seeing patients,” Anthropic CPO Mike Krieger said in a presentation about the product.

    For doctors, submitting prior authorization documents is more of an administrative task than something that requires their specialized training and expertise. It’s something that makes more sense to automate than the actual process of administering medical advice … though Claude will do that as well.


    People are already relying on LLMs for medical advice. OpenAI said that 230 million people talk about their health with ChatGPT each week, and there’s no doubt that Anthropic is observing that use case as well.

    Of course, both Anthropic and OpenAI warn consumers that they should see healthcare professionals for more reliable, tailored guidance.


    Amanda Silberling

    Source link

  • Allianz, Anthropic partner to expand AI use in insurance


    Global financial services group Allianz SE has partnered with AI solutions provider Anthropic to accelerate responsible AI deployment, with a focus on strengthening insurance decision-making.

    The collaboration centers on three projects: deploying AI to support Allianz software developers, automating labor-intensive insurance workflows such as claims processing, and building AI systems designed to meet compliance requirements by fully documenting decisions and data sources, according to an Allianz release today.

    Anthropic’s Claude AI models will be integrated into Allianz’s internal AI platform for employees, including tools designed to assist thousands of developers globally. The companies are also developing AI agents to automate multi-step processes in areas such as motor and health insurance, while maintaining a “human-in-the-loop” approach for complex or sensitive claims.

    The partnership builds on Allianz’s existing use of AI to improve customer service, including multilingual voice assistants for roadside assistance and automated claims processing that significantly reduces turnaround times.


    FinAi News, AI-assisted

    Source link

  • The AI Models That Top Startups Are Suddenly Choosing Over ChatGPT


    Famed San Francisco-based startup accelerator and venture capital firm Y Combinator says that one AI model provider has overtaken OpenAI in popularity among the accelerator’s latest batch of startups, ending the ChatGPT maker’s previous market dominance.

    On a December 22 episode of official Y Combinator podcast Lightcone, YC general partner Diana Hu said that “shockingly,” Anthropic’s Claude AI models are the most popular among the accelerator’s new winter 2026 batch of startups, dethroning OpenAI. 

    “For the longest time, OpenAI was the clear winner,” said Hu, adding that when YC started the podcast in February 2024, OpenAI’s models were preferred by over 90 percent of that batch’s firms. Even in early 2025, added YC president and CEO Garry Tan, Anthropic’s models were only preferred by around one-fourth of new YC-backed startups. 

    But in the past three to six months, said Hu, usage of Anthropic’s Claude models among new YC firms skyrocketed to over 52 percent. Hu partially credited this fast takeoff to the rise of vibe-coding platforms, such as Replit and Lovable, which use Claude models to power software that enables people without coding experience to develop websites and applications through natural language. Several of the entrepreneurs in this latest YC batch are building their own code-generation companies to compete with the likes of Replit and Lovable. 

    Hu said that Anthropic has developed a reputation for providing top-line coding models, and as more startups enter the coding space, they are consistently relying on Claude. But while coding may bring entrepreneurs into the Anthropic ecosystem, that’s not how the “vast majority” of people are using Claude, YC managing partner Jared Friedman said. 

    “I wonder if there’s a bleed-through effect,” Friedman opined, “where people are using Claude for their personal coding and then as a result, they’re more likely to choose it for their application, even if their application is not doing coding at all.” 

    Tan postulated that once a user has spent enough time with a certain AI model, they become familiar and comfortable with that model’s “personality,” which makes it harder to switch. Hu agreed, and said that OpenAI’s models have “black cat energy,” while Anthropic’s have that of a “happy-go-lucky, very helpful golden retriever.”

    Personally, Tan said he is still using ChatGPT as his daily AI tool, mainly because of the platform’s ability to remember details about users across multiple conversations. “It knows me, it knows my personality, it knows the things I think about,” Tan said, adding that memory, and the consumer experiences enabled by it, is fast becoming a legitimate moat for OpenAI.


    Ben Sherry

    Source link

  • Adobe hit with proposed class-action, accused of misusing authors’ work in AI training | TechCrunch


    Like pretty much every other tech company in existence, Adobe has leaned heavily into AI over the past several years. The software firm has launched a number of different AI services since 2023, including Firefly — its AI-powered media-generation suite. Now, however, the company’s full-throated embrace of the technology may have led to trouble, as a new lawsuit claims it used pirated books to train one of its AI models.

    A proposed class-action lawsuit filed on behalf of Elizabeth Lyon, an author from Oregon, claims that Adobe used pirated versions of numerous books — including her own — to train the company’s SlimLM program.

    Adobe describes SlimLM as a small language model series that can be “optimized for document assistance tasks on mobile devices.” It states that SlimLM was pre-trained on SlimPajama-627B, a “deduplicated, multi-corpora, open-source dataset” released by Cerebras in June of 2023. Lyon, who has written a number of guidebooks for non-fiction writing, says that some of her works were included in a pretraining dataset that Adobe had used.

    Lyon’s lawsuit, which was originally reported on by Reuters, says that her writing was included in a processed subset of a manipulated dataset that was the basis of Adobe’s program: “The SlimPajama dataset was created by copying and manipulating the RedPajama dataset (including copying Books3),” the lawsuit says. “Thus, because it is a derivative copy of the RedPajama dataset, SlimPajama contains the Books3 dataset, including the copyrighted works of Plaintiff and the Class members.”

    “Books3” — a huge collection of 191,000 books that have been used to train GenAI systems — has been an ongoing source of legal trouble for the tech community. RedPajama has also been cited in a number of litigation cases. In September, a lawsuit against Apple claimed the company had used copyrighted material to train its Apple Intelligence model. The litigation mentioned the dataset and accused the tech company of copying protected works “without consent and without credit or compensation.” In October, a similar lawsuit against Salesforce also claimed the company had used RedPajama for training purposes. 

    Unfortunately for the tech industry, such lawsuits have, by now, become somewhat commonplace. AI algorithms are trained on massive datasets and, in some cases, those datasets have allegedly included pirated materials. In September, Anthropic agreed to pay $1.5 billion to a number of authors who had sued it and accused it of using pirated versions of their work to train its chatbot, Claude. The case was considered a potential turning point in the ongoing legal battles over copyrighted material in AI training data, of which there are many.


    Lucas Ropek

    Source link

  • Anthropic called to testify in House on China-backed cyberattack


    A US House committee is calling on Anthropic Chief Executive Officer Dario Amodei to testify about a Chinese cyber-espionage attack the company revealed earlier this month. Leaders of the House Homeland Security Committee asked Amodei to appear before the panel on Dec. 17 to discuss the rise of AI-orchestrated cyberattacks. The committee also requested testimony […]


    Bloomberg News

    Source link

  • Anthropic’s Opus 4.5 model is here to conquer Microsoft Excel


    Hot on the heels of Google’s Gemini 3 Pro release, Anthropic has announced an update for its flagship Opus model. Now at version 4.5, the new system offers state-of-the-art performance in coding, computer use and office tasks. No surprise there; those have been some of Claude’s greatest strengths for a while. The good news is Anthropic is rolling out a handful of existing tools more broadly alongside Opus 4.5. It’s also releasing one new feature.

    To start, the company’s Chrome extension, Claude for Chrome, is now available to all Max users. Anthropic is also introducing a feature called infinite chat. Provided you pay to use Claude, the chatbot won’t run into context-window errors, allowing it to maintain consistency across files and chats. According to Anthropic, infinite chat was one of the most requested features from its users. Then there’s Claude for Excel, which brings the chatbot to a sidebar inside of Microsoft’s app. The tool is now broadly available to all Max, Team and Enterprise users, with support for pivot tables, charts and file uploads built in.

    A table comparing Opus 4.5’s efforts in various benchmarks. (Anthropic)

    On the subject of Excel, Anthropic says early testers saw a 20 percent improvement in accuracy on their internal evaluations and a 15 percent improvement in efficiency. As a complete Excel noob, I’m excited for the company to trickle down that expertise to its more consumer-oriented models, Claude Sonnet and Haiku.

    Elsewhere, Opus 4.5 also delivers improvements in agentic workflows, with the new model excelling at refining its own processes. More importantly, Anthropic is calling Opus 4.5 its safest model yet. It’s better at rejecting prompt-injection-style attacks, outpacing even Gemini 3 Pro, according to Anthropic’s own evaluations.

    If you want to try Opus 4.5 for yourself, it’s available today through all of Anthropic’s apps and the company’s API. For developers, pricing for the new model starts at $5 per million tokens.


    Igor Bonifacic

    Source link

  • Cloudflare Resolves Global Outage That Disrupted ChatGPT, X


    A worldwide outage at the network of cybersecurity firm Cloudflare Inc. has been resolved after several hours of disruption on Tuesday. The outage had taken down websites for everything from the chief US energy regulator and ChatGPT to the New Jersey transit authority and the social-media platform X. ChatGPT and X were among the services that […]


    Bloomberg News

    Source link