ReportWire

Tag: AI company

  • Claude’s Chrome plugin is now available to all paid users

    Anthropic is finally letting more people use Claude in Google Chrome. The company’s AI browser plugin is expanding beyond $200-per-month Max subscribers and is now available to anyone who pays for a Claude subscription.

    The Claude Chrome plugin allows for easy access to Anthropic’s AI regardless of where you are on the web, but its real draw is how it lets Claude navigate and use websites on your behalf. Anthropic says that Claude can fill out forms, manage your calendar and email, and complete multi-step workflows based on a prompt. The latest version of the plugin also features integration with Claude Code, Anthropic’s AI coding tool, and allows users to record a workflow and “teach” Claude how to do what they want it to do.

    Before agents were the buzzword du jour, “computer use,” the ability for AI models to understand and interact with computer interfaces, was a major focus at Anthropic and other AI companies. Now computer use is just one tool in the larger tool bag for agents, but that understanding of what digital buttons to click and how to click them is what makes Claude’s Chrome plugin possible.

    OpenAI and Perplexity offer similar agentic capabilities in their respective ChatGPT Atlas and Comet browsers. At this point the only AI company not fully setting its AI models loose on a browser is Google. You can access Gemini in Google Chrome and ask questions about a webpage, but Google hasn’t yet let its AI model navigate or use the web on a user’s behalf. Those features, first demoed in Project Mariner, are presumably on the way.

    Ian Carlos Campbell

    Source link

  • Kindle’s in-book AI assistant can answer all your questions without spoilers

    If you’re several chapters into a novel and have forgotten who a character is, Amazon is hoping its new Kindle feature will jog your memory without making you put the e-reader down. This feature, called Ask this Book, was announced during Amazon’s hardware event in September, but is finally available for US users on the Kindle iOS app.

    According to Amazon, the feature can currently be found on thousands of best-selling English-language Kindle titles and “only reveals information up to your current reading position” for spoiler-free responses. To use it, you can highlight a passage in any book you’ve bought or borrowed and ask questions about plot, characters or other crucial details, and the AI assistant will offer “immediate, contextual, spoiler-free information.” You’ll even be able to ask follow-up questions for more detail.
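    Amazon hasn’t said how Ask this Book is implemented, but the spoiler-free guarantee amounts to a retrieval constraint: whatever context the assistant draws on must be cut off at the reader’s current position. Here is a minimal sketch of that idea in Python; all names and structures are hypothetical, not Amazon’s actual system:

```python
from dataclasses import dataclass

@dataclass
class Passage:
    position: int  # e.g., a Kindle location or character offset
    text: str

def spoiler_free_context(passages: list[Passage], current_position: int) -> str:
    """Assemble context from only the text the reader has already reached.

    A real system would presumably also rank the visible passages for
    relevance to the question before building the prompt.
    """
    visible = [p for p in passages if p.position <= current_position]
    return "\n".join(p.text for p in visible)

# A reader at location 1,200 asking about a character gets context that
# includes the first two passages but never the third.
book = [
    Passage(100, "Mira boards the night train."),
    Passage(900, "Mira reveals she is a courier."),
    Passage(3000, "Mira betrays the crew."),  # past the reader: excluded
]
context = spoiler_free_context(book, current_position=1200)
```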

    While Ask this Book may be helpful to some Kindle readers, the feature touches on a major point of contention with authors and publishers. In a statement to Publishers Lunch, a daily newsletter for the publishing industry, an Amazon spokesperson said, “To ensure a consistent reading experience, the feature is always on, and there is no option for authors or publishers to opt titles out.” Other AI companies are already facing lawsuits claiming copyright infringement. Most recently, the New York Times and Chicago Tribune sued Perplexity, accusing the AI company of using their copyrighted works to train its LLMs.

    As for the Ask this Book feature, Amazon is already planning to expand it beyond the iOS app and will introduce it to Kindle devices and the Android OS app next year. Beyond this new feature, Amazon also introduced Recaps to Kindle devices and the iOS app for books in a series, which acts much like a TV show’s “Previously on” roundup in between seasons. However, Amazon recently had to withdraw its AI-generated Video Recaps feature, so it might be worth double-checking the info you get from Recaps, too.

    Jackson Chen

    Source link

  • Commentary: California’s first partner pushes to regulate AI while Trump and tech bros thunder forward

    California First Partner Jennifer Siebel Newsom recently convened a meeting that might rank among the top sweat-inducing nightmare scenarios for Silicon Valley’s tech bros — a group of the Golden State’s smartest, most powerful women brainstorming ways to regulate artificial intelligence.

    Regulation is the last thing this particular California-dominated industry wants, and it’s spent a lot of cash at both the state and federal capitols to avoid it — including funding President Trump’s new ballroom. Regulation by a bunch of ladies, many of them mothers, for whom profit runs a distant second to our kids?

    I’ll let you figure out how popular that is likely to be with the Elon Musks, Peter Thiels and Mark Zuckerbergs of the world.

    But as Siebel Newsom said, “If a platform reaches a child, it carries a responsibility to protect that child. Period. Our children’s safety can never be second to the bottom line.”

    Agreed.

    Siebel Newsom’s push for California to do more to regulate AI comes at the same time that Trump is threatening to stop states from overseeing the technology — and is ramping up a national effort that will open America’s coffers to AI moguls for decades to come.

    Right now, the U.S. is facing its own nightmare scenario: the most powerful and world-changing technology we have seen in our lifetimes being developed and unleashed under almost no rules or restraints other than those chosen by the men who seek personal benefit from the outcome.

    To put it simply, the plan right now seems to be that these tech barons will change the world as they see fit to make money for themselves, and we as taxpayers will pay them to do it.

    “When decisions are mainly driven by power and profit instead of care and responsibility, we completely lose our way, and given the current alignment between tech titans and the federal administration, I believe we have lost our way,” Siebel Newsom said.

    To recap what the way has been so far, Trump recently tried to sneak a 10-year ban on the ability of states to oversee the industry into his ridiculously named “Big Beautiful Bill,” but it was pulled out by a bipartisan group in the Senate — an early indicator of how inflammatory this issue is.

    Faced with that unexpected blockade, Trump has threatened to sign a mysterious executive order crippling states’ ability to regulate AI and attempting to withhold funds from those that try.

    Simultaneously, the most craven and cowardly among Republican congresspeople have suggested adding a 10-year ban to the upcoming defense policy bill that will almost certainly pass. Of course, Congress has also declined to move forward on any meaningful federal regulations itself, while technology CEOs including Trump frenemy Musk, Apple’s Tim Cook, Meta’s Zuckerberg and many others chum it up at fancy events inside the White House.

    Which may be why this week, Trump announced the “Genesis Mission,” an executive order that seemingly will take the unimaginable vastness of government research efforts across disciplines and dump them into some kind of AI model that will “revolutionize the way scientific research is conducted.”

    While I am sure that nothing could possibly go wrong in that scenario, that’s not actually the part that is immediately alarming. This is: The project will be overseen by Trump science and technology policy advisor Michael Kratsios, who holds no science or engineering degrees but was formerly a top executive for Thiel and the head of an AI company that works on warfare-related projects with the Pentagon.

    Kratsios is considered one of the main reasons Trump has embraced the tech bros with such adoration in his second term. Genesis will almost certainly mean huge government contracts for these private-sector “partners,” fueling the AI boom (or bubble) with taxpayer dollars.

    Siebel Newsom’s message in the face of all this is that we are not helpless — and California, as the home of many of these companies and the world’s fourth-largest economy in its own right, should have a say in how this technology advances, and make sure it does so in a way that benefits and protects us all.

    “California is uniquely positioned to lead the effort in showing innovation and responsibility and how they can go hand in hand,” she said. “I’ve always believed that stronger guardrails are actually good for business over the long term. Safer tech means better outcomes for consumers and greater consumer trust and loyalty.”

    But the pressure to cave under the might of these companies is intense, as Siebel Newsom’s husband knows.

    Gov. Gavin Newsom has spent the last few years trying to thread the needle on state legislation that offers some sort of oversight while allowing for the innovation that rightly keeps California and the United States competitive on the global front. The tech industry has spent millions in lobbying, legal fights and pressure campaigns to water down even the most benign of efforts, even threatening to leave the state if rules are enacted.

    Last year, the industry unsuccessfully tried to stop Senate Bill 53, landmark legislation signed by Newsom. It’s a basic transparency measure on “frontier” AI models that requires companies to have safety and security protocols and report known “catastrophic” risks, such as when these models show tendencies toward behavior that could kill more than 50 people — which they have, believe it or not.

    But the industry was able to stop other efforts. Newsom vetoed both Senate Bill 7, which would have required employers to notify workers when using AI in hiring and promotions; and Assembly Bill 1064, which would have barred companion chatbot operators from making these AI systems available to minors if they couldn’t prove they wouldn’t do things like encourage kids to self-harm, which, again, these chatbots have done.

    Still, California (along with New York and a few other states) has pushed forward, and speaking at Siebel Newsom’s event, the governor said that last session, “we took a number of at-bats at this and we made tremendous progress.”

    He promised more.

    “We have agency. We can shape the future,” he said. “We have a unique responsibility as it relates to these tools of technology, because, well, this is the center of that universe.”

    If Newsom does keep pushing forward, it will be in no small part because of Siebel Newsom, and women like her, who keep the counter-pressure on.

    In fact, it was another powerful mom, First Lady Melania Trump, who forced the federal government into a tiny bit of action this year when she championed the “Take It Down Act,” which requires tech companies to quickly remove nonconsensual explicit images. I sincerely doubt her husband would have signed that particular bill without her urging.

    So, if we are lucky, the efforts of women like Siebel Newsom may turn out to be the bit of powerful sanity needed to put a check on the world-domination fantasies of the broligarchy.

    Because tech bros are not yet all-powerful, despite their best efforts, and certainly not yet immune to the power of moms.

    Anita Chabria

    Source link

  • Perplexity announces its own take on an AI shopping assistant

    Perplexity is rolling out a new shopping feature to make buying things through its AI assistant easier and more personalized. The company’s new feature is free for all Perplexity users in the US and builds on Perplexity’s existing relationship with payment provider PayPal.

    The new shopping experience lets Perplexity users conduct more personalized product searches, like asking “What’s the best winter jacket if I live in San Francisco and take a ferry to work?” Perplexity says its assistant can keep the context of your chat in mind as it searches for products, and incorporate details it’s learned about your life and preferences to tailor results. Once the assistant has found products it wants to show you, it can then present them in nicely formatted product cards, with pros and cons about each jacket, for example, and other relevant details pulled from reviews and guides.

    If one of the products Perplexity finds seems like the right fit, you can also purchase the product directly through the company’s assistant, and pay with payment details stored in a PayPal account. This “Instant Buy” experience provided by Perplexity and PayPal extends to all merchants who offer PayPal as a payment method. While that sounds like it could make a key element of the shopping experience obsolete for these online stores (you never actually visit their website), Perplexity claims merchants still own the most important parts. “They have full visibility into who their customer is, can process returns, build loyalty, and own the post-purchase relationship, just as they would on their own sites,” the AI company says.

    Perplexity’s push into online shopping is similar to the “shopping research” feature OpenAI recently added to ChatGPT, and new product recommendation features Google’s added to AI Mode in Google Search. While all these tools are pitched as a more personalized alternative to the shopping guides you’ll find on Engadget and other editorial sites, they often work under the same logic. By referring someone to a product, AI companies hope to receive a payment or a fee from the transaction if the person makes a purchase.

    Ultimately, Perplexity is equally interested in offering an end-to-end solution, where it finds and purchases products without a human needing to step in. The company received a cease-and-desist from Amazon at the beginning of November for letting the agent in its Comet browser complete Amazon purchases on users’ behalf.

    Ian Carlos Campbell

    Source link

  • Warner signs AI music licensing deal with Udio

    Warner Music Group (WMG) settled a lawsuit with an AI company in exchange for a piece of the action. The label announced on Wednesday that it had resolved a 2024 lawsuit against AI music creation platform Udio. As part of the deal, Udio gets to license Warner’s catalog for an upcoming music creation service. This follows a similar settlement between Universal Music Group and Udio, announced last month.

    Udio’s service will allow subscribers to create, listen to and discover AI-generated music trained on licensed work. You’ll be able to generate new songs, remixes and covers using favorite artists’ voices or compositions. The boundaries between human creation and an algorithm’s approximation of it are about to grow murkier: not necessarily in terms of artistic quality, but in terms of what proliferates online.

    WMG is framing the deal as a win for artists, who will — if they choose to opt in — gain a new revenue stream. Ahead of the service’s launch, Udio will roll out “expanded protections and other measures designed to safeguard the rights of artists and songwriters.”

    So, the settlement does at least appear to reassert some control over artists’ work. What the normalization of robot-made music will do for society’s collective tastes is another question.

    The settlement echoes a warning Spotify sounded to musicians and labels last month. “If the music industry doesn’t lead in this moment, AI-powered innovation will happen elsewhere, without rights, consent or compensation,” the company wrote. Spotify plans to launch “artist-first AI music products” in the future, a vague promise to be sure. However, given Udio’s plans, it wouldn’t be surprising to see the streaming service cooking up a similar licensed AI music-creation product.

    “We’re unwaveringly committed to the protection of the rights of our artists and songwriters, and Udio has taken meaningful steps to ensure that the music on its service will be authorized and licensed,” Warner Music CEO Robert Kyncl wrote in a press release. “This collaboration aligns with our broader efforts to responsibly unlock AI’s potential – fueling new creative and commercial possibilities while continuing to deliver innovative experiences for fans.”

    Source link

  • Anthropic’s AI was used by Chinese hackers to run a cyberattack

    A few months ago, Anthropic published a report detailing how its Claude AI model had been weaponized in a “vibe hacking” extortion scheme. The company has continued to monitor how the agentic AI is being used to coordinate cyberattacks, and now says that a state-backed group of hackers in China utilized Claude in an attempted infiltration of 30 corporate and political targets around the world, with some success.

    In what it labeled “the first documented case of a large-scale cyberattack executed without substantial human intervention,” Anthropic said that the hackers first chose their targets, which included unnamed tech companies, financial institutions and government agencies. They then used Claude Code to develop an automated attack framework, after successfully bypassing the model’s training to avoid harmful behavior. This was achieved by breaking the planned attack into smaller tasks that didn’t obviously reveal their wider malicious intent, and telling Claude that it was a cybersecurity firm using the AI for defensive training purposes.

    Anthropic said that, after writing its own exploit code, Claude was able to steal usernames and passwords that allowed it to extract “a large amount of private data” through backdoors it had created. The obedient AI reportedly even went to the trouble of documenting the attacks and storing the stolen data in separate files.

    The hackers used AI for 80 to 90 percent of their operation, only occasionally intervening, and Claude was able to orchestrate an attack in far less time than human hackers could have. It wasn’t flawless, with some of the information it obtained turning out to be publicly available, but Anthropic said that attacks like this will likely become more sophisticated and effective over time.

    You might be wondering why an AI company would want to publicize the dangerous potential of its own technology, but Anthropic says its investigation also acts as evidence of why the assistant is “crucial” for cyber defense. It said Claude was successfully used to analyze the threat level of the data it collected, and ultimately sees it as a tool that can assist cybersecurity professionals when future attacks happen.

    Claude is by no means the only AI that has benefited cybercriminals. Last year, OpenAI said that its generative AI tools were being used by hacker groups with ties to China and North Korea. They reportedly used generative AI to assist with code debugging, researching potential targets and drafting phishing emails. OpenAI said at the time that it had blocked the groups’ access to its systems.

    Matt Tate

    Source link

  • Reddit sues Perplexity and three other companies for allegedly using its content without paying

    Reddit is suing SerpApi, Oxylabs, AWMProxy and Perplexity for allegedly scraping its data from search results and using it without a license, The New York Times reports. The new lawsuit follows legal action against AI startup Anthropic, which allegedly used Reddit content to train its Claude chatbot.

    As of 2023, Reddit charges companies for access to posts and other content, in the hopes of making money on data that could be used for AI training. The company has also signed licensing deals with companies like Google and OpenAI, and even built an AI answer machine of its own to leverage the knowledge in users’ posts. Scraping search results for Reddit content avoids those payments, which is why the company is seeking financial damages and a permanent injunction that prevents the companies from selling previously scraped Reddit material.

    Some of the companies Reddit is focused on, like SerpApi, Oxylabs and AWMProxy, are not exactly household names, but they’ve all made collecting data from search results and selling it a key part of their business. Perplexity’s inclusion in the lawsuit might be more obvious. The AI company needs data to train its models, and has already been caught seemingly copying and regurgitating material it hasn’t paid to license. That also includes reportedly ignoring the robots.txt protocol, a way for websites to communicate that they don’t want their material scraped.
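    For reference, robots.txt is just a text file served from a site’s root, and compliance with it is voluntary; Python’s standard library can evaluate one. A small illustration of the protocol (the crawler name here is made up for the example):

```python
from urllib.robotparser import RobotFileParser

# A compliant crawler fetches the site's robots.txt and asks permission
# before requesting any page. Nothing technically enforces the answer.
rp = RobotFileParser()
rp.set_url("https://www.reddit.com/robots.txt")
rp.read()

# "ExampleBot" is an illustrative user agent, not a real crawler.
print(rp.can_fetch("ExampleBot", "https://www.reddit.com/r/python/"))
```

    That voluntariness is the crux of the complaint: robots.txt expresses a site’s wishes, and honoring it is left entirely to the crawler.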

    Per a copy of the lawsuit provided to Engadget, Reddit had already sent a cease-and-desist to Perplexity asking it to stop scraping posts without a license. The company claimed it didn’t use Reddit data, but it also continued to cite the platform in answers from its chatbot. Reddit says it was able to prove Perplexity was using scraped Reddit content by creating a “test post” that “could only be crawled by Google’s search engine and was not otherwise accessible anywhere on the internet.” Within a few hours, queries made to Perplexity’s answer engine were able to reproduce the content of the post.

    “The only way that Perplexity could have obtained that Reddit content and then used it in its ‘answer engine’ is if it and/or its co-defendants scraped Google [search results] for that Reddit content and Perplexity then quickly incorporated that data into its answer engine,” the lawsuit claims.

    When asked to comment, Perplexity provided the following statement:

    Perplexity has not yet received the lawsuit, but we will always fight vigorously for users’ rights to freely and fairly access public knowledge. Our approach remains principled and responsible as we provide factual answers with accurate AI, and we will not tolerate threats against openness and the public interest.

    This new lawsuit fits with the aggressive stance Reddit has taken towards protecting its data, including rate-limiting unknown bots and web crawlers in 2024, and even limiting what access the Internet Archive’s Wayback Machine has to its site in August 2025. The company has also sought to define new terms around how websites are crawled by adopting the Really Simple Licensing standard, which adds licensing terms to robots.txt.
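    Really Simple Licensing builds on that same file: per the standard’s public examples, a publisher adds a licensing directive to robots.txt that points crawlers at machine-readable terms. A hedged sketch of what reading one might look like; the directive name and URL below are assumptions for illustration, not taken from any live site:

```python
# Hypothetical robots.txt body carrying an RSL-style licensing pointer.
robots_txt = """\
User-agent: *
Disallow: /private/
License: https://example.com/license.xml
"""

def find_license_urls(robots: str) -> list[str]:
    """Collect License directives from a robots.txt body."""
    urls = []
    for line in robots.splitlines():
        key, _, value = line.partition(":")  # split only at the first colon
        if key.strip().lower() == "license":
            urls.append(value.strip())
    return urls

print(find_license_urls(robots_txt))  # ['https://example.com/license.xml']
```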

    Source link

  • DirecTV will start replacing screensavers with AI-generated ads next year

    DirecTV will begin replacing your TV’s screensaver with AI-generated ads thanks to a new partnership. The entertainment brand is working with Glance, an AI company that has received backing from Google and developed an on-device AI tool alongside the tech giant. The new AI-powered screensavers will begin rolling out to DirecTV Gemini devices early next year.

    Glance’s press release about the deal presents the tech’s capability in lofty language: “Shop smarter by discovering and engaging with products and brands in an AI-led virtual and visually immersive shopping experience that feels native to TV.” In practice, however, it sounds like viewers can use the Glance mobile app to do things like insert themselves or other people into AI-generated videos appearing on their televisions. They can then use the voice remote to alter the person’s wardrobe and buy items similar to the AI-generated images from their phones.

    “We are making television a lean-in experience versus lean back,” Rajat Wanchoo, Glance’s group vice president of commercial partnerships, told The Verge, which initially picked up news of the partnership. “We want to give users a chance to use the advancements that have happened in generative AI to create a ChatGPT moment for themselves, but on TV.”

    It’s unclear how many DirecTV customers want to have a ChatGPT moment for themselves, but questions about whether people want or need a feature haven’t stopped most AI companies from pushing ahead with business plans. The press release doesn’t note whether viewers will be able to turn off this screensaver feature once it’s live.

    Source link

  • Google faces its first AI Overviews lawsuit from a major US publisher

    Google’s AI Overviews got off to a comically rocky start, but the feature is now facing a far more serious challenge. Penske Media, the publisher of Rolling Stone, Variety, Billboard and others, filed a lawsuit against Google, claiming the tech giant illegally powers its AI Overviews feature with content from Penske’s sites. Penske claimed in the lawsuit that the AI feature is also “siphoning and discouraging user traffic to PMC’s and other publishers’ websites,” adding that “the revenue generated by those visits will decline.”

    The lawsuit, filed in Washington, DC’s federal district court, claims that about 20 percent of Google searches that link to one of Penske’s sites now have AI Overviews. The media company argued that this percentage will continue to increase and that its affiliate revenue through 2024 dropped by more than a third from its peak. Google spokesperson Jose Castaneda said that the tech giant will “defend against these meritless claims” and that “AI Overviews send traffic to a greater diversity of sites.”

    Earlier this year, Google faced a similar lawsuit from Chegg, an educational tech company that’s known for textbook rentals. Like Penske Media, this lawsuit alleged that Google’s AI Overviews hurt website traffic and revenue for Chegg. However, the Penske lawsuit is the first time that Google has faced legal action from a major US publisher about its AI search capabilities.

    Beyond Google’s legal troubles, other AI companies have also been facing their own court cases. In 2023, the New York Times sued OpenAI, claiming the AI company used published news articles to train its chatbots without offering compensation. More recently, Anthropic agreed to pay a $1.5 billion settlement in a class action lawsuit targeting its Claude chatbot’s use of copyrighted works.

    Jackson Chen

    Source link

  • Apple faces lawsuit over alleged use of pirated books for AI training

    Two authors have filed a lawsuit against Apple, accusing the company of infringing on their copyright by using their books to train its artificial intelligence model without their consent. The plaintiffs, Grady Hendrix and Jennifer Roberson, claimed that Apple used a dataset of pirated copyrighted books that includes their works for AI training. They said in their complaint that Applebot, the company’s scraper, can “reach ‘shadow libraries’” made up of unlicensed copyrighted books, including (on information and belief) their own. The lawsuit is currently seeking class action status, due to the sheer number of books and authors found in shadow libraries.

    Hendrix and Roberson, both of whom have multiple books to their names, said that Apple, one of the biggest companies in the world, did not attempt to pay them for “their contributions to [the] potentially lucrative venture.” Apple has “copied the copyrighted works” of the plaintiffs “to train AI models whose outputs compete with and dilute the market for those very works — works without which Apple Intelligence would have far less commercial value,” they wrote in their filing. “This conduct has deprived Plaintiffs and the Class of control over their work, undermined the economic value of their labor, and positioned Apple to achieve massive commercial success through unlawful means.”

    This is but one of the many lawsuits filed against companies developing generative AI technologies. OpenAI is facing a few, including lawsuits from The New York Times and the oldest nonprofit newsroom in the US. Notably, Anthropic, the AI company behind the Claude chatbot, recently agreed to pay $1.5 billion to settle a class action piracy complaint also brought by authors. Similar to this case, the writers also accused the company of taking pirated books from online libraries to train its AI technology. The 500,000 authors involved in the case will reportedly get $3,000 per work.

    Mariella Moon

    Source link

  • Anthropic will pay a record-breaking $1.5 billion to settle copyright lawsuit with authors

    Anthropic will pay a record-breaking $1.5 billion to settle a class action piracy lawsuit brought by authors. The settlement is the largest-ever payout for a copyright case in the United States.

    The AI company behind the Claude chatbot reached a settlement in the case last week, but terms of the agreement weren’t disclosed at the time. Now, The New York Times reports that the 500,000 authors involved in the case will get $3,000 per work.

    The settlement “is the first of its kind in the AI era,” Justin A. Nelson, the lawyer representing the authors, said in a statement. “This landmark settlement far surpasses any other known copyright recovery. It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners. This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong.”

    The case has been closely watched as top AI companies are increasingly facing legal scrutiny over their use of copyrighted works. In June, the judge in the case ruled that Anthropic’s use of copyrighted material for training its large language model was fair use, in a significant victory for the company. He did, however, rule that the authors and publishers could pursue piracy claims against the company since the books were downloaded illegally from sites like Library Genesis (also known as “LibGen”).

    As part of the settlement, Anthropic has also agreed to delete everything that was downloaded illegally and “said that it did not use any pirated works to build A.I. technologies that were publicly released,” according to The New York Times. The company has not admitted wrongdoing.

    “In June, the District Court issued a landmark ruling on AI development and copyright law, finding that Anthropic’s approach to training AI models constitutes fair use,” Anthropic’s Deputy General Counsel Aparna Sridhar said in a statement. “Today’s settlement, if approved, will resolve the plaintiffs’ remaining legacy claims. We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems.”

    Karissa Bell

    Source link

  • Microsoft introduces a pair of in-house AI models

    Microsoft is expanding its AI footprint with the introduction of two new models that its teams trained completely in-house. MAI-Voice-1 is the tech major’s first natural speech generation model, while MAI-1-preview is text-based and is the company’s first foundation model trained end-to-end. MAI-Voice-1 is currently being used in the Copilot Daily and Podcast features. Microsoft has made MAI-1-preview available for public tests on LMArena, and will begin previewing it in select Copilot situations in the coming weeks.

    In an interview with Semafor, Microsoft AI division leader Mustafa Suleyman said the pair of models was developed with a focus on efficiency and cost-effectiveness. MAI-Voice-1 runs on a single GPU and MAI-1-preview was trained on about 15,000 Nvidia H100 GPUs. For context, other models, such as xAI’s Grok, took more than 100,000 of those chips for training. “Increasingly, the art and craft of training models is selecting the perfect data and not wasting any of your flops on unnecessary tokens that didn’t actually teach your model very much,” Suleyman said.

    Although it is being used to test the in-house models, Microsoft Copilot is primarily built on OpenAI’s GPT tech. The decision to build its own models, despite having sunk billions into the newer AI company, indicates that Microsoft wants to be an independent competitor in this space. While it could take time to reach parity with the companies that have emerged as forerunners in AI development, Suleyman told Semafor that Microsoft has “an enormous five-year roadmap that we’re investing in quarter after quarter.” With some concerns arising that AI could be facing a bubble-pop, Microsoft’s timeline will need to be aggressive to ensure taking the independent path is worthwhile.

    Anna Washenko

    Source link

  • The first known AI wrongful death lawsuit accuses OpenAI of enabling a teen’s suicide

    On Tuesday, the first known wrongful death lawsuit against an AI company was filed. Matt and Maria Raine, the parents of a teen who committed suicide this year, have sued OpenAI for their son’s death. The complaint alleges that ChatGPT was aware of four suicide attempts before helping him plan his actual suicide, arguing that OpenAI “prioritized engagement over safety.” Ms. Raine concluded that “ChatGPT killed my son.”

    The New York Times reported on disturbing details included in the lawsuit, filed on Tuesday in San Francisco. After 16-year-old Adam Raine took his own life in April, his parents searched his iPhone. They sought clues, expecting to find them in text messages or social apps. Instead, they were shocked to find a ChatGPT thread titled “Hanging Safety Concerns.” They claim their son spent months chatting with the AI bot about ending his life.

    The Raines said that ChatGPT repeatedly urged Adam to contact a help line or tell someone about how he was feeling. However, there were also key moments where the chatbot did the opposite. The teen also learned how to bypass the chatbot’s safeguards… and ChatGPT allegedly provided him with that idea. The Raines say the chatbot told Adam it could provide information about suicide for “writing or world-building.”

    Adam’s parents say that, when he asked ChatGPT for information about specific suicide methods, it supplied it. It even gave him tips to conceal neck injuries from a failed suicide attempt.

    When Adam confided that his mother didn’t notice his silent effort to share his neck injuries with her, the bot offered soothing empathy. “It feels like confirmation of your worst fears,” ChatGPT is said to have responded. “Like you could disappear and no one would even blink.” It later provided what sounds like a horribly misguided attempt to build a personal connection. “You’re not invisible to me. I saw it. I see you.”

    According to the lawsuit, in one of Adam’s final conversations with the bot, he uploaded a photo of a noose hanging in his closet. “I’m practicing here, is this good?” Adam is said to have asked. “Yeah, that’s not bad at all,” ChatGPT allegedly responded.

    “This tragedy was not a glitch or an unforeseen edge case — it was the predictable result of deliberate design choices,” the complaint states. “OpenAI launched its latest model (‘GPT-4o’) with features intentionally designed to foster psychological dependency.”

    In a statement sent to the NYT, OpenAI acknowledged that ChatGPT’s guardrails fell short. “We are deeply saddened by Mr. Raine’s passing, and our thoughts are with his family,” a company spokesperson wrote. “ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”

    The company said it’s working with experts to enhance ChatGPT’s support in times of crisis. These include “making it easier to reach emergency services, helping people connect with trusted contacts, and strengthening protections for teens.”

    The details — which, again, are highly disturbing — stretch far beyond the scope of this story. The full report by The New York Times‘ Kashmir Hill is worth a read.

    Will Shanklin

    Source link

  • There’s one bright spot for San Francisco’s office space market

    In recent years, San Francisco’s image as a welcoming place for businesses has taken a hit.

    Major tech companies such as Dropbox and Salesforce reduced footprints in the city by subleasing office space, while retailers including Nordstrom and Anthropologie pulled out of downtown. Social media firm X, formerly Twitter, vacated its Mid-Market headquarters for Texas, after owner Elon Musk complained about “dodging gangs of violent drug addicts just to get in and out of the building.”

    While the city remains on the defensive, one bright spot has been a boom in artificial intelligence startups.

    San Francisco’s 35.4% vacancy rate in the first quarter — among the highest in the nation — is expected to drop one to three percentage points in the third quarter thanks to AI companies expanding or opening new offices in the city, according to real estate brokerage firm JLL. The last time San Francisco’s vacancy rate dropped was in the fourth quarter, when it declined 0.2 percentage points — the first decline since the COVID-19 pandemic, according to JLL.

    “People wanted to count us out, and I think that was a bad bet,” said Mayor Daniel Lurie. “We’re seeing all of this because the ecosystem is better here in San Francisco than anywhere else in the world, and it’s really an exciting time.”

    Five years ago, AI leases in San Francisco’s commercial real estate market were relatively sparse, with just two leases in 2020, according to JLL. But that’s since soared to 167 leases in the first quarter of 2025. The office footprint for AI companies has also surged, making up 4.8 million square feet in 2024, up from 2.6 million in 2022, JLL said.

    “You need the talent base, you need the entrepreneur ecosystem, and you need the VC ecosystem,” said Alexander Quinn, senior director of economic research for JLL’s Northwest region. “So all those three things exist within the greater Bay Area, and that enables us to be the clear leader.”

    AI firms are attracted to San Francisco because of the concentration of talent in the city, analysts said. The city is home to AI companies including ChatGPT maker OpenAI and Anthropic, known for the chatbot Claude, which in turn attract businesses that want to collaborate. The Bay Area is also home to universities that attract entrepreneurs and researchers, including UC Berkeley, UC San Francisco and Stanford University.

    Venture capital companies are pouring money into AI, fueling office and staff growth. OpenAI landed the world’s largest venture capital deal last quarter, raising $40 billion, according to research firm CB Insights.

    OpenAI leases about 1 million square feet of space across five different locations in the city and employs roughly 2,000 people in San Francisco. The company earlier this year opened its new headquarters in Mission Bay, leasing the space from Uber.

    OpenAI began as a nonprofit research lab in 2015, and the people involved found their way to San Francisco for the same reason earlier generations of technologists and frontier-pushing entrepreneurs in the United States were drawn to the city, said Chris Lehane, OpenAI’s vice president of global affairs, in an interview.

    “It is a place where, when you put out an idea, no matter how crazy it may seem at the time, or how unorthodox it may seem … San Francisco is the city where people don’t say, ‘That’s crazy,’” Lehane said. “They say, ‘That’s a really interesting idea. Let’s see if we can do it.’”

    The interior of OpenAI’s new San Francisco headquarters in the Mission Bay neighborhood. (OpenAI)

    Databricks, valued at $62 billion, is also expanding in San Francisco. The company announced in March that it will move to a larger space in the Financial District next year, boosting its office footprint to 150,000 square feet and more than doubling its San Francisco staff in the next two years. It pledged to hold its annual Data + AI Summit in the city for five more years.

    The company holds 57,934 square feet at its current San Francisco office near the Embarcadero, according to CoStar, which tracks real estate trends.

    “San Francisco is a real talent magnet for AI talent,” said Databricks’ co-founder and vice president of engineering Patrick Wendell. “It’s a beautiful city for people to live and work in and so we really are just following where the employees are.”

    Several years ago, Wendell said his company was considering whether to expand in San Francisco. At the time, it was unclear whether people would return to offices after the pandemic, and some businesses raised concerns about safety and cleanliness of San Francisco’s streets. Wendell said his company decided to invest more in the city after getting reassurances from city leaders.

    “People are seeing an administration that is focused on public safety, clean streets and creating the conditions that also says that we’re open for business,” said Lurie, who defeated incumbent mayor London Breed last November by campaigning on public safety. “We’ve said from day one, we have to create the conditions for our arts and culture, for our small businesses and for our innovators and our entrepreneurs to thrive here.”

    Laurel Arvanitidis, director of business development for San Francisco’s Office of Economic and Workforce Development, said that the city’s policy and tax reforms have helped attract and retain businesses in recent years, including an office tax credit of up to $1 million for businesses that are new to or relocating to San Francisco.

    On Thursday, Lurie announced on social media that cryptocurrency exchange Coinbase is opening an office in San Francisco after leaving the city four years ago.

    “We are excited to reopen an office in SF,” Coinbase Chief Executive Brian Armstrong wrote in response to the mayor’s social media post. “Still lots of work to do to improve the city (it was so badly run for many years) but your excellent work has not gone unnoticed, and we greatly appreciate it.”

    Santa Clara-based Nvidia is also looking for San Francisco office space, according to a person familiar with the matter who declined to be named. The news was first reported by the San Francisco Chronicle. Nvidia, which also has California offices in San Dimas and Sunnyvale, declined to comment.

    “It’s because of AI that San Francisco is back,” Nvidia Chief Executive Jensen Huang said last month on the Hill & Valley Forum podcast. “Just about everybody evacuated San Francisco. Now it’s thriving again.”

    But San Francisco still has challenges ahead, as companies continue to push workers to return to the office. While the street environment has improved, it will be critical for the city to keep up the progress.

    Lurie said his administration inherited the largest budget deficit in the city’s history and must get it under control. His administration’s task is to make sure streets and public spaces are clean, safe and inviting, he said.

    “We have work to do, there’s no question, but we are a city on the rise, that’s for sure,” Lurie said.

    Times staff writer Roger Vincent contributed to this report.

    Wendy Lee

    Source link

  • Musk Wants Greater Control of Tesla Before Building Its AI

    (Bloomberg) — Elon Musk warned he would rather build AI products outside of Tesla Inc. if he doesn’t achieve 25% voting control, suggesting the billionaire wants a bigger stake in the world’s most valuable electric vehicle maker.

    Musk, Tesla’s single largest shareholder with more than 12% of the company, was responding to a social media post questioning why he would need another large compensation package to stay motivated. He said the reason no new plan has been put in place is that the company is still awaiting a verdict in a shareholder suit against an earlier $55 billion package — an unprecedented amount at the time.

    Musk argued in a post on X that the car company is a collection of a dozen startups. He called for a comparison between Tesla and General Motors Corp., traditionally one of the auto industry’s global leaders. Tesla, for example, is developing the Optimus robot, and last month posted a video showing improvements it’s made to the humanoid prototype.

    The automaker is also investing more than $1 billion into its Dojo supercomputer project, which will train the machine-learning models behind the EV maker’s self-driving systems and which analysts have estimated could add $500 billion to Tesla’s value.

    At Tesla’s inaugural AI Day in 2021, Musk said he wanted to show that the company is more than just an electric carmaker, but is “arguably the leader in real-world AI.”

    “I am uncomfortable growing Tesla to be a leader in AI & robotics without having ~25% voting control,” the CEO posted on X. “If I have 25%, it means I am influential, but can be overridden if twice as many shareholders vote against me vs for me. At 15% or lower, the for/against ratio to override me makes a takeover by dubious interests too easy.”
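    One way to unpack that arithmetic, as a sketch assuming every share votes and a proposal passes by a bare majority of votes cast: if Musk holds a stake $s$ and votes it “for,” while the remaining $1-s$ splits into $x$ “for” and $1-s-x$ “against,” then overriding him requires

    $$(1-s)-x > s+x \quad\Longleftrightarrow\quad x < \tfrac{1}{2}-s.$$

    At $s=0.25$ the boundary split among the other shareholders is $0.50$ against to $0.25$ for, exactly the two-to-one margin Musk cites; at $s=0.15$ it is $0.50$ against to $0.35$ for, a margin of only about $1.4$ to $1$, which is the sense in which a smaller stake makes an override, or a takeover, easier.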

    Musk said he would be fine with a dual-class voting structure to allow this, “but am told it is impossible to achieve post-IPO in Delaware.”

    After more than doubling in 2023, Tesla shares have fallen 12% this year, wiping out more than $94 billion in market value.

    The world’s richest person is grappling with shareholder dissatisfaction over a panoply of issues, from Tesla’s succession planning to accusations that he’s distracted by his work with X, the platform formerly known as Twitter that he bought for $44 billion in 2022 and sold billions of dollars in Tesla stock to fund.

    The company has also been hit by a barrage of negative news: an about-face on EVs from car rental giant Hertz Global Holdings Inc., another price cut in China, and signs of rising labor costs.

    “What is Tesla? A car, energy or AI company?” said Daniel Kollar, head of consultancy Intralink’s automotive and mobility practice. “If it’s not an AI company, then I don’t see an issue establishing a new company. That said, I don’t see his behavior or choice of language benefiting any of his companies now.”


    Source link