ReportWire

Tag: machine learning

  • This AI-Powered Robot Keeps Going Even if You Attack It With a Chainsaw

    A four-legged robot that keeps crawling even after all four of its legs have been hacked off with a chainsaw is the stuff of nightmares for most people.

    For Deepak Pathak, cofounder and CEO of the startup Skild AI, the dystopian feat of adaptation is an encouraging sign of a new, more general kind of robotic intelligence.

    “This is something we call an omni-bodied brain,” Pathak tells me. His startup developed the generalist artificial intelligence algorithm to address a key challenge with advancing robotics: “Any robot, any task, one brain. It is absurdly general.”

    Many researchers believe the AI models used to control robots could experience a profound leap forward, similar to the one that produced language models and chatbots, if enough training data can be gathered.

    The AI-controlled robot is able to adapt to new, extreme circumstances, such as the loss of limbs.

    Existing methods for training robotic AI models, such as having algorithms learn to control a particular system through teleoperation or in simulation, do not generate enough data, Pathak says.

    Skild’s approach is to instead have a single algorithm learn to control a large number of different physical robots across a wide range of tasks. Over time, this produces a model which the company calls Skild Brain, with a more general ability to adapt to different physical forms—including ones it has never seen before. The researchers created a smaller version of the model, called LocoFormer, for an academic paper outlining its approach.

    The model is also designed to adapt quickly to a new situation, such as a missing leg or treacherous new terrain, figuring out how to apply what it has learned to its new predicament. Pathak compares the approach to the way large language models can take on particularly challenging problems by breaking them down and feeding their deliberations back into their own context window—an approach known as in-context learning.
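
    Skild has not published the details of its system, but the mechanism Pathak describes can be sketched. Below is a minimal, hedged illustration assuming a transformer policy that conditions on a rolling history of recent observations and actions; every name and dimension is invented, not Skild’s. A change in the body’s dynamics, say a lost leg, shows up in that history, so the policy’s output shifts without any weight update, the robotic analogue of in-context learning.

    ```python
    # Sketch of in-context adaptation (illustrative, not Skild's code): the
    # policy conditions on a window of recent (observation, action) pairs, so
    # a sudden change in dynamics alters the context and hence the action,
    # with no gradient updates at deployment time.
    import torch
    import torch.nn as nn

    class ContextPolicy(nn.Module):
        def __init__(self, obs_dim=48, act_dim=12, d_model=128, ctx_len=64):
            super().__init__()
            self.embed = nn.Linear(obs_dim + act_dim, d_model)  # one token per timestep
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(d_model, act_dim)
            self.ctx_len = ctx_len

        def forward(self, obs_hist, act_hist):
            # obs_hist: (batch, T, obs_dim); act_hist: (batch, T, act_dim)
            tokens = torch.cat([obs_hist, act_hist], dim=-1)[:, -self.ctx_len:]
            return self.head(self.encoder(self.embed(tokens))[:, -1])  # latest action

    policy = ContextPolicy()
    obs, act = torch.randn(1, 64, 48), torch.randn(1, 64, 12)
    next_action = policy(obs, act)  # shifts as the history reflects new dynamics
    ```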

    Other companies, including the Toyota Research Institute and a rival startup called Physical Intelligence, are also racing to develop more generally capable robot AI models. Skild is unusual, however, in how it is building models that generalize across so many different kinds of hardware.

    LocoFormer is trained with large-scale RL on a variety of procedurally generated robots with aggressive domain randomization.

    Courtesy of Skild

    In one experiment, the Skild team trained their algorithm to control a large number of walking robots of different shapes. When the algorithm was then run on real two- and four-legged robots—systems not included in the training data—it was able to control their movements and have them walk around.
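
    The usual training recipe behind this kind of zero-shot transfer is procedural generation plus domain randomization. The sketch below is a hedged illustration of that idea, not Skild’s actual pipeline; the parameter ranges and field names are invented.

    ```python
    # Illustration of "procedurally generated robots with aggressive domain
    # randomization": each training episode samples a fresh morphology and
    # fresh physics, so the policy never overfits to any single body.
    import random

    def sample_robot():
        num_legs = random.choice([2, 4, 6])
        return {
            "num_legs": num_legs,
            "leg_lengths": [random.uniform(0.15, 0.45) for _ in range(num_legs)],  # meters
            "torso_mass": random.uniform(3.0, 30.0),     # kg
            "motor_strength": random.uniform(0.5, 1.5),  # scale on nominal torque
            "ground_friction": random.uniform(0.3, 1.2),
            "disabled_joints": random.sample(range(num_legs * 3),
                                             k=random.randint(0, 2)),  # simulate failures
        }

    for episode in range(3):
        spec = sample_robot()  # a real pipeline would instantiate this in simulation
        print(f"episode {episode}: {spec['num_legs']} legs, "
              f"{len(spec['disabled_joints'])} broken joints")
    ```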

    At one point, the team found that a four-legged robot running the company’s omni-bodied brain would quickly adapt when placed on its hind legs. Because it senses the ground only beneath its hind legs, the algorithm operates the robot dog as if it were a humanoid, having it stroll around on two legs.

    LocoFormer learns continuously through online experience. The policy can learn from falls in early trials to improve control strategies in later ones.

    Courtesy of Skild

    The generalist algorithm could also adapt to extreme changes to a robot’s shape—when, for example, its legs were tied together, cut off, or modified to become longer. The team also tried deactivating two of the motors on a quadruped robot that has wheels as well as legs. The robot was able to adapt by balancing on two wheels like an unsteady bicycle.

    When facing large disturbances—such as morphological changes, motor failures, or weight changes—LocoFormer can rebuild its internal representations of the body to achieve online adaptation.

    Courtesy of Skild

    Skild is testing the same approach for robot manipulation. It trained Skild Brain on a range of simulated robot arms and found that the resulting model could control unfamiliar hardware and adapt to sudden changes in its environment, like a reduction in lighting. The startup is already working with some companies that use robot arms, Pathak says. In 2024 the company raised $300 million in a round that valued it at $1.5 billion.

    Pathak says the results might seem creepy to some, but to him they show the sparks of a kind of physical superintelligence for robots. “It is so exciting to me personally, dude,” he says.

    What do you think of Skild’s multitalented robot brain? Send an email to ailab@wired.com to let me know.


    This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.

    Will Knight

  • YouTube Thinks AI Is Its Next Big Bang

    Google figured out early on that video would be a great addition to its search business, so in 2005 it launched Google Video. Focused on making deals with the entertainment industry for second-rate content, and overly cautious about what users could upload, it flopped. Meanwhile, a tiny startup run by a handful of employees working above a San Mateo, California, pizzeria was exploding, simply by letting anyone upload their goofy videos and not worrying too much about who held copyrights to the clips. In 2006, Google snapped up that year-old company, figuring it would sort out the IP stuff later. (It did.) Though the $1.65 billion purchase price for YouTube was about a billion dollars more than its valuation, it was one of the greatest bargains ever. YouTube is now arguably the most successful video property in the world. It’s an industry leader in music and podcasting, and more than half of its viewing time is now on living room screens. It has paid out over $100 billion to creators since 2021. One estimate from MoffettNathanson analysts cited by Variety is that if it were a separate company, it might be worth $550 billion.

    Now the service is taking what might be its biggest leap yet, embracing a new paradigm that could change its essence. I’m talking, of course, about AI. Since YouTube is still a wholly owned subsidiary of AI-obsessed Google, it’s not surprising that its anniversary product announcements this week touted features that will let creators use AI to enhance or produce videos. After all, Google DeepMind’s Veo 3 technology was YouTube’s for the taking. Ready or not, the video camera ultimately will be replaced by the prompt. This means a rethinking of YouTube’s superpower: authenticity.

    YouTube’s Big Bang

    I had that shift in mind when I recently interviewed YouTube CEO Neal Mohan at his office at YouTube’s San Bruno, California, headquarters. Mohan took over as CEO in 2023 when his boss, Susan Wojcicki, stepped down; she died of cancer the following year. But first we chat a bit about the company’s history. Mohan reminds me that his own connection with the service began even before he joined Google in 2008, after his ad company DoubleClick merged with the search giant. He was struck by how the YouTube founders were first with a revelation that, he says, remains the core of the service. “It was not just that people were interested in sharing short clips about themselves and that it was done without a gatekeeper,” he says, “but that people were interested in watching them. That was the big bang inflection point. Our mission is to give everyone a voice and show them the world.”

    Critics of Google’s power often argue that not only the public but also YouTube itself might benefit from a split from the parent company. Just think what the world’s biggest video company could do if it were truly independent. Mohan, a self-admitted Google loyalist, disagrees. “I don’t believe YouTube would be where it is if it weren’t part of Google,” he says. He says that being part of a giant company allowed YouTube to make long-term bets on things like streaming and podcasting. When I ask whether YouTube might be even more innovative on its own, he reminds me that YouTube has been sufficiently innovative to challenge legacy media in things like live sports while fending off challenges from competitors focusing on the creator economy.

    YouTube has an advantage in breadth that TikTok and Reels can’t dream of … “everything from a 15-second short to a 15-minute traditional long-form YouTube video to a 15-hour livestream and everything in between,” Mohan crows.

    It’s currently pressing another advantage: Google’s AI technology. The announcements this week range from fun features, like putting you or your friends into videos of astonishing acrobatic feats, to tools that let podcasters make instant television shows from their audio conversations by having AI create visuals that resonate with the content of the chatter. Mohan says that, in a sense, AI is just the latest enhancement of the service. “When YouTube was born 20 years ago it was about using technology for more people to have their voice heard,” he says. “With AI, it’s the same core principle—how do we use technology to democratize creation?”

    Steven Levy

  • USA Today Enters Its Gen AI Era With a Chatbot

    The publishing company behind USA Today and 220 other publications is today rolling out a chatbot-like tool called DeeperDive that can converse with readers, summarize insights from its journalism, and suggest new content from across its sites.

    “Visitors now have a trusted AI answer engine on our platform for anything they want to engage with, anything they want to ask,” said Mike Reed, CEO of Gannett and the USA Today Network, at the WIRED AI Power Summit in New York, an event that brought together voices from the tech industry, politics, and the world of media, “and it is performing really great.”

    Most publishers have a fraught relationship with AI, as the chatbots that trained on their content are now summarizing it and eating the traffic that search engines used to send them.

    Reed said that Google’s AI Overview feature has dramatically cut traffic to publishers across the industry. “We are watching the same movie as everyone else is watching,” Reed said ahead of today’s announcement. “We can see some risk in the future to any content distribution model that is based primarily on SEO optimization.”

    Like other publishers, Gannett has signed some deals with AI companies, including Amazon and Perplexity, to license its content. The company actively blocks the web scrapers that crawl websites in order to steal content.

    DeeperDive represents a bet that harnessing the same generative artificial intelligence technology could help publishers capture readers’ attention by engaging with them in new ways.

    The tool replaces a conventional search box and automatically suggests questions that readers might want to ask. For example, today it offers as one prompt, “How does Trump’s Fed policy affect the economy?”

    DeeperDive generates a short answer to the query along with relevant stories from across the USA Today network. Reed says it is crucial that DeeperDive bases its output on factually correct information and does not draw from opinion pieces. “We only look at our real journalism,” he says.

    The interface of DeeperDive on the homepage of USA Today

    Photograph: USA Today

    Reed adds that his company hopes that the tool will also reveal more about readers’ interests. “That can help us from a revenue standpoint,” he said.

    DeeperDive was developed by the advertising company Taboola. Adam Singolda, Taboola’s CEO, says his firm developed DeeperDive by fine-tuning several open source models.

    Singolda says DeeperDive benefits from data gathered across Taboola’s network, which spans more than 600 million daily readers and around 11,000 publishers. He says the tool “grounds every answer in articles retrieved from our publisher partners and requires sentence-level citations to those sources” and will avoid generating an output if information from two sources seems to conflict.
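
    As a rough sketch of the contract Singolda describes, no source means no answer, and everything returned carries a citation, consider the toy below. The corpus, keyword retriever, and quoting logic are invented stand-ins, not Taboola’s system; a production deployment would use learned retrieval and a generative model under the same constraint.

    ```python
    # Toy retrieval-grounded answering: retrieve publisher articles, answer
    # only from them, and tag every returned sentence with its source.
    from dataclasses import dataclass

    @dataclass
    class Article:
        article_id: str
        text: str

    CORPUS = [
        Article("usat-001", "The Federal Reserve held interest rates steady in September."),
        Article("usat-002", "Economists say tariff policy has added uncertainty to markets."),
    ]

    def retrieve(query: str, corpus: list[Article], top_k: int = 3) -> list[Article]:
        # Toy keyword-overlap retriever; real systems use learned embeddings.
        terms = set(query.lower().split())
        scored = [(len(terms & set(a.text.lower().split())), a) for a in corpus]
        return [a for score, a in sorted(scored, key=lambda s: -s[0])[:top_k] if score > 0]

    def grounded_answer(query: str, corpus: list[Article]):
        sources = retrieve(query, corpus)
        if not sources:
            return None  # nothing to ground on: decline rather than hallucinate
        # Stand-in for generation: quote supporting sentences with citations,
        # mimicking the "sentence-level citations" requirement.
        return " ".join(f"{a.text} [{a.article_id}]" for a in sources)

    print(grounded_answer("What is the Fed doing with interest rates?", CORPUS))
    ```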

    Gannett’s CEO Reed said ahead of today’s event that, together with Taboola, his firm is interested in exploring agentic tools for readers’ shopping decisions. “Our audiences have a higher intent to purchase to begin with,” he says. “That’s really the next step here.”

    Will Knight

  • Anthropic Agrees to Pay Authors at Least $1.5 Billion in AI Copyright Settlement

    Anthropic has agreed to pay at least $1.5 billion to settle a lawsuit brought by a group of book authors alleging copyright infringement, an estimated $3,000 per work. In a court motion on Friday, the plaintiffs emphasized that the terms of the settlement are “critical victories” and that going to trial would have been an “enormous” risk.

    This is the first class action settlement centered on AI and copyright in the United States, and the outcome may shape how regulators and creative industries approach the legal debate over generative AI and intellectual property. According to the settlement agreement, the class action will apply to approximately 500,000 works, but that number may go up once the list of pirated materials is finalized. For every additional work, the artificial intelligence company will pay an extra $3,000. Plaintiffs plan to deliver a final list of works to the court by October.

    “This landmark settlement far surpasses any other known copyright recovery. It is the first of its kind in the AI era. It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners. This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong,” says colead plaintiffs’ counsel Justin Nelson of Susman Godfrey LLP.

    Anthropic is not admitting any wrongdoing or liability. “Today’s settlement, if approved, will resolve the plaintiffs’ remaining legacy claims. We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems,” Anthropic deputy general counsel Aparna Sridhar said in a statement.

    The lawsuit, which was originally filed in 2024 in the US District Court for the Northern District of California, was part of a larger ongoing wave of copyright litigation brought against tech companies over the data they used to train artificial intelligence programs. Authors Andrea Bartz, Kirk Wallace Johnson, and Charles Graeber alleged that Anthropic trained its large language models on their work without permission, violating copyright law.

    This June, Senior District Judge William Alsup ruled that Anthropic’s AI training was shielded by the “fair use” doctrine, which allows unauthorized use of copyrighted works under certain conditions. It was a win for the tech company but came with a major caveat. As it gathered materials to train its AI tools, Anthropic had relied on a corpus of books pirated from so-called “shadow libraries,” including the notorious site LibGen, and Alsup determined that the authors should still be able to bring Anthropic to trial in a class action over pirating their work. (Anthropic maintains that it did not actually train its products on the pirated works, instead opting to purchase copies of books.)

    “Anthropic downloaded over seven million pirated copies of books, paid nothing, and kept these pirated copies in its library even after deciding it would not use them to train its AI (at all or ever again). Authors argue Anthropic should have paid for these pirated library copies. This order agrees,” Alsup wrote in his summary judgment.

    Kate Knibbs

  • The Doomers Who Insist AI Will Kill Us All

    The subtitle of the doom bible to be published by AI extinction prophets Eliezer Yudkowsky and Nate Soares later this month is “Why superhuman AI would kill us all.” But it really should be “Why superhuman AI WILL kill us all,” because even the coauthors don’t believe that the world will take the necessary measures to stop AI from eliminating all non-super humans. The book is beyond dark, reading like notes scrawled in a dimly lit prison cell the night before a dawn execution. When I meet these self-appointed Cassandras, I ask them outright if they believe that they personally will meet their ends through some machination of superintelligence. The answers come promptly: “yeah” and “yup.”

    I’m not surprised, because I’ve read the book—the title, by the way, is If Anyone Builds It, Everyone Dies. Still, it’s a jolt to hear this. It’s one thing to, say, write about cancer statistics and quite another to talk about coming to terms with a fatal diagnosis. I ask them how they think the end will come for them. Yudkowsky at first dodges the question. “I don’t spend a lot of time picturing my demise, because it doesn’t seem like a helpful mental notion for dealing with the problem,” he says. Under pressure he relents. “I would guess suddenly falling over dead,” he says. “If you want a more accessible version, something about the size of a mosquito or maybe a dust mite landed on the back of my neck, and that’s that.”

    The technicalities of his imagined fatal blow delivered by an AI-powered dust mite are inexplicable, and Yudkowsky doesn’t think it’s worth the trouble to figure out how that would work. He probably couldn’t understand it anyway. Part of the book’s central argument is that superintelligence will come up with scientific stuff that we can’t comprehend any more than cave people could imagine microprocessors. Coauthor Soares also says he imagines the same thing will happen to him but adds that he, like Yudkowsky, doesn’t spend a lot of time dwelling on the particulars of his demise.

    We Don’t Stand a Chance

    Reluctance to visualize the circumstances of their personal demise is an odd thing to hear from people who have just coauthored an entire book about everyone’s demise. For doomer-porn aficionados, If Anyone Builds It is appointment reading. After zipping through the book, I do understand the fuzziness of nailing down the method by which AI ends our lives and all human lives thereafter. The authors do speculate a bit. Boiling the oceans? Blocking out the sun? All guesses are probably wrong, because we’re locked into a 2025 mindset, and the AI will be thinking eons ahead.

    Yudkowsky is AI’s most famous apostate, switching from researcher to grim reaper years ago. He’s even done a TED talk. After years of public debate, he and his coauthor have an answer for every counterargument launched against their dire prognostication. For starters, it might seem counterintuitive that our days are numbered by LLMs, which often stumble on simple arithmetic. Don’t be fooled, the authors say. “AIs won’t stay dumb forever,” they write. If you think that superintelligent AIs will respect boundaries humans draw, forget it, they say. Once models start teaching themselves to get smarter, AIs will develop “preferences” on their own that won’t align with what we humans want them to prefer. Eventually they won’t need us. They won’t be interested in us as conversation partners or even as pets. We’d be a nuisance, and they would set out to eliminate us.

    The fight won’t be a fair one. They believe that at first AI might require human aid to build its own factories and labs, easily done by stealing money and bribing people to help it out. Then it will build stuff we can’t understand, and that stuff will end us. “One way or another,” write these authors, “the world fades to black.”

    The authors see the book as a kind of shock treatment to jar humanity out of its complacency and into adopting the drastic measures needed to stop this unimaginably bad conclusion. “I expect to die from this,” says Soares. “But the fight’s not over until you’re actually dead.” Too bad, then, that the solutions they propose to stop the devastation seem even more far-fetched than the idea that software will murder us all. It all boils down to this: Hit the brakes. Monitor data centers to make sure that they’re not nurturing superintelligence. Bomb those that aren’t following the rules. Stop publishing papers with ideas that accelerate the march to superintelligence. Would they have banned, I ask them, the 2017 paper on transformers that kicked off the generative AI movement? Oh yes, they would have, they respond. Instead of ChatGPT, they want Ciao-GPT. Good luck stopping this trillion-dollar industry.

    Playing the Odds

    Personally, I don’t see my own light snuffed by a bite in the neck from some super-advanced dust mite. Even after reading this book, I don’t think it’s likely that AI will kill us all. Yudkowsky has previously dabbled in Harry Potter fan fiction, and the fanciful extinction scenarios he spins are too weird for my puny human brain to accept. My guess is that even if superintelligence does want to get rid of us, it will stumble in enacting its genocidal plans. AI might be capable of whipping humans in a fight, but I’ll bet against it in a battle with Murphy’s law.

    Still, the catastrophe theory doesn’t seem impossible, especially since no one has really set a ceiling for how smart AI can become. Studies also show that advanced AI has picked up a lot of humanity’s nasty attributes; in one experiment, a model even contemplated blackmail to stave off retraining. It’s also disturbing that some researchers who spend their lives building and improving AI think there’s a nontrivial chance that the worst can happen. One survey indicated that almost half of the responding AI scientists pegged the odds of a species wipeout at 10 percent or higher. If they believe that, it’s crazy that they go to work each day to make AGI happen.

    My gut tells me the scenarios Yudkowsky and Soares spin are too bizarre to be true. But I can’t be sure they are wrong. Every author dreams of their book being an enduring classic. Not so much these two. If they are right, there will be no one around to read their book in the future. Just a lot of decomposing bodies that once felt a slight nip at the back of their necks, and the rest was silence.

    Steven Levy

  • This Robot Only Needs a Single AI Model to Master Humanlike Movements

    While there is a lot of work to do, Tedrake says all of the evidence so far suggests that the approaches used to build LLMs also work for robots. “I think it’s changing everything,” he says.

    Gauging progress in robotics has become more challenging of late, of course, with videoclips showing commercial humanoids performing complex chores, like loading refrigerators or taking out the trash with seeming ease. YouTube clips can be deceptive, though, and humanoid robots tend to be either teleoperated, carefully programmed in advance, or trained to do a single task in very controlled conditions.

    The new Atlas work is a big sign that robotics is starting to experience the kind of advance that, in generative AI, eventually produced the general language models behind ChatGPT. Eventually, such progress could give us robots that are able to operate in a wide range of messy environments with ease and are able to rapidly learn new skills—from welding pipes to making espressos—without extensive retraining.

    “It’s definitely a step forward,” says Ken Goldberg, a roboticist at UC Berkeley who receives some funding from TRI but was not involved with the Atlas work. “The coordination of legs and arms is a big deal.”

    Goldberg says, however, that the idea of emergent robot behavior should be treated carefully. Just as the surprising abilities of large language models can sometimes be traced to examples included in their training data, he says that robots may demonstrate skills that seem more novel than they really are. He adds that it is helpful to know details about how often a robot succeeds and in what ways it fails during experiments. TRI has previously been transparent with the work it’s done on LBMs and may well release more data on the new model.

    Whether simply scaling up the data used to train robot models will unlock ever-more emergent behavior remains an open question. At a debate held in May at the International Conference on Robotics and Automation in Atlanta, Goldberg and others cautioned that engineering methods will also play an important role going forward.

    Tedrake, for one, is convinced that robotics is nearing an inflection point—one that will enable more real-world use of humanoids and other robots. “I think we need to put these robots out in the world and start doing real work,” he says.

    What do you think of Atlas’ new skills? And do you think that we are headed for a ChatGPT-style breakthrough in robotics? Let me know your thoughts at ailab@wired.com.


    This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.

    Will Knight

  • Intrusive Thought of the Day: Is That YouTube Video Enhanced With AI?

    Have you noticed that YouTube videos have carried a little hint of the uncanny valley in recent months? You are far from alone: a growing chorus of folks stuck in YouTube’s endless scroll of Shorts have started piecing together similar qualities across videos that give viewers the heebie-jeebies. That’s probably not the response YouTube was going for, but according to a report from The Atlantic, the effects are intentional and part of an ongoing experiment by YouTube to “enhance” videos.

    Here’s what to look for to spot an “enhanced” video, according to users: “punchy shadows,” “sharp edges,” and a “plastic” look. According to the BBC, YouTubers have also pointed out these strange effects, which lead to more defined wrinkles appearing in clothing, skin looking unnaturally smooth, and occasional warping around the edges of a person’s face. Some creators expressed concerns that the unnatural look could lead to viewers thinking they used AI in their video.

    All of this is happening because YouTube is tweaking people’s videos after they are uploaded, seemingly without any forewarning and without creators’ permission. And while YouTubers like Rhett Shull have suggested the effects are the result of AI upscaling, an attempt to “improve” video quality using AI tools, YouTube has a different explanation.

    “We’re running an experiment on select YouTube Shorts that uses traditional machine learning technology to unblur, denoise and improve clarity in videos during processing (similar to what a modern smartphone does when you record a video),” Rene Ritchie, YouTube’s head of editorial and creator liaison, said in a Twitter post. “YouTube is always working on ways to provide the best video quality and experience possible, and will continue to take creator and viewer feedback into consideration as we iterate and improve on these features.”

    It’s certainly an interesting decision to explicitly identify these techniques as “traditional machine learning technology” rather than AI. A spokesperson for Google made the message even clearer in a statement to The Atlantic, stating, “These enhancements are not done with generative AI.”
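
    For readers wondering what enhancement without generative AI looks like in practice: classic filters rework pixels that already exist rather than synthesizing new content. The sketch below is my own illustration, not YouTube’s pipeline (which reportedly uses learned but non-generative models); it chains a standard denoiser with unsharp masking, and over-applying exactly this kind of filtering produces the smooth, over-sharpened, “plastic” look creators describe.

    ```python
    # Classic, non-generative video-frame "enhancement": denoise, then sharpen.
    # No new content is synthesized; existing pixels are reweighted.
    import cv2

    frame = cv2.imread("frame.png")  # hypothetical input frame

    # Non-local means denoising: averages similar patches to suppress noise.
    denoised = cv2.fastNlMeansDenoisingColored(frame, None, h=10, hColor=10,
                                               templateWindowSize=7,
                                               searchWindowSize=21)

    # Unsharp masking: subtract a blurred copy to exaggerate edges.
    blurred = cv2.GaussianBlur(denoised, (0, 0), sigmaX=3)
    sharpened = cv2.addWeighted(denoised, 1.5, blurred, -0.5, 0)

    cv2.imwrite("frame_enhanced.png", sharpened)
    ```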

    It’s not like YouTube has exactly been distancing itself from generative AI. The platform just launched a new suite of “generative effects” that it has encouraged creators to use. Other creators have shown that YouTube uses AI tools to generate “inspiration” and ideas for new videos for their channel. But perhaps it’s the viscerally negative response that people have had when spotting these “enhanced” videos that has YouTube backing away from the AI-centric language.

    This experiment has apparently been going on for a couple of months, if the eyes of viewers are to be trusted. The BBC tracked examples of complaints about the effects described by YouTube as “enhancements” dating back to June of this year. It’s also led to some users taking a conspiratorial view of the experiment, suggesting the company is trying to desensitize audiences to AI-style effects and make them more palatable. On the positive side, that at least suggests people are generally rejecting slop. Ideally, YouTube won’t keep dragging its creators down into the AI mud and will let their videos be. It’s not like the platform is exactly short on content, after all.

    AJ Dellinger

  • The Hidden Ingredients Behind AI’s Creativity

    The original version of this story appeared in Quanta Magazine.

    We were once promised self-driving cars and robot maids. Instead, we’ve seen the rise of artificial intelligence systems that can beat us in chess, analyze huge reams of text, and compose sonnets. This has been one of the great surprises of the modern era: physical tasks that are easy for humans turn out to be very difficult for robots, while algorithms are increasingly able to mimic our intellect.

    Another surprise that has long perplexed researchers is those algorithms’ knack for their own, strange kind of creativity.

    Diffusion models, the backbone of image-generating tools such as DALL·E, Imagen, and Stable Diffusion, are designed to generate carbon copies of the images on which they’ve been trained. In practice, however, they seem to improvise, blending elements within images to create something new—not just nonsensical blobs of color, but coherent images with semantic meaning. This is the “paradox” behind diffusion models, said Giulio Biroli, an AI researcher and physicist at the École Normale Supérieure in Paris: “If they worked perfectly, they should just memorize,” he said. “But they don’t—they’re actually able to produce new samples.”

    To generate images, diffusion models use a process known as denoising. They convert an image into digital noise (an incoherent collection of pixels), then reassemble it. It’s like repeatedly putting a painting through a shredder until all you have left is a pile of fine dust, then patching the pieces back together. For years, researchers have wondered: If the models are just reassembling, then how does novelty come into the picture? It’s like reassembling your shredded painting into a completely new work of art.
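
    In the standard diffusion (DDPM-style) formulation, both directions fit in a few lines. The sketch below is illustrative only: the noise schedule is generic and the denoising network is a placeholder, not any particular product’s model.

    ```python
    # Forward process: drown an image in Gaussian noise.
    # Reverse process: run a learned denoiser backwards, step by step.
    import torch

    T = 1000
    betas = torch.linspace(1e-4, 0.02, T)            # noise schedule
    alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative signal retention

    def add_noise(x0, t):
        # x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * noise
        noise = torch.randn_like(x0)
        a = alphas_bar[t]
        return a.sqrt() * x0 + (1 - a).sqrt() * noise, noise

    @torch.no_grad()
    def denoise_step(model, x_t, t):
        # Estimate the noise, strip a fraction of it, then re-inject a little
        # fresh randomness -- the stage where imperfection (and, the new paper
        # argues, creativity) enters.
        eps = model(x_t, t)                          # placeholder denoising network
        a, a_bar = 1.0 - betas[t], alphas_bar[t]
        mean = (x_t - betas[t] / (1 - a_bar).sqrt() * eps) / a.sqrt()
        return mean if t == 0 else mean + betas[t].sqrt() * torch.randn_like(x_t)
    ```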

    Now two physicists have made a startling claim: It’s the technical imperfections in the denoising process itself that lead to the creativity of diffusion models. In a paper presented at the International Conference on Machine Learning 2025, the duo developed a mathematical model of trained diffusion models to show that their so-called creativity is in fact a deterministic process—a direct, inevitable consequence of their architecture.

    By illuminating the black box of diffusion models, the new research could have big implications for future AI research—and perhaps even for our understanding of human creativity. “The real strength of the paper is that it makes very accurate predictions of something very nontrivial,” said Luca Ambrogioni, a computer scientist at Radboud University in the Netherlands.

    Bottoms Up

    Mason Kamb, a graduate student studying applied physics at Stanford University and the lead author of the new paper, has long been fascinated by morphogenesis: the processes by which living systems self-assemble.

    One way to understand the development of embryos in humans and other animals is through what’s known as a Turing pattern, named after the 20th-century mathematician Alan Turing. Turing patterns explain how groups of cells can organize themselves into distinct organs and limbs. Crucially, this coordination all takes place at a local level. There’s no CEO overseeing the trillions of cells to make sure they all conform to a final body plan. Individual cells, in other words, don’t have some finished blueprint of a body on which to base their work. They’re just taking action and making corrections in response to signals from their neighbors. This bottom-up system usually runs smoothly, but every now and then it goes awry—producing hands with extra fingers, for example.
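
    The canonical toy demonstration of a Turing pattern is the Gray-Scott reaction-diffusion model. The short simulation below is a standard textbook exercise, not code from the paper; every update is purely local, each cell reacting only to its neighbors, yet spots and stripes emerge with no global blueprint.

    ```python
    # Gray-Scott reaction-diffusion: local chemistry, global pattern.
    import numpy as np

    n, Du, Dv, F, k = 128, 0.16, 0.08, 0.035, 0.065
    U, V = np.ones((n, n)), np.zeros((n, n))
    U[54:74, 54:74], V[54:74, 54:74] = 0.5, 0.25      # a small seed perturbation

    def laplacian(Z):
        # Nearest-neighbor diffusion with wraparound edges.
        return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
                np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

    for _ in range(5000):
        uvv = U * V * V
        U += Du * laplacian(U) - uvv + F * (1 - U)    # feed chemical U
        V += Dv * laplacian(V) + uvv - (F + k) * V    # kill chemical V

    print("pattern variance:", V.var())               # nonzero: structure formed
    ```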

    Webb Wright

  • Sam Altman’s AI paradox: Warning of a bubble while raising trillions

    Welcome to Eye on AI! AI reporter Sharon Goldman here, filling in for Jeremy Kahn. In this edition…Sam Altman’s AI paradox…AI has quietly become a fixture of advertising…Silicon Valley’s AI deals are creating zombie startups…sources say Nvidia is working on a new AI chip for China that outperforms the H20.

    I was not invited to Sam Altman’s cozy dinner with reporters in San Francisco last week (whomp whomp), but maybe that’s for the best. I have trouble suppressing exasperated eye rolls when I hear peak Silicon Valley–ironic statements.

    I am not sure I could have controlled myself when the OpenAI CEO said that he believes AI could be in a “bubble,” with market conditions similar to the 1990s dotcom boom. Yes, he reportedly said, “investors as a whole are overexcited about AI.” 

    Yet, over the same meal, Altman also apparently said he expects OpenAI to spend trillions of dollars on its data center buildout in the “not very distant future,” adding that “you should expect a bunch of economists wringing their hands, saying, ‘This is so crazy, it’s so reckless,’ and we’ll just be like, ‘You know what? Let us do our thing.’”

    Ummm…what could be more frothy than pitching a multi-trillion-dollar expansion in an industry you’ve just called a bubble? Cue an eye roll reaching the top of my head. Sure, Altman may have been referring to smaller AI startups with sky-high valuations and little to no revenue, but still, the irony is rich. It’s particularly notable given the weak GPT-5 rollout earlier this month, which was supposed to mark a leap forward but instead left many disappointed with its routing system and lack of breakthrough progress.

    In addition, even as Altman speaks of bubbles, OpenAI itself is raising record sums. In early August, OpenAI secured a whopping $8.3 billion in new funding at a $300 billion valuation—part of its plan to raise $40 billion this year. That figure was five times oversubscribed. On top of that, employees are now poised to sell about $6 billion in shares to investors like SoftBank, Dragoneer, and Thrive, pushing the company’s valuation potentially up to $500 billion.

    OpenAI is hardly an outlier in its infrastructure binge. Tech giants are pouring unprecedented sums into AI buildouts in 2025: Microsoft alone plans to spend $80 billion on AI data centers this fiscal year, while Meta is projecting up to $72 billion in AI and infrastructure investments. And on the fundraising front, OpenAI has company too — rivals like Anthropic are chasing multibillion-dollar rounds of their own. 

    Wall Street’s biggest bulls, like Wedbush’s Dan Ives, seem unconcerned. Ives said Monday on CNBC’s “Closing Bell” that demand for AI infrastructure has grown 30% to 40% in recent months, calling the capex surge a validation moment for the sector. While he acknowledged “some froth” in parts of the market, he said the AI revolution with autonomous systems is only starting to play out and we are in the “second inning of a nine-inning game.”

    And while a bubble implies an eventual bursting, and all the damage that results, the underlying phenomenon causing a bubble often has real value. The advent of the web in the ’90s was revolutionary; the bubble was a reflection of the massive opportunities opening up.

    Still, I’d be curious if anyone pressed Altman on the AI paradox—warning of a bubble while simultaneously bragging about OpenAI’s massive fundraising and spending. Perhaps over a glass of bubbly and a sugary sweet dessert? I’d also love to know if he fielded tougher questions on the other big issues looming over the company: its shift to a public benefit corporation (and what that means for the nonprofit), the current state of its Microsoft partnership, and whether its mission of “AGI to benefit all of humanity” still holds now that Altman himself has said AGI “is not a super-useful term.”

    In any case, I’m game for a follow-up chat with Altman & Co (call me!). I’ll bring the bubbly, pop the questions, and do my best to keep the eye rolls at bay.

    Also: In just a few weeks, I will be headed to Park City, Utah, to participate in our annual Brainstorm Tech conference at the Montage Deer Valley! Space is limited, so if you’re interested in joining me, register here. I highly recommend: There’s a fantastic lineup of speakers, including Ashley Kramer, chief revenue officer of OpenAI; John Furner, president and CEO of Walmart U.S.; Tony Xu, founder and CEO of DoorDash; and many, many more!

    With that, here’s more AI news.

    Sharon Goldman
    sharon.goldman@fortune.com
    @sharongoldman

    FORTUNE ON AI

    Wall Street isn’t worried about an AI bubble. Sam Altman is – by Beatrice Nolan

    MIT report: 95% of generative AI pilots at companies are failing – by Sheryl Estrada

    Silicon Valley talent keeps getting recycled, so this CEO uses a ‘moneyball’ approach for uncovering hidden AI geniuses in the new era – by Sydney Lake

    Waymo experimenting with generative AI, but exec says LiDAR and radar sensors important to self-driving safety ‘under all conditions’ – by Jessica Matthews

    AI IN THE NEWS

    More shakeups for Meta AI. The New York Times reported today that Meta is expected to announce that it will split its AI division — which is known as Meta Superintelligence Labs — into four groups. One will focus on AI research; one on “superintelligence”; another on products; and one on infrastructure such as data centers. According to the article’s anonymous sources, the reorganization “is likely to be the final one for some time,” with moves “aimed at better organizing Meta so it can get to its goal of superintelligence and develop AI products more quickly to compete with others.” The news comes less than two months after CEO Mark Zuckerberg overhauled Meta’s entire AI organization, including bringing on Scale AI CEO Alexandr Wang as chief AI officer.

    Madison Avenue is starting to love AI. According to the New York Times, artificial intelligence has quietly become a fixture of advertising. What felt novel when Coca-Cola released an AI-generated holiday ad last year is now mainstream: nearly 90% of big-budget marketers are already using—or planning to use—generative AI in video ads. From hyper-realistic backdrops to synthetic voice-overs, the technology is slashing costs and production times, opening TV spots to smaller businesses for the first time. Companies like Shuttlerock and ITV are helping brands replace weeks of work with hours, while tech giants like Meta and TikTok push their own AI ad tools. The shift raises ethical questions about displacing creatives and fooling viewers, but industry leaders say the genie is out of the bottle: AI isn’t just streamlining ad production—it’s reshaping the entire commercial playbook.

    Silicon Valley’s AI deals are creating zombie startups: ‘You hollowed out the organization.’ According to CNBC, Silicon Valley’s AI startup scene is being hollowed out as Big Tech sidesteps antitrust rules with a new playbook: licensing deals and talent raids that gut promising young companies. Windsurf, once in talks to be acquired by OpenAI, collapsed into turmoil after its founders bolted to Google in a $2.4 billion licensing pact; interim CEO Jeff Wang described tearful all-hands meetings as employees realized they’d been left with “nothing.” Similar moves have seen Meta sink $14.3 billion into Scale AI, Microsoft scoop up Inflection’s founders, and Amazon strip talent from Adept and Covariant—leaving behind so-called “zombie companies” with little future. While founders and top researchers cash out, investors and rank-and-file staff are often left stranded, sparking growing concern that these quasi-acquisitions not only skirt regulators but also threaten to choke off AI innovation at its source.

    Nvidia working on new AI chip for China that outperforms the H20, sources say. According to Reuters, Nvidia is developing a new China-specific AI chip, codenamed B30A, based on its cutting-edge Blackwell architecture. The chip, which could be delivered to Chinese clients for testing as soon as next month, would be more powerful than the current H20 but still fall below U.S. export thresholds—using a single-die design with about half the raw computing power of Nvidia’s flagship B300. The move comes after President Trump signaled possible approval for scaled-down chip sales to China, though regulatory approval is uncertain amid bipartisan concerns in Washington over giving Beijing access to advanced AI hardware. Nvidia argues that retaining Chinese buyers is crucial to prevent defections to domestic rivals like Huawei, even as Chinese regulators cast suspicion on the company’s products.

    EYE ON AI RESEARCH

    Study finds AI-led interviews improved outcomes. A new study looked at what happens when job interviews are run by AI voice agents instead of human recruiters. In a large experiment with 70,000 applicants, people were randomly assigned to be interviewed by a person, by an AI, or given the choice. Surprisingly, AI-led interviews actually improved outcomes: applicants interviewed by AI were 12% more likely to get job offers, 18% more likely to start jobs, and 17% more likely to still be employed after 30 days. Most applicants didn’t mind the change—78% even chose the AI when given the option, especially those with lower test scores. The AI also pulled out more useful information from candidates, leading recruiters to rate those interviews higher. Overall, the study shows that AI interviewers can perform just as well as, or even better than, human recruiters—without hurting applicant satisfaction.

    AI CALENDAR

    Sept. 8-10: Fortune Brainstorm Tech, Park City, Utah. Apply to attend here.

    Oct. 6-10: World AI Week, Amsterdam

    Oct. 21-22: TedAI San Francisco. Apply to attend here.

    Dec. 2-7: NeurIPS, San Diego

    Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.

    BRAIN FOOD

    Do AI chatbots need to be protected from harm? 

    AI lab Anthropic has introduced a new safety measure in its latest Claude models, which empowers the AI to terminate conversations in extreme cases of harmful or abusive interaction. The feature activates only after repeated redirections fail—typically for content requests involving sexual exploitation of minors or facilitation of large-scale violence. The company is notably framing this as a safeguard not principally for users, but for the model’s own “AI welfare,” reflecting an exploratory stance on the machine’s potential moral status.

    Unsurprisingly, the idea of granting AI moral status is contentious. Jonathan Birch, a philosophy professor at the London School of Economics, told The Guardian he welcomed Anthropic’s move for sparking a public debate about AI sentience—a topic he said many in the industry would rather suppress. At the same time, he warned that the decision risks misleading users into believing the chatbot is more real than it is.

    Others argue that focusing on AI welfare distracts from urgent human concerns. For example, while Claude is designed to end only the most extreme abusive conversations, it will not intervene in cases of imminent self-harm, even though a New York Times opinion piece yesterday, written by a mother who discovered her daughter’s ChatGPT conversations only after her daughter’s suicide, urged such safeguards.


  • Sound Ethics and UC Irvine Partner to Shape the Future of Ethical AI in the Music Industry

    Sound Ethics, a leader in ethical AI advocacy and education, is proud to announce its partnership with UC Irvine’s School of Information and Computer Sciences (UCI ICS). This strategic collaboration will focus on ethical AI practices in the music sector and on creating responsible AI solutions.

    Labs to Legends: Shaping the Next Generation of Ethical AI Innovators

    This collaboration is part of Sound Ethics’ Labs to Legends program, an initiative dedicated to empowering the next generation of data scientists and creatives to lead with responsibility in the AI-driven music industry. The program bridges academic research with real-world industry needs by providing mentorship, ethically curated datasets, and practices that emphasize transparency, attribution, and compliance. Through Labs to Legends, Sound Ethics equips emerging talent with the tools and knowledge to champion ethical AI practices in their careers.

    Driving Ethical AI Innovation

    The UCI-Sound Ethics partnership aims to drive innovation and ethical considerations in AI’s application to music.

    Key objectives include:

    • Advancing AI Detection Research: The project will focus on groundbreaking research into AI-generated music detection, exploring methods to distinguish between human and AI-created audio to ensure ethical AI usage.

    • Creating Ethical Industry Frameworks: Insights from this project will help Sound Ethics develop frameworks that promote attribution, transparency, and regulatory compliance while accelerating AI innovation in music.

    • Empowering Future AI Leaders: As part of Sound Ethics’ Labs to Legends program, this partnership offers UCI team members a critical opportunity to apply AI responsibly in creative industries like music, helping to shape the next generation of ethical AI innovators.

    James O’Brien, CEO of Sound Ethics, stated:

    “We believe ethical AI starts with education. We cannot rely on policymakers alone to fix these problems. This partnership allows us to mentor the next generation of AI professionals and build AI frameworks that support both artists and innovation.”

    Prof. Hadar Ziv, Faculty Director of ICS Capstones at UC Irvine, said:

    “We are excited to collaborate with Sound Ethics, providing our students with the opportunity to contribute to the responsible AI landscape in the music industry. Collaborations like this showcase the strength of UCI’s ICS Capstone Projects program, which has become a recognized and highly sought-after initiative. Our teams tackle real-world challenges using cutting-edge tools and technologies, addressing critical issues such as ethics, privacy, and social responsibility.

    “At our 2024 ICS Project Expo, more than 500 attendees explored projects from over 75 student teams, and we anticipate the 2025 ICS Project Expo will be even bigger and better, featuring innovative technical solutions developed through impactful partnerships like this one with Sound Ethics.”

    Prof. Sergio Gago-Masague, Director of CS Capstone Projects at UC Irvine, said:

    “I’m thrilled to see our students tackling critical challenges at the intersection of technology and ethics. This partnership with Sound Ethics offers a unique opportunity for our talented students to apply their skills in addressing real-world issues in the music industry, ensuring that innovation is guided by ethical responsibility. By fostering collaborations like this, we are preparing the next generation of computer scientists to lead with integrity and make meaningful contributions to society.”

    About

    Sound Ethics pioneers ethical AI in music, advocating for transparency, protecting copyrights, and empowering artists. Through partnerships with universities and policymakers, it ensures AI fosters creativity while preventing exploitation. www.soundethics.org

    Ranked among the nation’s top 10 public universities, UC Irvine’s Donald Bren School of Information and Computer Sciences leads in AI, data science, and machine learning, fostering ethical AI through research, innovation, and interdisciplinary collaboration. www.ics.uci.edu

    Source: Sound Ethics, Inc.


  • This Is a Glimpse of the Future of AI Robots

    Despite stunning AI progress in recent years, robots remain stubbornly dumb and limited. The ones found in factories and warehouses typically go through precisely choreographed routines without much ability to perceive their surroundings or adapt on the fly. The few industrial robots that can see and grasp objects can only do a limited number of things with minimal dexterity due to a lack of general physical intelligence.

    More generally capable robots could take on a far wider range of industrial tasks, perhaps after minimal demonstrations. Robots will also need more general abilities in order to cope with the enormous variability and messiness of human homes.

    General excitement about AI progress has already translated into optimism about major new leaps in robotics. Elon Musk’s car company, Tesla, is developing a humanoid robot called Optimus, and Musk recently suggested that it would be widely available for $20,000 to $25,000 and capable of doing most tasks by 2040.

    Courtesy of Physical Intelligence

    Previous efforts to teach robots to do challenging tasks have focused on training a single machine on a single task because learning seemed untransferable. Some recent academic work has shown that with sufficient scale and fine-tuning, learning can be transferred between different tasks and robots. A 2023 Google project called Open X-Embodiment involved sharing robot learning between 22 different robots at 21 different research labs.

    A key challenge with the strategy Physical Intelligence is pursuing is that robot data is not available for training at anything like the scale of the text that feeds large language models. So the company has to generate its own data and come up with techniques to improve learning from a more limited dataset. To develop π0, the company combined so-called vision language models, which are trained on images as well as text, with diffusion modeling, a technique borrowed from AI image generation, to enable a more general kind of learning.
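
    The combination the article describes can be sketched schematically: a vision-language embedding conditions a diffusion-style head that refines a chunk of future joint commands starting from pure noise. Everything below, the dimensions, step count, and network shape, is invented for illustration and is not π0’s actual architecture.

    ```python
    # Schematic diffusion-style action head conditioned on a VLM embedding.
    import torch
    import torch.nn as nn

    class ActionDenoiser(nn.Module):
        def __init__(self, cond_dim=512, act_dim=14, horizon=16):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(cond_dim + act_dim * horizon + 1, 1024), nn.ReLU(),
                nn.Linear(1024, act_dim * horizon))
            self.shape = (horizon, act_dim)

        def forward(self, noisy_actions, cond, t):
            # Condition on the embedding and the current refinement step t.
            x = torch.cat([noisy_actions.flatten(1), cond, t], dim=-1)
            return self.net(x).view(-1, *self.shape)  # predicted denoised actions

    @torch.no_grad()
    def sample_actions(denoiser, cond, steps=10):
        x = torch.randn(cond.shape[0], *denoiser.shape)   # start from pure noise
        for i in reversed(range(steps)):
            t = torch.full((cond.shape[0], 1), i / steps)
            x = denoiser(x, cond, t)                      # crude iterative refinement
        return x

    cond = torch.randn(1, 512)   # stand-in for a vision-language embedding
    actions = sample_actions(ActionDenoiser(), cond)      # (1, 16, 14) action chunk
    ```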

    For robots to take on any chore a person asks of them, such learning will need to be scaled up significantly. “There’s still a long way to go, but we have something that you can think of as scaffolding that illustrates things to come,” Levine says.

    Will Knight

  • ‘BBL Drizzy’ Was the Beginning of the Future of AI Music

    Not all AI tools are the generate-from-scratch types like Google’s MusicFX, Suno, and Udio that independent creators like Hatcher use—there are also ones for extracting stems, for mixing and mastering, and for brainstorming lyrics, all of which are finding user bases amongst hobbyists as well as professional producers. Sam Hollander, a pop hitmaker who has worked with Panic! at the Disco and Flavor Flav, compares AI’s arrival to the explosion of drum machines in the ’80s, when session drummers had to adapt and learn programming if they wanted to keep working.

    Giving a typical example of where AI fits into his and his peers’ workflows, Hollander recalls how a UK grime producer he worked with used Suno and Udio to generate funk and soul samples; once the tool produced one he liked, he’d use another AI tool to extract the stems so he could work them, manually, into a track.

    “There’s going to be two paths,” Hollander predicts. “An entirely organic industry that bucks against it” versus “people who adapt [AI] into what they do.” Last week, thousands of musicians and other creatives aligned themselves with the former group, signing a letter claiming that AI training was an “unjust threat to the livelihoods of the people behind those works.”

    For his part, Hollander dabbles in AI tools for brainstorming as well as for sample-hunting and generating, but, like Hatcher, always uses his original lyrics. “I don’t think AI does humor exceptionally well yet,” Hatcher says—human input is still necessary if AI-made music is going to avoid being totally boring and bad.

    “[AI music] either has a shock factor, or [is] music as a background thing,” Hu points out. Shock-factor comedy is part of the appeal for successful AI projects, like the viral SpongeBob rap by producer Glorb, or ObscurestVinyl, a collection of “lost” album tracks like the Ronettes-style “My Arms Are Just Fuckin’ Stuck Like This.” Original concepts and hand-crafted lyrics mean that the AI output avoids feeling generic—and make it good and interesting enough that it might be picked up, in Hatcher’s case, by a major producer as a sample on merit alone.

    The other side of that coin is the realm of AI-generated ambient/chill music, which Hu identifies as a growing domain, citing YouTube channels like Home Alone and what is ? as examples. With millions of views, and their use of AI on the down-low, these channels also show that what began as experimentation in the early days of these tools—so, literally, last year—is now going mainstream in an almost hidden way, as AI output becomes indistinguishable from human-made samples and compositions.

    Allegra Rosenberg

  • Google, Microsoft, and Perplexity Are Promoting Scientific Racism in Search Results

    Google added that part of the problem it faces in generating AI Overviews is that, for some very specific queries, there’s an absence of high quality information on the web—and there’s little doubt that Lynn’s work is not of high quality.

    “The science underlying Lynn’s database of ‘national IQs’ is of such poor quality that it is difficult to believe the database is anything but fraudulent,” Sear said. “Lynn has never described his methodology for selecting samples into the database; many nations have IQs estimated from absurdly small and unrepresentative samples.”

    Sear points to Lynn’s estimation of the IQ of Angola being based on information from just 19 people and that of Eritrea being based on samples of children living in orphanages.

    “The problem with it is that the data Lynn used to generate this dataset is just bullshit, and it’s bullshit in multiple dimensions,” Rutherford said, pointing out that the Somali figure in Lynn’s dataset is based on one sample of refugees aged between 8 and 18 who were tested in a Kenyan refugee camp. He adds that the Botswana score is based on a single sample of 104 Tswana-speaking high school students aged between 7 and 20 who were tested in English.

    Critics of the use of national IQ tests to promote the idea of racial superiority point out not only that the quality of the samples being collected is weak, but also that the tests themselves are typically designed for Western audiences, and so are biased before they are even administered.

    “There is evidence that Lynn systematically biased the database by preferentially including samples with low IQs, while excluding those with higher IQs, for African nations,” Sear added, a conclusion backed up by a preprint study from 2020.

    Lynn published various versions of his national IQ dataset over the course of decades, the most recent of which, called “The Intelligence of Nations,” was published in 2019. Over the years, Lynn’s flawed work has been used by far-right and racist groups as evidence to back up claims of white superiority. The data has also been turned into a color-coded map of the world, showing sub-Saharan African countries with purportedly low IQ colored red compared to the Western nations, which are colored blue.

    “This is a data visualization that you see all over [X, formerly known as Twitter], all over social media—and if you spend a lot of time in racist hangouts on the web, you just see this as an argument by racists who say, ‘Look at the data. Look at the map,’” Rutherford says.

    But the blame, Rutherford believes, does not lie with the AI systems alone, but also with a scientific community that has been uncritically citing Lynn’s work for years.

    “It’s actually not surprising [that AI systems are quoting it] because Lynn’s work in IQ has been accepted pretty unquestioningly from a huge area of academia, and if you look at the number of times his national IQ databases have been cited in academic works, it’s in the hundreds,” Rutherford said. “So the fault isn’t with AI. The fault is with academia.”

    David Gilbert

  • Anthropic Wants Its AI Agent to Control Your Computer

    Demos of AI agents can seem stunning, but getting the technology to perform reliably and without annoying (or costly) errors in real life can be a challenge. Current models can answer questions and converse with almost humanlike skill, and are the backbone of chatbots such as OpenAI’s ChatGPT and Google’s Gemini. They can also perform tasks on computers when given a simple command by accessing the computer screen as well as input devices like a keyboard and trackpad, or through low-level software interfaces.
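    The control loop behind that kind of computer use is simple to sketch. Below is a minimal, illustrative version in Python, using the third-party pyautogui library for screenshots and input events; the model call is a hypothetical stand-in, not Anthropic’s actual API.

    ```python
    # Illustrative screen-and-input loop for a computer-use agent.
    # pyautogui (third-party) supplies screenshots and input events;
    # decide_next_action is a hypothetical stand-in for the model call.
    import pyautogui

    def decide_next_action(screenshot):
        """Hypothetical model call: map the current screen to one action."""
        # A real agent would send the image to a model and parse its reply.
        return {"type": "click", "x": 640, "y": 400}

    def agent_step():
        screenshot = pyautogui.screenshot()      # capture the screen
        action = decide_next_action(screenshot)  # let the model pick an action
        if action["type"] == "click":
            pyautogui.click(action["x"], action["y"])           # drive the mouse
        elif action["type"] == "type":
            pyautogui.typewrite(action["text"], interval=0.05)  # drive the keyboard
    ```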

    Anthropic says that Claude outperforms other AI agents on several key benchmarks including SWE-bench, which measures an agent’s software development skills, and OSWorld, which gauges an agent’s capacity to use a computer operating system. The claims have yet to be independently verified. Anthropic says Claude performs tasks in OSWorld correctly 14.9 percent of the time. This is well below humans, who generally score around 75 percent, but considerably higher than the current best agents—including OpenAI’s GPT-4—which succeed roughly 7.7 percent of the time.

    Anthropic claims that several companies are already testing the agentic version of Claude. This includes Canva, which is using it to automate design and editing tasks, and Replit, which uses the model for coding chores. Other early users include The Browser Company, Asana, and Notion.

    Ofir Press, a postdoctoral researcher at Princeton University who helped develop SWE-bench, says that agentic AI tends to lack the ability to plan far ahead and often struggles to recover from errors. “In order to show them to be useful we must obtain strong performance on tough and realistic benchmarks,” he says, such as reliably planning a wide range of trips for a user and booking all the necessary tickets.

    Kaplan notes that Claude can already troubleshoot some errors surprisingly well. When faced with a terminal error when trying to start a web server, for instance, the model knew how to revise its command to fix it. It also worked out that it had to enable popups when it ran into a dead end browsing the web.

    Many tech companies are now racing to develop AI agents as they chase market share and prominence. In fact, it might not be long before many users have agents at their fingertips. Microsoft, which has poured upwards of $13 billion into OpenAI, says it is testing agents that can use Windows computers. Amazon, which has invested heavily in Anthropic, is exploring how agents could recommend and eventually buy goods for its customers.

    Sonya Huang, a partner at the venture firm Sequoia who focuses on AI companies, says for all the excitement around AI agents, most companies are really just rebranding AI-powered tools. Speaking to WIRED ahead of the Anthropic news, she says that the technology works best currently when applied in narrow domains such as coding-related work. “You need to choose problem spaces where if the model fails, that’s okay,” she says. “Those are the problem spaces where truly agent native companies will arise.”

    A key challenge with agentic AI is that errors can be far more problematic than a garbled chatbot reply. Anthropic has imposed certain constraints on what Claude can do—for example, limiting its ability to use a person’s credit card to buy stuff.
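    A constraint like that can be as blunt as a deny-list checked before any action runs. Here is a minimal sketch; the action names are hypothetical, not Anthropic’s actual policy.

    ```python
    # Hypothetical action guardrail: refuse high-risk operations outright.
    # The action names are illustrative, not Anthropic's actual policy.
    BLOCKED_ACTIONS = {"enter_credit_card", "submit_payment", "delete_files"}

    def is_allowed(action_name: str) -> bool:
        """Return False for any action on the deny-list."""
        return action_name not in BLOCKED_ACTIONS

    assert not is_allowed("submit_payment")  # purchases are refused
    assert is_allowed("open_browser")        # ordinary actions pass
    ```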

    If errors can be avoided well enough, says Press of Princeton University, users might learn to see AI—and computers—in a completely new way. “I’m super excited about this new era,” he says.

    Will Knight

  • A Lawsuit Against Perplexity Calls Out Fake News Hallucinations

    Perplexity did not respond to requests for comment.

    In a statement emailed to WIRED, News Corp chief executive Robert Thomson compared Perplexity unfavorably to OpenAI. “We applaud principled companies like OpenAI, which understands that integrity and creativity are essential if we are to realize the potential of Artificial Intelligence,” the statement says. “Perplexity is not the only AI company abusing intellectual property and it is not the only AI company that we will pursue with vigor and rigor. We have made clear that we would rather woo than sue, but, for the sake of our journalists, our writers and our company, we must challenge the content kleptocracy.”

    OpenAI is facing its own accusations of trademark dilution, though. In New York Times v. OpenAI, the Times alleges that ChatGPT and Bing Chat will attribute made-up quotes to the Times, and accuses OpenAI and Microsoft of damaging its reputation through trademark dilution. In one example cited in the lawsuit, the Times alleges that Bing Chat claimed that the Times called red wine (in moderation) a “heart-healthy” food, when in fact it did not; the Times argues that its actual reporting has debunked claims about the healthfulness of moderate drinking.

    “Copying news articles to operate substitutive, commercial generative AI products is unlawful, as we made clear in our letters to Perplexity and our litigation against Microsoft and OpenAI,” says NYT director of external communications Charlie Stadtlander. “We applaud this lawsuit from Dow Jones and the New York Post, which is an important step toward ensuring that publisher content is protected from this kind of misappropriation.”

    If publishers prevail in arguing that hallucinations can violate trademark law, AI companies could face “immense difficulties,” according to Matthew Sag, a professor of law and artificial intelligence at Emory University.

    “It is absolutely impossible to guarantee that a language model will not hallucinate,” Sag says. In his view, the way language models operate by predicting words that sound correct in response to prompts is always a type of hallucination—sometimes it’s just more plausible-sounding than others.

    “We only call it a hallucination if it doesn’t match up with our reality, but the process is exactly the same whether we like the output or not.”
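    Sag’s point is mechanical. In the toy sketch below, with made-up probabilities, the sampling step is identical whether or not the continuation happens to be true.

    ```python
    # Toy next-token step: sample whatever continuation looks probable,
    # with no internal check against reality. Probabilities are made up.
    import random

    next_token_probs = {
        "red wine is": [("heart-healthy", 0.6), ("risky", 0.4)],
    }

    def sample_next(prompt: str) -> str:
        tokens, weights = zip(*next_token_probs[prompt])
        return random.choices(tokens, weights=weights)[0]

    print(sample_next("red wine is"))  # the same process, whether we like the output or not
    ```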

    Kate Knibbs

  • Filmmakers Are Worried About AI. Big Tech Wants Them to See ‘What’s Possible’

    “You have to learn the fundamentals,” he says. “Technology will change, but storytelling won’t.”

    To make his short, “Mnemonade,” really sing, Meta Puppet says he focused on giving the story some emotional heft. “I don’t think AI films will go fully mainstream until we get emotional dialog,” he says. He played all the roles in his short, about the poignance of sense memory and an elderly woman’s loss of memory, using AI from Silicon Valley “unicorn” ElevenLabs to shift his vocal performance into each character’s range and voice.

    Maddie Hong, who went head-to-head with Meta Puppet in the Culver Cup finals, says that she understands Hollywood’s trepidation when it comes to AI. “There’s more potential for legal backlash and financial loss,” she says, referring to the danger of unintended (or even flagrant) copyright infringement during generation. The studios also have a “higher standard for image continuity,” Hong says, “given that they’re thinking about distribution on all types of platforms and screens.”

    That being said, Hong agrees with people like Luma cofounder Amit Jain, who says that gen AI filmmaking could give the traditional studio system some flexibility in terms of budget and diversity of product.

    “If you look at Hollywood today,” Jain says, “the majority of the high-budget productions are just recycling old franchises because it’s too tough to bet on a new idea or a new franchise.” It’s just safer, he says, to reproduce something than it is to imagine something new.

    In Jain’s (admittedly biased) view, making more projects, even with lower budgets, means more people will work and more money will come rolling in. “I would actually posit,” he adds, “that people will actually have far better careers that are more fulfilling and long-lasting when they’re able to produce things that people actually do want to watch.” If there’s going to be any job loss in Hollywood because of AI, he suggests, the people who are going to go will be the ones most resistant to AI.

    Recent research contradicts that notion. A survey of 300 entertainment industry leaders conducted earlier this year found that 75 percent believed gen AI had led to the elimination, reduction, or consolidation of jobs within their departments. It had also led to the creation of some jobs, but it was “not clear” if new jobs would offset jobs lost.

    Other studies have examined how the VFX world in particular might be affected by more AI in production, with artists typically reporting interest or excitement around tools that could streamline their sometimes tedious workflows, but concern about the ethical and financial implications of the technology. While it would be cool, as Jain suggests, to team up with 11 of your friends to “make a feature film about a Boston Terrier that has superpowers” for relatively little money, it remains to be seen what impact sweeping AI availability will have on the industry as a whole.

    For Meta Puppet, it comes down to skill, and who has it. “I liken gen AI to the piano,” he says. “Everybody knows about the piano. Not everybody is Mozart. Writing real masterpieces with AI, you have to wear a lot of hats, which is a good and a bad thing because if you have experience, that’s great. If you don’t, whatever you make is probably going to be bad.”

    Marah Eakin

  • OpenAI’s ChatGPT Breaks Out of Its Box—and Onto a Canvas

    Although both writing and coding modes give the choice of requesting in-line edits, the bifurcated user interface for canvas is designed with one additional set of shortcuts for those focused on AI-assisted writing and another for coders. In the demo, Levine showed off how the writer’s shortcuts could be used to condense the words in a canvas or attempt a “final polish” on the draft. He also used one of the more lighthearted shortcuts to add a bunch of random emoji. On the coder’s side, ChatGPT can add logs and comments, and attempt to troubleshoot problems in a canvas.

    ChatGPT saves different versions of the canvas as you’re iterating, so you can return to old versions if you end up preferring that output. Writers who may be worried about what they upload being used by OpenAI to train its model should go into their user settings and make sure that “model training” is toggled off.

    By allowing ChatGPT to make edits as well as suggestions, OpenAI is blurring the line between authorship and word curation. As someone who works with professional editors daily, I’m skeptical the canvas beta will match their incisive notes and careful guidance. But for people who don’t have easy access to human writing partners, I can see how getting synthetic notes on a composition about structure and content would be beneficial.

    It’s worth noting that three people listed as “supporting leadership” on the canvas project are no longer with the company. Former post-training colead and cofounder John Schulman left in August and now works at Anthropic, a rival AI company. Additionally, former chief technology officer Mira Murati and research vice president Barret Zoph both stepped down from their positions a week before this launch. At a press event in the OpenAI office after the departures, current chief product officer Kevin Weil reaffirmed the company’s commitment to continue releasing software.

    “I think 2025 is gonna be the year that agentic systems finally hit the mainstream,” he says. The idea of an AI “agent” that can not only work through software tasks alongside you, but is also nimble enough to be sent off into the digital wilderness to do things on your behalf, is simultaneously generative AI’s recent past and projected future.

    Last year, WIRED covered ChatGPT plug-ins that people could use for tasks like booking flights with Expedia or making a reservation with OpenTable—arguably a step toward more “agentic” AI tools. However, plug-ins were later wound down, with more limited custom GPT chatbots launched in their place.

    Keeping that in mind, the beta release of canvas does appear to be another attempt at augmenting AI models with more decisionmaking abilities, which can lead to surprises. During one of WIRED’s demos, Levine highlighted a portion of the canvas and requested an edit, and ChatGPT subsequently made an in-line change near the bottom, outside his highlight. “The really interesting thing is oftentimes, if you highlight a section, it will make an edit in that part,” he says. “But ChatGPT has the option to decide where to edit.”

    The closest alternatives to OpenAI’s canvas tool available right now are probably Google’s Gemini integration, which lets you use generative AI inside of Docs, and Anthropic’s Artifacts tool. Chatbots definitely aren’t dead, but AI companies are now acknowledging the format’s constraints and looking for ways to diversify their software to uncover novel, sticky user interfaces. Google recently received praise in tech circles for its entertaining AI podcasts—even OpenAI CEO Sam Altman lauded the tool.

    With billions of investment dollars still flowing through Silicon Valley to AI companies, consumers can expect more of these structural experiments that build on existing tools, like AI podcast hosts and AI document editors, to be released at a regular cadence over the next year. The chatbot race is far from over, and future iterations of the technology are likely to stray far from that drab chatbox toward a more multifaceted approach.

    Reece Rogers

  • China’s Plan to Make AI Watermarks Happen

    Chinese regulators likely learned from the EU AI Act, says Jeffrey Ding, an assistant professor of political science at George Washington University. “Chinese policymakers and scholars have said that they’ve drawn on the EU’s Acts as inspiration for things in the past.”

    But at the same time, some of the measures taken by the Chinese regulators aren’t really replicable in other countries. For example, the Chinese government is asking social platforms to screen user-uploaded content for AI. “That seems something that is very new and might be unique to the China context,” Ding says. “This would never exist in the US context, because the US is famous for saying that the platform is not responsible for content.”

    But What About Freedom of Expression Online?

    The draft regulation on AI content labeling is seeking public feedback until October 14, and it may take another several months for it to be modified and passed. But there’s little reason for Chinese companies to delay preparing for when it goes into effect.

    Sima Huapeng, founder and CEO of the Chinese AIGC company Silicon Intelligence, which uses deepfake technologies to generate AI agents and influencers and to replicate living and dead people, says his product currently lets users choose whether to mark the generated output as AI. If the law passes, he may have to make that label mandatory.

    “If a feature is optional, then most likely companies won’t add it to their products. But if it becomes compulsory by law, then everyone has to implement it,” Sima says. It’s not technically difficult to add watermarks or metadata labels, but it will increase the operating costs for compliant companies.
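    For images, an implicit label can be as simple as a metadata field. Below is a minimal sketch using Pillow’s PNG text chunks; the field names are illustrative, not taken from China’s draft spec.

    ```python
    # Stamp provenance metadata into a generated PNG using Pillow.
    # The field names are illustrative, not China's actual labeling spec.
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def label_ai_generated(img: Image.Image, dst: str, generator: str) -> None:
        meta = PngInfo()
        meta.add_text("ai_generated", "true")  # machine-readable flag
        meta.add_text("generator", generator)  # which service produced it
        img.save(dst, pnginfo=meta)

    # Demo with a stand-in "generated" image.
    label_ai_generated(Image.new("RGB", (64, 64)), "labeled.png", "example-aigc-service")
    print(Image.open("labeled.png").text)  # {'ai_generated': 'true', 'generator': ...}
    ```

    Metadata like this is also trivial to strip, which helps explain why the draft leans on platforms to screen uploads as well.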

    Policies like this can steer AI away from being used for scamming or privacy invasion, he says, but they could also fuel a black market for AI services, in which companies dodge legal compliance to save on costs.

    There’s also a fine line between holding AI content producers accountable and policing individual speech through more sophisticated tracing.

    “The big underlying human rights challenge is to be sure that these approaches don’t further compromise privacy or free expression,” says Gregory. While the implicit labels and watermarks can be used to identify sources of misinformation and inappropriate content, the same tools can enable platforms and the government to exert stronger control over what users post on the internet. In fact, concerns about how AI tools can go rogue have been one of the main drivers of China’s proactive AI legislation efforts.

    At the same time, the Chinese AI industry is pushing back, lobbying the government for more space to experiment and grow, since it is already behind its Western peers. An earlier Chinese generative-AI law was watered down considerably between the first public draft and the final bill, removing requirements on identity verification and reducing penalties imposed on companies.

    “What we’ve seen is the Chinese government really trying to walk this fine tightrope between ‘making sure we maintain content control’ but also ‘letting these AI labs in a strategic space have the freedom to innovate,’” says Ding. “This is another attempt to do that.”

    Zeyi Yang

  • The AI Boom Is Raising Hopes of a Nuclear Comeback

    For five years, reactor one at Three Mile Island nuclear power station in Pennsylvania has lain dormant. Now, thanks to a deal with Microsoft, the reactor will start running again in 2028—this time to exclusively supply the tech firm with oodles of low-carbon electricity.

    It’s all part of an ongoing flirtation between Big Tech and nuclear power. In March, Amazon Web Services agreed to buy a data center powered by Susquehanna nuclear power station in Pennsylvania. At an event at Carnegie Mellon University on September 18, Alphabet CEO Sundar Pichai mentioned small modular nuclear reactors as one potential source of energy for data centers. The links don’t stop there either: OpenAI CEO Sam Altman chairs the boards of nuclear startups Oklo and Helion Energy.

    The AI boom has left technology companies scrambling for low-carbon sources of energy to power their data centers. The International Energy Agency estimates that electricity demand from AI, data centers, and crypto could more than double by 2026. Even the agency’s lowball estimate puts the added demand at the equivalent of all the electricity used in Sweden; in the high-usage case, it rises to the equivalent of Germany’s.

    This surge in energy demand is music to the ears of the nuclear power industry. Electricity demand in the US has been fairly flat for decades, but the sheer scale and intensity of the AI boom is changing that dynamic. One December 2023 report from a power industry consultancy declared the era of flat power demand over, thanks to growing demand from data centers and industrial facilities. The report forecasts that peak electricity demand in the US will grow by 38 gigawatts by 2028, roughly equivalent to 46 times the output of reactor one at Three Mile Island.
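    That comparison is straightforward arithmetic:

    ```python
    # The report's 38 GW of forecast peak-demand growth, measured in
    # units of Three Mile Island's reactor one (835 MW).
    added_demand_mw = 38_000
    reactor_one_mw = 835
    print(added_demand_mw / reactor_one_mw)  # ≈ 45.5, i.e. roughly 46 reactors
    ```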

    “[AI] is really taking off, and it’s garnering a lot of attention in the energy industry,” says John Kotek, senior vice president for policy development and public affairs at nuclear industry trade association the Nuclear Energy Institute. Kotek says there’s also a national security angle. “People legitimately see AI as a field of competition between the US and our global competitors.” The US falling behind in the AI race because it doesn’t have enough power “is something that’s really causing people to focus attention,” he says.

    Nuclear power is attractive to tech companies because it provides low-carbon electricity round-the-clock, unlike solar and wind, which run intermittently unless coupled with a form of energy storage. Reactivating reactor one will provide Microsoft with 835 megawatts of low-carbon power over the deal’s 20-year term. Since Microsoft has pledged to be carbon negative by 2030, spiraling electricity demand from AI poses a major threat to the firm’s climate plans unless it can find sources of low-carbon power. In 2023, Microsoft’s emissions increased by 29 percent compared with 2020, primarily driven by the construction of new data centers.
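    As a back-of-the-envelope check on the deal’s scale, assuming continuous output (real reactors run somewhat below that):

    ```python
    # 835 MW of continuous output over 20 years, converted to energy.
    # Assumes no downtime; real-world capacity factors run a bit lower.
    mw = 835
    hours = 24 * 365 * 20
    print(mw * hours / 1e6)  # ≈ 146.3 TWh over the deal's term
    ```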

    Three Mile Island nuclear power station has two reactors. The second reactor was infamously the site of a partial meltdown in 1979 and has remained out of action ever since. But reactor one kept on chugging away without incident until 2019, when it was taken offline for financial reasons—mainly due to competition from gas- and wind-powered electricity. Kotek says there are relatively few idle reactors that could be brought back online fairly quickly, but that a lot of power plant owners are interested in extending the operating licenses of their existing plants to try to ride the AI power wave.

    Matt Reynolds

  • Xavier Niel, a Driving Force of French AI, Is Now Shaping TikTok

    If Europe wants to compete with Asia and the US on AI, he believes the continent has to act now. “If you want to create a search engine now from scratch, you cannot win because you were not there 25 years ago,” he says, noting this window to compete on AI will also close.

    In one way or another, Niel is connected to almost all of France’s rising startup stars. In Mistral AI, valued at €5.8 billion ($6.4bn), he’s an investor. The same goes for H, another new AI company. Scaleway, the cloud provider used by Mistral, is an Iliad subsidiary, while the team behind Hugging Face, a platform for AI developers, spent time at Station F, a vast startup campus also launched by Niel. A self-described “geek,” Niel has long been embedded in the French startup scene. Station F was launched seven years ago and before that, he was central to an experimental computer science school called École 42.

    His belief that Europe should pursue homegrown AI translated into a €200 million ($220m) investment in French AI last September. Half of that money went toward launching Kyutai, a nonprofit research lab based in Paris, which released an AI voice assistant called Moshi this summer. Like OpenAI’s voice assistant, Moshi speaks in a flirty, English-speaking female voice. But while OpenAI delayed its assistant’s launch over safety concerns, Moshi has been available to test online since July—with its models released this week.

    “The idea of Kyutai is to produce an AI algorithm that is completely open science and open source,” says Niel. He uses the operating system Linux as an example of an open-source tool with the kind of popularity Kyutai wants to replicate. “Depending on the license we will attach to this thing, everybody who will make a modification will have to publish it.”

    When it comes to Kyutai, however, there are some things that Niel is not so open about. When I ask where Moshi gets all its training data from, he laughs. The model was partly trained on an actress’s voice recorded in London, he explains. But he alludes to other sources of training data, too. “Maybe we are not completely respecting all the rules.”

    Niel is careful to direct credit for Moshi to the people actually building the models. But he appears invigorated by his handful of visits to the 12-person Kyutai team in their “nice place in Paris” with their big whiteboard scrawled with math he doesn’t understand. He’s also clearly excited by the tech.

    “You had fun with Moshi,” he prompts a member of his team. Embarrassed, the staffer giggles and plays me a recorded interaction on his phone.

    “Isn’t Xavier Niel terrible at speaking English?” the staffer can be heard asking the AI.

    “Oh you’re so funny,” Moshi replies. “No, he’s not terrible, he’s just not very good but he’s trying his best.” (When I later ask Moshi, “who is Xavier Niel?” she replies: “Savio Vega is a Puerto Rican professional wrestler.”)

    Alongside Kyutai and his startup investments, Niel has also been thinking about how to develop AI infrastructure in France. His vision for the cloud provider he founded, Scaleway, is for big European companies to be able to use a local cloud “instead of being customers of a US cloud.” He’s also been buying up the GPUs necessary to train AI models. Although he’d love there to be European-made GPUs, for now he is relying on NVIDIA.

    “I think we are the biggest private buyers of NVIDIA GPUs in Europe,” Niel says.

    At home, Niel is driven by a desire to make sure France—and Europe—are not left behind in the AI age. “[Or] in the end, we will be the nicest place in the world for museums,” he says.

    Other than challenging US dominance, it’s still unclear how his new role at ByteDance fits with his mission to boost French AI. But joining the Chinese tech giant, just as it prepares to argue against a US ban in court, certainly continues Niel’s history of disruption.

    Morgan Meaker