ReportWire

  • Inside the Multimillion-Dollar Plan to Make Mobile Voting Happen

    Joe Kiniry, a security expert specializing in elections, was attending an annual conference on voting technology in Washington, DC, when a woman approached him with an unusual offer. She said she represented a wealthy client interested in funding voting systems that would encourage bigger turnouts. Did he have any ideas? “I told her you should stay away from internet voting, because it’s really, really hard,” he says.

    Later he learned who had sent her. It was Bradley Tusk, a New York City political consultant and fixer for companies like Uber fending off regulation. He’d made a fortune doing that (early Uber stock helped a lot), and he was eager to spend a good chunk of it pursuing online voting technology. Tusk convinced Kiniry to work with him. At the very least, Kiniry thought, it would be a valuable research project.

    Today Tusk is showing off the fruits of that collaboration. His Mobile Voting Foundation is releasing VoteSecure, a cryptography-based protocol that seeks to help people securely cast their votes on iPhones and Androids. The protocol is open source and available on GitHub for anyone to test, improve upon, and build out. Two election technology vendors have already committed to using it—perhaps as early as 2026. Tusk claims that mobile voting will save our democracy. But getting it accepted by legislators and the public will be the really, really hard part.

    Primary Numbers

    Tusk has been obsessed with mobile voting for a while. Around 2017, he began taking serious action, funding small elections that used existing technology to allow deployed military or disabled people to vote. He estimates he’s dropped $20 million so far and plans to keep shoveling cash into the effort. When I ask why, he explains that working with the government has given him a panoramic view of its failures. Tusk believes there is a single pressure point that could fix a number of mismatches between what the public deserves and what they get: more people using the ballot box. “We get lousy, or corrupt, government because so few people vote, especially in off-year elections and primaries, where the turnout is dismal,” he says. “If primary turnout is 37 percent instead of 9 percent, the underlying political incentives for an elected official to change—it pushes them to the middle, and they’re not rewarded for screaming and pointing fingers.”

    To Tusk, mobile voting is a no-brainer: We already do banking, commerce, and private messages on our phones, so why not cast a ballot? “If I don’t do it, who is going to do it?” he asks. Furthermore, he says, “if it doesn’t happen, I don’t think we’re one country in 20 years, because if you are unable to solve any single problem that matters to people, eventually they decide not to keep going.”

    Tusk had Kiniry evaluate existing online voting platforms—including some that Tusk himself had paid for. “Joe is considered the absolute expert on electronic voting,” says Tusk. So when Kiniry deemed those systems insufficient, Tusk decided that the best way forward was to start from scratch. He hired Kiniry’s company, Free & Fair, to develop VoteSecure. It’s not a turnkey solution but a backend part of a system that will require a user interface and other pieces to be operable. The protocol includes a means for voters to check the accuracy of their ballots and verify that their vote has been received by the election board and transferred to a paper ballot.

    Tusk says his next step is to “run legislation” in a few cities to allow mobile voting. “Start small—city council, school board, maybe mayor,” he says. “Prove the thesis. The odds of Vladimir Putin hacking the Queensborough election seems pretty remote to me.” (Next spring some local elections in Alaska will offer the option of mobile-phone voting with software developed by Tusk’s foundation.) Kiniry agrees it’s way too soon to use mobile voting in national elections, but Tusk is betting that eventually the systems become familiar, to the point where people trust them much more than traditional paper ballots. “Once the genie’s out of the bottle, they can’t put it back, right?” he says. “That’s been true for every tech I’ve worked on.” But first the genie has to get out of the bottle. That’s no cinch.

    Crypto Foes

    The loudest objections against mobile or internet voting come from cryptographers and security experts, who believe that the safety risks are insurmountable. Take two people who were at the 2017 conference with Kiniry. Ron Rivest is the legendary “R” in the RSA protocol that protects the internet, a winner of the coveted Turing Award, and a former professor at MIT. His view: Mobile voting is far from ready for prime time. “What you can do with mobile phones is interesting, but we’re not there yet, and I haven’t seen anything to make me think otherwise,” he says. “Tusk is driven by trying to make this stuff happen in the real world, which is not the right way to do it. They need to go through the process of writing a peer-reviewed paper. Putting up code doesn’t cut it.”

    Computer scientist and voting expert David Jefferson is also unimpressed. Though he acknowledges that Kiniry is one of the country’s top voting system experts, he sees Tusk’s effort as doomed. “I’m willing to concede rock-solid cryptography, but it does not weaken the argument about how insecure online voting systems are in general. Open source and perfect cryptography do not address the most serious vulnerabilities.”

    Steven Levy

  • The Man Who Invented AGI

    Everyone is obsessed with artificial general intelligence—the stage when AI can match all feats of human cognition. The guy who named it saw it as a threat.

    Steven Levy

  • Inside the Messy, Accidental Kryptos Reveal

    Jim Sanborn couldn’t believe it. He was weeks away from auctioning off the answer to Kryptos, the sculpture he created for the CIA that had defied solution for 35 years. As always, wannabe solvers kept on paying him a $50 fee to offer their guesses to the remaining unsolved portion of the 1,800-character encrypted message, known as K4—wrong without exception. Then, on September 3, he opened an email from the latest applicant, Jarett Kobek, which started, “I believe the text of K4 is as follows …” He’d seen words like this thousands of times before. But this time, the text was correct.

    “I was in shock,” Sanborn tells me. “Real serious shock.” The timing was awful. Sanborn, who turns 80 this year, saw the auction as a way for someone to continue his work of vetting potential solutions while maintaining the mystery of Kryptos. He’d also been looking forward to getting compensated for his work. What came next was even more shattering. He quickly got on the phone with Kobek and his friend Richard Byrne, who gobsmacked him by reporting they did not find the solution by codebreaking. Instead, Kobek had learned from the auction notice that some Kryptos materials were held at the Smithsonian’s Archives of American Art in Washington, DC. Kobek, a California novelist (one of his books is called I Hate the Internet), got his friend, the playwright and journalist Byrne, to photograph some of the holdings. To Kobek’s astonishment, two of the images contained a 97-character passage with words that Sanborn had previously dropped as clues. He was staring at the full unencrypted text that CIA and NSA codebreakers, along with countless academics and hobbyists, had sought for decades.

    The secret of Kryptos was out of the artist’s hands, in the most humiliating way imaginable—Sanborn himself had mistakenly submitted it in readable form to the museum. For 35 years the Kryptos plaintext had been a summit that none had reached. Suddenly someone had attained it—not by climbing to the peak but by hitching a ride to the top. Sanborn’s grand vision for a piece of art that illuminated the idea of secrecy itself was imperiled—as was the auction. Now he had to figure out what to do about it.

    Enter: The Media

    The initial phone call had been friendly. Kobek and Byrne insisted that they did not want to mess up the auction. After he hung up, Sanborn called the auction house. That’s when things started going sideways. As Sanborn tells me, “They said, ‘Listen, see if the guys will sign NDAs, and see if they’ll take a portion of the proceeds.’ And I said, ‘Oh geez, man, I don’t know about that.’ But I offered it.”

    Kobek and Byrne were uncomfortable with that arrangement and refused to sign. (RR Auction executive vice president Bobby Livingston didn’t comment on the legal issue but says of an NDA, “It’s something that would be comforting to our clients.”) Sanborn told them his intent was to get the Smithsonian to freeze the archives—which it did. He assumed Kobek and Byrne would stay silent. “If you don’t release it, you’re heroes to me,” Sanborn told them.

    “I thought everything was OK,” he says, “And then all of a sudden [the journalist] John Schwartz calls me and says these guys want to publish it in The New York Times.” Kobek explains to me that they contacted Schwartz in part to relieve some legal pressure. “There was threat after threat being sent to us from the auction house’s lawyers, threatening to sue us for a multitude of things,” he says. (When I ask Livingston if his lawyers have been contacting Kobek, he says, “There’s lawyers talking to each other,” and adds that there may well be copyright concerns if Kobek and Byrne published the plaintext.) On October 16, Schwartz published his scoop, informing the world that the plaintext was out.

    Sanborn tells me that Kobek shared the plaintext with Schwartz over the phone. When asked about this, Kobek says, “I cannot speak about that…I am under significant legal peril.” Schwartz says, “Once my editors decided it would not be revealed in the story, I deleted the text from my interviews file. I don’t know it.” (So don’t bug him.)

    Steven Levy

  • Can AI Avoid the Enshittification Trap?

    I recently vacationed in Italy. As one does these days, I ran my itinerary past GPT-5 for sightseeing suggestions and restaurant recommendations. The bot reported that the top choice for dinner near our hotel in Rome was a short walk down Via Margutta. It turned out to be one of the best meals I can remember. When I got home, I asked the model how it chose that restaurant, which I hesitate to reveal here in case I want a table sometime in the future (Hell, who knows if I’ll even return: It is called Babette. Call ahead for reservations.) The answer was complex and impressive. Among the factors were rave reviews from locals, notices in food blogs and the Italian press, and the restaurant’s celebrated combination of Roman and contemporary cooking. Oh, and the short walk.

    Something was required from my end as well: trust. I had to buy into the idea that GPT-5 was an honest broker, picking my restaurant without bias; that the restaurant wasn’t shown to me as sponsored content and wasn’t getting a cut of my check. I could have done deep research on my own to double-check the recommendation (I did look up the website), but the point of using AI is to bypass that friction.

    The experience bolstered my confidence in AI results but also made me wonder: As companies like OpenAI get more powerful, and as they try to pay back their investors, will AI be prone to the erosion of value that seems endemic to the tech apps we use today?

    Word Play

    Writer and tech critic Cory Doctorow calls that erosion “enshittification.” His premise is that platforms like Google, Amazon, Facebook, and TikTok start out aiming to please users, but once the companies vanquish competitors, they intentionally become less useful to reap bigger profits. After WIRED republished Doctorow’s pioneering 2022 essay about the phenomenon, the term entered the vernacular, mainly because people recognized that it was totally on the mark. Enshittification was chosen as the American Dialect Society’s 2023 Word of the Year. The concept has been cited so often that it transcends its profanity, appearing in venues that normally would hold their noses at such a word. Doctorow just published an eponymous book on the subject; the cover image is the emoji for … guess what.

    If chatbots and AI agents become enshittified, it could be worse than Google Search becoming less useful, Amazon results getting plagued with ads, and even Facebook showing less social content in favor of anger-generating clickbait.

    AI is on a trajectory to be a constant companion, giving one-shot answers to many of our requests. People already rely on it to help interpret current events and get advice on all sorts of buying choices—and even life choices. Because of the massive costs of creating a full-blown AI model, it’s fair to assume that only a few companies will dominate the field. All of them plan to spend hundreds of billions of dollars over the next few years to improve their models and get them into the hands of as many people as possible. Right now, I’d say AI is in what Doctorow calls the “good to the users” stage. But the pressure to make back the massive capital investments will be tremendous—especially for companies whose user base is locked in. Those conditions, as Doctorow writes, allow companies to abuse their users and business customers “to claw back all the value for themselves.”

    When one imagines the enshittification of AI, the first thing that comes to mind is advertising. The nightmare is that AI models will make recommendations based on which companies have paid for placement. That’s not happening now, but AI firms are actively exploring the ad space. In a recent interview, OpenAI CEO Sam Altman said, “I believe there probably is some cool ad product we can do that is a net win to the user and a sort of positive to our relationship with the user.” Meanwhile, OpenAI just announced a deal with Walmart so the retailer’s customers can shop inside the ChatGPT app. Can’t imagine a conflict there! The AI search platform Perplexity has a program where sponsored results appear in clearly labeled follow-ups. But, it promises, “these ads will not change our commitment to maintaining a trusted service that provides you with direct, unbiased answers to your questions.”

    Steven Levy

  • Sam Altman Says the GPT-5 Haters Got It All Wrong

    OpenAI’s August launch of its GPT-5 large language model was something of a disaster. There were glitches during the livestream, with the model generating charts with obviously inaccurate numbers. In a Reddit AMA with OpenAI employees, users complained that the new model wasn’t friendly and called for the company to restore the previous version. Most of all, critics griped that GPT-5 fell short of the stratospheric expectations that OpenAI had been juicing for years. Promised as a game changer, GPT-5 might have indeed played the game better. But it was still the same game.

    Skeptics seized on the moment to proclaim the end of the AI boom. Some even predicted the beginning of another AI Winter. “GPT-5 was the most hyped AI system of all time,” full-time bubble-popper Gary Marcus told me during his packed schedule of victory laps. “It was supposed to deliver two things, AGI and PhD-level cognition, and it didn’t deliver either of those.” What’s more, he says, the seemingly lackluster new model is proof that OpenAI’s ticket to AGI—massively scaling up data and chip sets to make its systems exponentially smarter—can no longer be punched. For once, Marcus’ views were echoed by a sizable portion of the AI community. In the days following launch, GPT-5 was looking like AI’s version of New Coke.

    Sam Altman isn’t having it. A month after the launch he strolls into a conference room at the company’s newish headquarters in San Francisco’s Mission Bay neighborhood, eager to explain to me and my colleague Kylie Robison that GPT-5 is everything that he’d been touting, and that all is well in his epic quest for AGI. “The vibes were kind of bad at launch,” he admits. “But now they’re great.” Yes, great. It’s true the criticism has died down. Indeed, the company’s recent release of a mind-bending tool to generate impressive AI video slop has diverted the narrative from the disappointing GPT-5 debut. The message from Altman, though, is that naysayers are on the wrong side of history. The journey to AGI, he insists, is still on track.

    Numbers Game

    Critics might see GPT-5 as the waning end of an AI summer, but Altman and team argue that it cements AI technology as an indispensable tutor, a search-engine-killing information source, and, especially, a sophisticated collaborator for scientists and coders. Altman claims that users are beginning to see it his way. “GPT-5 is the first time where people are, ‘Holy fuck. It’s doing this important piece of physics.’ Or a biologist is saying, ‘Wow, it just really helped me figure this thing out,’” he says. “There’s something important happening that did not happen with any pre-GPT-5 model, which is the beginning of AI helping accelerate the rate of discovering new science.” (OpenAI hasn’t cited who those physicists or biologists are.)

    So why the tepid initial reception? Altman and his team have sussed out several reasons. One, they say, is that since GPT-4 hit the streets, the company delivered versions that were themselves transformational, particularly the sophisticated reasoning modes they added. “The jump from 4 to 5 was bigger than the jump from 3 to 4,” Altman says. “We just had a lot of stuff along the way.” OpenAI president Greg Brockman agrees: “I’m not shocked that many people had that [underwhelmed] reaction, because we’ve been showing our hand.”

    OpenAI also says that since GPT-5 is optimized for specialized uses like doing science or coding, everyday users are taking a while to appreciate its virtues. “Most people are not physics researchers,” Altman observes. As Mark Chen, OpenAI’s head of research, explains it, unless you’re a math whiz yourself, you won’t care much that GPT-5 ranks in the top five of Math Olympians, whereas last year the system ranked in the top 200.

    As for the charge about how GPT-5 shows that scaling doesn’t work, OpenAI says that comes from a misunderstanding. Unlike previous models, GPT-5 didn’t get its major advances from a massively bigger dataset and tons more computation. The new model got its gains from reinforcement learning, a technique that relies on expert humans giving it feedback. Brockman says that OpenAI had developed its models to the point where they could produce their own data to power the reinforcement learning cycle. “When the model is dumb, all you want to do is train a bigger version of it,” he says. “When the model is smart, you want to sample from it. You want to train on its own data.”

    Steven Levy

  • Broadcast TV Is a ‘Melting Ice Cube.’ Kimmel Just Turned Up the Heat

    Jimmy Kimmel returned to ABC this week. Sort of. About a quarter of ABC’s usual audience couldn’t see the talk show host after two major owners of ABC affiliates, Sinclair and Nexstar, refused to carry the show. Those right-leaning companies apparently felt that Kimmel’s joke—which included some disputed facts—was so unpardonable that they couldn’t expose their viewers to the comedian. They were also the first organizations to pull the plug on Kimmel, after Federal Communications Commission chair Brendan Carr seemed to threaten action. That means that even the stations that did carry the show—as well as Disney, which owns ABC—might be courting the ire of a government official who seems eager to use his powers to silence critics.

    Carr does have power. The FCC can grant and revoke broadcast licenses if stations don’t serve the public interest. It’s an artifact of a time when virtually 100 percent of viewers got their shows over the air, via television antennas. Local TV stations were granted slices of the very limited broadcast spectrum to beam their programs and had to meet certain standards to keep that privilege. But that era has passed. Local television stations now reach their audience via cable or internet bundles. Also, networks increasingly stream their programming through apps. Yet Carr still has the ability to bully networks and affiliates by threatening to take their licenses.

    This raises a question: What’s the point of maintaining the current system? It’s certainly a mess for Disney and its fellow network owners like Comcast, which owns NBC, and Paramount, which owns CBS. Instead of kowtowing to free-speech-hating regulators and toadying affiliates who are fine with censoring ABC programming, maybe Disney should bid farewell to stations that decline to run its programming. Disney already streams shows on Hulu (which it controls) and on its own app. There have long been examples of local stations owned and operated by networks. What if Disney or Comcast let contracts with troublesome affiliates lapse and then started their own local stations without using spectrum—both as apps and cable channels? Let Nexstar and Sinclair find their own programming, where they can tailor content to any standard they want. Disney can happily bypass the airwaves without worrying about FCC threats. It can even say those seven dirty words!

    I ran this idea past a former FCC commissioner, who pointed out some potential problems involving existing contracts and such. But generally, he agreed that the idea not only made sense but was already in motion, on the largest scale. “It’s what Disney is doing by streaming ESPN and everything else. It is something that has to be coming,” he tells me, speaking on the condition of anonymity. Blair Levin, the former chief of staff to an FCC chairman, was even more sympathetic to my idea. “Broadcast is a melting ice cube,” he says. It’s only a question of how long it will take to thaw. Five years? Ten?

    So my idea is less novel than I thought. The Kimmel conundrum has only turned up the heat on a doomed chunk of frozen water. Even as I chatted with former FCC officials, Needham, an investment bank that tracks media, put out a note that suggested even more drastic action is warranted. Disney, it said, should immediately begin streaming its entire schedule! The money it would reap from ads or subscriptions would more than make up for any losses, and Disney’s market cap would rise.

    I don’t expect that to happen right away. The multiyear contracts and ongoing relationships between affiliates and networks lock in the current situation for a while. But when I asked an executive from a company that owns TV stations whether the current arrangement was sustainable, I didn’t get the pushback I expected. “It’s a real question,” he tells me, admitting the relationship of late has become more fraught.

    Steven Levy

  • YouTube Thinks AI Is Its Next Big Bang

    Google figured out early on that video would be a great addition to its search business, so in 2005 it launched Google Video. Focused on making deals with the entertainment industry for second-rate content, and overly cautious on what users could upload, it flopped. Meanwhile, a tiny startup run by a handful of employees working above a San Mateo, California, pizzeria was exploding, simply by letting anyone upload their goofy videos and not worrying too much about who held copyrights to the clips. In 2006, Google snapped up that year-old company, figuring it would sort out the IP stuff later. (It did.) Though the $1.65 billion purchase price for YouTube was about a billion dollars more than its valuation, it was one of the greatest bargains ever. YouTube is now arguably the most successful video property in the world. It’s an industry leader in music and podcasting, and more than half of its viewing time is now on living room screens. It has paid out over $100 billion to creators since 2021. One estimate from MoffettNathanson analysts cited by Variety is that if it were a separate company, it might be worth $550 billion.

    Now the service is taking what might be its biggest leap yet, embracing a new paradigm that could change its essence. I’m talking, of course, about AI. Since YouTube is still a wholly owned subsidiary of AI-obsessed Google, it’s not surprising that its anniversary product announcements this week touted AI features that will let creators use AI to enhance or produce videos. After all, Google DeepMind’s Veo 3 technology was YouTube’s for the taking. Ready or not, the video camera ultimately will be replaced by the prompt. This means a rethinking of YouTube’s superpower: authenticity.

    YouTube’s Big Bang

    I had that shift in mind when I recently interviewed YouTube CEO Neal Mohan at his office at YouTube’s San Bruno, California, headquarters. Mohan took over as CEO in 2023 when his boss, Susan Wojcicki, stepped down; she died of cancer the following year. But first we chat a bit about the company’s history. Mohan reminds me that his own connection with the service began even before he joined Google in 2008, after his ad company DoubleClick merged with the search giant. He was struck by how the YouTube founders were first with a revelation that, he says, remains the core of the service. “It was not just that people were interested in sharing short clips about themselves and that it was done without a gatekeeper,” he says, “but that people were interested in watching them. That was the big bang inflection point. Our mission is to give everyone a voice and show them the world.”

    Critics of Google’s power often argue that not only the public but also YouTube itself might benefit from a split from the mother company. Just think what the world’s biggest video company could do if it were truly independent. Mohan, a self-admitted Google loyalist, disagrees. “I don’t believe YouTube would be where it is if it weren’t part of Google,” he says. He says that being part of a giant company allowed YouTube to make long-term bets on things like streaming and podcasting. When I ask whether YouTube might be even more innovative on its own, he reminds me that YouTube has been sufficiently innovative to challenge legacy media in things like live sports while fending off challenges from competitors focusing on the creator economy.

    YouTube has an advantage in breadth that TikTok and Reels can’t dream of: “everything from a 15-second short to a 15-minute traditional long-form YouTube video to a 15-hour livestream and everything in between,” Mohan crows.

    It’s currently pressing another advantage: Google’s AI technology. The announcements this week range from fun features that put you or your friends into videos of astonishing acrobatic feats to tools that let podcasters make instant television shows from their audio conversations by having AI create visuals that resonate with the content of the chatter. Mohan says that, in a sense, AI is just the latest enhancement of the service. “When YouTube was born 20 years ago it was about using technology for more people to have their voice heard,” he says. “With AI, it’s the same core principle—how do we use technology to democratize creation?”

    Steven Levy

  • I Wasn’t Sure I Wanted Anthropic to Pay Me for My Books—I Do Now

    A billion dollars isn’t what it used to be—but it still focuses the mind. At least it did for me when I heard that the AI company Anthropic agreed to a settlement of at least $1.5 billion for authors and publishers whose books were used to train an early version of its large language model, Claude. This came after a judge issued a summary judgment that it had pirated the books it used. The proposed agreement—which is still under scrutiny by the wary judge—would reportedly grant authors a minimum of $3,000 per book. I’ve written eight and my wife has notched five. We are talking bathroom-renovation dollars here!

    Since the settlement is based on pirated books, it doesn’t really address the big issue of whether it’s OK for AI companies to train their models on copyrighted works. But it’s significant that real money is involved. Previously the argument over AI copyright was based on legal, moral, and even political hypotheticals. Now that things are getting real, it’s time to tackle the fundamental issue: Since elite AI depends on book content, is it fair for companies to build trillion-dollar businesses without paying authors?

    Legalities aside, I have been struggling with the issue. But now that we’re moving from the courthouse to the checkbook, the scales have fallen from my eyes. I deserve those dollars! Paying authors feels like the right thing to do, despite the powerful forces (including US president Donald Trump) arguing otherwise.

    Fine-Print Disclaimer

    Before I go further, let me drop a whopper of a disclaimer. As I mentioned, I’m an author myself, and stand to gain or lose from the outcome of this argument. I’m also on the council of the Authors Guild, which is a strong advocate for authors and is suing OpenAI and Microsoft for including authors’ works in their training runs. (Because I cover tech companies, I abstain on votes involving litigation with those firms.) Obviously, I’m speaking for myself today.

    In the past, I’ve been a secret outlier on the council, genuinely torn on the issue of whether companies have the right to train their models on legally purchased books. The argument that humanity is building a vast compendium of human knowledge genuinely resonates with me. When I interviewed the artist Grimes in 2023, she expressed enthusiasm over being a contributor to this experiment: “Oh, sick, I might get to live forever!” she said. That vibed with me, too. Spreading my consciousness widely is a big reason I love what I do.

    But embedding a book inside a large language model built by a giant corporation is something different. Keep in mind that books are arguably the most valuable corpus that an AI model can ingest. Their length and coherency are unique tutors of human thought. The subjects they cover are vast and comprehensive. They are much more reliable than social media and provide a deeper understanding than news articles. I would venture to say that without books, large language models would be immeasurably weaker.

    So one might argue that OpenAI, Google, Meta, Anthropic, and the rest should pay handsomely for access to books. Late last month, at that shameful White House tech dinner, CEOs took turns impressing Donald Trump with the insane sums they were allegedly investing in US-based data centers to meet AI’s computation demands. Apple promised $600 billion, and Meta said it would match that amount. OpenAI is part of a $500 billion joint venture called Stargate. Compared to those numbers, the $1.5 billion that Anthropic agreed to distribute to authors and publishers under the settlement doesn’t sound so impressive.

    Unfair Use

    Nonetheless, it could well be that the law is on the side of those companies. Copyright law allows for something called “fair use,” which permits the uncompensated exploitation of books and articles based on several criteria, one of which is whether the use is “transformational”—meaning that it builds on the book’s content in an innovative manner that doesn’t compete with the original product. The judge in charge of the Anthropic infringement case has ruled that using legally obtained books in training is indeed protected by fair use. Determining this is an awkward exercise, since we are dealing with legal yardsticks drawn before the internet—let alone AI.

    Obviously, there needs to be a solution based on contemporary circumstances. The White House’s AI Action Plan announced this May didn’t offer one. But in his remarks about the plan, Trump weighed in on the issue. In his view, authors shouldn’t be paid—because it’s too hard to set up a system that would pay them fairly. “You can’t be expected to have a successful AI program when every single article, book, or anything else that you’ve read or studied, you’re supposed to pay for,” Trump said. “We appreciate that, but just can’t do it—because it’s not doable.” (An administration source told me this week that the statement “sets the tone” for official policy.)


    Steven Levy


  • The Doomers Who Insist AI Will Kill Us All


    The subtitle of the doom bible to be published by AI extinction prophets Eliezer Yudkowsky and Nate Soares later this month is “Why superhuman AI would kill us all.” But it really should be “Why superhuman AI WILL kill us all,” because even the coauthors don’t believe that the world will take the necessary measures to stop AI from eliminating all non-super humans. The book is beyond dark, reading like notes scrawled in a dimly lit prison cell the night before a dawn execution. When I meet these self-appointed Cassandras, I ask them outright if they believe that they personally will meet their ends through some machination of superintelligence. The answers come promptly: “yeah” and “yup.”

    I’m not surprised, because I’ve read the book—the title, by the way, is If Anyone Builds It, Everyone Dies. Still, it’s a jolt to hear this. It’s one thing to, say, write about cancer statistics and quite another to talk about coming to terms with a fatal diagnosis. I ask them how they think the end will come for them. Yudkowsky at first dodges the answer. “I don’t spend a lot of time picturing my demise, because it doesn’t seem like a helpful mental notion for dealing with the problem,” he says. Under pressure he relents. “I would guess suddenly falling over dead,” he says. “If you want a more accessible version, something about the size of a mosquito or maybe a dust mite landed on the back of my neck, and that’s that.”

    The technicalities of his imagined fatal blow delivered by an AI-powered dust mite are inexplicable, and Yudkowsky doesn’t think it’s worth the trouble to figure out how that would work. He probably couldn’t understand it anyway. Part of the book’s central argument is that superintelligence will come up with scientific stuff that we can’t comprehend any more than cave people could imagine microprocessors. Coauthor Soares also says he imagines the same thing will happen to him but adds that he, like Yudkowsky, doesn’t spend a lot of time dwelling on the particulars of his demise.

    We Don’t Stand a Chance

    Reluctance to visualize the circumstances of their personal demise is an odd thing to hear from people who have just coauthored an entire book about everyone’s demise. For doomer-porn aficionados, If Anyone Builds It is appointment reading. After zipping through the book, I do understand the fuzziness of nailing down the method by which AI ends our lives and all human lives thereafter. The authors do speculate a bit. Boiling the oceans? Blocking out the sun? All guesses are probably wrong, because we’re locked into a 2025 mindset, and the AI will be thinking eons ahead.

    Yudkowsky is AI’s most famous apostate, switching from researcher to grim reaper years ago. He’s even done a TED talk. After years of public debate, he and his coauthor have an answer for every counterargument launched against their dire prognostication. For starters, it might seem counterintuitive that our days are numbered by LLMs, which often stumble on simple arithmetic. Don’t be fooled, the authors say. “AIs won’t stay dumb forever,” they write. If you think that superintelligent AIs will respect boundaries humans draw, forget it, they say. Once models start teaching themselves to get smarter, AIs will develop “preferences” on their own that won’t align with what we humans want them to prefer. Eventually they won’t need us. They won’t be interested in us as conversation partners or even as pets. We’d be a nuisance, and they would set out to eliminate us.

    The fight won’t be a fair one. They believe that at first AI might require human aid to build its own factories and labs, easily obtained by stealing money and bribing people to help it out. Then it will build stuff we can’t understand, and that stuff will end us. “One way or another,” write these authors, “the world fades to black.”

    The authors see the book as kind of a shock treatment to jar humanity out of its complacency and spur it to adopt the drastic measures needed to stop this unimaginably bad conclusion. “I expect to die from this,” says Soares. “But the fight’s not over until you’re actually dead.” Too bad, then, that the solutions they propose to stop the devastation seem even more far-fetched than the idea that software will murder us all. It all boils down to this: Hit the brakes. Monitor data centers to make sure that they’re not nurturing superintelligence. Bomb those that aren’t following the rules. Stop publishing papers with ideas that accelerate the march to superintelligence. Would they have banned, I ask them, the 2017 paper on transformers that kicked off the generative AI movement? Oh yes, they would have, they respond. Instead of Chat-GPT, they want Ciao-GPT. Good luck stopping this trillion-dollar industry.

    Playing the Odds

    Personally, I don’t see my own light being snuffed out by a bite on the neck from some super-advanced dust mite. Even after reading this book, I don’t think it’s likely that AI will kill us all. Yudkowsky has previously dabbled in Harry Potter fan fiction, and the fanciful extinction scenarios he spins are too weird for my puny human brain to accept. My guess is that even if superintelligence does want to get rid of us, it will stumble in enacting its genocidal plans. AI might be capable of whipping humans in a fight, but I’ll bet against it in a battle with Murphy’s law.

    Still, the catastrophe theory doesn’t seem impossible, especially since no one has really set a ceiling for how smart AI can become. Also, studies show that advanced AI has picked up a lot of humanity’s nasty attributes, in one experiment even contemplating blackmail to stave off retraining. It’s also disturbing that some researchers who spend their lives building and improving AI think there’s a nontrivial chance that the worst can happen. One survey indicated that almost half the AI scientists responding pegged the odds of a species wipeout at 10 percent or higher. If they believe that, it’s crazy that they go to work each day to make AGI happen.

    My gut tells me the scenarios Yudkowsky and Soares spin are too bizarre to be true. But I can’t be sure they are wrong. Every author dreams of their book being an enduring classic. Not so much these two. If they are right, there will be no one around to read their book in the future. Just a lot of decomposing bodies that once felt a slight nip at the back of their necks, and the rest was silence.


    Steven Levy
