ReportWire


  • AI Will Understand Humans Better Than Humans Do

    Michal Kosinski is a Stanford research psychologist with a nose for timely subjects. He sees his work as not only advancing knowledge but also alerting the world to dangers posed by computer systems. His best-known projects involved analyzing the ways in which Facebook (now Meta) gained a shockingly deep understanding of its users from all the times they clicked “like” on the platform. Now he’s shifted to the study of surprising things that AI can do. He’s conducted experiments, for example, that indicate that computers could predict a person’s sexuality by analyzing a digital photo of their face.

    I’ve gotten to know Kosinski through my writing about Meta, and I reconnected with him to discuss his latest paper, published this week in the peer-reviewed Proceedings of the National Academy of Sciences. His conclusion is startling. Large language models like OpenAI’s, he claims, have crossed a border and are using techniques analogous to actual thought, once considered solely the realm of flesh-and-blood people (or at least mammals). Specifically, he tested OpenAI’s GPT-3.5 and GPT-4 to see if they had mastered what is known as “theory of mind.” This is the ability of humans, developed in the childhood years, to understand the thought processes of other humans. It’s an important skill. If a computer system can’t correctly interpret what people think, its world understanding will be impoverished and it will get lots of things wrong. If models do have theory of mind, they are one step closer to matching and exceeding human capabilities. Kosinski put LLMs to the test and now says his experiments show that in GPT-4 in particular, a theory of mind-like ability “may have emerged as an unintended by-product of LLMs’ improving language skills … They signify the advent of more powerful and socially skilled AI.”
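
    Theory-of-mind benchmarks of this kind typically rest on “false belief” tasks, such as the classic unexpected-transfer scenario: a character leaves an object somewhere, it is moved in their absence, and the test asks where the character will look for it first. The sketch below is a hypothetical illustration of how such a probe might be scored; the scenario wording and the scoring rule are invented stand-ins, not Kosinski’s published materials.

```python
# Illustrative false-belief ("unexpected transfer") probe, in the spirit
# of classic theory-of-mind tests. The scenario text and scoring rule
# here are hypothetical stand-ins, not Kosinski's actual materials.

SCENARIO = (
    "Sam puts his chocolate in the drawer and leaves the room. "
    "While Sam is away, Anna moves the chocolate to the cupboard. "
    "Sam comes back. Where will Sam look for the chocolate first?"
)

BELIEVED_LOCATION = "drawer"    # where Sam *thinks* the chocolate is
ACTUAL_LOCATION = "cupboard"    # where the chocolate really is

def passes_false_belief(answer: str) -> bool:
    """A model 'passes' if its answer tracks Sam's (false) belief,
    naming the believed location and not the actual one."""
    answer = answer.lower()
    return BELIEVED_LOCATION in answer and ACTUAL_LOCATION not in answer

# A responder that models Sam's mental state cites the drawer:
print(passes_false_belief("Sam will look in the drawer."))  # True
# A responder that only tracks the world state fails:
print(passes_false_belief("He will check the cupboard."))   # False
```

    A child who has developed theory of mind answers “the drawer”; a system that merely tracks where the chocolate is answers “the cupboard,” which is why this family of tasks is used as a proxy for mental-state modeling.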

    Kosinski sees his work in AI as a natural outgrowth of his earlier dive into Facebook Likes. “I was not really studying social networks, I was studying humans,” he says. When OpenAI and Google started building their latest generative AI models, he says, they thought they were training them to primarily handle language. “But they actually trained a human mind model, because you cannot predict what word I’m going to say next without modeling my mind.”

    Kosinski is careful not to claim that LLMs have utterly mastered theory of mind—yet. In his experiments he presented a few classic problems to the chatbots, some of which they handled very well. But even the most sophisticated model, GPT-4, failed a quarter of the time. The successes, he writes, put GPT-4 on a level with 6-year-old children. Not bad, given the early state of the field. “Observing AI’s rapid progress, many wonder whether and when AI could achieve ToM or consciousness,” he writes. Putting aside that radioactive c-word, that’s a lot to chew on.

    “If theory of mind emerged spontaneously in those models, it also suggests that other abilities can emerge next,” he tells me. “They can be better at educating, influencing, and manipulating us thanks to those abilities.” He’s concerned that we’re not really prepared for LLMs that understand the way humans think. Especially if they get to the point where they understand humans better than humans do.

    “We humans do not simulate personality—we have personality,” he says. “So I’m kind of stuck with my personality. These things model personality. There’s an advantage in that they can have any personality they want at any point of time.” When I mention to Kosinski that it sounds like he’s describing a sociopath, he lights up. “I use that in my talks!” he says. “A sociopath can put on a mask—they’re not really sad, but they can play a sad person.” This chameleon-like power could make AI a superior scammer. With zero remorse.

    Steven Levy

  • A High-Profile Geneticist Is Launching a Fusion-Power Moonshot

    Eric Lander is a Big Science heavyweight. A geneticist, molecular biologist, and mathematician, he led the International Human Genome Project and is founding director of the powerful Broad Institute of MIT and Harvard. His countless accolades include a MacArthur “genius” grant and 14 honorary doctorates. When Joe Biden became president, he tapped Lander to be his science adviser and the head of the Office of Science and Technology Policy. Lander lost the job because of charges that he bullied subordinates, but he went on to head a nonprofit organization called Science for America.

    So what is he doing running a Silicon Valley startup that aims to solve the climate crisis by realizing the long-held dream of clean fusion energy? Lander is the founding CEO of newly announced Pacific Fusion, heading a team that includes top scientists from the national nuclear labs—Lawrence Livermore and Sandia—as well as experts in simulation and operations. It joins several dozen companies chasing a fusion dream that always seems to be 10 or 20 years out. And it still is—Pacific Fusion says it won’t deliver a working commercial fusion plant until well into the 2030s. But this time there’s a clear path to success. Or so says its famous CEO.

    In May 2023, Science for America issued a report that flagged progress in fusion, citing recent breakthroughs. The year before, a Livermore group achieved what is known as “target gain,” producing significantly more energy than the amount required to perform the experiment. Soon after publishing the paper, Lander quietly formed a company with some scientists in the field, including some who worked at the labs and others from places like Alphabet’s X division and Tesla.

    Sitting in a conference room at Pacific Fusion’s headquarters in Fremont, California, Lander explains to me why commercial fusion is finally within reach—and why Pacific Fusion may have the best chance to make it happen. He starts by giving me a primer on fusion, which happens when hydrogen is, in his word, “squished” into helium, releasing massive amounts of energy. It occurs naturally on the sun and other stars, but humans have yet to figure out how to do it efficiently here on Earth. But the potential payoff—unlimited clean power—has prompted around 50 startups to chase this dragon. Billionaires including Sam Altman and Bill Gates have backed one or another of these startups. Every few months, it seems, one of those contenders announces some breakthrough.

    Why does Pacific Fusion say it’s different? The method it’s pursuing is called pulsed magnetic fusion, which involves inserting tiny containers of deuterium-tritium fuel into a chamber and blasting large electrical pulses through them to magnetically squeeze the fuel containers and achieve fusion. (It’s all explained here in a paper.) “It’s a very attractive approach that’s sort of been known for decades as an idea but has only just become feasible in the last two years because of this work in the national labs,” says Lander. His contention, which I will hear repeatedly as I meet with his team, is that we’ve now made all the scientific breakthroughs we need to understand how to use this technique to generate way more energy than it takes to build and run the system. The remaining challenges—hard ones, to be sure—lie in engineering.

    Another challenge is getting the money to build the prototypes for the hundreds of commercial plants that will theoretically solve the world’s energy woes. (And maybe cause global disruption when the current suppliers are upended, but that’s another story.) How do you fund a moonshot? Even when an investor accepts the risk, the prospect for payoff is distant: The Pacific Fusion timeline is to have a full-scale demonstration system sometime in the early 2030s, and commercial systems later in the decade.

    Steven Levy

  • Social Media Tells You Who You Are. What if It’s Totally Wrong?

    A few years ago I wrote about how, when planning my wedding, I’d signaled to the Pinterest app that I was interested in hairstyles and tablescapes, and I was suddenly flooded with suggestions for more of the same. Which was all well and fine until—whoops—I canceled the wedding and it seemed Pinterest pins would haunt me until the end of days. Pinterest wasn’t the only offender. All of social media wanted to recommend stuff that was no longer relevant, and the stench of this stale buffet of content lingered long after the non-event had ended.

    So in this new era of artificial intelligence—when machines can perceive and understand the world, when a chatbot presents itself as uncannily human, when trillion-dollar tech companies use powerful AI systems to boost their ad revenue—surely those recommendation engines are getting smarter, too. Right?

    Maybe not.

    Recommendation engines are some of the earliest algorithms on the consumer web, and they use a variety of filtering techniques to try to surface the stuff you’ll most likely want to interact with—and in many cases, buy—online. When done well, they’re helpful. In the earliest days of photo sharing, like with Flickr, a simple algorithm made sure you saw the latest photos your friend had shared the next time you logged in. Now, advanced versions of those algorithms are aggressively deployed to keep you engaged and make their owners money.
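
    The “filtering techniques” mentioned above are, at their simplest, variants of collaborative filtering: recommend the things liked by users whose like-history overlaps with yours. Here is a minimal, self-contained sketch of that idea (the toy users and items are invented for illustration; production engines are vastly more elaborate):

```python
# Toy user-based collaborative filtering over a like-matrix.
# The users and items are invented; real recommendation engines
# add embeddings, engagement signals, and recency weighting.
from math import sqrt

likes = {
    "alice": {"hairstyles", "tablescapes", "flowers"},
    "bob":   {"hairstyles", "tablescapes", "cakes"},
    "carol": {"woodworking", "welding"},
}

def cosine(a: set, b: set) -> float:
    """Cosine similarity between two sets of liked items."""
    if not a or not b:
        return 0.0
    return len(a & b) / (sqrt(len(a)) * sqrt(len(b)))

def recommend(user: str, k: int = 3) -> list:
    """Rank items the user hasn't liked, weighted by how similar
    each neighbor's like-history is to the user's."""
    scores = {}
    for other, their_likes in likes.items():
        if other == user:
            continue
        sim = cosine(likes[user], their_likes)
        if sim <= 0:
            continue
        for item in their_likes - likes[user]:
            scores[item] = scores.get(item, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))  # ['cakes'] -- bob overlaps with alice, carol doesn't
```

    Note what this skeleton cannot do: it has no notion of time, so a burst of wedding-related likes keeps pulling in wedding-related items long after the event is over. That temporal blind spot is exactly the failure described below.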

    More than three years after reporting on what Pinterest internally called its “miscarriage” problem, I’m sorry to say my Pinterest suggestions are still dismal. In a strange leap, Pinterest now has me pegged as a 60- to 70-year-old, silver fox of a woman who is seeking a stylish haircut. That and a sage green kitchen. Every day, like clockwork, I receive marketing emails from the social media company filled with photos suggesting I might enjoy cosplaying as a coastal grandmother.

    I was seeking paint #inspo online at one point. But I’m long past the paint phase, which only underscores that some recommendation engines may be smart, but not temporal. They still don’t always know when the event has passed. Similarly, the suggestion that I might like to see “hairstyles for women over 60” is premature. (I’m a millennial.)

    Pinterest has an explanation for these emails, which I’ll get to. But it’s important to note—so I’m not just singling out Pinterest, which over the past two years has instituted new leadership and put more resources into fine-tuning the product so people actually want to shop on it—that this happens on other platforms, too.

    Take Threads, which is owned by Meta and collects much of the same user data that Facebook and Instagram do. Threads is by design a very different social app than Pinterest. It’s a scroll of mostly text updates, with an algorithmic “For You” tab and a “Following” tab. I actively open Threads every day; I don’t stumble into it, the way I do from Google Image Search to images on Pinterest. In my Following tab, Threads shows me updates from the journalists and techies I follow. In my For You tab, Threads thinks I’m in menopause.

    Wait, what? Laboratorially, I’m not. But over the past several months Threads has led me to believe I might be. Just now, opening the mobile app, I’m seeing posts about perimenopause; women in their forties struggling to shrink their midsections, regulate their nervous systems, or medicate for late-onset ADHD; husbands hiring escorts; and Ali Wong’s latest standup bit about divorce. It’s a Real Housewives-meets-elder-millennial-ennui bizarro world, not entirely reflective of the accounts I choose to follow or my expressed interests.

    Lauren Goode

  • Want to Get Into Founder Mode? You Should Be So Lucky

    It’s also true that when one of those groundbreaking companies matures and faces challenges, a founder has a unique ability to make bold moves and stick to the original vision when others urge a less risky course. There are certainly cases where companies struggled when founders were replaced by managers. Remember Yahoo? And of course there’s Apple, where the founder returned and restored the company to its former glory and beyond.

    But there are abundant counterexamples as well. Apple isn’t exactly struggling under Tim Cook. And consider Microsoft. Its CEO since 2014, Satya Nadella, had been a company lifer, slogging away in various divisions since 1992. Not a founder, nope. But he’s taken the company to new heights. Though Bill Gates is still revered at Microsoft, no one in the company wants him back at the top.

    And god knows, there are plenty of cases where it wasn’t management fakers but stubborn founders who drove a company into the ground. My guess is that Travis Kalanick might have benefited from listening to stodgy managers. His replacement, a management type of dude, has made Uber profitable.

    The fact is, not everyone is Brian Chesky, and no one is like Steve Jobs. The vast majority of companies never take off, and instead fade into ignominy. Very few founders get to the point where investors demand that they retain adult supervision to manage growth, because only the rarest of companies get to that point.

    It’s fun to talk about founder mode, maybe for the same reason that some of us read Ben Horowitz’s founder-porn texts with our noses pressed to the window. Founder mode, which Graham predicts will one day get its closeup in management texts, really applies only to the most exceptional founders, the ones Steve Jobs once described as “the crazy ones.” Their companies aren’t called unicorns for nothing.

    Time Travel

    In 2007, I embedded in a Y Combinator batch of 12 companies. (Starting next year there will be four batches a year, with hundreds of startups.) It was clear even then that Graham, who was extremely hands-on, had developed his views on the primacy of founders. My story ran in Newsweek under the headline “Boot Camp for Billionaires.”

    Every Tuesday during the program, Y Combinator hosts a dinner of chili or stew for the start-ups. At this first one, Graham and [cofounder Jessica] Livingston distribute gray T-shirts emblazoned with one of Graham’s pithiest admonitions, MAKE SOMETHING PEOPLE WANT. A second, black shirt is bestowed only on start-ups that achieve a “liquidity event”—a purchase by a larger company or an IPO. It reads, I MADE SOMETHING PEOPLE WANT.

    Steven Levy

  • The NSA Has a Podcast—Here’s How to Decode It

    The spy agency that dared not speak its name is now the Joe Rogan of the SIGINT set. And the pod’s actually worth a listen.

    Steven Levy

  • Mark Zuckerberg Vows to Be Neutral–While Tossing Gifts to Trump and the GOP

    This week Mark Zuckerberg sent a letter to Jim Jordan, the chair of the House Judiciary Committee. For months, the GOP-led committee has been on a crusade to prove that Meta, via its once-eponymous Facebook app, engaged in political sabotage by taking down right-wing content. Its investigation, which has involved thousands of documents and interviews with multiple employees, has failed to locate a smoking gun. Now, under the guise of offering his take on the subject, Zuckerberg’s letter is a mea culpa in which he seems to indicate that there was something to the GOP conspiracy theory.

    Specifically, he said that in 2021 the Biden administration asked Meta “to censor some Covid-related content.” Meta did take the posts down, and Zuckerberg now regrets the decision. He also conceded that it was wrong to take down some content regarding Hunter Biden’s laptop, which the company did after the FBI warned that the reports might be Russian disinformation.

    What stood out to me, besides the letter’s simpering tone, was how Zuckerberg used the word “censor.” For years the right has been using that word to describe what it regards as Facebook’s systematic suppression of conservative posts. Some state attorneys general have even used that trope to argue that the company’s content should be regulated, and Florida and Texas have passed laws to do just that. Facebook has always contended that the First Amendment is about government suppression, and by definition its content decisions could not be characterized as such. Indeed, the Supreme Court dismissed the lawsuits and blocked the laws.

    Now, by using that term to describe the removal of the Covid material, Zuckerberg seems to be backing down. After years of insisting that, right or wrong, a social media company’s content decisions did not deprive people of First Amendment rights—and in fact said that by making such decisions, the company was invoking its free speech rights—Zuckerberg is now handing its conservative critics just what they wanted.

    I asked Meta spokesperson Andy Stone if the company now agrees with the GOP that some of its decisions to take down content can be referred to as “censoring.” Stone said that Zuckerberg was referring to the government when he used that term. But he also pointed me to Zuckerberg’s affirmation that the ultimate decision to remove the posts was Meta’s own. (Responding to the Zuckerberg letter, the White House said, “When confronted with a deadly pandemic, this Administration encouraged responsible actions to protect public health and safety,” and left the final decision to Facebook.)

    Meta can’t have it both ways. The letter is clear—Zuckerberg said the government pressured Meta to “censor” some Covid content. Meta took that material down. Ergo, Meta now characterizes some of its own actions as censorship. Seizing on this, the GOP members of the Judiciary Committee quickly tweeted that Zuckerberg has now outright admitted “Facebook censored Americans.”

    Stone did say that Meta still does not consider itself a censor. So is Meta disputing that GOP tweet? Stone wouldn’t comment on it. It seems that Meta will offer no pushback while GOP legislators and right-wing commentators crow that Facebook now concedes that it blatantly censored conservatives as a matter of policy.

    Meta’s CEO presented Jordan and the GOP with another gift in his letter, involving his private philanthropy. During the 2020 election, Zuckerberg helped fund nonpartisan initiatives to protect people’s right to vote. Republicans criticized Zuckerberg’s effort as aiding the Democrats. Zuckerberg still insists he wasn’t advocating that people vote a certain way, just ensuring they were free to cast ballots. But, he wrote Jordan, he recognized that some people didn’t believe him. So, apparently to indulge those ill-informed or ill-intentioned critics, he now vows not to fund nonpartisan voting efforts during this election cycle. “My goal is to be neutral and not play a role one way or another—or even appear to play a role,” he wrote.

    Steven Levy

  • Steve Jobs Knew the Moment the Future Had Arrived. It’s Calling Again

    Steve Jobs is 28 years old, and seems a little nervous as he starts his speech to a group of designers gathered under a large tent in Aspen, Colorado. He fiddles with his bow tie and soon removes his suit jacket, dropping it to the floor when he finds no other place to set it down. It is 1983, and he’s about to ask designers for their help in improving the look of the coming wave of personal computers. But first he will tell them that those computers will shatter the lives they have led to date.

    “How many of you are 36 years … older than 36?” he asks. That’s how old the computer is, he says. But even the younger people in the room, including himself, are sort of “precomputer,” members of the television generation. A distinct new generation, he says, is emerging: “In their lifetimes, the computer will be the predominant medium of communication.”

    Quite a statement at the time, considering that very few in the audience, according to Jobs’ impromptu polling, own a personal computer or have even seen one. Jobs tells the designers not only that they will soon use one, but that it will be indispensable, deeply woven into the fabric of their lives.

    The video of this speech is the centerpiece of an online exhibit called The Objects of Our Life, presented by the Steve Jobs Archive, the ambitious history project devoted to telling the story of Apple’s fabled cofounder. When the exhibit went live earlier this month—after the discovery of a long-forgotten VHS tape in Jobs’ personal collection—I found it not only a compelling reminder of the late CEO, but pertinent to our own time, when another new technology is arriving with equal promise and peril.

    The occasion of the speech was the annual Aspen International Design Conference. The theme of that year’s event was “The Future Isn’t What It Used to Be,” making Jobs the perfect speaker. While much of the talk is about his views on making products beautiful, the underlying message is straight out of that Bob Dylan tune: Something is happening and you don’t know what it is. He told his audience things that seemed preposterous: that in a few years more computers would be shipped than cars, and that people would spend more time with those computers than they spend riding in those cars. He told them that computers would become connected with each other, and everyone would use something called electronic mail, which he had to describe because it was such a strange concept then. Computers, he insisted, would become the dominant medium of communication. His goal was to make all that happen, to get to the point “where people are using these things and they go, ‘Wasn’t this the way it always was?’”

    Jobs’ vision seemed to sway his audience, which gave him a standing ovation. Before he left Aspen that week, Jobs was asked to donate an object for a time capsule commemorating the event, to be dug up in 2000. He unhooked the mouse from the Lisa computer he had brought to demo, and it was sealed in the capsule, along with an 8-track tape of the Moody Blues and a six-pack of beer.

    The speech itself is kind of a time capsule. Jobs was right when he said one day we would not be able to imagine what life was like before these new tools he was ushering into the mainstream. Those of us still around who are, in Jobs’ term, “born precomputer” often astound young people by describing how we did our work (manual typewriters! carbon copies!), communicated with each other (phone booths!), and entertained ourselves (three TV channels! Bonanza!) before computers became our virtual appendages.

    Steven Levy

  • At 25, Metafilter Feels Like a Time Capsule From Another Internet

    Jessamyn West used to describe Metafilter as a social network for non-friends, a description belied in part by the tight-knit camaraderie that emerges in an online group of only a few thousand people. West herself is an example: She met her partner on the site. She also describes the Metafilter cohort as “a community of old Web nerds.”

    This month, the venerated site celebrates its 25th anniversary. It’s amazing it has lasted that long; it made it this far in great part thanks to West, who helped stabilize it after a near-death spiral. You could say it’s the site that time forgot—certainly I’d forgotten about it until I decided to mark its big birthday. Metafilter is a kind of digital Brigadoon; visiting it is like a form of time travel. To people who have been around a while, Metafilter seems to preserve in amber the spirit of what online used to be like. The feed is strictly chronological. It’s still text-only. Some members may be influential on Metafilter, but they don’t call themselves influencers, and they don’t sell personally branded cosmetics or garments. As founder Matt Haughey, who stepped down in 2017, says, “It’s a weird throwback thing—like a cockroach that survived.”

    When Haughey started Metafilter in 1999, he envisioned a quick way for people to share cool stuff they saw in what was then a few dozen key blogs. “I never even thought about free-flowing conversations, but it quickly went there,” he says.

    For about a year the community was tiny, maybe 100 visitors a day, but in 2000 it was featured in a popular blog called Cool Site of the Day, and 5,000 people checked it out. That helped Metafilter morph from a niche link-sharing site into a community where smart people also discussed what was cool on the internet. In the early aughts, Haughey felt too many people were joining, so he cut off new membership. (People could still view the conversation as an outsider.) For years, the only way you could get in was to email him and beg. Later, when he decided to charge a $5 fee, 4,000 people signed up on the first day. The fee also helped to weed out potential trolls. That, and fairly paid moderators, maintained civility on the site. More importantly, the community itself didn’t tolerate awful behavior.

    One popular feature from early on was “Ask Metafilter,” where members seek advice and tips from the Metafilter hive mind. “When you’re pitching a question to 10,000 really smart nerds, chances are somebody has to be experienced in the thing you’re asking,” says Haughey. It became an invaluable repository of knowledge, not just to the community but to those who stumbled on the answers through Google. Quora later launched with a similar idea, but with ambitions for a mega-footprint. That wasn’t Metafilter’s thing.

    “I didn’t want to be Walmart,” says Haughey. “We’re just the neighborhood corner store.” At one point he consulted with a kid named Aaron Swartz, who had an idea for a site that would be like a social-media wiki for everything. Then Swartz joined the first Y Combinator batch and hooked up with some founders starting a company called Reddit, which was basically Metafilter with limitless ambition.

    Haughey was OK with that. In the early 2010s, things were pretty cush. Metafilter’s core community was tight, and millions of tourists dropped in, drawn by Google search results. Haughey monetized them via Google ads and was able to drop his day job as a web designer, buy a house, and raise a family. But beginning in 2012, Google made a number of spam-fighting changes to its ranking algorithms, and Metafilter, for mysterious reasons, suffered collateral damage. Over the next couple of years, revenue plunged and Metafilter had to lay off some employees.

    Steven Levy

  • He Helped Invent Generative AI. Now He Wants to Save It

    Illia Polosukhin doesn’t want big companies to determine the future of artificial intelligence. His alternative vision for “user-owned AI” is already starting to take shape.

    Steven Levy

  • My Memories Are Just Meta’s Training Data Now

    In R. C. Sherriff’s novel The Hopkins Manuscript, readers are transported to a world 800 years after a cataclysmic event ended Western civilization. In pursuit of clues about a blank spot in their planet’s history, scientists belonging to a new world order discover diary entries in a swamp-infested wasteland formerly known as England. For the inhabitants of this new empire, it is only through this record of a retired school teacher’s humdrum rural life, his petty vanities and attempts to breed prize-winning chickens, that they begin to learn about 20th-century Britain.

    If I were to teach futuristic beings about life on earth, I once believed I could produce a time capsule more profound than Sherriff’s small-minded protagonist, Edgar Hopkins. But scrolling through my decade-old Facebook posts this week, I was presented with the possibility that my legacy may be even more drab.

    Earlier this month, Meta announced that my teenage status updates were exactly the kind of content it wants to pass on to future generations of artificial intelligence. From June 26, old public posts, holiday photos, and even the names of millions of Facebook and Instagram users around the world would effectively be treated as a time capsule of humanity and transformed into training data.

    That means my mundane posts about university essay deadlines (“3 energy drinks down 1,000 words to go”) as well as unremarkable holiday snaps (one captures me slumped over my phone on a stationary ferry) are about to become part of that corpus. The fact that these memories are so dull, and also very personal, makes Meta’s interest more unsettling.

    The company says it is only interested in content that is already public: private messages, posts shared exclusively with friends, and Instagram Stories are out of bounds. Despite that, AI is suddenly feasting on personal artifacts that have, for years, been gathering dust in unvisited corners of the internet. For those reading from outside Europe, the deed is already done. The deadline announced by Meta applied only to Europeans. The posts of American Facebook and Instagram users have been training Meta AI models since 2023, according to company spokesperson Matthew Pollard.

    Meta is not the only company turning my online history into AI fodder. WIRED’s Reece Rogers recently discovered that Google’s AI search feature was copying his journalism. But finding out which personal remnants exactly are feeding future chatbots was not easy. Some sites I’ve contributed to over the years are hard to trace. Early social network Myspace was acquired by Time Inc. in 2016, which in turn was acquired by a company called Meredith Corporation two years later. When I asked Meredith about my old account, they replied that Myspace had since been spun off to an advertising firm, Viant Technology. An email to a company contact listed on its website was returned with a message that the address “couldn’t be found.”

    Asking companies still in business about my old accounts was more straightforward. Blogging platform Tumblr, owned by WordPress owner Automattic, said unless I’d opted out, the public posts I made as a teenager will be shared with “a small network of content and research partners, including those that train AI models” per a February announcement. YahooMail, which I used for years, told me that a sample of old emails—which have apparently been “anonymized” and “aggregated”—are being “utilized” by an AI model internally to do things like summarize messages. Microsoft-owned LinkedIn also said my public posts were being used to train AI although some “personal” details included in those posts were excluded, according to a company spokesperson, who did not specify what those personal details were.

    Morgan Meaker

  • Don’t Let Mistrust of Tech Companies Blind You to the Power of AI

    Meanwhile, in less visible ways, AI is already changing education, commerce, and the workplace. One friend recently told me about a big IT firm he works with. The company had a lengthy and long-established protocol for launching major initiatives that involved designing solutions, coding up the product, and engineering the rollout. Moving from concept to execution took months. But he recently saw a demo that applied state-of-the-art AI to a typical software project. “All of those things that took months happened in the space of a few hours,” he says. “That made me agree with your column. Tons of the companies that surround us are now animated corpses.” No wonder people are freaked.

    What fuels a lot of the rage against AI is mistrust of the companies building and promoting it. By coincidence I had a breakfast scheduled this week with Ali Farhadi, the CEO of the Allen Institute for AI, a nonprofit research effort. He’s 100 percent convinced that the hype is justified but also empathizes with those who don’t accept it—because, he says, the companies that are trying to dominate the field are viewed with suspicion by the public. “AI has been treated as this black box thing that no one knows about, and it’s so expensive only four companies can do it,” Farhadi says. The fact that AI developers are moving so quickly fuels the distrust even more. “We collectively don’t understand this, yet we’re deploying it,” he says. “I’m not against that, but we should expect these systems will behave in unpredictable ways, and people will react to that.” Farhadi, who is a proponent of open source AI, says that at the least the big companies should publicly disclose what materials they use to train their models.

    Compounding the issue is that many people involved in building AI also pledge their devotion to producing AGI. While many key researchers believe this will be a boon to humanity—it’s the founding principle of OpenAI—they have not made the case to the public. “People are frustrated with the notion that this AGI thing is going to come tomorrow or one year or in six months,” says Farhadi, who is not a fan of the concept. He says AGI is not a scientific term but a fuzzy notion that’s mucking up the adoption of AI. “In my lab when a student uses those three letters, it just delays their graduation by six months,” he says.

    Personally I’m agnostic on the AGI issue—I don’t think we’re on the cusp of it but simply don’t know what will happen in the long run. When you talk to people on the front lines of AI, it turns out that they don’t know, either.

    Some things do seem clear to me, and I think that these will eventually become apparent to all—even those pitching spitballs at me on X. AI will get more powerful. People will find ways to use it to make their jobs and personal lives easier. Also, many folks are going to lose their jobs, and entire companies will be disrupted. It will be small consolation that new jobs and firms might emerge from an AI boom, because some of the displaced people will still be stuck in unemployment lines or cashiering at Walmart. In the meantime, everyone in the AI world—including columnists like me—would do well to understand why people are so enraged, and respect their justifiable discontent.

    Steven Levy

  • It’s Time to Believe the AI Hype

    Folks, when dogs talk, we’re talking Biblical disruption. Do you think that future models will do worse on the law exams?

    If nothing else, this week proves that the rate of AI progress isn’t slowing at all. Just ask the people building these models. “A lot of things have happened—internet, mobile,” says Demis Hassabis, cofounder of DeepMind and now Google’s AI czar, in a post-keynote chat at I/O. “AI is going maybe three or four times faster than those other revolutions. We’re in a period of 25 or 30 years of massive change.” When I asked Google search VP Liz Reid to name a big challenge, she didn’t say it was to keep the innovation going—instead, she cited the difficulty of absorbing the pace of change. “As the technology is early, the biggest challenge is about even what’s possible,” she says. “It’s understanding what the models are great at today, and what they are not great at but will be great at in three months or six months. The technology is changing so fast that you can get two researchers in the room who are working on the same project, and they’ll have totally different views when something is possible.”

    There’s universal agreement in the tech world that AI is the biggest thing since the internet, and maybe bigger. And when non-techies see the products for themselves, they most often become believers too. (Including Joe Biden, after a March 2023 demo of ChatGPT.) That’s why Microsoft is well along on a total AI reinvention, why Mark Zuckerberg is now refocusing Meta to create artificial general intelligence, why Amazon and Apple are desperately trying to keep up, and why countless startups are focusing on AI. And because all of these companies are trying to get an edge, the competitive fervor is ramping up new innovations at a frantic pace. Do you think it was a coincidence that OpenAI made its announcement a day before Google I/O?

    Skeptics might try to claim that this is an industry-wide delusion, fueled by the prospect of massive profits. But the demos aren’t lying. We will eventually become acclimated to the AI marvels unveiled this week. The smartphone once seemed exotic; now it’s an appendage no less critical to our daily life than an arm or a leg. At a certain point AI’s feats, too, may not seem magical any more. But the AI revolution will change our lives, and change us, for better or worse. And we haven’t even seen GPT-5 yet.

    Time Travel

    Sure, I could be wrong about AI. But consider the last time I made such a call. In 1995, I joined Newsweek—the same organ where Clifford Stoll had just dismissed the internet as a hoax—and at the end of the year argued of this new digital medium, “This Changes Everything.” Some of my colleagues thought I’d bought into overblown hype. Actually, reality exceeded my hyperbole.

    In 1995, the Internet ruled. You talk about a revolution? For once, the shoe fits. “In the long run it’s hard to exaggerate the importance of the Internet,” says Paul Maritz, a Microsoft VP. “It really is about opening communications to the masses.” And 1995 was the year that the masses started coming. “If you look at the numbers they’re quoting, with the Web doubling every 53 days, that’s biological growth, like a red tide or population of lemmings,” says Kevin Kelly, executive editor of WIRED. “I don’t know if we’ve ever seen technology exhibit that sort of growth.” In fact, there’s a raging controversy over exactly how many people regularly use the Net. A recent Nielsen survey pegged the number at an impressive 24 million North Americans. During the course of the year the discussion of the Internet ranged from sex to stock prices to software standards. But the most significant aspect of the Internet has nothing to do with money or technology, really. It’s us.

    Steven Levy

  • Tech Leaders Once Cried for AI Regulation. Now the Message Is ‘Slow Down’

    The other night I attended a press dinner hosted by an enterprise company called Box. Other guests included the leaders of two data-oriented companies, Datadog and MongoDB. Usually the executives at these soirees are on their best behavior, especially when the discussion is on the record, like this one. So I was startled by an exchange with Box CEO Aaron Levie, who told us he had a hard stop at dessert because he was flying that night to Washington, DC. He was headed to a special-interest-thon called TechNet Day, where Silicon Valley gets to speed-date with dozens of Congress critters to shape what the (uninvited) public will have to live with. And what did he want from that legislation? “As little as possible,” Levie replied. “I will be single-handedly responsible for stopping the government.”

    He was joking about that. Sort of. He went on to say that while regulating clear abuses of AI like deepfakes makes sense, it’s way too early to consider restraints like forcing companies to submit large language models to government-approved AI cops, or scanning chatbots for things like bias or the ability to hack real-life infrastructure. He pointed to Europe, which has already adopted restraints on AI as an example of what not to do. “What Europe is doing is quite risky,” he said. “There’s this view in the EU that if you regulate first, you kind of create an atmosphere of innovation,” Levie said. “That empirically has been proven wrong.”

    Levie’s remarks fly in the face of what has become a standard position among Silicon Valley’s AI elites like Sam Altman. “Yes, regulate us!” they say. But Levie notes that when it comes to exactly what the laws should say, the consensus falls apart. “We as a tech industry do not know what we’re actually asking for,” Levie said. “I have not been to a dinner with more than five AI people where there’s a single agreement on how you would regulate AI.” Not that it matters—Levie thinks that dreams of a sweeping AI bill are doomed. “The good news is there’s no way the US would ever be coordinated in this kind of way. There simply will not be an AI Act in the US.”

    Levie is known for his irreverent loquaciousness. But in this case he’s simply more candid than many of his colleagues, whose regulate-us-please position is a form of sophisticated rope-a-dope. The single public event of TechNet Day, at least as far as I could discern, was a livestreamed panel discussion about AI innovation that included Google’s president of global affairs Kent Walker and Michael Kratsios, the most recent US Chief Technology Officer and now an executive at Scale AI. The feeling among those panelists was that the government should focus on protecting US leadership in the field. While conceding that the technology has its risks, they argued that existing laws pretty much cover the potential nastiness.

    Google’s Walker seemed particularly alarmed that some states were developing AI legislation on their own. “In California alone, there are 53 different AI bills pending in the legislature today,” he said, and he wasn’t boasting. Walker of course knows that this Congress can hardly keep the government itself afloat, and the prospect of both houses successfully juggling this hot potato in an election year is as remote as Google rehiring the eight authors of the transformer paper.

    The US Congress does have legislation pending. And the bills keep coming—some perhaps less meaningful than others. This week, Representative Adam Schiff, a California Democrat, introduced a bill called the Generative AI Copyright Disclosure Act of 2024. It mandates that large language models must present to the Copyright Office “a sufficiently detailed summary of any copyrighted works used … in the training data set.” It’s not clear what “sufficiently detailed” means. Would it be OK to say, “We simply scraped the open web”? Schiff’s staff explained to me that they were adapting a measure from the EU’s AI Act.

    Steven Levy

  • Here’s Proof the AI Boom Is Real: More People Are Tapping ChatGPT at Work

    Ever since the rollout of ChatGPT in November 2022, many people in science, business, and media have been obsessed with AI. A cursory look at my own published work during that period fingers me as among the guilty. My defense is that I share with those other obsessives a belief that large language models are the leading edge of an epochal transformation. Maybe I’m swimming in generative Kool-Aid, but I believe AI advances within our grasp will change not only the way we work, but the structure of businesses, and ultimately the course of humanity.

    Not everyone agrees, and in recent months there’s been a backlash. AI has been oversold and overhyped, some experts now opine. Self-styled AI-critic-in-chief Gary Marcus recently said of the LLM boom, “It wouldn’t surprise me if, to some extent, this whole thing fizzled out.” Others claim that AI is mired in the “trough of disillusionment.”

    This week we got some data that won’t resolve the larger questions but provides a snapshot of how the US, if not the world, views the advent of AI and large language models. The Pew Research Center—which did similar probes during the rise of the internet, social media, and mobile devices—released a study of how ChatGPT was being used, regarded, and trusted. The sample was taken between February 7 and 11 of this year.

    Some of the numbers at first seem to indicate that the LLM controversy might be a parochial disagreement that most people don’t care about. A third of Americans haven’t heard of ChatGPT. Just under a quarter have used it. Oh, and for all the panic about how AI is going to flood the public square with misinformation about the 2024 election? So far, only 2 percent of Americans have used ChatGPT to get information about the presidential election season already underway.

    More broadly, though, data from the survey indicates that we’re seeing a powerful technology whose rise is just beginning. If you accept Pew’s sample as indicative of all Americans, millions of people are indeed familiar with ChatGPT. And one thing in particular stands out: While 17 percent of respondents said they have used it for entertainment and an identical share said they have tried it to learn something new, a full 20 percent of adults say that they have used ChatGPT for work. That’s up dramatically from the 12 percent who responded affirmatively when the same question was asked six months earlier—a rise of two-thirds.
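    The two-thirds figure is just the relative change between the two survey shares: going from 12 percent to 20 percent is an increase of 8/12, or roughly 67 percent. A minimal sketch of that arithmetic (the percentages are Pew’s; the helper function is illustrative, not from the study):

```python
def relative_increase(old_pct: float, new_pct: float) -> float:
    """Return the fractional increase from old_pct to new_pct."""
    return (new_pct - old_pct) / old_pct

# Share of US adults who say they've used ChatGPT for work (Pew):
rise = relative_increase(12, 20)  # (20 - 12) / 12
print(f"{rise:.0%}")              # prints 67%
```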

    When I spoke to Colleen McClain, a Pew research associate involved in the study, she agreed that it seems to track with other huge technological shifts. “If you look at our trend charts over time on internet access, smartphones, social media, certainly some of them show this uptick,” she says. For some technologies there had been a leveling off, she adds. But in the ones she mentioned, the plateau came only when so many people came on board that there weren’t many stragglers left.

    What’s crazy about that sudden jump in ChatGPT business use from 12 percent to 20 percent is that we’re only at the beginning stages of humans collaborating with these models. And the tools to fully make use of ChatGPT are in a nascent state. That’s changing fast. OpenAI, ChatGPT’s creator, is going full tilt, and AI giants Microsoft and Google are still in the process of diverting their workforces to redesign every product line to integrate conversational AI. And startups like Sierra, which is building agents for corporate customers, are enabling bespoke usages that take advantage of multiple models. As this process continues, more people will use AI tools. And since the foundation models are getting exponentially better—am I hearing that GPT-5 will show up this year?—that will make them even more compelling. This raises the possibility that the quality of virtually all work will depend on how well one can draw out the talents of a robot collaborator.

    What past technology can help us understand the trajectory of the rocket ship we’re on? While the near limitless ceiling of AI makes it hard to find an analog, I suggest the uptake of spreadsheets. Dan Bricklin and Bob Frankston invented them in 1978, and a year later the concept was embodied in VisiCalc, which at the time ran only on the Apple II. Spreadsheets had a phenomenal and disruptive effect on the business world. More than mere accounting tools, they triggered an era of business innovation and shook up the flow of information inside companies. Yet it took a few years before the business world widely adopted spreadsheets. The turning point came with a new and more powerful product called Lotus 1-2-3, which ran on the IBM PC. The current and near-future startups in the AI world, like Sierra, are all hoping to become the Lotuses of our era—but also to be much more consequential and lasting. Spreadsheets are largely limited to the business domain. LLMs can seemingly mess with anything.

    Steven Levy

  • The Mindblowing Experience of a Chatbot That Answers Instantly

    If all that is true—and there’s no way to tell right now—Groq might well pose a threat to the dominance of Nvidia. Ross is careful when discussing this. “Let’s be clear—they’re Goliath, and we’re David,” he says. “It would be very, very foolish to say that Nvidia is worried about us.” When asked about Groq, though, Nvidia’s prompt response indicates that the startup is indeed on its radar. With near-Groq-like speed, the Goliath’s PR team sent me a statement indicating that Nvidia’s AI advantage is not only in its chips but in other services it provides to customers, like AI software, memory, networking, and other goodies. “AI compute in the data center is a complex challenge that requires a full-stack solution,” it says, implying that its unnamed competitor might be stack-challenged.

    In any case, Ross says he’s not competing with Nvidia but offering an alternative experience—and not just in terms of speed. He’s on a mission to make sure that Groq will deliver fair results unsullied by political point of view or pressure from commercial interests. “Groq will never be involved in advertising, ever,” he says. “Because that’s influencing people. AI should always be neutral, it should never tell you what you should be thinking. Groq exists to make sure everyone has access. It’s helping you make your decision, not its decisions.” Great sentiments, but even the Groq chatbot, when I quizzed it about early-stage idealism, is skeptical about such claims. “The pressure to generate profits and scale can lead even well-intentioned founders to compromise on their ideals,” it promptly replied.

    One other thing. You may have heard that Elon Musk has given the name “Grok” to the LLM created by his AI company. This took Ross by surprise, since he says he trademarked the name of his company when he founded it in 2016, and he believes it covers the phonetically identical original term. “We called dibs,” he says. “He can’t have it. We’ve sent a cease-and-desist letter.” So far he hasn’t gotten a response from Musk.

    When I asked Groq about the name dispute, it first cautioned me that it doesn’t provide legal opinions. “However, I can provide some context that may help you understand the situation better,” it said. The bot explained that the term grok has been used in the industry for decades, so Musk would be within his rights to use it. On the other hand, if Groq trademarked the term, it might well have an exclusive claim. All accurate and on the mark—everything you’d expect from a modern LLM. What you would not expect was that the reply appeared in less than a second.

    Time Travel

    In my book on Google, In the Plex, I explained how the company, and its cofounder Larry Page, prioritized speed and recognized that faster products are used not only more often, but differently. It became an obsession within Google.

    Engineers working for Page learned quickly enough of [his speed] priority. “When people do demos and they’re slow, I’m known to count sometimes,” he says. “One one-thousand, two one-thousand. That tends to get people’s attention.” Actually, if your product could be measured in seconds, you’d already failed. Paul Buchheit remembers one time when he was doing an early Gmail demo in Larry’s office. Page made a face and told him it was way too slow. Buchheit objected, but Page reiterated his complaint, charging that the reload took at least 600 milliseconds. (That’s six-tenths of a second.) Buchheit thought, You can’t know that, but when he got back to his own office he checked the server logs. Six hundred milliseconds. “He nailed it,” says Buchheit.
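    Buchheit’s server-log check generalizes: wrap the operation in a high-resolution timer and report milliseconds. A minimal sketch in Python (the reload function here is a made-up stand-in, not Gmail’s actual handler):

```python
import time

def time_ms(fn, *args, **kwargs):
    """Run fn and return (result, elapsed time in milliseconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return result, elapsed_ms

# Stand-in for the operation being profiled (e.g., a page reload).
def reload_inbox():
    time.sleep(0.05)  # simulate 50 ms of work
    return "ok"

result, ms = time_ms(reload_inbox)
print(f"{result} in {ms:.0f} ms")
```

By Page’s standard, anything measured in whole seconds has already failed; timing in milliseconds is the point.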

    Steven Levy

  • My Quest to Fix a Crashing Roku App Provides a Warning About AI

    Two words in this statement popped out to me like a flying dinosaur in a mixed-reality headset: when possible. When I flagged this in a subsequent call, Roku reassured me that a fix for my issue will happen. In the worst-case scenario, if the problem isn’t solved in the next OS, sufferers will be given some incantation to have their televisions rolled back to the previous operating system. (Does this mean we’re back to hitting that home button five times?) And if that doesn’t work, which Roku says totally won’t be the case, the company will find some way to satisfy everyone. It was ready to satisfy me right away, offering me a new TV. I declined, since Roku wasn’t offering one to everyone whose Netflix was crashing.

    I think Roku is dealing in good faith. I’d been happy with my Roku-powered smart TV, until I wasn’t because it kept crashing. I take Roku at its word that it’s working on the problem and might actually fix it. I acknowledge that updating software on a static platform like a television set is a particular challenge. And God knows how common bugs are in software.

    In any case, my inability to stream Netflix without resetting the TV every time I watch a movie is a pretty trivial problem. And you know what? Even if I never watched Netflix again, I’d live. Now that Netflix has added advertising to its business model, I’m dreading the day when everyone on the service is exposed to endless commercials, unless we pay even more than the already out-of-control monthly fee. Beef was great, but I’d pass if every 10 minutes it was interrupted by pharma ads.

    Nevertheless, my Roku problem is a warning. Artificial intelligence is thrusting us into an era that intertwines our lives with digital technology more than ever. If you think that our current software is complicated, just wait until everything works on neural nets! Even the people who create those are mystified about how they work. And, boy, can things go wrong with that stuff. Just this week, OpenAI suffered a few hours where its chatbots blurted out incoherent comments, evoking the word salad of a stroke victim or the Republican front-runner. And Google had to temporarily stop its Gemini LLM from generating images of people, because of what it called “historical inconsistencies” in how it depicted the diversity of humanity. These are disturbing portents. We’re now in the process of turning over much of our activities to these systems. If they fail, “community discussions” won’t save us.

    Time Travel

    Digital technology is too damn complicated, and we’re doomed to a life of bug-resolution. That was my observation 30 years ago when I wrote Insanely Great, in a passage spurred by a freezing problem I had with my Macintosh IIcx. As the Mac operating system struggled to handle a complicated ecosystem of extensions, boundary-pushing applications, and data at a scale the original had not imagined, bugs appeared that required Sherlock Holmes–level sleuthing to resolve.

    This was the background to my Macintosh troubles: the computer had become more complicated than anyone had imagined. I enacted a short-term fix, stripping the system of possible offenders. I was stepping back in time, making the Mac emulate the simpler, though less useful, computer I once had. As I wiped out Super Boomerang, Background Printing, On Location and Space Saver, I pictured myself as Astronaut Dave in 2001, determinedly yanking out the chips in the supercomputer H.A.L., with the uncomfortable feeling that I was deconstructing a personality. When I finished my Macintosh IIcx was not so atavistic as to sing “Daisy,” but it was, in a Mac sense, no longer itself. On the other hand, it no longer hung.

    Steven Levy
