I can remember my life before and after I saw the interview. In one of several promo radio chats for The Life of a Showgirl, Taylor Swift revealed her favorite lyric from the new album: “There’s a song called ‘Father Figure,’ where the first line of the second verse says, ‘I pay the check / Before it kisses the mahogany grain.’” She pauses and grins as if waiting for a gasp. It never arrives. She continues, undeterred: “I’m like, That’s my favorite type of writing, right? Where you have to think about, What do those words mean? Oh, somebody got the bill before it hit the table.”
Watching her explain the word mahogany, I knew I was doomed, both personally and as part of a larger species. I saw God himself signing the check for our obliteration (before it hit the table). It’s not just that Taylor Swift — one of our greatest aughts-era songwriters, who used to effortlessly shed lines like “You call me up again just to break me like a promise / So casually cruel in the name of being honest” — thought these lyrics constituted good descriptive writing. It’s not just that she smiles so proudly while providing an explicit description of paying a restaurant bill very quickly. It’s not just that the album also features the line “Did you girlboss too close to the sun?” and a startling, detailed account of how she plans to start a neighborhood full of children with inherited CTE with the help of her fiancé’s gigantic football dick (“Have a couple kids / Got the whole block looking like you”). It’s that Swift wrote that mahogany line thinking, This is going to require a level of semiotic thinking that my audience is perhaps no longer capable of. And the thing is, she was absolutely correct. —Rachel Handler
Within the first three minutes of Untamed, a 2025 Netflix drama about crime in the wilderness, two climbers scaling a mountain realize they’re in a tough spot. One of their anchors wobbles. A foot slips. Things already look iffy and then, from far above them, a dead body comes hurtling over the edge of the cliff, gets tangled in their ropes, and sends them careening off the rock face. Another Netflix show, Wayward, opens with a young man sprinting frantically out of what looks like a prison facility, covering his ears so he can’t hear the cultish mind-control texts being blasted at him from somewhere in the darkness. And in the first scene of Pulse (you guessed it — also Netflix), a school bus full of teens plunges off a bridge into a stormy ocean.
From a distance, it looks like a good strategy. Grab the audience instantly. Leave no space for viewers to feel bored or unengaged. Front-load the first 90 seconds of any new drama with peril, death, catastrophe, and contextless clues. Netflix is the worst but not the only offender here. This whack-you-with-a-plot mentality has proliferated on Prime this year, too. Take Ballard, which begins with Maggie Q holding an enormous gun while chasing someone through darkened streets and shattering a glass window before the guy sprawls on the floor in front of her. We Were Liars drops us into underwater footage of an unconscious woman with a head wound as the voice-over says, “Something terrible happened last summer, and I have no memory of what, or who, hurt me.” Each opening gambit becomes an advertisement for the thing you’re already watching, a blast of spoon-fed emotional stakes that treat viewers as mindless, tasteless sacks of nerve endings sensitive only to the highest-grade stimuli.
It all comes off as a cynical bid for attention based on an understanding that audiences do not react to insight or nuance or thoughtful tone-setting. No need to question, no need to wait for gratification. Even the shows aiming for prestige have to play along, at least in those first few minutes. The Beast in Me, a Claire Danes thriller, will show Danes, streaked with blood and wailing, scant seconds after we first hit “play.” For series that want to dodge the obvious choices of “person running through woods,” “person drowning,” or “instant discovery of corpse,” House of Guinness provides a model that’s somehow even more ridiculous than those. The show, about a somber, political Succession-style struggle over the future of the family business, doesn’t lend itself well to bodies falling off a cliff, so it cuts straight to big, flashing, wall-décor-style onscreen text that articulates exactly what this thing is about: Water. Malted Barley. Hops. Yeast. Copper. Oak. Fire. Family. Money. Rebellion. Power. —Kathryn VanArendonk
If the essential quality of good theater (as my colleague Sara Holdren has written) is that it should be something that can happen only in a theater — that it’s alive in the room with you, capable of literally leaping into the audience should the participants decide to do so — the relentless creep of giant glittering screens is its opposite. Now a staple of set design, the device does often serve some purpose: Jamie Lloyd’s production of Sunset Blvd. and George Clooney’s turn in Good Night, and Good Luck earlier this year, or Network and 1984 a few seasons back, deploy them to talk about issues of image and reality and surveillance. But in the actual room, the eye almost inevitably goes to the moving jumbotron image instead of the person, whether it’s a tracking shot of Nicole Scherzinger or merely projected clouds drifting behind the cast. When it’s a live feed, an extra problem can come into view: Because stage performance calls for bigger gestures and expressions than acting for the camera does, a real Broadway belter’s face can show up onscreen as a lot of straining neck cords and visible tonsils. The theatrical stage is the one place where — over 2,500 years or so — practitioners have figured out how to convey storytelling directly from one person to a roomful of viewers, fusing music and drama and comedy and dancing in three full dimensions. Now, somehow, we’ve pushed it back to two. —Christopher Bonanos
Earlier this year, I signed up to teach a course at the same prestigious university I’d attended more than a decade ago. The syllabus I prepared required students to read a short book for several of our sessions, which seemed reasonable. When I was in college, professors routinely assigned an entire novel or biography for a single class session.
A few months before the semester began, my proposed syllabus was reviewed by an academic committee. I was excited for feedback from experienced instructors, anticipating strong opinions on thematic consistency with pedagogical objectives and general rigor. But the only feedback I received was to make the readings shorter. The suggested limit was fewer than 100 pages per class, ostensibly to encourage accountability. I revised the syllabus. Narratives with movement and arc became excerpts and snapshots, curated to relay the essence and little else.
Is this really so bad? The truth is that when I was assigned a full book to read in college, I failed to finish it more often than not. But there was something in being told to try anyway, in the implication that a book worth assigning is worth experiencing in its entirety, and that the truth is best when distilled from the whole story. Students, meanwhile, are the same as ever. The ratio might have changed, but there is still a core of students who read and participate diligently, and I wish they could have reaped more benefit from my assignments. The rest have not done the truncated readings any more than they would have read a full book, but now they feel less guilty about it. —Anonymous
Watching debates is not a good way to learn things and form opinions about those things — change my mind! Over the past few years, debate as an activity has broken out of high-school extracurriculars, political elections, and cable news and has come to infect media and discourse at all levels. And now, it has escaped the manosphere containment zone. Debate content was once the limited domain of “Debate me, coward” dweebs like Ben Shapiro and Jordan Peterson, but in 2025, debate clips took over the internet, their snippets edited to reinforce the biases of the poster: Sam Seder “owning” Ethan Klein on leftist news feeds, the reverse on Zionist ones; Mehdi Hasan arguing with, essentially, Nazi youth. Outside the Twitch streams of individual debate-content creators like Destiny, much of this stuff comes from Jubilee, a digital-media company with 10.5 million YouTube subscribers that professes a corporate mission to “provoke understanding & create human connection.” Jubilee structures these oratorical face-offs like dystopian MrBeast challenges: “1 Conservative vs 20 Feminists,” starring Candace Owens. “1 Conservative vs 25 LGBTQ+,” starring Michael Knowles. The guest debater sits at a table with a chess timer in the middle of a circle of challengers, who enter the ring one after another to get TKO’d in a sort of battle royal for dorks.
The thing about debate as a rhetorical format is that it’s generally a dumb way to consume information. Winning strategies are often not intellectually curious or even honest: spreading, an overreliance on hounding an opponent about logical fallacies, overwhelming with a rapid-fire litany of (possibly incorrect) data, dodging, and needling. They’re more about persuasion than communication, more about building a case backward from a preordained point than building up toward something. When Charlie Kirk argues that trans women aren’t women (against 25 liberal college students), or when Mehdi Hasan faces off against 20 far-right conservatives, at least two of whom turned out to be self-avowed fascists, a series of hateful, harmful lies gets repackaged into “points,” like neat little coins in a video game, toward settling a larger score. It’s brain rot with a veneer of serious infotainment. —Rebecca Alter
Rebecca Yarros occupies a rarefied spot on the best-seller list. Twelve million copies of her horny dragon books have sold in the U.S. in less than two years. The Empyrean series, which follows a young woman surviving military school with the help of a mind-body connection to a pair of dragons, was initially planned for three volumes, then stretched to five. When the third, Onyx Storm, appeared in January, it became the fastest-selling adult novel in 20 years — a curious fact given that the book is borderline incomprehensible. Of course, few readers flock to this series for its prose, but the first volume’s war-college setting, where students gather in the quad every morning to honor their peers who died the day before, scratched a dystopian-fantasy itch I hadn’t felt since completing the original Hunger Games trilogy. Onyx Storm, however, is packed with so many new characters, locations, and magical abilities that I had to use a fan-made guide to keep track of it all. A quarter of the way through, I lost track of why exactly the main characters abandon the war to end all wars brewing in their homeland to travel halfway around the world, and I eventually stopped trying to understand it altogether. —Julie Kosin
When the first Jurassic Park premiered in 1993, reviewers found plenty to admire — its originality, the cinematography, Laura Dern. But a more consistent point of praise was how the movie, in many ways taking its cues from the novel it was based on, committed to accurate, or at least plausible, science. “It was the most scientific and realistic vision of dinosaurs we’d ever had,” paleoartist John Gurche told Le Monde earlier this year. One historian wrote that the film “did actually drive and develop the science and technology of ancient DNA research.” That has changed somewhat — we now know many dinosaurs had feathers and velociraptors were built like poodles — but even now, Dern and an ascot-adorned Sam Neill manage to deliver lines that are credible to the average fan with a museum-placard level of paleontology knowledge.
This is part of the reason why, when Jurassic World: Rebirth came out earlier this year, fans were disappointed not only with its meandering plot but also with the way the film’s principal paleontologist, Dr. Loomis (a distractingly hot Jonathan Bailey), occasionally felt, let’s say, unconvincing. “The greatest scientific knowledge that he demonstrates at any point in the film is high-school level biology,” wrote an aggrieved redditor. One paleontologist speculated that “as opposed to the first film — no paleontologist had been seriously consulted.” (The movie does credit a scientific consultant.) Of course, all six Jurassic sequels have had their scientific follies (hello, mutant locusts of Jurassic World: Dominion). But the plot of Rebirth was science-fudging less in the name of spectacle than of convenience. I will spare you the entire plot, but know that it relies in part on the idea that dinosaurs can live only near the equator — a detail repeated three times in the film’s first 30 minutes — because of the warm climate and “oxygen-rich” atmosphere, which, Loomis says, is similar to what the climate was like 60 million years ago. If that sounds overly simplistic, don’t worry — it’s also just wrong. Oxygen levels today are fairly uniform worldwide and roughly the same as those in the age of the dinosaurs, and dinosaurs themselves lived in a wide range of climates. Other grievances include the fact that mosasaurs, the movie’s main species, aren’t actually dinosaurs and that, no, dinosaurs didn’t live to be centuries old because of their big hearts. Fortunately, Loomis offers another kernel of wisdom: “Intelligence is massively overrated as an adaptive trait.” —Paula Aceves
A 2024 Pew survey revealed that the group of U.S. adults most likely to consult astrology at least yearly is LGBTQ+ women, at 63 percent. Pew must not have surveyed anyone in Brooklyn: Based on my own observations, I would put the number at something closer to 102 percent. Belief in superstition and magic has peaked among my friends. They no longer just consult the planets and stars and tarot decks and Chani Nicholas; now they believe in moon phases exerting their control, bringing good and bad auspices and explaining why a Hinge date went a certain way or why everyone at work has the sniffles. I could abide tarot and astrology as tools for people to talk about their lives, but the moon stuff, to me, comes across as a symptom of some widely adopted serf mind-set, a response to the economic realities of widening wealth gaps and billionaires acting like sun gods. Throw in the rise in stories about AI-enabled religious psychosis, and the transformation of Etsy into Taskrabbit for witches, and 2025 was the year of people literally believing in ghosts in the machine. —R.A.
Typically, the Supreme Court has decided the weightiest matters through its “merits docket”: a multistep, sometimes yearslong process that involves the parties to a suit, plus interested experts and organizations, taking their best shot at making their case in writing. The justices grill advocates during oral argument and, when they’re ready, can write hundreds of pages to explain their reasoning and provide evidence and case law to back it up. It’s not that this process cannot yield outrageous or specious results, but at least the majority has to give the public an explanation.
This year, a more expedient track has been found. First, the Trump administration openly breaks the law as it has long been understood, then a lower-court judge rules against it, and then the administration appeals on the “shadow docket” — which avoids the normal briefing-and-hearing process by claiming an emergency. Since January, according to the Brennan Center for Justice, the Supreme Court has ruled at least partly for the Trump administration in 20 of an unprecedented 23 emergency appeals. In these late-night orders, the public is lucky to get a few sentences of justification. Seven had no written rationale at all. In one dissent on terminating federal grants, Justice Elena Kagan called the majority opinion’s reasoning “at the least underdeveloped, and very possibly wrong.” —Irin Carmon
The word interview used to mean something. At the very least, it implied a conversation aimed at extracting real information. That idea feels quaint in today’s video-driven media environment, in which the balance of power has flipped: Famous guests hold the power because they now have a million friendly alternatives, and hosts are just grateful to be there. The modern “interview” is thus fluffy by default, oriented more toward gimmicks, get-to-know-me games, and general sycophancy. Intentional dumbness is now virtue-signaling relatability. Beneath it all is a dynamic in which the aesthetic of the interview (people in chairs with microphones between them recalling a history of more serious images) carries more weight than the interview’s substance. That dynamic reached a peak this year when Benjamin Netanyahu appeared on the bro-y, sports-and-bullshit-heavy Full Send Podcast with the Nelk Boys in July. The segment, criticized for offering a soft platform to a world leader amid a devastating humanitarian crisis, included the following exchange:
Interviewer: “You ever tried Chick-fil-A?”
Netanyahu: “Chick-fil-A is good, actually.”
—Nicholas Quah
All around me this year, I’ve observed more and more people succumbing to the ease and inaccuracy of Google’s automated summary. I first noticed the tendency to rely on AI for answers a few years ago on a family trip when an early-adopting relative told me it would be simpler to ask ChatGPT why Union soldiers won the Battle of Gettysburg than to look into a dreary, more detailed article. Ever since Google dropped AI search into everyone’s hands, it’s felt as if we’ve entered a new era: one in which people know they’re consuming misinformation and just don’t care. I talked to a friend who told me she spent her time in a historic castle while on vacation in Portugal asking AI to explain what was in front of her. “It was probably wrong,” she told me, “but it captured enough of the vibe.” I knew we’d crossed the Rubicon when I noticed people asking AI subjective artistic questions. This past summer, I sat next to a woman at a performance of Evita who opened her phone at intermission, typed in “Why does Che narrate Evita,” and then stared at the box as if it would help her understand Andrew Lloyd Webber and Tim Rice’s decision-making process. It did not. —Jackson McHenry
It’s not just that Andrew Cuomo seems to hate New York City. Nor is it that everyone knows he sexually harassed former staffers. It’s not even that he was spanked in the primary and insisted on running anyway. The stupidest thing about Cuomo’s mayoral campaign, besides the fact of its existence, was his team’s wholehearted embrace of AI slop. In the spring, the campaign released a housing plan that turned out to have been put together using ChatGPT, then blamed this decision, and the plan’s typos, on an aide who has only one arm. That humiliation did not stop them. Cuomo’s people followed it up with a parade of AI-generated ads. The first featured an AI Cuomo incompetently driving a subway train and melting down on the NYSE trading floor paired with footage of the real Cuomo saying woodenly, “I know what I know, and I know what I don’t know.” How was emphasizing his inabilities supposed to help? No one explained, but this ad was nothing compared with those that followed: One blatantly racist, soon-deleted video featured an over-the-top AI-generated parade of “criminals” — including, incredibly, a Black shoplifter in a keffiyeh and a Black pimp with a van full of battered white women — cheerfully boosting Zohran Mamdani. On Halloween, Cuomo’s campaign released an ad showing an AI-generated Mamdani trick-or-treating and scooping big handfuls of candy out of a bowl offered by an appalled couple while crowing, “I’m a socialist! Some people need to get tricked so others get a treat!” These videos are so bad that, even while we watch them, it’s hard to believe they exist — that actual people were paid actual money to release them. It’s even harder to believe they thought the ads would make this loser win. —Madeline Leung Coleman
The Editors
