Will, Netflix’s imported Belgian movie about the moral impossibility of life under Nazi occupation during World War II, announces itself with shocking bluntness. Within its first 10 minutes, it’s made clear that co-writer and director Tim Mielants intends to confront the grisly horrors of the Holocaust head-on. But it’s also apparent that the film is constructed more like a thriller than a somber drama, and it tightens the screws on its lead character — young policeman Wilfried Wils (Stef Aerts) — in a series of breathless setups with escalating stakes.
It’s an effective way to pull viewers into empathizing with the awful dilemmas faced by an occupied population, and into bearing fresh witness to familiar horrors. But the thriller genre sets up expectations — climax, catharsis, redemption — which risk trivializing the material, and set something of an ethical trap. Who’s going to fall into it: the filmmakers, or the audience? Mielants is too tough-minded to be caught, it turns out, but that’s bad news for the rest of us. Will nurses a glimmer of hope in the darkness, only to snuff it out completely. This is a bleak, bleak movie.
It’s 1942, and Wil (referred to in the subtitles by the Dutch spelling of his name, despite the English title Will) and Lode (Matteo Simoni) are fresh recruits to the police force in the port city of Antwerp. Before their first patrol, their commanding officer, Jean (Jan Bijvoet), hands out regulation platitudes about the police being “mediators between our people and the Germans.” Then he sheds that pretense and offers some off-the-record advice: “You stand there and you just watch.” The ambiguity of these words echoes through the whole movie. Is it cowardice to stand by and watch the Nazis at work, or heroism to refuse to cooperate with them? Are the occupied Belgians washing their hands of the Nazis’ crimes, or bearing witness to them?
Wil and Lode don’t have long to contemplate these questions. No sooner have they left the station on their first patrol than a ranting, drugged-up German soldier demands they accompany him on the arrest of some people who “refuse to work”: a Jewish family, in other words. The young men are initially paralyzed by the situation, but things spiral out of control, more through desperation than heroic resistance on the part of the two policemen. In the aftermath, Lode and Wil return to work in a state of paranoid terror.
Mielants, working with screenwriter Carl Joos from a novel by Jeroen Olyslaegers, wastes no time in using this premise to explore the paranoid quagmire of the occupied city. Can the two young men trust each other? Where do their sympathies lie? Wil’s civil-servant father leads him to seek help from local worthy Felix Verschaffel (the excellent Dirk Roofthooft), who boasts of being friends with the Germans’ commanding officer, Gregor Schnabel (Dimitrij Schaad). Suddenly, Wil is indebted to a greedy, antisemitic collaborator.
Meanwhile, Lode’s mistrustful family — especially his fiery sister Yvette (Annelore Crollet) — want to know more. Does Wil speak any German at home? What radio station does he listen to? In occupied Antwerp — a region where German and French phrases naturally mix in with the local Dutch dialect — an innocent choice of word or of leisure listening comes freighted with dangerous political significance. “There isn’t much on the radio,” Wil responds. “Can you recommend something?”
Time and again during the movie, Wil uses deflections like this to squirm out of taking a position on the occupation. But eventually, he starts working to save Jewish lives. Actions may speak louder than words, but even in the teeth of a febrile affair with Yvette, Wil continues to keep his words to himself. As Schnabel’s net closes in, Wil’s caution keeps him and his friends alive, but the cost is heavy.
It’s a bold move to center a thriller about the Holocaust on a protagonist who, on some level, refuses to pick a side. We can only empathize with Wil because Mielants so effectively loads almost every scene and line of dialogue with implicit threat. Will is a tense, dark, frightening movie, filmed claustrophobically in a boxy ratio with lenses that blur the edge of the frame. The acting is intense (sometimes to a fault), and there are frequent bursts of unpleasant, graphic violence as the pressure builds.
But even though Schaad sometimes seems to be doing a weak impression of Christoph Waltz’s Hans Landa in Quentin Tarantino’s Inglourious Basterds, Will isn’t that movie, and Mielants isn’t interested in Tarantino’s style of catharsis. At the end of the movie, the vicious, inescapable trap he set for all the characters simply snaps shut. Will shows that under the remorseless illogic of Nazi occupation, survival is collaboration, and resistance is death.
That’s a miserable payload for the movie to carry, and it’s debatable how constructive it is. Jonathan Glazer’s chilling The Zone of Interest, currently in theaters, shows that challenging new perspectives on the human mechanics of the Holocaust are as essential now as they have ever been. Thirty years ago, Schindler’s List achieved something similar, and just as necessary, through radically different means: It found a thread of hope and compassion that could lead a wide audience into the heart of the nightmare and throw it into relief.
Will is too burdened by its point of view to manage anything similar. It’s clear-sighted on the cruel compromises of occupation and collaboration, but so fatalistic about them that it winds up wallowing in its own guilt and hopelessness. That’s a dark kind of truth, and not necessarily one that anyone needs to hear.
For the past week, I’ve been watching Goodreads drama happen in what feels like slow motion. Debut author Cait Corrain admitted to fabricating at least six Goodreads user accounts, and leaving negative reviews (including one-star ratings) of other debut authors’ books — many of whom were authors of color. On Monday, her publisher dropped her book Crown of Starlight, and Corrain posted a mea culpa on X (formerly Twitter).
The coordinated efforts of fans and authors helped expose Corrain’s review bombing. Last week, Iron Widow author Xiran Jay Zhao tweeted a thread noting a series of one-star reviews on the Goodreads pages of debut science fiction and fantasy authors’ books, without naming any names. They also shared a 31-page document of unknown origin (which Polygon reviewed) that contained screenshots of accounts that added Crown of Starlight to a number of most-anticipated lists, and left one-star reviews on forthcoming books by Kamilah Cole, Frances White, Bethany Baptiste, Molly X. Chang, R.M. Virtues, K.M. Enright, and others.
This once again brings Goodreads’ moderation issues to the fore. When reached for comment, a Goodreads spokesperson sent Polygon a statement: “Goodreads takes the responsibility of maintaining the authenticity and integrity of ratings and protecting our community of readers and authors very seriously. We have clear reviews and community guidelines, and we remove reviews and/or accounts that violate these guidelines.” The company added, regarding Corrain’s one-star reviews, “The reviews in question have been removed.” Goodreads community guidelines state that members should not “misrepresent [their] identity or create accounts to harass other members” and that “artificially inflating or deflating a book’s ratings or reputation violates our rules.” But it doesn’t explain how those guidelines are enforced.
Goodreads also pointed Polygon to an Oct. 30 post about “authenticity of ratings and reviews,” which said the company “strengthened account verification to block potential spammers,” expanded its customer service team, and added more ways for members to report “problematic content.” The company addressed review bombing and “launched the ability to temporarily limit submission of ratings and reviews on a book during times of unusual activity that violate our guidelines.”
Presumably, these measures were put in place after several especially high-profile instances of review bombing on the platform this year. But these new tools did not prevent Corrain from review bombing authors in November and December. The guidelines, including the October update, ask users to “report” content that “breaks our rules,” seemingly shifting responsibility onto the user base. It’s past time for Goodreads, which is owned by Amazon, to consider implementing more comprehensive in-house moderation — or at least more sophisticated internal tools — if not for the sake of its users, then for the sake of authors who are at the mercy of the platform.
Goodreads is extremely influential. There are over 150 million members on the platform, 7 million of whom participated in this year’s Reading Challenge. The platform also has few barriers against these sorts of review-bombing campaigns, as any user in good standing can post a review, including before the book has been published. Pre-publication reviews are part of the marketing cycle, and they are expressly allowed on Goodreads. Publishers encourage authors to get reviews on the Goodreads pages for their forthcoming books, particularly in the lead-up to release. Readers can access advance copies of books through official channels like NetGalley, or by receiving an advance reader copy from the publisher, but there’s no way to know whether a reviewer on Goodreads has actually obtained an advance copy or not. (Though Goodreads review guidelines require readers to disclose if they received a free copy, not all users follow those rules — basically, you can post your review regardless.)
This is obviously not an issue that’s novel to Goodreads, but many other platforms require some form of verification before reviewing. Etsy allows users to review a product after they purchase it. Steam only allows users to write reviews of products in their Steam library, and includes “hours played” in the review. The closest comparison to Goodreads I can think of is Yelp, which allows people to leave reviews of restaurants and other establishments, and which also has to handle waves of negative reviews — often involving complaints about things that are entirely out of that business’s control. As far as fan-review platforms for entertainment go, there’s Letterboxd, a platform where users can track and review films. But it doesn’t hold a candle to the cultural chokehold of Rotten Tomatoes, a platform that aggregates review scores from professionally published critics (while it also aggregates audience scores, those are listed separately). Rotten Tomatoes has its own issues, but its system does mean reviews don’t tend to come from people who have not even consumed the media in question.
As a casual browser on Goodreads, looking for a book to read, how do you know whether a reviewer actually read the book? I guess the answer, at least right now, is: You can’t. And as fans have become more sophisticated and coordinated on the internet, it’s become even harder to take the platform’s reviews and ratings seriously. In July, Eat, Pray, Love author Elizabeth Gilbert pulled her forthcoming book The Snow Forest — which was set in Russia — after some 500 users, who had not read the book, left one-star reviews. Gilbert is much more established and better resourced than the debut authors Corrain targeted. She nonetheless made the decision to pull her book.
These debut authors didn’t have the same power or cachet, and it’s painful to imagine how Corrain’s negative reviews could have impacted those authors’ book sales — and subsequently their opportunity to write any more books — had Corrain’s actions gone unnoticed. Publishing is full of enough hurdles as it is, especially for authors of color, without this huge one so close to the finish line.
Gone are the days of scrounging up loot at dilapidated taco joints and rusty playgrounds in Fortnite. Epic Games released a massive new update to the battle royale game this week as part of Chapter 5 Season 1. The patch literally blew up the OG map with a meteor, replacing it with an entirely new, much fancier map. Instead of rough locales like Greasy Grove or Tomato Temple, players now explore palatial manors like Lavish Lair or the manicured vineyards of Pleasant Piazza. Fortnite is basically a fancy European vacation now, and it feels a bit outside my personal budget.
Developers stuffed the new map with luxurious points of interest. Another example: Grand Glacier, a hotel nestled on a snow-capped mountain that looks like it’s straight out of Wes Anderson’s The Grand Budapest Hotel. If the mountains aren’t your thing, you can head over to the Ritzy Riviera, a picturesque shore-side town with villas nestled into a sloped hillside. At places like Classy Courts, decrepit playgrounds with broken concrete have been replaced with manicured hedges.
Call me a traditionalist, but I like to do dumb shit in Fortnite. I personally play as Kakashi from Naruto, and style him with an Among Us backpack as I regale other players with emotes like the Gangnam Style dance. Part of what made me fall in love with Fortnite was the garishness of it all. It’s a bright, cartoony game where you can go fishing with Ariana Grande, then turn around and scuffle with Goku. In the new season, a lot of that whimsy is still there: Peter Griffin is now a skin, and appears as an NPC you can fight. But that tone doesn’t seem to be reflected in the map, which forms a central part of the game.
It isn’t that previous maps were lacking in high-end locations. Prior to the return of the OG map, Chapter 4 added the cyberpunk-inspired Mega City and the sweeping Japanese estates of Kenjutsu Crossing. While Kenjutsu resembles the more elaborate locales in the current iteration of the game, some of those additions still evoked a sort of surrealism: Mega City’s sci-fi elements felt true to the less realistic elements of Fortnite.
All that said, locations are subject to change with each update. So it’s possible that further meteors or other ill fates might befall some of these fancy locales and bring back some of the good old Fortnite charm — rough hedges and all.
In mid-September, YouTube announced a collection of new artificial intelligence tools coming to the platform. The tools touch basically every part of the content creation process, from generating topics to editing and even generating video footage itself through the Dream Screen feature. But even as AI features have caused an uproar in so many other creative industries, the response to YouTube’s new suite of tools has been muted. Instead, YouTubers are sharing other concerns about the ways generative AI is already affecting the platform.
It’s been a watershed year as generative AI tools have made it easier to create images and text, all generated from internet scrapes of others’ art and writing. Artists and writers have typically pushed back, citing issues like copyright and their own work being undermined — in September, high-profile authors including George R.R. Martin and Jodi Picoult filed suit against OpenAI for scraping their books. And then there’s generative AI’s issues with hallucination and inaccuracies.
On the other side of the coin, these tools have been used by many people, either experimentally or professionally. Prizes have been won by AI art, while some news sites cut their staff and put out AI-generated articles. AI has also become a cornerstone of TikTok, particularly AI-powered filters. Creators use the Bold Glamour filter to apply makeup, a Ghibli filter to look like characters from the studio’s films, and even pay a fee for filters that generate themed avatars — like the hugely popular ’90s high school photo filter.
Maybe it’s the fact that YouTube’s tools aren’t available to the general public yet. But the quiet reception still seems to buck the trend. On the YouTube Creators account on X (formerly known as Twitter), the announcement picked up only a few hundred likes, performing on par with engagement-bait tweets like “how do you make your audience feel seen and heard?” On the main YouTube account, it performed worse than a tweet reading “stars are kinda just sky rocks.”
On the platform itself, it’s difficult to find videos discussing the tools at all, despite a thriving community of YouTubers who explain how to use AI tools in making videos — just not the ones announced by YouTube. Instead, these videos focus on explaining existing tools to generate scripts and voice-overs, and to create and edit together images for the video visuals. YouTube’s new tools basically give creators an in-house option for much of this: Creators will be able to generate video prompts and script outlines, automatically edit clips together, and create AI-voiced dubs into other languages.
The main potential draw is that these AI tools would generate content based on creators’ own historical output. For example, YouTube says the “insights” tool will be personalized so that new video ideas will take into account what a creator’s audience is already watching, something that other text generators can’t do without access to YouTube’s data. It also aims to recommend music for videos, including royalty-free music that hypothetically should help creators know what won’t get them troublesome copyright strikes.
But existing creators don’t seem particularly interested one way or the other. “No one’s heard of it yet,” says Jimmy McGee, a YouTuber who recently made a video titled “The AI Revolution is Rotten to the Core.” As the title might suggest, he’s not a huge fan of YouTube’s proposed tools, but he says it’s “strange” how they’ve been received.
He thinks it may be that these tools are mainly geared toward creators, and viewers may not notice if, for example, a video is edited with the help of AI. He doesn’t think the more obvious tools, like the melty generated visuals of Dream Screen, will take off in the long run. “People will get sick of those quick enough that it’s not really a problem,” he says. But the other tools might lead to longer-term issues in the creator space.
Viewers might not immediately notice if AI software is used to edit videos, but McGee worries that it will undermine those who actually use it. “It’s going to de-skill newer people on YouTube,” he says. Although he finds it unlikely that it will replace professional editors in its current form, it will prevent newer creators from growing their skills. YouTube is billing the feature as an easier way in for people who might not be as confident in their skills yet. It’s also aimed toward Shorts, YouTube’s vertical-video spinoff, so it might make things easier for those who only have their phones to edit on. But McGee thinks that relying on it may end up discouraging video creators in the long run as they struggle to grow creatively.
“I think the more decisions you can make in your video, the better the video can be,” says McGee. “Maybe it won’t be [at first], but the ceiling is higher. That’s what worries me. If someone goes in earnestly trying to use these tools, it’d be very sad to see them give up.”
That potential pitfall depends on whether YouTube’s tools stick around. Parent company Google has a habit of shuttering things — including features it has hyped up a lot more than this one. And generative AI is currently running at a loss for most companies. “We’re probably going to see a decline in its popularity pretty soon,” says media and fandom critic Sarah Z. “[In the meantime] I hope these tools are helpful to creators and serve as a way of empowering them to better execute videos that serve their visions rather than a way to undercut creators.”
But some creators already feel undercut by AI on the platform. Just before YouTube’s tool announcement, creator Abyssoft released a video about a potential case of plagiarism. In it, he detailed the similarities between a previous video he had put out and a video uploaded by a different channel and speculated on how AI could have been used to perform the theft, including using speech-to-text programs and AI voice-over software.
Contacted for comment, Abyssoft pointed out that this is already a widespread issue on the platform. In May, science communicator Kyle Hill spoke out against YouTube channels using AI to create unverified but attention-grabbing content on the site. These videos are often misleading and in some cases appear to copy topics that Hill himself had made videos on.
In his video, Abyssoft says that he isn’t sure what the solution to these issues is. But one thing he suggests is that YouTube should disclose when AI is being used in video creation. He’d also like to see “a punishment or strike system for people that fail to disclose and are proven to be using AI.”
This would be easier if it were YouTube’s own AI tools that were being used; the platform would already be aware. In response to a request for comment on whether Google was considering implementing this feature or any additional measures to avoid plagiarism and misinformation on the platform, Google policy communications manager Jack Malon stated that all content is subject to the existing community guidelines, and that these are “enforced consistently for all creators on our platform, regardless of whether their content is generated using artificial intelligence.”
Although Abyssoft considers some of the other generative AI tools potentially useful, like the music tool that could help creators avoid copyright issues, he continues to fear what easy access to AI tools might do to YouTube creators. “AI facilitates plagiarism in a way we haven’t seen before, and with a bit of effort it will soon become undetectable,” he says. “Competing in a sea of faceless AI channels will be a tough challenge for creators who make a living this way, as their upload cadence will be greatly outpaced by the AI.”
However, he doesn’t think that AI will necessarily produce interesting videos. “I’m assuming the tool that suggests video topics is only going to suggest ideas that it thinks will do well in the algorithm,” he says. “Things will get incredibly formulaic if [it’s] relied on too much.”
He does acknowledge that channels with technical content, such as his own speedrunning history videos, have the advantage of research and understanding that can’t be carried out by AI. McGee similarly feels somewhat protected by his own style. “My videos are messy and I like them that way,” he says. “I can make all the melty, weird visuals myself and make something I’m actually proud of.”
But other channels might not be able to survive. “Someone that covers current news will see AI channels upload videos before their editing is finished, since it can just scrape whatever articles have been published for the day and render out a video and voice-over in less than an hour,” says Abyssoft.
YouTube’s tools haven’t yet launched beyond a few test countries, so it’ll be some time until we see the impact they’ll have on the platform. But while creators have concerns that they might add new issues for both existing and upcoming video makers, they also have prior concerns about the use of AI that they feel aren’t being addressed by the platform. It seems to be these that are holding creators’ attention, not any new announcements.