DALLAS (AP) — Federal prosecutors say two Texas men plotted to take over a Haitian island, one going so far as joining the U.S. military to acquire training for an armed attack, with the goal of killing all the men and using the women and children for sex.
Gavin Rivers Weisenburg, 21, and Tanner Christopher Thomas, 20, who are from the Dallas area, were indicted Thursday on charges of conspiracy to murder, maim or kidnap in a foreign country, according to the U.S. Attorney’s Office in the Eastern District of Texas. They were also charged with production of child pornography over allegations they persuaded a minor to engage in sexually explicit conduct.
Attorneys for both men said Friday they will enter not guilty pleas.
“They never tried to do any of this,” said John Helms, who is Thomas’ attorney.
An indictment filed in a Texas federal court accuses the men of planning to recruit the homeless to join their coup in Haiti, buy a sailboat and seize power on Gonave Island, which has about 87,000 residents. It covers roughly 290 square miles (751 square kilometers) and is the largest of the islands surrounding Hispaniola.
Helms said that while he has not yet seen the government’s evidence, he thinks prosecutors “are going to have a real hard time” trying to prove that Weisenburg and Thomas actually intended to carry out such a plot.
David Finn, Weisenburg’s attorney, said he encourages everyone to “tap the brakes” and reserve judgment. He said people have been telling him it is “the craziest thing” they have heard, and his response has been: “Yeah, it is.”
According to the indictment, the two men worked on the plot from August 2024 through July, with preparations that included researching weapons and ammunition and planning to buy military-style rifles. Prosecutors also allege that both men tried to learn the Haitian Creole language.
Weisenburg allegedly enrolled in a fire academy in the Dallas area to receive training that would be useful in the attack but failed out of the school. He then allegedly traveled to Thailand planning to learn to sail, but never enrolled in lessons because of the cost.
Thomas enlisted in the U.S. Air Force in January, according to the indictment, and told Weisenburg in a social media message that he had joined the military to further their planned attack. While in the Air Force, Thomas changed his assignment to Andrews Air Base in Maryland to help in the recruiting of homeless people in Washington, D.C., the indictment said.
The U.S. Air Force Office of Special Investigations was among the investigating agencies, according to the U.S. Attorney’s Office. The Air Force did not immediately respond to an inquiry about Thomas’ service on Friday.
The men face up to 30 years in prison if convicted on the child pornography charge and up to life in prison if convicted on the conspiracy charge.
Family of student expelled after confronting teen over deepfake nude image plans lawsuit
A Louisiana family plans to file a federal lawsuit against their school district in a case involving a deepfake pornographic image. CBS News national reporter Kati Weis has the details.
Meta forgot to keep its porn in a passworded folder, and now its kink for data collection is the subject of scrutiny. The social media giant turned metaverse company turned AI power is currently facing a lawsuit brought by adult film companies Strike 3 Holdings and Counterlife Media, alleging that the Big Tech staple illegally torrented thousands of porn videos to be used for training AI models. Meta denies the claims, and recently filed a motion to dismiss the case because, in part, it’s more likely the videos were downloaded for “private personal use.”
To catch up on the details of the case, back in July, Strike 3 Holdings (the producers of Blacked, Blacked Raw, Tushy, Tushy Raw, Vixen, MILFY, and Slayed) and Counterlife Media accused Meta of having “willfully and intentionally” infringed “at least 2,396 movies” by downloading and seeding torrents of the content. The companies claim that Meta used that material to train AI models and allege the company may be planning a currently unannounced adult version of its AI video generator Movie Gen, and are suing for $359 million in damages.
For what it’s worth, Strike 3 has something of a reputation as a very aggressive copyright litigant—so much so that if you search the company, you’re less likely to land on its homepage than to find a litany of law firms offering legal representation to people who have received a subpoena from the company for torrenting its material.
There may be some evidence that those materials were swept up in Meta’s data vacuum. Per TorrentFreak, Strike 3 was able to show what appear to be 47 IP addresses linked to Meta participating in torrenting of the company’s material. But Meta doesn’t seem to think much of the accusation. In its motion to dismiss, the company calls Strike 3’s torrent tracking “guesswork and innuendo,” and basically argues that, among other reasons, there simply isn’t even enough data here to be worth using for AI model training. Instead, it’s more likely just some gooners in the ranks.
“The small number of downloads—roughly 22 per year on average across dozens of Meta IP addresses—is plainly indicative of private personal use, not a concerted effort to collect the massive datasets Plaintiffs allege are necessary for effective AI training,” the company argued. The company also denied building a porn generator model, basically stating that Strike 3 doesn’t have any evidence of this and Meta’s own terms of service prohibit its models from generating pornographic content.
“These claims are bogus: We don’t want this type of content, and we take deliberate steps to avoid training on this kind of material,” a spokesperson for Meta told Gizmodo.
As absurd as the case is, whether the accusations are right or wrong, there is one clear victim: the dad of a Meta contractor who is apparently simultaneously being accused by Strike 3 of being a conduit for copyright infringement and accused by Meta of being a degenerate: “[Strike 3] point to 97 additional downloads made using the home IP address of a Meta contractor’s father, but plead no facts plausibly tying Meta to those downloads, which are plainly indicative of personal consumption,” Meta’s motion said. God forbid this case move forward and this poor person has to answer for his proclivities reserved for incognito tabs.
A line in the sand has been drawn in the AI race: the porn-brained and the porn-banned. Microsoft has sorted itself into the latter category. According to a report from CNBC, Microsoft AI CEO Mustafa Suleyman told an audience at the Paley International Council Summit that the company would not allow its LLM-powered tools to generate “simulated erotica,” marking a stark contrast with its partner/rival OpenAI.
“That’s just not a service we’re going to provide,” Suleyman reportedly said. “Other companies will build that.”
And build it they will. Earlier this month, OpenAI announced that, as part of its principle to “treat adult users like adults,” it would be introducing “erotica for verified adults”—basically giving over-18 users the green light to goon. CEO Sam Altman later tried to explain erotica “was meant to be just one example of [OpenAI] allowing more user freedom for adults,” but he also didn’t choose it by accident.
The ability to create porn with generative AI tools has become something of a signal for those who are vigilantly monitoring whether AI is “woke” or not. Elon Musk made a point of using that as a wedge to draw a distinction between his company xAI and OpenAI, introducing an “AI girlfriend” called Ani, represented by a pretty sexed-up anime avatar. OpenAI initially decided to mock this, with Altman saying “Anime is cool I guess but I am personally more excited about AI discovering lots of new science” and “we haven’t put a sex-bot avatar on ChatGPT yet.” But a few months later, erotica is on the menu.
Not everyone wants porn to be the marker of anti-woke, though. At the same time the Trump administration announced its AI Action Plan earlier this year, the president also signed an executive order to ban “woke” AI from landing federal contracts. Its definition of woke focused more on the embrace of diversity, equity, and inclusion principles. It didn’t say that AI had to generate anime titties on demand. Vice President JD Vance went so far as to say that using AI to “come up with increasingly weird porn” is bad and floated the idea that it should be regulated.
That created a new strain between the AI industry and the administration, which previously seemed like it was on the same side when it came to doing everything possible to prevent any guardrails from going up. According to a report from NBC, an AI super PAC called Leading the Future has drawn the ire of the White House because it is offering its backing to any candidate who promises an AI-friendly agenda, including Democrats. With the House of Representatives up for grabs in 2026, the Trump administration views the potential support of Democrats as a threat to its hold on the House.
But, even within Trumpworld, there is support for unfettered AI. David Sacks, Trump’s “Crypto and AI Czar,” explicitly called out AI startup Anthropic for throwing its support behind state-level AI safety regulations, claiming that doing so was “a sophisticated regulatory capture strategy based on fear-mongering.” For Sacks and the folks he’s aligned with in Silicon Valley, any sort of AI guardrails equates to stifling innovation. If that means AI erotica, so be it. Who cares if it makes the Vance wing of the party queasy? Porn is progress, apparently.
There’s something fitting about the possibility of AI porn being the first crack in the breaking apart of the Trump-Big Tech alliance. We’ll just have to deal with the fallout of everyone getting hopelessly addicted to sexting their chatbot later.
Republicans have largely been embracing a “hands-off” approach to regulating artificial intelligence, but Vice President JD Vance has found where he draws the line: weird porn. During an appearance on Newsmax’s “The Record with Greta Van Susteren,” Vance called out OpenAI’s recent announcement that it would allow adult users to create erotica with ChatGPT as an example of “bad” uses of AI.
“Artificial intelligence is still in many cases very dumb,” Vance said during the interview, spotted by The Daily Beast. “Is it good or is it bad, or is it going to help us or going to hurt us? The answer is probably both, and we should be trying to maximize as much of the good and minimize as much of the bad.”
The VP went on to offer examples of what he sees as both sides of the spectrum. On the good: “finding new cures for diseases.” Reasonable enough. As for the “bad,” Vance name-checked OpenAI CEO Sam Altman to lay out where he thinks AI has gone too far. “I saw an announcement, I think it was from Sam Altman from OpenAI, who said basically, they’re going to start using AI to introduce erotica and porn and things like that,” Vance said. “If it’s helping us come up with increasingly weird porn, that’s bad.”
Gizmodo reached out to OpenAI for a response to Vance’s comment, but did not receive a response at the time of publication.
To be fair to Vance here, his basic premise isn’t wrong—though no one said the porn had to be weird, he decided that part. Altman took a lot of heat over the erotica announcement, which he later tried to downplay as “just one example of us allowing more user freedom for adults,” but it’s clearly not a feature that offers anything resembling productivity or obvious human benefit. If anything, it presents even more risk for people getting emotionally or romantically attached to a chatbot in a way that is almost certainly unhealthy.
But it’s also a departure from the guardrail-free approach that many Republicans have been pushing for. Politicians like Ted Cruz have actively been working to help AI firms avoid regulations, first by trying to block states from creating their own standards and more recently by proposing legislation that would provide AI firms with a waiver for federal regulations, allowing them to test new products without standard scrutiny or oversight. The Trump administration issued its AI Action Plan earlier this year, which specifically took aim at cutting any sort of regulatory red tape that may even slightly hinder AI development. And, of course, Elon Musk loves to brag about his disregard for guardrails when it comes to his personal chatbot, Grok. Back in August, Musk had become so obsessed with posting about Grok’s erotic chatbot characters that his own fans were begging him to “stop gooning to AI anime and take us to Mars.”
For the right-wing tech crowd, the attitude is basically: let the chatbot talk dirty or China will beat us in the race to AGI.
But while Republicans may not want to regulate these companies, a large chunk of them do want to play the morality police. Basically, the only thing that raises their ire when it comes to AI is the invocation of anything sexual. AI producing misinformation, using an incredible amount of energy, being used to expand the surveillance state—none of that really raises red flags for these folks. But “sensual” chats and erotica? It’s time for the government to step in.
In an X post on Wednesday, OpenAI CEO Sam Altman clarified that when he said ChatGPT might soon manufacture custom erotica, that “was meant to be just one example of [OpenAI] allowing more user freedom for adults.”
Ok this tweet about upcoming changes to ChatGPT blew up on the erotica point much more than I thought it was going to! It was meant to be just one example of us allowing more user freedom for adults. Here is an effort to better communicate it:
A post from Altman the previous day had alerted the world to the fact that ChatGPT will soon include “erotica for verified adults,” and Altman now says that post “blew up on the erotica point” more than he thought it would. “Erotica” is a vague term without a technical or legal definition. It tends to be deployed by collectors of old-timey nude photos, or to describe art or literature that includes titillating amounts of sex and nudity but needs to sound like it has more redeeming aesthetic value than pornography.
So go ahead and picture something sexy coming from ChatGPT, but not too sexy, because that would be porn, and as OpenAI told Mashable last year, “We have no intention to create AI-generated pornography.”
We asked OpenAI to clarify whether it will generate “erotica” in the form of chats only, or whether there will be erotic images produced within the ChatGPT app by its image model, DALL-E—the one that’s so impressive at generating images that look like anime, and which may or may not soon be capable of generating hentai. We will update if we hear back.
The erotica remark in the earlier Altman post was about a coming update aimed at removing safeguards, and ostensibly allowing “verified adults” to chat with a broadly less restricted version of OpenAI’s signature product. As we noted at the time, the more permissive version of the chat app soon to be delivered sounds a bit like OpenAI highlighting the seemingly addictive or parasocial attributes of ChatGPT once again, after the GPT-5 update flopped at least in part because its default tone had become less friendly and supportive.
Many, however, reasonably gleaned the idea that porn—the form of content that gets perhaps 13-20 percent of all search traffic online—is in fact on its way to ChatGPT. One popular post speculated that OpenAI was launching a full-scale invasion of the online porn sphere. That’s not a crazy assumption. OpenAI is expected to have cash outflows of around $115 billion between now and 2029, and Altman has been explicit about his company needing to find ways to bring in revenue, even if—as with the launch of Sora 2—OpenAI gets criticized for poor taste. Sora 2’s tsunami of slop videos is justified, Altman says, because it makes people smile, and can “hopefully make some money given all that compute need.” Well, some analysts have estimated the value of the porn industry at close to $200 billion. A piece of that action would build an awful lot of compute.
On the internet, wild speculation that OpenAI is getting into porn, or porn-adjacent “erotica,” to drive revenue is inevitable given what the company’s CEO is teasing here. If Altman’s intent is to kick off another version of the 1980s home video revolution in order to bring in the cold hard cash his company so desperately needs, content for horny people who aren’t all that discerning would be a historically grounded, if tacky, way to speed up revenue growth.
So no, OpenAI hasn’t yet clarified where the sexy stuff will come out of the AI pipes, and whether it will be text, photos, or even video. But Altman even struck a rather Larry Flynt-like, free-speech-warrior tone in his clarifying post, saying that “allowing a lot of freedom for people to use AI in the ways that they want is an important part of [OpenAI’s] mission,” and adding that he and his company “are not the elected moral police of the world.”
Wisconsin business owner Bill Berrien, a supporter of President Donald Trump, ended his Republican campaign for governor on Friday, days after it was reported that he followed numerous sexually explicit accounts online, including a nonbinary pornography performer.
Berrien, a former Navy SEAL and one of three prominent announced Republican candidates, issued a lengthy statement saying, “I had no idea that running for political office could be almost as dangerous” as “hunting down war criminals in Bosnia.” Berrien said he concluded he could not win the Republican primary.
“Looking towards what is in the best interest of the party, voters, donors, and my family, I have decided to end my campaign,” he said.
Berrien’s departure leaves U.S. Rep. Tom Tiffany, who got into the race on Tuesday, and Washington County Executive Josh Schoemann as the only Republican candidates. There are numerous Democrats running. The primary is in August.
Berrien has an account on the online platform Medium.com where he followed nonbinary porn performer Jiz Lee and several other authors of sexually explicit essays. He also followed “publications,” which are similar to blogs, that dealt with exploring sexuality, including having relationships with multiple partners.
Lee issued a statement Thursday calling Berrien a hypocrite. Several prominent Republicans had been calling for him to drop out of the race.
Schoemann did not address Berrien’s social media habits in a statement reacting to his withdrawal from the race. Instead, Schoemann said he appreciated his willingness to serve his country as a candidate. Tiffany did not immediately return a message seeking comment.
Wisconsin Democratic Party spokesperson Phil Shulman blamed Berrien’s departure not on his social media activity but on his past criticism of Trump.
Conservatives had questioned the viability of Berrien’s candidacy because he had supported former United Nations Ambassador Nikki Haley in the 2024 presidential primary and said in August 2020 that he hadn’t decided whether to support Trump.
“Bill Berrien is a lesson for all GOP candidates: if you don’t show complete and total loyalty to Trump–past or present–then you better pack your bags and head for the door,” Shulman said in a statement. “His failure, despite his resume, financial investment, and doing somersaults to earn Trump’s love, shows just how far the other GOP candidates are going to have to go to win the nomination.”
The Milwaukee Journal Sentinel first reported on his online activity on Monday. Berrien defended his actions to The Associated Press on Tuesday, saying the media was focusing on “stupid articles I read years ago.”
He was even more forceful in his statement dropping out of the race, describing the articles he read and people he followed as “cherry-picked.” He said it “painted a salacious and sensational picture that was clearly targeted to force me out of this governor race. It was a major attack piece.”
“And for what? For reading!” Berrien said. “Nothing illegal, nothing unethical, and nothing immoral. Just reading. Wouldn’t you want your political and business leaders (and all of society, frankly) to be widely read and thoughtful and aware of different perspectives and ideas?”
Berrien, the CEO of Pindel Global Precision, ran as a supporter of “family values.” He had been critical of transgender people in the opening weeks of his candidacy. He quit less than three months after getting in the race.
The governor’s race in battleground Wisconsin is open for the first time since 2010. Democratic Gov. Tony Evers decided against seeking a third term.
The most prominent Democratic candidates are Lt. Gov. Sara Rodriguez; Milwaukee County Executive David Crowley; state Sen. Kelda Roys; and state Rep. Francesca Hong. Others considering getting in include Attorney General Josh Kaul, former Lt. Gov. Mandela Barnes and former state economic development director Missy Hughes.
Sean “Diddy” Combs’s lawyers doubled down on claims that his actions were protected by the First Amendment during a hearing this morning as part of their ongoing push for acquittal. (Yes, the same amendment that’s under attack from President Donald Trump.)
“He was a producer of amateur porn,” Alexandra Shapiro, one of Diddy’s many expensive lawyers, told Judge Arun Subramanian in court on September 25. Diddy’s team is hoping for an acquittal by trying to point out major legal problems in the case. “He’s a consumer of amateur porn,” Shapiro said. “It’s well settled that this type of amateur porn, whether it’s live or recorded, is protected by the First Amendment.” The protection also extended to times Diddy didn’t record encounters, she claimed. “It’s often simply only a livestream back and forth,” said Shapiro, who also mentioned OnlyFans. “Somebody’s watching someone on-camera. It’s not recorded. It’s just happening in real time.”
Diddy was found guilty on July 2 on two counts of transportation to engage in prostitution. This charge relates to Diddy’s shuttling of male escorts across state lines for the drug-fueled, dayslong sexual encounters known as Freak-Offs. These encounters were often recorded and were “highly choreographed.” Shapiro also mentioned the recordings had “mood lighting” and costumes to bolster the claim that this was performance, not prostitution.
Diddy, who wore khaki jail scrubs at this proceeding, seemed to be in good spirits. When he walked into Subramanian’s courtroom around 11 a.m., he hugged several of his lawyers.
Prosecutor Christy Slavik, who spoke on the First Amendment issue, insisted that Diddy’s hiring male escorts across state lines didn’t involve free speech. “There’s no symbolic speech,” she said, which would have First Amendment protections. “The act that violated the law was the transportation, which was not protected symbolic speech.”
Diddy’s Avenger-like legal counsel detailed their First Amendment claims in late July court filings. “The freak-offs and hotel nights were performances that he or his girlfriends typically videotaped so they could watch them later,” his lawyers wrote in court papers posttrial. “In other words, he was producing amateur pornography for later private viewing.”
Generally speaking, most pornography is protected so long as it doesn’t involve children or “obscenity.” Prosecutors, however, have insisted in their own court filings that Diddy wasn’t paying prostitutes just to make blue movies. “The record shows that the defendant was anything but a producer of adult films entitled to First Amendment protection — rather, he was a voracious consumer of commercial sex, paying male commercial sex workers on hundreds of occasions to have sex with his girlfriends for his own sexual arousal,” they argued in court papers. “Moreover, the conduct proscribed by the Mann Act — causing the interstate transportation of an individual for the purpose of prostitution — is not entitled to First Amendment protection.”
Subramanian will rule later on the defense’s push for acquittal. Diddy is scheduled to be sentenced on October 3.
A Republican manufacturer running for governor in Wisconsin as a conservative supporter of “family values” and President Donald Trump followed numerous sexually explicit accounts online, including a nonbinary pornography performer.
The Milwaukee Journal Sentinel reported Monday that Bill Berrien, the CEO of Pindel Global Precision and one of two announced 2026 Republican candidates for governor, unfollowed several accounts in recent days after the newspaper asked about the matter.
Berrien, in a statement to The Associated Press, downplayed any concerns about his online activity.
“There are a lot of important issues that are affecting our state and nation,” he said in the statement, “but what is the mainstream media focused on right now? Some stupid articles I read years ago, not the plans I have to reindustrialize our state, turn the economy around, and bring prosperity for all through work.”
In a post on X on Monday, Berrien derided the Journal Sentinel story as “garbage political hits.” He did not refute anything written in the story in his comments to the AP or in his post on X.
The revelation led to calls from some Republicans for Berrien to drop out of the race.
Berrien, 56, is a political newcomer running his first race for Wisconsin’s open governor’s seat. Josh Schoemann, the Washington County executive, is the other Republican in the race. The GOP primary is 11 months away. Numerous Democrats have also announced they are running in an attempt to succeed Democratic Gov. Tony Evers, who is not seeking a third term.
Berrien has been critical of transgender people in the opening weeks of his candidacy. On his campaign website, he says “our daughters’ sports teams and locker rooms are at risk because of radical social experimentation.”
But the Journal Sentinel reported that Berrien has an account on the online platform Medium.com where he followed nonbinary porn performer Jiz Lee and several other authors of sexually explicit essays. He also followed “publications,” which are similar to blogs, that dealt with exploring sexuality, including having relationships with multiple partners.
Berrien stopped following the accounts of 23 people, including the most sexually explicit ones, after the Journal Sentinel asked about his history on the website, the newspaper reported.
“Is this the best they can do?” Berrien posted on X. “Just days after I promised to stand with President Trump to protect our state, stop the woke indoctrination, and keep boys out of girls sports, they came after me with the same failed attacks they tried with President Trump. Garbage political hits didn’t slow President Trump down, and the Democrats and the media’s latest attempts to keep me out of this fight won’t work either.”
Schoemann, the only other announced Republican candidate in the governor’s race, declined to comment. The Wisconsin Democratic Party also declined to comment. Wisconsin College Republicans urged Berrien, in a post on X, to drop out of the race.
Bill McCoshen, a longtime Republican strategist, posted on X that he thought the revelation would be the end of Berrien’s candidacy. Conservatives, including influential talk radio hosts, already had criticized Berrien for his support of former United Nations Ambassador Nikki Haley in the 2024 presidential primary and for saying in August 2020 that he hadn’t decided whether to support Trump.
“I’ve thought this campaign was over for some time,” McCoshen posted. “Now there’s no doubt.”
Dan Degner, president of the social conservative group Wisconsin Family Action, said that “family and sexuality issues matter” with Republican voters. The group’s political action committee will make an endorsement in the Republican primary next year, he said, and it will only go to a candidate who “champions social conservative causes.”
“We would have to have some pretty in-depth conversations with him before we would consider an endorsement,” Degner said of Berrien.
LOS ANGELES — A California judge has dismissed a lawsuit filed by an Indigenous tribe in the Brazilian Amazon against The New York Times and TMZ that claimed the newspaper’s reporting on the tribe’s first exposure to the internet led to its members being widely portrayed as technology-addled and addicted to pornography.
The suit was filed in May by the Marubo Tribe of the Javari Valley, a sovereign community of about 2,000 people in the Amazon rainforest.
Los Angeles County Superior Court Judge Tiana J. Murillo on Tuesday sided with the Times, whose lawyers argued in a hearing Monday that its coverage last year was fair and protected by free speech.
TMZ argued that its coverage, which followed the Times’ initial reporting, addressed ongoing public controversies and matters of public interest.
The suit claimed stories by TMZ and Yahoo amplified and sensationalized the Times’ reporting and smeared the tribe in the process. Yahoo was dismissed as a defendant earlier this month.
Murillo wrote in her ruling that though some may “reasonably perceive” the Times’ and TMZ’s reporting as “insensitive, disparaging or reflecting a lack of respect, the Court need not, and does not, determine which of these characterizations is most apt.”
The judge added that “regardless of tone, TMZ’s segment contributed to existing debate over the effects of internet connectivity on remote Indigenous communities.”
“We are pleased by the comprehensive and careful analysis undertaken by the court in dismissing this frivolous lawsuit,” Danielle Rhoades Ha, a spokesperson for the Times, said in a statement Wednesday to The Associated Press. “Our reporter traveled to the Amazon and provided a nuanced account of tension that arose when modern technology came to an isolated community.”
Attorneys for TMZ did not immediately respond to an email request for comment Wednesday.
Plaintiffs in the lawsuit included the tribe, community leader Enoque Marubo and Brazilian journalist and sociologist Flora Dutra, who were both mentioned in the June 2024 story. Both were instrumental in bringing the tribe the internet connection, which they said has had many positive effects including facilitating emergency medicine and the education of children.
N. Micheli Quadros, the attorney who represents the tribe, Marubo and Dutra, wrote to the AP Wednesday that the judge’s decision “highlights the imbalance of our legal system,” which “often shields powerful institutions while leaving vulnerable individuals, such as Indigenous communities without meaningful recourse.”
Quadros said the plaintiffs will decide their next steps in the coming days, whether that is through courts in California or international human rights bodies.
“This case is bigger than one courtroom or one ruling,” Quadros wrote. “It is about accountability, fairness, and the urgent need to protect communities that have historically been silenced or marginalized.”
The lawsuit sought at least $180 million, including both general and punitive damages, from each of the defendants.
The suit argued that the Times’ story by reporter Jack Nicas on how the group was handling the introduction of internet service via Starlink satellites operated by Elon Musk’s SpaceX “portrayed the Marubo people as a community unable to handle basic exposure to the internet, highlighting allegations that their youth had become consumed by pornography.”
The court disagreed with the tribe’s claims that the Times article falsely implied its youth were “addicted to pornography,” noting that the coverage mentioned only that unidentified young men had access to porn and did not state that the tribe as a whole was addicted to pornography.
Nicas reported that in less than a year of Starlink access, the tribe was dealing with the same struggles the rest of the world has dealt with for years due to the pervasive effects of the internet. The challenges included “teenagers glued to phones; group chats full of gossip; addictive social networks; online strangers; violent video games; scams; misinformation; and minors watching pornography,” Nicas wrote.
He also wrote that a tribal leader said young men were sharing explicit videos in group chats. The piece doesn’t mention porn elsewhere, but other outlets amplified that aspect of the story. TMZ posted a story with the headline, “Elon Musk’s Starlink Hookup Leaves A Remote Tribe Addicted To Porn.”
The Times published a follow-up story in response to misperceptions brought on by other outlets in which Nicas wrote: “The Marubo people are not addicted to pornography. There was no hint of this in the forest, and there was no suggestion of it in The New York Times’s article.”
Nicas wrote that he spent a week with the Marubo tribe. The lawsuit claimed that while he was invited for a week, he spent less than 48 hours in the village, “barely enough time to observe, understand, or respectfully engage with the community.”
In recent years, legislation aimed at restricting access to online porn sites has become more and more popular in conservative states, but in Michigan, lawmakers have just introduced a bill that would ban all online pornography, full stop.
The legislation, which offers a deeply draconian perspective on human sexuality, was introduced on Sept. 11th, and its primary sponsor is Rep. Josh Schriver (R-Oxford). The “Anticorruption of Public Morals Act,” which sounds like a bill whose name (and contents) were sourced from the 1930s, would ban all “pornographic material.” What does that mean? According to the bill text, it means “content, digital, streamed, or otherwise distributed on the internet, the primary purpose of which is to sexually arouse or gratify, including videos, erotica, magazines, stories, manga, material generated by artificial intelligence, live feeds, or sound clips.”
That, uh, sure sounds like a lot. Additionally, 404 Media writes, the bill would define any depiction or description of trans people as pornographic, which means that such depictions would also be banned. Indeed, while the bill text does not include any specific mentions of trans people as a group, it does include a stipulation that would ban the following category of media: “a depiction, description, or simulation, whether real, animated, digitally generated, written, or auditory, that includes a disconnection between biology and gender by an individual of biological sex imitating, depicting, or representing himself or herself to be of the other biological sex by means of a combination of attire, cosmetology, or prosthetics, or as having a reproductive nature contrary to the individual’s biological sex.”
The bill’s top sponsor, Schriver, claims this is all about defending children. “These measures defend children, safeguard our communities, and put families first,” Schriver recently wrote on X. “Obscene and harmful content online threatens Michigan families, especially children.”
Pornography is obviously a complicated subject with a twisty, not altogether politically neat history, and there are plenty of nuanced conversations to be had about it. One thing’s for sure: an outright ban on it isn’t nuanced, nor does it allow for any conversation at all.
Gizmodo reached out to Schriver’s office for comment and will update this story if he responds.
While, in earlier times, second-wave feminists were the ones advocating for an abolition of the porn industry, in recent times, conservatives have led the charge, albeit for an entirely different set of reasons. Earlier this year, right-wing Senator Mike Lee (R-Utah) introduced the Interstate Obscenity Definition Act (IODA), which would have effectively criminalized all pornography nationwide. Not much has happened with the bill since it was introduced and referred to a Senate committee. The Heritage Foundation’s Project 2025, which many believe has acted as a kind of right-wing policy bible for the Trump administration, has also advocated for criminalizing all pornography.
BANGKOK — A Telegram channel with hundreds of thousands of subscribers that offered revenge porn, hidden-camera videos and other non-consensual content of Chinese women has highlighted gaps in laws protecting victims of sexual abuse in China.
The uproar over the online group comes after Chinese authorities have silenced public activism over women’s rights in recent years, even sentencing some activists to prison for promoting #MeToo.
The Telegram channel called MaskPark, which offered pornographic content in Chinese, came to national attention in recent weeks and was quickly shut down by Telegram. But activists say alternate channels have already emerged, with only some being shut down.
Now activists are calling for ways to help women whose images have been posted. They want police to go after the posters or channel administrators, or even Telegram. They also seek a targeted law to address non-consensual sexual online content, which they see as a form of sexual abuse.
China’s Ministry of Public Security and the State Council Information Office did not respond to a request for comment, and have not commented publicly on the latest demands.
Women in China whose images have been shared online without their consent face an uphill battle in pursuing justice.
The only woman who has come forward about MaskPark is known as Ms. D, according to a report from Southern Metropolis Daily, a state-backed news outlet in Guangdong province. She says she received a private message in May claiming photos and videos of her were on the channel.
There, she found images of her being intimate with a Canadian citizen who was her boyfriend at the time, said Li Ling, an activist and researcher on gender-based violence who works with a team to assist women exposed on MaskPark. The AP could not reach Ms. D or other women independently.
When Ms. D reported the case to police, she found the images had been deleted. She consulted with lawyers but found there is no law in China specifically addressing what had occurred, Li said.
“This means a lot of police officers do not know how to lodge a case,” Li said.
But there are other challenges. To file a lawsuit, even a civil one claiming damages, the alleged perpetrator’s identifying information is needed, Li said.
It is impossible to tell who posted the images. Telegram is blocked in China, which allows only apps that cooperate with the government’s censorship apparatus. Users can access Telegram via a virtual private network, which provides an encrypted connection. And Telegram doesn’t verify the identity of users. It is unclear who ran MaskPark, and the AP could not contact them.
Telegram said in a statement to the AP it “completely removed the MaskPark channels” and that moderators continue to monitor the platform “and accept user reports — so that if such groups ever resurface, they are immediately removed once more.”
Telegram was founded by Pavel Durov. Last year, French authorities arrested Durov over charges that the platform was being used for criminal activity that included drug trafficking and child sex abuse material. His case is pending.
In China, MaskPark reminded many people of a 2020 case in South Korea, where two journalists discovered a Telegram channel where young women and girls had been blackmailed into sharing explicit videos.
The uproar over that channel, called Nth room, led to arrests and a 40-year sentence for the man behind it. The journalists had infiltrated the channel for months, gathering evidence and bringing it to police.
The Korean government then revised laws to impose stricter penalties on people who distribute non-consensual content, and to require platforms located in South Korea to police the content on their servers.
“Their framework addresses the entire chain of harm, from creation to distribution to consumption, while establishing clear platform responsibilities,” said Jiahui Duan, a fellow at the University of Hong Kong’s law school.
In the United States, President Donald Trump in May signed a law with stricter penalties for people who distribute non-consensual videos, including ones generated by artificial intelligence.
Past cases in China have resulted in light punishment, without penalties for platforms.
In one case in December, a college graduate found her public photos had been used to create deepfake porn that was shared on X, according to local media. The perpetrator received 10 days of administrative detention by police under the charge of disseminating obscene materials. It did not go on their criminal record.
The offense of disseminating obscene materials can result in two or more years of prison time, however, if authorities deem the case to be severe enough. Cases where money is exchanged can bring three years in prison.
Activists in China seeking a new law to address cases like MaskPark say the charge of disseminating obscene materials is too broad. Police recently used the charge to prosecute women writing romantic fiction deemed to be erotic.
“This is a double standard. The truly obscene things, the covert filming, they’re not coming down on that,” said Li Maizi, a women’s rights activist who has followed those arrests.
Activist Li Ling, who’s also a researcher looking at gender-based violence, said Chinese-language channels on Telegram sharing non-consensual content continue to be found. Not all are shut down immediately.
Activists in China recently found a channel sharing photos taken up women’s skirts. Its pinned post read, “Recently, many groups and channels are being shut down, the permanent link to find your way home,” with a website address.
The channel remained active as of last week.
“The lesson for China is clear, that this systemic problem demands systemic solutions,” said Duan, the legal scholar. “While closing legal gaps is urgent, lasting change requires coordinated technological regulation, international cooperation and comprehensive victim support.”
Online age checks are on the rise in the U.S. and elsewhere, asking people for IDs or face scans to prove they are over 18 or 21 or even 13. To proponents, they’re a tool to keep children away from adult websites and other material that might be harmful to them.
But opponents see a worrisome trend toward a less secure, less private and less free internet, where people can be denied access not just to pornography but news, health information and the ability to speak openly and anonymously.
“I think that many of these laws come from a place of good intentions,” said Jennifer Huddleston, a senior technology policy fellow at the Cato Institute, a libertarian think tank. “Certainly we all want to protect young people from harmful content before they’re ready to see it.”
More than 20 states have passed some kind of age verification law, though many face legal challenges. While no such law exists on the federal level in the United States, the Supreme Court recently allowed a Mississippi age check law for social media to stand. In June, the court upheld a Texas law aimed at preventing minors from watching pornography online, ruling that adults don’t have a First Amendment right to access obscene speech without first proving their age.
Elsewhere, the United Kingdom now requires users visiting websites that allow pornography to verify their age. Beyond adult sites, platforms like Reddit, X, Telegram and Bluesky have also committed to age checks. France and several other European Union countries also are testing a government-sponsored verification app.
“Platforms now have a social responsibility to ensure the safety of our kids is a priority for them,” Australian Prime Minister Anthony Albanese told reporters in November, when his government passed a law banning social media accounts for children under 16. The platforms have a year to work out how they could implement the ban before penalties are enforced.
To critics, though, age check laws raise “significant privacy and speech concerns, not only for young people themselves, but also for all users of the internet,” Huddleston said. “Because the only way to make sure that we are age verifying anyone under the age of 18 is to also age verify everyone over the age 18. And that could have significant impacts on the speech and privacy rights of adults.”
The state laws are a hodgepodge of requirements, but they generally fall into two camps. On one side are laws — as seen in Louisiana and Texas — that require websites where more than a third of the content is adult material to verify users’ ages or face fines. Then there are laws — enacted in Wyoming and South Dakota — that seek to regulate sites that have any material that is considered obscene or otherwise harmful to minors.
What’s considered harmful to minors can be subjective, and this is where experts believe such laws run afoul of the First Amendment. It means people may be required to verify their ages to access anything, from Netflix to a neighborhood blog.
“In places like Australia and the U.K., there is already a split happening between the internet that people who are willing to identify themselves or go through age verification can see and the rest of the internet. And that’s historically a very dangerous place for us to end up,” said Jason Kelley, activism director at the nonprofit digital rights group Electronic Frontier Foundation.
What’s behind the gates is determined by a “hundred different decision-makers,” Kelley said, from politicians to tech platforms to judges to individuals who have sued because they believe that a piece of content is dangerous.
While many companies are complying, verifying users’ ages can prove a burden, especially for smaller platforms. On Friday, Bluesky said it will no longer be available in Mississippi because of its age verification requirements. While the social platform already does age verification in the U.K., it said Mississippi’s approach “would fundamentally change how users access Bluesky.”
That’s because it requires every user to undergo an age check, not just those who want to access adult content. It would also require Bluesky to identify and track users who are children.
“We think this law creates challenges that go beyond its child safety goals, and creates significant barriers that limit free speech and disproportionately harm smaller platforms,” the company said in a blog post.
Some websites and social media companies, such as Instagram’s parent company Meta, have argued that age verification should be done by app store owners, such as Apple and Google, and not individual platforms. This would mean that app stores need to verify their users’ ages before they allow them to download apps. Unsurprisingly, Apple and Google disagree.
“Billed as ‘simple’ by its backers, including Meta, this proposal fails to cover desktop computers or other devices that are commonly shared within families. It also could be ineffective against pre-installed apps,” Google said in a blog post.
Nonetheless, a growing number of tech companies are implementing verification systems to comply with regulations or ward off criticism that they are not protecting children. This includes Google, which recently started testing a new age-verification system for YouTube that relies on AI to differentiate between adults and minors based on their watch histories.
Instagram is testing a similar AI system to determine if kids are lying about their ages. Roblox, which was sued by the Louisiana attorney general on claims it doesn’t do enough to protect children from predators, requires users who want to access certain games rated for those over 17 to submit a photo ID and undergo a face scan for verification. Roblox has also recently begun requiring age verification for teens who want to chat more freely on the platform.
Face scans that promise to estimate a person’s age may address some of the concerns around IDs, but they can be unreliable. Can AI accurately tell, for instance, if someone is 17.5 or just turned 18?
“Sometimes it’s less accurate for women or it’s less accurate for certain racial or ethnic groups or for certain physical characteristics that then may mean that those people have to go through additional privacy invasive screenings to prove that they are of a certain age,” Huddleston said.
While IDs are a common way of verifying someone’s age, the method raises security concerns: What happens if companies don’t delete the uploaded files, for instance?
Case in point: the recent data breaches at Tea, an app for women to anonymously warn each other about the men they date, speak to some of these concerns. The app requires women who sign up to upload an ID or undergo a scan to prove that they are women. Tea wasn’t supposed to keep the files but it did, and stored them in a way that allowed hackers to access not only the images but also users’ private messages.
AVIGNON, France — They are, on the face of it, the most ordinary of men. Yet they’re all on trial charged with rape. Fathers, grandfathers, husbands, workers and retirees — 50 in all — accused of taking turns on the drugged and inert body of Gisèle Pelicot while her husband recorded the horror for his swelling private video library.
Among the nearly two dozen defendants who testified during the trial’s first seven weeks was Ahmed T. — French defendants’ full last names are generally withheld until conviction. The married plumber with three kids and five grandchildren said he wasn’t particularly alarmed that Pelicot wasn’t moving when he visited her and her now-ex-husband’s house in the small Provence town of Mazan in 2019.
It reminded him of porn he had watched featuring women who “pretend to be asleep and don’t react,” he said.
Like him, many other defendants told the court that they couldn’t have imagined that Dominique Pelicot was drugging his wife, and that they were told she was a willing participant acting out a kinky fantasy. Dominique Pelicot denied this, telling the court his co-defendants knew exactly what the situation was.
Céline Piques, a spokesperson for the feminist group Osez le Féminisme!, or Dare Feminism!, said she’s convinced that many of the men on trial were inspired or perverted by porn, including videos found on popular websites. Although some sites have started cracking down on search terms such as “unconscious,” hundreds of videos of men having sex with seemingly passed-out women can be found online, she said.
Piques was particularly struck by the testimony of a tech expert at the trial who had found the search terms “asleep porn” on Dominique Pelicot’s computer.
Last year, French authorities registered 114,000 victims of sexual violence, including more than 25,000 reported rapes. But experts say most rapes go unreported due to a lack of tangible evidence: About 80% of women don’t press charges, and 80% of the ones who do see their case dropped before it is investigated.
In stark contrast, the trial of Dominique Pelicot and his 50 co-defendants has been unique in its scope, nature and openness to the public at the victim’s insistence.
After a store security guard caught Pelicot shooting video up unsuspecting women’s skirts in 2020, police searched his home and found thousands of pornographic photos and videos on his phone, laptop and USB stick. Dominique Pelicot later said he had recorded and stored the sexual encounters of each of his guests, and neatly organized them in separate files.
Among those he had over was Mahdi D., who testified that when he left home on the night of Oct. 5, 2018, he didn’t intend to rape anyone.
“I thought she was asleep,” the 36-year-old transportation worker told the panel of five judges, referring to Gisèle Pelicot, who has attended nearly every day of the trial and has become a hero to many sexual abuse victims for insisting that it be public.
“I grant you that you did not leave with the intention of raping anyone,” the prosecutor told him. “But there in the room, it was you.”
Like a few of the other men accused of raping Pelicot between 2011 and 2020, Mahdi D. acknowledged almost all of the facts presented against him. And he expressed remorse, telling the judges, “She is a victim. We can’t imagine what she went through. She was destroyed.”
But he wouldn’t call it rape, even if admitting that it was might get him a lighter sentence. That led prosecutors to ask the court to screen the graphic videos of Mahdi D.’s visit to the Pelicot home.
In June, authorities took down the chatroom where they say Dominique Pelicot and his co-defendants met. Since the trial started on Sept. 2, it has resonated far beyond the Avignon courtroom’s walls, sparking protests in French cities big and small and inspiring a steady flow of opinion pieces and open letters penned by journalists, philosophers and activists.
It has also drawn curious visitors to the city in southeastern France, such as Florence Nack, her husband and 23-year-old daughter, who made the trip from Switzerland to witness the “historical trial.”
Nack, who noted that she, too, was a victim of sexual violence, said she was disturbed by the testimony of 43-year-old trucker Cyprien C., a defendant who spoke that day in court.
Asked by the head judge, Roger Arata, whether he recognized the facts, Cyprien C. answered that he “did not contest the sexual act.”
“And the rape?” Arata pressed. The defendant stood silently before eventually responding, “I can’t answer.”
Arata then began to describe what was on the videos implicating him. They are only shown as a last resort and on a case-by-case basis. But for many in the courtroom, such detailed descriptions can last several minutes and be just as heavy as watching them. Gisèle Pelicot, who is in her early 70s, has chosen to remain in the courtroom while the videos are shown. Unable to watch, she usually closes her eyes, stares at the floor, or buries her face in her hands.
Experts and groups working to combat sexual violence say the defendants’ unwillingness or inability to admit to rape speaks loudly to taboos and stereotypes that persist in French society.
For Magali Lafourcade, a judge and general secretary of the National Consultative Commission of Human Rights who isn’t involved in the trial, popular culture has given people the wrong idea about what rapists look like and how they operate.
“It’s the idea of a hooded man with a knife whom you don’t know and is waiting for you in a place that is not a private place,” she said, noting that this “is miles away from the sociological, criminological reality of rape.”
Two-thirds of rapes take place at private homes, and in a vast majority of cases, victims know their rapists, Lafourcade said.
It can be difficult at times to reconcile the facts with the personalities of the accused — described by loved ones as loving, generous and considerate companions, brothers and fathers.
Cyril B.’s tearful older sister told the court: “It’s my brother, I love him. He’s not a mean person.” His partner described him as “kind, his heart on his sleeve and full of attention.” She insisted that he isn’t “macho” and that he had never forced her to do anything sexually that she wasn’t comfortable with.
Although Lafourcade does not believe “all men are rapists,” as some have concluded the trial shows, she said that unlike the #MeToo accusations that have ensnared French celebrities, the Pelicot case “makes us understand that in fact rapists could be everyone.”
“For once, they’re not monsters — they’re not serial killers on the margin of society. They are men who resemble those we love,” she said. “In this sense, there is something revolutionary.”
RALEIGH, N.C. — A former porn shop worker who was accused by North Carolina Lt. Gov. Mark Robinson of defamation has asked a court to throw out the lawsuit against him, calling the politician’s allegations “bizarre” and his demand for at least $50 million in damages a violation of civil court rules.
Robinson, the Republican nominee for governor, filed a lawsuit in Wake County court Tuesday against CNN and Louis Love Money, of Greensboro, saying they published “disgusting lies” about him.
The lawsuit identified a CNN report last month that Robinson made explicit racial and sexual posts on a pornography website’s message board more than a decade ago. Weeks before CNN’s report, Money alleged in a music video and in a media interview that for several years starting in the 1990s, Robinson frequented a porn shop Money was working at, and that Robinson purchased porn videos from him.
Attorneys for Money, in filing a dismissal motion Wednesday, said that Robinson’s lawsuit violated a procedural rule that requires a person seeking punitive damages to initially state a demand for monetary damages only “in excess of $25,000.”
The motion said the rule is designed to “prevent excess demands from leaking publicly in the media and tainting the judicial process.” Violating the rule, attorneys Andrew Fitzgerald and Peter Zellmer wrote, may “have been for the very purpose of creating media attention for Mr. Robinson’s campaign.”
Separately, the attorneys are seeking dismissal on the grounds that the allegations in the lawsuit, even if they were true, fail to establish a cause of action against Money.
“The complaint contains many impertinent and bizarre allegations,” they wrote.
Asked for a response to the motion, Robinson’s campaign referred to Tuesday’s news release announcing the lawsuit. In it, Robinson said claims from “grifters like Louis Love Money are salacious tabloid trash.”
Money on Tuesday said he stood by what he had said as truthful. CNN declined to comment on the lawsuit when it was filed and had not responded to it in court as of midday Thursday.
Robinson is running against Democratic nominee Josh Stein in the campaign to succeed term-limited Democratic Gov. Roy Cooper.
The CNN report led many fellow GOP elected officials and candidates, including presidential nominee Donald Trump, to distance themselves from Robinson’s gubernatorial campaign. Most of the top staff running Robinson’s campaign and his lieutenant governor’s office quit following the CNN report, and the Republican Governors Association stopped supporting Robinson’s bid.
The network report said it matched details of the account on the message board to other online accounts held by Robinson by comparing usernames, a known email address and his full name. CNN also reported that details discussed by the account holder matched Robinson’s age, length of marriage and other biographical information.
The lawsuit alleges that CNN published its report despite knowing, or recklessly disregarding, that Robinson’s personal data was previously compromised by data breaches.
RALEIGH, N.C. (AP) — North Carolina Republican Lt. Gov. Mark Robinson sued CNN on Tuesday over its recent report that he made explicit racial and sexual posts on a pornography website’s message board, calling the reporting reckless and defamatory.
The lawsuit, filed in Wake County Superior Court, comes less than four weeks after a report that led many fellow GOP elected officials and candidates, including presidential nominee Donald Trump, to distance themselves from Robinson’s gubernatorial campaign.
Robinson, who announced the lawsuit at a news conference in Raleigh with a Virginia-based attorney, has denied authoring the messages.
CNN “chose to publish despite knowing or recklessly disregarding that Lt. Gov. Robinson’s data — including his name, date of birth, passwords, and the email address supposedly associated with the NudeAfrica account — were previously compromised by multiple data breaches,” the lawsuit states, referencing the website.
Robinson, who would be the state’s first Black governor if elected, called the report a “high-tech lynching” on a candidate “who has been targeted from Day 1 by folks who disagree with me politically and want to see me destroyed.”
CNN declined to comment Tuesday, spokesperson Emily Kuhn said in an email.
The CNN report, which first aired Sept. 19, said Robinson left statements over a decade ago on the message board in which, in part, he referred to himself as a “black NAZI,” said he enjoyed transgender pornography, said he preferred Hitler to then-President Barack Obama, and slammed the Rev. Martin Luther King Jr. as “worse than a maggot.”
The network report said it matched details of the account on the message board to other online accounts held by Robinson by comparing usernames, a known email address and his full name. CNN reported that details discussed by the account holder matched Robinson’s age, length of marriage and other biographical information. CNN also said it compared figures of speech that came up frequently in his public Twitter profile that appeared in discussions by the account on the pornographic website.
Polls at the time of the CNN report already showed Democratic rival Josh Stein, the sitting attorney general, with a lead over Robinson. Early in-person voting begins Thursday statewide, and over 57,000 completed absentee ballots have been received so far.
Robinson also in the same defamation lawsuit sued a Greensboro punk rock band singer who alleged in a music video and in an interview with a media outlet that Robinson, in the 1990s and early 2000s, frequented a porn shop the singer once worked at and purchased videos. Louis Love Money, the other named defendant, released the video and spoke with other media outlets before the CNN report.
Robinson denies the allegation in the lawsuit, which reads, “Lt. Gov. Robinson was not spending hours at the video store, five nights a week. He was not renting or previewing videos, and he did not purchase ‘bootleg’ or other videos from Defendant Money.”
Money said in a phone interview Tuesday that he stands by his statements and the music video’s content as truthful: “My story hasn’t changed.”
The lawsuit, which seeks at least $50 million in damages, says the effort against Robinson “appears to be a coordinated attack aimed at derailing his campaign for governor.” It provides no evidence that the network or Money schemed with outside groups to create what Robinson alleges are false statements.
Attorney Jesse Binnall, right, speaks at a news conference, with his client North Carolina Lt. Gov. Mark Robinson, left, in Raleigh, N.C., Tuesday, Oct. 15, 2024. (AP Photo/Karl B DeBlaker)
Robinson’s lawyer, Jesse Binnall, said that he expects to find more “bad actors,” and that entities, which he did not identify, have stonewalled his firm’s efforts to collect information.
“We will use every tool at our disposal now that a lawsuit has been filed, including the subpoena power, in order to continue pursuing the facts,” said Binnall, whose clients have included Trump and his campaign.
In North Carolina courts, a public official claiming defamation generally must show a defendant knew a statement was false or recklessly disregarded its untruthfulness.
Most of the top staff running Robinson’s campaign and his lieutenant governor’s office quit following the CNN report, and the Republican Governors Association, which had already spent millions of dollars in advertising backing Robinson, stopped supporting his bid. And Democrats from presidential nominee Vice President Kamala Harris to downballot state candidates began running ads linking their opponents to Robinson.
Robinson’s campaign isn’t running TV commercials now. He said that “we’ve chosen to go in a different direction” and focus on in-person campaign stops.
Robinson already had a history of inflammatory comments about topics like abortion and LGBTQ+ rights that Stein and his allies have emphasized in opposing him on TV commercials and online.
Stein spokesperson Morgan Hopkins said Tuesday in a statement that “even before the CNN report, North Carolinians have known for a long time that Mark Robinson is completely unfit to be Governor.”
Hurricane Helene and its aftermath took the CNN report off the front pages. Robinson worked for several days with a central North Carolina sheriff collecting relief supplies and criticized Democratic Gov. Roy Cooper — barred by term limits from seeking reelection — for state government’s response in the initial stages of relief.
Trump endorsed Robinson before the March gubernatorial primary, calling him “Martin Luther King on steroids” for his speaking ability. Robinson had been a frequent presence at Trump’s North Carolina campaign stops, but he hasn’t participated in such an event since the CNN report.
SEOUL, South Korea — Three years after the 30-year-old South Korean woman received a barrage of online fake images that depicted her nude, she is still being treated for trauma. She struggles to talk with men. Using a mobile phone brings back the nightmare.
“It completely trampled me, even though it wasn’t a direct physical attack on my body,” she said in a phone interview with The Associated Press. She didn’t want her name revealed because of privacy concerns.
Many other South Korean women recently have come forward to share similar stories as South Korea grapples with a deluge of non-consensual, explicit deepfake videos and images that have become much more accessible and easier to create.
It was not until last week that parliament revised a law to make watching or possessing deepfake porn content illegal.
Most suspected perpetrators in South Korea are teenage boys. Observers say the boys target female friends, relatives and acquaintances — also mostly minors — as a prank, out of curiosity or misogyny. The attacks raise serious questions about school programs but also threaten to worsen an already troubled divide between men and women.
Deepfake porn in South Korea gained attention after unconfirmed lists of schools that had victims spread online in August. Many girls and women have hastily removed photos and videos from their Instagram, Facebook and other social media accounts. Thousands of young women have staged protests demanding stronger steps against deepfake porn. Politicians, academics and activists have held forums.
“Teenage (girls) must be feeling uneasy about whether their male classmates are okay. Their mutual trust has been completely shattered,” said Shin Kyung-ah, a sociology professor at South Korea’s Hallym University.
The school lists have not been formally verified, but officials including President Yoon Suk Yeol have confirmed a surge of explicit deepfake content on social media. Police have launched a seven-month crackdown.
Recent attention to the problem has coincided with France’s arrest in August of Pavel Durov, the founder of the messaging app Telegram, over allegations that his platform was used for illicit activities including the distribution of child sexual abuse. South Korea’s telecommunications and broadcast watchdog said Monday that Telegram has pledged to enforce a zero-tolerance policy on illegal deepfake content.
Police say they’ve detained 387 people over alleged deepfake crimes this year, more than 80% of them teenagers. Separately, the Education Ministry says about 800 students have informed authorities about intimate deepfake content involving them this year.
Experts say the true scale of deepfake porn in the country is far bigger.
The U.S. cybersecurity firm Security Hero called South Korea “the country most targeted by deepfake pornography” last year. In a report, it said South Korean singers and actresses constitute more than half of the people featured in deepfake pornography worldwide.
The prevalence of deepfake porn in South Korea reflects various factors including heavy use of smart phones; an absence of comprehensive sex and human rights education in schools and inadequate social media regulations for minors as well as a “misogynic culture” and social norms that “sexually objectify women,” according to Hong Nam-hee, a research professor at the Institute for Urban Humanities at the University of Seoul.
Victims speak of intense suffering.
In parliament, lawmaker Kim Nam Hee read a letter by an unidentified victim who she said tried to kill herself because she didn’t want to suffer any longer from the explicit deepfake videos someone had made of her. Addressing a forum, former opposition party leader Park Ji-hyun read a letter from another victim who said she fainted and was taken to an emergency room after receiving sexually abusive deepfake images and being told by her perpetrators that they were stalking her.
The 30-year-old woman interviewed by The AP said that her doctoral studies in the United States were disrupted for a year. She is receiving treatment after being diagnosed with panic disorder and post-traumatic stress disorder in 2022.
Police said they’ve detained five men for allegedly producing and spreading fake explicit content of about 20 women, including her. The victims are all graduates of Seoul National University, the country’s top school. Two of the men, including one who allegedly sent her fake nude images in 2021, attended the same university, but she said she has no meaningful memory of them.
The woman said the images she received on Telegram used photos she had posted on the local messaging app Kakao Talk, combined with nude photos of strangers. There were also videos showing men masturbating and messages describing her as a promiscuous woman or prostitute. One photo shows a screenshot of a Telegram chatroom with 42 people where her fake images were posted.
The fake images were very crudely made but the woman felt deeply humiliated and shocked because dozens of people — some of whom she likely knows — were sexually harassing her with those photos.
Building trust with men is stressful, she said, because she worries that “normal-looking people could do such things behind my back.”
Using a smart phone sometimes revives memories of the fake images.
“These days, people spend more time on their mobile phones than talking face to face with others. So we can’t really easily escape the traumatic experience of digital crimes if those happen on our phones,” she said. “I was very sociable and really liked to meet new people, but my personality has totally changed since that incident. That made my life really difficult and I’m sad.”
Critics say authorities haven’t done enough to counter deepfake porn despite an epidemic of online sex crimes in recent years, such as spy cam videos of women in public toilets and other places. In 2020, members of a criminal ring were arrested and convicted of blackmailing dozens of women into filming sexually explicit videos for them to sell.
“The number of male juveniles consuming deepfake porn for fun has increased because authorities have overlooked the voices of women” demanding stronger punishment for digital sex crimes, the monitoring group ReSET said in comments sent to AP.
South Korea has no official records on the extent of deepfake online porn. But ReSET said a recent random search of an online chatroom found more than 4,000 sexually exploitive images, videos and other items.
Reviews of district court rulings showed less than a third of the 87 people indicted by prosecutors for deepfake crimes since 2021 were sent to prison. Nearly 60% avoided jail by receiving suspended terms, fines or not-guilty verdicts, according to lawmaker Kim’s office. Judges tended to lighten sentences when those convicted repented for their crimes or were first time offenders.
The deepfake problem has gained urgency given South Korea’s serious rifts over gender roles, workplace discrimination facing women, mandatory military service for men and social burdens on men and women.
Kim Chae-won, a 25-year-old office worker, said some of her male friends shunned her after she asked them what they thought about digital sex violence targeting women.
“I feel scared of living as a woman in South Korea,” said Kim Haeun, a 17-year-old high school student who recently removed all her photos on Instagram. She said she feels awkward when talking with male friends and tries to distance herself from boys she doesn’t know well.
“Most sex crimes target women. And when they happen, I think we are often helpless,” she said.
SACRAMENTO, Calif. — California Gov. Gavin Newsom signed a pair of proposals Sunday aiming to help shield minors from the increasingly prevalent misuse of artificial intelligence tools to generate harmful sexual imagery of children.
The measures are part of California’s concerted efforts to ramp up regulations around the marquee industry that is increasingly affecting the daily lives of Americans but has had little to no oversight in the United States.
Earlier this month, Newsom also signed off on some of the toughest laws in the country to tackle election deepfakes, though those laws are being challenged in court. California is widely seen as a potential leader in regulating the AI industry in the U.S.
The new laws, which received overwhelming bipartisan support, close a legal loophole around AI-generated imagery of child sexual abuse and make it clear child pornography is illegal even if it’s AI-generated.
Current law does not allow district attorneys to go after people who possess or distribute AI-generated child sexual abuse images if they cannot prove the materials are depicting a real person, supporters said. Under the new laws, such an offense would qualify as a felony.
“Child sexual abuse material must be illegal to create, possess, and distribute in California, whether the images are AI generated or of actual children,” Democratic Assemblymember Marc Berman, who authored one of the bills, said in a statement. “AI that is used to create these awful images is trained from thousands of images of real children being abused, revictimizing those children all over again.”
Newsom earlier this month also signed two other bills to strengthen laws on revenge porn with the goal of protecting more women, teenage girls and others from sexual exploitation and harassment enabled by AI tools. Under state law, it is now illegal for an adult to create or share AI-generated sexually explicit deepfakes of a person without their consent. Social media platforms are also required to allow users to report such materials for removal.
But some of the laws don’t go far enough, said Los Angeles County District Attorney George Gascón, whose office sponsored some of the proposals. Gascón said new penalties for sharing AI-generated revenge porn should have included those under 18, too. The measure was narrowed by state lawmakers last month to only apply to adults.
“There has to be consequences, you don’t get a free pass because you’re under 18,” Gascón said in a recent interview.
The laws come after San Francisco brought a first-in-the-nation lawsuit against more than a dozen websites offering AI tools that promise to “undress any photo” uploaded to the website within seconds.
The problem with deepfakes isn’t new, but experts say it’s getting worse as the technology to produce them becomes more accessible and easier to use. Researchers have been sounding the alarm for the past two years about the explosion of AI-generated child sexual abuse material depicting real victims or virtual characters.
The issue has prompted swift bipartisan actions in nearly 30 states to help address the proliferation of AI-generated sexually abusive materials. Some of them include protection for all, while others only outlaw materials depicting minors.
Newsom has touted California as an early adopter as well as regulator of AI technology, saying the state could soon deploy generative AI tools to address highway congestion and provide tax guidance, even as his administration considers new rules against AI discrimination in hiring practices.