ReportWire

Tag: X/Twitter

  • X Down For Thousands In U.S. And UK


    X, the former Twitter, suffered a significant outage in the U.S. and UK this morning.

    Thousands of users were unable to sign in to Elon Musk's site this morning, as X wouldn't load in either its app or its website form.

    According to the monitoring website Downdetector, reports of outages began to accrue at 8:14 a.m. ET today, with the number spiking considerably by 8:29 a.m., when 39,561 reports were filed with the site. The number dropped to 31,918 by 8:59 a.m. and to 28,673 by 9:09 a.m.

    About 53% of the user submissions indicated the problems were with the app, with 21% citing timeline issues and the remainder reporting website accessibility issues.

    Some media reports indicate that X outages have been reported in India as well.

    There’s been no official confirmation yet from X.


    Greg Evans


  • Here’s When Elon Musk Will Finally Have to Reckon With His Nonconsensual Porn Generator


    It has been over a week now since users on X began en masse using the AI model Grok to undress people, including children, and the Elon Musk-owned platform has done next to nothing to address it. Part of the reason for that is the fact that, currently, the platform isn’t obligated to do a whole lot of anything about the problem.

    Last year, Congress enacted the Take It Down Act, which, among other things, criminalizes nonconsensual sexually explicit material and requires platforms like X to provide an option for victims to request that content using their likeness be taken down within 48 hours. Democratic Senator Amy Klobuchar, a co-sponsor of the law, posted on X, “No one should find AI-created sexual images of themselves online—especially children. X must change this. If they don’t, my bipartisan TAKE IT DOWN Act will soon require them to.”

    Note the “soon” in that sentence. The requirement within the law for platforms to create notice and removal systems doesn’t go into effect until May 19, 2026. Currently, neither X (the platform where the images are being generated via posted prompts and hosted) nor xAI (the company responsible for the Grok AI model that is generating the images) has formal takedown request systems. X has a formal content takedown request procedure for law enforcement, but general users are advised to go through the Help Center, where it appears users can only report a post as violating X’s rules.

    If you’re curious just how likely the average user is to get one of these images taken down, just ask Ashley St. Clair how well her attempts went when she flagged a nonconsensual sexualized image of her that was shared on X. St. Clair has about as much access as anyone to make a personal plea for a post’s removal—she is the mother of one of Elon Musk’s children and has an X account with more than one million followers. “It’s funny, considering the most direct line I have and they don’t do anything,” she told The Guardian. “I have complained to X, and they have not even removed a picture of me from when I was a child, which was undressed by Grok.”

    The image of St. Clair was eventually removed, seemingly after it was widely reported by her followers and given attention in the press. But St. Clair now claims she was thanked for her efforts to raise this issue by being restricted from communicating with Grok and having her X Premium membership revoked. Premium allows her to get paid based on engagement. Grok, which has become the default source of information on this whole situation, despite the fact that it is an AI model incapable of speaking for anyone or anything, explained in a post, “Ashley St. Clair’s X checkmark and Premium were likely removed due to potential terms violations, including her public accusations against Grok for generating inappropriate images and possible spam-like activity.”

    Enforcement outside of the Take It Down Act is possible, though less straightforward. Democratic Senator Ron Wyden suggested that the material generated by Grok would not be protected under Section 230 of the Communications Decency Act, which typically grants tech platforms immunity from liability for the illegal behavior of users. Of course, it’s unlikely the Trump administration’s Department of Justice would pursue a case against Musk’s companies, leaving attempts at enforcement up to the states.

    Outside of the US, some governments are taking the matter much more seriously. Authorities in France, Ireland, the United Kingdom, and India have all started looking into the nonconsensual sexual images generated by Grok and may eventually bring charges against X and xAI.

    But it certainly doesn’t seem like the head of X and xAI is taking the matter all that seriously. As Grok was generating sexual images of children, Elon Musk, the CEO of both companies involved in this scandal, was actively reposting content created as part of the trend, including AI-generated images of a toaster and a rocket in a bikini. Thus far, the extent of X’s acknowledgement of the situation starts and ends at blaming the users. In a post from X Safety, the company said, “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content,” but took no responsibility for enabling it.

    If anything, what Grok has been up to in recent weeks seems like it is probably closer to what Musk wants out of the AI. Per a report from CNN, Musk has been “unhappy about over-censoring” on Grok, including being particularly frustrated about restrictions on Grok’s image and video generator. Publicly, Musk has repeatedly talked up Grok’s “spicy mode” and derided the idea of “wokeness” in AI.

    In response to a request for comment from Gizmodo, xAI said, “Legacy Media Lies,” the latest of the automated messages that the platform has sent out since it shut down its public relations department.


    AJ Dellinger


  • Data on Sydney Sweeney Ad Controversy Shows How MAGA Weaponizes Social Trends


    There’s plenty of talk online about echo chambers and the way that certain ideas can get amplified when stuck in a silo of like-minded people. But the controversy earlier this year over American Eagle’s “Good Jeans” advertising campaign featuring Sydney Sweeney is an example of how motivated political actors can pluck otherwise insulated discourse out of parts of social media and spin it into full-blown drama to serve their own ends.

    According to data collected by open-source social intelligence platform Open Measures, pushback against the American Eagle ad campaign, which was criticized as dabbling in eugenics and winking at white supremacists, was a relatively small part of the conversation surrounding the marketing effort. From July 16 to August 12, 2025, just 6% of posts mentioning the ad included mention of its perceived racist undertones. But if you caught wind of the discourse about it, you’d think it was the only thing anyone was talking about.

    That, per Open Measures, is because right-wing accounts online spotted some of the backlash and turned it into the story. By July 27, the researchers found that conservative personalities started to boost selected posts to suggest that liberals were outraged by the ads. The accounts used this to generate backlash against what they painted as the entire Left crying “racism” about a jeans advertisement. But, as the New York Times reported in August, most of the posts that got presented as representative of a larger political ideology had fewer than 500 views before being amplified. Meanwhile, the amplification efforts were being done by accounts like LibsOfTikTok, which has 4.5 million followers on Twitter.

    The ability to take these smaller accounts offering criticism and turn them into stand-ins for the “woke left” allowed the Online Right to generate an entire news cycle about the advertisement and the supposed backlash against it, grabbing mainstream news coverage, including multiple segments on Fox News. The biggest period of conversation about the ad, according to Open Measures, wasn’t the days following its launch, but rather about two weeks later, between July 30 and August 5, when the conservative amplification was at its highest, culminating in President Donald Trump commenting on the whole situation and saying he “loved” the ad.

    Open Measures further notes, “a larger share of posts discussing the ads that also claimed the ads echoed bigoted ideologies were represented on alt-platforms with predominantly conservative communities than those without, indicating that the claims were more popular with conservative critics of liberals than with liberals themselves.”

    There were undoubtedly people levying real and genuine critiques of the American Eagle campaign, but the idea that those voices were somehow exemplary of the entirety of the Left simply doesn’t match up to the data. The Right managed to take a handful of outliers, turn them into the representatives of something bigger, and then spin up an entire effort to push back against that narrative that it amplified in the first place.


    AJ Dellinger


  • Elon Musk Finally Files Threatened Suit Over White Supremacist Ads Placement On X/Twitter 



    It wasn’t exactly the “split second” the courthouse opened this morning as promised, but Elon Musk has now filed his self-described “thermonuclear lawsuit” against Media Matters.

    “Defendant Media Matters for America is a self-proclaimed media watchdog that decided it would not let the truth get in the way of a story it wanted to publish about X Corp,” proclaimed the jury-trial-seeking complaint filed in federal court in Texas. Musk and X’s three-claim disparagement suit wants a preliminary and permanent injunction against Media Matters’ report on the alleged placing of corporate ads next to “Pro-Nazi Content.”

    Enraged about studies by the media watchdog that claimed X/Twitter is placing the advertising of major brands and big corporations alongside such vile material, Musk lashed out with his legal threats late on November 17. More fallout from the Media Matters study saw Apple, Disney, Comcast, Paramount Global, Warner Bros Discovery and others suspend their ad buys and presence on X/Twitter.

    Condemned by the White House last week for his additional amplification of an antisemitic screed, Musk clearly wanted to shift the narrative. First, as more deep-pocketed advertisers jumped ship, the Tesla/SpaceX boss took to his social media platform to lash out: “Many of the largest advertisers are the greatest oppressors of your right to free speech.” Then he swore to take down Media Matters and their so-called “fraudulent attack on our company” while kind of confirming the truth of their research at the same time.

    After Musk threatened late last week to unleash his lawsuit first thing Monday, Media Matters President Angelo Carusone took a swing back. “Far from the free speech advocate he claims to be, Musk is a bully who threatens meritless lawsuits in an attempt to silence reporting that he even confirmed is accurate,” Carusone said. “Musk admitted the ads at issue ran alongside the pro-Nazi content we identified. If he does sue us, we will win.” 

    Today, Carusone added: “Elon Musk has spent the last few days making meritless legal threats, elevating bizarre conspiracy theories, and lobbing vicious personal attacks against his ‘enemies’ online. Even if he does not follow through with his threat to sue, the volatility of his actions reinforces why major brands are rightly skittish of partnering with X. We are going to continue our work undeterred. If he sues us, we will win.”

    Now that an actual suit has been filed, Musk will have to hand over material on the platform’s algorithms, internal ad decisions, and more, a pulling back of the curtain that could prove make-or-break in the matter.

    Coming off a weekend that also saw yet another SpaceX launch end in an explosion, Musk took to X/Twitter repeatedly this morning to take another pre-litigation swipe at Media Matters.

    This is not Musk’s first lawsuit against a media watchdog. 

    Last summer, X/Twitter sued the Center for Countering Digital Hate for defamation over the group’s reports on the platform’s lack of hate speech guardrails. On November 16, the group filed a motion to dismiss and an anti-SLAPP motion, arguing that Musk’s platform had “ginned up baseless claims” in taking issue with how CCDH gathered its data.

    “Apparently unhappy with how it is faring in the marketplace of ideas, X Corp. asks this court to shut that marketplace down—to punish the CCDH Defendants for their speech and to silence others who might speak up about X Corp. in the future,” the group’s attorneys wrote. “Thus, X Corp. seeks ‘at least tens of millions of dollars’ in damages based on how advertisers reacted to what the CCDH Defendants said about X Corp. in their public reports.”

    The erratic Musk has previously threatened legal action against other critics over the years, but didn’t follow through. In September, when the Anti-Defamation League sharply criticized X/Twitter for increasing antisemitic and other hate speech, Musk promised to sue, but never did. The South African billionaire blamed the ADL for an advertising decline of 60% on the social media platform he bought for over $44 billion last year.

    Musk came up again Monday at the White House, which reiterated its criticism of his antisemitic retweet.


    Dominic Patten
