ReportWire

Tag: content moderation

  • Looks Like American TikTok’s Problems Are Sending Users Flocking to Alternatives

    According to Appfigures, the top five free iPhone apps right now in the U.S. are:

    1. ChatGPT
    2. JumpJumpVPN
    3. V2Box
    4. UpScrolled
    5. Threads

    Yesterday, Apple blogger John Gruber of Daring Fireball posted the overall most popular iPhone apps for all of 2025, and the top five were:

    1. ChatGPT
    2. Threads
    3. Google
    4. TikTok 
    5. WhatsApp

    I’m not the first person to point this out, but it’s not exactly a stretch to infer that the three apps that have suddenly squeezed in between ChatGPT and Threads are on the list due to dissatisfaction with TikTok. Two are VPN apps, which can theoretically be used to access TikTok from a virtual network in a country where the U.S. version of TikTok is unnecessary, and one, UpScrolled, is an Australian video and text sharing app that recently went viral.  

    To refresh your memory on what’s going on with TikTok, after years of trying to force Chinese-owned ByteDance to relinquish ownership and let a U.S.-friendly buyer take over, a legal entity was created earlier this month that can take ownership of TikTok, with Adam Presser as its new CEO. This allows TikTok to comply with a new U.S. law essentially requiring TikTok to be run by a U.S. company or be banned.

    But this entity, a complex joint corporate venture in charge of U.S. operations for TikTok, appears from the outside to be struggling to keep everything in order, amid the handoff from TikTok’s Singapore base of operations (U.S. TikTok data was already largely housed in the U.S., so it’s not clear if this transition actually involves any large, burdensome data transfers).

    According to an X post from TikTok, the problem is that there’s been “a major infrastructure issue triggered by a power outage at one of our U.S. data center partner sites,” and there may be various glitches, service slowdowns, failures, and issues with user metrics. Oracle has further clarified that the TikTok issue stems from a weather-related blackout at one of its data centers. Oracle owns 15 percent of the new TikTok U.S. venture.

    The issues TikTok is referring to dovetail with the problems users have been describing, like videos that sit in review indefinitely and posts that get low or zero view counts, often despite high numbers for other engagement metrics like comments or shares. Other general issues that fit with a data center interruption include a possible lack of analytics in TikTok Studio, livestreamers apparently getting random messages saying they need to stop streaming immediately, and irrelevant search results.

    However, the hiccups at TikTok are, at least in part, being perceived as the technical consequences of a right-wing takeover. That’s in part because that 15 percent of TikTok U.S. now held by Oracle is controlled by the right-wing billionaire Larry Ellison, and the ownership transition is of course being shepherded along by the Trump Administration. And that’s not to mention the fact that the Biden-era push to ban TikTok emerged amid paranoia that it was turning the youth into Maoist, Hamas-supporting terrorists.

    But have the rules on TikTok tangibly changed? For all anyone knows, no. It has re-emerged in the past few days that at some point in the past, new TikTok CEO Adam Presser talked publicly about an idiosyncratic and clunky moderation practice around Israel—treating the word “Zionist” as hate speech if it carries negative connotations. But this isn’t some new TikTok policy rolling out to coincide with the transition to U.S. ownership (although, rather troublingly, at least one answer on X from Grok strongly implies that it is). It’s more likely part of a rule change around Zionism that apparently rolled out in 2024.

    Gizmodo reached out to TikTok’s U.S. joint venture for clarification about the causes of the platform’s recent problems. In a reply, we received links to statements on X, including the one from Oracle. We followed up, specifically asking if any content rules had been changed since the ownership transition. We will update if we hear back.

    Around Sunday, TikTok users started writing that they felt like their political posts were being censored.

    “TikTok has been under new leadership for like a day and I made a slideshow with posts from the ICE rally today and it immediately got out under review and is not being published,” wrote Bluesky user @pnwpolicyangel.bsky.social.  

    Instagram user erinmayequade wrote:

    “TikTok is cooked. They won’t even post my last two videos — I can see them, but anyone else who goes to my profile won’t even see them. Overnight, our federal government has silenced and suppressed dissent [on] one of our largest platforms. Not just content, but everything from certain people.”

    It would be corporate malpractice to roll out such insidious and restrictive policies right out of the gate like this, particularly amid the present backdrop of political upheaval. Once again, TikTok still has not commented on this speculation from some of its users.

    But if it’s true that users are flocking to other options for political reasons despite no hard evidence that the new TikTok U.S. joint venture has already begun some kind of crackdown on political speech, that also doesn’t necessarily mean they’re misled. They might just expect changes along the lines of what happened at Twitter when Elon Musk took over. Content standards there took a hard right turn very quickly. So with that in mind, some TikTok users might just be leaving preemptively at the first sign of an annoying glitch in order to avoid enduring even worse changes that they perceive to be on the horizon. 

    Mike Pearl

  • TikTok finalizes a deal to form a new American entity

    TikTok has finalized a deal to create a new American entity, avoiding the looming threat of a ban in the United States that has been in discussion for years on the platform now used by more than 200 million Americans.

    The social video platform company signed agreements with major investors including Oracle, Silver Lake and the Emirati investment firm MGX to form the new TikTok U.S. joint venture. The new version will operate under “defined safeguards that protect national security through comprehensive data protections, algorithm security, content moderation and software assurances for U.S. users,” the company said in a statement Thursday. American TikTok users can continue using the same app.

    President Donald Trump praised the deal in a Truth Social post, thanking Chinese leader Xi Jinping specifically “for working with us and, ultimately, approving the Deal.” Trump added that he hopes “that long into the future I will be remembered by those who use and love TikTok.”

    Adam Presser, who previously worked as TikTok’s head of operations and trust and safety, will lead the new venture as its CEO. He will work alongside a seven-member, majority-American board of directors that includes TikTok’s CEO Shou Chew.

    The deal ends years of uncertainty about the fate of the popular video-sharing platform in the United States. After wide bipartisan majorities in Congress passed — and President Joe Biden signed — a law that would ban TikTok in the U.S. if it did not find a new owner in the place of China’s ByteDance, the platform was set to go dark on the law’s January 2025 deadline. For several hours, it did. But on his first day in office, President Donald Trump signed an executive order to keep it running while his administration sought an agreement for the sale of the company.

    “China’s position on TikTok has been consistent and clear,” Guo Jiakun, a Chinese Foreign Ministry spokesperson in Beijing, said Friday about the TikTok deal and Trump’s Truth Social post, echoing an earlier statement from the Chinese embassy in Washington.

    Apart from an emphasis on data protection, with U.S. user data being stored locally in a system run by Oracle, the joint venture will also focus on TikTok’s algorithm. The content recommendation formula, which feeds users specific videos tailored to their preferences and interests, will be retrained, tested and updated on U.S. user data, the company said in its announcement.

    The algorithm has been a central issue in the security debate over TikTok. China previously maintained the algorithm must remain under Chinese control by law. But the U.S. regulation passed with bipartisan support said any divestment of TikTok must mean the platform cuts ties — specifically the algorithm — with ByteDance. Under the terms of this deal, ByteDance would license the algorithm to the U.S. entity for retraining.

    The law prohibits “any cooperation with respect to the operation of a content recommendation algorithm” between ByteDance and a new potential American ownership group, so it is unclear how ByteDance’s continued involvement in this arrangement will play out.

    “Who controls TikTok in the U.S. has a lot of sway over what Americans see on the app,” said Anupam Chander, a professor of law and technology at Georgetown University.

    Oracle, Silver Lake and MGX are the three managing investors, each holding a 15% share. Other investors include the investment firm of Michael Dell, the billionaire founder of Dell Technologies. ByteDance retains 19.9% of the joint venture.

    ___

    Associated Press writers Chan Ho-him in Hong Kong and Didi Tang in Washington contributed to this report.

  • Meta’s Use of the Term ‘PG-13’ Has Run Afoul of the U.S. Movie-Rating Organization

    The MPA would like a word with Meta about its new filters for young users.

    Lucas Ropek

  • AI Slop Is Flooding Medium

    Some Medium writers and editors do applaud the platform’s approach to AI. Eric Pierce, who founded Medium’s largest pop culture publication Fanfare, says he doesn’t have to fend off many AI-generated submissions and that he believes that the human curators of Medium’s boost program help highlight the best of the platform’s human writing. “I can’t think of a single piece I’ve read on Medium in the past few months that even hinted at being AI-created,” he says. “Increasingly, Medium feels like a bastion of sanity amid an internet desperate to eat itself alive.”

    However, other writers and editors believe they currently still see a plethora of AI-generated writing on the platform. Content marketing writer Marcus Musick, who edits several publications, wrote a post lamenting how what he suspects to be an AI-generated article went viral. (Reality Defender ran an analysis on the article in question and estimated it was 99 percent “likely manipulated.”) The story appears widely read, with over 13,500 “claps.”

    In addition to spotting possible AI content as a reader, Musick also believes he encounters it frequently as an editor. He says he rejects around 80 percent of potential contributors a month because he suspects they’re using AI. He does not use AI detectors, which he calls “useless,” instead relying on his own judgment.

    While the volume of likely AI-generated content on Medium is notable, the moderation challenge the platform faces—how to surface good work and keep junk banished—is one that has always plagued the greater web. The AI boom has simply supercharged the problem. While click farms have long been an issue, for example, AI has handed SEO-obsessed entrepreneurs a way to swiftly resurrect zombie media outlets by filling them with AI slop. There’s a whole subgenre of YouTube hustle-culture entrepreneurs creating get-rich-quick tutorials encouraging others to create AI slop on platforms like Facebook, Amazon Kindle, and, yes, Medium. (Sample headline: “1-Click AI SEO Medium Empire 🤯.”)

    “Medium is in the same place as the internet as a whole right now. Because AI content is so quick to generate that it is everywhere,” says plagiarism consultant Jonathan Bailey. “Spam filters, the human moderators, et cetera—those are probably the best tools they have.”

    Stubblebine’s argument—that it doesn’t necessarily matter whether a platform contains a large amount of garbage, as long as it successfully amplifies good writing and limits the reach of said garbage—is perhaps more pragmatic than any attempt to wholly banish AI slop. His moderation strategy may very well be the most savvy approach.

    It also suggests a future in which the Dead Internet theory comes to fruition. The theory, once the domain of extremely online conspiratorial thinkers, argues that the vast majority of the internet is devoid of real people and human-created posts, instead clogged with AI-generated slop and bots. As generative AI tools grow more commonplace, platforms that give up on trying to blot out bots will incubate an online world in which human-created work becomes increasingly hard to find.

    Kate Knibbs

  • X’s First Transparency Report Since Elon Musk’s Takeover Is Finally Here

    Today, X released the company’s first transparency report since Elon Musk bought the company, formerly Twitter, in 2022.

    Before Musk’s takeover, Twitter would release transparency reports every six months. These largely covered the same ground as the new X report, giving specific numbers for takedowns, government requests for information, and content removals, as well as data about which content was reported and, in some cases, removed for violating policies. The last transparency report available from Twitter covered the second half of 2021 and was 50 pages long. (X’s is a shorter 15 pages, but requests from governments are also listed elsewhere on the company’s website and have been consistently updated to remain in compliance with various government orders.)

    Comparing the 2021 report to the current X transparency report is a bit difficult, as the way the company measures different things has changed. For instance, in 2021, 11.6 million accounts were reported. Of this 11.6 million, 4.3 million were “actioned” and 1.3 million were suspended. According to the new X report, there were over 224 million reports of both accounts and individual pieces of content, but the result was 5.2 million accounts being suspended.

    While some numbers remain seemingly consistent across the reports—reports of abuse and harassment are, somewhat predictably, high—in other areas, there’s a stark difference. For instance, in the 2021 report, accounts reported for hateful content accounted for nearly half of all reports, and 1 million of the 4.3 million accounts actioned. (The reports used to be interactive on the website; the current PDF no longer allows users to flip through the data for more granular breakdowns.) In the new X report, the company says it has taken action on only 2,361 accounts for posting hateful content.

    But this may be due to the fact that X’s policies have changed since it was Twitter, which Theodora Skeadas, a former member of Twitter’s public policy team who helped put together its Moderation Research Consortium, says might change the way the numbers look in a transparency report. For instance, last year the company changed its policies on hate speech, which previously covered misgendering and deadnaming, and rolled back its rules around Covid-19 misinformation in November of 2022.

    “As certain policies have been modified, some content is no longer violative. So if you’re looking at changes in the quality of experience, that might be hard to capture in a transparency report,” she says.

    X has also lost users since Musk’s takeover, further complicating what the new reality of the platform might look like. “If you account for changing usage, is it a lower number?” she asks.

    After taking over the company in October of 2022, Musk fired the majority of the company’s trust and safety staff as well as its policy staff, the people who make the platform’s rules and ensure they’re enforced. Under Musk, the company also began charging for its API, making it harder for researchers and nonprofits to access X data to see what was really going on on the platform. This may also account for changes between the two reports.

    Vittoria Elliott

  • Why It’s So Hard to Fully Block X in Brazil

    The social network X has been largely inaccessible in Brazil since Saturday, after the country’s Supreme Court ordered all mobile and internet service providers to block the platform. The court order followed a months-long dispute between Judge Alexandre de Moraes and X CEO Elon Musk over the company’s misinformation, hate speech, and moderation policies.

    With Brazil’s population of 215 million people, a mature democracy, a sprawling land mass, and more than 20,000 internet service providers, it isn’t straightforward to block a web platform in the South American nation. And while the biggest ISPs have implemented the ban, many are still scrambling to comply with the order, leaving a patchwork of access to the site.

    “Brazil has made headway blocking X on the main internet providers, but our telemetry indicates there’s a long tail of local and regional ISPs where the service is still available,” says Isik Mater, director of research at the internet censorship analysis group NetBlocks.

    The Open Observatory of Network Interference reported that a similar progression played out when Brazil’s Federal Police obtained a court order in April 2023 for ISPs to block the communication platform Telegram because it would not fully share information about users involved in neo-Nazi group chats. Some large ISPs began blocking Telegram immediately; “however, the block was not implemented by all ISPs in Brazil, nor was it implemented in the same way,” the group wrote. “This suggests lack of coordination between providers, and that each ISP implemented the block autonomously.”

    A similar progression has been playing out with the X ban. Brazil’s 20,000 ISPs make for a notably competitive market, but only a few have nationwide infrastructure. About 40 percent are tiny regional providers with 5,000 customers or fewer. The human and digital rights watchdog Freedom House rates Brazil’s internet freedom as “partly free” and trending more restrictive, because of the country’s far-reaching efforts to crack down on political misinformation in recent years and its three-day ban on Telegram. Brazil also blocked the secure communication platform WhatsApp in December 2015 and again in May 2016 because it did not respond to similar data requests.

    Brazil’s National Telecommunications Agency ANATEL did not respond to WIRED’s multiple requests for comment.

    Unlike in countries including Russia, Iran, and China, there is currently no legal apparatus or technical infrastructure by which the Brazilian government can systematically and comprehensively restrict access to particular websites or online platforms or impose connectivity blackouts on its citizens.

    Reports indicate that many Brazilian ISPs that have implemented the ban are using the technique known as “DNS filtering” to block access to X. The Domain Name System is the internet’s phonebook for looking up the IP addresses associated with URLs like www.wired.com. DNS queries are sent to a DNS “resolver” that does the IP address lookups, and ISPs can configure their resolvers to filter or block requests for particular websites.

    Mobile apps like X’s Android and iOS apps don’t rely on DNS, though, so DNS filtering alone is not enough to block all connections to a web platform. Some Brazilian ISPs seem to also be using IP address “sinkholing”—redirecting online traffic to a different server than the users intended to visit—as a way to send traffic meant for X into the abyss.

    “We’re seeing variation by provider in Brazil and right now it looks they’re each trying their own thing to see what works,” NetBlocks’ Mater says. “Brazil has a diverse network infrastructure with lots of ways for data to enter and leave the country, so there isn’t that centralized choke point and ‘kill switch’ we see in [some] authoritarian-leaning countries.”

    VPN usage has surged in Brazil this week under the ban as a way around ISP attempts to block X, but the court order ban includes a provision that people could be charged a fine of 50,000 reais—about $8,900—per day for using circumvention tools like VPNs.

    Lily Hay Newman

  • Mark Zuckerberg Vows to Be Neutral–While Tossing Gifts to Trump and the GOP

    This week Mark Zuckerberg sent a letter to Jim Jordan, the chair of the House Judiciary Committee. For months, the GOP-led committee has been on a crusade to prove that Meta, via its once-eponymous Facebook app, engaged in political sabotage by taking down right-wing content. Its investigation has involved thousands of documents and interviews with multiple employees, but it has failed to locate a smoking gun. Now, under the guise of offering his take on the subject, Zuckerberg has delivered a mea culpa of a letter in which he seems to indicate that there was something to the GOP conspiracy theory.

    Specifically, he said that in 2021 the Biden administration asked Meta “to censor some Covid-related content.” Meta did take the posts down, and Zuckerberg now regrets the decision. He also conceded that it was wrong to take down some content regarding Hunter Biden’s laptop, which the company did after the FBI warned that the reports might be Russian disinformation.

    What stood out to me, besides the letter’s simpering tone, was how Zuckerberg used the word “censor.” For years the right has been using that word to describe what it regards as Facebook’s systematic suppression of conservative posts. Some state attorneys general have even used that trope to argue that the company’s content should be regulated, and Florida and Texas have passed laws to do just that. Facebook has always contended that the First Amendment is about government suppression, and by definition its content decisions could not be characterized as such. Indeed, the Supreme Court dismissed the lawsuits and blocked the laws.

    Now, by using that term to describe the removal of the Covid material, Zuckerberg seems to be backing down. After years of insisting that, right or wrong, a social media company’s content decisions did not deprive people of First Amendment rights—and in fact arguing that by making such decisions, the company was exercising its own free speech rights—Zuckerberg is now handing its conservative critics just what they wanted.

    I asked Meta spokesperson Andy Stone if the company now agrees with the GOP that some of its decisions to take down content can be referred to as “censoring.” Stone said that Zuckerberg was referring to the government when he used that term. But he also pointed me to Zuckerberg’s affirmation that the ultimate decision to remove the posts was Meta’s own. (Responding to the Zuckerberg letter, the White House said, “When confronted with a deadly pandemic, this Administration encouraged responsible actions to protect public health and safety,” and left the final decision to Facebook.)

    Meta can’t have it both ways. The letter is clear: Zuckerberg said the government pressured Meta to “censor” some Covid content. Meta took that material down. Ergo, Meta now characterizes some of its own actions as censorship. Seizing on this, the GOP members of the Judiciary Committee quickly tweeted that Zuckerberg has now outright admitted “Facebook censored Americans.”

    Stone did say that Meta still does not consider itself a censor. So is Meta disputing that GOP tweet? Stone wouldn’t comment on it. It seems that Meta will offer no pushback while GOP legislators and right-wing commentators crow that Facebook now concedes that it blatantly censored conservatives as a matter of policy.

    Meta’s CEO presented Jordan and the GOP with another gift in his letter, involving his private philanthropy. During the 2020 election, Zuckerberg helped fund nonpartisan initiatives to protect people’s right to vote. Republicans criticized Zuckerberg’s effort as aiding the Democrats. Zuckerberg still insists he wasn’t advocating that people vote a certain way, just ensuring they were free to cast ballots. But, he wrote Jordan, he recognized that some people didn’t believe him. So, apparently to indulge those ill-informed or ill-intentioned critics, he now vows not to fund nonpartisan voting efforts during this election cycle. “My goal is to be neutral and not play a role one way or another—or even appear to play a role,” he wrote.

    Steven Levy

  • Telegram Faces a Reckoning. Other Founders Should Beware

    “[Elon] Musk and fellow executives should be reminded of their criminal liability,” said Bruce Daisley, a former executive at Twitter, who worked at the company’s British office, days after British protesters tried to set fire to a hotel for asylum seekers.

    But Telegram has provoked politicians more than any other platform. What could be called the company’s uncollaborative approach has put the platform—part messaging app, part social media network—on a collision course with governments around the world.

    The case in France is far from the first time Telegram has been reprimanded by authorities for its refusal to cooperate. Telegram has been temporarily suspended twice in Brazil, in 2022 and 2023, both times after being accused of failing to cooperate with legal orders.

    In 2022, similar events unfolded in Germany when the country’s interior minister also threatened to ban the app after letters, suggestions of fines, and even a Telegram-dedicated task force all went unanswered, according to the authorities, who were concerned about anti-lockdown groups using the app to discuss political assassinations. Multiple German newspapers, including the tabloid Bild, sent journalists to the office Telegram states as its headquarters in Dubai and found it deserted, its doors locked.

    Earlier in 2024, Spain briefly blocked Telegram after broadcasters claimed copyrighted material was circulating on the app. Judge Santiago Pedraz of Spain’s National High Court said his decision to ban was based on Telegram’s lack of cooperation with the case.

    The accusations in France are very specific to Telegram’s way of working, says Arne Möhle, cofounder of encrypted email service Tuta. “Of course it’s important to be independent but at the same time, it’s also important to comply with authority requests if they are valid,” he says. “It’s important to show [criminal activities are] something you don’t want to support with your privacy-oriented service.”

    France’s decision to charge Durov is a rare move to link a tech executive to crimes taking place on their platform, but it is not without precedent. Durov joins the ranks of the founders of The Pirate Bay, who were sentenced by Swedish authorities to a year in prison in 2009; and the German-born founder of MegaUpload, Kim Dotcom, who in August finally lost a 12-year battle against extradition to the US from his home in New Zealand. He plans to appeal.

    Yet Durov is the first of his generation of founders behind major social media platforms to face such severe consequences. What happens next will carry lessons for them all.

    Bastien Le Querrec, legal officer at French digital freedom group La Quadrature du Net, does not defend Telegram’s lack of moderation. But he is concerned that the case against Durov reflects the huge pressure both social media and messaging apps are under right now to collaborate with law enforcement.

    “[The prosecutor] refers to a provision in French law that requires platforms to disclose any useful document that could allow law enforcement to do interception of communication,” he says. “To our knowledge, it’s the first time that a platform, whatever its size, would be prosecuted [in France] because it refused to disclose such documents. It’s a very worrying precedent.”


    Morgan Meaker


  • Telegram Founder Pavel Durov Charged Over Alleged Criminal Activity on the App


    Telegram CEO Pavel Durov is forbidden from leaving French territory after being charged for complicity in running an online platform that allegedly enabled the spread of sexual images of children, creating an uncertain future for the messaging app that has become one of the world’s biggest social media platforms.

    Durov was arrested on Saturday at 8 pm local time after his private jet landed at an airport near Paris. He was then detained for four days as part of an investigation into alleged criminal activity taking place on Telegram. On Wednesday evening, local time, he was indicted and forbidden from leaving the country, according to a statement released by the Paris Prosecutor. He was released under judicial supervision, the statement said, and must post a €5 million ($5.5 million) bail and report to a police station in France twice a week.

    The Telegram founder was placed under formal investigation for a range of charges related to child sexual abuse material, drug trafficking, importing cryptology without prior declaration, as well as a “near-total absence” of cooperation with French authorities, Laure Beccuau, the Paris prosecutor, said on Wednesday.

    French authorities cited an “almost total lack of response from Telegram to legal requests,” Beccuau noted. “This is what led JUNALCO [the National Jurisdiction for the Fight against Organized Crime] to open an investigation into the possible criminal liability of this messaging service’s executives in the commission of these offenses,” she said. The preliminary investigation began in February 2024, with initial inquiries coordinated by the OFMIN, an agency set up to prevent violence against minors, her statement added.

    “It is absurd to claim that a platform or its owner is responsible for the abuse of that platform,” Telegram said on Sunday, before Durov was charged. The platform, which has 900 million active users, did not immediately respond to a request for comment on the charges.

    Since his arrest, both the UAE and Russia have requested consular access to Durov, who has citizenship in both countries. It’s unclear why Durov, who also obtained a French passport after leaving Russia, was in France. “I don’t take holidays,” he said on his Telegram channel in June.

    Russia has claimed, without evidence, that Durov’s arrest is an attempt by the United States to exert influence over the platform via France. “Telegram is one of the few and at the same time the largest Internet platforms over which the United States has no influence,” Vyacheslav Volodin, the chairman of Russia’s State Duma, the lower house of parliament, said on the app.

    France’s president, Emmanuel Macron, said on Monday that Durov’s detention is “in no way a political decision.” “It is up to the judiciary, in full independence, to enforce the law,” he added in a post on X. The European Commission tells WIRED the arrest was conducted under French criminal law and is not connected to new European regulation for tech platforms. “We are closely monitoring the developments related to Telegram and stand ready to cooperate with the French authorities should it be relevant,” a spokesperson says, declining to be named.


    Morgan Meaker


  • Pavel Durov’s Arrest Leaves Telegram Hanging in the Balance


    “Civil society has had a complicated relationship with Telegram over the years,” says Natalia Krapiva, a lawyer at the digital rights group Access Now. “We have defended Telegram against attempts by authoritarian regimes to block and coerce the platform into providing encryption keys, but we have also been raising alarms about Telegram’s lack of human rights policies, reliable channel of communication, and remedy for its users.” Krapiva stresses that French authorities may try to force Durov to provide Telegram’s encryption keys to decrypt private messages, “which Russia has already tried to do in the past.”

    The hashtag #FreePavel has been spreading online, including via X’s owner, Elon Musk, who has posted numerous times about Durov’s arrest. “POV: It’s 2030 in Europe and you’re being executed for liking a meme,” he wrote on Saturday night in response to a post about the Telegram CEO’s detention. “The need to protect free speech has never been more urgent,” Robert F. Kennedy Jr., who on Friday endorsed Donald Trump for US president, wrote on X, where he referred to Telegram as “uncensored” and “encrypted.”

    While Telegram is frequently described as an encrypted messaging app, messages are not end-to-end encrypted by default, and senior executives previously told WIRED that they view the platform as a social network. This is largely due to Channels—a one-to-many broadcast feature that allows unlimited subscribers to view posts.

    One of the posts that has gained the most traction on X was by right-wing former Fox News journalist Tucker Carlson, who alluded to the oft-repeated but debatable story that Durov left Russia because the government tried to take over his company. “But in the end, it wasn’t Putin who arrested him for allowing the public to exercise free speech. It was a western country,” Carlson wrote in a post that has so far been viewed at least 5.7 million times. Carlson also linked to an hour-long interview he did with Durov earlier this year, one of the first and only interviews the Telegram CEO has given in recent years.

    In Durov’s absence, Telegram’s future looks uncertain to some: “I am in shock, and everyone close to Pavel feels the same,” says Georgy Lobushkin, former head of PR at VK, a social network Durov cofounded, who is still in regular contact with Durov. “Nobody was prepared for this situation.” Asked if he worried about Telegram’s future and who could run the company in Durov’s absence, Lobushkin says: “[I] worry a lot.”

    TF1Info, which first broke the news of Durov’s arrest in France, reported that it was “beyond doubt” that Durov would remain in custody during the investigation. “Pavel Durov will end up in pretrial detention, that’s for sure,” one unnamed investigator told reporters.

    “No one in Telegram was prepared for such a scenario,” says Anton Rozenberg, who worked with Durov from the early days of VK in 2007, before working for Telegram from 2016 to 2017. Rozenberg expected Durov to secure the best legal defense money could buy. “But without him, the messenger may have huge problems with management, all crucial decisions and even payments,” he added, given Durov’s personal involvement in running the company. Rozenberg saw no obvious replacement for Durov, who makes key decisions on nearly all matters at Telegram—financing, development strategies, product design, monetization, and content moderation policy.

    For now, everything can be expected to continue as normal, says Elies Campo, who directed Telegram’s growth, business, and partnerships from 2015 to 2021. “Depending on how long this is going to last, it’s like a government, right? There’s this structure, there’s self-momentum.” Campo adds that the company’s staff is small enough—around 60 employees—that the infrastructure won’t be affected.

    The challenge, Campo concedes, would be if Durov needs to be physically present to pay providers—something Rozenberg also flagged.

    “As far as I know, Pavel did the payments,” Campo says. “So what’s going to happen when there needs to be some payments for infrastructure providers, or providers in terms of connectivity—and he’s still under arrest?”


    Darren Loucaides


  • Telegram’s Founder Reportedly Arrested in France Over Moderation Policy


    Telegram’s cofounder Pavel Durov was arrested on Saturday night at an airport several miles north of Paris, according to French news outlets BFMTV and TF1. Both outlets report that the billionaire CEO had arrived from Azerbaijan by private jet, and that he was the subject of a French search warrant over the app’s lack of moderators, and its alleged use in drug trafficking, money laundering, and the distribution of child abuse material.

    So far, neither French authorities nor Durov have put out statements on the arrest. However, Telegram commented on X, formerly Twitter, that “Durov has nothing to hide,” while Russian officials reportedly condemned the detainment as an attack on free speech. X owner Elon Musk also posted about moderation and free speech following the reports.

    A post on Telegram’s X account said the company “abides by EU laws” and its moderation efforts are “within industry standards.” The post continued, “It is absurd to claim that a platform or its owner are responsible for abuse of that platform.”

    The company added that it is “awaiting a prompt resolution.”

    Durov was born in Leningrad (now Saint Petersburg) and is a naturalized citizen of France and the United Arab Emirates. Before Telegram, the tech executive cofounded VKontakte, Russia’s answer to Facebook. Durov reportedly sold his stake in VKontakte and left Russia in 2014 over state censorship demands. Telegram is currently headquartered in Dubai, and Durov said in April that the app has nearly a billion users.

    Durov is 39 years old and worth an estimated $15.5 billion, according to Forbes. In July, the tech executive said he was a sperm donor, had “over 100 biological kids,” and planned to “open-source [his] DNA.”

    Telegram has reportedly censored content in the past, including Hamas channels and “public calls for violence” related to the attack on the U.S. Capitol. Yet, governments frequently clash with Telegram over its stance on content moderation and privacy, as well as its use by protestors. Russia attempted to block Telegram after the firm refused to hand over encryption keys in 2018. A year later, Durov claimed China had launched cyber attacks against the service to suppress protests in Hong Kong. Cuba blocked the app in 2021 amid protests over the government’s response to Covid-19, and two years later, a Spanish court briefly blocked Telegram access following copyright complaints from local media groups.


    Harri Weber


  • How Watermelon Cupcakes Kicked Off an Internal Storm at Meta


    Williams in her note explained that “‘Prayers for …’ any location where there is a war in process might be taken down, but prayers for those impacted by a natural disaster, for example, might stay up.” She continued, “We know people may not agree with this approach, but it’s one of the trade-offs we made to ensure we maintain a productive place for everyone.”

    Pain and Distress

    Meanwhile, Arab and Muslim workers expressed disappointment that last month’s World Refugee Week commemorations inside Meta included talks about human rights projects and refugee experiences and lunches featuring Ukrainian and Syrian food but nothing mentioning Palestinians. (WIRED has viewed the internal schedule for the week.)

    They were similarly dismayed that Meta’s Oversight Board, which advises on content policies, wrote in Hebrew, but not Arabic, to solicit public comments about the Palestinian human rights expression “from the river to the sea,” including whether it’s antisemitic. An Oversight Board spokesperson did not respond to a request for comment.

    The workers also remain frustrated that Meta hasn’t met their demands from December to remove the Instagram accounts of anti-hate watchdog groups such as Canary Mission and StopAntisemitism that have been shaming Palestinian supporters in alleged violation of platform rules against bullying. Leaders of PWG met with Meta executives including Nick Clegg, the president of global affairs, who vowed to keep dialog with workers open. But the accounts remain up, and Canary Mission and StopAntisemitism each have added about 15,000 followers since demands were drafted.

    Taking it as a sign of the uphill battle they face, the employees recently seized on a photograph on Instagram showing Nicola Mendelsohn, head of Meta’s Global Business Group, posing beside Liora Rez, founder and executive director of StopAntisemitism. Rez tells WIRED that her group does not hesitate calling individuals out for antisemitic views and alerting their employers, but declined further comment. Canary Mission says in an unsigned statement that “there needs to be accountability” for antisemitism.

    The disputes over Meta’s response to Gaza discussions have had cascading effects. In May, Meta’s internal community team shut down some planned Memorial Day commemorations to honor military veterans at the company. An employee asked for explanation in an internal forum with over 11,000 members, drawing a reply from Meta’s chief technology officer, Andrew Bosworth, who wrote that polarizing discussions about “regions or territories that are unrecognized” had in part required revisiting planning and oversight of all sorts of activities.

    While honoring veterans was “apolitical,” Bosworth wrote in the post seen by WIRED, the CEE rules needed to be applied consistently to survive under labor laws. “There are groups that are zealously looking for an excuse to undermine our company policies,” he wrote.

    Some Arab and Muslim workers felt Bosworth’s comments alluded to them. “I don’t want to work anywhere that is actively discriminating against my community,” says one Meta worker who’s nearly ready to leave. “It makes me sick that I work for this company.”

    Meta hasn’t let up on CEE enforcement in recent weeks. Workers remain barred from holding vigil internally. As a result, they planned to gather near the company’s New York and San Francisco offices this evening to recognize colleagues who have lost family in Gaza to the war, according to the Meta4employees Instagram account and two of the sources. They are curious to see how the company tries, if at all, to stop the memorial, which the public is invited to attend.

    Ashraf Zeitoon, who was Facebook’s head of Middle East and North Africa policy from 2014 to 2017 and still mentors many Arab employees at Meta, says discontent among those workers has soared. He used to push long-timers to quit when they were frustrated; now he has to convince recent hires to stay long enough to give the company a chance to evolve.

    “Unprecedented levels” of restrictions and enforcement have been “extremely painful and distressing for them,” Zeitoon says. It seems that the emotions Meta had wanted to avoid by keeping talk of war out of the workplace cannot be so easily suppressed.


    Paresh Dave, Vittoria Elliott


  • A Nonprofit Tried to Fix Tech Culture—but Lost Control of Its Own


    Allen, a data scientist, and Massachi, a software engineer, worked for nearly four years at Facebook on some of the uglier aspects of social media, combating scams and election meddling. They didn’t know each other but both quit in 2019, frustrated at feeling a lack of support from executives. “The work that teams like the one I was on, civic integrity, was being squandered,” Massachi said in a recent conference talk. “Worse than a crime, it was a mistake.”

    Massachi first conceived the idea of using expertise like that he’d developed at Facebook to drive greater public attention to the dangers of social platforms. He launched the nonprofit Integrity Institute with Allen in late 2021, after a former colleague connected them. The timing was perfect: Frances Haugen, another former Facebook employee, had just leaked a trove of company documents, catalyzing new government hearings in the US and elsewhere about problems with social media. The institute joined a new class of tech nonprofits such as the Center for Humane Technology and All Tech Is Human, started by people working in industry trenches who wanted to become public advocates.

    Massachi and Allen infused their nonprofit, initially bankrolled by Allen, with tech startup culture. Early staff with backgrounds in tech, politics, or philanthropy didn’t make much, sacrificing pay for the greater good as they quickly produced a series of detailed how-to guides for tech companies on topics such as preventing election interference. Major tech philanthropy donors collectively committed a few million dollars in funding, including the Knight, Packard, MacArthur, and Hewlett foundations, as well as the Omidyar Network. Through a university-led consortium, the institute got paid to provide tech policy advice to the European Union. And the organization went on to collaborate with news outlets, including WIRED, to investigate problems on tech platforms.

    To expand its capacity beyond its small staff, the institute assembled an external network of two dozen founding experts it could tap for advice or research help. The network of so-called institute “members” grew rapidly to include 450 people from around the world in the following years. It became a hub for tech workers ejected during tech platforms’ sweeping layoffs, which significantly reduced trust and safety, or integrity, roles that oversee content moderation and policy at companies such as Meta and X. Those who joined the institute’s network, which is free but involves passing a screening, gained access to part of its Slack community where they could talk shop and share job opportunities.

    Major tensions began to build inside the institute in March last year, when Massachi unveiled an internal document on Slack titled “How We Work” that barred use of terms including “solidarity,” “radical,” and “free market,” which he said come off as partisan and edgy. He also encouraged avoiding the term BIPOC, an acronym for “Black, Indigenous, and people of color,” which he described as coming from the “activist space.” His manifesto seemed to echo the workplace principles that cryptocurrency exchange Coinbase had published in 2020, which barred discussions of politics and social issues not core to the company, drawing condemnation from some other tech workers and executives.

    “We are an internationally-focused open-source project. We are not a US-based liberal nonprofit. Act accordingly,” Massachi wrote, calling for staff to take “excellent actions” and use “old-fashioned words.” At least a couple of staffers took offense, viewing the rules as backward and unnecessary. An institution devoted to taming the thorny challenge of moderating speech now had to grapple with those same issues at home.


    Paresh Dave


  • The Low-Paid Humans Behind AI’s Smarts Ask Biden to Free Them From ‘Modern Day Slavery’


    AI projects like OpenAI’s ChatGPT get part of their savvy from some of the lowest-paid workers in the tech industry—contractors often in poor countries paid small sums to correct chatbots and label images. On Wednesday, 97 African workers who do AI training work or online content moderation for companies like Meta and OpenAI published an open letter to President Biden, demanding that US tech companies stop “systemically abusing and exploiting African workers.”

    Most of the letter’s signatories are from Kenya, a hub for tech outsourcing, whose president, William Ruto, is visiting the US this week. The workers allege that the practices of companies like Meta, OpenAI, and data provider Scale AI “amount to modern day slavery.” The companies did not immediately respond to a request for comment.

    A typical workday for African tech contractors, the letter says, involves “watching murder and beheadings, child abuse and rape, pornography and bestiality, often for more than 8 hours a day.” Pay is often less than $2 per hour, it says, and workers frequently end up with post-traumatic stress disorder, a well-documented issue among content moderators around the world.

    The letter’s signatories say their work includes reviewing content on platforms like Facebook, TikTok, and Instagram, as well as labeling images and training chatbot responses for companies like OpenAI that are developing generative-AI technology. The workers are affiliated with the African Content Moderators Union, the first content moderators union on the continent, and a group founded by laid-off workers who previously trained AI technology for companies such as Scale AI, which sells datasets and data-labeling services to clients including OpenAI, Meta, and the US military. The letter was published on the site of the UK-based activist group Foxglove, which promotes tech-worker unions and equitable tech.

    In March, the letter and news reports say, Scale AI abruptly banned people based in Kenya, Nigeria, and Pakistan from working on Remotasks, Scale AI’s platform for contract work. The letter says that these workers were cut off without notice and are “owed significant sums of unpaid wages.”

    “When Remotasks shut down, it took our livelihoods out of our hands, the food out of our kitchens,” says Joan Kinyua, a member of the group of former Remotasks workers, in a statement to WIRED. “But Scale AI, the big company that ran the platform, gets away with it, because it’s based in San Francisco.”

    Though the Biden administration has frequently described its approach to labor policy as “worker-centered,” the African workers’ letter argues that this has not extended to them, saying “we are treated as disposable.”

    “You have the power to stop our exploitation by US companies, clean up this work and give us dignity and fair working conditions,” the letter says. “You can make sure there are good jobs for Kenyans too, not just Americans.”

    Tech contractors in Kenya have filed lawsuits in recent years alleging that tech-outsourcing companies and their US clients such as Meta have treated workers illegally. Wednesday’s letter demands that Biden make sure that US tech companies engage with overseas tech workers, comply with local laws, and stop union-busting practices. It also suggests that tech companies “be held accountable in the US courts for their unlawful operations aboard, in particular for their human rights and labor violations.”

    The letter comes just over a year after 150 workers formed the African Content Moderators Union. Meta promptly laid off all of its nearly 300 Kenya-based content moderators, workers say, effectively busting the fledgling union. The company is currently facing three lawsuits from more than 180 Kenyan workers, demanding more humane working conditions, freedom to organize, and payment of unpaid wages.

    “Everyone wants to see more jobs in Kenya,” Kauna Malgwi, a member of the African Content Moderators Union steering committee, says. “But not at any cost. All we are asking for is dignified, fairly paid work that is safe and secure.”


    Caroline Haskins


  • The Dark Side of Open Source AI Image Generators


    Whether through the frowning high-definition face of a chimpanzee or a psychedelic, pink-and-red-hued doppelganger of himself, Reuven Cohen uses AI-generated images to catch people’s attention. “I’ve always been interested in art and design and video and enjoy pushing boundaries,” he says—but the Toronto-based consultant, who helps companies develop AI tools, also hopes to raise awareness of the technology’s darker uses.

    “It can also be specifically trained to be quite gruesome and bad in a whole variety of ways,” Cohen says. He’s a fan of the freewheeling experimentation that has been unleashed by open source image-generation technology. But that same freedom enables the creation of explicit images of women used for harassment.

    After nonconsensual images of Taylor Swift recently spread on X, Microsoft added new controls to its image generator. Open source models can be commandeered by just about anyone and generally come without guardrails. Despite the efforts of some hopeful community members to deter exploitative uses, the open source free-for-all is near-impossible to control, experts say.

    “Open source has powered fake image abuse and nonconsensual pornography. That’s impossible to sugarcoat or qualify,” says Henry Ajder, who has spent years researching harmful use of generative AI.

    Ajder says that at the same time that it’s becoming a favorite of researchers, creatives like Cohen, and academics working on AI, open source image generation software has become the bedrock of deepfake porn. Some tools based on open source algorithms are purpose-built for salacious or harassing uses, such as “nudifying” apps that digitally remove women’s clothes in images.

    But many tools can serve both legitimate and harassing use cases. One popular open source face-swapping program is used by people in the entertainment industry and as the “tool of choice for bad actors” making nonconsensual deepfakes, Ajder says. High-resolution image generator Stable Diffusion, developed by startup Stability AI, claims more than 10 million users and ships with guardrails to prevent explicit image creation, as well as policies barring malicious use. But the company also open sourced a version of the image generator in 2022 that is customizable, and online guides explain how to bypass its built-in limitations.

    Meanwhile, smaller AI models known as LoRAs make it easy to tune a Stable Diffusion model to output images with a particular style, concept, or pose—such as a celebrity’s likeness or certain sexual acts. They are widely available on AI model marketplaces such as Civitai, a community-based site where users share and download models. There, one creator of a Taylor Swift plug-in has urged others not to use it “for NSFW images.” However, once downloaded, its use is out of its creator’s control. “The way that open source works means it’s going to be pretty hard to stop someone from potentially hijacking that,” says Ajder.

    4chan, the image-based message board site with a reputation for chaotic moderation, is home to pages devoted to nonconsensual deepfake porn made with openly available programs and AI models dedicated solely to sexual images, WIRED found. Message boards for adult images are littered with AI-generated nonconsensual nudes of real women, from porn performers to actresses like Cate Blanchett. WIRED also observed 4chan users sharing workarounds for NSFW images using OpenAI’s Dall-E 3.

    That kind of activity has inspired some users in communities dedicated to AI image-making, including on Reddit and Discord, to attempt to push back against the sea of pornographic and malicious images. Creators also express worry about the software gaining a reputation for NSFW images, encouraging others to report images depicting minors on Reddit and model-hosting sites.


    Lydia Morrish


  • X vs. EU: Elon Musk hit with probe over spread of toxic content


    Elon Musk just got an early, unwelcome Christmas present from Europe: the bloc’s first-ever investigation via its new social media law into X.

    The European Commission on Monday opened infringement proceedings under the Digital Services Act (DSA) into X, formerly known as Twitter, after the billionaire and his company were subjected to repeated claims they were not doing enough to stop disinformation and hate speech from spreading online.

    The four investigations focus on whether X has complied with rules to counter illegal content and disinformation, as well as rules on advertising transparency and data access for researchers. They will also scrutinize whether X misled its users by changing its so-called blue checks, which were initially launched as a verification tool but now serve as an indicator that a user is paying a subscription fee.

    “The Commission will carefully investigate X’s compliance with the DSA, to ensure European citizens are safeguarded online — as the regulation mandates,” Margrethe Vestager, the Commission’s executive vice president for digital policy, said in a statement.

    “We now have clear rules, ex-ante obligations, strong oversight, speedy enforcement and deterrent sanctions and we will make full use of our toolbox to protect our citizens and democracies,” said EU Internal Market Commissioner Thierry Breton. 

    “X remains committed to complying with the Digital Services Act, and is cooperating with the regulatory process,” Joe Benarroch, an X executive, said in an email to POLITICO.

    The investigations, which do not amount to findings of wrongdoing, open a monthslong probe that could lead to fines of up to 6 percent of a company’s global revenue.

    The rulebook, which started applying in late August, represents the most widespread attempt by any region or country in the Western world to hold social media companies to account for what is posted on their platforms. That includes lengthy risk assessments and outside audits to prove to regulators these companies are clamping down on illegal content like hate speech.

    The Commission, which enforces the DSA on 19 so-called Very Large Online Platforms, or VLOPs, has already taken preliminary steps like requests for information against several other social media networks including Instagram, Facebook, TikTok, YouTube and Snapchat. The focus has been on how they handle illegal content, combat disinformation and protect minors. 

    While Europe’s new social media rules only came into full force in late summer, X has been squarely on Brussels’ radar.

    Musk fired half of the company’s employees — including almost all of its trust and safety team — in November 2022. That included many of the company’s European Union-focused policy jobs, either in Brussels or in Dublin, where the company has its EU headquarters.

    The social networking giant also pulled out of the EU’s code of practice on disinformation in May, an industry pledge coordinated by the Commission that will soon serve as a part of the bloc’s DSA rules. 

    Musk publicly committed X to complying with the bloc’s DSA rules, though he remains a vocal advocate for almost unfettered free speech rights for people who use his platform.

    Yet it was after Hamas militants attacked Israel on October 7 that Commission regulators upped their attention, according to four officials with direct knowledge of the matter who were granted anonymity to discuss internal discussions. Part of the investigations, linked to potentially illegal content, resulted from posts associated with the ongoing Middle East war.

    In the days and weeks following the Middle East attack, X was flooded with often gruesome images of suspected beheadings — often with few, if any, removals by the tech giant. Repeated requests for information from the company went unanswered, while discussions with X representatives, including at meetings in San Francisco with X engineers in the summer, often left Commission officials unsatisfied, according to two of the individuals who spoke to POLITICO.

    The company was the first to receive a request for information from the Commission in October about how it has tackled problematic content like graphic illegal content and disinformation linked to Hamas’ attack on Israel.

    The Commission on Monday said it would investigate whether X had respected its obligation to quickly remove illegal content once flagged, including “in light of X’s content moderation resources.” It said it would also examine whether X’s so-called community notes, a crowdsourced fact-checking program, and its policies to limit risks to election integrity complied with the DSA.

    Brussels will also review whether X’s so-called blue checks, markers that can be bought by accounts to show they have been verified, could trick users into thinking blue check-holding accounts are more trustworthy. Regulators will similarly look into changes to how outsiders could analyze X’s data after the company replaced free access to this data with a paid version that costs up to $240,000 (€220,000) a month. X’s mandatory publicly accessible library of ads that ran on its platform will also be part of the investigations. 

    The investigations could lead to a range of outcomes in the coming months, from a sweeping fine to orders imposing specific measures, as well as commitments from X to make changes.

    “It is important that this process remains free of political influence and follows the law,” added Benarroch, the X executive. “X is focused on creating a safe and inclusive environment for all users on our platform, while protecting freedom of expression, and we will continue to work tirelessly toward this goal.”

    This article was updated to include new details.

    Clothilde Goujard and Mark Scott

  • Twitch's new nudity policy allows illustrated nipples, but not human underboob | TechCrunch

    Twitch announced sweeping updates to its sexual content policy and content classification system, which now allows previously prohibited content like illustrated nipples and “erotic dances,” in addition to clarifying what nudity is and isn’t allowed on the platform.

    The update follows the widespread “topless meta” backlash, after streamer and OnlyFans model Morgpie went viral for appearing naked in recent streams. Morgpie’s “topless” streams were framed to show her bare shoulders, upper chest, and cleavage. The framing implied nudity, but never actually showed content that explicitly violated Twitch’s sexual content policies. Other streamers, who were predominantly male, were enraged by Morgpie’s content and called for Twitch to crack down on the apparent nudity. She was banned on Dec. 11, two days before Twitch’s content guideline overhaul. Jessica Ly, a streamer who also goes by asianbunnyx, has made similar content without being banned.

    The new policy is meticulously detailed and accounts for various situations, but also appears to contradict itself. Cartoon boobs, for example, are only allowed in certain contexts.

    “Fictionalized” depictions — drawings, animations or sculpted renderings — of fully exposed breasts, and of butts or genitals regardless of gender, are fine, but “augmented reality avatars that translate real-life movement into digital characters” (read: VTubers) must abide by the same attire requirements as regular streamers. Actual female-presenting human nipples must be covered. Cleavage is still “unrestricted.” Showing “underbust” is still forbidden.

    Twitch’s stance on sideboob remains unclear.

    A spokesperson for Twitch told TechCrunch that the platform has been overhauling its content moderation for the past year, and has focused on updating its community guidelines in response to feedback from streamers. By clarifying what is and isn’t allowed, Twitch believes that it’ll be easier for streamers to comply with its policies. The spokesperson also noted that the platform is still experimenting with nuance and context, and rather than lean on punitive content moderation, Twitch wants users to be informed.

    The update is supposed to streamline the platform’s approach to sexual content and modernize its previous policies, which disproportionately penalized female streamers. Twitch previously enforced separate policies for “sexually suggestive” and “sexually explicit” content, adding to the confusion. Those have now been consolidated into a single Sexual Content Policy. The company’s Content Classification Labels (CCLs), which rolled out in June, also now detail when streamers should label their content for “Sexual Themes.”

    “We believe that accurate content labeling is key to helping viewers get the experience they expect, and now that we can enable appropriate labeling of sexual content using CCLs we believe that some of the restrictions in our former policies are no longer required,” Twitch said in its blog post about the update. “In addition to providing clarity, these updates will also reduce the risk of inconsistent enforcement and bring our policy more in line with other social media services.”

    Under the new policy, streams tagged for “drugs, intoxication or excessive tobacco use,” “violent and graphic depictions,” “gambling” and “sexual themes” won’t be promoted in Twitch’s homepage recommendations, but the change permits more raunchy content than the platform previously allowed. This approach, Twitch said in its blog post, will prevent viewers from seeing content that they haven’t consented to seeing. Viewers will still be able to navigate directly to the channels streaming such content, though. Streams tagged for mature games and profanity can still be included in homepage recommendations.

    Twitch did not immediately respond to TechCrunch’s request for comment about whether labeling their streams as containing such content will affect streamers’ ad revenue.

    If properly labeled, content that was previously banned on the platform is now allowed, like artistic depictions of breasts, butts and genitals. The puritanical restrictions on suggestive illustrations became a point of contention for Twitch’s art community, which Twitch acknowledged in its blog post. “Erotic dances,” like strip teases, twerking, grinding and pole dancing, are also fine to stream, as long as the stream is labeled. Streaming from a strip club or other “adult entertainment establishment” is still prohibited.

    The updates appear to respond to longstanding community complaints over the disproportionate moderation that female streamers faced on Twitch. The company attempted to crack down on lewd and sexually explicit streams by enacting a dress code in 2018, which stated that streamer attire should be “appropriate for a public street, mall, or restaurant.” The platform updated its attire policy in 2020 with specific guidelines clarifying that streamers could show cleavage, but not nipples or underboob.

    Although wildly popular hot tub streams were allowed under the guidelines, as long as streamers wore swimsuits, the attire policy still targeted women for wearing anything that could be interpreted as suggestive. Countless female streamers have been subjected to suspensions and outright bans over viewers mass-reporting them for inappropriate attire, and many have complained that the platform’s policy was wielded as a form of misogynistic, targeted harassment.

    Twitch previously prohibited streams that “deliberately highlighted breasts, buttocks or pelvic region,” even if streamers were fully clothed. The parameters for such content were vague and inconsistently enforced. It’s now allowed — as long as it’s properly labeled.

    “Streamers found it difficult to determine what was prohibited and what was allowed and often evaluating whether or not a stream violated this portion of the policy was subjective,” Twitch said in its announcement. “In addition, the former Sexually Suggestive Content policy was out of line with industry standards and resulted in female-presenting streamers being disproportionately penalized.”

    In its Sexual Content Policy, Twitch notes that the attire allowed on the platform depends on the context of individual streams. An outfit that’s permitted for a beach or gym stream, Twitch said in its Community Guidelines, may “not be acceptable for a cooking or gameplay broadcast.” The company also said that attire “intended to be sexually suggestive” is still prohibited, which could still disproportionately affect female streamers, who can be sexualized by viewers no matter what they wear.

    Morgpie, who is still banned, praised Twitch’s update in a statement to Dexerto.

    “With the updated terms of service, content on Twitch containing mature themes will be allowed but no longer pushed on the homepage of the site,” she said. “I think this is the best possible outcome, because it gives creators much more freedom, while also keeping this content from reaching the wrong audience. Bravo, Twitch!”

    Morgan Sung

  • Meta has a moderation bias problem, not just a ‘bug,’ that’s suppressing Palestinian voices | TechCrunch

    Earlier this year, Palestinian-American filmmaker Khitam Jabr posted a handful of Reels about her family’s trip to the West Bank. In the short travel vlogs, Jabr shared snippets of Palestinian culture, from eating decadent meals to dancing at her niece’s wedding. 

    “I hadn’t been in a decade, so it’s just like, life abroad,” Jabr told TechCrunch. But then, she noticed something odd happening with her account. “I would get [anti-Palestine] comments,” she recalled. “And I couldn’t respond [to them] or use my account for 24 hours. I wasn’t even posting anything about the occupation. But fast forward to now and the same shit’s happening.” 

    In the aftermath of Hamas’ attack on Israelis, Israel’s retaliatory airstrikes and total blockade — cutting access to electricity, water and vital supplies — have devastated Gaza. In response to the escalating violence, Meta said that it is closely monitoring its platforms for violations and may inadvertently flag certain content, but it never intends to “suppress a particular community or point of view.” Content praising or supporting Hamas, which governs Gaza and is designated as a terrorist organization by the United States and the European Union, is expressly forbidden on Meta’s platforms. 

    As the humanitarian crisis in Gaza grows more dire, many social media users suspect Instagram of censoring content about the besieged Palestinian territory, even if that content doesn’t support Hamas. Users have also complained that they’ve been harassed and reported for posting content about Palestine, regardless of whether or not it violates Meta’s policies. Jabr, for example, suspects that Instagram restricted her for 24 hours because other users reported her Palestine travel videos. Most recently, Instagram users accused Meta of “shadowbanning” their Stories about Palestine. 

    It’s the latest in a lengthy history of incidents on Meta platforms that reflect an inherent bias against Palestinian users in its processes, as documented by years of complaints from both inside and outside the company. The company may not intentionally suppress specific communities, but its moderation practices often disproportionately affect Palestinian users. 

    For instance, Meta struggles to navigate the cultural and linguistic nuances of Arabic, a language with over 25 dialects, and has been criticized for neglecting to adequately diversify its language resources. The company’s black-and-white policies often preclude it from effectively moderating any nuanced topic, like content that discusses violence without condoning it. Advocacy groups have also raised concerns that Meta’s partnerships with government agencies, such as the Israeli Cyber Unit, politically influence the platform’s policy decisions. 

    During the last violent outbreak between Hamas and Israel in 2021, a report commissioned by Meta and conducted by a third party concluded that the company’s actions had an “adverse human rights impact” on Palestinian users’ right to freedom of expression and political participation.

    The belief that Meta shadowbans, or limits the visibility of, content about Palestine is not new. In an Instagram Story last year, supermodel and activist Bella Hadid, who is of Palestinian descent, alleged that Instagram “disabled” her from posting content on her Story “pretty much only when it is Palestine based.” She said she gets “immediately shadowbanned” when she posts about Palestine, and her Story views drop by “almost 1 million.” 

    Meta blamed technical errors for the removal of posts about Palestine during the 2021 conflict. When reached for comment about these recent claims of shadowbanning, a representative for the company pointed TechCrunch to a Threads post by Meta communications director Andy Stone. 

    “We identified a bug impacting all Stories that re-shared Reels and Feed posts, meaning they weren’t showing up properly in people’s Stories tray, leading to significantly reduced reach,” Stone said. “This bug affected accounts equally around the globe and had nothing to do with the subject matter of the content — and we fixed it as quickly as possible.” 

    But many are frustrated that Meta continues to suppress Palestinian voices. Leen Al Saadi, a Palestinian journalist currently based in Jordan and host of the podcast “Preserving Palestine,” said she is used to “constantly being censored.” Her Instagram account was restricted last year after she posted a trailer for the podcast’s first episode, which discussed a documentary about Palestinian street art under occupation. 

    “Palestinians are currently undergoing two wars,” Al Saadi said. “The first is with their legal occupier. The second war is with the entire Western media landscape, and when I say the entire landscape, I mean social media.” 

    Meta’s alleged shadowbanning

    Instagram users accuse Meta of suppressing more than just Stories related to Palestine. 

    Creators say engagement on their posts tanked specifically after they publicly condemned Israel’s response to the Hamas attack as excessively violent. Some, like Jabr, say they were restricted from posting or going live, while others say Instagram flagged their content as “sensitive,” limiting its reach. Users also allege their posts were flagged as “inappropriate” and removed, even if the content adhered to Instagram’s Community Guidelines.

    Meta’s representative didn’t address the other accusations of censorship beyond just Story visibility and did not respond to TechCrunch’s follow-up questions. It’s unclear if this “bug” impacted accounts posting content unrelated to Gaza. Instagram users have posted screenshots showing that Stories about Palestine have received significantly fewer views than other Stories posted on the same day, and allege that their view counts went back up when they posted content unrelated to the conflict. 

    A user based in Egypt, who asked to stay anonymous for fear of harassment, said her posts usually get around 300 views, but when she started posting pro-Palestine content after the Hamas attack earlier this month, her stories would only get one to two views. 

    “It happened to all my friends, too,” she continued. “Then we noticed that posting a random pic would get higher views. So by posting a random pic, then a pro-Palestine post, would increase the views.” 

    Another Instagram user based in the United Kingdom, who also asked to stay anonymous out of fear of harassment, said that his view count returned to normal when he posted a cat photo. 

    “My stories went from 100s of views to zero or a handful,” he said. “I’ve had to post intermittent non-Gaza content in order to ‘release’ my stories to be viewed again.” 

    It isn’t just Stories. The Arab Center for Social Media Advancement (7amleh), which documents cases of Palestinian digital rights violations and works directly with social media companies to appeal violations, told TechCrunch it has received reports of Instagram inconsistently filtering comments containing the Palestinian flag emoji. Users report that Instagram has flagged comments containing the emoji as “potentially offensive,” hiding the comment. Meta did not respond to follow-up requests for comment.   

    The organization has also received countless reports of Meta flagging and restricting Arabic content, even if it’s posted by news outlets. Jalal Abukhater, 7amleh’s advocacy manager, said that the organization has documented multiple cases of journalists on Instagram reporting the same news in Arabic, Hebrew and English, but only getting flagged for their Arabic content. 

    “It’s literally journalistic content, but the same wording in Hebrew and English does not get restricted,” Abukhater said. “As if there’s better moderation for those languages, and more careless moderation for Arabic content.” 

    And as The Intercept reported, Instagram and Facebook are flagging images of the al-Ahli Hospital, claiming that the content violates Meta’s Community Guidelines on nudity or sexual activity.

    The Community Guidelines are enforced inconsistently, particularly when it comes to content related to Palestine. Al Saadi recently tried to report a comment that said she should be “raped” and “burned alive” — left in response to her comment on a CNN post about the conflict — but in screenshots reviewed by TechCrunch, Instagram said that it didn’t violate the platform’s Community Guidelines against violence or dangerous organizations. 

    “The restrictions on content, especially the content that relates to Palestine, is heavily politicized,” Abukhater said. “It feeds into the bias against Palestinian narrative genuinely. It really takes the balance against Palestinians in a situation where there’s a huge asymmetry of power.”

    A history of suppression

    Content about Palestine is disproportionately scrutinized, as demonstrated during the last severe violent outbreak between Hamas and Israel two years ago. Amid the violence following the May 2021 court ruling to evict Palestinian families from Sheikh Jarrah, a neighborhood in occupied East Jerusalem, users across Facebook and Instagram accused Meta of taking down posts and suspending accounts that voiced support for Palestinians. 

    The digital rights nonprofit Electronic Frontier Foundation (EFF) described Meta’s actions in 2021 as “systemic censorship of Palestinian voices.” In its 2022 report on Palestinian digital rights, 7amleh said that Meta is “still the most restricting company” compared to other social media giants in the extent of its moderation of the Palestinian digital space.

    Meta forbids support of terrorist organizations, like most social media companies based in the U.S., but struggles to moderate content around it, from user discourse to journalistic updates. This policy, along with the company’s partnership with Israel to monitor posts that incite violence, complicates things for Palestinians living under Hamas’ governance. As EFF points out, something as simple as Hamas’ flag in the background of an image can result in a strike. 

    Jillian York, the director for international freedom of expression for EFF, blames automation and decisions made by “minimally trained humans” for the inconsistency. Meta’s zero tolerance policy and imprecise enforcement often suppress content from or about conflict zones, she said. The site’s moderation issues have negatively affected multiple non-English speaking regions, including Libya, Syria and Ukraine. 

    “These rules can prevent people from sharing documentation of human rights violations, documentation of war crimes, even just news about what’s happening on the ground,” York continued. “And so I think that is what is the most problematic right now about that particular rule, and the way that it’s enforced.” 

    Over the 13 days leading up to the ceasefire between Hamas and Israel, 7amleh documented more than 500 reports of Palestinian “digital rights violations,” including the removal and restriction of content, hashtags and accounts related to the conflict. 

    Meta attributed some of the instances of perceived censorship to technical issues, like one that prevented users in Palestine and Colombia from posting Instagram Stories. It attributed others to human error, like blocking the hashtag for Al-Aqsa Mosque, the holy site where Israeli police clashed with Ramadan worshippers, because it was mistaken for a terrorist organization. The company also blocked journalists in Gaza from WhatsApp without explanation.

    The same month, a group of Facebook employees filed internal complaints accusing the company of bias against Arab and Muslim users. In internal posts obtained by BuzzFeed News, an employee attributed the bias to “years and years of implementing policies that just don’t scale globally.” 

    At the recommendation of its Oversight Board, Meta conducted a third-party due diligence report about the platform’s moderation during the May 2021 conflict. The report found that Arabic content was flagged as potentially violating at significantly higher rates than Hebrew content was, and was more likely to be erroneously removed. The report noted that Meta’s moderation system may not be as precise for Arabic content as it was for Hebrew content, because the latter is a “more standardized language,” and suggested that reviewers may lack the linguistic and cultural competence to understand less common Arabic dialects like Palestinian Arabic. 

    Has anything improved?

    Meta committed to implementing policy changes based on the report’s recommendations, such as updating its keywords associated with dangerous organizations, disclosing government requests to remove content and launching a hostile speech classifier for Hebrew content. Abukhater added that Meta has improved its response to harassment, at least in comparison to other social media platforms like X (formerly Twitter). Although harassment and abuse are still rampant on Instagram and Facebook, he said, the company has been responsive to suspending accounts with patterns of targeting other users. 

    The company has also made more contact with regional Palestinian organizations since 2021, York added, but it’s been slow to implement recommendations from EFF and other advocacy groups. It’s “very clear” that Meta is not putting the same resources behind Arabic and other non-English languages, York said, compared to the attention Meta gives to countries that exert the most regulatory pressure. Moderation of English and other European languages tends to be more comprehensive, for example, because the EU enforces the Digital Services Act.

    In Meta’s response to the report, Miranda Sissons, the company’s director of human rights, said that Meta was “assessing the feasibility” of reviewing Arabic content by dialect. Sissons said that the company has “large and diverse teams” who understand “local cultural context across the region,” including in Palestine. Responding to the escalating violence earlier this month, Meta stated that it established a “special operations center” staffed with fluent Hebrew and Arabic speakers to closely monitor and respond to violating content. 

    Despite Meta’s apparent efforts to diversify its language resources, Arabic is still disproportionately flagged as violating — like in the case of journalists reporting news in multiple languages. 

    “The balance of power is very fixed, in reality, between Israelis and Palestinians,” Abukhater said. “And this is something that today is reflected heavily on platforms like Meta, even though they have human rights teams releasing reports and trying to improve upon their policies. Whenever an escalation like the one we’re experiencing now happens, things just go back to zero.”

    And at times, Meta’s Arabic translations are completely inaccurate. This week, multiple Instagram users raised concerns over the platform mistranslating the relatively common Arabic phrase “Alhamdulillah,” or “Praise be to God.” In screen recordings posted online, users found that if they included “Palestinian” and the corresponding flag emoji in their Instagram bio along with the Arabic phrase, Instagram automatically translated their bio to “Palestinian terrorists – Praise be to Allah” or “Praise be to God, Palestinian terrorists are fighting for their freedom.” When users removed “Palestinian” and the flag emoji, Instagram translated the Arabic phrase to “Thank God.” Instagram users complained that the offensive mistranslation was active for hours before Meta appeared to correct it.

    Shayaan Khan, a TikTok creator who posted a viral video about the mistranslation, told TechCrunch that Meta’s lack of cultural competence isn’t just offensive, it’s dangerous. He said that the “glitch” can fuel Islamophobic and racist rhetoric, which has already been exacerbated by the war in Gaza. Khan pointed to the fatal stabbing of Wadea Al-Fayoume, a Palestinian-American child whose death is being investigated as a hate crime.

    Meta did not respond to TechCrunch’s request for comment about the mistranslation. Abukhater said that Meta told 7amleh that a “bug” caused the mistranslation. In a statement to 404 Media, a Meta spokesperson said that the issue had been fixed. 

    “We fixed a problem that briefly caused inappropriate Arabic translations in some of our products,” the statement said. “We sincerely apologize that this happened.”

    As the war continues, social media users have tried to find ways around the alleged shadowbanning on Instagram. Supposed loopholes include misspelling certain words, like “p@lestine” instead of “Palestine,” in hopes of bypassing any content filters. Users also share information about Gaza in text superimposed over unrelated images, like a cat photo, so it won’t be flagged as graphic or violent content. Creators have tried to include an emoji of the Israeli flag or tag their posts and Stories with #istandwithisrael, even if they don’t support the Israeli government, in hopes of gaming engagement. 

    Al Saadi said that her frustration with Meta is common among Palestinians, both in occupied territories and across the diaspora. 

    “All we’re asking for is to give us the exact same rights,” she said. “We’re not asking for more. We’re literally just asking Meta, Instagram, every single broadcast channel, every single media outlet, to just give us the respect that we deserve.” 

    Dominic-Madori Davis contributed to this story’s reporting.

    Morgan Sung

  • Hamas hate videos make Elon Musk Europe’s digital enemy No. 1

    Elon Musk has made himself Europe’s digital public enemy No. 1.

    Since Hamas attacked Israel on Saturday, the billionaire’s social network X has been flooded with gruesome images, politically motivated lies and terrorist propaganda that authorities say appear to violate both its own policies and the European Union’s new social media law.

    Now Musk is facing the threat of sanctions — including potentially hefty fines — as officials in Brussels start gathering evidence in preparation for a formal investigation into whether X has broken the European Union’s rules. Authorities in the U.K. and Germany have joined the criticism.

    The tussle represents a critical test for all sides. Musk will be keen to fight any claim that he’s failing to be a responsible owner of the social network formerly known as Twitter — all while upholding his commitment to free speech. The EU will want to show its new regulation, known as the Digital Services Act (DSA), has teeth.

    Thierry Breton, Europe’s commissioner in charge of social media content rules, demanded that Musk explain why graphic images and disinformation about the Middle East crisis were widespread on X.

    “I urge you to ensure a prompt, accurate and complete response to this request within the next 24 hours,” Breton wrote on X late Tuesday.

    “We will include your answer in our assessment file on your compliance with the DSA,” said Breton, who also wrote to Meta’s Mark Zuckerberg to remind him of his obligations under Europe’s rules. TikTok’s head Shou Zi Chew was also asked on October 12 to explain how his platform was dealing with misinformation and graphic content.

    “I remind you that following the opening of a potential investigation and a finding of non-compliance, penalties can be imposed,” Breton said. Those fines can total up to 6 percent of a company’s global revenue.

    In response, Linda Yaccarino, X’s chief executive, wrote to Breton Thursday to outline how the social media giant had responded to the ongoing Middle East conflict. That included removing or labelling potentially harmful content, working with law enforcement agencies and adding so-called “community notes,” or crowd-sourced fact-checks, to posts.

    The heat on Twitter did not begin with the Hamas attacks. Ever since Musk bought the platform, he’s been hit by criticism that he’s failing to stop hate speech from spreading online.

    X has cut back on its content moderation teams, in the spirit of promoting free speech; pulled out of a Brussels-backed pledge to tackle digital foreign interference; and tweaked its social media algorithms to promote often shady content over verified material from news organizations and politicians.

    Musk has responded — via his social media account with 159 million followers — with jeers and attacks on his naysayers. But the latest uproar over content apparently inciting and praising terrorism has made it a surefire bet that X will be one of the first companies to be investigated under the EU’s social media rules.

    In response to Breton’s demand, Musk asked the French commissioner to outline how X had potentially violated Europe’s content regulations. “Our policy is that everything is open source and transparent,” he added. In the U.K., Michelle Donelan, the country’s digital minister, also met with social media executives Wednesday to discuss how their firms were combatting online hate speech.

    The probe is coming

    In truth, an investigation into X’s compliance with Europe’s new content rulebook has been on the cards for months. Over the summer, Breton and senior EU officials visited the company’s headquarters in San Francisco for a so-called “stress test” to see how it was complying.

    Under the EU’s legislation, tech giants like X, TikTok and Facebook must carry out lengthy risk assessments to figure out how hate speech and other illegal content can spread on their platforms. These firms must also allow greater access to external auditors, regulators and civil society groups that will track how social media companies are complying with the new oversight.

    Investigations into potential wrongdoing under Europe’s content rules will likely involve months-long inquiries into a company’s behavior, the Commission taking a legal decision on whether to levy fines or other sanctions, and a likely appeal from the firm in response. Such cases are expected to take years to complete.

    Within Brussels, the Commission has been compiling evidence of potential wrongdoing across multiple social media companies, even before the EU’s new content legislation came into full force in August, according to five officials and other individuals with direct knowledge of the matter.

    The goal is to start at least three investigations linked to the Digital Services Act by early next year, according to three of those people. They spoke on condition of anonymity because the discussions are not public and remain ongoing.

    In recent days, Commission officials have been compiling evidence associated with Hamas’ attacks on Israel — much of which has been shared on X with little, if any, pushback from the company.

    That content included verified X accounts with ties to Russia and Iran reposting graphic footage of alleged atrocities targeting Israeli soldiers. Some of these posts have been viewed hundreds of thousands of times. Other accounts linked to Hezbollah and ISIS have similarly posted widely with few, if any, removals.

    It is unclear whether such footage will lead to a specific investigation into X’s handling of the most recent violent content. But it has reinforced the likelihood that Musk will soon face legal consequences for not removing such material from his social network.

    Combating violent and terrorist content requires “people sitting at a computer screen and looking at this and making judgments,” said Graham Brookie, senior director of the Atlantic Council’s Digital Forensic Research Lab, which has tracked the online footprint of Hamas’ ongoing attacks. “It used to be that there were dozens of people that do that at Twitter, and now there’s only a handful.”

    Steven Overly contributed reporting from Washington. This article has been updated.

    Mark Scott

  • Graphic videos of Hamas attacks spread on X


    Videos and images of mass shootings, kidnapped civilians and soldiers, and other violence linked to Hamas’ attack on Israel are being widely shared on X, formerly known as Twitter, in violation of the company’s own rules against inciting violence.

    POLITICO’s review of Elon Musk’s social media platform in the wake of Hamas’ attacks, which began on October 7, discovered scores of videos that allegedly showed militants murdering civilians and Israeli soldiers; viral hashtags associated with the ongoing violence that praised Hamas’ activities; and social media posts that included graphic pictures of those killed and antisemitic hate speech.

    Such extremist material was also accessible on other social media platforms, most notably on Telegram. But terrorist-related content circulated far more widely on X than on the others, according to analysis by POLITICO and two outside researchers who independently reviewed the tech companies’ response to the Middle East crisis.

    “There is a huge prevalence of extremely graphic violent material on X,” said Adam Hadley, director of Tech Against Terrorism, a nonprofit organization that works with social media platforms and governments to combat how terrorist organizations spread their propaganda online. “This doesn’t appear to be the same on other large platforms.”

    Hadley and Moustafa Ayad, executive director for Africa, the Middle East and Asia for the Institute for Strategic Dialogue, a think tank that tracks online extremism, reviewed how graphic content tied to the unfolding violence spread across social media.

    A representative for X did not respond to a request for comment. The company’s internal rules say users cannot promote violent acts or share propaganda related to terrorist activities. “There is no place on X for violent and hateful entities,” the firm’s policy says.

    Under the European Union’s new social media rules, known as the Digital Services Act, large social media platforms like X also must combat the spread of hate speech — including content related to terrorist groups — or face fines of up to 6 percent of annual global revenue. Musk said X would comply with the 27-country bloc’s rules despite the billionaire’s free speech ethos and the firing of much of X’s global content moderation team.

    Yet in the days following Hamas’ widespread attacks on Israel, which have left hundreds of people dead, POLITICO easily found graphic images and videos on X in violation of both the EU and X’s separate rules.

    The content included grainy footage of militants gunning down Israeli soldiers, other social media posts of alleged Hamas fighters desecrating the bodies of victims, and videos of beheadings that, while promoted as taken from the most recent attacks, had, in fact, been reused from earlier jihadi violence in Syria.

    Hamas-related hashtags praising the ongoing violence had also begun to trend across X, even though much of that content included graphic imagery or promoted terrorist attacks in violation of X’s own terms of service, based on POLITICO’s review of the social media platform.

    While such gruesome material is outlawed under all the tech companies’ internal policies, these firms’ executives and European regulators still find themselves in a difficult position when deciding how to respond to the ongoing conflict in the Middle East.

    Alongside the graphic violence shared online, people across the world have similarly taken to social media to voice their support for different sides of the conflict. Much of this content represents political speech and does not meet the threshold of promoting terrorism. With the violence spreading, tech giants’ content moderation teams and regulators must draw a fine line between what represents legitimate speech and what veers into jihadi propaganda.

    The lack of moderation tools and verification systems, particularly on X, also could lead to further offline violence — both inside and outside Israel. 

    Graham Brookie, senior director of the Atlantic Council’s Digital Forensic Research Lab, which tracks online misinformation, said he had already seen spikes in antisemitism and Islamophobia correlated directly to Hamas attacks in Israel. 

    “Those [social media] platforms are already trending towards more hate speech, and this is going to exacerbate that problem even more,” he said.

    Rebecca Kern contributed reporting.

    Mark Scott