A federal judge’s decision this week to restrict the government’s communication with social media platforms could have broad side effects, according to researchers and groups that combat hate speech, online abuse and disinformation: It could further hamper efforts to curb harmful content.

Alice E. Marwick, a researcher at the University of North Carolina at Chapel Hill, was one of several disinformation experts who said on Wednesday that the ruling could impede work meant to keep false claims about vaccines and voter fraud from spreading.

The order, she said, followed other efforts, largely from Republicans, that are “part of an organized campaign pushing back on the idea of disinformation as a whole.”

Judge Terry A. Doughty granted a preliminary injunction on Tuesday, saying the Department of Health and Human Services and the Federal Bureau of Investigation, along with other parts of the government, must stop corresponding with social media companies for “the purpose of urging, encouraging, pressuring or inducing in any manner the removal, deletion, suppression or reduction of content containing protected free speech.”

The ruling stemmed from a lawsuit by the attorneys general of Louisiana and Missouri, who accused Facebook, Twitter and other social media sites of censoring right-leaning content, sometimes in league with the government. They and other Republicans cheered the judge’s move, in U.S. District Court for the Western District of Louisiana, as a win for the First Amendment.

Several researchers, however, said the government’s work with social media companies was not an issue as long as it didn’t coerce them to remove content. Instead, they said, the government has historically notified companies about potentially dangerous messages, like lies about election fraud or misleading information about Covid-19. Most misinformation or disinformation that violates social platforms’ policies is flagged by researchers, nonprofits, or people and software at the platforms themselves.

“That’s the really important distinction here: The government should be able to inform social media companies about things that they feel are harmful to the public,” said Miriam Metzger, a communication professor at the University of California, Santa Barbara, and an affiliate of its Center for Information Technology and Society.

A larger concern, researchers said, is a potential chilling effect. The judge’s decision blocked certain government agencies from communicating with some research organizations, such as the Stanford Internet Observatory and the Election Integrity Partnership, about removing social media content. Some of those groups have already been targeted in a Republican-led legal campaign against universities and think tanks.

Their peers said such stipulations could dissuade younger scholars from pursuing disinformation research and intimidate donors who fund crucial grants.

Bond Benton, an associate communication professor at Montclair State University who studies disinformation, described the ruling as “a bit of a potential Trojan horse.” It is limited on paper to the government’s relationship with social media platforms, he said, but it carries a message that misinformation qualifies as speech and its removal as the suppression of speech.

“Previously, platforms could simply say we don’t want to host it: ‘No shirt, no shoes, no service,’” Dr. Benton said. “This ruling will now probably make platforms a little bit more cautious about that.”

In recent years, platforms have relied more heavily on automated tools and algorithms to spot harmful content, limiting the effectiveness of complaints from people outside the companies. Academics and anti-disinformation organizations often complained that platforms were unresponsive to their concerns, said Viktorya Vilk, the director for digital safety and free expression at PEN America, a nonprofit that supports free expression.

“Platforms are very good at ignoring civil society organizations and our requests for help or requests for information or escalation of individual cases,” she said. “They are less comfortable ignoring the government.”

Several disinformation researchers worried that the ruling could give cover for social media platforms, some of which have already scaled back their efforts to curb misinformation, to be even less vigilant before the 2024 election. They said it was unclear how relatively new government initiatives that had fielded researchers’ concerns and suggestions, such as the White House Task Force to Address Online Harassment and Abuse, would fare.

For Imran Ahmed, the chief executive of the Center for Countering Digital Hate, the decision on Tuesday underscored other issues: the United States’ “particularly fangless” approach to dangerous content compared with places like Australia and the European Union, and the need to update rules governing social media platforms’ liability. The ruling on Tuesday cited the center as having delivered a presentation to the surgeon general’s office about its 2021 report on online anti-vaccine activists, “The Disinformation Dozen.”

“It’s bananas that you can’t show a nipple on the Super Bowl but Facebook can still broadcast Nazi propaganda, empower stalkers and harassers, undermine public health and facilitate extremism in the United States,” Mr. Ahmed said. “This court decision further exacerbates that feeling of impunity social media companies operate under, despite the fact that they are the primary vector for hate and disinformation in society.”

Tiffany Hsu and Stuart A. Thompson
