The old saying goes, "Sticks and stones may break my bones, but words will never hurt me." It turns out that when those words are propelled by online outrage algorithms, they can be every bit as dangerous as the proverbial sticks and stones.

Boundless, a new group combating Jew-hatred, and the Network Contagion Research Institute, which specializes in cyber-social threat identification and forecasting, traced the online discourse that emerged during the last major flare-up in fighting between Israel and Hamas in Gaza, in May 2021, to understand its impact here in the United States.

We found that as the fighting intensified, words co-opted from the human rights discourse were weaponized to vilify and stigmatize Israel and Jews. The words "apartheid," "colonialism," and "settler" reached their highest levels ever on Twitter. As Israel responded to Hamas rockets, the word "Jew" became more closely associated with the words "white" and "oppressor" on social media.

Photo: A man holds up an antisemitic sign on a crowded midtown Manhattan street. (Andrew Lichtenstein/Corbis via Getty Images)

Here in the U.S., we found that in the very same places where Jews were intimidated or attacked, the term "apartheid" had surged on Twitter. Much has already been written on the harm that emanates from the political far right. Our research strongly suggests that weaponized language—that is, language appropriated from the human rights discourse and unleashed on Israel and Jews—is contributing to real-world harm against Jews in America.

One awful legacy of 2022 is the skyrocketing rise of online hate, culminating in a flood of social media posts by Kanye West, now known as "Ye," peddling antisemitic tropes, denying the Holocaust, and praising Hitler. While shocking, it shouldn't be surprising. According to the FBI, as of November 2022, Jews, who make up just 2.4 percent of the U.S. population, were the targets of 63 percent of religious hate crimes, up from 54.9 percent in 2020.

As the late Rabbi Jonathan Sacks observed, "The hate that begins with Jews never ends with Jews." Troves of publicly available social media postings illuminate how online hate circulates and replicates in much the same way a virus infects healthy bodies. Hateful content is reaching audiences primed to engage in violence against Jews and members of other historically marginalized groups, including Asians, Muslims, Hindus, and Blacks.

Research undertaken by the NCRI showed a similar pattern of online hate exploding into real-world violence following last summer's Asia Cup cricket match between India and Pakistan. Vitriol spread like wildfire online, including posts characterizing Hindus as Nazis and global oppressors. The result was Hindu and Muslim men brawling on the streets of Leicester in central England.

When it comes to social media, the reality is: if it enrages, it engages. Stigmatizing content attracts a high degree of attention and is widely circulated because it elicits a visceral reaction, triggering emotions such as contempt and disgust. Those who subscribe to extremist and fringe beliefs are then fed ever more radical content by recommendation algorithms, drawing them down a rabbit hole.

This isn’t accidental. Eliciting outrage drives user engagement, which in turn drives profits. This helps explain why so little has been done to mitigate the explosion of online hatred impacting vulnerable communities. It simply doesn’t pay.

Yet, the propagation of conspiracy theories and online outrage threatens to undermine many of the American values we cherish, including the duty to protect minorities, the right to speak freely, and the ability to seek truth and acquire knowledge.

Some believe that interventions designed to keep people safe from online-instigated violence will interfere with the right to freedom of expression. These voices confuse freedom of speech with “freedom of reach.” Purveyors of hate have no right to be handed an ever-louder megaphone to disseminate their messages of intolerance and bigotry.

There is enormous power in having a wealth of information, connectivity, and entertainment in the palm of our hands. But we can no longer ignore the unchecked harm being inflicted by the technology companies that develop and manipulate their algorithms. In an environment of feeble accountability, hate and disinformation will continue to spread, amplify, and encourage real-world violence. In time, these runaway processes could escalate into digitally inspired pogroms.

As ancient hatreds meet new technological frontiers, there must be far more transparency and oversight around algorithms if lawmakers, law enforcement, and civil society leaders are to defend our values and keep our communities safe—from real-world violence, not merely from words.

Aviva Klompas is CEO and co-founder of Boundless and can be found on Twitter @AvivaKlompas.

John Donohue is the COO of the Network Contagion Research Institute (NCRI) and retired NYPD intelligence chief.

The views expressed in this article are the writers’ own.
