The rise of the internet and other digital technologies has transformed how businesses operate. Along with the undeniable benefits of this age comes a daunting challenge: avoiding falling for, or unwittingly spreading, misinformation, which can lead to costly mistakes, damaged reputations and lost opportunities. Fortunately, there are proactive ways to avoid such pitfalls, starting with an embrace of digital literacy.
What is digital literacy?
The internet broadly, and social media platforms particularly, are breeding grounds for rumors, false claims and inaccurate statistics. These can gain traction at a frightening pace, causing confusion and chaos, and small business owners are uniquely at risk. Digital literacy refers, in part, to effectively accessing, evaluating and using information from digital sources. In today’s landscape, this is not just a nice-to-have skill, but a necessity.
From marketing strategies to financial planning and customer interactions, every aspect of operations can be influenced by information obtained online, and owners who are digitally literate are simply better equipped to make informed choices.
The role of continuous learning
As an owner, it’s your responsibility to stay informed about the latest digital trends and challenges — to actively and regularly update your knowledge — and there are a number of areas to consider when doing so:
Evolution of technology: As AI and other digital tools become more sophisticated, so do the methods used to spread misinformation. Business owners need to acquire a basic level of understanding regarding the capabilities of such emerging technologies.
Evolving platforms: Social media and other online communication channels seem to be ever-transforming, especially those wielding complex algorithms for sharing content. It’s important to understand how information spreads on these platforms so you can adapt strategies.
Cybersecurity knowledge: Cybercriminals are becoming increasingly creative, leaving small businesses vulnerable to phishing attacks and data breaches. Such bad actors can hijack your own technology and tools to spread false information in your business's name unless you stay ahead of the threat landscape, or engage someone who can.
Sharing skill sets: Digital literacy shouldn’t be a skill that rests just with the business owner: Providing training to employees is a great way to add an extra layer of defense, especially when it comes to those staff members authorized to share information via social media or other channels, as well as key decision makers.
Practical strategies for protecting against misinformation
Cultivate a fact-based culture: Advancing a company environment that values fact-based decision-making means insisting that employees back their decisions with reliable data. By instilling a sense of positive skepticism — encouraging people to question information they encounter — you can greatly reduce the risk of inadvertently internalizing or spreading inaccuracies or distortions.
Create an information-sharing policy: It’s helpful to establish clear and company-wide guidelines for verifying and disseminating data, particularly on social media.
Be rigorous in verification: Never share information that's obscure or can't be traced to a reputable source — ideally to multiple sources. Reputable sources include established news outlets, government websites and well-credentialed organizations. A number of organizations and websites are dedicated to such verification, including popular fact-checkers like Snopes, PolitiFact and FactCheck.org.
Leverage AI: Although advances in technology have accelerated the spread of misinformation, technology can also help combat it. Artificial intelligence, for example, can aid in detecting false information by identifying inconsistencies and flagging potential inaccuracies.
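For the technically inclined, here is a minimal sketch of what AI-assisted flagging can look like. It uses an off-the-shelf zero-shot text classifier from the Hugging Face transformers library to route posts that read like unverified claims to a human reviewer; the model name, the labels and the 0.5 threshold are illustrative choices for this sketch, not a vetted fact-checking system.

```python
# A minimal sketch of AI-assisted misinformation flagging, not a vetted
# fact-checker. The model, labels and 0.5 threshold are illustrative.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

post = "Our competitor's flagship product was quietly recalled last week."
labels = ["verifiable factual claim", "opinion", "rumor or unverified claim"]

# The classifier scores how well each label describes the text,
# returning labels and scores sorted from best match to worst.
result = classifier(post, candidate_labels=labels)
top_label, top_score = result["labels"][0], result["scores"][0]

# Route suspicious posts to a human before they are shared or acted upon.
if top_label == "rumor or unverified claim" and top_score > 0.5:
    print(f"Flag for review ({top_score:.2f}): {post}")
else:
    print(f"Looks like '{top_label}' ({top_score:.2f})")
```

The point of a tool like this is to cheapen the first pass, not to outsource the judgment: anything it flags still needs a human decision.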
LONDON — Elon Musk sat down with British Prime Minister Rishi Sunak in London’s Lancaster House on Thursday night for a chat that veered closer to “love-in” than interview.
In the lavish gold-trimmed room where Theresa May gave one of her most famous Brexit speeches, the tech tycoon and British PM were joined by an audience that included Cabinet ministers, tech execs and — somewhat improbably — the American rapper will.i.am.
Here’s what we learned as the conversation unfolded:
Elon thinks you won’t need to work
The world’s richest man predicted a “future of abundance” from advances in AI models.
“There will come a point where no job is needed,” Musk said. “You can have a job if you want to have a job … but the AI will be able to do everything. I don’t know if that makes people comfortable or uncomfortable.”
Sunak, who will be out of a job himself after the next U.K. election if current polls are correct, laughed along nervously.
Rishi should leave the journalism to the pros
The format was meant to be Sunak interviewing Musk — but the PM’s lengthy questions diverged into listing his own achievements and heaping praise onto the tech tycoon.
“You’re known for being such a brilliant innovator and technologist,” the PM gushed, during one attempt to get a question out.
Rishi loves Big Tech
Sunak sees the AI Safety Summit as a key part of his legacy, and has been cozying up to leading AI lab founders over the last six months. This event was no different, with the PM taking his chance to list his pro-tech and pro-investment policies and to heap praise on Musk, who owns Tesla, SpaceX and X.
“It’s been a huge privilege and pleasure to have you here,” the British prime minister told Musk as they left the stage.
The love-in was mutual
Musk can play down the provocateur shtick and dial up the charm when he needs to.
He ticked every box for Sunak, praising London as a destination for AI companies, hailing the AI Safety Summit’s achievements and — crucially — backing Sunak’s decision to invite China to the Bletchley Park event, which has angered some lawmakers in the U.K. Conservative Party.
“Thank you for inviting them,” Musk said. “Having them here is essential. If they’re not participants, it’s pointless.”
AI is your new best friend … or worst enemy
It wasn’t just Sunak and Musk building a friendship on Thursday night. Musk predicted that humans more generally will make deep friendships with AI once the technology becomes intelligent enough.
But in the parts of the discussion where they debated the risks of frontier AI models, Musk called for a “referee” and an “off switch” built into models to “throw it into a safe state.”
Sunak also said AI-generated misinformation would be a “real issue” in elections taking place next year, including in the U.K. “Probably,” he added teasingly, given the election could yet be pushed to January 2025.
Musk, whose own social media platform has been plagued by misinformation, said he wanted to make X as “accurate as possible and as truthful as possible.”
LONDON — British Prime Minister Rishi Sunak has closed the world’s first AI Safety Summit by getting backing from Elon Musk.
In London’s Lancaster House, where the wifi was patchy, but the gold trim abundant, Musk sat down for a one-on-one interview with the British PM on Thursday evening.
The billionaire owner of Tesla, SpaceX and X described Rishi Sunak’s decision to invite China to the Bletchley Park summit as “essential.”
“Thank you for inviting them,” Musk said. “Having them here is essential. If they’re not participants, it’s pointless.”
Musk said AI had the potential to “create a future of abundance” and a “universal high income” if governments stepped in to act as referees.
“There will come a point where no job is needed,” Musk said. “You can have a job if you want to have a job … but the AI will be able to do everything. I don’t know if that makes people comfortable or uncomfortable.” The remark provoked nervous laughter from Sunak.
Just under four hours earlier the prime minister had wrapped up the world’s first AI Safety Summit at Bletchley Park with an international agreement which included monitoring large language models developed by the most advanced labs.
Musk, who has had several run-ins with governments over regulation, said the state had a role to play in AI governance to “safeguard the interests of the public.” “If you look at any sports game, there’s always a referee,” he added, in comments supportive of Sunak’s approach to AI governance.
The pair sat on a stage in a casual interview format, with Sunak jacketless and cross-legged, while Musk wore a black blazer over a T-shirt.
Musk told the audience, which included Cabinet ministers and tech execs, that San Francisco and Greater London are the “two leading locations on earth” for AI, adding the U.K. is “doing very well.”
Sunak had faced criticism for hosting Musk, whose platform, X, has been plagued by misinformation.
But he defended that decision in an interview with POLITICO’s Power Play podcast on Wednesday stating: “I think actually if you listen to what Elon Musk is saying, he’s someone who for a long time has been talking about the potential risk of AI and its existential risks.”
LONDON — London and Washington are to announce a “close collaboration” on AI safety as early as Wednesday, U.K. and U.S. officials confirmed to POLITICO.
The collaboration is expected to marry new guardrails the White House placed on artificial intelligence development in this week’s executive order (EO) with existing work by the United Kingdom’s “Frontier AI Taskforce.”
“We plan to announce close bilateral collaboration with the U.S. safety institute this week,” a U.K. official close to the planning of Britain’s AI safety summit told POLITICO. The person was granted anonymity to talk about the summit, which will take place at Bletchley Park on Nov. 1 and 2.
Both countries will be announcing their own version of the institutes as the summit kicks off. In a speech Wednesday in London, U.S. Vice President Kamala Harris, who is representing the Biden administration at the summit, will announce the United States AI Safety Institute, which will be housed at the Department of Commerce, according to a U.S. official granted anonymity to discuss internal plans.
“It will work to create guidelines, standards and best practices for evaluating and mitigating the full spectrum of risks,” the U.S. official added. “We must address the full spectrum of risk, from potentially catastrophic risks to societal harms that are already happening such [as] bias, discrimination and the proliferation of misinformation.”
Meanwhile, British Prime Minister Rishi Sunak has said he will set up an “AI Safety Institute” that will examine, evaluate and test new types of the emerging technology. Sunak said the new institute will build on the work of Britain’s existing Frontier AI Taskforce, which he said has already been granted “privileged access” to the technology models of leading AI companies like Google DeepMind, Anthropic and OpenAI.
The countries will “also participate in information sharing and research collaboration,” said the U.S. official, and will be making their own separate announcements. The U.S. will also share information with similar safety institutes in other countries.
The White House executive order signed Monday will require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. It is designed to ensure AI systems are safe before companies make them public. Under the EO, Washington will set up an “AI Safety and Security Board.”
“We’re trying to lead with substance here and we’re trying to engage with other countries with substance and this is a vision, and the Vice President will lay it out in her speech, […] for how the United States is seeing AI policy and AI governance,” said White House special adviser on AI Ben Buchanan on a forthcoming episode of the POLITICO Tech podcast, addressing the timing of the EO in the same week as the U.K. AI summit. Harris is giving a speech in London on Wednesday afternoon on the administration’s AI initiatives, including the EO.
The U.K.’s Tech Secretary Michelle Donelan told POLITICO on Tuesday that it was easier for the U.S. to lead the industry to be more transparent because it is dominated by American firms, but there are aspects of the work that the U.K. can move faster on.
“I know America and other countries will have plans for institutes too, but we can do it a lot quicker, because we already have that initial organization in the [Frontier AI Taskforce],” she said. “We’ve already got that expertise set up, funding in there, and our processes allow us to do that at a quicker speed.”
“The future vision is to secure the safety of models before they are released,” Sunak said Thursday. Britain is expected to publish some information publicly, but will reserve more sensitive national security intel to a smaller group of like-minded governments.
SAN FRANCISCO — One year ago, billionaire and new owner Elon Musk walked into Twitter‘s San Francisco headquarters with a white bathroom sink and a grin, fired its CEO and other top executives and began transforming the social media platform into what is now known as X.
X looks and feels something like Twitter, but the more time you spend on it the clearer it becomes that it’s merely an approximation. Musk has dismantled core features of what made Twitter, Twitter — its name and blue bird logo, its verification system, its Trust and Safety advisory group. Not to mention content moderation and hate speech enforcement.
He also fired, laid off or lost the majority of its workforce — engineers who keep the site running, moderators who keep it from being overrun with hate, executives in charge of making rules and enforcing them.
The result, long-term Twitter watchers say, has been the end of the platform’s role as an imperfect but useful place to find out what’s going on in the world. What X will become, and whether Musk can achieve his ambition of turning it into an “everything app” that everyone uses, remains as unclear as it was a year ago.
“Musk hasn’t managed to make a single meaningful improvement to the platform and is no closer to his vision of an ‘everything app’ than he was a year ago,” said Insider Intelligence analyst Jasmine Enberg. “Instead, X has driven away users, advertisers, and now it has lost its primary value proposition in the social media world: Being a central hub for news.”
As one of the platform’s most popular and prolific users even before he bought the company, Musk had a unique experience on Twitter that is markedly different from how regular users experience it. But many of the changes he’s introduced to X have been based on his own impressions of the site — in fact, he even polled his millions of followers for advice on how to run it (they said he should step down).
“Musk’s treatment of the platform as a technology company that he could remake in his vision rather than a social network fueled by people and ad dollars has been the single largest cause of the demise of Twitter,” Enberg said.
The blue checkmarks that once signified that the person or institution behind an account was who they said they were — a celebrity, an athlete, a journalist from a global or local publication, a nonprofit agency — now merely show that someone pays $8 a month for a subscription service that boosts their posts above those of unchecked users. It’s these paying accounts that have been found to spread misinformation on the platform that is often amplified by its algorithms.
On Thursday, for instance, a new report from the left-leaning nonprofit Media Matters found that numerous blue-checked X accounts with tens of thousands of followers claimed that the mass shooting in Maine was a “false flag,” planned by the government. Researchers also found such accounts spreading misinformation and propaganda about the Israel-Hamas war — so much so that the European Commission made a formal, legally binding request for information to X over its handling of hate speech, misinformation and violent terrorist content related to the war.
Ian Bremmer, a prominent foreign policy expert, posted on X this month that the level of disinformation on the Israel-Hamas war “being algorithmically promoted” on the platform “is unlike anything I’ve ever been exposed to in my career as a political scientist.”
It’s not just the platform’s identity that’s on shaky ground. Twitter was already struggling financially when Musk purchased it for $44 billion in a deal that closed Oct. 27, 2022, and the situation appears more precarious today. Musk took the company private, so its books are no longer public — but in July, the Tesla CEO said the company had lost about half of its advertising revenue and continues to face a large debt load.
“We’re still negative cash flow,” he posted on the site on July 14, due to about a “50% drop in advertising revenue plus heavy debt load.”
“Need to reach positive cash flow before we have the luxury of anything else,” he said.
In May, Musk hired Linda Yaccarino, a former NBC executive with deep ties to the advertising industry, in an attempt to lure back top brands, but the effort has been slow to pay off. While some advertisers have returned to X, they are not spending as much as they did in the past — despite a rebound in the online advertising market that boosted the most recent quarterly profits for Facebook parent company Meta and Google parent company Alphabet.
Insider Intelligence estimates that X will bring in $1.89 billion in advertising revenue this year, down 54% from 2022. The last time its ad revenue was near this level was in 2015, when it came in at $1.99 billion. In 2022, it was $4.12 billion according to the research firm’s estimates.
Outside research also shows that people are using X less.
According to research firm Similarweb, global web traffic to Twitter.com was down 14%, year-over-year, and traffic to the ads.twitter.com portal for advertisers was down 16.5%. Performance on mobile was no better, down 17.8% year-over-year based on combined monthly active users for Apple’s iOS and Android.
“Even though the cultural relevance of Twitter was already starting to decline,” before Musk took it over, “it’s as if the platform no longer exists. And it’s been a death by a thousand cuts,” Enberg said.
“What’s really fascinating is that almost all of the wounds have been self-inflicted. Usually when a social platform starts to lose its relevance there are at least some external factors at play, but that’s not the case here.”
Remember when Sen. Rand Paul (R–Ky.) accused then–White House COVID-19 adviser Anthony Fauci of funding China’s Wuhan virus lab?
Fauci replied, “Senator Paul, you do not know what you’re talking about.”
The media loved it. Vanity Fair smirked, “Fauci Once Again Forced to Basically Call Rand Paul a Sniveling Moron.”
But now the magazine has changed its tune, reporting, “In Major Shift, NIH Admits Funding Risky Virus Research in Wuhan,” and conceding that “Paul might have been onto something.”
Then what about question two: Did COVID-19 occur because of a leak from that lab?
When Paul confronted Fauci, saying, “The evidence is pointing that it came from the lab!” Fauci replied, “I totally resent the lie that you are now propagating.”
Was Paul lying? What’s the truth?
The media told us COVID came from an animal, possibly a bat.
But in my new video, Paul points out there were “reports of 80,000 animals being tested. No animals with it.”
Now he’s released a book, Deception: The Great Covid Cover-Up, that charges Fauci and others with funding dangerous research and then covering it up.
“Three people in the Wuhan lab got sick with a virus of unknown origin in November of 2019,” says Paul. The Wuhan lab is 1,000 kilometers away from where bats live.
Today the FBI, the Energy Department, and others agree with Paul. They believe COVID most likely came from a lab.
I ask Paul, “COVID came from evil Chinese scientists, in a lab, funded by America?”
“America funded it,” he replies, “maybe not done with evil intentions. It was done with the misguided notion that ‘gain-of-function’ research was safe.”
Gain-of-function research includes making viruses stronger.
The purpose is to anticipate what might happen in nature and come up with vaccines in advance. So I push back at Paul, “They’re trying to find ways to stop diseases!”
He replies, “Many scientists have now looked at this and said, ‘We’ve been doing this gain-of-function research for quite a while.’ The likelihood that you create something that creates a vaccine that’s going to help anybody is pretty slim to none.”
Paul points out that Fauci supported “gain-of-function” research.
“He said in 2012, even if a pandemic occurs…the knowledge is worth it.” Fauci did write: “The benefits of such experiments and the resulting knowledge outweigh the risks.”
Paul answers: “Well, that’s a judgment call. There’s probably 16 million families around the world who might disagree with that.”
Fauci and the National Institutes of Health (NIH) didn’t give money directly to the Chinese lab. They gave it to a nonprofit, EcoHealth Alliance. The group works to protect people from infectious diseases.
“They were able to accumulate maybe over $100 million in U.S. taxpayer dollars, and a lot of it was funneled to Wuhan,” says Paul.
EcoHealth Alliance is run by zoologist Peter Daszak. Before the pandemic, Daszak bragged about combining coronaviruses in Wuhan.
Once COVID broke out, Daszak became less eager to talk about these experiments. He won’t talk to me.
“Peter Daszak has refused to reveal his communications with the Wuhan lab,” complains Paul. “I do think that ultimately there is a great deal of culpability on his part.… They squelched all dissent and said, ‘You’re a conspiracy theorist if you’re saying this [came from a lab],’ but they didn’t reveal that they had a monetary self-incentive to cover this up,” says Paul.
“The media is weirdly uncurious about this,” I say to Paul.
“We have a disease that killed maybe 16 million people,” Paul responds. “And they’re not curious as to how we got it?”
Also, our NIH still funds gain-of-function research, Paul says.
“This is a risk to civilization. We could wind up with a virus…that leaks out of a lab and kills half of the planet,” Paul warns.
Paul’s book reveals much more about Fauci and EcoHealth Alliance. I will cover more of that in this column in a few weeks.
At a July hearing of the House Select Subcommittee on the Weaponization of the Federal Government, Republican members focused on social media companies’ moderation of largely conservative viewpoints and accused the Biden administration of working hand-in-hand with tech companies to censor critics.
The First Amendment generally restricts the actions of the government and not purely private decisions of companies. A spirited, and unsettled, debate is emerging nationwide as to the extent of government pressure on platforms that should render a moderation decision a First Amendment violation.
But some members of the Weaponization Subcommittee sought to minimize the concerns about moderation without engaging in a nuanced discussion about government pressure, or “jawboning.”
“I’m an attorney by training, and one of the things I learned very early on in constitutional law is that no right given to the people of the United States is absolute,” Rep. Linda Sánchez (D–Calif.) said when asking a witness about the harms of health misinformation. “And that includes the right to free speech because you do not have the right to shout fire in a crowded theater, because it could produce harm and death of people by being false.”
Fire in a crowded theater. If you’re discussing whether U.S. law should protect allegedly false speech, there is a good chance that someone will say these five words. That person likely wants the government to regulate harmful speech and justifies it by pointing out that the U.S. Supreme Court said that you can never yell “fire” in a crowded theater.
Like much of the speech that those invoking “fire in a crowded theater” are trying to prohibit, the statement is incorrect because sometimes you could yell “fire” in a crowded theater without facing punishment. The theater may actually be on fire. Or you may reasonably believe that the theater is on fire. Or you are singing in a concert, and “fire” is one of your lyrics. Of course, there are scenarios in which intentionally lying about a fire in a crowded theater and causing a stampede might lead to a disorderly conduct citation or similar charge.
The real problem with the “fire in a crowded theater” discourse is that it too often is used as a placeholder justification for regulating any speech that someone believes is harmful or objectionable. In reality, the Supreme Court has defined narrow categories of speech that are exempt from First Amendment protections and set an extraordinarily high bar for imposing liability for other types of speech. As the Supreme Court wrote in 2010, the United States does not have a “free-floating test for First Amendment coverage,” and the free speech protections do not “extend only to categories of speech that survive an ad hoc balancing of relative social costs and benefits.”
“Fire in a crowded theater” is a derivative of a line in a 1919 Supreme Court opinion, Schenck v. United States, an appeal by a Socialist Party official of his conviction for distributing leaflets that criticized the military draft as a 13th Amendment violation. The Court unanimously rejected his appeal, reasoning that the First Amendment’s protections yield to a “clear and present danger” such as the leaflet. Writing for the Court, Justice Oliver Wendell Holmes wrote that the “most stringent protection of free speech would not protect a man in falsely shouting fire in a theatre and causing a panic.”
The crowded theater scenario was a hypothetical to support a low-burden “clear and present danger” test and the conviction of a military draft critic. Although the Supreme Court has never had the occasion to adjudicate an actual dispute involving a person yelling “fire” in a crowded theater, the Court did at least narrow its “clear and present danger test” in 1969, setting a higher standard for imminent incitement of lawless action.
Yet the “fire in a crowded theater” enthusiasts persist, and they use the hypothetical to justify regulating a wide swath of harmful or objectionable speech without seriously evaluating the unintended consequences of giving the government more censorial power. Just as you cannot yell “fire” in a crowded theater, they argue, you can’t say insert false speech here.
But you often can utter or publish a falsehood without a regulator or court having the power to intervene, thanks to a long history of free speech precedent. These rights have not contracted; if anything, courts and legislators have expanded protections for false speech over the years. Of course, U.S. law does not protect all false speech. If a plaintiff meets the many stringent requirements for proving defamation, the defendant may be liable for damages. Regulators may oversee the claims that companies make about their products. Prosecutors may charge defendants with fraud, lying to government officials, and other crimes arising from false statements. There are even scenarios in which lying about a fire in a crowded theater could lead to liability. But the standards for holding speakers liable for false statements are high.
But such nuance is often absent in today’s discussions of free speech. After mentioning the crowded theater, Sánchez confirmed with the witness that social media platforms have policies regarding health misinformation. “We are not trying to censor speech,” Sánchez said. “We are simply trying to create factually correct information to prevent harm to people, including death, and that’s what they were trying to do during COVID.”
But alleged misinformation is speech. While some speech undisputedly can be regulated, the Supreme Court has explicitly rejected a broad exception for false speech. Invoking the crowded theater will not magically create an avenue for unchecked censorship.
The concerns about false speech have driven many commentators and politicians to propose new laws that would penalize at least some types of false statements that have long received legal protection. For many of the same reasons that courts and legislatures have protected falsehoods for centuries, imposing broad new “misinformation” laws would be stifling, ripe for abuse, inefficient, and largely inconsistent with the U.S. legal system’s approach to false speech.
***
One of the most notable such recent proposals came from Gov. Jay Inslee on the first anniversary of the January 6, 2021, storming of the U.S. Capitol. The Washington state Democrat issued a press release that touted his support for “legislation currently being written that would outlaw attempts by candidates and elected officials to spread lies about free and fair elections when it has the likelihood to stoke violence.” State lawmakers, he said in the statement, were drafting a bill that would create a gross misdemeanor for elected officials or political candidates in Washington state who tell knowing lies about elections.
“The proposed law is narrowly tailored to capture only those false statements that are made for the purpose of undermining the election process or results and is further limited to lies that are likely to incite or cause lawlessness,” Inslee said. Inslee appeared to rely on Brandenburg v. Ohio, the 1969 case that refined the Schenck v. United States “clear and present danger” test that Holmes articulated in 1919. “The U.S. Supreme Court has made it clear that speech can be limited where it is likely to incite lawlessness,” Inslee’s press release stated. But the statement did not capture the narrowness of the Brandenburg opinion. In that ruling, the Court wrote that the First Amendment prohibits state regulation of advocacy unless that advocacy “is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.” Inslee’s press release omitted any mention of an imminence requirement. As First Amendment scholar and Volokh Conspiracy blogger Eugene Volokh told Reason, imminence is a high bar. An example of imminent lawless action, Volokh said, is “standing outside a police station and yelling ‘burn it down.'” Claiming fraudulent election results, Volokh said, is not incitement.
Therein was the problem with Inslee’s initial proposal. While it was well-intentioned and arose from a legitimate desire to prevent a repeat of the unrest at the Capitol, Inslee could not easily explain how a politician’s lie about election administration rose to the level of imminent incitement of lawless action.
Throughout January 2022, Inslee tried to justify the proposal as constitutional and urgently necessary. At an event on the day of his announcement, which took place as former President Donald Trump continued to contest the election results, Inslee resorted to a comfortable and censorious metaphor. “The defeated president as recently as an hour ago is yelling fire in the crowded theater of democracy,” Inslee said. But no amount of references to fires or crowds or theaters could justify jailing politicians just because their speech was found to be untrue.
Perhaps in response to the criticism that Inslee’s announcement received, lawmakers over the next few weeks consulted legal scholars and released a revised version of the bill. The proposal begins with legislative findings that contain bold statements about Washington state’s election integrity. The bill would create a gross misdemeanor, punishable by up to 364 days in jail, for any elected official or candidate who “knowingly, recklessly, or maliciously makes false statements or claims related to any pending or completed and certified election conducted in the state, regarding the legitimacy or integrity of the election process or election results,” provided that the false speech: (1) is “intended to incite or produce imminent lawless action and do incite or produce such action resulting in harm to a person or to property”; (2) is “made for the purpose of undermining the election process or the election results”; or (3) “falsely claim[s] entitlement to an office that an elected official or candidate did not win after any lawful challenge made pursuant to this title is completed and the election results are certified.”
To the credit of those who drafted the revised bill, they at least tried to hew more closely to the language of Brandenburg than Inslee did in his press release. But even the narrower language—tying the false statements to imminent lawless action—was not guaranteed to survive constitutional scrutiny. And the revised bill covered two other types of false speech that were unrelated to the Brandenburg standard.
At a January 28, 2022, hearing on the bill, then–state Sen. David Frockt (D–Seattle), the bill’s primary sponsor, discussed the delicate balancing act that was required to address election lies while adhering to United States v. Alvarez, Brandenburg, and other First Amendment precedents. “It’s kind of like trying to drive a toaster through a car wash,” Frockt said. “You have to get it just right. And so we do not take the First Amendment for granted. I don’t. We don’t treat it cavalierly.” Others who testified were more skeptical both about the bill’s constitutionality and its potential impacts.
Paul Guppy, vice president for research of the conservative Washington Policy Center think tank, pointed to the state’s close 2004 gubernatorial election, which required a recount that lasted more than a month. “That was exactly a time period when we needed the maximum open and transparent debate of different opinions about what was happening with that election than ever,” Guppy said. “If this bill had been in effect, public officials and candidates would have been restricted or chilled or fearful about what they could say about that election.” The bill could undermine its primary goal, Guppy said. “It doesn’t increase the confidence in the outcome of the election,” he said. “It actually creates more suspicion when people are not allowed to debate the outcome of elections honestly.”
The opposition was substantial enough to prevent the bill from passing. A few weeks after the hearing, Frockt issued a statement acknowledging that the proposal would not progress in the legislature in 2022.
***
Had the bill passed, would it have survived a constitutional challenge? It is hard to predict with certainty. The revised bill at least attempted to address First Amendment concerns by mimicking the Brandenburg imminent incitement standard. While adding the Brandenburg language increases the chances of the law surviving First Amendment challenges, it also reduces the number of scenarios in which the government could hold a politician accountable for lying about election integrity.
In a 1973 opinion, Hess v. Indiana, the U.S. Supreme Court highlighted the narrowness of the Brandenburg exception that it had articulated four years earlier. The case involved an antiwar protest at Indiana University. After police began clearing the street, the defendant said something like “We’ll take the fucking street later” and was arrested for disorderly conduct. The Supreme Court reversed his conviction, finding that the Brandenburg exception did not apply. “Since the uncontroverted evidence showed that [the defendant’s] statement was not directed to any person or group of persons, it cannot be said that he was advocating, in the normal sense, any action,” the Court wrote. “And since there was no evidence, or rational inference from the import of the language, that his words were intended to produce, and likely to produce, imminent disorder, those words could not be punished by the State on the ground that they had a tendency to lead to violence.”
Even with the Brandenburg language, the Washington law still might face First Amendment problems. A politician challenging the law might argue that the uncertainty about what constitutes imminent incitement would chill a wider swath of constitutionally protected speech. A politician who has legitimate concerns about how an election was administered may understandably refrain from saying anything to avoid even the prospect of being prosecuted and sentenced to up to a year in prison. Even though the prosecution would face a high burden of proving all elements of the crime beyond a reasonable doubt, it is not inconceivable that a politically biased judge could sway a guilty verdict. Even if they were not ultimately convicted, they would need to spend substantial time and money defending the case. Perhaps it is more attractive to not say anything about their concerns.
Nor does the bill’s limitation to knowing, malicious, or reckless falsehoods directed toward particular goals eliminate concerns of a chilling effect, as illustrated in the 8th Circuit’s opinion in 281 Care Committee v. Arneson. In striking down a Minnesota law that criminalized intentional falsehoods about ballot questions, the court rejected the argument that limiting the misdemeanor to intentional falsehoods avoided constitutional problems. “The risk of chilling otherwise protected speech is not eliminated or lessened by the mens rea requirement because, as we have already noted, a speaker might still be concerned that someone will file a complaint with the [Office of Administrative Hearings], or that they might even ultimately be prosecuted, for a careless false statement or possibly a truthful statement someone deems false, no matter the speaker’s veracity,” the court wrote. “Or, most cynically, many might legitimately fear that no matter what they say, an opponent will utilize [the law] to simply tie them up in litigation and smear their name or position on a particular matter, even if the speaker never had the intent required to render him liable.”
Even if the Washington bill were somehow found to comport with the First Amendment, I question whether it would meet its goals of instilling further confidence in elections and preventing repeats of the January 6 violence. The mere presence of the law on Washington state’s books might make some segments of the public more skeptical of the state’s elections procedures, perhaps fueling speculation that politicians might be aware of problems but stay quiet out of fear of jail time. This would not be an unreasonable worry; after all, they might think, why would Washington state need to threaten politicians with jail time if its elections actually were secure?
It is far from certain that such a law would substantially reduce the most harmful false speech about elections. Trump and some other elected officials spread false claims about the 2020 elections, but they were not the only ones. Washington state’s proposed law does not (and could not) regulate false speech spread by talk radio hosts, social media trolls, foreign governments, and others.
The opposition to and failure of Washington state’s proposal reveal the many difficulties of addressing falsehoods through legal penalties. First Amendment precedent guides the legal analysis, but even if the law survived a constitutional challenge, it would run into many practical problems in effectively regulating false speech. All the reasons for allowing falsehoods apply to arguments against new misinformation regulations. Censorial new laws threaten to chill the ability of people to express criticism of those in power. They also reduce the ability of speakers to shine light on public functions such as the elections system. And it’s unclear whether they are effective.
LONDON — The European Union on Thursday demanded Meta and TikTok detail their efforts to curb illegal content and disinformation during the Israel-Hamas war, flexing the power of a new law that threatens billions in fines if tech giants fail to do enough to protect users.
The European Commission, the 27-nation bloc’s executive branch, formally requested that the social media companies provide information on how they’re complying with pioneering digital rules aimed at cleaning up online platforms.
The commission asked Meta and TikTok to explain the measures they have taken to reduce the risk of spreading and amplifying terrorist and violent content, hate speech and disinformation.
It’s the prelude to a possible crackdown under the new digital rules, which took effect in August and have made the EU a global leader in reining in Big Tech. The biggest platforms face extra obligations to stop a wide range of illegal content from flourishing or face the threat of fines of up to 6% of annual global revenue.
The new rules, known as the Digital Services Act, are being put to the test by the Israel-Hamas war. Photos and videos have flooded social media of the carnage alongside posts from users pushing false claims and misrepresenting videos from other events.
Brussels issued its first formal request under the DSA last week to Elon Musk’s social media platform X, formerly known as Twitter.
European Commissioner Thierry Breton, the bloc’s digital enforcer, had previously sent warning letters to the three platforms, as well as YouTube, highlighting the risks that the war poses.
“In our exchanges with the platforms, we have specifically asked them to prepare for the risk of live broadcasts of executions by Hamas — an imminent risk from which we must protect our citizens — and we are seeking assurances that the platforms are well prepared for such possibilities,” Breton said in a speech Wednesday.
Meta, which owns Facebook and Instagram, said it has a “well-established process for identifying and mitigating risks during a crisis while also protecting expression.”
After Hamas militants attacked Israeli communities, “we quickly established a special operations center staffed with experts, including fluent Hebrew and Arabic speakers, to closely monitor and respond to this rapidly evolving situation,” the company said.
Meta said it has teams working around the clock to keep its platforms safe, take action on content that violates its policies or local law, and coordinate with third-party fact checkers in the region to limit the spread of misinformation.
TikTok didn’t respond to a request for comment.
The companies have until Wednesday to respond to the commission’s questions related to their crisis response. They also face a second deadline of Nov. 8 for responses on protecting election integrity and, in TikTok’s case, child safety.
Depending on their responses, Brussels could decide to open formal proceedings against Meta or TikTok and impose fines for “incorrect, incomplete, or misleading information,” the commission said.
Technology has provided endless benefits to businesses across the globe and is vital to their growth. Today, data can be shared at breakneck speeds among companies, their leaders, vendors and customers, but this also allows inaccurate and false information to spread at the same rate, most recently via bots powered by artificial intelligence. Misinformation has metastasized in society broadly and on the internet specifically, whether in social media feeds and online forums or in news articles and other traditional media. Much of it is intentional, including attempts to mislead consumers and gain a competitive advantage.
While this often affects individuals personally, it can also cause severe damage to enterprises and entrepreneurs who rely upon their reputation and credibility. It’s critical, then, for them to understand the risks of misinformation, how to avoid participating in its spread and how to lessen the damage it can cause to a professional and personal brand.
How misinformation negatively impacts small businesses
Whether it comes in the form of rumors, hoaxes, fake news or misleading narratives, misinformation represents a particular danger to small companies: They often lack teams of marketing and public relations professionals to deal with such issues and so are more prone to resulting disruptions, loss of customers, negative press, reduced revenue and legal consequences.
Let’s explore a few effects in more detail:
Reputation damage: Entrepreneurs depend, of course, upon the honesty and integrity of their brands in the minds of customers, investors and partners. Misinformation can tarnish these assets, eroding the trust that’s been so hard to establish. This can be especially difficult for a small business to address since it likely can’t distance itself from an owner or other principal, for example, in a way that a large organization might be more capable of.
Poor decisions: Falling for false data/narratives regarding market trends or what’s happening with competitors could lead an entrepreneur to make a poor staffing, sales or customer service move, with potentially disastrous consequences.
Loss of customers: Incorrect/misleading information can drive away existing and potential customers, who, understandably, fear doing business with an enterprise or individual associated with it.
Legal ramifications: Deliberately disseminating misinformation about a business or individual can lead to defamation lawsuits, among other dangers.
As a small business owner, you are likely solely responsible for addressing inaccurate information about your company, customers and suppliers, and having a strategy in place to do so can significantly reduce adverse effects. The correct response will depend on the type and severity of the misinformation being shared — each situation likely requires a customized solution.
Remember, too, that some incidents may be nothing more than misunderstandings. For example, during the siege of Constantinople in 1453, mysterious lights were seen over the city. Word quickly spread that they were a sign from the heavens that the Ottomans would be defeated in battle. What was witnessed turned out to be nothing more than St. Elmo’s fire, a harmless natural phenomenon in which an electrical discharge in the atmosphere produces a bluish glow.
If the misinformation you are dealing with is similarly harmless, you can address it by simply taking responsibility — issuing an apology or otherwise setting the record straight. Businesses that show accountability will almost always come out on top.
In other cases, misinformation is intentionally malicious. Another historical example involves Benjamin Franklin, who in 1782 created a fake version of the Boston Independent Chronicle newspaper. Within, a false story claimed that the British had hired Native Americans to terrorize American soldiers and civilians across the frontier. Before long, it had been republished throughout the colonies, sparking increased hostility toward Native Americans.
In severe cases, a business may need to go on the offensive to stay ahead of intentionally malicious storytelling. This might include launching a PR campaign or hiring an attorney.
Of course, the best way to eliminate misinformation is to avoid it altogether. At the very least, you can minimize damage by catching it early. Here are some best practices that entrepreneurs can apply to do so:
Fact-checking and other verification: Before sharing information on websites, social media profiles or other media, entrepreneurs should carefully fact-check it, ideally against at least two reputable sources. It’s much easier to stop misinformation before it starts than to put the genie back in the bottle.
Build a solid reputation: Businesses known for being honest, trustworthy and ethical are less likely to be impacted by misinformation. Half the battle lies in the degree to which people are inclined to believe the negative thing they are presented with. If you run a shady operation, people are more likely to act upon something bad they heard, while those who know you run an upstanding enterprise will be more likely to come to your defense.
Monitor your online presence: A good practice for catching misinformation before it spirals out of control is to regularly monitor online mentions related to your business or you as a person. Consider setting up Google Alerts to be notified of such new content.
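One way to automate that monitoring step: each Google Alert can deliver to an RSS feed, which a small script can poll. Below is a minimal sketch in Python; the feed URL is a placeholder you would replace with your alert's real feed address, and it assumes the feedparser package is installed (pip install feedparser).

```python
# A minimal monitoring sketch: poll the RSS feed behind a Google Alert.
# The feed URL below is a placeholder; copy the real one by choosing
# "Deliver to: RSS feed" when creating the alert.
import feedparser

FEED_URL = "https://www.google.com/alerts/feeds/EXAMPLE_ID/EXAMPLE_TOKEN"

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    # Each entry is a new mention matching the alert's search terms.
    print(f"{entry.title}\n  {entry.link}")
```

Run on a schedule (hourly via cron, say), a script like this gives you an early warning on new mentions of your business without manually rechecking search results.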
LONDON — The European Commission on Thursday made a formal, legally binding request for information from Elon Musk’s social media platform X over its handling of hate speech, misinformation and violent terrorist content related to the Israel-Hamas war.
It is the first step in what could become the EU’s inaugural investigation under the Digital Services Act, in this case to determine if the site formerly known as Twitter is in compliance with the tough new rules meant to keep users safe online and stop the spread of harmful content.
San Francisco-based X has until Wednesday to respond to questions related to how its crisis response protocol is functioning. Responses to other questions must be received by Oct. 31. The commission said its next steps, which could include the opening of formal proceedings and penalties, would be determined by X’s replies.
Representatives for X did not immediately respond to a message seeking comment. The company’s CEO, Linda Yaccarino, said earlier that the site has removed hundreds of Hamas-linked accounts and taken down or labeled tens of thousands of pieces of content since the militant group’s attack on Israel. One social media expert called the actions “a drop in the bucket.”
Yaccarino on Thursday outlined steps taken by X to combat illegal content flourishing on the platform. She was responding to an earlier letter from a top European Union official seeking information on how X is complying with the EU’s new digital rules during the Israel-Hamas war. That letter, which essentially served as a warning, was not legally binding — the latest one, however, is.
“X is proportionately and effectively assessing and addressing identified fake and manipulated content during this constantly evolving and shifting crisis,” Yaccarino said in a letter to European Commissioner Thierry Breton, the 27-nation bloc’s digital enforcer.
But some say the efforts are not nearly enough to tackle the problem.
“While these actions are better than nothing, it is not enough to curtail the misinformation problem on X,” said Kolina Koltai, a researcher at the investigative collective Bellingcat who previously worked at Twitter on Community Notes.
“There is an overwhelming amount of misinformation on the platform,” Koltai said. “From what we have seen, the moderation efforts from X are only addressing a drop in the bucket.”
Since the war erupted, photos and videos have flooded social media of the carnage, including haunting footage of Hamas fighters taking terrified Israelis hostage, alongside posts from users pushing false claims and misrepresenting videos from other events.
The conflict is one of the first major tests for the EU’s groundbreaking digital rules, which took effect in August. Breton fired off a similar letter Thursday to TikTok, telling CEO Shou Zi Chew that he has a “particular obligation” to protect child and teen users from “violent content depicting hostage taking and other graphic videos” reportedly making the rounds on the video sharing app.
For X, changes that Musk has made to the platform since he bought it last year mean accounts that subscribe to X’s blue-check service can get paid if their posts go viral, creating a financial incentive to post whatever gets the most reaction. Plus, X’s workforce — including its content moderation team — has been gutted.
Those changes are running up against the EU’s Digital Services Act, which forces social media companies to step up policing of their platforms for illegal content, such as terrorist material or illegal hate speech, under threat of hefty fines.
“There is no place on X for terrorist organizations or violent extremist groups and we continue to remove such accounts in real time, including proactive efforts,” Yaccarino wrote in the letter posted to X.
X has taken action to “remove or label tens of thousands of pieces of content,” Yaccarino said, pointing out that there are 700 unique Community Notes — a feature that allows users to add their own fact-checks to posts — “related to the attacks and unfolding events.”
The platform has been “responding promptly” and in a “diligent and objective manner” to takedown requests from law enforcement agencies from around the world, including more than 80 from EU member states, Yaccarino said.
Koltai, the researcher and former Twitter employee, said Community Notes are not an “end-all solution to curtailing misinfo” and that there are gaps that the feature just can’t fill yet.
“There are still many videos and photos on X that don’t have notes that are unmoderated, and continue to spread misleading claims,” she said.
Since Musk acquired Twitter and renamed it, social-media watchers say, the platform has become not just unreliable but an active promoter of falsehoods, while a study commissioned by the EU found that it’s the worst-performing platform for online disinformation.
Rivals such as TikTok, YouTube and Facebook also are coping with a flood of unsubstantiated rumors and falsehoods about the Middle Eastern conflict, playing the typical whack-a-mole that erupts each time a news event captures world attention.
Breton, the EU official, urged TikTok’s leader to step up its efforts at tackling disinformation and illegal content and respond within 24 hours. The company did not reply immediately to an email seeking comment.
Breton’s warning letters have also gone to Mark Zuckerberg, CEO of Facebook and Instagram parent Meta.
—
AP Technology Writer Barbara Ortutay in Oakland, California, contributed to this report.
While Twitter has always struggled with combating misinformation about major news events, it was still the go-to place to find out what’s happening in the world. But the Israel-Hamas war has underscored how the platform, now transformed into X, has become not only unreliable but an active promoter of falsehoods.
Experts say that under Elon Musk the platform has deteriorated to the point that it’s not just failing to clamp down on misinformation but is favoring posts by accounts that pay for its blue-check subscription service, regardless of who runs them.
If such posts go viral, their blue-checked creators can be eligible for payments from X, creating a financial incentive to post whatever gets the most reaction — including misinformation.
Ian Bremmer, a prominent foreign policy expert, posted on X that the level of disinformation on the Israel-Hamas war “being algorithmically promoted” on the platform “is unlike anything I’ve ever been exposed to in my career as a political scientist.”
And the European Union’s digital enforcer wrote to Musk about misinformation and “potentially illegal content” on X, in what’s shaping up to be one of the first major tests for the 27-nation bloc’s new digital rules aimed at cleaning up social media platforms.
While Musk’s social media site is awash in chaos, rivals such as TikTok, YouTube and Facebook are also coping with a flood of unsubstantiated rumors and falsehoods about the conflict, playing the usual whack-a-mole that emerges every time a news event captivates the world’s attention.
“People are desperate for information and social media context may actively interfere with people’s ability to distinguish fact from fiction,” said Gordon Pennycook, an associate professor of psychology at Cornell University who studies misinformation.
For instance, instead of asking whether something is true, people might focus on whether something is surprising, interesting or even likely to make people angry — the sorts of posts more likely to elicit strong reactions and go viral.
The liberal advocacy group Media Matters found that since Saturday, subscribers to X’s premium service had shared at least six misleading videos about the war, including out-of-context videos and old ones purporting to be recent, which together earned millions of views.
TikTok, meanwhile, is “almost as bad” as X, said Kolina Koltai, a researcher at the investigative collective Bellingcat. She previously worked at Twitter on Community Notes, its crowd-sourced fact-checking service.
But unlike X, TikTok has never been known as the No. 1 source for real-time information about current events.
“I think everyone knows to take TikTok with a grain of salt,” Koltai said. But on X “you see people actively profiteering off of misinformation because of the incentives they have to spread the content that goes viral — and misinformation tends to go viral.”
Emerging platforms, meanwhile, are still finding their footing in the global information ecosystem, so while they might not yet be targets for large-scale disinformation campaigns, they also don’t have the sway of larger, more established rivals.
Facebook and Instagram owner Meta’s Threads, for instance, is gaining traction among users fleeing X, but the company has so far tried to de-emphasize news and politics in favor of more “friendly” topics.
Meta, TikTok and X did not immediately respond to Associated Press requests for comment.
A post late Monday from X’s safety team said: “In the past couple of days, we’ve seen an increase in daily active users on @X in the conflict area, plus there have been more than 50 million posts globally focusing on the weekend’s terrorist attack on Israel by Hamas. As the events continue to unfold rapidly, a cross-company leadership group has assessed this moment as a crisis requiring the highest level of response.”
While plenty of real imagery and accounts of the carnage have emerged, they have been intermingled with social media users pushing false claims and misrepresenting videos from other events.
Among the fabrications are false claims that a top Israeli commander was kidnapped, a doctored White House memo purporting to show U.S. President Joe Biden announcing billions in aid for Israel, and old unrelated videos of Russian President Vladimir Putin with inaccurate English captions. Even a clip from a video game was passed on as footage from the conflict.
“Every time there is some major event and information is at a premium, we see misinformation spread like wildfire,” Pennycook said. “There is now a very consistent pattern, but every time it happens there’s a sudden surge of concern about misinformation that tends to fade away once the moment passes.”
“We need tools that help build resistance toward misinformation prior to events such as this,” he said.
For now, those looking for a central hub of reliable, real-time information online might be out of luck. Imperfect as Twitter was, there’s no clear replacement for it, which means anyone looking for accurate information online needs to exercise vigilance.
In times of big breaking news such as the current conflict, Koltai recommended, “going to your traditional name brands and news media outlets like AP, Reuters, who are doing things like fact checking” and active reporting on the ground.
Meanwhile, in Europe, major social media platforms are facing stricter scrutiny over the war.
Britain’s Technology Secretary Michelle Donelan summoned the U.K. bosses of X, TikTok, Snapchat, Google and Meta for a meeting Wednesday to discuss “the proliferation of antisemitism and extremely violent content” following the Hamas attack.
She demanded they outline the actions they’re taking to quickly remove content that breaches the U.K.’s online safety law or their terms and conditions.
European Commissioner Thierry Breton warned in his letter to Musk of penalties for not complying with the EU’s new Digital Services Act, which puts the biggest online platforms, like X, under extra scrutiny and requires them to make it easier for users to flag illegal content and to take steps to reduce disinformation — or face fines of up to 6% of annual global revenue.
Musk responded by touting the platform’s approach of crowdsourced fact-checking labels, an apparent reference to Community Notes.
“Our policy is that everything is open source and transparent, an approach that I know the EU supports,” Musk wrote on X. “Please list the violations you allude to on X, so that the public can see them.”
Breton replied that Musk is “well aware” of the reports on “fake content and glorification of violence.”
“Up to you to demonstrate that you walk the talk,” he said.
The social media platform X, formerly known as Twitter, says it is struggling with a flood of posts sharing graphic media, violent speech and hateful conduct about the Israel-Hamas war. But it has received a broadside of criticism, including from a top European Union official, questioning the adequacy of the response.
Outside watchdog groups said misinformation about the war abounds on the platform, whose workforce — including its content moderation team — was gutted by billionaire Elon Musk after he bought it last year.
Fake and manipulated imagery circulating on X includes “repurposed old images of unrelated armed conflicts or military footage that actually originated from video games,” said a Tuesday letter to Musk from European Commissioner Thierry Breton. “This appears to be manifestly false or misleading information.”
Breton, the EU’s digital rights chief, also warned Musk that authorities have been flagging “potentially illegal content” that could violate EU laws and “you must be timely, diligent and objective” in removing it when warranted.
San Francisco-based X didn’t immediately respond to a request for comment about Breton’s letter.
A post late Monday from X’s safety team claimed it is treating the crisis with utmost effort: “In the past couple of days, we’ve seen an increase in daily active users on @X in the conflict area, plus there have been more than 50 million posts globally focusing on the weekend’s terrorist attack on Israel by Hamas. As the events continue to unfold rapidly, a cross-company leadership group has assessed this moment as a crisis requiring the highest level of response.”
That includes continuing a policy frequently championed by Musk of letting users help rate what might be misinformation, which causes those posts to include a note of context but not disappear from the platform.
The struggle to identify reliable sources for news about the war was exacerbated over the weekend by Musk, who on Sunday posted the names of two accounts he said were “good” for “following the war in real-time.” Analyst Emerson Brooking of the Atlantic Council called one of those accounts “absolutely poisonous.” Journalists and X users also pointed out that both accounts had previously shared a fake AI-generated image of an explosion at the Pentagon, and that one of them had posted numerous antisemitic comments in recent months. Musk later deleted his post.
Brooking posted on X that Musk had enabled fake war reporting by abandoning the blue check verification system for trusted accounts and allowing anyone to buy a blue check.
Brooking said Tuesday that it is “significantly harder to determine ground truth in this conflict as compared to Russia’s invasion of Ukraine” last year and “Elon Musk bears personal responsibility for this.”
He said Musk’s changes to the X platform have made it impossible to quickly assess the credibility of accounts while his “introduction of view monetization has created perverse incentives for war-focused accounts to post as many times as possible, even unverified rumors, and to make the most salacious claims possible.”
“War is always a cauldron of tragedy and disinformation; Musk has made it worse,” he added. Further, Brooking said via email, “Musk has repeatedly and purposefully denigrated the idea of an objective media, and he made platform design decisions that undermine such reporting. We now see the result.”
Part of Musk’s drastic changes over the past year included removing many of the people responsible for moderating toxic content and harmful misinformation.
One former member of Twitter’s public policy team said the company is having a harder time taking action on posts that violate its policies because there aren’t enough people to do that work.
“The layoffs are undermining the capacity of Twitter’s trust and safety team, and associated teams like public policy, to provide needed support during a critical time of crisis,” said Theodora Skeadas, one of thousands of employees who lost their jobs in the months after Musk bought the company.
X says it changed one policy over the weekend to enable people to more easily choose whether or not to see sensitive media without the company actually taking down those posts.
“X believes that, while difficult, it’s in the public’s interest to understand what’s happening in real time,” its statement said.
The company said it is also removing newly created Hamas-affiliated accounts and working with other tech companies to try to prevent “terrorist content” from being distributed online. It said it is “also continuing to proactively monitor for antisemitic speech as part of all our efforts. Plus we’ve taken action to remove several hundred accounts attempting to manipulate trending topics.”
Linda Yaccarino, whom Elon Musk named in May as the top executive at X, withdrew from an upcoming three-day tech conference where she was scheduled to speak, citing the need to focus on how the platform was handling the war.
“With the global crisis unfolding, Linda and her team must remain fully focused on X platform safety,” X told the organizers of the WSJ Tech Live conference being held next week in Laguna Beach, California.
—
Associated Press writer Ali Swenson contributed to this report.
Republicans are still seeking revenge for their 2020 election losses. In Wisconsin, they’ve put a target on the back of Meagan Wolfe, the state’s nonpartisan elections chief, whom they’re apparently still mad at for refusing to take Donald Trump’s baseless claims of election fraud more seriously. Wolfe called the 2020 election “an incredible success that was a result of years of preparation and meticulously, carefully following the law.” Nevertheless, state Republicans heeded Trump’s election lies, launching a review of the 2020 results, led by a former right-wing judge who attended a symposium on election fraud headed by MyPillow founder Mike Lindell. The investigation cost taxpayers over $1.1 million and ultimately, in 2022, reported no evidence of fraud. Unsatisfied, and still without a smoking gun, Wisconsin Republicans have made Wolfe a scapegoat. Nearly a year from the 2024 election, it looks like they want to fire her, following a possibly illegitimate confirmation hearing last week. Wolfe’s future as elections administrator remains in limbo.
The power struggle playing out around her post serves as a portent of the machinations to come in 2024—when it’s highly possible Trump will once again be on the ballot. It also comes as Democratic secretaries of state are sounding the alarm about continued and emergent threats facing American democracy. Trump allies are already deploying the same playbook they used in 2020.
“The attack on democracy has not stopped—very specifically Trump’s efforts to undermine American democracy have not stopped,” Colorado secretary of state Jena Griswold tells Vanity Fair.
The breadth of Trump’s election denialism was thrust back into focus last month, when Fulton County district attorney Fani Willis released a damning indictment laying out an alleged conspiracy to overturn the 2020 election in not only Georgia, but in other states including Michigan, Arizona, Pennsylvania, Nevada, and New Mexico. “It is sort of a roadmap in a sense. It gives us an idea about what to expect and what to guard against,” Minnesota secretary of state Steve Simon said of the indictment. “If there’s a similar plot or scheme by anyone in 2024, they won’t necessarily follow the same roadmap as in 2020. But it does give us an idea about what the pressure points are.”
Arizona secretary of state Adrian Fontes—who served as the election recorder for Maricopa County, one of the fiercest battlegrounds, in 2020—points out that Trump continues to push election disinformation. Trump was “mildly inciting folks to violence” in his recent interview with Tucker Carlson, Fontes said, referring to the ex-president saying that his political opponents were “savage animals; they’re people that are sick,” and entertaining Carlson’s suggestion that the former president could be assassinated. The indictments haven’t stopped Trump from pushing election denialism and engaging in dangerous hyperbole. And at the state level, his supporters are following suit with organized attacks on the system, such as that against Wolfe. Both Fontes and Griswold said they regularly receive death threats, as do other elections officials.
As Democratic secretaries of state sound the alarm ahead of the 2024 presidential election, some Republican officials are amplifying Trump’s unfounded claims. For instance, as the Associated Press reported, secretaries of state in Ohio, West Virginia, and Missouri—three states Trump won—have supported increased voter restrictions, appearing to buy into the former president’s false rhetoric that Biden stole the presidency even though they are the very individuals tasked with ensuring election integrity. In a recent interview, West Virginia secretary of state Mac Warner summed up the balance he and other Republicans are trying to strike. “I will admit Biden won the election, but did he do it legitimately? Or did that happen outside the election laws that legislatures in certain states had put in place? That’s where I balk and say no,” he said. As Republican officials continue to engage in election denialism, they only add to the confusion and challenges ahead of next year’s election.
As 2024 approaches, there is a real fear of what Griswold described as “insider threats” to the system—something she experienced firsthand in Colorado. Former Colorado county clerk Tina Peters was indicted in 2022 in a breach of Mesa County’s election system; she was accused of allowing an unauthorized individual access to the voting system in search of evidence to support Trump’s claims of election fraud (Peters pleaded not guilty and is awaiting trial). Griswold says Conan Hayes, who has also been identified in media reports as individual 27 in Georgia’s Coffee County indictment for accessing election data and who, as the Times reported, was associated with Peters, was the one to physically compromise the voting equipment in Mesa County. A second breach occurred in Colorado when an Elbert County clerk, Dallas Schroeder, gained unauthorized access and made copies of the county’s election system. (Schroeder did not face charges.) In legal filings, Schroeder said he had help from an individual named Shawn Smith, who leads pro-Trump election denial groups and has been associated with John Eastman, the former Trump attorney behind the 2020 fake electors scheme and one of the central figures of the Georgia indictment. Griswold recalls, “Eastman was on the stage as a far-right militia called for me to be hung.” (Smith said onstage at an event in February 2022, where Eastman was reportedly in attendance: “I think if you’re involved in election fraud, then you deserve to hang. Sometimes the old ways are the best ways.”)
After the 2020 election, Colorado governor Jared Polis signed into law new legislation aimed at protecting against “insider threats” and shielding election workers from harassment and intimidation. “Every state needs to pass that legislation immediately because it’s a tremendous risk for the ’24 election,” Griswold told VF.
The secretaries of state who spoke with VF described an ongoing game of whack-a-mole when it comes to election denialism and disinformation. “I always make a distinction between disinformation and disagreement. Disagreement is welcome and normal and a sign of health, I think for a democracy for people to disagree on issues. But I’m not talking so much about what the election system ought to be as what it is. Let’s agree on what it is, whether you like what it is or isn’t,” Simon said. But, “There are some people who are pushing election disinformation knowing that it’s false and doing it for political purposes.”
The hope, Simon added, is that greater transparency into the election process will cut through the barrage of mis- and disinformation. But he added, “I’m not naive… It’s not a binary thing. It’s not that someone’s going to hear something and completely change their mind,” he said. However, “They might do some incremental changing.” The 2024 election is only poised to be more combustible, as it will likely happen in tandem with Trump’s multiple criminal trials. “We’re heading into a presidential election year, which always means more passion, more drama, more intensity,” Simon says. “It’s worth revisiting 2020 if for no other reason than to talk about the lessons learned and what we can do to stabilize democracy in America.”
“[Trump] is the sexy clickbait right now, but that’s not what this is about,” Fontes told VF. “There are so many things that go into the day-to-day of election administration. It is 365 days a year that every once in a while something pops up like a new lawsuit, a new scandal, a new headline, a new indictment. Nowadays they’re coming fast and furious.”
“That’s you,” I tell Smoke in my most reassuring voice, but she always forgets. And this is the catch-22 of confronting your doppelganger: Bark all you want, but you inevitably end up confronting yourself.
My commitment to non-involvement began to weaken during COVID, when the stakes of getting confused with Other Naomi rose markedly. Several months into the pandemic, Wolf emerged not as a scattershot peddler of conspiratorial speculation but as one of the most outspoken opponents of almost every anti-COVID public health measure, from masks to vaccines to vaccine-verification apps, which she equated with fascism while wantonly drawing comparisons with Nazi Germany. An NPR investigation found that Wolf was a primary spreader of the theory that vaccinated people shed dangerous particles onto unvaccinated people, possibly compromising their fertility, a theory that led a Florida private school to ban vaccinated teachers from the classroom.
Mocked and deplatformed in liberal circles, she quickly became a full-fledged crossover star on the MAGA right, appearing regularly (sometimes daily) on Stephen Bannon’s podcast War Room, as well as on Tucker Carlson’s now canceled show on Fox News—that is, when she wasn’t testifying for Republicans (or attempting to) in statehouses or posting photos of her new firearm. A “biofascist” coup d’état was taking place under cover of mask mandates and vaccine-verification apps, she warned, and her new fans ate it up.
Meanwhile, my doppelganger troubles escalated. No longer was it a periodic annoyance every few months. When I went online to try to find some simulation of the friendships and communities I missed during those achingly isolated months, I would invariably find, instead, The Confusion: a torrent of people discussing me and what I’d said and what I’d done—only it wasn’t me. It was her.
And look, it was confusing, and also, in a gallows way, funny, even to me. We are both Naomis with a skepticism of elite power. We even had some of the same targets. I, for instance, was furious when Bill Gates sided with the drug companies as they defended their patents on lifesaving COVID vaccines, using the World Trade Organization’s insidious intellectual property agreement as a weapon, despite the fact that vaccine development was lavishly subsidized with public money, and that this lobbying helped keep the shots out of the arms of millions of the poorest people on the planet. Wolf was furious that people were being pushed to get vaccinated at all and boosted conspiracies about Gates using vaccine apps to track people and to usher in a sinister world order. To stressed-out, busy people inundated with thumbnail-size names and avatars, we’re just a blur of Naomis with highlights going on about Bill Gates.
Again and again, she was saying things that sounded a little like the argument I made in The Shock Doctrine but refracted through a funhouse mirror of plots and conspiracies based almost exclusively on a series of hunches. I felt like she had taken my ideas, fed them into a bonkers blender, and then shared the thought purée with Carlson, who nodded vehemently. All the while, Wolf’s followers hounded me about why I had sold out to the “globalists” and was duping the public into believing that masks, vaccines, and restrictions on indoor gatherings were legitimate public health measures amid mass death. “I think she’s been got at!” @RickyBaby321 said of me, telling Wolf, “I have relegated Naomi Klein to the position of being: ‘The Other Naomi’!” It’s a vertiginous thing to be harangued on social media about your alleged misunderstanding of your own ideas—while being told that another Naomi is a better version of you than you are.
Doppelganger comes from German, combining doppel (double) with gänger (goer). Sometimes it’s translated as “double-walker,” and I can tell you that having a double walking around is profoundly uncanny, the feeling Sigmund Freud described as “that species of the frightening that goes back to what was once well known and had long been familiar”—but is suddenly alien. The uncanniness provoked by doppelgangers is particularly acute because the thing that becomes unfamiliar is you. A person who has a doppelganger, Freud wrote, “may identify himself with another and so become unsure of his true self.” He wasn’t right about everything, but he was right about that.
My first response to Other Naomi’s COVID antics was horror and a little rage: Surely now I needed to fight back in earnest, scream from my screen that she is not me. After all, lives were being lost to the kind of industrial-scale medical misinformation she was doing so much to help spread. Surely it was time to get serious about defending the boundaries of my identity.
But then something happened that I didn’t expect. I stopped being so horrified and got interested. Interested in what it means to have a doppelganger. Interested in the conspiratorial world in which Other Naomi was now so prominent, a place that often felt like a doppelganger of the world where I live. Why were so many people drawn to fantastical theories? What needs were they fulfilling? And what would their proponents do next?
In the hopes of picking up a few pointers on how others had handled their double trouble, I began reading and watching everything I could find about doppelgangers, from Carl Jung to Ursula K. Le Guin; Fyodor Dostoyevsky to Jordan Peele. The figure of the double began to fascinate me—its meaning in ancient mythology and in the birth of psychoanalysis. The way the twinned self stands in for our highest aspiration—the eternal soul, that ephemeral being that supposedly outlives the body. And the way the double also represents the most repressed, depraved, and rejected parts of ourselves that we cannot bear to see—the evil twin, the shadow self, the anti-self, the Hyde to our Jekyll. The doppelganger as warning or harbinger: Pay attention, they tell us.
From these stories, I quickly learned that my identity crisis was likely unavoidable: The appearance of one’s doppelganger is almost always chaotic, stressful, and paranoia-inducing, and the person encountering their double is invariably pushed to their limits by the frustration and uncanniness of it all.
Confrontations with our doppelgangers raise existentially destabilizing questions. Am I who I think I am, or am I who others perceive me to be? And if enough others start seeing someone else as me, who am I, then? Actual doppelgangers are not the only way we can lose control over ourselves, of course. The carefully constructed self can be undone in any number of ways and in an instant—by a disabling accident, by a psychotic break, or, these days, by a hacked account or deepfake. This is the perennial appeal of doppelgangers in novels and films: The idea that two strangers can be indistinguishable from each other taps into the precariousness at the core of identity—the painful truth that, no matter how deliberately we tend to our personal lives and public personas, the person we think we are is fundamentally vulnerable to forces outside of our control.
In the age of artificial intelligence, many of us are feeling this particularly acutely now, which may be why twins and doppelgangers and multiverses seem suddenly ubiquitous in the culture, from Everything Everywhere All at Once to the remake of Dead Ringers. When machines can generate the voice and the style of any person, living or dead, do any of us control ourselves?
“How many of everybody is there going to be?” asks a character in Jordan Peele’s 2019 doppelganger movie, Us.
Answer: a lot.
If doppelganger literature and mythology is any guide, when confronted with the appearance of one’s double, a person is duty bound to go on a journey—a quest to understand what messages, secrets, and forebodings are being offered. So that is what I have done. Rather than push my doppelganger away, I have attempted to learn everything I can about her and the movements of which she is a part. I burrowed deeper and deeper into a warren of conspiracy rabbit holes, places where it often seems that my own research has gone through the looking glass and is now gazing back at me as a network of fantastical plots that cast the very real crises we face—from COVID to climate change to Russian military aggression—as false flag attacks, planted by the Chinese Communists/corporate globalists/Jews.
As I went, I found myself confronting yet more forms of doubling and doppelganging, these ones distinctly more consequential. Like the way that all of politics increasingly feels like a mirror world, with society split in two and each side defining itself against the other—whatever one says and believes, the other seems obliged to say and believe the exact opposite. The deeper I went, the more I noticed this phenomenon all around me: individuals not guided by legible principles or beliefs, but acting as members of groups playing yin to the other’s yang—well versus weak; awake versus sheep; righteous versus depraved. Binaries where thinking once lived.
TALLINN, Estonia — On the battlefields of Ukraine, the fog of war plagues soldiers. And far from the fighting, a related and just as disorienting miasma afflicts those who seek to understand what’s happening in the vast war.
Disinformation, misinformation and absent information all cloud civilians’ understanding. Officials from each side denounce devious plots being prepared by the enemy, which never materialize. They claim victories that can’t be confirmed — and stay quiet about defeats.
None of this is unique to the Russia-Ukraine conflict. Any nation at war bends the truth — to boost morale on the home front, to rally support from its allies, to try to persuade its detractors to change their stance.
But Europe’s largest land war in decades — and the biggest one since the dawn of the digital age — is taking place in a superheated information space. And modern communications technology, theoretically a force for improving public knowledge, tends to multiply the confusion because deceptions and falsehoods reach audiences instantly.
“The Russian government is trying to portray a certain version of reality, but it’s also being pumped out by the Ukrainian government and advocates for Ukraine’s cause. And those people currently also have views and are using information very effectively to try to shape all of our views of the war and its impact,” says Andrew Weiss, an analyst at the Carnegie Endowment for International Peace.
THE ‘FOG’ IS NOT A NEW DEVELOPMENT
Even before the war began, confusion and contradiction were rife.
Russia, despite massing tens of thousands of soldiers on the border, claimed it had no intent of invading. Ukrainian President Volodymyr Zelenskyy consistently downplayed the likelihood of war — an alarming stance to some Western allies — although the defense of Kyiv showed Ukrainian forces were well-prepared for just that eventuality.
Within a day of the war’s start on Feb. 24, 2022, disinformation spread, notably the “Ghost of Kyiv” tale of a Ukrainian fighter pilot who shot down six Russian planes. The story’s origin is unclear, but it was quickly backed by Ukrainian official accounts before authorities admitted it was a myth.
One of the most flagrant cases of disinformation arose in the war’s second week, when a maternity hospital in the besieged city of Mariupol was bombed from the air. Images taken by a photographer for The Associated Press, which had the only foreign news team in the city, appalled the world, particularly one of a heavily pregnant woman being carried on a stretcher through the ruins.
The brutal attack flew in the face of Russian claims that it was hitting only targets of military value and was avoiding civilian facilities. Russia quickly launched a multi-pronged and less-than-coherent campaign to tamp down the outrage.
Diplomats, including Russia’s U.N. ambassador, denounced AP’s reporting and images as outright fakes. Russia claimed that a patient interviewed after the attack — who was standing and appeared uninjured — and the woman on the stretcher were the same person, and that she had been a crisis actor. Foreign Minister Sergey Lavrov alleged Ukrainian fighters were sheltering in the hospital, making it a legitimate target.
The patient who was interviewed muddied the situation by later claiming she had not given journalists permission to cite her and saying she had not heard planes over the hospital before the blasts, suggesting it could have been shelled rather than bombed. Russian authorities seized on those statements to bolster their claims, although the woman confirmed the attack itself was real.
A week later, Mariupol’s main drama theater was destroyed in an airstrike even though the word “children” was written in Russian in large letters in two spots around the theater to show that civilians were sheltering there. The blast killed as many as 600 people.
Russia denied the attack, claiming again that Ukrainian fighters were sheltering inside and that the fighters themselves blew up the building.
RUSSIA MAKES ITS OWN CLAIMS ABOUT ITS PROGRESS
Russia’s Defense Ministry makes almost daily claims of killing dozens or hundreds of Ukrainian soldiers, claims that cannot be confirmed and are widely believed to be inflated.
In January, the Defense Ministry bragged that its forces killed as many as 600 Ukrainian soldiers in a missile attack on buildings in the city of Kramatorsk, where the soldiers were temporarily billeted. However, journalists including an AP reporter who went to the site the next day found the buildings without serious damage and no sign of any deaths.
Russia said the purported attack was in retaliation for a Ukrainian strike on a Russian base that killed at least 89, one of the largest known single-incident losses for Russia.
Sometimes the fact of shocking destruction cannot be denied, but who caused it is disputed. When a renowned cathedral in Odesa was heavily damaged in July, Ukraine said it was hit by a Russian missile; Russia said it was hit by the remnants of a Ukrainian defense missile.
The disastrous collapse in May of the Kakhovka dam, which was under Russian control, brought vehemently competing accounts from Russia — which claimed it was hit by Ukrainian missiles — and Ukraine, which alleged Russian forces blew it up. An AP analysis found Russia had the means and motive to destroy the dam, which was the only remaining fixed crossing between the Russian- and Ukrainian-held banks of the Dnieper River in the frontline Kherson province.
Both sides play at demonizing the other with claims of the other’s devious plans. Sometimes one alleges the other side is preparing a “false-flag” attack, as when Ukraine claimed Russia planned missile strikes on its ally Belarus in order to blame Ukraine and to draw Belarus’ troops into the war.
Russia and Ukraine both invoke the specter of nuclear disaster. Russian Foreign Minister Sergey Lavrov and Defense Minister Sergei Shoigu grabbed worldwide attention in October with claims that Ukraine was preparing a “dirty bomb” — a conventional explosive that spreads radioactive material. Zelenskyy in turn has repeatedly warned that Russia has planted explosives to cause a catastrophe at the Zaporizhzhia nuclear power plant, which it occupies. Corroborating evidence of either is absent.
FOG ALSO CLOAKS THE FUTURE
In the war, fog shrouds both events that occurred and events that didn’t — and obscures understanding of what may occur next. And it does not creep in on little cat feet, but spreads instantly as Russia and Ukraine each take advantage of social media, messaging apps and the world’s hunger for news to put forth both facts and deceptions.
And what has or hasn’t happened isn’t the only fodder; what might or might not happen is fair game, too. Occasionally, dark allegations about what the other side is planning go a step further, into complaints about what supposedly won’t happen.
When a Russian journalist died in an attack by Ukrainian forces in July, Foreign Ministry spokeswoman Maria Zakharova claimed within hours that a reaction to the death from international organizations was unlikely. She fumed that “pathological hypocrisy has long been a political tradition of Western liberalism and its unconditioned reflex.”
Among those who deplored the reporter’s death in the following days: the head of UNESCO and the International Federation of Journalists.
___
Jim Heintz has covered Russia for The Associated Press since 1999.
CHICAGO — Leading up to the 2020 election, Facebook ads targeting Latino and Asian American voters described Joe Biden as a communist. A local station claimed a Black Lives Matter co-founder practiced witchcraft. Doctored images showed dogs urinating on Donald Trump campaign posters.
None of these claims was true, but they scorched through social media sites that advocates say have fueled election misinformation in communities of color.
As the 2024 election approaches, community organizations are preparing for what they expect to be a worsening onslaught of disinformation targeting communities of color and immigrant communities. They say the tailored campaigns challenge assumptions of what kinds of voters are susceptible to election conspiracies and distrust in voting systems.
“They’re getting more complex, more sophisticated and spreading like wildfire,” said Sarah Shah, director of policy and community engagement at the advocacy group Indian American Impact, which runs the fact-checking site Desifacts.org. “What we saw in 2020, unfortunately, will probably be fairly mild in comparison to what we will see in the months leading up to 2024.”
A growing subset of communities of color, especially immigrants for whom English is not their first language, are questioning the integrity of U.S. voting processes and subscribing to Trump’s lies of a stolen 2020 election, said Jenny Liu, mis/disinformation policy manager at the nonprofit Asian Americans Advancing Justice. Still, she said these communities are largely left out of conversations about misinformation.
“When you think of the typical consumer of a conspiracy theory, you think of someone who’s older, maybe from a rural area, maybe a white man,” she said. “You don’t think of Chinese Americans scrolling through WeChat. That’s why this narrative glosses over and erases a lot of the disinformation harms that many communities of color face.”
Tailoring disinformation
In addition to general misinformation themes about voting machines and mail-in voting, groups are catering their messaging to communities of color, experts say.
For example, immigrants from authoritarian regimes in countries like Venezuela or who have lived through the Chinese Cultural Revolution may be “more vulnerable to misinformation claiming politicians are wanting to turn the U.S. into a Socialist state,” said Inga Trauthig, head of research for the Propaganda Research Lab at the Center for Media Engagement at the University of Texas at Austin. People from countries that have not recently had free and fair elections may have a preexisting distrust of elections and authority that may make them vulnerable to misinformation as well, Trauthig said.
Disinformation efforts often hinge on topics most important to each community, whether that is public safety, immigration, abortion, education, inflation or alleged extramarital affairs, said Laura Zommer, co-founder of the Spanish-language fact-checking group Factchequeado.
“It takes advantage of their very real fear and trauma from their experiences in their home countries,” Zommer said.
Other vulnerabilities include language barriers and a lack of knowledge of the U.S. media landscape and how to find credible U.S. news sources, several misinformation experts told The Associated Press. Many immigrants rely on translated content for voting information, leaving space for bad actors to inject misinformation.
“These tactics exploit information vacuums when there’s a lot of uncertainty around how these processes work, especially because a lot of election materials may not be translated in the languages our communities speak or be available in forms they are likely to access,” said Clara Jiménez Cruz, another co-founder of Factchequeado.
Misinformation can also arise from mistranslations. The Brookings Institution, a nonprofit think tank, found examples of mistranslations in Colombian, Cuban and Venezuelan WhatsApp groups, where “progressive” was translated to “progresista,” which carries “far-left connotations that are closer to the Spanish words ‘socialista’ and ‘comunista.’”
How disinformation spreads
Disinformation, often in languages like Spanish, Mandarin or Hindi, flows onto social media apps like WhatsApp and WeChat heavily used by communities of color.
Minority communities that believe their views and perspectives aren’t represented by the mainstream are likely to “retreat into more private spaces” found on messaging apps or groups on social media sites like Facebook, Trauthig said.
“But disinformation also targets them on these platforms, even though it may feel to them to be that safer space,” she said.
Messages on WhatsApp are also encrypted and can’t be easily seen or traced by moderators or fact-checkers.
“As a result, messages on apps like WhatsApp often fly under the radar and are allowed to spread and spread, largely unchecked,” said Randy Abreu, policy counsel for the National Hispanic Media Coalition, which leads the Spanish Language Disinformation Coalition.
Abreu also raised concerns about Spanish YouTube channels and radio shows that are growing in popularity. He said the coalition is tracking more and more YouTube and radio personalities who are spreading misinformation in Spanish.
A 2022 report by the left-leaning watchdog group Media Matters tracked 40 Spanish-language YouTube videos spreading misinformation about U.S. elections. Many of these videos remained on the platform despite violating YouTube’s election misinformation policy, the report said.
Disinformation and disenfranchising communities of color
Amid changes in voting policies at state and local levels, advocates are sounding the alarm on how disinformation about voting in 2024 may target communities of color. Many of these efforts have surged as Asian American, Black and Latino communities have grown in political power, said María Teresa Kumar, founding president of the nonprofit advocacy group Voto Latino.
“Disinformation is, at its core, meant to be a sort of voter suppression tactic for communities of color,” she said. “It targets communities of color in a way that feeds into their already justifiable concerns that the system is stacked against them.”
The tactics also feed into a history “as old as the Jim Crow era of attempting to disenfranchise people of color, going back to voter intimidation and suppression efforts after the Civil Rights Act of 1866,” said Atiba Ellis, a professor of law at Case Western Reserve University School of Law.
While many of the same recycled claims around alleged fraud in the 2020 and 2022 elections are expected to resurface, experts say disinformation campaigns will likely be more sophisticated and granular in attempts to target specific groups of voters of color.
Trauthig also raised concerns about how layoffs and instability at social media platforms like Twitter may leave them less prepared to tackle misinformation in 2024. It also remains to be seen how new social media platforms like Threads will approach the threat of misinformation. Changes in policies like WhatsApp launching a “Communities” function connecting multiple groups and expanding group chat sizes may also “have big implications for how quickly misinformation will spread on the platform,” she said.
In response to the mounting threat of misinformation, Indian American Impact is ramping up its fact-checking efforts through what the organization says is the first fact-checking website specifically for South Asian Americans. Shah said the group is drawing inspiration from 2022 projects, including a voting toolkit using memes with Bollywood characters and passing out Parle-G crackers with voting information stickers at Indian grocery stores.
Cruz of Factchequeado is paying close attention to misinformation in swing states with significant Latino populations like Nevada and Arizona. And Liu of Asian Americans Advancing Justice is reviewing misinformation trends from previous elections to strategize about how to inoculate Asian American voters against them.
Still, they say there is more work to be done.
Critics are urging social media companies to invest in content moderation and fact-checking in languages other than English. Government and election officials should also make voting information more accessible to non-English speakers, organize media literacy trainings in community spaces and identify “trusted messengers” in communities of color to help approach trends in misinformation narratives, experts said.
“These are not monolithic groups,” Cruz said. “This disinformation is very specifically tailored to each of these communities and their fears. So we also need to be partnering with grassroots organizations in each of these communities to tailor our approaches. If we don’t take the time to do this work, our democracy is at stake.”
___
The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP’s democracy initiative here. The AP is solely responsible for all content.