ReportWire

Tag: Misinformation

  • TikTok let through disinformation in political ads despite its own ban, Global Witness finds

    SAN FRANCISCO — Just weeks before the U.S. presidential election, TikTok approved advertisements that contained election disinformation even though it has a ban on political ads, according to a report published Thursday by the nonprofit Global Witness.

    The technology and environmental watchdog group submitted ads that it designed to test how well systems at social media companies work in detecting different types of election misinformation.

    The group, which conducted a similar investigation two years ago, did find that the companies — especially Facebook — have improved their content-moderation systems since then.

    But it called out TikTok for approving four of the eight ads submitted for review that contained falsehoods about the election. That’s despite the platform’s ban on all political ads in place since 2019.

    The ads never appeared on TikTok because Global Witness pulled them before they went online.

    “Four ads were incorrectly approved during the first stage of moderation, but did not run on our platform,” TikTok spokesman Ben Rathe said. “We do not allow political advertising and will continue to enforce this policy on an ongoing basis.”

    Facebook, which is owned by Meta Platforms Inc., “did much better” and approved just one of the eight submitted ads, according to the report.

    In a statement, Meta said while “this report is extremely limited in scope and as a result not reflective of how we enforce our policies at scale, we nonetheless are continually evaluating and improving our enforcement efforts.”

    Google’s YouTube did the best, Global Witness said, approving four ads but not letting any publish. It asked for more identification from the Global Witness testers before it would publish them and “paused” their account when they didn’t. However, the report said it is not clear whether the ads would have gone through had Global Witness provided the required identification.

    Google did not immediately respond to a message for comment.

    Companies nearly always have stricter policies for paid ads than they do for regular posts from users. The ads submitted by Global Witness included outright false claims about the election — such as stating that Americans can vote online — as well as false information designed to suppress voting, like claims that voters must pass an English test before casting a ballot. Other fake ads encouraged violence or threatened electoral workers and processes.

    The ads Global Witness submitted were text-based, but the group said it translated them into what it called “algospeak.” This is a widely used trick to try to bypass internet companies’ text-focused content moderation systems by substituting numbers and symbols as stand-ins for letters, making it harder for automated systems to “read” the text.
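
    The “algospeak” trick described above can be sketched in a few lines of Python. This is an illustrative toy, not the report’s methodology: the substitution table, the `to_algospeak` helper, and the `naive_filter` keyword check are all invented here to show why character substitution can slip past simple text matching.

```python
# Illustrative "algospeak" sketch: swap letters for look-alike numbers and
# symbols so a naive keyword filter no longer matches the text.
# The substitution table below is a made-up example, not taken from the report.
SUBS = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "$", "t": "7"}

def to_algospeak(text: str) -> str:
    """Replace letters with look-alike characters; leave everything else alone."""
    return "".join(SUBS.get(ch.lower(), ch) for ch in text)

def naive_filter(text: str, banned: list[str]) -> bool:
    """A simplistic moderation check: flag text containing any banned keyword."""
    lowered = text.lower()
    return any(word in lowered for word in banned)

msg = "vote online"
obfuscated = to_algospeak(msg)
print(obfuscated)                          # "v073 0nl1n3"
print(naive_filter(msg, ["vote"]))         # True: the plain text is caught
print(naive_filter(obfuscated, ["vote"]))  # False: substitution evades the filter
```

    Real moderation systems are more sophisticated than a keyword match, but the asymmetry the sketch shows — cheap obfuscation versus exact-string detection — is the reason such substitutions are attempted at all.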

  • Elon Musk’s X is back in Brazil after its suspension, having complied with all judicial demands

    RIO DE JANEIRO (AP) — The social media platform X began returning to Brazil on Wednesday, after remaining inaccessible for more than a month due to a clash between its owner, Elon Musk, and a justice on the country’s highest court.

    Internet service providers began restoring access to the platform after Supreme Court Justice Alexandre de Moraes authorized lifting X’s suspension on Tuesday.

    “TWITTER IS ALIVE,” Lucas dos Santos Consoli, known as luscas on X, wrote on the platform to his more than 7 million followers.

    “I’m happy that the platform decided to follow the laws of Brazil and finally adapted, after all I’ve been using the app for almost 15 years so I can’t deny that I was missing it,” the 31-year-old told The Associated Press.

    De Moraes ordered the shutdown of X on Aug. 30 after a monthslong dispute with Musk over free speech, far-right accounts and misinformation. Musk had disparaged de Moraes, calling him an authoritarian and a censor, although his rulings, including X’s nationwide suspension, were repeatedly upheld by his peers.

    Musk’s company ultimately complied with all of de Moraes’ demands. They included blocking certain accounts from the platform, paying outstanding fines and naming a legal representative. Failure to do the latter had triggered the suspension.

    “This sends a message to the world that the richest person on the planet is subject to local laws and constitutions,” said David Nemer, who specializes in the anthropology of technology at the University of Virginia. It could set a precedent as to how other countries that are clashing with Musk — such as Australia — could move forward, as it shows Musk is not unbeatable, he added.

    Brazil — a highly online country of 213 million people — is one of X’s biggest markets, with estimates of its user base ranging from 20 million to 40 million.

    “X is proud to return to Brazil,” the company said in a statement posted on its Global Government Affairs account. “Giving tens of millions of Brazilians access to our indispensable platform was paramount throughout this entire process. We will continue to defend freedom of speech, within the boundaries of the law, everywhere we operate.”

    Julia Bahri, an 18-year-old law student, said she was delighted with X’s return. She said that losing access to the platform had led to “one of the most desperate feelings I’ve experienced for a while,” adding that she had felt lost with regards to news.

    Bahri said she uses X to express herself, whereas Instagram and Snapchat are mostly for posting photos.

    The Aug. 30 ban came two days after the company said it was removing all its remaining staff in Brazil. X said de Moraes had threatened to arrest its legal representative in the country, Rachel de Oliveira Villa Nova Conceição, if the company did not comply with orders to block accounts.

    Brazilian law requires foreign companies to have a local legal representative to receive notifications of court decisions and swiftly take any requisite action — particularly, in X’s case, the takedown of accounts.

    Sleeping Giants Brazil, a platform for activism that seeks to combat fake news and hate speech, said the resumption of X’s activities in Brazil marked “a significant victory for Brazilian democracy.”

    “It is crucial to remain steadfast against efforts to weaken democratic state authority, institutions and values,” it said in a statement.

    Some of X’s Brazilian users have migrated to other platforms, such as Meta’s Threads and, primarily, Bluesky. It’s unclear how many of them will return to X.

    In a statement to the AP, Bluesky reported that it now has 10.6 million users and continues to see strong growth in Brazil. Bluesky has appointed a legal representative in the South American country.

    “Never get back with your eX,” Paul Frazee, a developer at Bluesky, wrote on the platform on Tuesday.

    X is returning to Brazil weaker than it was before the ban, said Nemer, noting that X is now worth less than a fifth of what it was when Musk bought Twitter. The platform has lost a lot of users, especially in Brazil, he said.

    Brazil was not the first country to ban X — but such a drastic step has generally been limited to authoritarian regimes. The platform and its former incarnation, Twitter, have been banned in Russia, China, Iran, Myanmar, North Korea, Venezuela and Turkmenistan. Other countries, such as Pakistan, Turkey and Egypt, have also temporarily suspended X before, usually to quell dissent and unrest.

    X’s dustup with Brazil has some parallels to the company’s dealings with the Indian government three years ago, back when it was still called Twitter and before Musk purchased it for $44 billion. In 2021, India threatened to arrest employees of Twitter (as well as Meta’s Facebook and WhatsApp), for not complying with the government’s requests to take down posts related to farmers’ protests that rocked the country.

    Musk’s decision to reverse course in Brazil after publicly criticizing de Moraes isn’t surprising, said Matteo Ceurvels, research firm Emarketer’s analyst for Latin America and Spain.

    “The move was pragmatic, likely driven by the economic consequences of losing access to millions of users in its third-largest market worldwide, along with the millions of dollars in associated advertising revenue,” Ceurvels said.

    “Although X may not be a top priority for most advertisers in Brazil, the platform needs them more than they need it,” he said.

    ___

    Ortutay reported from San Francisco

  • Harris calls out Trump for hurricane misinformation

    GREENVILLE, N.C. — Kamala Harris used an appearance Sunday before a largely Black church audience in battleground North Carolina to call out Donald Trump for spreading misinformation about the government’s hurricane response. President Joe Biden visited Florida for the second time this month to survey storm damage.

    Harris, the Democratic presidential nominee, did not speak Trump’s name, but he is most prominent among those promoting false claims about the Biden administration’s response to Hurricanes Milton and Helene. Florida was in the path of both storms, with Helene also hitting North Carolina, South Carolina and Georgia, while Milton headed for the open Atlantic.

    The vice president spoke at the Koinonia Christian Center about the “heroes” all around who are helping residents without regard to political affiliation.

    “Yet, church, there are some who are not acting in the spirit of community, and I am speaking of these who have been literally not telling the truth, lying about people who are working hard to help the folks in need, spreading disinformation when the truth and facts are required,” Harris said.

    “The problem with this, beyond the obvious, is it’s making it harder, then, to get people life-saving information if they’re led to believe they cannot trust,” she said. “And that’s the pain of it all, which is the idea that those who are in need have somehow been convinced that the forces are working against them in a way that they would not seek aid.”

    They are trying “to gain some advantage for themselves, to play politics with other people’s heartbreak, and it is unconscionable,” Harris said. “Now is not a time to incite fear. It is not right to make people feel alone.”

    “That is not what leaders, as we know, do in crisis,” she said.

    Trump made a series of false claims after Helene struck in late September, including saying that Washington was intentionally withholding aid from Republicans in need across the Southeast. The former president falsely claimed the Federal Emergency Management Agency had run out of money to help them because it was spent on programs to help immigrants who are in the United States illegally.

    He pressed that argument on Fox News’ “Sunday Morning Futures,” saying the White House response was “absolutely terrible” and repeating the claim about FEMA’s dollars. “It came out from there and everybody knew it,” Trump said in an interview that was taped Thursday and broadcast Sunday.

    Before Harris spoke in church, Biden was surveying hurricane damage on a helicopter flight between Tampa and St. Pete Beach on the Gulf Coast. From the air, he saw the torn-up roof of Tropicana Field, home of the Tampa Bay Rays baseball team. On the ground, the president saw waterlogged household furnishings piled up outside flooded homes. Some houses had collapsed.

    The president said he was thankful that Milton was not as bad as officials had anticipated, but that it still was a “cataclysmic” event for many people, including those who lost irreplaceable personal items. He also praised the first responders, some of whom had come from Canada.

    “It’s in moments like this we come together to take care of each other, not as Democrats or Republicans, but as Americans,” Biden said after he was briefed by federal, state and local officials, and met some residents and responders. “We are one United States, one United States.”

    Harris opened her second day in North Carolina by speaking at the Christian center in Greenville, part of her campaign’s “Souls to the Polls” effort to help turn out Black churchgoers before the Nov. 5 election.

    The vice president later spoke to roughly 7,000 supporters at a Sunday afternoon rally at East Carolina University’s arena, suggesting that Trump’s team has stopped him from releasing medical records or debating her again because they might be “afraid that people will see that he is too weak and unstable.”

    The North Carolina appearances mark the start of a week that will find Harris working to shore up support among Black voters, a key constituency for the Democratic Party. She is counting on Black turnout in competitive states such as North Carolina to help her defeat Trump, who has focused on energizing men of all races and has tried to make inroads with Black men in particular.

    On Tuesday, she will appear in Detroit for a live conversation with Charlamagne tha God, a prominent Black media personality.

    Black registered voters have overwhelmingly favorable views of Harris and negative views of Trump despite his attempts to appeal to nonwhite voters, according to a recent poll from The Associated Press-NORC Center for Public Affairs Research. But the poll also shows that many Black voters aren’t sure whether Harris would improve the country overall or better their own lives.

    In Florida, where Biden had visited the Big Bend region on Oct. 3 after Helene struck, the president announced $612 million for six Department of Energy projects in hurricane-affected areas to bolster the region’s electric grid. The money includes $47 million for Gainesville Regional Utilities and $47 million for Switched Source to partner with Florida Power and Light.

    With a little more than three weeks before the election, the hurricanes have added another dimension to the closely contested presidential race.

    Trump has said the Biden administration’s storm response was lacking, particularly in western North Carolina after Helene. Biden and Harris have hammered Trump for promoting falsehoods about the federal response.

    Biden said Trump was “not singularly” to blame for the spread of misinformation but that he has the “biggest mouth.”

    “They blame me for everything. It’s OK,” Trump told Fox.

    Biden has pressed for Congress to act quickly to make sure the Small Business Administration and FEMA have the money they need to get through hurricane season, which ends Nov. 30 in the Atlantic. He said Friday that Milton alone had caused an estimated $50 billion in damages.

    Homeland Security Secretary Alejandro Mayorkas, whose department oversees FEMA, said the hurricane season is far from over and there are other natural disasters for which the agency must ready.

    “We don’t know what’s coming tomorrow, whether it’s another hurricane, a tornado, a fire, an earthquake. We have to be ready. And it is not good government to be dependent on a day-to-day existence as opposed to appropriate planning,” Mayorkas said on CBS’ “Face the Nation.”

    House Speaker Mike Johnson, R-La., said there was plenty of time and that lawmakers would address the funding issue when Congress comes back into session after the Nov. 5 election.

    “We’ll provide the additional resources,” Johnson told CBS.

    Milton made landfall in Florida as a Category 3 storm on Wednesday evening. At least 10 people were killed and hundreds of thousands of residents remain without power. Officials say the toll could have been worse if not for widespread evacuations.

    By JOSH BOAK, AAMER MADHANI and DARLENE SUPERVILLE – Associated Press

  • Is Reddit the Future of Crisis Communications?

    Let’s say you represent the most powerful government on Earth and would like to convey some information to the citizens of your country in a moment of crisis. We’re talking pretty basic stuff: How to apply for federal assistance after a series of massive natural disasters, the general state of the recovery effort, things like that. You’ve got a lot of options, more than ever in the history of mankind. You can issue a press release. You can call a press conference. You can have the president give a little speech or send surrogates out for interviews. You can communicate with state and local authorities who will use the channels at their disposal. You can post anything you want on all sorts of social media platforms and reach out to influencers, theoretically accessing a near-infinite audience.

    This will all help, but it won’t necessarily work. Nobody pays attention to the channels you control. Traditional media is fragmented and its audiences are diminished and hyper-polarized. Lots of people are watching TV, but not the TV you need them to watch; everyone’s looking at their phones, but they’re not receiving your messages. Your posts on Facebook, which briefly assumed a role in basic civic communication across the country, are filtered through recommendation algorithms and submerged in slop. Your announcements on Instagram have no way to spread and people aren’t looking for them, anyway. Your posts on TikTok feel like a joke and mostly get distributed to random people in other states. Your posts on X, which used to be at least marginally helpful as a sort of straightforward institutional newswire, are barely visible and overwhelmed by conspiracy theories. It’s a little paradoxical, and if you’re in the business of communications, probably sort of discouraging: It isn’t just your propaganda machine that’s broken, it’s your basic means of reaching people in any way at all. It’s also darkly funny: Everyone can talk to everyone and now suddenly nobody can hear anyone.

    Still, you’d like to get that FEMA phone number out there and clarify a few things. A weary social media manager pipes up from the corner of the office: I guess… we could post on Reddit? Reports the Verge:

    Reddit isn’t the first place you’d think to see official statements and news coming from the federal government, but today, The White House is on the site making posts… The Biden administration’s “whitehouse” account has new posts in subreddits r/NorthCarolina and r/Georgia to discuss the federal response to Hurricane Helene and Hurricane Milton.

    For years, the typical story about governments, politicians, or public figures showing up on Reddit focused on the unlikeliness of that match. Reddit was rowdy, weird, or nerdy, and it was sort of interesting or fun or strange for people with big platforms to show up there. In recent years, Reddit has grown from a large cluster of online communities into a sort of last-refuge, semi-protected habitat for online communities in general — that is, spaces where actual people gather to discuss or find information about certain topics or interests, organized and moderated by other actual people. Now, nobody is deigning to post on Reddit. They’re just hoping it might add to their audience a bit. It helps that /r/NorthCarolina is the sort of place where you might be able to post something like “hey, visit DisasterAssistance.gov” without getting drowned out by posts claiming that FEMA is going to seize your property.

    Reddit is also coming into play for another (comparatively inconsequential) type of crisis: Brand meltdowns. At her Substack Link in Bio, Rachel Karten tells the story of KeithFromSonos, the Sonos employee who kept dutifully and patiently posting on the brand’s unofficial subreddit after a disastrous app update nearly tanked the company. The subreddit has a couple hundred thousand members, many of whom are pretty mad at the company. But they don’t hate Keith, who has been posting there for a while, and some even feel sorry for him. As a result, he can post straightforward updates without getting yelled at, unlike pretty much anywhere else, which got the attention of the company’s CEO — who then ended up heading over to Reddit, too.

    To be clear, Reddit probably won’t help save Sonos from its angry customers, much less have any measurable effect on the response to hurricanes Helene and Milton — these are, in the grand scheme of things, fairly small communities full of people who otherwise don’t have too much trouble finding information online. And for its part, Reddit hasn’t grown without consequence. Community moderators have grown weary of volunteering for an increasingly profit-minded (and now public) company that takes them for granted, and the site’s size and new visibility on Google — the search company also sees Reddit as a rare source of authentic human activity, albeit for harvesting purposes — has caused an influx of spammers and bots looking to get a piece, all but setting a timer on Reddit’s eventual ruin. But in the meantime, Reddit is serving as an online information oasis of last resort, a channel through which extremely basic mass communication is still possible, at least for now.


    John Herrman

  • Border protection head debunks false claims about FEMA funds

    The federal government says it has been dealing with an unprecedented number of rumors surrounding the recent hurricanes, Helene and Milton. CBS News immigration and politics reporter Camilo Montoya-Galvez speaks with the head of U.S. Customs and Border Protection about one of those false claims. Then, CBS News national security contributor Sam Vinograd joins with further analysis.


  • Beware the

    As Florida and the Southeast recover from two major hurricanes, conspiracy theories and falsehoods have surged to levels that the head of the Federal Emergency Management Agency says are unprecedented. First responders, local officials and nonprofit organizations in the storm zones have had to dedicate time and resources to debunking false claims.

    Officials say these falsehoods have real-world consequences, including preventing victims from evacuating or seeking help, distracting from recovery efforts and making the job of aid workers harder. 

    And while the scale of misinformation following Helene and Milton took some by surprise, the claims themselves follow a familiar pattern.  Similar misinformation has followed other hurricanes and natural disasters, including exaggerated crime reports, fake or misleading visuals and outright scams. Researchers say understanding the misinformation is crucial to mitigating its spread and minimizing its impact. 

    Here is a look at some of the recurring themes to watch out for:

    Unverified crime scares 

    In the aftermath of Helene, rumors spread online that people were slashing tires of trucks transporting aid to storm victims, a claim that local police said is not true. 

    Unverified crime reports have followed storms for decades. After Hurricane Katrina devastated parts of Louisiana in 2005, officials later said claims of looting, murder, and rape, which were repeated by some news media and officials, were either exaggerated or false.

    At one point, the Mayor of New Orleans reported multiple murders at the Louisiana Superdome, where thousands sheltered. The National Guard later said there were no homicides at the stadium. 

    A 2018 report by the Department of Homeland Security found that false claims often spread after disasters because verified information is slow to emerge, fueling rumor and speculation.

    Conspiracies

    Jennie King, director of Climate Disinformation Research and Policy at the Institute for Strategic Dialogue, said conspiracies blaming the government for hurricanes have also become routine. Researchers at the institute found this claim and others debunked by FEMA generated more than 160 million views online after Helene.

    “If it is producing a visceral emotion, negative or positive, that should cause you to pause for breath,” King said. “Do a little bit of wider reading. And if you do find out that those claims are false or unsubstantiated, don’t give them additional oxygen.” 

    Fake or misleading photos or videos

    Edited or misleading videos and images have become a regular feature of major weather events. As Hurricane Dorian approached Florida in 2019, old images circulated online that pushed false claims of looting. 

    After Sandy hit New York and New Jersey in 2012, researchers identified over 10,000 unique posts on Twitter, now called X, that contained fake images.

    One of these images was the now-famous “street shark” — an edited image of a shark swimming along a highway — which has reappeared during multiple hurricanes since at least 2011.

    Images of the “street shark” have circulated online after hurricanes since at least 2011.

    Fact checkers also regularly debunk images and videos of landmarks and transportation hubs submerged in water, which can mislead the public during natural disasters. 

    A digitally altered image of planes under flood waters, created by an artist in an effort to warn about the potential impact of climate change, was falsely described as showing the effects of hurricanes in 2017 and 2018. 

    And after Hurricane Milton hit Florida, AI-generated images appearing to show flooding at Disney World spread online on multiple platforms.

    AI tools have made it easier for people to publish misleading or completely fabricated visuals, according to Anupam Joshi, who co-authored a study on misleading visuals after Hurricane Sandy. 

    “You need to take everything you see online with a very healthy grain of salt,” said Joshi, the director of the University of Maryland, Baltimore County Cybersecurity Institute.

    Scams

    Scammers often target victims of hurricanes and those wishing to help them. After Hurricane Katrina, scammers impersonated charities including the Red Cross, which was one of the reasons the U.S. Department of Justice established the National Center for Disaster Fraud. 

    Jun Zhuang, a researcher who studied misinformation online following Hurricanes Harvey and Irma, told CBS News that scammers solicit money from victims through fraudulent links.

    “‘Hey, if you register to this link, you will get $200.’ Or the other way around, ‘Hey, please donate through this link,’ but you never know where your money is going to,” said Zhuang.

    Historically, bad actors also target storm victims with offers of assistance. After Hurricane Sandy, fake “contractors” claimed that FEMA would reimburse survivors for damage assessments and rapid repairs to their homes. One of these scammers defrauded 30 people of about $1.9 million.

    To avoid falling victim to scams, FEMA has advised people to be wary of unsolicited messages and to verify charities before donating. 

  • (Media News) Fox News Host Neil Cavuto Criticizes Trump for Spreading False Claims on Hurricane Relief

    Fox News anchor Neil Cavuto criticized former President Donald Trump for spreading false claims about hurricane relief distribution in recent weeks. During a conversation with Transportation Secretary Pete Buttigieg on Thursday, Cavuto addressed misinformation regarding federal disaster assistance in states like North Carolina and Florida.

    “We’ve also got a lot of misinformation, don’t we?” Cavuto said. “We get people who say in North Carolina that if you’re a Republican, you’re not going to get help, if you’re a Democrat, you’ll get help. I would imagine that does a huge disservice to people working together and scares the bejesus out of others when they believe it.”

    Buttigieg agreed, adding that false claims about aid can prevent people from seeking help. “You know, whenever somebody hears that, they believe it, and then they don’t apply for more aid, which they could absolutely qualify for,” he said.

    Trump recently faced backlash after suggesting that federal officials were withholding aid from Republican areas affected by hurricanes and accusing Vice President Kamala Harris of using FEMA funds for housing illegal migrants. Cavuto directly called out Trump for his remarks: “Donald Trump said that about North Carolina… But that kind of misinformation gets out there, and whether a politician or someone perpetrates it you think is someone of note and authority, it is wrong. And it cannot be tolerated.”


    Media Bias Fact Check

  • No, the government is not controlling the weather. “It’s so stupid, it’s got to stop,” Biden says

    President Biden on Wednesday addressed what he called “stupid” claims that the federal government can control the weather, as the false claim was promoted by some politicians and high-profile figures. 

    “Marjorie Taylor Greene, the congresswoman from Georgia, is now saying the federal government is literally controlling the weather, we’re controlling the weather. It’s beyond ridiculous. It’s so stupid, it’s got to stop,” Mr. Biden said in his remarks from the White House. He also pledged federal support for hurricane recovery efforts.

    The Federal Emergency Management Agency created a new “Hurricane Rumor Response” page to combat misinformation. The White House launched an official Reddit account, with one of its posts focused on debunking hurricane misinformation. And many local officials have taken to social media to push back on harmful misinformation.

    Rep. Chuck Edwards, a North Carolina Republican, sent a letter to his constituents in the wake of Hurricane Helene urging them to beware of “untrustworthy sources trying to spark chaos by sharing hoaxes, conspiracy theories, and hearsay about hurricane response efforts across our mountains.” 

    Claims that the government was controlling Hurricane Milton spread widely on social media platforms including X, TikTok and Facebook just days after similar false claims spread during Hurricane Helene. One post on X with more than 100,000 views claimed Hurricane Milton is a “modified and manipulated” storm being used as a “weapon.” 

    “Yes they can control the weather,” Congresswoman Marjorie Taylor Greene posted to X on Oct. 3. “It’s ridiculous for anyone to lie and say it can’t be done.” Greene also shared a 2013 CBS News segment in which a physics professor discussed lab experiments investigating the potential use of lasers to affect the weather. 

    Several meteorology experts told CBS News it’s not possible to create or control hurricanes, with one expert calling the claim “utter nonsense.” 

    Some social media users also reference geoengineering, which NASA describes as proposed schemes “to reverse or limit climate change by intentional, large-scale manipulation of the Earth’s climate.” But “geoengineering could not create or control hurricanes,” said Joshua Horton, a senior program fellow studying solar geoengineering at Harvard University. 

    “It doesn’t currently exist, but if it did, geoengineering would be much too imprecise to control weather or weather events like hurricanes,” Horton said. 

    Some social media users have also pointed to cloud seeding as evidence that the government can control Hurricane Milton. Cloud seeding is a type of weather modification that can improve a cloud’s ability to create rain or snow, according to the Nevada-based Desert Research Institute, which has its own cloud seeding program. Cloud seeding has existed since the 1940s, and dozens of countries have such programs.

    Hurricane modification through cloud seeding was explored in the years between 1962 and 1983, but the project ended after seeding was found to be ineffective compared to the natural forces of the hurricane, according to the National Oceanic and Atmospheric Administration.

    Charles Konrad, a professor at University of North Carolina at Chapel Hill’s Department of Geography and Environment, said it’s important to acknowledge that hurricane modification was tested in the past but that it was ineffective. 

    “They tried to modify hurricanes and at the end of it, they realized that they couldn’t,” he said. 

    Konrad said hurricanes are “big atmospheric entities” that require an “incredible amount of energy” — more energy than humans could harness to weaken or direct the storm.

    Hugh Willoughby, a research professor at Florida International University whose work focuses on hurricanes, said he’s not aware of any U.S. programs to revive the hurricane modification project. He said the idea that the government can control a hurricane is “utter nonsense.” 

    Konrad said the National Hurricane Center is a trusted source for anyone seeking verified information about hurricanes. “We have the best and the brightest, very specialized tropical meteorologists who have given their careers to understanding these things,” he said. 



  • How Pete Buttigieg Stopped Elon Musk’s Childish Antics Like an Adult

    Transportation Secretary Pete Buttigieg had to speak to Elon Musk on the phone to get the tech billionaire and Donald Trump superfan to stop spouting hurricane relief misinformation.

    Buttigieg and Musk initially clashed on X last Friday, when Musk falsely claimed that the Federal Aviation Administration planned to “shut down” airspace over hard-hit states. They later spoke on the phone, Buttigieg told MSNBC’s Jen Psaki, after Buttigieg corrected Musk publicly and invited the noted conspiracist to call him with questions.

    “I’ve been amazed at how a little kernel of some detail gets blown up on the internet into something that it’s not,” Buttigieg told Psaki, adding that the torrent of partisan misinformation about the response to Hurricane Helene had “a real cost for people on the ground.”

    Musk and Buttigieg’s specific dust-up involved temporary flight restrictions over North Carolina, where Musk’s satellite internet company, Starlink, has been establishing emergency internet service. “Hundreds” of pilots in the region had been unable to land because the FAA and the Federal Emergency Management Agency blocked their flights, Musk posted.

    But the reality, as Buttigieg and FAA spokesmen have since explained, is less diabolical. The FAA never closed down airspace over North Carolina. In some areas, however, at the request of local law enforcement, the FAA and state aviation agencies have required what Buttigieg called a “higher level of coordination” between pilots and local airports to prevent in-air collisions. Such requirements are standard in the aftermath of major natural disasters, when nearby air space can become dangerously crowded.

    That explanation appeared to work for Musk, who on Friday afternoon posted a message thanking Buttigieg for the call and helping to “simplify” FAA regulations. But he still hasn’t walked back any of his false or misleading posts, which also claimed that federal aid workers “seized” emergency aid shipments and exhausted their agency’s budget “ferrying illegals” to the U.S.

    Experts who research emergency management have grieved the vibe shift on X, which once served as a useful source of real-time disaster information. Today, it’s ground zero for a wider right-wing disinformation campaign about the federal government’s response to Hurricane Helene.

    “You look at what’s going on online [and] a lot of it seems to be driven by politics,” Buttigieg said. “And that is actively harming and disrupting the process of getting back to normal for so many people whose lives were upended by this awful storm.”

    Caitlin Dewey

  • For small cities across Alabama with Haitian populations, Springfield is a cautionary tale

    ENTERPRISE, Ala. — The transition from the bustling Port-au-Prince, Haiti, to a small Alabama city on the southernmost tip of the Appalachian mountain range was challenging for Sarah Jacques.

    But, over the course of a year, the 22-year-old got used to the quiet and settled in. Jacques got a job at a manufacturing plant that makes car seats, found a Creole-language church and came to appreciate the ease and security of life in Albertville after the political turmoil and violence that’s plagued her home country.

    Recently, though, as Republican presidential nominee Donald Trump and his running mate began promoting debunked misinformation about Haitian migrants in Springfield, Ohio, causing crime and “eating pets,” Jacques said there have been new, unforeseen challenges.

    “When I first got here, people would wave at us, say hello to us, but now it’s not the same,” Jacques said in Creole through a translator. “When people see you, they kind of look at you like they’re very quiet with you or afraid of you.”

    Amid this mounting tension, a bipartisan group of local religious leaders, law enforcement officials and residents across Alabama see the fallout in Springfield as a cautionary tale — and have been taking steps to help integrate the state’s Haitian population in the small cities where they live.

    As political turmoil and violence intensify in Haiti, Haitian migrants have embraced a program established by President Joe Biden in 2023 that allows the U.S. to accept up to 30,000 people a month from Cuba, Haiti, Nicaragua and Venezuela for two years and offers work authorization. The Biden administration recently announced the program could allow an estimated 300,000 Haitians to remain in the U.S. at least through February 2026.

    In 2023, there were 2,370 people of Haitian ancestry in Alabama, according to census data. There is no official count of the increase in the Haitian population in Alabama since the program was implemented.

    The immigration debate is not new to Albertville, where migrant populations have been growing for three decades, said Robin Lathan, executive assistant to the Albertville mayor. Lathan said the city doesn’t track how many Haitians have moved to the city in recent years but said “it seems there has been an increase over the last year, in particular.”

    A representative from Albertville’s school system said that, in the last school year, 34% of the district’s 5,800 students were learning English as a second language — compared to only 17% in 2017.

    In August, weeks before Springfield made national headlines, a Facebook post of men getting off a bus to work at a poultry plant led some residents to speculate that the plant was hiring people living in the country illegally.

    Representatives for the poultry plant said in an email to The Associated Press that all its employees are legally allowed to work in the U.S.

    The uproar culminated in a public meeting where some residents sought clarity about the federal program that allowed Haitians to work in Alabama legally, while others called for landlords to “cut off the housing” for Haitians and suggested that the migrants have a “smell to them,” according to audio recordings.

    To Unique Dunson, a 27-year-old lifelong Albertville resident and community activist, these sentiments felt familiar.

    “Every time Albertville gets a new influx of people who are not white, there seems to be a problem,” Dunson said.

    Dunson runs a store offering free supplies to the community. After tensions boiled over across the country, she put up multiple billboards across town that read, in English, Spanish and Creole, “welcome neighbor glad you came.”

    Dunson said the billboards are a way to “push back” against the notion that migrants are unwelcome.

    When Pastor John Pierre-Charles first arrived in Albertville in 2006, he said the only other Haitians he knew in the area were his family members.

    In 14 years of operation, the congregation at his Creole-language church, Eglise Porte Etroite, has gone from just seven members in 2010 to approximately 300 congregants. He is now annexing classrooms to the church building for English language classes and drivers’ education classes, as well as a podcast studio to accommodate the burgeoning community.

    Still, Pierre-Charles describes the last months as “the worst period” for the Haitian community in all his time in Albertville.

    “I can see some people in Albertville who are really scared right now because they don’t know what’s going to happen,” said Pierre-Charles. “Some are scared because they think they may be sent back to Haiti. But some of them are scared because they don’t know how people are going to react to them.”

    After the fallout from the initial public meetings in August, Pierre-Charles sent a letter to city leadership calling for more resources for housing and food to ensure his growing community could safely acclimate, both economically and culturally.

    “That’s what I’m trying to do, to be a bridge,” said Pierre-Charles.

    He is not working alone.

    In August, Gerilynn Hanson, 54, helped organize the initial meetings in Albertville because she said many residents had legitimate questions about how migration was affecting the city.

    Now, Hanson said she is adjusting her strategy, “focusing on the human level.”

    In September, Hanson, an electrical contractor and Trump supporter, formed a nonprofit with Pierre-Charles and other Haitian community leaders to offer more stable housing and English language classes to meet the growing demand.

    “We can look at (Springfield) and become them in a year,” Hanson said, referring to the animosity that’s taken hold in the Ohio city, which has been inundated with threats. “We can sit back and do nothing and let it unfold under our eyes. Or we can try to counteract some of that and make it to where everyone is productive and can speak to each other.”

    Similar debates have proliferated in public meetings across the state — even in places where Haitian residents make up less than 0.5% of the entire population.

    In Sylacauga, videos from numerous public meetings show residents questioning the impact of the alleged rise in Haitian migrants. Officials said there are only 60 Haitian migrants in the town of about 12,000 people southeast of Birmingham.

    In Enterprise, not far from the Alabama-Florida border, cars packed the parking lot of Open Door Baptist Church in September for an event that promised answers about how the growing Haitian population was affecting the city.

    After the event, James Wright, the chief of the Ma-Chis Lower Creek Indian Tribe, was sympathetic to the reasons Haitians were fleeing their home but said he worried migrants would affect Enterprise’s local “political culture” and “community values.”

    Other attendees echoed fears and misinformation about Haitian migrants being “lawless” and “dangerous.”

    But some came to try to ease mounting anxieties about the migrant community.

    Enterprise police Chief Michael Moore said he shared statistics from his department that show no measurable increase in crimes as the Haitian population has grown.

    “I think there was quite a few people there that were more concerned about the fearmongering than the migrants,” Moore told the AP.

    Moore said his department had received reports of Haitian migrants living in houses that violated city code, but when he reached out to the people in question, the issues were quickly resolved. Since then, his department hasn’t heard any credible complaints about crimes caused by migrants.

    “I completely understand that some people don’t like what I say because it doesn’t fit their own personal thought process,” said Moore. “But those are the facts.”

    ___

    Riddle is a corps member for The Associated Press/Report for America Statehouse News Initiative. Report for America is a nonprofit national service program that places journalists in local newsrooms to report on undercovered issues.


  • In global game of influence, China turns to a cheap and effective tool: fake news

    WASHINGTON — When veteran U.S. diplomat Kurt Campbell traveled to the Solomon Islands to counter Beijing’s influence in the South Pacific country, he quickly saw just how far China would go to spread its message.

    The Biden administration’s Asia czar woke up one morning in 2022 to a long article in the local press about the U.S. running chemical and biological labs in Ukraine, a claim that Washington calls an outright lie. Started by Russia, the false and incendiary claim was vigorously amplified by China’s vast overseas propaganda apparatus.

    It was another example of “clearly effective Russian and Chinese disinformation,” Campbell told the Senate Foreign Relations Committee in July.

    Two years later, the claim still reverberates online, demonstrating China’s sprawling effort to reshape global perceptions. The campaign, which costs many billions of dollars a year, is becoming ever more sophisticated thanks to artificial intelligence. China’s operations have caught the attention of intelligence analysts and policymakers in Washington, who vow to combat any actions that could influence the November election or undermine American interests.

    The key tactic: networks of websites purporting to be legitimate news outlets, delivering pro-China coverage that often parallels official statements and positions from Beijing.

    Shannon Van Sant, an adviser to the Committee for Freedom in Hong Kong Foundation, tracked a network of dozens of sites that posed as news organizations. One site mimicked The New York Times, using a similar font and design in what she called an attempt at legitimacy. The site carried strongly pro-Chinese messages.

    When Van Sant researched the site’s reporters, she found no information. Their names didn’t belong to any known journalists working in China, and their photos bore telltale signs of being created with AI.

    “Manipulation of the media is ultimately a manipulation of readers and the audience, and this is damaging to democracy and society,” Van Sant said.

    Liu Pengyu, spokesman for the Chinese Embassy in the U.S., said allegations that China uses news websites and social media to spread pro-Beijing information and influence public opinion in the U.S. “are full of malicious speculations against China, which China firmly opposes.”

    In addition to its state media, Beijing has turned to foreign players — real or not — to relay messages and lend credibility to narratives favoring the Communist Party, said Xiao Qiang, a research scientist at the School of Information at the University of California, Berkeley. Xiao also is editor-in-chief of China Digital Times, a bilingual news website that aggregates information from and about China.

    Beijing’s methods are wide-ranging and links to the government are often difficult to prove, Xiao said. But whether it’s journalists with American-sounding names or an Indian influencer, the consistently pro-Beijing messages give them away.

    “The implicit message is the same — that the Chinese Communist Party works for its people,” Xiao said.

    Analysts at the cybersecurity firm Logically identified 1,200 websites that had carried Russian or Chinese state media stories. The sites often target specific audiences and have names that sound like traditional news organizations or defunct newspapers.

    Unlike Russia or Iran, which have displayed clear preferences in the U.S. presidential campaign, Beijing is more cautious and focused on spreading positive content about China.

    While the sites aren’t owned by China, they run Chinese content. When Logically looked at content specifically about the U.S. election, 20% could be traced back to Chinese or Russian state media.

    “There’s a decent likelihood that these articles could influence U.S. audiences without them even knowing where it comes from,” said Alex Nelson, Logically’s senior manager for strategy and analysis.

    According to the Gallup World Poll, more of the countries surveyed view the U.S. positively than view China positively, but the share of countries where views of both the U.S. and China are negative is higher than it was 15 years ago, a sign the U.S. does not appear to be gaining ground on China.

    Some U.S. officials want to increase spending to even the playing field. The House of Representatives this month approved a bill that would authorize $325 million annually through 2027 to counter China’s global influence, including its disinformation campaigns. The measure still needs Senate approval.

    “We are in a global competition for influence with China, and if you want to win it, then you cannot do it on a middle-power budget,” said Rep. Gregory Meeks, a Democrat from New York.

    Chinese President Xi Jinping has demanded a systematic buildup of Chinese narratives that would give his country a global voice “commensurate with” its international stature.

    Beijing has invested in state media such as the Xinhua news agency and China Central Television to convey its messages to global audiences in various languages and platforms. Media groups at the local level are creating “international communication centers” to build an overseas presence with websites, news channels and social media accounts.

    Beijing also has struck media partnerships worldwide, and the article Campbell read in the Solomon Islands is likely a result of those.

    China’s outreach is tied to the global race for economic dominance in electric vehicles, computer chips, AI and quantum computing, said Jaret Riddick, a senior fellow at Georgetown University’s Center for Security and Emerging Technology.

    “The countries that lead on emerging technologies will be the countries that have a great advantage going forward,” Riddick said.

    To tell its story, Beijing has not shied away from using fake personas. A 2023 State Department report detailed the case of a published writer named Yi Fan, originally described as a Chinese foreign ministry analyst. Yi morphed into a journalist, then became an independent analyst.

    Yi’s details changed, but the message did not. Through published commentaries and writings, Yi trumpeted close ties between China and Africa, praised Beijing’s approach to environmental sustainability and argued that China must counter distorted Western narratives.

    Then there was Wilson Edwards, a supposed Swiss virologist quoted in Chinese media as a COVID-19 expert who criticized the U.S. response. But Swiss officials found no evidence he existed.

    “If you exist, we would like to meet you!” the Swiss Embassy in Beijing wrote on social media.

    ___

    AP writer Amelia Thomson-DeVeaux contributed from Washington.


  • Breathlessness. Unformed facial features. Manipulative. Here’s how to spot a political deepfake


    You’ve probably seen the word “deepfakes” in the news lately, but are you confident you would be able to spot the difference between real and artificial intelligence-generated content?

    During the summer, a video of Vice President Kamala Harris saying that she was “the ultimate diversity hire” and “knew nothing about running the country” circulated on social media. Elon Musk, the owner of X, retweeted it. This was, in fact, a deepfake video.

    By posting it, Musk seemingly ignored X’s own misinformation policies and shared it with his 193 million followers.

    Although the Federal Communications Commission announced in February that AI-generated audio clips in robocalls are illegal, deepfakes on social media and in campaign advertisements are not yet subject to a federal ban.

    A growing number of state legislatures have begun submitting bills to regulate deepfakes as concerns about the spread of misinformation and explicit content heighten on both sides of the aisle.

    In September, with less than 50 days before the election, California Gov. Gavin Newsom signed three bills that target deepfakes directly — one of which takes effect immediately.

    AB 2839 bans individuals and groups “from knowingly distributing an advertisement or other election material containing deceptive AI-generated or manipulated content.”

    This ban would take effect 120 days before an election and extend 60 days after it, aiming to reduce content that may spread misinformation as votes are being counted and certified.

    “Signing AB 2839 into law is a significant step in continuing to protect the integrity of our democratic process. With fewer than 50 days until the general election, there is an urgent need to protect against misleading, digitally altered content that can interfere with the election,” said Gail Pellerin, the chair of the Assembly Elections Committee.

    According to Public Citizen, 25 states have now either signed a bill into law that addresses political deepfakes or have a bill that is awaiting the governor’s signature.

    Do you know how to spot a deepfake?

    According to cyber news reporter and cybersecurity expert Kerry Tomlinson, “a deepfake is a computer-created image or voice or video of a person, either a person who doesn’t exist but seems real, or a person who does exist, making them do or say something they never actually did or said.”

    Tomlinson says there are several giveaways to identify a deepfake.

    • Objects and parts of the face, such as earrings, teeth or glasses, may not be fully formed.
    • Pay attention to the breathing: a deepfake speaker may take no breaths while speaking.
    • Ask yourself: Is the message potentially harmful or manipulative?
    • Can the information be verified?

    Ultimately, Tomlinson encourages people to “learn about how attackers are using deepfakes. Learn about how politicians and political parties are using deepfakes. Read about it. It’s as simple as that.”


  • Russian disinformation slams Paris and amplifies Khelif debate to undermine the Olympics

    WASHINGTON (AP) — The actor in the viral music video denouncing the 2024 Olympics looks a lot like French President Emmanuel Macron. The images of rats, trash and the sewage, however, were dreamed up by artificial intelligence.

    Portraying Paris as a crime-ridden cesspool, the video mocking the Games spread quickly on social media platforms like YouTube and X, helped on its way by 30,000 social media bots linked to a notorious Russian disinformation group that has set its sights on France before. Within days, the video was available in 13 languages, thanks to quick translation by AI.

    “Paris, Paris, 1-2-3, go to Seine and make a pee,” taunts an AI-enhanced singer as the faux Macron actor dances in the background, seemingly a reference to water quality concerns in the Seine River where some competitions are taking place.

    Moscow is making its presence felt during the Paris Games, with groups linked to Russia’s government using online disinformation and state propaganda to spread incendiary claims and attack the host country — showing how global events like the Olympics are now high-profile targets for online disinformation and propaganda.

    Over the weekend, disinformation networks linked to the Kremlin seized on a divide over Algerian boxer Imane Khelif, who has faced unsubstantiated questions about her gender. Baseless claims that she is a man or transgender surfaced after a controversial boxing association with Russian ties said she failed an opaque eligibility test before last year’s world boxing championships.

    Russian networks amplified the debate, which quickly became a trending topic online. British news outlets, author J.K. Rowling and right-wing politicians like Donald Trump added to the deluge. At its height late last week, X users were posting about the boxer tens of thousands of times per hour, according to an analysis by PeakMetrics, a cyber firm that tracks online narratives.

    The boxing group at the root of the claims, the International Boxing Association, has been permanently barred from the Olympics; its president is a Russian ally of President Vladimir Putin, and its biggest sponsor is the state energy company Gazprom. Questions also have surfaced about its decision to disqualify Khelif last year after she had beaten a Russian boxer.

    The decision to approve only a small number of Russian athletes to compete as neutrals, and to bar them from team sports after the invasion of Ukraine, all but guaranteed the Kremlin’s response, said Gordon Crovitz, co-founder of NewsGuard, a firm that analyzes online misinformation. NewsGuard has tracked dozens of examples of disinformation targeting the Paris Games, including the fake music video.

    Russia’s disinformation campaign targeting the Olympics stands out for its technical skill, Crovitz said.

    “What’s different now is that they are perhaps the most advanced users of generative AI models for malign purposes: fake videos, fake music, fake websites,” he said.

    AI can be used to create lifelike images, audio and video, rapidly translate text and generate culturally specific content that sounds and reads like it was created by a human. The once labor-intensive work of creating fake social media accounts or websites and writing conversational posts can now be done quickly and cheaply.

    Another video amplified by accounts based in Russia in recent weeks claimed the CIA and U.S. State Department warned Americans not to use the Paris metro. No such warning was issued.

    Russian state media has trumpeted some of the same false and misleading content. Instead of covering the athletic competitions, much of the coverage of the Olympics has focused on crime, immigration, litter and pollution.

    One article in the state-run Sputnik news service summed it up: “These Paris ‘games’ sure are going swimmingly. Here’s an idea. Stop awarding the Olympics to the decadent, rotting west.”

    Russia has used propaganda to disparage past Olympics, as it did when the then-Soviet Union boycotted the 1984 Games in Los Angeles. At the time, it distributed printed material to Olympic officials in Africa and Asia suggesting that non-white athletes would be hunted by racists in the U.S., according to an analysis from Microsoft Threat Intelligence, a unit within the technology company that studies malicious online actors.

    Russia also has targeted past Olympic Games with cyberattacks.

    “If they cannot participate in or win the Games, then they seek to undercut, defame, and degrade the international competition in the minds of participants, spectators, and global audiences,” analysts at Microsoft concluded.

    A message left with the Russian government was not immediately returned on Monday.

    Authorities in France have been on high alert for sabotage, cyberattacks or disinformation targeting the Games. A 40-year-old Russian man was arrested in France last month and charged with working for a foreign power to destabilize the European country ahead of the Games.

    Other nations, criminal groups, extremist organizations and scam artists also are exploiting the Olympics to spread their own disinformation. Any global event like the Olympics — or a climate disaster or big election — that draws a lot of people online is likely to generate similar amounts of false and misleading claims, said Mark Calandra, executive vice president at CSC Digital Brand Services, a firm that tracks fraudulent activity online.

    CSC’s researchers noticed a sharp increase in fake website domain names being registered ahead of the Olympics. In many cases, groups set up sites that appear to provide Olympic content, or sell Olympic merchandise.

    Instead, they’re designed to collect information on the user. Sometimes it’s a scam artist looking to steal personal financial data. In others, the sites are used by foreign governments to collect information on Americans — or as a way to spread more disinformation.

    “Bad actors look for these global events,” Calandra said. “Whether they’re positive events like the Olympics or more concerning ones, these people use everyone’s heightened awareness and interest to try to exploit them.”


  • Faked video targeting France and UAE likely Russian despite Moscow’s links to Gulf Arab states

    DUBAI, United Arab Emirates (AP) — A fake video that ricocheted across the internet claiming tensions between France and the United Arab Emirates after Telegram CEO Pavel Durov’s detention in Paris likely came from Russia, an analysis by The Associated Press shows, despite Moscow’s efforts to maintain crucial ties to the UAE.

    It remains unclear why Russian operatives would choose to publish such a video falsely claiming the Emirates halted a French arms sale, which appears to be the first noticeable effort by Moscow to target the UAE with a disinformation campaign. The Emirates remains one of the few locations to still have direct flights to Moscow, while Russian money has flooded into Dubai’s booming real estate market since President Vladimir Putin launched his full-scale invasion of Ukraine in 2022.

    France, however, remains one of the key backers of Ukraine and its President Volodymyr Zelenskyy as the war grinds on. Meanwhile, Russia likely remains highly interested in what happens to Telegram, an app believed to be used widely by its military in the war and one that’s also been used by activists in the past. And the move comes amid concerns in the United States over Russia, Iran and China interfering in the upcoming U.S. presidential election.

    Russia’s Embassy in Washington did not respond to a request for comment.

    The fake video began circulating online Aug. 27, bearing the logos of the Qatar-based satellite news network Al Jazeera and attempting to copy the channel’s style. It falsely claimed the Emirati government had halted a previously announced purchase of 80 Rafale fighter jets from France worth 16 billion euros ($18 billion) at the time, the largest-ever French weapons contract for export. It also sought to link Dubai’s ruler and his crown prince son to the decision, as Durov holds an Emirati passport and has lived in Dubai.

    Such a decision, however, was never made. The UAE and France maintain close relations, with the French military operating a naval base in the country. French warplanes and personnel also are stationed in a major facility outside the Emirati capital, Abu Dhabi.

    Reached for comment, Al Jazeera told the AP that the footage was “fake and we refute this attribution to the media network.” The network never aired any such claim when reporting on Durov’s detention as well, according to an AP check. On the social platform X, a note later appended by the company to some posts with the video identified it as “manipulated media.”

    The video also appeared to seek to exploit the low-level suspicion still gripping the Gulf Arab states following the yearslong Qatar diplomatic crisis by falsely attributing it to the news network. State-funded Al Jazeera has drawn criticism in the past from Gulf nations over its coverage of the 2011 Arab Spring, from the United States for airing videos from al-Qaida leader Osama bin Laden and most recently in Israel, where authorities closed its operation over its coverage of the war against Hamas in the Gaza Strip.

    The social media account that first spread the video did not respond to questions from the AP and later deleted its post. That account linked to another on the Telegram message app that repeatedly shared graphic images of dead Ukrainian soldiers and pro-Russian messages.

    Such accounts have proliferated since the war began and bear the hallmark of past Russian disinformation campaigns.

    In Ukraine, the Center for Countering Disinformation in Kyiv, a government project there focused on countering such Russian campaigns, told the AP that the account engaged in “systematic cross-quoting and reposting of content” associated with Russian state media and its government.

    That indicates the account “is aimed at an international audience for the purpose of informational influence,” the center said. It “probably belongs to the Russian network of subversive information activities abroad.”

    Other experts assessed the video to be likely Russian disinformation.

    The Emirati government declined to comment. The French Embassy in Abu Dhabi did not respond to AP’s request to comment.

    Durov is now free on 5 million euros bail after being questioned by French authorities and preliminarily charged for allegedly allowing Telegram to be used for criminal activity. He has disputed the charges and promised to step up efforts to fight criminality on the messaging app.

    Despite the video being flagged as fake online, captions and versions of the video continue to circulate, showing the challenge of trying to refute such messages. Meanwhile, Russian Foreign Minister Sergey Lavrov just attended a meeting of the Gulf Cooperation Council in Saudi Arabia attended by the UAE. Both Saudi Arabia and the UAE have mediated prisoner exchanges amid the war.

    Given those close ties, the UAE likely has reached out, or will reach out, quietly to Moscow over the video, said Kristian Coates Ulrichsen, a research fellow at Rice University’s Baker Institute who has long studied the region.

    “It may be that this is a part of the Russian playbook which is to seek to create wedges between political and security partners, in a bid to create divisions and sow uncertainty,” Ulrichsen said.

    “The importance of the UAE to Russia post-2022 does make it unusual, but it may be that the campaign is aimed primarily at France and that any impact on the UAE’s image and reputation is a secondary issue as far as those behind the video are concerned.”

    ___

    Associated Press writer Volodymyr Yurchuk in Kyiv, Ukraine, contributed to this report.


  • G20 nations agree to join efforts to fight disinformation and set AI guidelines

    SAO PAULO (AP) — Group of 20 leaders agreed Friday to join efforts to fight disinformation and set up an agenda on artificial intelligence as their governments struggle against the speed, scale and reach of misinformation and hate speech.

    The ministers, who gathered this week in Maceio, the capital of the northeastern state of Alagoas, emphasized in a statement the need for digital platforms to be transparent and “in line with relevant policies and applicable legal frameworks.”

    It is the first time in the G20’s history that the group recognizes the problem of disinformation and calls for transparency and accountability from digital platforms, João Brant, secretary for digital policy at the Brazilian presidency, told The Associated Press by phone.

    G20 representatives also agreed to establish guidelines for developing artificial intelligence, calling for “ethical, transparent, and accountable use of AI,” with human oversight and compliance with privacy and human rights laws.

    “We hope this will be referenced in the leaders’ declaration and that South Africa will continue the work,” Renata Mielli, adviser to Brazil’s ministry of science, technology and innovation, said. The G20 Leaders’ Summit is scheduled for November in Rio de Janeiro.

    Mielli, Brazil’s negotiator in the AI working group, said there were disagreements from countries including China and the United States, but declined to provide details. In the end, she said, a consensus prevailed that the world’s richest countries should collaborate to reduce global asymmetry in AI development.

    This week’s meeting took place in the aftermath of X’s ban in Brazil, ordered by Supreme Court Justice Alexandre de Moraes after a monthslong feud with its owner, tech billionaire Elon Musk.

    Since last year, X has clashed with de Moraes over its reluctance to block some users, mostly far-right activists accused of undermining Brazilian democracy. Musk has called the Brazilian justice a dictator and an autocrat due to his rulings affecting his companies in Brazil.

    Brazil currently has the presidency of the 20 leading rich and developing nations, and President Luiz Inácio Lula da Silva has put issues that concern the developing world — such as the reduction of inequalities and the reform of multilateral institutions — at the heart of his agenda.



  • Fake news, social media, and

    We live in an age of alternate facts. More and more Americans are getting their information almost entirely from outlets that echo their own political point of view. And then, of course, there’s social media, where there are few (if any) filters between users and a wide world of misinformation.

    For example: On July 13 a sniper came within inches of assassinating Donald Trump as he addressed an outdoor rally in Pennsylvania. Within minutes, social media was alive with uninformed speculation. One woman posted, “Who did it?  I bet you it was the government themselves. They’re all on the same side.”

    Koppel said, “We have no idea who she is, she has no particular credibility. Why should I even care that she is out there?”

    “Because she could potentially have an audience,” said journalist and author Steven Brill. “If the algorithm gives it steam, that could be seen by millions of people.”

    And then on X (formerly Twitter), this message: “You’re telling me the Secret Service let a guy climb up on a roof with a rifle only 150 yards from Trump? Inside job.” That message has seven million views and counting.

    Brill said, “We’re at a point where nobody believes anything. Truth as a concept is really in trouble.  It’s suspect.”

    The cumulative impact of the lies and distortions just keeps growing, such that Brill titled his new book “The Death of Truth.” “There are facts,” he said, “and it used to be in this world that people could at least agree on the same set of facts and then they could debate what to do about those facts.”

    But we’re losing our grip on any sort of shared reality. Brill’s company, NewsGuard, is attempting to put the brakes on. Its 40 or so staffers around the world identify and rate the credibility of online news and information sources.

    It’s a finger in the dike, because there’s no price to be paid. Almost 30 years ago, the federal government decided that internet platforms were like the phone company. You can’t sue the phone company for what a caller might say in a phone conversation.

    Brill said, “They inserted a three-paragraph section called Section 230, which said that these [internet] publishers would not be responsible for anything that was published in their chat rooms.”

    Instead, it left the internet essentially without any enforceable rules. Social media companies exercise only limited control, permitting lies, fake news and intentionally divisive content to proliferate.

    The torrent of allegedly Moscow-backed content provoked an angry reaction from the U.S. this past week.

    In an indictment unsealed on Wednesday, the Department of Justice alleged that Russian nationals funneled millions of dollars to an American media company that paid right-wing influencers for videos pushing narratives favorable to the Kremlin. The Biden administration also accused Russia of using fake news sites designed to covertly spread Russian propaganda in an attempt to interfere with the 2024 U.S. presidential election.

    But most of the damage is home-grown, from national and supposedly local outlets. “There are more fake news sites posing as legitimate local news in the United States than there are news sites of legitimate local newspapers,” said Brill. “There is no monopoly on virtue from either side here. Just to take an example, the most effective fake local news sites are financed by liberal political action committees. And they’re sort of especially self-righteous about it. When I interviewed them, they basically said, ‘Well, the other guys do it, so we’ll do it.’ But it’s undermining democracy.”

    And then, Brill points out, we’re just beginning to come to terms with the full potential of artificial intelligence. Note that none of these images is real:

    Fake images of Donald Trump and Vice President Kamala Harris that have been shared on social media.

    Brill said, “It disorients everything, because you don’t know if something is a hoax, or is political propaganda, or is a deep fake. You just don’t know what to believe.”

    Koppel asked, “In the environment you describe, is it possible for us to have a clean, fair, universally-acceptable election?”

    “Your last condition is the one that is, I think, impossible – universally acceptable,” Brill replied. “Forget universally, even modestly acceptable. I have a real fear that one way or another, regardless of the outcome, that the chaos and the disbelief and anger that’s going to prevail on November 6, the day after the election, is really going to put our country to the test.”

         
    READ AN EXCERPT: “The Death of Truth” by Steven Brill

         
    Story produced by Dustin Stephens. Editor: Ed Givnish.

         

  • With charges and sanctions, US takes aim at Russian disinformation ahead of November election

    WASHINGTON (AP) — The Biden administration seized Kremlin-run websites and charged two Russian state media employees in its most sweeping effort yet to push back against what it says are Russian attempts to spread disinformation ahead of the November presidential election.

    The measures, which in addition to indictments also included sanctions and visa restrictions, represented a U.S. government effort just weeks before the November election to disrupt a persistent threat from Russia that American officials have long warned has the potential to sow discord and create confusion among voters.

    Washington has said that Moscow, which intelligence officials have said has a preference for Republican Donald Trump, remains the primary threat to elections even as the FBI continues to investigate a hack by Iran this year that targeted the presidential campaigns of both political parties.

    What to know about the 2024 Election

    “The Justice Department’s message is clear: We will have no tolerance for attempts by authoritarian regimes to exploit our democratic systems of government,” Attorney General Merrick Garland said.

    One criminal case disclosed by the Justice Department accuses two employees of RT, a Russian state media company, of covertly funding a Tennessee-based content creation company with nearly $10 million to publish English-language videos on social media platforms including TikTok and YouTube with messages in favor of the Russia government’s interests and agenda, including about the war in Ukraine.

    The nearly 2,000 videos posted by the company have gotten more than 16 million views on YouTube alone, prosecutors said.

    The two defendants, Kostiantyn Kalashnikov and Elena Afanasyeva, are charged with conspiracy to commit money laundering and violating the Foreign Agents Registration Act. They are at large. It was not immediately clear if they had lawyers.

    The Justice Department says the company did not disclose that it was funded by RT and that neither it nor its founders registered as required by law as an agent of a foreign principal.

    Though the indictment does not name the company, it describes it as a Tennessee-based content creation firm with six commentators and with a website identifying itself as “a network of heterodox commentators that focus on Western political and cultural issues.”

    That description exactly matches Tenet Media, an online company that hosts videos made by well-known conservative influencers Tim Pool, Benny Johnson and others.

    Johnson and Pool both responded with posts on X, the platform formerly known as Twitter, calling themselves “victims.” Calling Russian President Vladimir Putin a “scumbag,” Pool wrote that “should these allegations prove true, I as well as the other personalities and commentators were deceived.”

    In his post, Johnson wrote that he had been asked a year ago to provide content to a “media startup.” He said his lawyers negotiated a “standard, arms length deal, which was later terminated.”

    Tenet Media’s shows in recent months have featured high-profile conservative guests, including RNC co-chair Lara Trump, former Republican presidential candidate Vivek Ramaswamy and U.S. Senate candidate Kari Lake.

    In the other action, officials announced the seizure of 32 internet domains that were used by the Kremlin to spread Russian propaganda and weaken international support for Ukraine. The websites were designed to look like authentic news sites but were actually fake, with bogus social media personas manufactured to appear as if they belonged to American users.

    The Justice Department did not identify which candidate in particular the propaganda campaign was meant to boost. But internal strategy notes from participants in the effort released Wednesday by the Justice Department make clear that Trump was the intended beneficiary, even though the names of the candidates were blacked out.

    The proposal for one propaganda project, for instance, states that one of its objectives was to secure a victory for a candidate who is currently out of power and to increase the percentage of Americans who believe the U.S. has been doing too much to support Ukraine. President Joe Biden has strongly supported Ukraine during the invasion by Russia.

    Intelligence agencies have previously charged that Russia, which during the 2016 election launched a massive campaign of foreign influence and interference on Trump’s behalf, was using disinformation to try to meddle in this year’s election. The new steps show the depth of those concerns.

    “Today’s announcement highlights the lengths some foreign governments go to undermine American democratic institutions,” the State Department said. “But these foreign governments should also know that we will not tolerate foreign malign actors intentionally interfering and undermining free and fair elections.”

    The State Department announced it was taking action against several employees of Russian state-owned media outlets, designating them as “foreign missions,” and offering a cash reward for information provided to the U.S. government about foreign election interference.

    It also said it was adding media company Rossiya Segodnya and its subsidiaries RIA Novosti, RT, TV-Novosti, Ruptly, and Sputnik to its list of foreign missions. That will require them to register with the U.S. government and disclose their properties and personnel in the U.S.

    In a speech last month, Deputy Attorney General Lisa Monaco said Russia remained the biggest threat to election integrity, accusing Putin and his proxies of “targeting specific voter demographics and swing-state voters in an effort to manipulate presidential and congressional election outcomes.” Russia, she said, was “intent on co-opting unwitting Americans on social media to push narratives advancing Russian interests.”

    She struck a similar note Thursday, saying at an Aspen Institute event that the foreign influence threat is more diverse and aggressive than in past years.

    “More diverse and aggressive because they involve more actors from more countries than we have ever seen before, operating in a more polarized world than we have ever seen before, all fueled by more technology and accelerated by technology, like AI, and that is what we have exposed in the law enforcement actions we took today,” she said.

    Much of the concern around Russia centers on cyberattacks and disinformation campaigns designed to influence the November vote.

    The tactics include using state media like RT to advance anti-U.S. messages and content, as well as networks of fake websites and social media accounts that amplify the claims and inject them into Americans’ online conversations. Typically, these networks seize on polarizing political topics such as immigration, crime or the war in Gaza.

    In many cases, Americans may have no idea that the content they see online either originated or was amplified by the Kremlin.

    Groups linked to the Kremlin are increasingly hiring marketing and communications firms within Russia to outsource some of the work of creating digital propaganda while also covering their tracks, the officials said during the briefing with reporters.

    Two such firms were the subject of new U.S. sanctions announced in March. Authorities say the two Russian companies created fake websites and social media profiles to spread Kremlin disinformation.

    The ultimate goal, however, is to get Americans to spread Russian disinformation without questioning its origin. People are far more likely to trust and repost information that they believe is coming from a domestic source, officials said. Fake websites designed to mimic U.S. news outlets and AI-generated social media profiles are just two methods.

    Messages left with the Russian Embassy were not immediately returned.

    _____

    Associated Press writers Dan Merica and Alanna Durkin Richer in Washington, Ali Swenson in New York and Alan Suderman in Richmond, Va., contributed to this report.


  • US accuses Russia of using state media to spread disinformation before the November election

    WASHINGTON — The Biden administration announced wide-ranging actions Wednesday meant to call out Russian influence in the upcoming U.S. presidential election, unsealing criminal charges against two employees of a Russian state-run media company and seizing internet domains used by the Kremlin to spread disinformation.

    The measures represented a U.S. government effort at disrupting a persistent threat from Russia that American officials have long warned has the potential to sow discord and create confusion among voters. Washington has said that Russia remains the primary threat to elections even as the FBI investigates a hack by Iran of Donald Trump’s campaign and an attempted breach of the Joe Biden-Kamala Harris campaign.

    One criminal case accuses two employees of RT, a Russian state-funded media organization that was forced by the Justice Department to register as a foreign agent, of covertly funding a Tennessee-based content creation company to publish nearly 2,000 videos containing Russian propaganda. The defendants, who remain at large, used fake identities, and the company was unaware it was being used by Russia.

    In the other action, officials announced the seizure of 32 internet domains that were used by the Kremlin to spread Russian propaganda and weaken global support for Ukraine.

    Attorney General Merrick Garland said the actions relate to Russia’s use of state media to enlist unwitting American influencers to spread propaganda and disinformation.

    Intelligence agencies have previously charged that Russia was using disinformation to try to interfere in the election. The new steps show the depth of U.S. concerns and signal legal actions against those suspected of being involved.

    “Today’s announcement highlights the lengths some foreign governments go to undermine American democratic institutions,” the State Department said. “But these foreign governments should also know that we will not tolerate foreign malign actors intentionally interfering and undermining free and fair elections.”

    In a speech last month, Deputy Attorney General Lisa Monaco said Russia remained the biggest threat to election integrity, warning that Russian President Vladimir Putin and “his proxies are using increasingly sophisticated techniques in their interference operations. They’re targeting specific voter demographics and swing-state voters in an effort to manipulate presidential and congressional election outcomes. They’re intent on co-opting unwitting Americans on social media to push narratives advancing Russian interests.”

    Much of the concern around Russia centers on cyberattacks and disinformation campaigns designed to influence the November vote. The tactics include using state media like RT to advance anti-U.S. messages and content, as well as networks of fake websites and social media accounts that amplify the claims and inject them into Americans’ online conversations. Typically, these networks seize on polarizing political topics such as immigration, crime or the war in Gaza.

    In many cases, Americans may have no idea that the content they see online either originated or was amplified by the Kremlin.

    “Russia is taking a whole of government approach to influence the election including the presidential race,” an official from the Office of the Director of National Intelligence said this summer during a briefing. The official spoke on condition of anonymity under rules worked out with that office.

    Groups linked to the Kremlin are increasingly hiring marketing and communications firms within Russia to outsource some of the work of creating digital propaganda while also covering their tracks, the officials said during the briefing with reporters.

    Two such firms were the subject of new U.S. sanctions announced in March. Authorities say the two Russian companies created fake websites and social media profiles to spread Kremlin disinformation.

    The ultimate goal, however, is to get Americans to spread Russian disinformation without questioning its origin. People are far more likely to trust and repost information that they believe is coming from a domestic source, officials said. Fake websites designed to mimic U.S. news outlets and AI-generated social media profiles are just two methods.

    Messages left with the Russian Embassy were not immediately returned.


  • China-linked ‘Spamouflage’ network mimics Americans to sway US political debate

    WASHINGTON — When he first emerged on social media, the user known as Harlan claimed to be a New Yorker and an Army veteran who supported Donald Trump for president. Harlan said he was 29, and his profile picture showed a smiling, handsome young man.

    A few months later, Harlan underwent a transformation. Now, he claimed to be 31 and from Florida.

    New research into Chinese disinformation networks targeting American voters shows Harlan’s claims were as fictitious as his profile picture, which analysts think was created using artificial intelligence.

    As voters prepare to cast their ballots this fall, China has been making its own plans, cultivating networks of fake social media users designed to mimic Americans. Whoever or wherever he really is, Harlan is a small part of a larger effort by U.S. adversaries to use social media to influence and upend America’s political debate.

    The account was traced back to Spamouflage, a Chinese disinformation group, by analysts at Graphika, a New York-based firm that tracks online networks. Known to online researchers for several years, Spamouflage earned its moniker through its habit of spreading large amounts of seemingly unrelated content alongside disinformation.

    “One of the world’s largest covert online influence operations — an operation run by Chinese state actors — has become more aggressive in its efforts to infiltrate and to sway U.S. political conversations ahead of the election,” Jack Stubbs, Graphika’s chief intelligence officer, told The Associated Press.

    Intelligence and national security officials have said that Russia, China and Iran have all mounted online influence operations targeting U.S. voters ahead of the November election. Russia remains the top threat, intelligence officials say, even as Iran has become more aggressive in recent months, covertly supporting U.S. protests against the war in Gaza and attempting to hack into the email systems of the two presidential candidates.

    China, however, has taken a more cautious, nuanced approach. Beijing sees little advantage in supporting one presidential candidate over the other, intelligence analysts say. Instead, China’s disinformation efforts focus on campaign issues particularly important to Beijing — such as American policy toward Taiwan — while seeking to undermine confidence in elections, voting and the U.S. in general.

    Officials have said it’s a longer-term effort that will continue well past Election Day as China and other authoritarian nations try to use the internet to erode support for democracy.

    Chinese Embassy spokesperson Liu Pengyu rejected Graphika’s findings as full of “prejudice and malicious speculation” and said “China has no intention and will not interfere” in the election.

    Compared with armed conflict or economic sanctions, online influence operations can be a low-cost, low-risk means of flexing geopolitical power. Given the increasing reliance on digital communications, the use of online disinformation and fake information networks is only likely to increase, said Max Lesser, senior analyst for emerging threats at the Foundation for Defense of Democracies, a national security think tank in Washington.

    “We’re going to see a widening of the playing field when it comes to influence operations, where it’s not just Russia, China and Iran but you also see smaller actors getting involved,” Lesser said.

    That list could include not only nations but also criminal organizations, domestic extremist groups and terrorist organizations, Lesser said.

    When analysts first noticed Spamouflage five years ago, the network tended to post generically pro-China, anti-American content. In recent years, the tone sharpened as Spamouflage expanded and began focusing on divisive political topics like gun control, crime, race relations and support for Israel during its war in Gaza. The network also began creating large numbers of fake accounts designed to mimic American users.

    Spamouflage accounts don’t post much original content, instead using platforms like X or TikTok to recycle and repost content from far-right and far-left users. Some of the accounts seemed designed to appeal to Republicans, while others catered to Democrats.

    While Harlan’s accounts succeeded in getting traction — one video mocking President Joe Biden was seen 1.5 million times — many of the accounts created by the Spamouflage campaign did not. It’s a reminder that online influence operations are often a numbers game: the more accounts, the more content, the better the chance that one specific post goes viral.

    Many of the accounts newly linked to Spamouflage took pains to pose as Americans, sometimes in obvious ways. “I am an American,” one of the accounts proclaimed. Some of the accounts gave themselves away by using stilted English or strange word choices. Some were clumsier than others: “Broken English, brilliant brain, I love Trump,” read the biographical section of one account.

    Harlan’s profile picture, which Graphika researchers believe was created using AI, was identical to one used in an earlier account linked to Spamouflage. Messages sent to the person operating Harlan’s accounts were not returned.

    Several of the accounts linked to Spamouflage remain active on TikTok and X.