ReportWire

Tag: Misinformation

  • Twitter’s plan to charge researchers for data access puts it in EU crosshairs



    Elon Musk pledged Twitter would abide by Europe’s new content rules — but Yevgeniy Golovchenko is not so convinced.

    The Ukrainian academic, an assistant professor at the University of Copenhagen, relies on the social network’s data to track Russian disinformation, including propaganda linked to the ongoing war in Ukraine. But that access, including to reams of tweets analyzing pro-Kremlin messaging, may soon be cut off. Or, even worse for Golovchenko, cost him potentially millions of euros a year.

    Under Musk’s leadership, Twitter is shutting down researchers’ free access to its data, though the final decision on when that will happen has yet to be made. Company officials are also offering new pay-to-play access to researchers via deals that start at $42,000 per month and can rocket up to $210,000 per month for the largest amount of data, according to Twitter’s internal presentation to academics that was shared with POLITICO.

    Yet this switch — from almost unlimited, free data access to costly monthly subscription fees — falls afoul of the European Union’s new online content rules, the Digital Services Act. Those standards, which kick in over the coming months, require the largest social networking platforms, including Twitter, to provide so-called vetted researchers free access to their data.

    It remains unclear how Twitter will meet its obligations under the 27-country bloc’s rules, which impose fines of up to 6 percent of its yearly revenue for infractions.

“If Twitter makes access less accessible to researchers, this will hurt research on things like disinformation and misinformation,” said Golovchenko, who — like many academics who spoke with POLITICO — is now in limbo until Twitter publicly decides when, or whether, it will shut down its current free data-access regime.

It also means that “we will have fewer choices,” added the Ukrainian, acknowledging that, until now, Twitter had been more open for outsiders to poke around its data compared with the likes of Facebook or YouTube. “This means we will be even more dependent on the goodwill of social media platforms.”

    Meeting EU commitments

    When POLITICO contacted Twitter for comment, the press email address sent back a poop emoji in response. A company representative did not respond to POLITICO’s questions, though executives met with EU officials and civil society groups Wednesday to discuss how Twitter would comply with Europe’s data-access obligations, according to three people with knowledge of those discussions, who were granted anonymity in order to discuss internal deliberations.

Twitter was expected to announce details of its new paid-for data access regime last week, according to the same individuals briefed on those discussions, though no specifics about the plans were known. As of Friday night, no details had been published.

    Still, the ongoing uncertainty comes as EU regulators and policymakers have Musk in their crosshairs as the onetime world’s richest man reshapes Twitter into a free speech-focused social network. The Tesla chief executive has fired almost all of the trust, safety and policy teams in a company-wide cull of employees and has already failed to comply with some of the bloc’s new content rules that require Twitter to detail how it is tackling falsehoods and foreign interference.

    Musk has publicly stated the company will comply with the bloc’s content rules.

    “Access to platforms’ data is one of the key elements of democratic oversight of the players that control increasingly bigger part of Europe’s information space,” Věra Jourová, the European Commission vice president for values and transparency, told POLITICO in an emailed statement in reference to the EU’s code of practice on disinformation, a voluntary agreement that Twitter signed up to last year. A Commission spokesperson said such access would have to be free to approved researchers.


    “If the access to researchers is getting worse, most likely that would go against the spirit of that commitment (under Europe’s new content rules),” Jourová added. “I appeal to Twitter to find the solution and respect its commitments under the code.”

    Show me the data access

    For researchers based in the United States — who don’t fall under the EU’s new content regime — the future is even bleaker.

Megan Brown, a senior research engineer at New York University’s Center for Social Media and Politics, which relies heavily on Twitter’s existing access, said half of her team’s 40 projects currently use the company’s data. Under Twitter’s proposed price hikes, the researchers would have to abandon their existing paid-for access through the company’s so-called Decahose API for large-scale data, which is expected to be shut off by the end of May.

    NYU’s work via Twitter data has looked at everything from how automated bots skew conversations on social media to potential foreign interference via social media during elections. Such projects, Brown added, will not be possible when Twitter shuts down academic access to those unwilling to pay the new prices.

    “We cannot pay that amount of money,” said Brown. “I don’t know of a research center or university that can or would pay that amount of money.”

For Rebekah Tromble, chairperson of the working group on platform-to-researcher data access at the European Digital Media Observatory, a Commission-funded group overseeing which researchers can access social media companies’ data under the bloc’s new rules, any rollback of Twitter’s data-access allowances would be against its existing commitments to give researchers greater access to its treasure trove of data.

    “If Twitter makes the choice to begin charging researchers for access, it will clearly be in violation of its commitments under the code of practice [on disinformation],” she said.

    This article has been updated.


    Mark Scott


  • China says US spreading disinformation, suppressing TikTok


    BEIJING — China accused the United States on Thursday of spreading disinformation and suppressing TikTok following reports that the Biden administration was calling for its Chinese owners to sell their stakes in the popular video-sharing app.

    The U.S. has yet to present evidence that TikTok threatens its national security and was using the excuse of data security to abuse its power to suppress foreign companies, Foreign Ministry spokesperson Wang Wenbin told reporters at a daily briefing.

    “The U.S. should stop spreading disinformation about data security, stop suppressing the relevant company, and provide an open, fair and non-discriminatory environment for foreign businesses to invest and operate in the U.S.,” Wang said.

    TikTok was dismissive Wednesday of a report in The Wall Street Journal that said the Committee on Foreign Investment in the U.S., part of the Treasury Department, was threatening a U.S. ban on the app unless its owners, Beijing-based ByteDance Ltd., divested.

    “If protecting national security is the objective, divestment doesn’t solve the problem: A change in ownership would not impose any new restrictions on data flows or access,” TikTok spokesperson Maureen Shanahan said.

    Shanahan said TikTok was already answering concerns through “transparent, U.S.-based protection of U.S. user data and systems, with robust third-party monitoring, vetting, and verification.”

    The Journal report cited anonymous “people familiar with the matter.” The Treasury Department and the White House’s National Security Council declined to comment.

In late February, the White House gave all federal agencies 30 days to wipe TikTok off all government devices. Some agencies, including the Departments of Defense, Homeland Security and State, already have restrictions in place. The White House already does not allow TikTok on its devices.

    Congress passed the “No TikTok on Government Devices Act” in December as part of a sweeping government funding package. The legislation does allow for TikTok use in certain cases, including for national security, law enforcement and research purposes.

    Meanwhile, lawmakers in both the House and Senate have been moving forward with legislation that would give the Biden administration more power to clamp down on TikTok.

TikTok remains extremely popular and is used by two-thirds of teens in the U.S. But there is increasing concern that Beijing could gain control of American user data that the app has collected and push pro-Beijing narratives and propaganda on the app.

    China has long been concerned about the influence of overseas social media and communications apps, and bans most of the best-known ones, including Facebook, Twitter, Instagram, YouTube — and TikTok.



  • Supreme Court to review federal law that protects websites from lawsuits over user-generated content




    The Supreme Court is set to hear arguments this week that could hold social media outlets accountable for some of the information and videos they recommend to their users. Jan Crawford reports.




  • UNESCO chief urges tougher regulation of social media


    PARIS (AP) — The United Nations’ educational, scientific and cultural agency chief on Wednesday called for a global dialogue to find ways to regulate social media companies and limit their role in the spreading of misinformation around the world.

    Audrey Azoulay, the director general of UNESCO, addressed a gathering of lawmakers, journalists and civil societies from around the world to discuss ways to regulate social media platforms such as Twitter and others to help make the internet a safer, fact-based space.

    The two-day conference in Paris aims to formulate guidelines that would help regulators, governments and businesses manage content that undermines democracy and human rights, while supporting freedom of expression and promoting access to accurate and reliable information.

    The global dialogue should provide the legal tools and principles of accountability and responsibility for social media companies to contribute to the “public good,” Azoulay said in an interview with The Associated Press on the sidelines of the conference. She added: “It would limit the risks that we see today, that we live today, disinformation (and) conspiracy theories spreading faster than the truth.”

    The European Union last year passed landmark legislation that will compel big tech companies like Google and Facebook parent Meta to police their platforms more strictly to protect European users from hate speech, disinformation and harmful content.

    The Digital Services Act is one of the EU’s three significant laws targeting the tech industry.

    In the United States, the Justice Department and Federal Trade Commission have filed major antitrust actions against Google and Facebook, although Congress remains politically divided on efforts to address online disinformation, competition, privacy and more.

    Filipino journalist and Nobel laureate Maria Ressa told participants in the Paris conference that putting laws into place that would prevent social media companies from “proliferating misinformation on their platforms” is long overdue.

Ressa is a longtime critic of social media platforms that she said have put “democracy at risk” and distracted societies from solving problems such as climate change and the rise of authoritarianism around the world.

    By “insidiously manipulating people at the scale that’s happening now, … (they have) changed our values and it has rippled to cascading failure,” Ressa told the AP in an interview on Wednesday.

    “If you don’t have a set of shared facts, how do we deal with climate change?” Ressa said. “If everything is debatable, if trust is destroyed (there’s no) meaningful exchange.”

    She added: “Just a reminder, democracy is not just about talking. It’s about listening. It’s about finding compromises that are impossible in the world of technology today.”

    ___

    Nicholas Garriga in Paris contributed


  • FDA’s own reputation could be restraining its misinfo fight


    WASHINGTON — The government agency responsible for tracking down contaminated peanut butter and defective pacemakers is taking on a new health hazard: online misinformation.

It’s an unlikely role for the Food and Drug Administration, a sprawling, century-old bureaucracy that for decades directed most of its communications toward doctors and corporations.

    But FDA Commissioner Dr. Robert Califf has spent the last year warning that growing “distortions and half-truths” surrounding vaccines and other medical products are now “a leading cause of death in America.”

    “Almost no one should be dying of COVID in the U.S. today,” Califf told The Associated Press, noting the government’s distribution of free vaccines and antiviral medications. “People who are denying themselves that opportunity are dying because they’re misinformed.”

    Califf, who first led the agency under President Barack Obama, said the FDA could once rely on a few communication channels to reach Americans.

    “We’re now in a 24/7 sea of information without a user guide for people out there in society,” Califf said. “So this requires us to change the way we communicate.”

The FDA’s answer? Short YouTube videos, long Twitter threads and other online postings debunking medical misinformation, including bogus COVID-19 remedies like ivermectin, the anti-parasite drug intended for farm animals. “Hold your horses y’all. Ivermectin may be trending, but it still isn’t authorized or approved to treat COVID-19,” the FDA told its 500,000 Twitter followers in April.

    On Instagram, FDA memes referencing Scooby-Doo and SpongeBob urge Americans to get boosted and ignore misinformation, alongside staid agency postings about the arrival of National Handwashing Awareness Week.

    The AP asked more than a half-dozen health communication experts about the FDA’s fledgling effort. They said it mostly reflects the latest science on combating misinformation, but they also questioned whether it’s reaching enough people to have an impact — and whether separate FDA controversies are undercutting the agency’s credibility.

    “The question I start with is, ‘Are you a trusted messenger or not?’” said Dr. Seema Yasmin, a Stanford University professor who studies medical misinformation and trains health officials in responding to it. “In the context of FDA, we can highlight multiple incidents which have damaged the credibility of the agency and deepened distrust of its scientific decisions.”

    In the last two years the FDA has come under fire for its controversial approval of an unproven Alzheimer’s drug as well as its delayed response to a contaminated baby formula plant, which contributed to a national supply shortage.

    Meanwhile, the agency’s approach to booster vaccinations has been criticized by some of its top vaccine scientists and advisers.

    “It’s not fair, but it doesn’t take too many negative stories to unravel the public’s trust,” said Georgetown University’s Leticia Bode, who studies political communication and misinformation.

    About a quarter of Americans said they have “a lot” of trust in the FDA’s handling of COVID-19, according to a survey conducted last year by University of Pennsylvania researchers, while less than half said they have “some trust.”

    “The FDA’s word is still one of the most highly regarded pieces of information people want to see,” said Califf, who was confirmed to his second stint leading the FDA last February.

    As commissioner he is trying to tackle a host of issues, including restructuring the agency’s food safety program and more aggressively deploying FDA scientists to explain vaccine decisions in the media.

    The array of challenges before the FDA raises questions about the new focus on misinformation. And Califf acknowledges the limits of what his agency can accomplish.

    “Anyone who thinks the government’s going to solve this problem alone is deluding themselves,” he said. “We need a vast network of knowledgeable people who devote part of their day to combating misinformation.”

Georgetown’s Bode said the agency is “moving in the right direction” on misinformation, particularly its “Just a Minute” series of factchecking videos, which feature FDA’s vaccine chief Dr. Peter Marks succinctly addressing a single COVID-19 myth or topic.

    But how many people are seeing them?

“FDA’s YouTube videos have a minuscule audience,” said Brendan Nyhan, who studies medical misinformation at Dartmouth College. The people watching FDA videos “are not the people we typically think about when we think about misinformation.”

    Research by Nyhan and his colleagues suggests that fact-checking COVID-19 myths briefly dispels false beliefs, but the effects are “ephemeral.” Nyhan and other researchers noted the most trusted medical information source for most Americans is their doctor, not the government.

    Even if the audience for FDA’s work is small, experts in online analytics say it may be having a bigger impact.

    An FDA page dubbed “Rumor Control” debunks a long list of false claims about vaccines, such as that they contain pesticides. A Google search for “vaccines” and “pesticides” brings up the FDA’s response as a top response, because the search engine prioritizes credible websites.

    “Because the FDA puts that information on its website, it will actually crowd out the misinformation from the top 10 or 20 Google results,” said David Lazer, a political and computer scientist at Northeastern University.

    Perhaps the most promising approach to fighting misinformation is also the toughest to execute: introduce people to emerging misinformation and explain why it’s false before they encounter it elsewhere.

    That technique, called “pre-bunking,” presents challenges for large government agencies.

    “Is the FDA nimble enough to have a detection system for misinformation and then quickly put out pre-bunking information within hours or days?” Lazer asked.

    Califf said the FDA tracks new misinformation trends online and quickly decides whether — and when — to intervene.

“Sometimes calling attention to an issue can make it worse,” he noted.

    Other communication challenges are baked into how the FDA operates. For instance, the agency consults an independent panel of vaccine specialists on major decisions about COVID-19 shots, considered a key step in fostering trust in the process.

    But some of those experts have disagreed on who should receive COVID-19 vaccine boosters or how strong the evidence is for their use, particularly among younger people.

    The FDA then largely relies on news media to translate those debates and its final decisions, which are often laden with scientific jargon.

The result has been “utter confusion” about the latest round of COVID-19 boosters, says Lawrence Gostin, a public health specialist at Georgetown.

    “If you’re trying to counteract misinformation on social media your first job is to clarify, simplify and explain things in an understandable way to the lay public,” said Gostin. “I don’t think anyone could say that FDA has done a good job with that.”

    ___

    Follow Matthew Perrone on Twitter: @AP_FDAwriter

    ___

    The Associated Press Health and Science Department receives support from the Howard Hughes Medical Institute’s Science and Educational Media Group. The AP is solely responsible for all content.


  • Google to expand misinformation “prebunking” in Europe


    WASHINGTON — After seeing promising results in Eastern Europe, Google will initiate a new campaign in Germany that aims to make people more resilient to the corrosive effects of online misinformation.

    The tech giant plans to release a series of short videos highlighting the techniques common to many misleading claims. The videos will appear as advertisements on platforms like Facebook, YouTube or TikTok in Germany. A similar campaign in India is also in the works.

    It’s an approach called pre-bunking, which involves teaching people how to spot false claims before they encounter them. The strategy is gaining support among researchers and tech companies.

    “There’s a real appetite for solutions,” said Beth Goldberg, head of research and development at Jigsaw, an incubator division of Google that studies emerging social challenges. “Using ads as a vehicle to counter a disinformation technique is pretty novel. And we’re excited about the results.”

    While belief in falsehoods and conspiracy theories isn’t new, the speed and reach of the internet has given them a heightened power. When catalyzed by algorithms, misleading claims can discourage people from getting vaccines, spread authoritarian propaganda, foment distrust in democratic institutions and spur violence.

    It’s a challenge with few easy solutions. Journalistic fact checks are effective, but they’re labor intensive, aren’t read by everyone, and won’t convince those already distrustful of traditional journalism. Content moderation by tech companies is another response, but it only drives misinformation elsewhere, while prompting cries of censorship and bias.

    Pre-bunking videos, by contrast, are relatively cheap and easy to produce and can be seen by millions when placed on popular platforms. They also avoid the political challenge altogether by focusing not on the topics of false claims, which are often cultural lightning rods, but on the techniques that make viral misinformation so infectious.

    Those techniques include fear-mongering, scapegoating, false comparisons, exaggeration and missing context. Whether the subject is COVID-19, mass shootings, immigration, climate change or elections, misleading claims often rely on one or more of these tricks to exploit emotions and short-circuit critical thinking.

    Last fall, Google launched the largest test of the theory so far with a pre-bunking video campaign in Poland, the Czech Republic and Slovakia. The videos dissected different techniques seen in false claims about Ukrainian refugees. Many of those claims relied on alarming and unfounded stories about refugees committing crimes or taking jobs away from residents.

    The videos were seen 38 million times on Facebook, TikTok, YouTube and Twitter — a number that equates to a majority of the population in the three nations. Researchers found that compared to people who hadn’t seen the videos, those who did watch were more likely to be able to identify misinformation techniques, and less likely to spread false claims to others.

The pilot project adds to a growing consensus in support of the theory.

    “This is a good news story in what has essentially been a bad news business when it comes to misinformation,” said Alex Mahadevan, director of MediaWise, a media literacy initiative of the Poynter Institute that has incorporated pre-bunking into its own programs in countries including Brazil, Spain, France and the U.S.

    Mahadevan called the strategy a “pretty efficient way to address misinformation at scale, because you can reach a lot of people while at the same time address a wide range of misinformation.”

Google’s new campaign in Germany will include a focus on photos and videos, and the ease with which they can be presented as evidence of something false. One example: Last week, following the earthquake in Turkey, some social media users shared video of the massive explosion in Beirut in 2020, claiming it was actually footage of a nuclear explosion triggered by the earthquake. It was not the first time the 2020 explosion had been the subject of misinformation.

    Google will announce its new German campaign Monday ahead of next week’s Munich Security Conference. The timing of the announcement, coming before that annual gathering of international security officials, reflects heightened concerns about the impact of misinformation among both tech companies and government officials.

    Tech companies like pre-bunking because it avoids touchy topics that are easily politicized, said Sander van der Linden, a University of Cambridge professor considered a leading expert on the theory. Van der Linden worked with Google on its campaign and is now advising Meta, the owner of Facebook and Instagram, as well.

    Meta has incorporated pre-bunking into many different media literacy and anti-misinformation campaigns in recent years, the company told The Associated Press in an emailed statement.

    They include a 2021 program in the U.S. that offered media literacy training about COVID-19 to Black, Latino and Asian American communities. Participants who took the training were later tested and found to be far more resistant to misleading COVID-19 claims.

    Pre-bunking comes with its own challenges. The effect of the videos eventually wears off, requiring the use of periodic “booster” videos. Also, the videos must be crafted well enough to hold the viewer’s attention, and tailored for different languages, cultures and demographics. And like a vaccine, it’s not 100% effective for everyone.

    Google found that its campaign in Eastern Europe varied from country to country. While the effect of the videos was highest in Poland, in Slovakia they had “little to no discernible effect,” researchers found. One possible explanation: The videos were dubbed into the Slovak language, and not created specifically for the local audience.

    But together with traditional journalism, content moderation and other methods of combating misinformation, pre-bunking could help communities reach a kind of herd immunity when it comes to misinformation, limiting its spread and impact.

    “You can think of misinformation as a virus. It spreads. It lingers. It can make people act in certain ways,” Van der Linden told the AP. “Some people develop symptoms, some do not. So: if it spreads and acts like a virus, then maybe we can figure out how to inoculate people.”

    ___

    Follow the AP’s coverage of misinformation at https://apnews.com/hub/misinformation.


  • New AI voice-cloning tools ‘add fuel’ to misinformation fire


    NEW YORK (AP) — In a video from a Jan. 25 news report, President Joe Biden talks about tanks. But a doctored version of the video has amassed hundreds of thousands of views this week on social media, making it appear he gave a speech that attacks transgender people.

    Digital forensics experts say the video was created using a new generation of artificial intelligence tools, which allow anyone to quickly generate audio simulating a person’s voice with a few clicks of a button. And while the Biden clip on social media may have failed to fool most users this time, it shows how easy it now is for people to generate hateful and disinformation-filled “deepfake” videos that could do real-world harm.

    “Tools like this are going to basically add more fuel to fire,” said Hafiz Malik, a professor of electrical and computer engineering at the University of Michigan who focuses on multimedia forensics. “The monster is already on the loose.”

    The newest of these tools arrived last month with the beta phase of ElevenLabs’ voice synthesis platform, which allowed users to generate realistic audio of any person’s voice by uploading a few minutes of audio samples and typing in any text for it to say.

    The startup says the technology was developed to dub audio in different languages for movies, audiobooks and gaming to preserve the speaker’s voice and emotions.

    Social media users quickly began sharing an AI-generated audio sample of Hillary Clinton reading the same transphobic text featured in the Biden clip, along with fake audio clips of Bill Gates supposedly saying that the COVID-19 vaccine causes AIDS and actress Emma Watson purportedly reading Hitler’s manifesto “Mein Kampf.”

    Shortly after, ElevenLabs tweeted that it was seeing “an increasing number of voice cloning misuse cases,” and announced that it was now exploring safeguards to tamp down on abuse. One of the first steps was to make the feature available only to those who provide payment information. Initially, anonymous users were able to access the voice cloning tool for free. The company also claims that if there are issues, it can trace any generated audio back to the creator.

    But even the ability to track creators won’t mitigate the tool’s harm, said Hany Farid, a professor at the University of California, Berkeley, who focuses on digital forensics and misinformation.

    “The damage is done,” he said.

    As an example, Farid said bad actors could move the stock market with fake audio of a top CEO saying profits are down. And already there’s a clip on YouTube that used the tool to alter a video to make it appear Biden said the U.S. was launching a nuclear attack against Russia.

    Free and open-source software with the same capabilities has also emerged online, meaning paywalls on commercial tools aren’t an impediment. Using one free online model, the AP generated audio samples to sound like actors Daniel Craig and Jennifer Lawrence in just a few minutes.

    “The question is where to point the finger and how to put the genie back in the bottle?” Malik said. “We can’t do it.”

    When deepfakes first made headlines about five years ago, they were easy enough to detect since the subject didn’t blink and audio sounded robotic. That’s no longer the case as the tools become more sophisticated.

    The altered video of Biden making derogatory comments about transgender people, for instance, combined the AI-generated audio with a real clip of the president, taken from a Jan. 25 CNN live broadcast announcing the U.S. dispatch of tanks to Ukraine. Biden’s mouth was manipulated in the video to match the audio. While most Twitter users recognized that the content was not something Biden was likely to say, they were nevertheless shocked at how realistic it appeared. Others appeared to believe it was real – or at least didn’t know what to believe.

    Hollywood studios have long been able to distort reality, but access to that technology has been democratized without considering the implications, said Farid.

    “It’s a combination of the very, very powerful AI based technology, the ease of use, and then the fact that the model seems to be: let’s put it on the internet and see what happens next,” Farid said.

    Audio is just one area where AI-generated misinformation poses a threat.

    Free online AI image generators like Midjourney and DALL-E can churn out photorealistic images of war and natural disasters in the style of legacy media outlets with a simple text prompt. Last month, some school districts in the U.S. began blocking ChatGPT, which can produce readable text – like student term papers – on demand.

    ElevenLabs did not respond to a request for comment.


  • The Truth Behind Viral Videos Linking COVID Vaccine to Spasms, Shakes


    SOURCES: 

    Twitter: @AngeliaDesselle, Jan. 21, 2023; @seanybrams, Jan. 24, 2023.

    JAMA Neurology: “Current Concepts in Diagnosis and Treatment of Functional Neurological Disorders.”

    Functional Neurological Disorder Society: “Press release from the Functional Neurological Disorders Society.”

    European Journal of Neurology: “Functional disorders as a common motor manifestation of COVID-19 infection or vaccination.”

    Alfonso Fasano, MD, chair, Neuromodulation and Multi-Disciplinary Care, University of Toronto and University Health Network; co-director, Surgical Program for Movement Disorders, Toronto Western Hospital, Toronto, Canada.

    Neurologist: “Functional Neurological Disorders: Clinical Spectrum, Diagnosis, and Treatment.”

    Matthew Laurens, MD, pediatric infectious disease specialist, professor of pediatrics, University of Maryland School of Medicine, Baltimore.

    Jennifer Frontera, MD, neurologist, NYU Langone Health, professor of neurology, NYU Langone School of Medicine, New York City.

    Annals of Neurology: “Neurological Events Reported after COVID-19 Vaccines: An Analysis of VAERS.”

    Movement Disorders Clinical Practice: “Tics and TikTok: Functional Tics Spread Through Social Media.”

    Politifact: “The ‘shaking’ COVID-19 vaccine side-effect videos and what we know about them.”


  • ‘Died suddenly’ posts twist tragedies to push vaccine lies


    Results from 6-year-old Anastasia Weaver’s autopsy may take weeks. But online anti-vaccine activists needed only hours after her funeral this week to baselessly blame the COVID-19 vaccine.

    A prolific Twitter account posted Anastasia’s name and smiling dance portrait in a tweet with a syringe emoji. A Facebook user messaged her mother, Jessica Day-Weaver, to call her a “murderer” for having her child vaccinated.

    In reality, the Ohio kindergartner had experienced lifelong health problems since her premature birth, including epilepsy, asthma and frequent hospitalizations with respiratory viruses. “The doctors haven’t given us any information other than it was due to all of her chronic conditions. … There was never a thought that it could be from the vaccine,” Day-Weaver said of her daughter’s death.

    But those facts didn’t matter online, where Anastasia was swiftly added to a growing list of hundreds of children, teens, athletes and celebrities whose unexpected deaths and injuries have been incorrectly blamed on COVID-19 shots. Using the hashtag #diedsuddenly, online conspiracy theorists have flooded social media with news reports, obituaries and GoFundMe pages in recent months, leaving grieving families to wrestle with the lies.

    There’s the 37-year-old Brazilian television host who collapsed live on air because of a congenital heart problem. The 18-year-old unvaccinated bull rider who died from a rare disease. The 32-year-old actress who died from bacterial infection complications.

    The use of “died suddenly” — or a misspelled version of it — has surged more than 740% in tweets about vaccines over the past two months compared with the two previous months, the media intelligence firm Zignal Labs found in an analysis conducted for The Associated Press. The phrase’s explosion began with the late November debut of an online “documentary” by the same name, giving power to what experts say is a new and damaging shorthand.

    “It’s kind of in-group language, kind of a wink wink, nudge nudge,” said Renee DiResta, technical research manager at the Stanford Internet Observatory. “They’re taking something that is a relatively routine way of describing something — people do, in fact, die unexpectedly — and then by assigning a hashtag to it, they aggregate all of these incidents in one place.”

    The campaign causes harm beyond just the internet, epidemiologist Dr. Katelyn Jetelina said.

    “The real danger is that it ultimately leads to real world actions such as not vaccinating,” said Jetelina, who tracks and breaks down COVID data for her blog, “Your Local Epidemiologist.”

    Rigorous study and real-world evidence from hundreds of millions of administered shots prove that COVID-19 vaccines are safe and effective. Deaths caused by vaccination are extremely rare and the risks associated with not getting vaccinated are far higher than the risks of vaccination. But that hasn’t stopped conspiracy theorists from lobbing a variety of untrue accusations at the vaccines.

    The “Died Suddenly” film features a montage of headlines found on Google to falsely suggest they prove that sudden deaths have “never happened like this until now.” The film has amassed more than 20 million views on an alternative video sharing website, and its companion Twitter account posts about more deaths and injuries daily.

    An AP review of more than 100 tweets from the account in December and January found that claims about the cases being vaccine related were largely unsubstantiated and, in some cases, contradicted by public information. Some of the people featured died of genetic disorders, drug overdoses, flu complications or suicide. One died in a surfing accident.

    The filmmakers did not respond to specific questions from the AP, but instead issued a statement that referenced a “surge in sudden deaths” and a “PROVEN rate of excess deaths,” without providing data.

    The number of overall deaths in the U.S. has been higher than what would be expected since the start of the COVID-19 pandemic, in part because of the virus, overdoses and other causes. COVID-19 vaccines prevented nearly 2 million U.S. deaths in just their first year of use.

    Some deaths exploited in the film predate the pandemic. California writer Dolores Cruz published an essay in 2022 about grieving for her son, who died in a car crash in 2017. “Died Suddenly” used a screenshot of the headline in the film, portraying his death as vaccine related.

    “Without my permission, someone has taken his story to show one side, and I don’t appreciate that,” Cruz said in an interview. “His legacy and memory are being tarnished.”

    Others featured in the film survived — but have been forced to watch clips of their medical emergencies misrepresented around the world. For Brazilian TV presenter Rafael Silva, who collapsed while reporting on air because of a congenital heart abnormality, online disinformation prompted a wave of harassment even before the “Died Suddenly” film used the footage.

    “I received messages saying that I should have died to serve as an example for other people who were still thinking about getting the vaccine,” Silva said.

    Many of the posts online cite no evidence except that the person who died had been vaccinated at some point in the past, using a common disinformation strategy known as post hoc fallacy, according to Jetelina.

    “People assume that one thing caused another merely because the first thing preceded the other,” she said.

    Some claims about those who’ve suffered heart issues also weaponize a kernel of truth — that COVID-19 vaccines can cause rare heart inflammation issues, myocarditis or pericarditis, especially in young men. Medical experts say these cases are typically mild and the benefits of immunization far outweigh the risks.

    The narrative also has leveraged high-profile moments like the collapse of Buffalo Bills safety Damar Hamlin as he suffered cardiac arrest during a game last month after a fierce blow to his chest. But sudden cardiac arrest has long been a prominent cause of death in the U.S. — and medical experts agree the vaccine didn’t cause Hamlin’s injury.

    For some families, the misinformation represents a sideshow to their real focus: understanding why their loved ones died and preventing similar tragedies.

    Clint Erickson’s son, Tyler, died in September just before his 18th birthday while golfing near their home in Florida. The family knows his heart stopped but still doesn’t know exactly why. Tyler wasn’t vaccinated, but his story appeared in the “Died Suddenly” film nonetheless.

    “It bothers me, him being used in that way,” Erickson said. But “the biggest personal issue I have is trying to find an answer or a closure to what caused this.”

    Day-Weaver said it was upsetting to see people exploiting her daughter’s death when they knew nothing about her. They didn’t know that she loved people so much she would hug strangers at Walmart, or that she had just learned how to snap.

    Still, Day-Weaver said, “I wouldn’t wish the loss of a child on anybody. Even them.”

    ___

    Natália Scarabotto in Rio de Janeiro contributed to this report.


  • As elites arrive in Davos, conspiracy theories thrive online


    NEW YORK (AP) — When some of the world’s wealthiest and most influential figures gathered at the World Economic Forum’s annual meeting last year, sessions on climate change drew high-level discussions on topics such as carbon financing and sustainable food systems.

    But an entirely different narrative played out on the internet, where social media users claimed leaders wanted to force the population to eat insects instead of meat in the name of saving the environment.

    The annual event in the Swiss ski resort town of Davos, which opens Monday, has increasingly become a target of bizarre claims from a growing chorus of commentators who believe the forum involves a group of elites manipulating global events for their own benefit. Experts say what was once a conspiracy theory found in the internet’s underbelly has now hit the mainstream.

    “This isn’t a conspiracy that is playing out on the extreme fringes,” said Alex Friedfeld, a researcher with the Anti-Defamation League who studies anti-government extremism. “We’re seeing it on mainstream social media platforms being shared by regular Americans. We were seeing it being spread by mainstream media figures right on their prime time news, on their nightly networks.”

    The meeting draws heads of state, business executives, cultural trendsetters and representatives from international organizations to the luxe mountain town. Though it’s always unclear how much concrete action will emerge, the meeting is slated to take on pressing global issues from climate change and economic uncertainty to geopolitical instability and public health.

    Hundreds of public sessions are planned, but the four-day conference is also known for secretive backroom meetings and deal-making by business leaders. This gap between what’s shown to the public and what happens behind closed doors helps make the meeting a flashpoint for misinformation.

    “When we have very high levels of ambiguity, it’s very easy to fill in narratives,” said Kathleen Hall Jamieson, who is the director of the Annenberg Public Policy Center at the University of Pennsylvania and also studies misinformation.

    Theories about influential global leaders are not new, she said, but scrutiny of the forum and its chairman, Klaus Schwab, intensified in 2020 in the early days of the COVID-19 pandemic. That year, the theme of the annual meeting was “The Great Reset.” The initiative envisioned sweeping changes to how societies and economies would work to recover from the pandemic and build a more sustainable future.

    Now, in increasingly mainstream corners of the internet and on conservative talk shows, “The Great Reset” has become shorthand for what skeptics say is a reorganization of society, using global uncertainty as a guise to take away rights. Believers argue that measures including pandemic lockdowns and vaccine mandates are tools to consolidate power and undercut individual sovereignty.

    In a time of mounting anxiety, Jamieson says the public has become more susceptible to falsehoods, as conspiracy theories emerge as a tool to cut through the chaos. Researchers who monitor extremism say these beliefs are becoming more popular and more concerning.

    At a rally staged on the grounds of an upstate New York church last fall, a photo of Schwab was displayed on the center of a large screen alongside other “villains” accused of threatening American values. The crowd of thousands had gathered in a revivalist tent at a traveling roadshow used as a recruiting tool for an ascendant Christian nationalist movement. Participants discussed “The Great Reset,” among a host of other theories, as an assault on America’s foundations.

    The phrase was used more than 60 times across all programs on Fox News in 2022, according to one tally generated by the Internet Archive’s TV news database. That’s up from 30 mentions in 2021 and about 20 in 2020. It was discussed most frequently on “The Ingraham Angle” and “Tucker Carlson Tonight.”

    And in August, amid a defamation trial for calling the Sandy Hook Elementary School attack a hoax, Infowars host Alex Jones released a book called “The Great Reset: And The War For the World.” It’s described as an analysis of “the global elite’s international conspiracy to enslave humanity and all life on the planet.”

    As the World Economic Forum has become intertwined with this narrative, a steady stream of claims has plagued the organization. While some people offer legitimate criticisms of the forum — namely that it hosts wealthy executives who fly in on emissions-spewing corporate jets — others spread unverified or baseless information as fact.

    For example, a site known for spreading fabricated stories falsely claimed last month that Schwab publicly encouraged the decriminalization of sex between children and adults, using an invented quote and other baseless statements. Still, it drew tens of thousands of shares on Twitter and Facebook.

    Meanwhile, the popular claim that the forum wants people to replace meat with bugs is a distorted reference to an article once published on the organization’s website. In another instance, a widely shared post claimed without evidence that the forum had “appointed” U.S. Rep. Kevin McCarthy as speaker of the House before the actual vote had taken place.

    The concern, Friedfeld says, is that posts like these could introduce people to more fringe and dangerous conspiracy theories or even translate into real-world violence. Yann Zopf, head of media for the forum, says the organization has increased its monitoring of this kind of online activity and carefully watches for direct threats.

    “Creating all that kind of stuff can generate enemies that people believe are responsible for whatever bad thing is happening in the world,” Friedfeld said. “Once that happens, when you believe that things are happening in the world and a certain person or group of people is responsible for these attacks, all of a sudden, the idea of using violence to resist becomes more plausible.”

    ___

    Follow AP’s coverage of the World Economic Forum meeting at https://apnews.com/hub/world-economic-forum


  • Climate misinformation ‘rocket boosters’ on Musk’s Twitter


    WASHINGTON (AP) — Search for the word “climate” on Twitter and the first automatic recommendation isn’t “climate crisis” or “climate jobs” or even “climate change” but instead “climate scam.”

    Clicking on the recommendation yields dozens of posts denying the reality of climate change and making misleading claims about efforts to mitigate it.

    Such misinformation has flourished on Twitter since it was bought by Elon Musk last year, but the site isn’t the only one promoting content that scientists and environmental advocates say undercuts public support for policies intended to respond to a changing climate.

    “What’s happening in the information ecosystem poses a direct threat to action,” said Jennie King, head of climate research and response at the Institute for Strategic Dialogue, a London-based nonprofit. “It plants those seeds of doubt and makes people think maybe there isn’t scientific consensus.”

    The institute is part of a coalition of environmental advocacy groups that on Thursday released a report tracking climate change disinformation in the months before, during and after the U.N. climate summit in November.

    The report faulted social media platforms for, among other things, failing to enforce their own policies prohibiting climate change misinformation. It is only the latest to highlight the growing problem of climate misinformation on Twitter.

    Meta, which owns Facebook and Instagram, allowed nearly 4,000 advertisements on its site — most bought by fossil fuel companies — that dismissed the scientific consensus behind climate change and criticized efforts to respond to it, the researchers found.

    In some cases, the ads and the posts cited inflation and economic fears as reasons to oppose climate policies, while ignoring the costs of inaction. Researchers also found that a significant number of the accounts posting false claims about climate change also spread misinformation about U.S. elections, COVID-19 and vaccines.

    Twitter did not respond to questions from The Associated Press. A spokesperson for Meta cited the company’s policy prohibiting ads that have been proven false by its fact-checking partners, a group that includes the AP. The ads identified in the report had not been fact-checked.

    Under Musk, Twitter laid off thousands of employees and made changes to its content moderation that its critics said undercut the effort. In November, the company announced it would no longer enforce its policy against COVID-19 misinformation. Musk also reinstated many formerly banned users, including several who had spread misleading claims about climate change. Instances of hate speech and attacks on LGBTQ people soared.

    Tweets containing “climate scam” or other terms linked to climate change denial rose 300% in 2022, according to a report released last week by the nonprofit Advance Democracy. While Twitter had labeled some of the content as misinformation, many of the popular posts were not labeled.

    Musk’s new verification system could be part of the problem, according to a report from the Center for Countering Digital Hate, another organization that tracks online misinformation. Previously, the blue checkmarks were held by people in the public eye such as journalists, government officials or celebrities.

    Now, anyone willing to pay $8 a month can seek a checkmark. Posts and replies from verified accounts are given an automatic boost on the platform, making them more visible than content from users who don’t pay.

    When researchers at the Center for Countering Digital Hate analyzed accounts verified after Musk took over, they found they spread four times the amount of climate change misinformation compared with users verified before Musk’s purchase.

    Verification systems are typically created to assure users that the accounts they follow are legitimate. Twitter’s new system, however, makes no distinction between authoritative sources on climate change and anyone with $8 and an opinion, according to Imran Ahmed, the center’s chief executive.

    “We found,” Ahmed said, “it has in fact put rocket boosters on the spread of lies and disinformation.”

    __

    This story has been updated to correct the last name of Imran Ahmed.


  • On King’s holiday, daughter calls for bold action over words


    ATLANTA — America has honored Martin Luther King Jr. with a federal holiday for nearly four decades yet still hasn’t fully embraced and acted on the lessons from the slain civil rights leader, his youngest daughter said Monday.

    The Rev. Bernice King, who leads The King Center in Atlanta, said leaders — especially politicians — too often cheapen her father’s legacy into a “comfortable and convenient King” offering easy platitudes.

    “We love to quote King in and around the holiday. … But then we refuse to live King 365 days of the year,” she declared at the commemorative service at Ebenezer Baptist Church, where her father once preached.

    The service, sponsored by the center and held at Ebenezer annually, headlined observances of the 38th federal King holiday. King, gunned down in Memphis in 1968 as he advocated for better pay and working conditions for the city’s sanitation workers, would have celebrated his 94th birthday Sunday.

    Her voice rising and falling in cadences similar to her father’s, Bernice King bemoaned institutional and individual racism, economic and health care inequities, police violence, a militarized international order, hardline immigration structures and the climate crisis. She said she’s “exhausted, exasperated and, frankly, disappointed” to hear her father’s words about justice quoted so extensively alongside “so little progress” addressing society’s gravest problems.

    “He was God’s prophet sent to this nation and even the world to guide us and forewarn us. … A prophetic word calls for an inconvenience because it challenges us to change our hearts, our minds and our behavior,” Bernice King said. “Dr. King, the inconvenient King, puts some demands on us to change our ways.”

    President Joe Biden was scheduled Monday to address an MLK breakfast hosted in Washington by the Rev. Al Sharpton’s National Action Network. Sharpton got his start as a civil rights organizer in his teens as youth director of an anti-poverty project of King’s Southern Christian Leadership Conference.

    “This is a time for choosing,” Biden said, repeating themes from a speech he delivered Sunday at Ebenezer at the invitation of Sen. Raphael Warnock, the senior pastor at Ebenezer who recently won re-election to a full term as Georgia’s first Black U.S. senator.

    “Will we choose democracy over autocracy, or community over chaos? Love over hate?” Biden asked Monday. “These are the questions of our time that I ran for president to try to help answer. … Dr. King’s life and legacy — in my view — shows the way forward.”

    Other commemorations echoed Bernice King’s reminder and Biden’s allusions that the “Beloved Community” — Martin Luther King’s descriptor for a world in which all people are free from fear, discrimination, hunger and violence — remains elusive.

    In Boston, Mayor Michelle Wu talked about a fight for the truth in an era of hyper-partisanship and misinformation.

    “We’re battling not just two sides or left or right and a gradient in between that have to somehow come to compromise, but a growing movement of hate, abuse, extremism and white supremacy fueled by misinformation, fueled by conspiracy theories that are taking root at every level,” she said.

    Wu, the first woman and person of color elected mayor of Boston, said education restores trust. Quoting King, she called for overcoming the “fatigue of despair” to enact change. “It is sometimes in those moments when we feel most tired, most despairing, that we are just about to break through,” Wu told attendees at a memorial breakfast.

    Volunteers in Philadelphia held a “day of service” focused on gun violence prevention. The city has seen a surge in homicides: 516 people were killed last year and 562 the year before, the highest total in at least six decades.

    Some participants in the effort’s signature project, led by Children’s Hospital of Philadelphia, worked to assemble gun safety kits for public distribution. The kits include “gun cable locks and additional safety devices for childproofing,” according to organizers. They also include information about firearm storage, health and social services information, and coping in the aftermath of gun violence.

    Other kits being assembled highlighted Temple University Hospital’s “Fighting Chance” program and included materials to enable immediate response to victims at the scene of gunfire, organizers said. Recipients are to be trained in the use of the materials, which include tourniquets, gauze, chest seals and other items to treat critical wounds, they said.

    In Selma, Alabama, a seminal site in the civil rights movement, residents were commemorating King as they recover from a deadly storm system that moved across the South last week.

    King was not present at Selma’s Edmund Pettus Bridge for the initial march known as “Bloody Sunday,” when Alabama state troopers attacked and beat marchers in March 1965. But he joined a subsequent procession that successfully crossed the bridge toward the Capitol in Montgomery, punctuating efforts that pushed Congress to pass and President Lyndon Johnson to sign the Voting Rights Act of 1965.

    The Pettus Bridge was unscathed by Thursday’s storm.

    Maine’s first Black House speaker urged residents Monday to honor King’s memory by joining in acts of service.

    “His unshakable faith, powerful nonviolent activism and his vision for peace and justice in our world altered the course of history,” Rachel Talbot Ross said in a statement. Talbot Ross is also the daughter of Maine’s first Black lawmaker, and a former president of the Portland NAACP.

    “We must follow his example of leading with light and love and recommit ourselves to building a more compassionate, just and equal community,” she added.

    At Ebenezer, Warnock, who has led the congregation for 17 years, hailed his predecessor’s role in securing ballot access for Black Americans. But, like Bernice King, the senator warned against a reductive understanding of King.

    “Don’t just call him a civil rights leader. He was a faith leader,” Warnock said. “Faith was the foundation upon which he did everything he did. You don’t face down dogs and water hoses because you read Nietzsche or Niebuhr. You gotta tap into that thing, that God he said he met anew in Montgomery when someone threatened to bomb his house and kill his wife and his new child.”

    King, Warnock said, “left the comfort of a filter that made the whole world his parish,” turning faith into “the creative weapon of love and nonviolence.”

    While echoing Bernice King’s call for bolder public policy, Warnock noted some progress in his lifetime. As he’s done through two Senate campaigns, Warnock noted he was born a year after King’s assassination, when both of Georgia’s senators were staunch segregationists, including one Warnock described as loving “the Negro” as long as he was “in his place at the back door.”

    But, Warnock said, “Because of what Dr. King and because of what you did … I now sit in his seat.”

    — Associated Press journalists Will Weissert in Washington, David Sharp in Portland, Maine, and Ron Todt in Philadelphia contributed.


  • As elites gather in Davos, conspiracy theories gain traction online


    When some of the world’s wealthiest and most influential figures gathered at the World Economic Forum’s annual meeting last year, sessions on climate change drew high-level discussions on topics such as carbon financing and sustainable food systems.

    But an entirely different narrative played out on the internet, where social media users claimed leaders wanted to force the population to eat insects instead of meat in the name of saving the environment.

    The annual event in the Swiss ski resort town of Davos, which opens Monday, has increasingly become a target of bizarre claims from a growing chorus of commentators who believe the forum involves a group of elites manipulating global events for their own benefit. Experts say what was once a conspiracy theory found in the internet’s underbelly has now hit the mainstream.

    “This isn’t a conspiracy that is playing out on the extreme fringes,” said Alex Friedfeld, a researcher with the Anti-Defamation League who studies anti-government extremism. “We’re seeing it on mainstream social media platforms being shared by regular Americans. We were seeing it being spread by mainstream media figures right on their prime time news, on their nightly networks.”

    The meeting draws heads of state, business executives, cultural trendsetters and representatives from international organizations to the luxe mountain town. Though it’s always unclear how much concrete action will emerge, the meeting is slated to take on pressing global issues from climate change and economic uncertainty to geopolitical instability and public health.



    “The Great Reset”

    Hundreds of public sessions are planned, but the four-day conference is also known for secretive backroom meetings and deal-making by business leaders. This gap between what’s shown to the public and what happens behind closed doors helps make the meeting a flashpoint for misinformation.

    “When we have very high levels of ambiguity, it’s very easy to fill in narratives,” said Kathleen Hall Jamieson, director of the Annenberg Public Policy Center at the University of Pennsylvania, who studies misinformation.

    Theories about influential global leaders are not new, she said, but scrutiny of the forum and its chairman, Klaus Schwab, intensified in 2020 in the early days of the COVID-19 pandemic. That year, the theme of the annual meeting was “The Great Reset.” The initiative envisioned sweeping changes to how societies and economies would work to recover from the pandemic and build a more sustainable future.

    Now, in increasingly mainstream corners of the internet and on conservative talk shows, “The Great Reset” has become shorthand for what skeptics say is a reorganization of society, using global uncertainty as a guise to take away rights. Believers argue that measures including pandemic lockdowns and vaccine mandates are tools to consolidate power and undercut individual sovereignty.

    Extremist beliefs take root

    In a time of mounting anxiety, Jamieson says the public has become more susceptible to falsehoods, as conspiracy theories emerge as a tool to cut through the chaos. Researchers who monitor extremism say these beliefs are becoming more popular and more concerning.

    At a rally staged on the grounds of an upstate New York church last fall, a photo of Schwab was displayed on the center of a large screen alongside other “villains” accused of threatening American values. The crowd of thousands had gathered in a revivalist tent at a traveling roadshow used as a recruiting tool for an ascendant Christian nationalist movement. Participants discussed “The Great Reset,” among a host of other theories, as an assault on America’s foundations.

    The phrase was used more than 60 times across all programs on Fox News in 2022, according to one tally generated by the Internet Archive’s TV news database. That’s up from 30 mentions in 2021 and about 20 in 2020. It was discussed most frequently on “The Ingraham Angle” and “Tucker Carlson Tonight.”

    And in August, amid a defamation trial for calling the Sandy Hook Elementary School attack a hoax, Infowars host Alex Jones released a book called “The Great Reset: And The War For the World.” It’s described as an analysis of “the global elite’s international conspiracy to enslave humanity and all life on the planet.”



    As the World Economic Forum has become intertwined with this narrative, a steady stream of claims has plagued the organization. While some people offer legitimate criticisms of the forum — namely that it hosts wealthy executives who fly in on emissions-spewing corporate jets — others spread unverified or baseless information as fact.

    For example, a site known for spreading fabricated stories falsely claimed last month that Schwab publicly encouraged the decriminalization of sex between children and adults, using an invented quote and other baseless statements. Still, it drew tens of thousands of shares on Twitter and Facebook.

    Bug eaters?

    Meanwhile, the popular claim that the forum wants people to replace meat with bugs is a distorted reference to an article once published on the organization’s website. In another instance, a widely shared post claimed without evidence that the forum had “appointed” U.S. Rep. Kevin McCarthy as speaker of the House before the actual vote had taken place.

    The concern, Friedfeld says, is that posts like these could introduce people to more fringe and dangerous conspiracy theories or even translate into real-world violence. Yann Zopf, head of media for the forum, says the organization has increased its monitoring of this kind of online activity and carefully watches for direct threats.

    “Creating all that kind of stuff can generate enemies that people believe are responsible for whatever bad thing is happening in the world,” Friedfeld said. “Once that happens, when you believe that things are happening in the world and a certain person or group of people is responsible for these attacks, all of a sudden, the idea of using violence to resist becomes more plausible.”


  • Holmes’ former partner faces sentencing in Theranos case


    A former Theranos executive learns Wednesday whether he will be punished as severely as his former lover and business partner for peddling the company’s bogus blood-testing technology that duped investors and endangered patients.

    The sentencing for Ramesh “Sunny” Balwani, who was convicted in July of fraud and conspiracy, comes less than three weeks after Elizabeth Holmes, the company’s founder and CEO, received more than 11 years in prison for her role in the scheme. The scandal revolved around the company’s false claims to have developed a medical device that could scan for hundreds of diseases and other potential problems with just a few drops of blood taken with a finger prick.

    The case threw a bright light on Silicon Valley’s dark side, exposing how its culture of hype and boundless ambition could veer into lies.

    Holmes, 38, could have gotten up to 20 years in prison — a penalty that U.S. District Judge Edward Davila could now impose on Balwani, who spent six years as Theranos’ chief operating officer while remaining romantically involved with Holmes until a bitter split in 2016.

    While on the witness stand in her trial, Holmes accused Balwani, 57, of manipulating her through years of emotional and sexual abuse. Balwani’s attorney has denied the allegations.

    The two trials had somewhat different outcomes. Unlike Balwani, Holmes was acquitted on several charges of defrauding and conspiring against people who paid for Theranos blood tests that produced misleading results and could have pointed patients toward the wrong treatment. The jury in Holmes’ trial also deadlocked on three charges.

    Balwani was convicted on all 12 felony counts, and his lawyers contend he deserves a far more lenient sentence of just four to 10 months in prison, preferably in home confinement. Prosecutors for the Justice Department are seeking 15 years. A probation report recommends nine years.

    Duncan Levin, a former federal prosecutor who is now a defense attorney, described Balwani’s bid for a light sentence as “utterly unrealistic.” Levin suspects the judge may give greater weight to the Justice Department and the probation office recommendations, which mirror the sentences those agencies sought for Holmes.

    The judge ultimately gave her 11 1/4 years in prison and recommended that the sentence be served in a low-security facility in Bryan, Texas.

    The Justice Department “has now conceded that both defendants deserve the same sentence, even though Balwani was convicted for far more counts,” Levin said. Since Holmes got an 11-year sentence, “it follows logically that he will get the same sentence.”

    Federal prosecutors also want the judge to order Balwani to pay $804 million in restitution to defrauded investors — the same amount sought from Holmes. Davila deferred a decision on restitution during Holmes’ Nov. 18 sentencing until an unspecified future date.

    In court documents, Balwani’s lawyers painted him as a hardworking immigrant who moved from India to the U.S. during the 1980s to become the first member of his family to attend college. He graduated from the University of Texas in 1990 with a degree in information systems.

    He later moved to Silicon Valley, where he first worked as a computer programmer for Microsoft before founding an online startup that he sold for millions of dollars during the dot-com boom of the 1990s.

    Balwani and Holmes met around the same time she dropped out of Stanford University to start Theranos in 2003. He became enthralled with her and her quest to revolutionize health care.

    Balwani’s lawyers said he invested about $5 million in a stake in Theranos that eventually became worth about $500 million on paper — a fraction of Holmes’ one-time fortune of $4.5 billion.

    That wealth evaporated after Theranos began to unravel in 2015 amid revelations that its blood-testing technology never worked as Holmes had boasted in glowing magazine articles that likened her to Silicon Valley visionaries such as Apple co-founder Steve Jobs.

    Before Theranos’ downfall, Holmes teamed up with Balwani to raise nearly $1 billion from deep-pocketed investors that included software mogul Larry Ellison and media magnate Rupert Murdoch.

    “Mr. Balwani is not the same as Elizabeth Holmes,” his lawyers wrote in a memo to the judge. “He actually invested millions of dollars of his own money; he never sought fame or recognition; and he has a long history of quietly giving to those less fortunate.” Balwani’s lawyers also asserted that Holmes “was dramatically more culpable” for the Theranos fraud.

    Echoing similar claims made by Holmes’ lawyers before her sentencing, Balwani’s attorneys also argued that he has been adequately punished by the intense media coverage of Theranos, which has been the subject of a book, documentary and award-winning TV series.

    Balwani “has lost his career, his reputation and his ability to meaningfully work again,” his lawyers wrote.

    Federal prosecutors cast Balwani as a ruthless, power-hungry accomplice in crimes that ripped off investors and imperiled people who received flawed results. The blood tests were to be available in a partnership with Walgreens that Balwani helped engineer.

    “Balwani presented a fake story about Theranos’ technology and financial stability day after day in meeting after meeting,” the prosecutors wrote in their memo to the judge. “Balwani maintained this façade of accomplishments, after making the calculated decision that honesty would destroy Theranos.”


  • As Musk is learning, content moderation is a messy job


    Now that he’s back on Twitter, neo-Nazi Andrew Anglin wants somebody to explain the rules.

    Anglin, the founder of an infamous neo-Nazi website, was reinstated Thursday, one of many previously banned users to benefit from an amnesty granted by Twitter’s new owner Elon Musk. The next day, Musk banished Ye, the rapper formerly known as Kanye West, after he posted a swastika with a Star of David in it.

    “That’s cool,” Anglin tweeted Friday. “I mean, whatever the rules are, people will follow them. We just need to know what the rules are.”

    Ask Musk. Since the world’s richest man paid $44 billion for Twitter, the platform has struggled to define its rules for misinformation and hate speech, issued conflicting and contradictory announcements, and failed to fully address what researchers say is a troubling rise in hate speech.

    As the “chief twit” may be learning, running a global platform with nearly 240 million active daily users requires more than good algorithms and often demands imperfect solutions to messy situations — tough choices that must ultimately be made by a human and are sure to displease someone.

    A self-described free speech absolutist, Musk has said he wants to make Twitter a global digital town square. But he also said he wouldn’t make major decisions about content or about restoring banned accounts before setting up a “content moderation council” with diverse viewpoints.

    He soon changed his mind after polling users on Twitter, and offered reinstatement to a long list of formerly banned users including ex-President Donald Trump, Ye, the satire site The Babylon Bee, the comedian Kathy Griffin and Anglin, the neo-Nazi.

    And while Musk’s own tweets suggested he would allow all legal content on the platform, Ye’s banishment shows that’s not entirely the case. The swastika image posted by the rapper falls in the “lawful but awful” category that often bedevils content moderators, according to Eric Goldman, a technology law expert and professor at Santa Clara University law school.

    While Europe has imposed rules requiring social media platforms to create policies on misinformation and hate speech, Goldman noted that in the U.S. at least, loose regulations allow Musk to run Twitter as he sees fit, despite his inconsistent approach.

    “What Musk is doing with Twitter is completely permissible under U.S. law,” Goldman said.

    Pressure from the EU may force Musk to lay out his policies to ensure he is complying with the new law, which takes effect next year. Last month, a senior EU official warned Musk that Twitter would have to improve its efforts to combat hate speech and misinformation; failure to comply could lead to huge fines.

    In another confusing move, Twitter announced in late November that it would end its policy prohibiting COVID-19 misinformation. Days later, it posted an update claiming that “None of our policies have changed.”

    On Friday, Musk revealed what he said was the inside story of Twitter’s decision in 2020 to limit the spread of a New York Post story about Hunter Biden’s laptop.

    Twitter initially blocked links to the story on its platform, citing concerns that it contained material obtained through computer hacking. That decision was reversed after it was criticized by then-Twitter CEO Jack Dorsey. Facebook also took actions to limit the story’s spread.

    The information revealed by Musk included Twitter’s decision to delete a handful of tweets after receiving a request from Joe Biden’s campaign. The tweets included nude photos of Hunter Biden that had been shared without his consent — a violation of Twitter’s rules against revenge porn.

    Instead of revealing nefarious conduct or collusion with Democrats, Musk’s revelation highlighted the kind of difficult content moderation decisions that he will now face.

    “Impossible, messy and squishy decisions” are unavoidable, according to Yoel Roth, Twitter’s former head of trust and safety who resigned a few weeks into Musk’s ownership.

    While far from perfect, the old Twitter strove to be transparent with users and steady in enforcing its rules, Roth said. That changed under Musk, he told a Knight Foundation forum this week.

    “When push came to shove, when you buy a $44 billion thing, you get to have the final say in how that $44 billion thing is governed,” Roth said.

    While much of the attention has been on Twitter’s moves in the U.S., the cutbacks to content-moderation staffing are affecting other parts of the world too, according to activists with the #StopToxicTwitter campaign.

    “We’re not talking about people not having resilience to hear things that hurt feelings,” said Thenmozhi Soundararajan, executive director of Equality Labs, which works to combat caste-based discrimination in South Asia. “We are talking about the prevention of dangerous genocidal hate speech that can lead to mass atrocities.”

    Soundararajan’s organization sits on Twitter’s Trust and Safety Council, which hasn’t met since Musk took over. She said “millions of Indians are terrified about who is going to get reinstated,” and the company has stopped responding to the group’s concerns.

    “So what happens if there’s another call for violence? Like, do I have to tag Elon Musk and hope that he’s going to address the pogrom?” Soundararajan said.

    Instances of hate speech and racial epithets soared on Twitter after Musk’s purchase as some users sought to test the new owner’s limits. The number of tweets containing hateful terms continues to rise, according to a report published Friday by the Center for Countering Digital Hate, a group that tracks online hate and extremism.

    Musk has said Twitter has reduced the spread of tweets containing hate speech, making them harder to find unless a user searches for them. But that failed to satisfy the center’s CEO, Imran Ahmed, who called the rise in hate speech a “clear failure to meet his own self-proclaimed standards.”

    Immediately after Musk’s takeover and the firing of much of Twitter’s staff, researchers who previously had flagged harmful hate speech or misinformation to the platform reported that their pleas were going unanswered.

    Jesse Littlewood, vice president for campaigns at Common Cause, said his group reached out to Twitter last week about a tweet from U.S. Rep. Marjorie Taylor Greene that alleged election fraud in Arizona. Musk had reinstated Greene’s personal account after she was kicked off Twitter for spreading COVID-19 misinformation.

    This time, Twitter was quick to respond, telling Common Cause that the tweet didn’t violate any rules and would stay up — even though Twitter requires the labeling or removal of content that spreads false or misleading claims about election results.

    Twitter gave Littlewood no explanation for why it wasn’t following its own rules.

    “I find that pretty confounding,” Littlewood said.

    Twitter did not respond to messages seeking comment for this story. Musk has defended the platform’s sometimes herky-jerky moves since he took over, and said mistakes will happen as it evolves. “We will do lots of dumb things,” he tweeted.

    To Musk’s many online fans, the disarray is a feature, not a bug, of the site under its new ownership, and a reflection of the free speech mecca they hope Twitter will be.

    “I love Elon Twitter so far,” tweeted a user who goes by the name Some Dude. “The chaos is glorious!”


  • Twitter ends enforcement of COVID misinformation policy


    Twitter will no longer enforce its policy against COVID-19 misinformation, raising concerns among public health experts and social media researchers that the change could have serious consequences if it discourages vaccination and other efforts to combat the still-spreading virus.

    Eagle-eyed users spotted the change Monday night, noting that a one-sentence update had been made to Twitter’s online rules: “Effective November 23, 2022, Twitter is no longer enforcing the COVID-19 misleading information policy.”

    By Tuesday, some Twitter accounts were testing the new boundaries and celebrating the platform’s hands-off approach, which comes after Twitter was purchased by Elon Musk.

    “This policy was used to silence people across the world who questioned the media narrative surrounding the virus and treatment options,” tweeted Dr. Simone Gold, a physician and leading purveyor of COVID-19 misinformation. “A win for free speech and medical freedom!”

    Twitter’s decision to no longer remove false claims about the safety of COVID-19 vaccines disappointed public health officials, however, who said it could lead to more false claims about the virus, or the safety and effectiveness of vaccines.

    “Bad news,” tweeted epidemiologist Eric Feigl-Ding, who urged people not to flee Twitter but to keep up the fight against bad information about the virus. “Stay folks — do NOT cede the town square to them!”

    While Twitter’s efforts to stop false claims about COVID weren’t perfect, the company’s decision to reverse course is an abdication of its duty to its users, said Paul Russo, a social media researcher and dean of the Katz School of Science and Health at Yeshiva University in New York.

    Russo added that it’s the latest of several recent moves by Twitter that could ultimately scare away some users and even advertisers. Some big names in business have already paused their ads on Twitter over questions about its direction under Musk.

    “It is 100% the responsibility of the platform to protect its users from harmful content,” Russo said. “This is absolutely unacceptable.”

    The virus, meanwhile, continues to spread. Nationally, new COVID cases averaged nearly 38,800 a day as of Monday, according to data from Johns Hopkins University — far lower than last winter but a vast undercount because of reduced testing and reporting. About 28,100 people with COVID were hospitalized daily and about 313 died, according to the most recent federal daily averages.

    Cases and deaths were up from two weeks earlier. Yet a fifth of the U.S. population hasn’t been vaccinated, most Americans haven’t gotten the latest boosters, and many have stopped wearing masks.

    Musk, who has himself spread COVID misinformation on Twitter, has signaled an interest in rolling back many of the platform’s previous rules meant to combat misinformation.

    Last week, Musk said he would grant “amnesty” to account holders who had been kicked off Twitter. He’s also reinstated the accounts for several people who spread COVID misinformation, including that of Rep. Marjorie Taylor Greene, whose personal account was suspended this year for repeatedly violating Twitter’s COVID rules.

    Greene’s most recent tweets include ones questioning the effectiveness of masks and making baseless claims about the safety of COVID vaccines.

    Since the pandemic began, platforms like Twitter and Facebook have struggled to respond to a torrent of misinformation about the virus, its origins and the response to it.

    Under the policy enacted in January 2020, Twitter prohibited false claims about COVID-19 that the platform determined could lead to real-world harms. More than 11,000 accounts were suspended for violating the rules, and nearly 100,000 pieces of content were removed from the platform, according to Twitter’s latest numbers.

    Despite its rules prohibiting COVID misinformation, Twitter has struggled with enforcement. Posts making bogus claims about home remedies or vaccines could still be found, and it was difficult on Tuesday to identify exactly how the platform’s rules may have changed.

    Messages left with San Francisco-based Twitter seeking more information about its policy on COVID-19 misinformation were not immediately returned Tuesday.

    A search for common terms associated with COVID misinformation on Tuesday yielded lots of misleading content, but also automatic links to helpful resources about the virus as well as authoritative sources like the Centers for Disease Control and Prevention.

    Dr. Ashish Jha, the White House COVID-19 coordinator, said Tuesday that the problem of COVID-19 misinformation is far larger than one platform, and that policies prohibiting COVID misinformation weren’t the best solution anyway.

    Speaking at a Knight Foundation forum Tuesday, Jha said misinformation about the virus spread for a number of reasons, including legitimate uncertainty about a deadly illness. Simply prohibiting certain kinds of content isn’t going to help people find good information, or make them feel more confident about what they’re hearing from their medical providers, he said.

    “I think we all have a collective responsibility,” Jha said of combating misinformation about COVID. “The consequences of not getting this right — of spreading that misinformation — is literally tens of thousands of people dying unnecessarily.”


  • EU warns Musk to beef up Twitter controls ahead of new rules


    LONDON (AP) — A top European Union official warned Elon Musk on Wednesday that Twitter needs to beef up measures to protect users from hate speech, misinformation and other harmful content to avoid violating new rules that threaten tech giants with big fines or even a ban in the 27-nation bloc.

    Thierry Breton, the EU’s commissioner for digital policy, told the billionaire Tesla CEO that the social media platform will have to significantly increase efforts to comply with the new rules, known as the Digital Services Act, set to take effect next year.

    The two held a video call to discuss Twitter’s preparedness for the law, which will require tech companies to better police their platforms for material that, for instance, promotes terrorism, child sexual abuse, hate speech and commercial scams.

    It’s part of a new digital rulebook that has made Europe the global leader in the push to rein in the power of social media companies, potentially setting up a clash with Musk’s vision for a more unfettered Twitter. U.S. Treasury Secretary Janet Yellen also said Wednesday that an investigation into Musk’s $44 billion purchase was not off the table.

    Breton said he was pleased to hear that Musk considers the EU rules “a sensible approach to implement on a worldwide basis.”

    “But let’s also be clear that there is still huge work ahead,” Breton said, according to a readout of the call released by his office. “Twitter will have to implement transparent user policies, significantly reinforce content moderation and protect freedom of speech, tackle disinformation with resolve, and limit targeted advertising.”

    After Musk, a self-described “free speech absolutist,” bought Twitter a month ago, groups that monitor the platform for racist, antisemitic and other toxic speech, such as the Cyber Civil Rights Initiative, say it’s been on the rise on the world’s de facto digital public square.

    Musk has signaled an interest in rolling back many of Twitter’s previous rules meant to combat misinformation, most recently by abandoning enforcement of its COVID-19 misinformation policy. He has already reinstated some high-profile accounts that had violated Twitter’s content rules and promised a “general amnesty” restoring most suspended accounts starting this week.

    Twitter didn’t respond to an email request for comment. In a separate blog post Wednesday, the company said “human safety” is its top priority and that its trust and safety team “continues its diligent work to keep the platform safe from hateful conduct, abusive behavior, and any violation of Twitter’s rules.”

    Musk, however, has laid off half the company’s 7,500-person workforce, along with an untold number of contractors responsible for content moderation. Many others have resigned, including the company’s head of trust and safety.

    In the call Wednesday, Musk agreed to let the EU’s executive Commission carry out a “stress test” at Twitter’s headquarters early next year to help the platform comply with the new rules ahead of schedule, the readout said.

    That will also help the company prepare for an “extensive independent audit” as required by the new law, which is aimed at protecting internet users from illegal content and reducing the spread of harmful but legal material.

    Violations could result in huge fines of up to 6% of a company’s annual global revenue or even a ban on operating in the European Union’s single market.

    Along with European regulators, Musk risks running afoul of Apple and Google, which power most of the world’s smartphones. Both have stringent policies against misinformation, hate speech and other misconduct, previously enforced to boot apps like the social media platform Parler from their devices. Apps must also meet certain data security, privacy and performance standards.

    Musk tweeted without providing evidence this week that Apple “threatened to withhold Twitter from its App Store, but won’t tell us why.” Apple hasn’t commented but Musk backtracked on his claim Wednesday, saying he met with Apple CEO Tim Cook who “was clear that Apple never considered” removing Twitter.

    Meanwhile, U.S. Treasury Secretary Janet Yellen walked back her statements about whether Musk’s purchase of Twitter warrants government review.

    “I misspoke,” she said at The New York Times’ DealBook Summit on Wednesday, referring to a CBS interview this month where she said there was “no basis” to review the Twitter purchase.

    The Treasury secretary oversees the Committee on Foreign Investment in the United States, an interagency committee that investigates the national security risks from foreign investments in American firms.

    “If there are such risks, it would be appropriate for the Treasury to have a look,” Yellen told The New York Times.

    She declined to confirm whether CFIUS is currently investigating Musk’s Twitter purchase.

    Billionaire Saudi Prince Alwaleed bin Talal is, through his investment company, Twitter’s biggest shareholder after Musk.

    ___

    Associated Press writers Fatima Hussein in Washington and Matt O’Brien in Providence, Rhode Island, contributed.


  • Twitter ends enforcement of COVID misinformation policy


    Twitter will no longer enforce its policy against COVID-19 misinformation, raising concerns among public health experts and social media researchers that the change could have serious consequences if it discourages vaccination and other efforts to combat the still-spreading virus.

Eagle-eyed users spotted the change Monday night, noting that a one-sentence update had been made to Twitter’s online rules: “Effective November 23, 2022, Twitter is no longer enforcing the COVID-19 misleading information policy.”

    By Tuesday, some Twitter accounts were testing the new boundaries and celebrating the platform’s hands-off approach, which comes after Twitter was purchased by Elon Musk.

    “This policy was used to silence people across the world who questioned the media narrative surrounding the virus and treatment options,” tweeted Dr. Simone Gold, a physician and leading purveyor of COVID-19 misinformation. “A win for free speech and medical freedom!”

Twitter’s decision to stop removing false claims disappointed public health officials, however, who said it could fuel more misinformation about the virus and about the safety and effectiveness of vaccines.

    “Bad news,” tweeted epidemiologist Eric Feigl-Ding, who urged people not to flee Twitter but to keep up the fight against bad information about the virus. “Stay folks — do NOT cede the town square to them!”

    While Twitter’s efforts to stop false claims about COVID weren’t perfect, the company’s decision to reverse course is an abdication of its duty to its users, said Paul Russo, a social media researcher and dean of the Katz School of Science and Health at Yeshiva University in New York.

    Russo added that it’s the latest of several recent moves by Twitter that could ultimately scare away some users and even advertisers. Some big names in business have already paused their ads on Twitter over questions about its direction under Musk.

    “It is 100% the responsibility of the platform to protect its users from harmful content,” Russo said. “This is absolutely unacceptable.”

    The virus, meanwhile, continues to spread. Nationally, new COVID cases averaged nearly 38,800 a day as of Monday, according to data from Johns Hopkins University — far lower than last winter but a vast undercount because of reduced testing and reporting. About 28,100 people with COVID were hospitalized daily and about 313 died, according to the most recent federal daily averages.

    Cases and deaths were up from two weeks earlier. Yet a fifth of the U.S. population hasn’t been vaccinated, most Americans haven’t gotten the latest boosters, and many have stopped wearing masks.

    Musk, who has himself spread COVID misinformation on Twitter, has signaled an interest in rolling back many of the platform’s previous rules meant to combat misinformation.

Last week, Musk said he would grant “amnesty” to account holders who had been kicked off Twitter. He has also reinstated the accounts of several people who spread COVID misinformation, including Rep. Marjorie Taylor Greene, whose personal account was suspended this year for repeatedly violating Twitter’s COVID rules.

    Greene’s most recent tweets include ones questioning the effectiveness of masks and making baseless claims about the safety of COVID vaccines.

    Since the pandemic began, platforms like Twitter and Facebook have struggled to respond to a torrent of misinformation about the virus, its origins and the response to it.

    Under the policy enacted in January 2020, Twitter prohibited false claims about COVID-19 that the platform determined could lead to real-world harms. More than 11,000 accounts were suspended for violating the rules, and nearly 100,000 pieces of content were removed from the platform, according to Twitter’s latest numbers.

    Despite its rules prohibiting COVID misinformation, Twitter has struggled with enforcement. Posts making bogus claims about home remedies or vaccines could still be found, and it was difficult on Tuesday to identify exactly how the platform’s rules may have changed.

    Messages left with San Francisco-based Twitter seeking more information about its policy on COVID-19 misinformation were not immediately returned Tuesday.

    A search for common terms associated with COVID misinformation on Tuesday yielded lots of misleading content, but also automatic links to helpful resources about the virus as well as authoritative sources like the Centers for Disease Control and Prevention.

    Dr. Ashish Jha, the White House COVID-19 coordinator, said Tuesday that the problem of COVID-19 misinformation is far larger than one platform, and that policies prohibiting COVID misinformation weren’t the best solution anyway.

    Speaking at a Knight Foundation forum Tuesday, Jha said misinformation about the virus spread for a number of reasons, including legitimate uncertainty about a deadly illness. Simply prohibiting certain kinds of content isn’t going to help people find good information, or make them feel more confident about what they’re hearing from their medical providers, he said.

    “I think we all have a collective responsibility,” Jha said of combating misinformation about COVID. “The consequences of not getting this right — of spreading that misinformation — is literally tens of thousands of people dying unnecessarily.”
