ReportWire

Tag: Misinformation

  • Deep dive into Meta’s algorithms shows that America’s political polarization has no easy fix

    WASHINGTON — The powerful algorithms used by Facebook and Instagram to deliver content to users have increasingly been blamed for amplifying misinformation and political polarization. But a series of groundbreaking studies published Thursday suggest addressing these challenges is not as simple as tweaking the platforms’ software.

    The four research papers, published in Science and Nature, also reveal the extent of political echo chambers on Facebook, where conservatives and liberals rely on divergent sources of information, interact with opposing groups and consume distinctly different amounts of misinformation.

    Algorithms are the automated systems that social media platforms use to suggest content for users by making assumptions based on the groups, friends, topics and headlines a user has clicked on in the past. While they excel at keeping users engaged, algorithms have been criticized for amplifying misinformation and ideological content that has worsened the country’s political divisions.

    Proposals to regulate these systems are among the most discussed ideas for addressing social media’s role in spreading misinformation and encouraging polarization. But when the researchers changed the algorithms for some users during the 2020 election, they saw little difference.

    “We find that algorithms are extremely influential in people’s on-platform experiences and there is significant ideological segregation in political news exposure,” said Talia Jomini Stroud, director of the Center for Media Engagement at the University of Texas at Austin and one of the leaders of the studies. “We also find that popular proposals to change social media algorithms did not sway political attitudes.”

    While political differences are a function of any healthy democracy, polarization occurs when those differences begin to pull citizens apart from each other and the societal bonds they share. It can undermine faith in democratic institutions and the free press.

    Significant division can undermine confidence in democracy or democratic institutions and lead to “affective polarization,” when citizens begin to view each other more as enemies than legitimate opposition. It’s a situation that can lead to violence, as it did when supporters of then-President Donald Trump attacked the U.S. Capitol on Jan. 6, 2021.

    To conduct the analysis, researchers obtained unprecedented access to Facebook and Instagram data from the 2020 election through a collaboration with Meta, the platforms’ owners. The researchers say Meta exerted no control over their findings.

    When they replaced the algorithm with a simple chronological listing of posts from friends — an option Facebook recently made available to users — it had no measurable impact on polarization. When they turned off Facebook’s reshare option, which allows users to quickly share viral posts, users saw significantly less news from untrustworthy sources and less political news overall, but there were no significant changes to their political attitudes.

    Likewise, reducing the content that Facebook users get from accounts with the same ideological alignment had no significant effect on polarization, susceptibility to misinformation or extremist views.

    Together, the findings suggest that Facebook users seek out content that aligns with their views and that the algorithms help by “making it easier for people to do what they’re inclined to do,” according to David Lazer, a Northeastern University professor who worked on all four papers.

    Eliminating the algorithm altogether drastically reduced the time users spent on either Facebook or Instagram while increasing their time on TikTok, YouTube or other sites, showing just how important these systems are to Meta in the increasingly crowded social media landscape.

    In response to the papers, Meta’s president for global affairs, Nick Clegg, said the findings showed “there is little evidence that key features of Meta’s platforms alone cause harmful ‘affective’ polarization or has any meaningful impact on key political attitudes, beliefs or behaviors.”

    Katie Harbath, Facebook’s former director of public policy, said they showed the need for greater research on social media and challenged assumptions about the role social media plays in American democracy. Harbath was not involved in the research.

    “People want a simple solution and what these studies show is that it’s not simple,” said Harbath, a fellow at the Bipartisan Policy Center and the CEO of the tech and politics firm Anchor Change. “To me, it reinforces that when it comes to polarization, or people’s political beliefs, there’s a lot more that goes into this than social media.”

One organization that’s been critical of Meta’s role in spreading misinformation about elections and voting called the research “limited” and noted that it was only a snapshot taken in the midst of an election, and didn’t take into account the effects of years of social media misinformation.

Free Press, a non-profit that advocates for civil rights in tech and media, called Meta’s use of the research “calculated spin.”

“Meta execs are seizing on limited research as evidence that they shouldn’t share blame for increasing political polarization and violence,” Nora Benavidez, the group’s senior counsel and director of digital justice and civil rights, said in a statement. “Studies that Meta endorses, which look piecemeal at narrow time periods, shouldn’t serve as excuses for allowing lies to spread.”

    The four studies also revealed the extent of the ideological differences of Facebook users and the different ways that conservatives and liberals use the platform to get news and information about politics.

    Conservative Facebook users are more likely to consume content that has been labeled misinformation by fact-checkers. They also have more sources to choose from. The analysis found that among the websites included in political Facebook posts, far more cater to conservatives than liberals.

    Overall, 97% of the political news sources on Facebook identified by fact-checkers as having spread misinformation were more popular with conservatives than liberals.

    The authors of the papers acknowledged some limitations to their work. While they found that changing Facebook’s algorithms had little impact on polarization, they note that the study only covered a few months during the 2020 election, and therefore cannot assess the long-term impact that algorithms have had since their use began years ago.

    They also noted that most people get their news and information from a variety of sources — television, radio, the internet and word-of-mouth — and that those interactions could affect people’s opinions, too. Many in the United States blame the news media for worsening polarization.

    To complete their analyses, the researchers pored over data from millions of users of Facebook and Instagram and surveyed specific users who agreed to participate. All identifying information about specific users was stripped out for privacy reasons.

    Lazer, the Northeastern professor, said he was at first skeptical that Meta would give the researchers the access they needed, but was pleasantly surprised. He said the conditions imposed by the company were related to reasonable legal and privacy concerns. More studies from the collaboration will be released in coming months.

    “There is no study like this,” he said of the research published Thursday. “There’s been a lot of rhetoric about this, but in many ways the research has been quite limited.”

  • Robert F. Kennedy Jr. testifies at House censorship hearing, denies antisemitic comments

    Robert F. Kennedy Jr. worked to defend himself Thursday against accusations that he traffics in racist and hateful online conspiracy theories, testifying at a House hearing on government censorship despite requests from outside groups to disinvite the Democratic presidential candidate after his recent antisemitic remarks.

    The Republican-led Select Subcommittee on the Weaponization of the Federal Government is amplifying GOP claims that conservatives and others are being unfairly targeted by technology companies that routinely work with the government to try to stem the spread of disinformation online. Democrats argued that free speech comes with responsibilities not to spread misinformation, particularly when it fans violence.

    In opening remarks, Kennedy invoked his famous family’s legacy in decrying the complaints of racism and antisemitism against him.

    “This is an attempt to censor a censorship hearing,” said Kennedy, the son of Robert F. Kennedy and the nephew of President John F. Kennedy.

Robert F. Kennedy Jr. is sworn in during a House Judiciary Subcommittee on the Weaponization of the Federal Government hearing on Thursday, July 20, 2023.

    Al Drago/Bloomberg via Getty Images


    Growing animated at times, Kennedy defended his statements, which have delved into race, vaccine safety and other issues, as neither “racist or antisemitic.” He said his family has long believed in the First Amendment right to free speech.

    “The First Amendment was not written for easy speech,” Kennedy said. “It was written for the speech that nobody likes you for.”

Republicans are eager to elevate Kennedy after he announced in April he was mounting a long-shot Democratic primary challenge to President Biden. Kennedy’s presidential campaign chairman, Dennis Kucinich, the former congressman and past presidential contender, sat in the front row behind him during the more than three-hour hearing.

    The Big Tech companies have adamantly denied the GOP assertions and say they enforce their rules impartially for everyone regardless of ideology or political affiliation. And researchers have not found widespread evidence that social media companies are biased against conservative news, posts or materials.

    The top Democrat on the House panel, Del. Stacey Plaskett of the Virgin Islands, said the Republican majority was giving a platform to Kennedy and others to promote conspiracy theories and a rallying cry for “bigotry and hate.”

    “This is not the kind of free speech I know,” Plaskett said.

    Plaskett warned against misinformation from Russia and other U.S. adversaries who have interfered in American elections and are expected to meddle again in the 2024 election.

    Often emotional and heated, Thursday’s hearing came as subcommittee chairman Jim Jordan, a Republican of Ohio, portrayed what he claimed were examples of censorship, including a White House request to Twitter to remove a race-based post from Kennedy about COVID-19 vaccines.

    “It’s why Mr. Kennedy is running for president — it’s to stop, to help us expose and stop what’s going on,” Jordan said.

    A watchdog group asked Jordan to drop the invitation to Kennedy after he suggested COVID-19 could have been “ethnically targeted” to spare Ashkenazi Jews and Chinese people.

    In those filmed remarks first published by The New York Post, Kennedy said “there is an argument” that COVID-19 “is ethnically targeted” and that it “attacks certain races disproportionately.”

    After the video was made public, Kennedy posted on Twitter that his words were twisted and denied ever suggesting that COVID-19 was deliberately engineered to spare Jewish people. He called for the Post’s article to be retracted.

    A clip from the video was aired at the hearing.

    Kennedy has a history of comparing vaccines — widely credited with saving millions of lives — with the genocide of the Holocaust during Nazi Germany, comments for which he has sometimes apologized.

    In heated exchanges, Democrats implored Kennedy and Republicans to consider the fallout from their words and actions — and noted that one of the posts Republicans had singled out at the hearing was not removed by any censors.

    “Hate speech has consequences,” said Democratic Rep. Gerry Connolly of Virginia, who made reference to the mass shooting at a Pittsburgh synagogue, among others. He called the hearing Orwellian.

    Democratic Rep. Sylvia Garcia of Texas said she received a death threat after the last hearing of the Weaponization panel.

    When Rep. Debbie Wasserman Schultz, a Democrat of Florida, read aloud Kennedy’s postings and questioned his intent, Kennedy interjected that she was “slandering me” and claimed what the congresswoman was saying was a lie.

    An organization that Kennedy founded, Children’s Health Defense, currently has a lawsuit pending against a number of news organizations, among them The Associated Press, accusing them of violating antitrust laws by taking action to identify misinformation, including about COVID-19 and COVID-19 vaccines.

    Ahead of the hearing, Jordan said that while he disagreed with Kennedy’s remarks, he was not about to drop him from the panel. Speaker Kevin McCarthy took a similar view, saying he did not want to censor Kennedy.

    The panel wants to probe the way the federal government works with technology companies to flag postings that contain false information or downright lies. Hanging over the debate is part of federal communications law, Section 230, which shields technology companies like Twitter and Facebook from liability over what’s said on their platforms.

    Lawmakers on the panel were also hearing testimony from Emma-Jo Morris, a journalist at Breitbart News, who has reported extensively on Mr. Biden’s son, Hunter Biden; and D. John Sauer, a former solicitor general in Missouri who is now a special assistant attorney general at the Louisiana Department of Justice involved in the lawsuit against the Biden administration.

    Morris tweeted part of her opening remarks in which she described an “elaborate censorship conspiracy” that she claimed sought to halt her reporting of Hunter Biden.

    A witness called by Democrats, Maya Wiley, the president and CEO of the Leadership Conference on Civil and Human Rights, implored the lawmakers to consider the platforms where Americans share views — but also “how deeply vital that they be based in fact, not fiction.”

    The U.S. has been hesitant to regulate the social media giants, even as outside groups warn of the rise of hate speech and misinformation that can be erosive to civil society.

  • RFK Jr. will testify at a House hearing over online censorship as the GOP elevates Biden’s rival

    WASHINGTON — House Republicans will be delving into claims of government censorship of online speech at a public hearing, asking Robert F. Kennedy Jr. to testify despite requests from outside groups to disinvite the Democratic presidential candidate after his recent antisemitic remarks.

    The Republican-led Select Subcommittee on the Weaponization of the Federal Government is set to convene Thursday. Republicans claim conservatives are being unfairly targeted by technology companies that routinely work with the government to try to stem the spread of disinformation online.

    In announcing the hearing, the panel led by Rep. Jim Jordan, R-Ohio, said it “will examine the federal government’s role in censoring Americans.” The panel said it will probe “Big Tech’s collusion with out-of-control government agencies to silence speech.”

    The Big Tech companies have adamantly denied the GOP assertions and say they enforce their rules impartially for everyone regardless of ideology or political affiliation. And researchers have not found widespread evidence that social media companies are biased against conservative news, posts or materials.

    The hearing comes after a federal judge recently sought to halt the Biden administration from working with the social media companies to monitor misinformation and other online postings. An appellate court temporarily paused the order.

    Republicans are eager to elevate Kennedy, heir to the famous American political family, who in April announced his 2024 campaign for president. The son of Robert F. Kennedy and nephew of John F. Kennedy is mounting a long-shot Democratic primary challenge to President Joe Biden. He is set to testify alongside two other witnesses.

    A watchdog group asked the panel’s chairman, Jordan, to drop the invitation to Kennedy after the Democratic presidential candidate falsely suggested COVID-19 could have been “ethnically targeted” to spare Ashkenazi Jews and Chinese people.

    In the filmed remarks first published by The New York Post, Kennedy said “there is an argument” that COVID-19 “is ethnically targeted” and that it “attacks certain races disproportionately.”

    After the video was made public, Kennedy posted on Twitter that his words were twisted and denied ever suggesting that COVID-19 was deliberately engineered to spare Jewish people. He called for the Post’s article to be retracted.

    But Kennedy has a history of comparing vaccines — widely credited with saving millions of lives — with the genocide of the Holocaust during Nazi Germany, comments for which he has sometimes apologized.

    An organization that Kennedy founded, Children’s Health Defense, currently has a lawsuit pending against a number of news organizations, among them The Associated Press, accusing them of violating antitrust laws by taking action to identify misinformation, including about COVID-19 and COVID-19 vaccines.

    Jordan said that while he disagreed with Kennedy’s remarks, he was not about to drop him from the panel. Speaker Kevin McCarthy took a similar view, saying he did not want to censor Kennedy.

    The panel wants to probe the way the federal government works with technology companies to flag postings that contain false information or downright lies. Hanging over the debate is part of federal communications law, Section 230, which shields technology companies like Twitter and Facebook from liability over what’s said on their platforms.

Lawmakers on the panel are also expected to receive testimony from Emma-Jo Morris, a journalist at Breitbart News, who has reported extensively on Biden’s son, Hunter Biden; and D. John Sauer, a former solicitor general in Missouri who is now a special assistant attorney general at the Louisiana Department of Justice involved in the lawsuit against the Biden administration.

    Ahead of the hearing, Morris tweeted part of her opening remarks in which she described an “elaborate censorship conspiracy” that she claimed sought to halt her reporting of Hunter Biden.

    The U.S. has been hesitant to regulate the social media giants, even as outside groups warn of the rise of hate speech and misinformation that can be erosive to civil society.

  • Judge refuses to put hold on order limiting Biden administration contact with social media companies

NEW ORLEANS — A federal judge in Louisiana refused Monday to put a temporary hold on his own order limiting Biden administration officials’ contacts with social media companies.

    Biden administration attorneys had asked U.S. District Judge Terry Doughty in Monroe to stay his own order, which was issued last Tuesday, while they pursue an appeal. That order came in a lawsuit filed by Republican attorneys general in Louisiana and Missouri, as well as a conservative website owner and four individual critics of government COVID-19 policies.

    The lawsuit claimed the administration, in effect, censored free speech by using threats of regulatory action or protection while pressuring companies to remove what it deemed misinformation. COVID-19 vaccines, legal issues involving President Joe Biden’s son Hunter and election fraud allegations were among the topics spotlighted in the lawsuit.

    Doughty was nominated to the federal bench by former President Donald Trump. His injunction blocked the Department of Health and Human Services, the FBI and multiple other government agencies and administration officials from meeting with or contacting social media companies for the purpose of “encouraging, pressuring, or inducing in any manner the removal, deletion, suppression, or reduction of content containing protected free speech.”

“Defendants do not identify any specific conduct that they claim is lawful but prevented by the injunction,” Doughty said in Monday’s ruling. He refused to block his own order while it is appealed to the 5th U.S. Circuit Court of Appeals in New Orleans. The administration can also ask the appeals court for a stay.

    Government lawyers have argued that the companies control their own policies regarding misinformation and that the lawsuit casts officials’ comments on issues and policy as threats. The administration said Doughty’s July 4 order was unclear about who in the executive branch it covers and what they can or cannot say about important topics discussed on social media platforms.

    The order could cause “grave harm” by preventing the government from “engaging in a vast range of lawful and responsible conduct,” government lawyers said in requesting the stay Thursday night.

Doughty’s order said the administration “seems to have assumed a role similar to an Orwellian ‘Ministry of Truth.’” The order, which was to remain in effect pending further arguments in Doughty’s court, was hailed by conservatives as a victory for free speech and a blow to censorship. But critics said the order and accompanying reasons, covering more than 160 pages, were broad, unclear and could chill government efforts to fight misinformation on important topics.

    The criticisms were echoed in the government’s Thursday night request for a stay. “The potential breadth of the entities and employees covered by the injunction combined with the injunction’s sweeping substantive scope will chill a wide range of lawful government conduct relating to Defendants’ law enforcement responsibilities, obligations to protect the national security, and prerogative to speak on matters of public concern,” the government’s motion said.

    The lawsuit’s plaintiffs countered with a weekend filing opposing a stay. Among the arguments are that the July 4 injunction carves out exemptions allowing officials to contact social media companies about postings involving criminal activity or public safety threats; national security threats; election-related issues including voter suppression attempts, voting infrastructure threats and illegal campaign contributions; and saying officials can continue “exercising permissible public government speech promoting government policies or views on matters of public concern.”

  • Judge’s order limits government contact with social media operators, raises disinformation questions

    NEW ORLEANS (AP) — An order by a federal judge in Louisiana has ignited a high-stakes legal battle over how the government is allowed to interact with social media platforms, raising broad questions about whether — and how — officials can fight what they deem misinformation on health or other matters.

    U.S. District Judge Terry Doughty, a conservative nominated to the federal bench by former President Donald Trump, chose Independence Day to issue an injunction blocking multiple government agencies and administration officials. In his words, they are forbidden to meet with or contact social media companies for the purpose of “encouraging, pressuring, or inducing in any manner the removal, deletion, suppression, or reduction of content containing protected free speech.”

    The order also prohibits the agencies and officials from pressuring social media companies “in any manner” to try to suppress posts, raising questions about what officials could even say in public forums.

    Doughty’s order blocks the administration from taking such actions pending further arguments in his court in a lawsuit filed by Republican attorneys general in Missouri and Louisiana.

The Justice Department filed a notice of appeal and said it would also seek to stay the court’s order.

    White House press secretary Karine Jean-Pierre said, “We certainly disagree with this decision.” She declined to comment further.

    An administration official said there was some concern about the impact the decision would have on efforts to counter domestic extremism — deemed by the intelligence community to be a top threat to the nation — but that it would depend on how long the injunction remains in place and what steps platforms take on their own. The official was not authorized to speak publicly and spoke on the condition of anonymity.

    The lawsuit alleges that government officials used the possibility of favorable or unfavorable regulatory action to coerce social media platforms to squelch what the administration considered misinformation on a variety of topics, including COVID-19 vaccines, President Joe Biden’s son Hunter, and election integrity.

    The injunction — and Doughty’s accompanying reasons saying the administration “seems to have assumed a role similar to an Orwellian ‘Ministry of Truth’” — were hailed by conservatives as a victory for free speech and a blow to censorship.

    Legal experts, however, expressed surprise at the breadth of the order, and questioned whether it puts too many limits on a presidential administration.

    “When we were in the midst of the pandemic, but even now, the government has significantly important public health expertise,” James Speta, a law professor and expert on internet regulation at Northwestern University, said Wednesday. “The scope of the injunction limits the ability of the government to share public health expertise.”

    The implications go beyond public health.

    Disinformation researchers and social media watchdogs said the ruling could make social media companies less accountable for labeling and removing election falsehoods.

    “As the U.S. gears up for the biggest election year the internet age has seen, we should be finding methods to better coordinate between governments and social media companies to increase the integrity of election news and information,” said Nora Benavidez, senior counsel of the digital rights advocacy group Free Press.

    Social media companies routinely take down posts that violate their own standards, but they are rarely compelled to do so by the U.S. government.

    Meta restricted access to 27 items that it thought violated laws in the U.S. during the first six months of 2020, most of them involving price-gouging allegations, according to its transparency report. But it reported no U.S.-specific content restrictions during 2021 or the first six months of 2022, the most recent data available.

    By contrast, Meta restricted access to more than 17,000 social media posts in Mexico during the same period, most pertaining to unlawful advertising on risky cosmetic or dietary products, and more than 19,000 posts and comments in South Korea reported as violating national election rules.

    Administration attorneys, in past court filings, have called the lawsuit an attempt to gag the free speech rights of administration officials themselves.

    Justin Levitt, a law professor and constitutional law expert who is a former policy adviser to the Biden administration, said the order is unclear as to whether an official could even speak publicly to criticize misinformation on a social media platform.

    Elizabeth Murrill, an assistant Louisiana attorney general, said Wednesday that the order doesn’t infringe on such public criticism, as long as the official doesn’t threaten government action against the platform.

    Jennifer Grygiel, a communications professor and social media expert at Syracuse University, said Americans should resist the urge to dismiss the case as politically motivated and remain vigilant about the risks of federal encroachment on social media platforms.

    “I’m more concerned that we’re lacking critique in the government’s intervention in these spaces,” Grygiel said. “We need, as a public, to be very critical of any attempts by a government, a federal actor, to censor speech through a corporate entity.”

    Doughty has previously ruled against the Biden administration in other high-profile cases involving oil drilling and vaccination mandates.

    In 2021 he issued a nationwide block of a Biden administration requirement that health care workers be vaccinated against COVID-19. A panel of the 5th U.S. Circuit Court of Appeals trimmed the area covered by the order to 14 states that were plaintiffs in the lawsuit.

    ___

    O’Brien reported from Providence, Rhode Island. Swenson reported from New York. Associated Press Writer Zeke Miller in Washington also contributed to this report.

    ___

    The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP’s democracy initiative here. The AP is solely responsible for all content.

  • France accuses Russia of faking websites to sow confusion and disinformation about Ukraine war

    France accuses Russia of faking websites to sow confusion and disinformation about Ukraine war

    France’s government has accused Russia of a long-running online manipulation campaign that included impersonating the websites of leading French media and the French Foreign Ministry

    France’s government accused Russia on Tuesday of operating a long-running online manipulation campaign, including impersonating the websites of leading French media and the French Foreign Ministry, aimed at spreading confusion and false information about the war in Ukraine.

    The French agency responsible for fighting foreign digital interference, VIGINUM, said it has monitored the alleged operation since soon after Russia invaded its neighbor and that France was one of several European countries targeted. It said it traced the campaign to Russian individuals, companies and “state entities or entities affiliated to the Russian state.”

    Last month, the agency detected a mirror website mimicking the French Foreign Ministry’s and intervened with “protective and preventive measures,” VIGINUM said in a report published Tuesday.

    “France condemns these actions, which are unworthy of a permanent member of the United Nations Security Council. No attempt at manipulation will distract France from its support for Ukraine in the face of Russia’s war of aggression,” the ministry said in a statement with unusually strong wording.

    The Russian Embassy in France did not immediately comment.

    VIGINUM said it identified 355 domain names usurping the identities of European media outlets, including those of French daily newspapers Le Monde and Le Figaro. It said the content spread on the fake sites denigrated the Ukrainian armed forces and amplified criticism of sanctions against Russia and the impact of Ukrainian refugees on other European countries.

    French Foreign Ministry spokesperson Anne-Claire Legendre said it was the first time the government has “so clearly and explicitly” attributed blame for this kind of campaign.

    She said the fake ministry website was aimed at creating confusion about France’s support for Ukraine and undermining democratic debate. She said the government’s intervention limited the impact of the mirror site, which she described as an element of “the hybrid war that Russia is currently waging.”

    VIGINUM said the campaign has links to a sprawling disinformation network that Facebook parent Meta exposed last year, which sought to use fake social media accounts and sham news websites to spread Kremlin talking points.

    Since Russia invaded Ukraine, pro-Russian players have used online disinformation and conspiracy theories in an effort to weaken international support for Ukraine. Social media platforms and European governments have tried to stifle Russian propaganda and disinformation, only to see the Kremlin shift tactics.

    ___

    Follow AP’s coverage of the war in Ukraine: https://apnews.com/hub/russia-ukraine

  • Microsoft’s media literacy program aims to empower internet users and combat online misinformation

    Microsoft’s media literacy program aims to empower internet users and combat online misinformation

    WASHINGTON — People are hungry for accurate and reliable information online and may just need help to find it, according to a new media literacy project launched by Microsoft.

    The tech company worked with the Trust Project, a nonprofit consortium of news organizations, to create advertisements directing internet users to a list of eight “trust indicators” that can be used to assess a website’s credibility. The indicators include things like the clear labeling of opinion pieces, a code of practices and the attribution of sources.

    Most people who saw the list expressed greater confidence in their own ability to find reliable news while ferreting out misinformation — a promising finding that suggests media literacy can be a cheap and scalable solution to the daunting problem of online misinformation.

    “This was a bit of an experiment for us,” said Ginny Badanes, senior director of Microsoft’s Democracy Forward Initiative, a unit at the company that focuses on efforts to strengthen democracy and online journalism. “The world is changing very quickly and people need tools to equip themselves.”

    The stakes are high. Misinformation on sites like Twitter, Facebook and YouTube has been blamed for encouraging political polarization, undermining trust in democratic institutions and promoting vaccine opposition, election denialism and violent extremism.

    The speed and power of the internet can make online misinformation seem like an unresolvable problem. Journalistic fact checks are effective, but they’re labor intensive, aren’t read by everyone, and won’t convince those already distrustful of traditional journalism. Content moderation by tech companies is often inconsistent and only drives misinformation elsewhere, while prompting cries of censorship and bias. Efforts to regulate the internet are legally and politically challenging.

    Measures to promote critical thinking and media literacy, however, have shown remarkable success in helping people learn how to detect misinformation themselves. Google launched a series of videos on YouTube in Eastern Europe last year designed to teach people how misinformation works; the campaign was recently expanded to Germany.

    Often, claims masquerading as reliable news don’t cite their sources, mix opinion and fact and use slanted stories or headlines designed to exploit powerful emotions like fear, anger or disgust.

    Legitimate news organizations, by contrast, will identify their sources, invite feedback, include diverse voices and hold their journalists to a code of conduct, said Sally Lehrman, a journalist and chief executive at the Trust Project.

    The ads were seen by users of Microsoft products and systems, including email. Over the course of six months, the ads roughly doubled traffic to the project’s site; 62% of those who visited the site said it helped them feel more confident about assessing online information.

    “I’m very encouraged by our results,” Lehrman said, noting that short internet ads are a relatively cheap, easy solution compared to complicated and controversial government regulations or hit-or-miss efforts by tech companies.

    The need for media literacy has become more obvious as deepfakes and artificial intelligence make misinformation easier than ever to spread, Lehrman said.

    But will people actually watch advertisements designed to help them become smarter consumers of news and information? Lehrman said the research shows that they will — especially when the ads are effective at grabbing people’s attention.

    “Are we asking people to eat their broccoli? I always reject that because I think broccoli is delicious,” she said. “But we have to make it delicious.”

  • Elon Musk and Other Leaders Are Worried About AI. Here’s Why | Entrepreneur

    Elon Musk and Other Leaders Are Worried About AI. Here’s Why | Entrepreneur

    Opinions expressed by Entrepreneur contributors are their own.

    “The age of AI has begun,” Bill Gates declared this March, reflecting on an OpenAI demonstration of feats such as acing an AP Bio exam and giving a thoughtful, touching answer to being asked what it would do if it were the father of a sick child.

    At the same time, tech giants like Microsoft and Google have been locked in a race to develop AI tech, integrate it into their existing ecosystems and dominate the market. In February, Microsoft CEO Satya Nadella challenged Sundar Pichai of Google to “come out and dance” in the AI battlefield.

    For businesses, it’s a challenge to keep up. On the one hand, AI promises to streamline workflows, automate tedious tasks and increase overall productivity. On the other, the AI sphere is fast-paced, with new tools constantly appearing. Where should they place their bets to stay ahead of the curve?

    And now, many tech experts are backpedaling. Leaders like Apple co-founder Steve Wozniak and Tesla’s Elon Musk, alongside 1,300 other industry experts, professors and AI luminaries, signed an open letter calling for a six-month halt to AI development.

    At the same time, the “godfather of AI,” Geoffrey Hinton, resigned as one of Google’s lead AI researchers and wrote a New York Times op-ed warning of the technology he’d helped create.

    Even OpenAI CEO Sam Altman, whose company created ChatGPT, joined the chorus of warning voices during a congressional hearing.

    But what are these warnings about? Why do tech experts say that AI could actually pose a threat to businesses — and even humanity?

    Here is a closer look at their warnings.

    Uncertain liability

    To begin with, there is a very business-focused concern: liability.

    While AIs have developed amazing capabilities, they are far from faultless. ChatGPT, for instance, famously invented scientific references in a paper it helped write.

    Consequently, the question of liability arises. If a business uses AI to complete a task and gives a client erroneous information, who is liable for damages? The business? The AI provider?

    None of that is clear right now. And traditional business insurance often fails to cover AI-related liabilities.

    Regulators and insurers are struggling to catch up. Only recently, the EU drafted a framework to regulate AI liability.

    Related: Rein in the AI Revolution Through the Power of Legal Liability

    Large-scale data theft

    Another concern is linked to unauthorized data use and cybersecurity threats. AI systems frequently store and handle large amounts of sensitive information, much of it collected in legal gray areas.

    This could make them attractive targets for cyberattacks.

    “In the absence of robust privacy regulations (US) or adequate, timely enforcement of existing laws (EU), businesses have a tendency to collect as much data as they possibly can,” explained Merve Hickok, Chair & Research Director at Center for AI and Digital Policy, in an interview with The Cyber Express.

    “AI systems tend to connect previously disparate datasets,” Hickok continued. “This means that data breaches can result in exposure of more granular data and can create even more serious harm.”

    Misinformation

    Next up, bad actors are turning to AI to generate misinformation. Not only can this have serious ramifications for political figures, especially with an election year looming; it can also cause direct damage to businesses.

    Whether targeted or accidental, misinformation is already rampant online. AI will likely drive up the volume and make it harder to spot.

    Imagine AI-generated photos of business leaders, audio mimicking a politician’s voice, or artificial news anchors announcing convincing but false economic news. Business decisions triggered by such fake information could have disastrous consequences.

    Related: Pope Francis Didn’t Really Wear A White Puffer Coat. But It Won’t Be the Last Time You’re Fooled By an AI-Generated Image.

    Demotivated and less creative team members

    Entrepreneurs are also debating how AI will affect the psyche of individual members of the workforce.

    “Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” the open letter asks.

    According to Matt Cronin, the U.S. Department of Justice’s National Security & Cybercrime Coordinator, the answer is a clear “No.” Such a large-scale replacement would devastate the motivation and creativity of people in the workforce.

    “Mastering a domain and deeply understanding a topic takes significant time and effort,” he writes in The Hill. “For the first time in history, an entire generation can skip this process and still progress in school and work. However, reliance on generative AI comes with a hidden price. You are not truly learning — at least not in a way that meaningfully benefits you.”

    Ultimately, widespread AI use may lower team members’ competence, including critical thinking skills.

    Related: AI Can Replace (Some) Jobs — But It Can’t Replace Human Connection. Here’s Why.

    Economic and political instability

    It is not yet known what economic shifts widespread AI adoption will cause, but they will likely be large and fast. After all, a recent Goldman Sachs estimate projected that two-thirds of current occupations could be partially or fully automated, with unclear ramifications for individual businesses.

    According to experts’ more pessimistic outlooks, AI could also incite political instability. This could range from election tampering to truly apocalyptic scenarios.

    In an op-ed in Time Magazine, decision theorist Eliezer Yudkowsky called for a general halt to AI development. He and others argue that we are unprepared for powerful AIs and that unfettered development could lead to catastrophe.

    Conclusion

    AI tools hold immense potential to increase businesses’ productivity and level up their success.

    However, it’s crucial to be aware of the danger that AI systems pose, not just according to doomsayers and techno-skeptics, but according to the very same people who developed these technologies.

    That awareness will help businesses approach AI with the caution critical to successful adoption.

    Hasan Saleem

  • YouTube changes policy to allow false claims about past US presidential elections

    YouTube changes policy to allow false claims about past US presidential elections

    YouTube will stop removing content that falsely claims the 2020 election or other past U.S. presidential elections were marred by “widespread fraud, errors or glitches,” the platform announced Friday.

    The change is a reversal for the Google-owned video service, which said a month after the 2020 election that it would start removing new posts that falsely claimed widespread voter fraud or errors changed the outcome.

    YouTube said in a blog post that the updated policy was an attempt to protect the ability to “openly debate political ideas, even those that are controversial or based on disproven assumptions.”

    “In the current environment, we find that while removing this content does curb some misinformation, it could also have the unintended effect of curtailing political speech without meaningfully reducing the risk of violence or other real-world harm,” the blog post said.

    The updated policy, which goes into effect immediately, won’t stop YouTube from taking down content that tries to deceive voters in the upcoming 2024 election, or other future races in the U.S. and abroad. The company said its other existing rules against election misinformation remain unchanged.

    This could prove difficult to enforce, said John Wihbey, an associate professor at Northeastern University who studies social media and misinformation.

    “It doesn’t take a genius if you’re on the disinformation ‘we were wronged in 2020’ side to say, ‘wait a minute, let’s just claim that voting just generally is not worth it. And 2020 is our example,’” he said. “I don’t know how you disentangle rhetoric that both refers to past wrongs and to forward possibilities. The content moderation team, which is going to try to do this, is going to tie themselves in knots trying to figure out exactly where that line is.”

    The announcement comes after YouTube and other major social media companies, including Twitter and the Meta-owned Facebook and Instagram, have come under fire in recent years for not doing more to combat the firehose of election misinformation and disinformation that spreads on their platforms.

    The left-leaning media watchdog group Media Matters said the policy change is not a surprise, as it was one of the “last major social media platforms” to keep the policy in place.

    “YouTube and the other platforms that preceded it in weakening their election misinformation policies, like Facebook, have made it clear that one attempted insurrection wasn’t enough. They’re setting the stage for an encore,” said its vice president Julie Millican in a statement.

  • Is it real or made by AI? Europe wants a label for that as it fights disinformation

    Is it real or made by AI? Europe wants a label for that as it fights disinformation

    LONDON (AP) — The European Union is pushing online platforms like Google and Meta to step up the fight against false information by adding labels to text, photos and other content generated by artificial intelligence, a top official said Monday.

    EU Commission Vice President Vera Jourova said the ability of a new generation of AI chatbots to create complex content and visuals in seconds raises “fresh challenges for the fight against disinformation.”

    She said she asked Google, Meta, Microsoft, TikTok and other tech companies that have signed up to the 27-nation bloc’s voluntary agreement on combating disinformation to work to tackle the AI problem.

    Online platforms that have integrated generative AI into their services, such as Microsoft’s Bing search engine and Google’s Bard chatbot, should build safeguards to prevent “malicious actors” from generating disinformation, Jourova said at a briefing in Brussels.

    Companies offering services that have the potential to spread AI-generated disinformation should roll out technology to “recognize such content and clearly label this to users,” she said.

    Google, Microsoft, Meta and TikTok did not respond immediately to requests for comment.

    Jourova said EU regulations are aimed at protecting free speech, but when it comes to AI, “I don’t see any right for the machines to have the freedom of speech.”

    The swift rise of generative AI technology, which has the capability to produce human-like text, images and video, has amazed many and alarmed others with its potential to transform many aspects of daily life. Europe has taken a lead role in the global movement to regulate artificial intelligence with its AI Act, but the legislation still needs final approval and won’t take effect for several years.

    Officials in the EU, which also is bringing in a separate set of rules this year to safeguard people from harmful online content, are worried that they need to act faster to keep up with the rapid development of generative AI.

    Recent examples of debunked deepfakes include a realistic picture of Pope Francis in a white puffy jacket and an image of billowing black smoke next to a building accompanied with a claim that it showed an explosion near the Pentagon.

    Politicians have even enlisted AI to warn about its dangers. Danish Prime Minister Mette Frederiksen used OpenAI’s ChatGPT to craft the opening of a speech to Parliament last week, saying it was written “with such conviction that few of us would believe that it was a robot — and not a human — behind it.”

    European and U.S. officials said last week that they’re drawing up a voluntary code of conduct for artificial intelligence that could be ready within weeks as a way to bridge the gap before the EU’s AI rules take effect.

    Similar voluntary commitments in the bloc’s disinformation code will become legal obligations by the end of August under the EU’s Digital Services Act, which will force the biggest tech companies to better police their platforms to protect users from hate speech, disinformation and other harmful material.

    Jourova said, however, that those companies should start labeling AI-generated content immediately.

    Most digital giants are already signed up to the EU disinformation code, which requires companies to measure their work on combating false information and issue regular reports on their progress.

    Twitter dropped out last month in what appeared to be the latest move by Elon Musk to loosen restrictions at the social media company after he bought it last year.

    The exit drew a stern rebuke, with Jourova calling it a mistake.

    “Twitter has chosen the hard way. They chose confrontation,” she said. “Make no mistake, by leaving the code, Twitter has attracted a lot of attention, and its actions and compliance with EU law will be scrutinized vigorously and urgently.”

    Twitter will face a major test later this month when European Commissioner Thierry Breton heads to its San Francisco headquarters with a team to carry out a “stress test,” meant to measure the platform’s ability to comply with the Digital Services Act.

    Breton, who’s in charge of digital policy, told reporters Monday that he also will visit other Silicon Valley tech companies including OpenAI, chipmaker Nvidia and Meta.

    ___

    AP reporter Jan M. Olsen contributed from Copenhagen, Denmark.

  • YouTube Reverses Ban On 2020 Election Denial As 2024 Race Ramps Up

    YouTube Reverses Ban On 2020 Election Denial As 2024 Race Ramps Up

    YouTube announced Friday that it would no longer remove election lies from its platform as former President Donald Trump and the MAGA-faithful continue to deny the results of the 2020 presidential election.

    In a statement released on an official company blog, one of the world’s largest video platforms cited the “ability to openly debate political ideas, even those that are controversial or based on disproven assumptions,” as the reason for the change. A 2020 Pew Research study found that a quarter of American adults get their news from the platform. 

    “Two years, tens of thousands of video removals, and one election cycle later, we recognized it was time to reevaluate the effects of this policy in today’s changed landscape,” Google-owned YouTube said. 

    “With that in mind, and with 2024 campaigns well underway, we will stop removing content that advances false claims that widespread fraud, errors, or glitches occurred in the 2020 and other past US Presidential elections.”

    In its statement, the company clarified that it would continue to remove content that misleads voters about the voting process. 

    YouTube announced the policy in December 2020, just under a month before the January 6th attack on the U.S. Capitol. A study from the independent research group Transparency.tube found that videos peddling election lies garnered more than 137 million views during election week. Those videos frequently spread to other social media platforms, comprising about one-third of all election-related videos posted to Twitter in November 2020. But after YouTube introduced the policy, the number of election fraud videos shared on social media declined, The New York Times reported.

    In a statement responding to the change, Julie Millican, vice president of liberal watchdog Media Matters for America, noted that YouTube was “one of the last major social media platforms to keep in place a policy attempting to curb 2020 election misinformation.” Twitter stopped suspending, banning, or fact-checking users spreading election lies in March 2021, while Facebook reduced its efforts to quell the spread of misinformation in the lead-up to the 2022 midterms. This March, YouTube reinstated Trump’s account, following Meta and Twitter’s lead.

    YouTube “is now allowing people to say whatever they wish about the 2020 election,” far-right Republican congresswoman Lauren Boebert tweeted on Saturday, responding to the news. “Looks like even YouTube is ready for people to start talking TRUTH again.” 

    Jack McCordick

  • YouTube Changes Policy To Allow False Claims About Past US Presidential Elections

    YouTube Changes Policy To Allow False Claims About Past US Presidential Elections

    [ad_1]

    YouTube will stop removing content that falsely claims the 2020 election or other past U.S. presidential elections were marred by “widespread fraud, errors or glitches,” the platform announced Friday.

    The change is a reversal for the Google-owned video service, which said a month after the 2020 election that it would start removing new posts that falsely claimed widespread voter fraud or errors changed the outcome.

    YouTube said in a blog post that the updated policy was an attempt to protect the ability to “openly debate political ideas, even those that are controversial or based on disproven assumptions.”

    “In the current environment, we find that while removing this content does curb some misinformation, it could also have the unintended effect of curtailing political speech without meaningfully reducing the risk of violence or other real-world harm,” the blog post said.

    The updated policy, which goes into effect immediately, won’t stop YouTube from taking down content that tries to deceive voters in the upcoming 2024 election, or other future races in the U.S. and abroad. The company said its other existing rules against election misinformation remain unchanged.

    This could prove difficult to enforce, said John Wihbey, an associate professor at Northeastern University who studies social media and misinformation.

    “It doesn’t take a genius if you’re on the disinformation ‘we were wronged in 2020’ side to say, ‘wait a minute, let’s just claim that voting just generally is not worth it. And 2020 is our example,’” he said. “I don’t know how you disentangle rhetoric that both refers to past wrongs and to forward possibilities. The content moderation team, which is going to try to do this, is going to tie themselves in knots trying to figure out exactly where that line is.”

    The announcement comes after YouTube and other major social media companies, including Twitter and the Meta-owned Facebook and Instagram, have come under fire in recent years for not doing more to combat the firehose of election misinformation and disinformation that spreads on their platforms.

    The left-leaning media watchdog group Media Matters said the policy change is not a surprise, as YouTube was one of the “last major social media platforms” to keep such a policy in place.

    “YouTube and the other platforms that preceded it in weakening their election misinformation policies, like Facebook, have made it clear that one attempted insurrection wasn’t enough. They’re setting the stage for an encore,” said its vice president Julie Millican in a statement.


  • EU official says Twitter abandons bloc’s voluntary pact against disinformation

    EU official says Twitter abandons bloc’s voluntary pact against disinformation


    A top European Union official says Twitter has dropped out of the bloc’s voluntary agreement to combat online disinformation

    By KELVIN CHAN, AP Business Writer

    FILE – The Twitter logo is seen on the awning of the building that houses the Twitter office in New York, Wednesday, Oct. 26, 2022. Twitter has dropped out of a voluntary European Union agreement to combat online disinformation, a top EU official said Friday, May 26, 2023. (AP Photo/Mary Altaffer, File)

    The Associated Press

    LONDON — Twitter has dropped out of a voluntary European Union agreement to combat online disinformation, a top EU official said Friday.

    European Commissioner Thierry Breton tweeted that Twitter had pulled out of the EU’s disinformation “code of practice” that other major social media platforms have pledged to support. But he added that Twitter’s “obligation” remained, referring to the EU’s tough new digital rules taking effect in August.

    “You can run but you can’t hide,” Breton said.

    San Francisco-based Twitter responded with an automated reply, as it does to most press inquiries, and did not comment.

    The decision to abandon the commitment to fighting false information appears to be the latest move by billionaire owner Elon Musk to loosen the reins on the social media company after he bought it last year. He has rolled back previous anti-misinformation rules, and has thrown its verification system and content-moderation policies into chaos as he pursues his goal of turning Twitter into a digital town square.

    Google, TikTok, Microsoft and Facebook and Instagram parent Meta are among those that have signed up to the EU code, which requires companies to measure their work on combating disinformation and issue regular reports on their progress.

    There were already signs Twitter wasn’t prepared to live up to its commitments. The European Commission, the 27-nation bloc’s executive arm, blasted Twitter earlier this year for failing to provide a full first report under the code, saying it provided little specific information and no targeted data.

    Breton said that under the new digital rules that incorporate the code of practice, fighting disinformation will become a “legal obligation.”

    “Our teams will be ready for enforcement,” he said.


  • FACT FOCUS: Fake image of Pentagon explosion briefly sends jitters through stock market

    FACT FOCUS: Fake image of Pentagon explosion briefly sends jitters through stock market


    An image of black smoke billowing next to a bureaucratic-looking building spread across social media Monday morning, with the claim that it showed an explosion near the Pentagon.

    The posts sent a brief shiver through the stock market as they were quickly picked up by news outlets outside the U.S., before officials jumped in to clarify that no blast actually took place and the photo was a fake.

    Experts say the viral image had telltale signs of an AI-generated forgery, and its popularity underscores the everyday chaos these now increasingly sophisticated and easy-to-access programs can inflict.

    Here’s a closer look at the facts.

    CLAIM: An image shows an explosion near the Pentagon.

    THE FACTS: Police and fire officials in Arlington, Virginia, say the image is not real and there was no incident at the U.S. Department of Defense headquarters across the Potomac from the nation’s capital.

    Despite this, the image and claim were spread by outlets including RT, a Russian government-backed media company formerly known as Russia Today. It was also widely shared in investment circles, including by an account bearing Twitter’s signature blue verification check mark that falsely suggested it was associated with Bloomberg News.

    “Reports of an explosion near the Pentagon in Washington DC,” the Russian state news agency wrote in a since-deleted tweet to its more than three million followers.

    RT confirmed it took down the tweet and “covered the official position from the Pentagon on the matter” after verifying the reports were inaccurate.

    “As with fast-paced news verification, we made the public aware of reports circulating and once provenance and veracity were ascertained, we took appropriate steps to correct the reporting,” the company wrote in an emailed statement Tuesday.

    Still, the timing of the fake image, which appeared to spread widely just after the U.S. stock market opened for trading at 9:30 a.m., was enough to send a ripple through the investment world.

    The S&P 500 briefly dropped a modest 0.3% as social media accounts and investment websites popular with day traders repeated the false claims.

    Other investments also moved in ways that typically occur when fear enters the market. Prices for U.S. Treasury bonds and gold, for example, briefly began to climb, suggesting investors were looking for someplace safer to park their money.

    The image’s rapid spread prompted the Arlington County Fire Department to take to social media to knock down the rumors.

    “@PFPAOfficial and the ACFD are aware of a social media report circulating online about an explosion near the Pentagon,” the agency wrote, referring to the acronym for the Pentagon Force Protection Agency that polices the Pentagon. “There is NO explosion or incident taking place at or near the Pentagon reservation, and there is no immediate danger or hazards to the public.”

    Capt. Nate Hiner, a spokesperson for the fire department, confirmed the agency’s tweet was authentic but declined to comment further, deferring to the Pentagon police force, which didn’t respond to email and phone messages.

    Misinformation experts say the fake image was likely created using generative artificial intelligence programs, which have allowed increasingly realistic, but oftentimes flawed, visuals to flood the internet recently.

    Inconsistencies in the building, fence and surrounding area are imperfections typically found in AI-generated images, noted Hany Farid, a computer science professor at the University of California, Berkeley, who specializes in digital forensics, misinformation and image analysis.

    “Specifically, the grass and concrete fade into each other, the fence is irregular, there is a strange black pole that is protruding out of the front of the sidewalk but is also part of the fence,” he wrote in an email. “The windows in the building are inconsistent with photos of the Pentagon that you can find online.”

    Chirag Shah, co-director of the Center for Responsibility in AI Systems & Experiences at the University of Washington in Seattle, cautioned that spotting fakes won’t always be as obvious.

    Society will need to lean more on “crowdsourcing and community vigilance to weed out bad information and arrive at the truth” as AI technology improves, he argued.

    “Simply relying on detection tools or social media posts are not going to be enough,” Shah wrote in an email.

    Before the explosion hoax, the biggest Beltway intrigue on Wall Street’s mind Monday morning was whether the U.S. government will avoid a disastrous default on its debt.

    But as the market is becoming increasingly reactive to headline-grabbing news, misinformation can be especially damaging when it’s shared by outlets even vaguely deemed as credible, said Adam Kobeissi, editor-in-chief at The Kobeissi Letter, an industry publication.

    “A lot of these moves are happening because of high frequency trading, algorithmic trading, which is basically taking headlines, synthesizing them and then breaking them down into a trade on a millisecond basis,” he explained by phone, noting that much of the market is now automated. “It’s basically like you’re pulling a trigger every time a headline comes out.”

    __

    Associated Press business reporters Stan Choe and Wyatte Grantham-Philips in New York contributed to this story.

    ___

    This is part of AP’s effort to address widely shared misinformation, including work with outside companies and organizations to add factual context to misleading content that is circulating online. Learn more about fact-checking at AP.


  • Russia arrests young woman for St. Petersburg bombing

    Russia arrests young woman for St. Petersburg bombing


    Russian law enforcement on Monday detained a young woman suspected of carrying out Sunday’s bombing of a St. Petersburg cafe, which killed a pro-Kremlin military blogger and injured dozens, according to media reports.

    In a video from the interior ministry published by state news agency TASS, a woman presented as Darya Trepova can be heard saying she “brought a statuette” inside the cafe, which “later exploded.”

    She said she had been arrested for “being present at the place” where the bombing occurred.

    POLITICO was not able to independently verify whether Trepova’s statement was made under duress.

    Trepova was reportedly detained for several days last year for taking part in a protest against the war in Ukraine on the day Russia’s full-scale invasion started.

    Russian military blogger Vladlen Tatarsky was killed in the St. Petersburg cafe blast, which also injured 25 people, according to Reuters.

    Tatarsky — whose real name was Maxim Fomin — was part of a group of high-profile influencers filing reports on the Ukraine war. He had more than half a million followers on Telegram.

    According to AP, Tatarsky utilized “ardent pro-war rhetoric” in favor of Russia’s war in Ukraine.

    Russia’s top investigative body announced Monday it had opened a probe into the bombing, which it labeled a “high-profile murder.”

    The state-controlled Russian National Anti-Terrorism Committee called the bombing a “terrorist act” and accused Ukraine’s special service of planning the attack.

    Mykhailo Podolyak, an adviser to Ukrainian President Volodymyr Zelenskyy’s office, tweeted that Russia had “returned to the Soviet classics: isolation … espionage … political repression.”

    This is the second time a pro-Kremlin media figure has been killed on Russian soil since the invasion began.

    Last August, Darya Dugina — who was under U.S. sanctions for spreading misinformation about the war — was killed in a car bombing.


    Nicolas Camut
