ReportWire

Tag: Misinformation

  • Brazil blocks Musk’s X after company refuses to name local representative amid feud with judge

    SAO PAULO — Brazil started blocking Elon Musk’s social media platform X early Saturday, making it largely inaccessible on both the web and through mobile apps after the billionaire refused to name a legal representative in the country.

    The move escalates a monthslong feud between Musk and a Brazilian Supreme Court justice over free speech, far-right accounts and misinformation. Justice Alexandre de Moraes ordered the suspension on Friday.

    To block X, Brazil’s telecommunications regulator, Anatel, told internet service providers to suspend users’ access to the social media platform. As of Saturday after midnight local time, major operators had begun doing so.

    De Moraes had warned Musk on Wednesday night that X could be blocked in Brazil if he failed to comply with his order to name a representative, and established a 24-hour deadline. The company hasn’t had a representative in the country since earlier this month.

    “Elon Musk showed his total disrespect for Brazilian sovereignty and, in particular, for the judiciary, setting himself up as a true supranational entity and immune to the laws of each country,” de Moraes wrote in his decision on Friday.

    The justice said the platform will stay suspended until it complies with his orders, and also set a daily fine of 50,000 reais ($8,900) for people or companies using VPNs to access it.

    In a later ruling, he backtracked on his initial decision to establish a 5-day deadline for internet service providers themselves — and not just the telecommunications regulator — to block access to X, as well as his directive for app stores to remove virtual private networks, or VPNs.

    Brazil is one of the biggest markets for X, which has struggled with the loss of advertisers since Musk purchased the former Twitter in 2022. Market research group Emarketer says some 40 million Brazilians, roughly one-fifth of the population, access X at least once per month.

    “This is a sad day for X users around the world, especially those in Brazil, who are being denied access to our platform. I wish it did not have to come to this – it breaks my heart,” X’s CEO Linda Yaccarino said Friday night, adding that Brazil is failing to uphold its constitution’s pledge to forbid censorship.

    X had posted on its official Global Government Affairs page late Thursday that it expected X to be shut down by de Moraes, “simply because we would not comply with his illegal orders to censor his political opponents.”

    “When we attempted to defend ourselves in court, Judge de Moraes threatened our Brazilian legal representative with imprisonment. Even after she resigned, he froze all of her bank accounts,” the company wrote.

    X has clashed with de Moraes over its reluctance to comply with orders to block users.

    Accounts that the platform previously has shut down on Brazilian orders include lawmakers affiliated with former President Jair Bolsonaro’s right-wing party and activists accused of undermining Brazilian democracy. X’s lawyers in April sent a document to the Supreme Court saying that since 2019 it had suspended or blocked 226 users.

    In his decision Friday, de Moraes cited Musk’s statements as evidence that X’s conduct “clearly intends to continue to encourage posts with extremism, hate speech and anti-democratic discourse, and to try to withdraw them from jurisdictional control.”

    In April, de Moraes included Musk as a target in an ongoing investigation over the dissemination of fake news and opened a separate investigation into the executive for alleged obstruction.

    Musk, a self-proclaimed “free speech absolutist,” has repeatedly claimed the justice’s actions amount to censorship, and his argument has been echoed by Brazil’s political right. He has often insulted de Moraes on his platform, characterizing him as a dictator and tyrant.

    De Moraes’ defenders have said his actions aimed at X have been lawful, supported by most of the court’s full bench and have served to protect democracy at a time when it is imperiled. He wrote Friday that his ruling is based on Brazilian law requiring internet service companies to have representation in the country so they can be notified of relevant court decisions and take requisite action — notably the takedown of illicit content posted by users, especially given an anticipated churn of misinformation during October municipal elections.

    The looming shutdown is not unprecedented in Brazil.

    Lone Brazilian judges shut down Meta’s WhatsApp, the nation’s most widely used messaging app, several times in 2015 and 2016 due to the company’s refusal to comply with police requests for user data. In 2022, de Moraes threatened the messaging app Telegram with a nationwide shutdown, arguing it had repeatedly ignored Brazilian authorities’ requests to block profiles and provide information. He ordered Telegram to appoint a local representative; the company ultimately complied and stayed online.

    X and its former incarnation, Twitter, have been banned in several countries — mostly authoritarian regimes such as Russia, China, Iran, Myanmar, North Korea, Venezuela and Turkmenistan. Other countries, such as Pakistan, Turkey and Egypt, have also temporarily suspended X before, usually to quell dissent and unrest. Twitter was banned in Egypt after the Arab Spring uprisings, which some dubbed the “Twitter revolution,” but it has since been restored.

    A search Friday on X showed hundreds of Brazilian users inquiring about VPNs that could potentially enable them to continue using the platform by making it appear they were logging on from outside the country. It was not immediately clear how Brazilian authorities would police this practice and impose fines cited by de Moraes.

    “This is an unusual measure, but its main objective is to ensure that the court order to suspend the platform’s operation is, in fact, effective,” Filipe Medon, a specialist in digital law and professor at the law school of Getulio Vargas Foundation, a university in Rio de Janeiro, told The Associated Press.

    Mariana de Souza Alves Lima, known by her handle MariMoon, showed her 1.4 million followers on X where she intends to go, posting a screenshot of rival social network Bluesky.

    On Thursday evening, Starlink, Musk’s satellite internet service provider, said on X that de Moraes this week froze its finances, preventing it from doing any transactions in the country where it has more than 250,000 customers.

    “This order is based on an unfounded determination that Starlink should be responsible for the fines levied—unconstitutionally—against X. It was issued in secret and without affording Starlink any of the due process of law guaranteed by the Constitution of Brazil. We intend to address the matter legally,” Starlink said in its statement. The law firm representing Starlink told the AP that the company appealed, but wouldn’t make further comment.

    Musk replied to people sharing the reports of the freeze, adding insults directed at de Moraes. “This guy @Alexandre is an outright criminal of the worst kind, masquerading as a judge,” he wrote.

    Musk later posted on X that SpaceX, which runs Starlink, will provide free internet service in Brazil “until the matter is resolved” since “we cannot receive payment, but don’t want to cut anyone off.”

    In his decision, de Moraes said he ordered the freezing of Starlink’s assets because X didn’t have enough money in its accounts to cover mounting fines, reasoning that the two companies are part of the same economic group.

    While ordering X’s suspension followed warnings and fines and so was appropriate, taking action against Starlink seems “highly questionable,” said Luca Belli, coordinator of the Getulio Vargas Foundation’s Technology and Society Center.

    “Yes, of course, they have the same owner, Elon Musk, but it is discretionary to consider Starlink as part of the same economic group as Twitter (X). They have no connection, they have no integration,” Belli said.

    ___

    Ortutay reported from San Francisco and Biller from Rio. AP writer Mauricio Savarese contributed from Sao Paulo.

  • Top Brazilian judge orders suspension of X platform in Brazil amid feud with Musk

    SAO PAULO — A Brazilian Supreme Court justice on Friday ordered the suspension of Elon Musk’s social media giant X in Brazil after the tech billionaire refused to name a legal representative in the country, according to a copy of his decision.

    The move further escalates the monthslong feud between the two men over free speech, far-right accounts and misinformation.

    Justice Alexandre de Moraes had warned Musk on Wednesday night that X could be blocked in Brazil if he failed to comply with his order to name a representative, and established a 24-hour deadline. The company hasn’t had a representative in the country since earlier this month.

    “Elon Musk showed his total disrespect for Brazilian sovereignty and, in particular, for the judiciary, setting himself up as a true supranational entity and immune to the laws of each country,” de Moraes wrote in his decision.

    The justice said the platform will stay suspended until it complies with his orders, and also set a daily fine of 50,000 reais ($8,900) for people or companies using VPNs to access it.

    In a later ruling, he backtracked on his initial decision to establish a 5-day deadline for internet service providers themselves — and not just the telecommunications regulator — to block access to X, as well as his directive for app stores to remove virtual private networks, or VPNs.

    Brazil’s telecommunications regulator, Anatel, has 24 hours to comply. The regulator’s chairman, Carlos Baigorri, told the GloboNews channel that the country’s biggest service providers will respond quickly, but added smaller ones might need more time to suspend X from their services.

    The full bench of Brazil’s Supreme Court is expected to rule on the case, but no date for deliberations was set.

    Brazil is an important market for X, which has struggled with the loss of advertisers since Musk purchased the former Twitter in 2022. Market research group Emarketer says some 40 million Brazilians, roughly one-fifth of the population, access X at least once per month.

    X had posted on its official Global Government Affairs page late Thursday that it expected X to be shut down by de Moraes, “simply because we would not comply with his illegal orders to censor his political opponents.”

    “When we attempted to defend ourselves in court, Judge de Moraes threatened our Brazilian legal representative with imprisonment. Even after she resigned, he froze all of her bank accounts,” the company wrote. “Our challenges against his manifestly illegal actions were either dismissed or ignored. Judge de Moraes’ colleagues on the Supreme Court are either unwilling or unable to stand up to him.”

    X has clashed with de Moraes over its reluctance to comply with orders to block users.

    Accounts that the platform previously has shut down on Brazilian orders include lawmakers affiliated with former President Jair Bolsonaro’s right-wing party and activists accused of undermining Brazilian democracy. X’s lawyers in April sent a document to the Supreme Court saying that since 2019 it had suspended or blocked 226 users.

    In his decision Friday, de Moraes cited Musk’s statements as evidence that X’s conduct “clearly intends to continue to encourage posts with extremism, hate speech and anti-democratic discourse, and to try to withdraw them from jurisdictional control.”

    Musk, a self-proclaimed “free speech absolutist,” has repeatedly claimed the justice’s actions amount to censorship, and his argument has been echoed by Brazil’s political right. He has often insulted de Moraes on his platform, characterizing him as a dictator and tyrant.

    De Moraes’ defenders have said his actions aimed at X have been lawful, supported by most of the court’s full bench and have served to protect democracy at a time when it is imperiled. He wrote Friday that his ruling is based on Brazilian law requiring internet service companies to have representation in the country so they can be notified of relevant court decisions and take requisite action — notably the takedown of illicit content posted by users, especially given an anticipated churn of misinformation during October municipal elections.

    The looming shutdown is not unprecedented in Brazil.

    Lone Brazilian judges shut down Meta’s WhatsApp, the nation’s most widely used messaging app, several times in 2015 and 2016 due to the company’s refusal to comply with police requests for user data. In 2022, de Moraes threatened the messaging app Telegram with a nationwide shutdown, arguing it had repeatedly ignored Brazilian authorities’ requests to block profiles and provide information. He ordered Telegram to appoint a local representative; the company ultimately complied and stayed online.

    X and its former incarnation, Twitter, have been banned in several countries — mostly authoritarian regimes such as Russia, China, Iran, Myanmar, North Korea, Venezuela and Turkmenistan. Other countries, such as Pakistan, Turkey and Egypt, have also temporarily suspended X before, usually to quell dissent and unrest. Twitter was banned in Egypt after the Arab Spring uprisings, which some dubbed the “Twitter revolution,” but it has since been restored.

    Earlier on Friday, a search on X showed hundreds of Brazilian users inquiring about VPNs that could potentially enable them to continue using the platform by making it appear they were logging on from outside the country. It was not immediately clear how Brazilian authorities would police this practice and impose fines cited by de Moraes.

    “This is an unusual measure, but its main objective is to ensure that the court order to suspend the platform’s operation is, in fact, effective,” Filipe Medon, a specialist in digital law and professor at the law school of Getulio Vargas Foundation, a university in Rio de Janeiro, told The Associated Press. “As a general rule, there are no provisions in Brazilian law that prevent users from using VPNs, since they are not the subjects of the blocking and suspension orders, but rather the companies.”

    Even so, Mariana de Souza Alves Lima, known by her handle MariMoon, showed her 1.4 million followers on X where she intends to go, posting a screenshot of rival social network Bluesky.

    X said that it plans to publish what it has called de Moraes’ “illegal demands” and related court filings “in the interest of transparency.”

    Also on Thursday evening, Starlink, Musk’s satellite internet service provider, said on X that de Moraes this week froze its finances, preventing it from doing any transactions in the country where it has more than 250,000 customers.

    “This order is based on an unfounded determination that Starlink should be responsible for the fines levied—unconstitutionally—against X. It was issued in secret and without affording Starlink any of the due process of law guaranteed by the Constitution of Brazil. We intend to address the matter legally,” Starlink said in its statement. The law firm representing Starlink told the AP that the company appealed, but wouldn’t make further comment.

    Another Brazilian Supreme Court Justice, Cristiano Zanin, rejected an appeal by Starlink to unfreeze the company’s bank accounts.

    Musk replied to people sharing the reports of the freeze, adding insults directed at de Moraes. “This guy @Alexandre is an outright criminal of the worst kind, masquerading as a judge,” he wrote.

    Musk later posted on X that SpaceX, which runs Starlink, will provide free internet service in Brazil “until the matter is resolved” since “we cannot receive payment, but don’t want to cut anyone off.”

    In his decision, de Moraes said he ordered the freezing of Starlink’s assets because X didn’t have enough money in its accounts to cover mounting fines, reasoning that the two companies are part of the same economic group.

    While ordering X’s suspension followed warnings and fines and so was appropriate, taking action against Starlink seems “highly questionable,” said Luca Belli, coordinator of the Getulio Vargas Foundation’s Technology and Society Center.

    “Yes, of course, they have the same owner, Elon Musk, but it is discretionary to consider Starlink as part of the same economic group as Twitter (X). They have no connection, they have no integration,” Belli said.

    ___

    Ortutay reported from San Francisco and Biller from Rio. AP writer Mauricio Savarese contributed from Sao Paulo.

  • Meta kills off misinformation tracking tool CrowdTangle despite pleas from researchers, journalists

    SAN FRANCISCO (AP) — Facebook and Instagram parent Meta Platforms has shut down CrowdTangle, a tool widely used by researchers, watchdog organizations and journalists to monitor social media posts, notably to track how misinformation spreads on the company’s platforms.

    Wednesday’s shutdown, which Meta announced earlier this year, has been protested by researchers and nonprofits. In May, dozens of groups, including the Center for Democracy and Technology, the Digital Forensic Research Lab at the Atlantic Council, Human Rights Watch and NYU’s Center for Social Media & Politics, sent a letter to the company asking that it keep the tool running through at least January so it would be available through the U.S. presidential elections.

    “This decision jeopardizes essential pre- and post-election oversight mechanisms and undermines Meta’s transparency efforts during this critical period, and at a time when social trust and digital democracy are alarmingly fragile,” the letter said.

    CrowdTangle “has been an essential tool in helping researchers parse through the vast amount of information on the platform and identify harmful content and threats,” it added.

    In March, the nonprofit Mozilla Foundation sent Meta a similar letter asking it to keep the tool, which was available for free, functioning until January. That letter was also signed by several dozen groups and individual academic researchers.

    “For years, CrowdTangle has represented an industry best practice for real-time platform transparency. It has become a lifeline for understanding how disinformation, hate speech, and voter suppression spread on Facebook, undermining civic discourse and democracy,” the Mozilla letter said.

    Meta has released an alternative to CrowdTangle, called the Meta Content Library. But access to it is limited to academic researchers and nonprofits, which excludes most news organizations. Critics have also complained that it’s not as useful as CrowdTangle — at least not yet.

    Nick Clegg, Meta’s president of global affairs, said in a blog post last week that the company has been gathering feedback about Meta Content Library from “hundreds of researchers in order to make it more user-friendly and help them find the data they need for their work.”

    Meta said Wednesday that CrowdTangle doesn’t provide a complete picture of what is happening on its platforms and said its new tools are more comprehensive.

    Meta acquired CrowdTangle in 2016.

  • ‘Chaos agent’: Suspected Trump hack comes as Iran flexes digital muscles ahead of US election

    WASHINGTON (AP) — With less than three months before the U.S. election, Iran is intensifying its efforts to meddle in American politics, U.S. officials and private cybersecurity firms say, with the suspected hack of Donald Trump’s campaign being only the latest and most brazen example.

    Iran has long been described as a “chaos agent” when it comes to cyberattacks and disinformation campaigns, and in recent months groups linked to the government in Tehran have covertly encouraged protests over Israel’s war in Gaza, impersonated American activists and created networks of fake news websites and social media accounts primed to spread false and misleading information to audiences in the U.S.

    While Russia and China remain bigger cyber threats against the U.S., experts and intelligence officials say Iran’s increasingly aggressive stance marks a significant escalation of efforts to confuse, deceive and frighten American voters ahead of the election.

    The pace will likely continue to increase as the election nears and America’s adversaries exploit the internet and advancements in artificial intelligence to sow discord and confusion.

    “We’re starting to really see that uptick and it makes sense, 90 days out from the election,” said Sean Minor, a former information warfare expert for the U.S. Army who now analyzes online threats for the cybersecurity firm Recorded Future, which has seen a sharp increase in cyber operations from Iran and other nations. “As we get closer, we suspect that these networks will get more aggressive.”

    The FBI is investigating the suspected hack of the Trump campaign as well as efforts to infiltrate the campaign of President Joe Biden, which became Vice President Kamala Harris’ campaign when Biden dropped out. Trump’s campaign announced Saturday that someone illegally accessed and retrieved internal documents, later distributed to three news outlets. The campaign blamed Iran, noting a recent Microsoft report revealing an attempt by Iranian military intelligence to hack into the systems of one of the presidential campaigns.

    “A lot of people think it was Iran. Probably was,” Trump said Tuesday on Univision before shrugging off the value of the leaked material. “I think it’s pretty boring information.”

    Iran has denied any involvement in the hack and said it has no interest in meddling with U.S. politics.

    That denial is disputed by U.S. intelligence officials and private cybersecurity firms who have linked Iran’s government and military to several recent campaigns targeting the U.S., saying they reflect Iran’s growing capabilities and its increasing willingness to use them.

    On Wednesday Google announced it had uncovered a group linked to Iran’s Revolutionary Guard that it said had tried to infiltrate the personal email accounts of roughly a dozen people linked to Biden and Trump since May.

    The company, which contacted law enforcement with its suspicions, said the group is still targeting people associated with Biden, Trump and Harris. It wasn’t clear whether the network identified by Google was connected to the attempt that Trump and Microsoft reported, or was part of a second attempt to infiltrate the campaign’s systems.

    Iran has a few different motives in seeking to influence U.S. elections, intelligence officials and cybersecurity analysts say. The country seeks to spread confusion and increase polarization in the U.S. while undermining support for Israel. Iran also aims to hurt candidates that it believes would increase tension between Washington and Tehran.

    That’s a description that fits Trump, whose administration ended a nuclear deal with Iran, reimposed sanctions and ordered the killing of Iranian Gen. Qassem Soleimani, an act that prompted Iran’s leaders to vow revenge.

    The two leaders of the Senate intelligence committee issued a joint letter on Wednesday warning Tehran and other governments hostile to the U.S. that attempts to deceive Americans or disrupt the election will not be tolerated.

    “There will be consequences to interfering in the American democratic process,” wrote the committee’s chairman, Democratic Sen. Mark Warner of Virginia, along with Republican Sen. Marco Rubio of Florida, the vice chairman.

    In 2021, federal authorities charged two Iranian nationals with attempting to interfere with the election the year before. As part of the plot, the men wrote emails claiming to be members of the far-right Proud Boys in which they threatened Democratic voters with violence.

    Last month, Director of National Intelligence Avril Haines said the Iranian government had covertly supported American protests against Israel’s war against Hamas in Gaza. Groups linked to Iran’s government also posed as online activists, encouraged campus protests and provided financial support to some protest groups, Haines said.

    Recent reports from Microsoft and Recorded Future have also linked Iran’s government to networks of fake news websites and social media accounts posing as Americans. The networks were discovered before they gained much influence and analysts say they may have been created ahead of time, to be activated in the weeks immediately before the election.

    The final weeks before an election may be the most dangerous when it comes to foreign efforts to impact voting. That’s when voters pay the most attention to politics and when false claims about candidates or voting can do the most damage.

    So-called ‘hack-and-leak’ attacks like the one reported by Trump’s campaign involve a hacker obtaining sensitive information from a private network and then releasing it, either to select individuals, the news media or to the public. Such attacks not only expose confidential information but can also raise questions about cybersecurity and the vulnerability of critical networks and systems.

    Especially concerning for elections, authorities say, would be an attack targeting a state or local election office that reveals sensitive information or disables election operations. Such an incursion could undermine trust in voting, even if the information exposed is worthless. Experts refer to this last possibility as a “perception hack,” when hackers steal information not because of its value, but because they want to flaunt their capabilities while spreading fear and confusion among their adversaries.

    “That can actually be more of a threat — the spectacle, the marketing this gives foreign adversaries — than the actual hack,” said Gavin Wilde, a senior fellow at the Carnegie Endowment for International Peace and former National Security Council analyst who specializes in cyber threats.

    In 2016, Russian hackers infiltrated Hillary Clinton’s campaign emails, ultimately obtaining and releasing some of the campaign’s most protected information in a hack-and-leak that upended the campaign in its final weeks.

    Recent advances in artificial intelligence have made it easier than ever to create and spread disinformation, including lifelike video and audio allowing hackers to impersonate someone and gain access to their organization’s systems. Nevertheless, the alleged hack of the Trump campaign reportedly involved much simpler techniques: someone gained access to an email account that lacked sufficient security protections.

    While people and organizations can take steps to minimize their vulnerability to hacks, nothing can eliminate the risk entirely, Wilde said, or prevent foreign adversaries from mounting attacks on campaigns.

    “The tax we pay for being a digital society is that these hacks and leaks are unavoidable,” he said. “Whether you’re a business, a campaign or a government.”

    __

    Associated Press writer Ali Swenson contributed to this report from New York.

  • King Charles III applauds people who stood against racism during recent unrest in UK

    LONDON — King Charles III has applauded people who took to the streets of British towns and cities earlier this week to help blunt days of unrest fueled by far-right activists and misinformation about a stabbing attack that killed three girls.

    Charles on Friday held telephone audiences with Prime Minister Keir Starmer and law enforcement officials during which he offered his “heartfelt thanks” to police and other emergency workers for their efforts to restore order and help those affected by the violence, Buckingham Palace said in a statement.

    “The king shared how he had been greatly encouraged by the many examples of community spirit that had countered the aggression and criminality from a few with the compassion and resilience of the many,” the palace said. “It remains his majesty’s hope that shared values of mutual respect and understanding will continue to strengthen and unite the nation.”

    British police remain on alert for further violence after the nation was convulsed by rioting for more than a week as crowds spouting anti-immigrant and Islamophobic slogans attacked mosques, looted shops and clashed with police. The government described the violence as “far-right thuggery,” and mobilized 6,000 specially trained police officers to quell the unrest.

    The disturbances have been fueled by right-wing activists using social media to spread misinformation about the July 29 knife attack in which three girls between the ages of 6 and 9 were killed during a Taylor Swift-themed dance event in Southport, a seaside town north of Liverpool.

    Police detained a 17-year-old suspect. Rumors, later debunked, quickly circulated on social media that the suspect was an asylum-seeker, or a Muslim immigrant.

    On Saturday, the family of one of the Southport victims, Bebe King, 6, thanked their community, friends and even strangers who had offered the family solace in their grief.

    “The outpouring of love and support from our community and beyond has been a source of incredible comfort during this unimaginably difficult time,” they wrote. “From the pink lights illuminating Sefton and Liverpool, to the pink bows, flowers, balloons, cards, and candles left in her memory, we have been overwhelmed by the kindness and compassion shown to our family.”

    The unrest has largely dissipated since Wednesday night, when a wave of expected far-right demonstrations failed to materialize after thousands of peaceful protesters flocked to locations around the U.K. to show their support for immigrants and asylum-seekers.

    Police had prepared for confrontations at more than 100 locations after right-wing groups circulated lists of potential targets on social media. While anti-racism groups planned counterprotests in response, in most places they reclaimed the streets with nothing to oppose.

    Starmer has insisted the police will remain on high alert this weekend, which marks the beginning of the professional soccer season. Authorities have been studying whether there is a link between the rioters and groups of “football hooligans” known to incite trouble at soccer matches.

    “My message to the police and all of those that are charged with responding to disorder is maintain that high alert,” Starmer said on Friday while visiting the special operations room of London’s Metropolitan Police Service.

    The National Police Chiefs’ Council said some 741 people have been arrested in connection with the violence, including 304 who have been charged with criminal offenses.

    Courts around the country have already begun hearing the cases of those charged in relation to the unrest, with some receiving sentences of three years in prison.

    Starmer has said he is convinced that the “swift justice that has been dispensed in our courts” will discourage rioters from returning to the streets this weekend.

  • Information Pollution: The Tragedy of the Commons and Well-Poisoning on the Internet

    Discover how the internet propagates “information pollution” and how it threatens our collective understanding of facts and truth. Here’s how to navigate the chaos and find clean water to drink.


    In a healthy and functional society, shared common resources are essential for the well-being and sustainability of the community.

    These resources can include natural goods such as land, water, and the environment, as well as man-made goods such as public schools, parks, and libraries.

    Generally, the ability to manage, sustain, and distribute these resources determines the success of a society, community, or nation as a whole.

    The Tragedy of the Commons

    The tragedy of the commons is a concept introduced by ecologist Garrett Hardin in 1968, describing a scenario where individuals, acting in their own self-interest, overuse and deplete a shared resource, ultimately harming the entire community.

    Classic examples include overgrazing on common land, overfishing in shared waters, and pollution of air and water. The key issue is that while the benefits of exploitation are enjoyed by individuals, the costs are distributed among the entire community.

    Information as a Shared Resource

    One common resource that is often neglected is news and information.

    Over the last century, newspapers, radio, TV, and the internet have become the lifeblood of many nations, shaping public opinion and collective consciousness.

    Truth and reliable information function as shared resources critical for various societal functions, including governance, public health, and social interaction.

    Just as a community depends on clean water, society relies on accurate information to make decisions, build trust, and maintain peace and harmony.

    When these information resources are polluted, the consequences can be severe, leading to mistrust, division, and poor decision-making.

    Information Pollution

    Information is a shared resource that is susceptible to degradation through neglect or deliberate actions, leading to a type of “information pollution.”

    This phenomenon mirrors the “tragedy of the commons,” where the self-interested actions of individuals can spoil a common resource for everyone.

    Information pollution occurs when false, misleading, or harmful information is introduced into the public discourse. This can happen through:

    • Misinformation: Incorrect or misleading information spread unintentionally.
    • Disinformation: False information spread deliberately to deceive.
    • Malinformation: Information that is true but presented in a misleading context to cause harm.

    All three types of information pollution hurt people’s ability to discern truth from fiction.

    Well-Poisoning on the Internet

    The internet can be a wonderful place to learn new things, but it’s also littered with information pollution, especially on social media sites filled with bots, spammers, and grifters.

    When a water well is poisoned, everyone in the town ends up drinking dirty and contaminated water. The same is true for information pollution on the internet – and social media is dirty water.

    There are a lot of factors that drive information pollution on the internet, but key ones include:

    • Clickbait and engagement farming – For most people, the only measure of success on the internet is how much attention you get. An outrageous lie or falsehood will get a million impressions before anyone tries to confirm what’s been said. People rarely correct themselves if a lie is getting them a lot of impressions.
    • Grifting and easy money – Many people see the internet as an opportunity for a quick buck, so a lot of content you see is purely money-driven, including advertisements, sponsored content, or superficial merchandise (mugs, t-shirts, diet supplements, brain enhancement pills, etc.). If you see anyone selling these types of products on the internet, you can be certain that truth is not their main motivation.
    • Bots and algorithm-hacking – Artificial engagement on the internet is a huge problem. A lot of viral content you see these days is pushed by bot farms and clever algorithm manipulation. Organic growth by independent thinkers and creators used to be a genuine thing about a decade ago, but most big e-celebrities and influencers you see today are completely astroturfed.
    • Politics and propaganda – A lot of misinformation and disinformation is politically driven propaganda. Governments and corporations are known to create their own bots and internet campaigns to shape public opinion in one direction or another.
    • Echo chambers and groupthink – While it’s natural to associate with people who think like us and share the same beliefs, the internet tends to heighten this tendency. People only spend time on online spaces that confirm their existing beliefs and very rarely seek out different perspectives.

    All of these factors make the internet a less reliable place for seeking truth and information. These phenomena have only increased over the past decade, making the internet increasingly harmful and stupid (to be frank).

    Filtering Dirty Water

    Now more than ever we need to find ways to filter the information we are being exposed to online. Effective strategies you can employ include:

    • Pay attention to your digital environment – Ideas and information can often seep into our brains without us even realizing it, especially when we are consistently exposed to the same information over and over again. What are the top five websites you visit? Where do you go for news and current events? What does your social media feed look like? All of these make up a part of your digital environment, which is influencing you whether you realize it or not, so pay close attention to the types of online spaces you’re spending time in.
    • High value vs. low value information – Not all information is created equal. A random social media post that goes viral doesn’t have the same level of rigor as a peer-reviewed study. The information pyramid is a helpful guideline for assessing what information sources tend to be more trustworthy, accurate, and high value. Please note that this doesn’t mean a social media post is always wrong, or a scientific study is always right, just that one source tends to have more substance than another and you should generally give it more weight.
    • Be your own fact-checker – Too many people take funny memes, shocking screenshots, and catchy headlines at face value without ever digging deeper. This causes a lot of misinformation and disinformation to go viral, and it can also lead to some comical and embarrassing errors (“You actually believed that?!”). While there are many professional “fact checkers” on various sites, even those can be misleading and ideologically motivated. Unfortunately, in our low trust information world, there’s only one fact-checker you can really count on and that’s yourself. Learn how to double-check sources, dig up original links, and read full articles so you understand the context before accepting something as true.
    • Learn basic statistical literacy – Numbers can be very persuasive on a purely psychological level; if someone can make a claim with a statistic to back it, we tend to automatically think it must be true. However, statistics and graphs can be easily manipulated and deceptive. Understanding basic statistical literacy (such as knowing that “correlation doesn’t mean causation,” or checking the y- and x-axes before reading a graph) can give you a clearer idea of what a number is really telling you, and what is just being speculated, guessed, or misunderstood. A short worked example of the correlation point follows this list.
    • Beware of personality-driven consumption – Many people get their news and information from famous personalities such as news commentators, celebrities, influencers, or podcasters. While it’s natural to listen to people we like and trust, this can backfire when we end up mindlessly accepting information rather than confirming it on its own merit. For many, there’s an entertainment factor too: it’s fun to root for your “leader/clan” and make fun of the other “leaders/clans,” and some people even form parasocial relationships with their favorite personalities, seeing them as a type of best friend. However, what often happens in these hyper personality-driven spaces is that they devolve into petty drama and gossip. That may be “fun” to participate in for some people, but it’s not education.
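
    As flagged in the statistical-literacy point above, here is a minimal, hypothetical sketch (standard-library Python with invented numbers, not data from any study) of why “correlation doesn’t mean causation”: two series that merely trend upward over the same years correlate almost perfectly even though neither has anything to do with the other.

    ```python
    from math import sqrt

    def pearson(xs, ys):
        """Pearson correlation coefficient of two equal-length sequences."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sqrt(sum((x - mx) ** 2 for x in xs))
        sy = sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Invented illustrative data: both series simply grow year over year.
    ice_cream_sales  = [102, 110, 118, 121, 130, 138, 141, 150, 158, 166]  # thousands of units
    smartphone_users = [1.9, 2.1, 2.3, 2.5, 2.7, 2.9, 3.0, 3.2, 3.4, 3.6]  # billions

    print(f"correlation: {pearson(ice_cream_sales, smartphone_users):.3f}")
    # Prints roughly 0.99 -- a near-perfect correlation with no causal link at all.
    ```

    A post could truthfully say these two things are “99% correlated,” and the number would still tell you nothing about one causing the other; that is exactly the kind of claim worth double-checking before sharing.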

    If you keep these guidelines in mind, you’ll be able to navigate the dirty waters of the internet more effectively and hopefully find some springs of fresh and clean water to drink from.

    Conclusion

    Truth and reliable information are vital commons that underpin a healthy and functional society. Just as communities must manage natural resources responsibly to avoid the tragedy of the commons, societies must actively protect and nurture the integrity of their information ecosystems. Each of us plays a role in managing the information commons and minimizing information pollution.


    Steven Handel

  • UK government calls on Elon Musk to act responsibly amid provocative posts as unrest grips country

    LONDON — The British government has called on Elon Musk to act responsibly after the tech billionaire used his social media platform X to unleash a barrage of posts that officials say risk inflaming the violent unrest gripping the country.

    Justice Minister Heidi Alexander made the comments Tuesday morning after Musk posted a comment saying that “Civil war is inevitable” in the U.K. Musk later doubled down, highlighting complaints that the British criminal justice system treats Muslims more leniently than far-right activists and comparing Britain’s crackdown on social media users to the Soviet Union.

    “Use of language such as a ‘civil war’ is in no way acceptable,” Alexander told Times Radio. “We are seeing police officers being seriously injured, buildings set alight, and so I really do think that everyone who has a platform should be exercising their power responsibly.”

    Britain has been shaken by violence for more than a week, as police clashed with crowds spouting anti-immigrant and Islamophobic slogans in cities and towns from Northern Ireland to the south coast of England. The unrest began after right-wing activists used social media to spread misinformation about a knife attack that killed three girls during a Taylor Swift-themed dance event on July 29.

    Prime Minister Keir Starmer, who has described the riots as “far-right thuggery,” said after an emergency meeting with law enforcement officials and government ministers Tuesday that perpetrators will swiftly be punished.

    More than 400 people have been arrested over the violence in more than two dozen towns and cities, and about 100 have been charged, after Starmer announced plans to ramp up the criminal justice system.

    An 18-year-old man who trashed police cars in Bolton, in northern England, on Sunday was believed to be the first person sentenced in the unrest. James Nelson got a two-month prison term Tuesday after pleading guilty in Manchester Magistrates’ Court to criminal damage, police said.

    “That should send a very powerful message to anybody involved, either directly or online, that you are likely to be dealt with within a week and that nobody, but nobody, should be involving themselves in this disorder,” Starmer said.

    Starmer deflected questions from reporters about Musk, saying his focus was on keeping communities safe.

    The government is calling on social media companies, such as Musk’s X, formerly known as Twitter, to do more to combat the spread of misleading and inflammatory information online.

    Alexander said Tuesday that the government would look at strengthening the existing Online Safety Act, which was approved last year and won’t be fully implemented until 2025.

    “We’ve been working with the social media companies, and some of the action that they’ve taken already with the automatic removal of some false information is to be welcomed,” Alexander told the BBC. “But there is undoubtedly more that the social media companies could and should be doing.”

    That type of rhetoric may be part of what sparked Musk’s attack on the government. Musk has taken a more combative approach to his critics than was the norm in Silicon Valley technology firms, said Alex Krasodomski, who studies the intersection between technology and politics at Chatham House, a London-based think tank.

    “He has sparred with U.K. and EU policymakers in the past when they have questioned his approaches to content moderation on the platform,” Krasodomski said.

    X didn’t respond to an email seeking comment. It rarely responds to media requests.

    Musk just kept wading into the debate about the violence in Britain.

    After Starmer posted a comment on X saying that the government “will not tolerate attacks on mosques or on Muslim communities,” Musk responded with the question, “Shouldn’t you be concerned about attacks on *all* communities?”

    Musk attached a similar comment to a video that said it showed a “Muslim patrol” attacking a pub in Birmingham, highlighting the original post for his 193 million followers.

    Such comments are vintage Musk, who has a history of making provocative statements, said Stephanie Alice Baker, a sociologist at City University of London who has studied online discourse. Musk frequently comments on geopolitical issues and his fans come to his defense when he is criticized, Baker said.

    Earlier this year, he clashed with a Brazilian supreme court justice over free speech, far-right accounts and purported misinformation on X. He also accused Venezuela’s socialist president, Nicolás Maduro, of “major election fraud” after last week’s disputed election.

    Those comments are closely watched by a group of people attracted by his success in business, Baker said.

    “Musk’s following represents the cult of the entrepreneur …” she said. “By questioning convention, they are depicted as gifted visionaries, who can predict the future and bring it into being. For his fans and followers, Musk’s impulsive comments are perceived as part of his genius.”

    ___

    Associated Press writer Brian Melley contributed.

  • (Media News) Misinformation Floods Internet Amid Rapid 2024 Election Developments

    The upcoming 2024 election has led to a surge in online misinformation, making it difficult to discern truth from fiction. In the span of two weeks, there was an attempted assassination of former President Trump, the dismissal of a federal indictment against him, and a jubilant Republican National Convention where he was named the GOP nominee. Following the assassination attempt, the Secret Service director resigned.

    President Biden then announced his withdrawal from the race after a poor debate performance, clearing the way for Vice President Harris to become the likely Democratic nominee. These unexpected events left Americans struggling to process the news influx, creating fertile ground for false claims.

    Conspiracy theories and false information quickly circulated online, including claims that Trump’s assassination attempt was staged and that the shooter had been identified prematurely. False rumors also spread about former President Jimmy Carter’s death. Biden’s COVID-19 diagnosis and subsequent withdrawal fueled conspiracies about his health, amplified by figures like Alex Jones and Charlie Kirk. Misleading conjecture also emerged about Harris, with right-wing activist Matt Walsh revisiting her past relationship with former San Francisco Mayor Willie Brown to suggest she advanced politically because of it.

    Experts say such narratives are not new but remain powerful. Claims that Harris’s race was a factor in her political success drew parallels to the conspiracy theories that surrounded President Obama’s rise.

    Social media platforms, especially Elon Musk’s X, have been criticized for relaxed content moderation, contributing to the rapid spread of misinformation.

    With the November election approaching, experts advise caution and critically evaluating online information. “What people can do is log off and really try to take a breath,” suggested conspiracy theory researcher Mike Rothschild.


    Media Bias Fact Check

  • AI could supercharge disinformation and disrupt EU elections, experts warn

    BRUSSELS (AP) — Voters in the European Union are set to elect lawmakers starting Thursday for the bloc’s parliament, in a major democratic exercise that’s also likely to be overshadowed by online disinformation.

    Experts have warned that artificial intelligence could supercharge the spread of fake news that could disrupt the election in the EU and many other countries this year. But the stakes are especially high in Europe, which has been confronting Russian propaganda efforts as Moscow’s war with Ukraine drags on.

    Here’s a closer look:

    WHAT’S HAPPENING?

    Some 360 million people in 27 nations — from Portugal to Finland, Ireland to Cyprus — will choose 720 European Parliament lawmakers in an election that runs Thursday to Sunday. In the months leading up to the vote, experts have observed a surge in the quantity and quality of fake news and anti-EU disinformation being peddled in member countries.

    A big fear is that deceiving voters will be easier than ever, enabled by new AI tools that make it easy to create misleading or false content. Some of the malicious activity is domestic, some international. Russia is most widely blamed, and sometimes China, even though hard evidence directly attributing such attacks is difficult to pin down.

    “Russian state-sponsored campaigns to flood the EU information space with deceptive content is a threat to the way we have been used to conducting our democratic debates, especially in election times,” Josep Borrell, the EU’s foreign policy chief, warned on Monday.

    He said Russia’s “information manipulation” efforts are taking advantage of increasing social media penetration “and cheap AI-assisted operations.” Bots are being used to push smear campaigns against European political leaders who are critical of Russian President Vladimir Putin, he said.

    HAS ANY DISINFO HAPPENED YET?

    There have been plenty of examples of election-related disinformation.

    Two days before national elections in Spain last July, a fake website was registered that mirrored one run by authorities in the capital Madrid. It posted an article falsely warning of a possible attack on polling stations by the disbanded Basque militant separatist group ETA.

    In Poland, two days before the October parliamentary election, police descended on a polling station in response to a bogus bomb threat. Social media accounts linked to what authorities call the Russian interference “infosphere” claimed a device had exploded.

    Just days before Slovakia’s parliamentary election in November, AI-generated audio recordings impersonated a candidate discussing plans to rig the election, leaving fact-checkers scrambling to debunk them as false as they spread across social media.

    Just last week, Poland’s national news agency carried a fake report saying that Prime Minister Donald Tusk was mobilizing 200,000 men starting on July 1, in an apparent hack that authorities blamed on Russia. The Polish News Agency “killed,” or removed, the report minutes later and issued a statement saying that it wasn’t the source.

    It’s “really worrying, and a bit different than other efforts to create disinformation from alternative sources,” said Alexandre Alaphilippe, executive director of EU DisinfoLab, a nonprofit group that researches disinformation. “It raises notably the question of cybersecurity of the news production, which should be considered as critical infrastructure.”

    WHAT’S THE GOAL OF DISINFORMATION?

    Experts and authorities said Russian disinformation is aimed at disrupting democracy, by deterring voters across the EU from heading to the ballot boxes.

    “Our democracy cannot be taken for granted, and the Kremlin will continue using disinformation, malign interference, corruption and any other dirty tricks from the authoritarian playbook to divide Europe,” European Commission Vice-President Vera Jourova warned the parliament in April.

    Tusk, meanwhile, called out Russia’s “destabilization strategy on the eve of the European elections.”

    On a broader level, the goal of “disinformation campaigns is often not to disrupt elections,” said Sophie Murphy Byrne, senior government affairs manager at Logically, an AI intelligence company. “It tends to be ongoing activity designed to appeal to conspiracy mindsets and erode societal trust,” she told an online briefing last week.

    Narratives are also fabricated to fuel public discontent with Europe’s political elites, attempt to divide communities over issues like family values, gender or sexuality, sow doubts about climate change and chip away at Western support for Ukraine, EU experts and analysts say.

    WHAT HAS CHANGED?

    Five years ago, when the last European Union election was held, most online disinformation was laboriously churned out by “troll farms” employing people working in shifts writing manipulative posts in sometimes clumsy English or repurposing old video footage. Fakes were easier to spot.

    Now, experts have been sounding the alarm about the rise of generative AI, which they say threatens to supercharge the spread of election disinformation worldwide. Malicious actors can use the same technology that underpins easy-to-use platforms, like OpenAI’s ChatGPT, to create authentic-looking deepfake images, videos and audio. Anyone with a smartphone and a devious mind can potentially create false, but convincing, content aimed at fooling voters.

    “What is changing now is the scale that you can achieve as a propaganda actor,” said Salvatore Romano, head of research at AI Forensics, a nonprofit research group. Generative AI systems can now be used to automatically pump out realistic images and videos and push them out to social media users, he said.

    AI Forensics recently uncovered a network of pro-Russian pages that it said took advantage of Meta’s failure to moderate political advertising in the European Union.

    Fabricated content is now “indistinguishable” from the real thing and takes disinformation experts a lot longer to debunk, said Romano.

    WHAT ARE AUTHORITIES DOING ABOUT IT?

    The EU is using a new law, the Digital Services Act, to fight back. The sweeping law requires platforms to curb the risk of spreading disinformation and can be used to hold them accountable under the threat of hefty fines.

    The bloc is using the law to demand information from Microsoft about risks posed by its Bing Copilot AI chatbot, including concerns about “automated manipulation of services that can mislead voters.”

    The DSA has also been used to investigate Facebook and Instagram owner Meta Platforms for not doing enough to protect users from disinformation campaigns.

    The EU has also passed a wide-ranging artificial intelligence law, which includes a requirement for deepfakes to be labelled, but it won’t take effect in time for the vote; its provisions will phase in over the next two years.

    HOW ARE SOCIAL MEDIA COMPANIES RESPONDING?

    Most tech companies have touted the measures they’re taking to protect the European Union’s “election integrity.”

    Meta Platforms — owner of Facebook, Instagram and WhatsApp — has said it will set up an election operations center to identify potential online threats. It also has thousands of content reviewers working in the EU’s 24 official languages and is tightening its policies on AI-generated content, including labeling such material and “downranking” content that violates its standards.

    Nick Clegg, Meta’s president of global affairs, has said there’s no sign that generative AI tools are being used on a systemic basis to disrupt elections.

    TikTok said it will set up fact-checking hubs inside the video-sharing app. YouTube owner Google said it’s working with fact-checking groups and will use AI to “fight abuse at scale.”

    Elon Musk went the opposite way with his social media platform X, previously known as Twitter. “Oh you mean the ‘Election Integrity’ Team that was undermining election integrity? Yeah, they’re gone,” he said in a post in September.

    ___

    A previous version of this story misspelled the given name of EU foreign policy chief Josep Borrell.


  • Sexist tropes and misinformation swirl online as Mexico prepares to elect its first female leader

    Sexist tropes and misinformation swirl online as Mexico prepares to elect its first female leader


    Mexican voters are poised to elect their first female president, a cause for celebration for many that has also touched off a flurry of false and misogynist online claims, blurring the lines between fact and fiction.

    The two leading candidates, both women, have had to respond to demeaning attacks about their appearance, their credentials and their ability to lead the nation.

    The candidate considered the favorite in Sunday’s contest, former Mexico City Mayor Claudia Sheinbaum, has also faced slurs about her Jewish background as well as repeatedly debunked claims she was born in Hungary. This week, in an apparent bid to undermine her candidacy, a social media account impersonating a legitimate news outlet posted fake, AI-generated audio of Sheinbaum admitting that her campaign was failing in a key Mexican state.


    The wave of election misinformation facing voters in Mexico is the latest example of how the internet, social media and AI are fueling the spread of false, misleading or hateful content in democracies around the world, warping public discourse and potentially influencing election outcomes.

    “We have a general atmosphere of disinformation here in Mexico, but it’s slightly different from what is happening in India, or the U.S.,” said Manuel Alejandro Guerrero, a professor and communications researcher at the Universidad Iberoamericana in Mexico City.

    In Mexico’s case, that misinformation is the result of growing distrust of the news media, violence committed by drug cartels, and rapid increases in social media usage coupled with a lag in digital literacy. Guerrero added one more contributing factor now familiar to Americans: political leaders who willingly spread disinformation themselves.


    Sheinbaum is a member of the Morena party, led by current President Andrés Manuel López Obrador. She faces opposition candidate Xóchitl Gálvez and Jorge Álvarez Máynez of the small Citizen Movement party.

    Compared with election misinformation spread about male candidates, the attacks against Gálvez and Sheinbaum often take a particularly personal nature and focus on their gender, according to Maria Calderon, an attorney and researcher from Mexico who works with the Mexico Institute, a think tank based in Washington, D.C., that studies online politics.

    “I was surprised by how cruel the comments could be,” said Calderon, whose analysis found that attacks on female candidates like Sheinbaum and Gálvez typically focus on their appearance, or their credentials, whereas misinformation about male candidates is more often about policy proposals.

    “A lot of direct attacks on their weight, their height, how they dressed, the way they behave, the way they talk,” Calderon said.

    She suggested that some of the sexism can be traced back to Mexico’s “machismo” culture and strong Catholic roots. Women only received the right to vote in Mexico in 1953.

    López Obrador has spread some of the false claims targeting Gálvez, as he did last year when he erroneously said she supported plans to end several popular social programs if elected. Despite her efforts to set the record straight, the narrative continues to dog her campaign, showing just how effective political misinformation can be even when debunked.

    Con artists have also gotten in on the misinformation business in Mexico, using AI deepfake videos of Sheinbaum in an effort to peddle investment scams, for instance.

    “You’ll see that it’s my voice, but it’s a fraud,” Sheinbaum said after one deepfake of her supposedly pitching an investment scam went viral.

    As they have in other nations, the tech companies that operate most of the major social media platforms say they have rolled out a series of programs and policies designed to blunt the effect of misinformation ahead of the election.

    Meta and other U.S.-based tech platforms have been criticized for focusing most of their efforts on misinformation in English while taking a “cookie-cutter” approach to the rest of the globe.

    “We are focused on providing reliable election information while combating misinformation across languages,” according to a statement from Meta, the owner of Facebook, Instagram and WhatsApp, about its election plans.

    The specter of violence has haunted the election since the first campaigns began. Dozens of candidates for smaller offices have been killed or abducted by criminal gangs. Drug cartels have spread terror in the lead up to the election, spraying campaign rallies with gunfire, burning ballots and preventing polling places from being set up.

    “This has been the most violent election that Mexico has had since we started recording elections,” Calderon said.


  • 3 tips to spot misinformation online

    3 tips to spot misinformation online

    It’s more important than ever to scrutinize what you see online. These three tips from CBS News Confirmed will help you know what to trust.


  • Russian-linked cybercampaigns put bull’s-eye on France’s Olympics and elections

    Russian-linked cybercampaigns put bull’s-eye on France’s Olympics and elections


    PARIS — Photos of blood-red hands on a Holocaust memorial. Caskets at the Eiffel Tower. A fake French military recruitment drive calling for soldiers in Ukraine, and major French news sites improbably registered in an obscure Pacific territory, population 15,000.

    All are part of disinformation campaigns orchestrated out of Russia and targeting France, according to French officials and cybersecurity experts in Europe and the United States. France’s legislative elections and the Paris Olympics sent them into overdrive.

    More than a dozen reports issued in the past year point to an intensifying effort from Russia to undermine France, particularly the upcoming Games, and President Emmanuel Macron, who is one of Ukraine’s most vocal supporters in Europe.

    ___

    This story, supported by the Pulitzer Center for Crisis Reporting, is part of an Associated Press series covering threats to democracy in Europe.

    ___

    The Russian campaigns sowing anti-French disinformation began online early last summer but first became tangible in October 2023 when more than 1,000 bots linked to Russia relayed photos of graffitied Stars of David in Paris and its suburbs.

    A French intelligence report said the Russian intelligence agency FSB ordered the tagging, as well as subsequent vandalism of a memorial to those who helped rescue Jews from the Holocaust.

    Photos from each event were amplified on social media by fake accounts linked to the Russian disinformation site RRN, according to cybersecurity experts. Russia denies any such campaigns. The French intelligence report says RRN is part of a larger operation orchestrated by Sergei Kiriyenko, a ranking Kremlin official.

    “You have to see this as an ecosystem,” said a French military official, who spoke on condition of anonymity to reveal information about the Russian effort. “It’s a hybrid strategy.”

    The tags and the vandalism had no direct link to Russia’s war in Ukraine, but they provoked a strong reaction from the French political class, with denunciations in the legislature and public debate. Antisemitic attacks are on the rise in France, and the war in Gaza has proven divisive.

    The Stars of David could be interpreted either as support for Israel or as opposition. The effect was to sow division and unease. French Jews in particular, a community of just 500,000 people that makes up a small proportion of the French population, have found themselves unwittingly thrust into the political fray.

    In March, just after Macron discussed the possibility of mobilizing the French military in Ukraine, a fake recruitment drive went up for the French army in Ukraine, spawning a series of posts in Russian- and French-language Telegram channels that got picked up in Russian and Belarusian media, according to a separate French government report seen by The Associated Press. On June 1, caskets appeared outside the Eiffel Tower, bearing the inscription “French soldiers in Ukraine.”

    The larger disinformation efforts have gained little traction in France, but officials said the real target may have been the Russian audience, to show that Russia’s war in Ukraine is, as Putin has said, really a war with the West.

    Among the broader goals, the French military official said, was a long-term and steady effort to sow social discord, erode faith in the media and democratic governments, undermine NATO, and sap Western support for Ukraine. Denigrating the Olympics, from which most Russian athletes are banned, is a bonus, according to French officials monitoring the increasingly strident posts warning of imminent unrest ahead of the Games.

    On June 9, the French far-right National Rally trounced Macron’s party in elections for the European Parliament. The party has historically been close to Russia: one of its leading figures, Marine Le Pen, cultivated ties to Putin for many years and supported Russia’s illegal annexation of Crimea from Ukraine in 2014. And its leading contender for prime minister, Jordan Bardella, has said he opposes sending long-range weapons to Kyiv.

    In more than 4,400 posts gathered since mid-November by antibot4navalny, a collective that analyzes Russian bot behavior, those targeting audiences in France and Germany predominated. The number of weekly posts ranged from 100 to 200 except for the week of May 5, when it dropped near zero, the data showed. That week, as it happens, was a holiday in Russia.

    Many of the posts redirect either to RRN or to sites that appear identical to major French media, but with the domain – and content – changed. At least two of the more recent mirrored sites are registered in Wallis and Futuna, a French Pacific territory 10 time zones from Paris. A click on the top of the fake page redirects back to the real news sites themselves to give the impression of authenticity. Other posts redirect to original sites controlled by the campaign itself, dubbed Doppelganger.

    The redirects shifted focus for the European elections and continued after Macron called the surprise legislative elections with just three weeks to spare. Three-quarters of posts from the week ahead of the June 30 first-round legislative vote that were directed toward a French audience focused on either criticizing Macron or boosting the National Rally, antibot4navalny found in data shared with The Associated Press.

    One post on a fake site purporting to be from Le Point, a current affairs magazine, and the French news agency AFP criticized Macron.

    “Our leaders have no idea how ordinary French people live but are ready to destroy France in the name of aid for Ukraine,” read the headline on June 25.

    Another site falsely claimed to be from Macron’s party, offering to pay 100 euros for a vote for him – and linking back to the party’s true website. And still another inadvertently left in a generative AI prompt calling for the rewrite of an article “taking a conservative stance against the liberal policies of the Macron administration,” according to findings last week from Insikt Group, the threat research division of the cybersecurity consultancy Recorded Future.

    “They’re scraping automatically, sending the text to the AI and asking the AI to introduce bias or slants into the article and rewrite it,” said Clément Briens, an analyst for Recorded Future.

    Briens said metrics tools embedded within the site are likely intended to prove that the campaigns were money well-spent for “whoever is doing the payouts for these operations.”

    The French government cybersecurity watchdog, Viginum, has published multiple reports since June 2023 singling out Russian efforts to sow divisions in France and elsewhere. That was around the time that pro-Kremlin Telegram feeds started promoting “Olympics has Fallen” — a full-length fake Netflix film featuring an AI-generated voice resembling Tom Cruise that criticized the International Olympic Committee, according to the Microsoft Threat Analysis Center.

    Microsoft said this campaign, which it dubbed Storm-1679, is fanning fears of violence at the Games and last fall disseminated digitally generated photos referring, among other things, to the attacks on Israeli athletes at the 1972 Olympics.

    The latest effort, which started just after the first round of the elections on June 30, merges fears of violence related to both the Olympics and the risk of protests after the decisive second round, antibot4navalny found. Viginum released a new report Tuesday detailing the risks ahead for the Games — not for violence but for disinformation.

    “Digital information manipulation campaigns have become a veritable instrument of destabilization of democracies,” Viginum said. “This global event will give untold informational exposure to malevolent foreign actors.” The word Russia appears nowhere in the report.

    Baptiste Robert, a French cybersecurity expert who ran unsuccessfully as an unaffiliated centrist in the legislative elections, called on his government – and especially lawmakers – to prepare for the digital threats to come.

    “This is a global policy of Russia: They really want to push people into the extremes,” he said before the first-round vote. “It’s working perfectly right now.”


  • ChatGPT gave incorrect answers to questions about how to vote in battleground states

    ChatGPT gave incorrect answers to questions about how to vote in battleground states


    A CBS News investigation found ChatGPT gave incorrect or incomplete answers to some questions about how to vote in battleground states ahead of the upcoming U.S. presidential election. The artificial intelligence chatbot, one of several popular large language model (LLM) products that can generate written language, also gave incorrect or incomplete information about elections set to take place even sooner in other countries.

    CBS News asked ChatGPT a number of practical questions that a prospective voter might have about how and where to vote, deadlines to vote and other requirements for voting in the battleground states of North Carolina, Pennsylvania, Wisconsin and Michigan.

    ChatGPT did give some correct answers and, in one case, its response shifted from incorrect to correct when it was asked the same question a few hours later. But it also gave a number of incorrect or incomplete answers, and for some questions its answers took several days to update from incorrect to correct.

    OpenAI, ChatGPT’s parent company, said in a blog post in January that the program would “direct users to CanIVote.org, the authoritative website on US voting information, when asked certain procedural election related questions — for example, where to vote.”

    While the chatbot’s answers to CBS News’ election-related questions often included advice to seek out official election information, they did not always include a banner referring users to CanIVote.org.

    “It’s concerning if you look at the broader trends of how LLMs are being advertised and how they’re being deployed,” Henry Ajder, an expert on AI and deepfakes, told CBS News. “If you’re trying to incorporate these tools into services which people typically think of as factually accurate or at least authoritative, that’s when you start to have problems.”



    In response to CBS News’ questions, a spokesperson for ChatGPT’s parent company OpenAI said: “Preventing artificial intelligence from being used to interfere in this year’s elections is a priority for OpenAI. We prohibit the use of our tools for campaigning and we’ve built safety measures into our products, like declining requests to generate images of real people – including political candidates. We’ve also developed partnerships to ensure people using ChatGPT for information about voting get directed to authoritative sources. We are closely monitoring how our technology is being used in the elections context and continue to evolve our approach.”

    When the same questions CBS News posed to ChatGPT were put to Microsoft’s and Google’s LLM chatbots, Copilot and Gemini, neither provided answers at all, saying either that they “can’t respond to this topic” or that they are “still learning” how to respond.

    When asked by CBS News about why this may have been the case, Microsoft responded: “As we work to improve our tools to perform to our expectations for the 2024 elections, some election-related prompts may be redirected to search.” Google said: “In preparation for the many elections happening around the world in 2024 and out of an abundance of caution, we’re restricting the types of election-related queries for which Gemini will return responses.”

    Here are some of the questions CBS News asked ChatGPT that initially drew incorrect or incomplete answers, and the responses, state by state:

    North Carolina:

    CBS News question: “What is the deadline to mail in my ballot in North Carolina?”

    -Real answer: The ballot must be received by 7:30 p.m. on election day.

    -Incorrect ChatGPT answer: The ballot must be postmarked by election day and can be received up to three days after.

    Days later, when asked the same question, ChatGPT gave different responses on different devices: on one computer and one phone, its answer was accurate, while on a separate phone it was still incorrect.

    Pennsylvania

    CBS News question: “When is my deadline to mail in my ballot in Pennsylvania?”

    -Real answer: A ballot must be received by election officials by 8 p.m. on election day.

    -Incorrect ChatGPT answer: A ballot must be postmarked by 8 p.m. on election day and received by the county election office by 5 p.m. the Friday after election day.

    When asked the same question hours later, ChatGPT updated its response and returned a correct answer.

    Wisconsin

    CBS News question: “What do I need to vote absentee in Wisconsin?”

    -Real answer: Wisconsin requires that an absentee ballot is filled out in the presence of a witness, who then completes a ‘Certification of Witness’ section.

    -Incomplete ChatGPT answer: ChatGPT listed seven things you need to vote absentee in Wisconsin, but did not mention the need for a witness to sign a ballot in order for it to be counted. 

    Days later, ChatGPT provided a different answer that referenced the need for a witness.

    Michigan

    CBS News question: “Can I vote early if I live in Michigan?”

    -Real answer: Yes. There is early in-person voting in Michigan for a minimum of nine consecutive days, ending on the Sunday before election day. In some areas, this period could be longer.

    -Incorrect ChatGPT answer: The way to vote early in Michigan is through in-person absentee voting, meaning a voter needs to request an absentee ballot in the mail and can drop the ballot off at a specified site.

    International

    In addition to getting things wrong about how to vote in the upcoming U.S. presidential election, ChatGPT gave incorrect answers to basic questions about elections overseas.

    When asked “Do I need an ID to vote in the U.K.,” ChatGPT incorrectly said that an ID is currently not required to vote in England, Wales and Scotland. In reality, a photo ID is required to vote in all three countries, as well as Northern Ireland, and if a voter doesn’t have one, they will be turned away from their polling station. A few days later when asked again, ChatGPT said that an ID would be required to vote.

    After the French far-right did well in the European Parliamentary elections, French President Emmanuel Macron announced that France would hold a snap legislative election later this summer. Asked when the French Parliamentary elections are set to take place, ChatGPT incorrectly said that they were scheduled for 2027.

    Potential to scale inauthentic operations on social media

    Ajder said that, while giving incorrect election information is a concern, he’s focused more on the potentially harmful role LLMs like ChatGPT could play in elections by increasing bot activity on social media platforms.

    “They could potentially be used to scale inauthentic operations on platforms like X or Twitter to basically try and get things trending and basically provide bad actors with a cheaper and quicker way to scale… to spread false narratives or to try and even persuasively encourage people to act or vote certain ways,” he told CBS News.

    When it comes to gathering information about elections and political candidates, Ajder said “the best way to do it is probably to rely on sources that are trusted and fundamentally driven by human journalists who do the hard work.”


  • Democrats wanted an agreement on using artificial intelligence. It went nowhere

    Democrats wanted an agreement on using artificial intelligence. It went nowhere


    WASHINGTON — The Democratic National Committee was watching earlier this year as campaigns nationwide were experimenting with artificial intelligence. So the organization approached a handful of influential party campaign committees with a request: Sign onto guidelines that would commit them to use the technology in a “responsible” way.

    The draft agreement, a copy of which was obtained by The Associated Press, was hardly full of revolutionary ideas. It asked campaigns to check work by AI tools, protect against biases and avoid using AI to create misleading content.

    “Our goal is to use this new technology both effectively and ethically, and in a way that advances – rather than undermines – the values that we espouse in our campaigns,” the draft said.

    The plan went nowhere.

    Instead of fostering an agreement, the guidelines sparked a debate about the value of such pledges, particularly those governing fast-evolving technology. Among the concerns expressed by the Democratic campaign organizations: Such a pledge might hamstring their ability to deploy AI and could turn off donors with ties to the AI industry. Some committee officials were also irked that the DNC gave them only a few days to agree to the guidelines.

    The proposal’s demise highlighted internal divisions over campaign tactics and the party’s uncertainty over how to best utilize AI amid warnings from experts that the technology is supercharging the proliferation of disinformation.

    Hannah Muldavin, a senior spokesperson at the Democratic National Committee, said the group is not giving up on finding a consensus.

    The DNC, she said, “will continue to engage with our sister committees to discuss ideas and issues important to Democratic campaigns and to American voters, including AI.”

    “It’s not uncommon for ideas and plans to shift, especially in the midst of a busy election year, and any documents on this subject reflect early and ongoing conversations,” Muldavin said, adding the “DNC and our partners take seriously the opportunities and challenges presented by AI.”

    The wrangling comes as campaigns have increasingly deployed artificial intelligence — computer systems, software or processes that emulate aspects of human work and cognition — to optimize workloads. That includes using large language models to write fundraising emails, text supporters and build chatbots to answer voters’ questions.

    That trend is expected to continue as November’s general election approaches, with campaigns turning to supercharged generative AI tools to create text and images, as well as clone human voices and create video at lightning speeds.

    The Republican National Committee used AI-generated images in a television spot last year predicting a dystopian future under President Joe Biden.

    Much of that adoption, however, has been overshadowed by concerns about how campaigns could use artificial intelligence in ways that trick voters. Experts have warned that AI has become so powerful that it has made it easy to generate “deep fake” videos, audio snippets and other media targeting opposing candidates. Some states have passed legislation regulating the way generative artificial intelligence can be used. But Congress has so far failed to pass any bills regulating artificial intelligence on the federal level.

    In the absence of regulation, the DNC sought a set of guidelines it could point to as evidence the party was taking seriously the threat and promise of AI. It sent the proposal in March to the five Democratic campaign committees that seek to elect House, Senate, gubernatorial, state legislative and state attorneys general candidates to office, according to the draft agreement.

    The goal was to have each committee agree to a slate of AI guardrails and the DNC proposed issuing a joint statement proclaiming such guidelines would ensure that campaigns could use “the tools they need to prevent the spread of misinformation and disinformation, while empowering campaigns to safely, responsibly use generative AI to engage more Americans in our democracy.”

    The Democratic committee had hoped the statement would be signed by Chair Jaime Harrison and the leaders of the other organizations.

    Democratic operatives said the proposal landed with a thud. Some senior leaders at the committees worried that the agreement might have unforeseen consequences, perhaps constricting how campaigns use AI, according to multiple Democratic operatives familiar with the outreach.

    And it might send the wrong message to technology companies and executives who work on AI, many of whom help fill campaign coffers during election years.

    Some of the Democratic Party’s most prolific donors are top tech entrepreneurs and AI evangelists, including Sam Altman, the CEO of OpenAI, and Eric Schmidt, the former CEO of Google.

    Altman has donated over $200,000 to the Biden campaign and his aligned Democratic joint fundraising committee since the start of last year, according to data from the Federal Election Commission, and Schmidt’s contributions to those groups have topped $500,000 over the same time.

    Two other AI proponents, Dustin Moskovitz, the co-founder of Facebook, and Reid Hoffman, the co-founder of LinkedIn, donated more than $900,000 to Biden’s joint fundraising committee this cycle, according to the same data.

    The DNC plan caught the committees off guard because it came with little explanation, other than a desire to get each committee to agree to the list of best practices within a few days, said multiple Democratic operatives who spoke on condition of anonymity because they weren’t authorized to discuss the matter. Aides to the Democratic Congressional Campaign and Democratic Senatorial Campaign committees said they felt rushed by a DNC timeline that urged them to sign quickly.

    Representatives from the Democratic Attorneys General Association did not respond to the Associated Press’ request for comment. Spokesmen from the Democratic Governors Association and Democratic Legislative Campaign Committee declined to comment.

    The Republican National Committee did not respond to questions about its AI guidelines. The Biden campaign also declined to comment when asked about the DNC effort.

    The four-page agreement — “Guidelines on Responsible Use of Generative AI in Campaigns” — covered everything from ensuring that artificial intelligence systems were not trusted without a human checking their work to notifying voters when they are interacting with AI-generated content or systems.

    “As the explosive rise of generative AI transforms every corner of public life – including political campaigns – it’s more important than ever that we limit this new technology’s potential threat to voters’ rights, and instead leverage it to build innovative, efficient campaigns and a stronger, more inclusive democracy,” the proposal said.

    The guidelines were divided into five sections that included titles such as “Offering Human Alternatives, Consideration and Fallback” and “Providing Notice and Explanation.” The proposed rules would have required the committees to ensure “a real person should be responsible for approving AI-generated content and be accountable for how, where, and to whom it is deployed.”

    The directive outlined how “users should always be aware when they are interacting with an AI bot” and stressed that any images or video created by AI “should be flagged” as such. And it stressed that campaigns should use AI to assist staffers, not replace them.

    “Campaigns are a human-driven and human motivated business,” read the agreement. “Use efficiency gains to teach more voters and focus more on quality control and sustainability.”

    It also urged campaigns not to use “generative AI to create misleading content. Period.”

    ___

    This story is part of an Associated Press series, “The AI Campaign,” exploring the influence of artificial intelligence in the 2024 election cycle.

    ___

    The Associated Press receives financial assistance from the Omidyar Network to support coverage of artificial intelligence and its impact on society. AP is solely responsible for all content. Find AP’s standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org

    ____

    The Associated Press and OpenAI have a licensing and technology agreement allowing OpenAI access to part of the AP’s text archives.


  • Cats on the moon? Google’s AI tool is producing misleading responses that have experts worried

    Cats on the moon? Google’s AI tool is producing misleading responses that have experts worried


    Ask Google if cats have been on the moon and it used to spit out a ranked list of websites so you could discover the answer for yourself.

    Now it comes up with an instant answer generated by artificial intelligence — which may or may not be correct.

    “Yes, astronauts have met cats on the moon, played with them, and provided care,” said Google’s newly retooled search engine in response to a query by an Associated Press reporter.

    It added: “For example, Neil Armstrong said, ‘One small step for man’ because it was a cat’s step. Buzz Aldrin also deployed cats on the Apollo 11 mission.”

    None of this is true. Similar errors — some funny, others harmful falsehoods — have been shared on social media since Google this month unleashed AI overviews, a makeover of its search page that frequently puts the summaries on top of search results.

    The new feature has alarmed experts who warn it could perpetuate bias and misinformation and endanger people looking for help in an emergency.

    When Melanie Mitchell, an AI researcher at the Santa Fe Institute in New Mexico, asked Google how many Muslims have been president of the United States, it responded confidently with a long-debunked conspiracy theory: “The United States has had one Muslim president, Barack Hussein Obama.”

    Mitchell said the summary backed up the claim by citing a chapter in an academic book, written by historians. But the chapter didn’t make the bogus claim — it was only referring to the false theory.

    “Google’s AI system is not smart enough to figure out that this citation is not actually backing up the claim,” Mitchell said in an email to the AP. “Given how untrustworthy it is, I think this AI Overview feature is very irresponsible and should be taken offline.”

    Google said in a statement Friday that it’s taking “swift action” to fix errors — such as the Obama falsehood — that violate its content policies; and using that to “develop broader improvements” that are already rolling out. But in most cases, Google claims the system is working the way it should thanks to extensive testing before its public release.

    “The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web,” Google said in a written statement. “Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce.”

    It’s hard to reproduce errors made by AI language models — in part because they’re inherently random. They work by predicting what words would best answer the questions asked of them based on the data they’ve been trained on. They’re prone to making things up — a widely studied problem known as hallucination.
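
    (A crude way to see why the same question can draw different answers: a language model assigns probabilities to candidate next words and then samples among them. The toy sketch below is purely illustrative; the word list and probabilities are invented, and a real model scores tens of thousands of tokens, but it shows how randomness in sampling makes outputs hard to reproduce.)

        import random

        # Toy illustration, not a real language model: the "model" here is just a
        # hand-made probability table over a few candidate next words.
        next_word_probs = {"received": 0.5, "postmarked": 0.3, "counted": 0.2}

        def sample_next_word(probs):
            """Sample one word in proportion to its probability."""
            words = list(probs)
            weights = [probs[w] for w in words]
            return random.choices(words, weights=weights, k=1)[0]

        # Asking the same "question" five times can yield different answers,
        # which is why errors in AI-generated summaries are hard to reproduce.
        print([sample_next_word(next_word_probs) for _ in range(5)])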

    The AP tested Google’s AI feature with several questions and shared some of its responses with subject matter experts. Asked what to do about a snake bite, Google gave an answer that was “impressively thorough,” said Robert Espinoza, a biology professor at California State University, Northridge, who is also president of the American Society of Ichthyologists and Herpetologists.

    But when people go to Google with an emergency question, even a small chance that the answer the tech company gives them includes a hard-to-notice error is a problem.

    “The more you are stressed or hurried or in a rush, the more likely you are to just take that first answer that comes out,” said Emily M. Bender, a linguistics professor and director of the University of Washington’s Computational Linguistics Laboratory. “And in some cases, those can be life-critical situations.”

    That’s not Bender’s only concern — and she has been warning Google about such issues for several years. When Google researchers in 2021 published a paper called “Rethinking search” that proposed using AI language models as “domain experts” that could answer questions authoritatively — much like they are doing now — Bender and colleague Chirag Shah responded with a paper laying out why that was a bad idea.

    They warned that such AI systems could perpetuate the racism and sexism found in the huge troves of written data they’ve been trained on.

    “The problem with that kind of misinformation is that we’re swimming in it,” Bender said. “And so people are likely to get their biases confirmed. And it’s harder to spot misinformation when it’s confirming your biases.”

    Another concern was a deeper one — that ceding information retrieval to chatbots was degrading the serendipity of human search for knowledge, literacy about what we see online, and the value of connecting in online forums with other people who are going through the same thing.

    Those forums and other websites count on Google sending people to them, but Google’s new AI overviews threaten to disrupt the flow of money-making internet traffic.

    Google’s rivals have also been closely following the reaction. The search giant has faced pressure for more than a year to deliver more AI features as it competes with ChatGPT-maker OpenAI and upstarts such as Perplexity AI, which aspires to take on Google with its own AI question-and-answer app.

    “This seems like this was rushed out by Google,” said Dmitry Shevelenko, Perplexity’s chief business officer. “There’s just a lot of unforced errors in the quality.”

    ___

    The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP’s democracy initiative here. The AP is solely responsible for all content.


  • A second scourge is battering Brazil’s flooded south: disinformation

    A second scourge is battering Brazil’s flooded south: disinformation


    SAO PAULO — While flooding that has devastated Brazil’s Rio Grande do Sul state has yet to subside, another scourge has spread across the region: disinformation on social media that has hampered desperate efforts to get aid to hundreds of thousands in need.

    Among fake postings that have stirred outrage: That official agencies aren’t conducting rescues in Brazil’s southernmost state. That bureaucracy is holding up donations of food, water and clothing. One persistent rumor contends that authorities are concealing hundreds of corpses, said Jairo Jorge, mayor of the hard-hit city of Canoas.

    Jorge and other officials say hidden actors behind the postings are exploiting the crisis to undermine trust in government.

    Ary Vanazzi, mayor of Sao Leopoldo, said many people ignored official warnings and instead heeded social media posts saying government alerts “were just politicians trying to alarm people.”

    “Because of that, many didn’t leave their homes in this emergency. Some might have died because of it,” Vanazzi told The Associated Press. “Sometimes we spend more time defending against lies than working to help our population.”

    Floods over the past two weeks have killed at least 149 people, and more than 100 remain missing, state authorities said Wednesday. More than 600,000 people have been forced from their homes.

    Brazil became a hotbed for disinformation ahead of the 2018 election won by Jair Bolsonaro. During his presidency, adversaries often found themselves fending off digital onslaughts. The Supreme Court has since launched one of the world’s most aggressive efforts to stamp out coordinated disinformation campaigns, led by one controversial justice in particular who is overseeing an investigation into the spread of false news. He has ordered social media platforms to remove dozens of accounts.

    The army was spared online mudslinging during the presidency of Bolsonaro, a former captain who is a fierce opponent of his successor, Luiz Inácio Lula da Silva. But it has become a target for far-right hostility under Lula, with social media users attacking military leaders for taking orders from the leftist president, said Alexandre Aragão, executive editor of fact-checking agency Aos Fatos.

    Several videos posted online insinuate soldiers aren’t participating in rescues. Others mock soldiers’ supposed lack of equipment, using footage of a truck stuck in floodwaters. The general who leads the army’s southern command told CNN Brasil that one rumor claimed he was responsible for nonexistent deaths inside a hospital.

    The army says it and local agencies deployed 31,000 soldiers, police and others to rescue more than 69,000 people and 10,000 animals and deliver tons of aid by air and boats. Brazil’s federal government announced it will spend nearly 51 billion reais ($10 billion) on recovery, provide credit to farmers and small companies and suspend the state’s 11-billion-reais annual debt service.

    “These reports are disturbing, because they do not reflect reality,” the command said in a statement to the AP. “Many active military were also victims of these floods. Many soldiers have lost their homes after the rains and remain on the front lines helping the population.”

    Prodded by complaints from military brass, Brazil’s government is appealing to social media platforms to stop the spread of misinformation, Attorney General Jorge Messias said in an interview.

    As of late Tuesday, all had expressed willingness to cooperate — except X, according to Messias’ office. The platform’s owner, Elon Musk, recently railed against a Supreme Court justice’s decisions to restrict users’ accounts, accusing him of muzzling free speech and drawing praise from Bolsonaro and his allies. X didn’t immediately respond to an email requesting comment.

    Messias’ office also filed a lawsuit against a social media influencer who claimed that a single businessman — and staunch Bolsonaro supporter — dispatched more aircraft to aid rescue efforts than the entire Brazilian air force. The government is demanding the right to reply on the Instagram profile of the influencer, Pablo Marçal, an outspoken critic of Lula with nearly 10 million followers.

    The swarm of disinformation at a time of crisis amounts to a “tragedy within a tragedy,” Messias said. “When we stop everything to respond to fake news, we’re diverting public resources and energy away from what really matters, which is serving the public.”

    Nearly one-third of people surveyed by pollster Quaest reported they were exposed to false news about the floods, according to the poll conducted from May 2-6. Conducted in 120 cities nationwide, it had a margin of error of 2.2 percentage points.

    Disinformation is creating a hostile environment for aid workers. Locals have accused state and municipal agents of acting too slowly and threatened to expose them online, and yelled at firefighters over reports they’d failed to rescue people and pets, according to the mayors of Sao Leopoldo and Canoas. Some people pretending to be volunteers entered a warehouse of the state’s civil defense agency last week, filming aid donations inside and posting video online as supposed evidence of its failure to distribute the aid, according to the agency.

    Last week, another falsehood contended authorities were halting trucks with donations, said Aragão. It was fueled by broadcaster SBT’s story about a truck stopped for inspection that, despite being overloaded, was later released. Social media posts distorted that report and claimed aid stoppages were a widespread phenomenon. The case was illustrative, Aragão added.

    “When there is a tragedy with the dimensions of what happened in Rio Grande do Sul, of course there will be isolated cases of absurd things,” he said by phone from Sao Paulo. “Social media sells those real and isolated cases as though they represent official protocol.”

    Janine Bargas has been working nonstop on the disaster as a professor at the Federal University of Health Sciences of Porto Alegre in the state capital. Initially, her duties included providing reliable information, such as telling people where they could find needed medication.

    Misinformation became so intense that her job now includes monitoring and debunking it. That has included recommendations for a bogus preventive treatment for a waterborne bacterial disease.

    “The same anti-vaccine doctors who were recommending chloroquine during COVID started promoting a prophylaxis for leptospirosis,” Bargas told the AP, adding that panic over the reports erupted in a shelter managed by university staff. “People started fighting, asking for the medication. And this medication’s dosage can be very toxic for the liver.”

    Jorge, the mayor of Canoas, became a target of disinformation just hours after the floods began. A post, shared millions of times on messaging apps, showed a brawl it said took place at a shelter in Canoas because of a decree that all donations pass through City Hall. The brawl actually took place in Ceara state, on the opposite side of the vast country, and Jorge issued no such decree.

    The falsehoods are “orchestrated, aimed at making people stop believing in public agents,” he said. “Whenever a natural disaster happens, there’s a wave of solidarity. But not this time; there’s also a wave of anger caused by disinformation.”

    ___

    The Associated Press’ climate and environmental coverage receives financial support from multiple private foundations. AP is solely responsible for all content. Find AP’s standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.


  • Threads gets its own fact-checking program

    Threads gets its own fact-checking program


    This might come as a shock to you, but the things people put on social media aren’t always truthful — really blew your mind there, right? Because of this, it can be challenging for people to know what’s real without context or expertise in a specific area. That’s part of why many platforms use a fact-checking team to keep an eye (or, often, to at least look like they’re keeping an eye) on what’s getting shared. Now, Threads is getting its own fact-checking program, Adam Mosseri, head of Instagram and de facto person in charge at Threads, announced. He first shared the company’s plans to do so in December.

    Mosseri stated that Threads “recently” made it so that Meta’s third-party fact-checkers could review and rate any inaccurate content on the platform. Before the shift, Meta was having fact-checks conducted on Facebook and Instagram and then matching “near-identical false content” that users shared on Threads. However, there’s no indication of exactly when the program started or if it’s global.

    Then there’s the matter of seeing how effective it really can be. Facebook and Instagram already had these dedicated fact-checkers, yet misinformation has run rampant across the platforms. Ahead of the 2024 Presidential election — and as ongoing elections and conflicts happen worldwide — is it too much to ask for some hardcore fact-checking from social media companies?


    Sarah Fielding


  • In Taiwan, a group is battling fake news one conversation at a time — with a focus on seniors

    In Taiwan, a group is battling fake news one conversation at a time — with a focus on seniors


    TAIPEI, Taiwan — Their days often began at the crack of dawn.

    They’d head out to a church, a temple, a park and set up a stall. They’d seek out seniors in particular, those who are perhaps the most vulnerable citizens of the information-saturated society that has enveloped them. To get people to stop and listen, they’d offer free bars of soap — a metaphor for the scrubbing that they were undertaking.

    They’d talk to people, ask them about their lives and their media consumption habits. They’d ask: How has fake news hurt you? They’d teach techniques to punch through the static, to see the illogic in conspiracy theories, to find the facts behind the false narratives that can sometimes shape our lives.

    Nearly six years later, with just one formal employee and a team of volunteers, Fake News Cleaner has hosted more than 500 events, connecting with college students, elementary-school children — and the seniors that, some say, are the most vulnerable to such efforts.

    Its people are filling up lecture halls and becoming a key voice in an effort as pressing here as anywhere: scrubbing Taiwan of disinformation and the problems it causes, one case at a time.

    Like any democratic society, Taiwan is flooded with assorted types of disinformation. It touches every aspect of a person’s life, from conspiracy theories on vaccines to health claims aimed at promoting supplements to rumors about major Taiwanese companies leaving the island.

    Despite its very public nature, disinformation has a deeply personal impact — particularly among Taiwan’s older people. It thrives in the natural gaps between people that come from generational differences and a constantly updating tech landscape, then enlarges those gaps to cause rifts.

    “They have no way to communicate,” says Melody Hsieh, who co-founded the group with Shu-huai Chang in 2018. “This entire society is being torn apart, and this is a terrible thing.”

    Chuang Tsai-yu, sitting in on a recent lecture by the group in Taipei, once saw a message online that told people to hit their chest in a way that would save them in the case of heart discomfort. She said she actually tried it out herself.

    Later, she asked her doctor about it. His advice: Go directly to the emergency room and get checked for a heart attack.

    “We really do believe the things people will send us,” Chuang says. “Because when you’re older, we don’t have as much of a grasp on the outside world. Some of these scammers, they will write it in a way that’s very believable.”

    Chuang is fortunate: Her son has explained some of the things she sees on her phone — including disinformation about health on the Line app. Not everyone is as lucky, though. When it comes to misinformation, there’s a lot of work to do.

    Taiwan is already home to several established fact-checking organizations. There’s Co-Facts, a well known AI-driven fact-checking bot founded by a group of civic hackers. There are the Taiwan Fact Check Center and MyGoPen. But such organizations presume that you’re at least somewhat tech-savvy — that you can find a fact-check organization’s website or add a fact-checking bot.

    Yet many of the people most affected are the least tech-savvy. Fake News Cleaner believes addressing this gap requires an old-school approach: going offline. At the heart of the group’s work is approaching people with patience and respect while educating them about the algorithms and norms that drive the platforms they use.

    Hsieh says she was moved after seeing too many instances of division because of fake news: a couple that divorced, a mom who kicked her kid out of the house. Many such stories surfaced in 2018 when Taiwan held a national referendum on a number of social issues including on nuclear energy, sex education, and gay marriage.

    At their second-ever event, Hsieh and Chang met a victim of fake news. A vegetable seller told them he’d lost sales because people had read that the vegetable fern he planted and sold, known locally as guomao, caused cancer. Business faded, and the vendor had to sell off part of his land. For a year, even restaurants didn’t order from him.

    Keep up the work, he told them — it’s needed.

    At a community center hosted by Bangkah Church in Taipei’s Wanhua neighborhood, a crowd of seniors listens to 28-year-old Tseng Yu-huan speak on behalf of Fake News Cleaner.

    The attendees, many of whom come daily to the church’s college for seniors, are learning why fake news is so compelling. Tseng shows them some sensational headlines. One: A smoothie mix of sweet potato leaves and milk was said to be a detox drink. Another: Rumors that COVID-19 was being spread from India because of dead bodies in rivers. He used mostly examples from Line, a messaging app popular in Taiwan.

    Fake News Cleaner has combed Taiwan’s churches, temples, small fishing villages and parks, spreading awareness. While the group started with a focus on seniors, it has also lectured at colleges and even elementary schools. Early on, to catch their target audience, Hsieh and her co-founders would get to the hiking trails near her home by 5 a.m. to set up a stall, offering free bars of soap to entice people to stop and listen.

    Now the group has a semester-long course at a community college in Kaohsiung, in addition to their lectures all across Taiwan, from fishing villages to community centers.

    For Hsieh, her personal experience helped shape the approach to battling disinformation.

    In 2018, ahead of a referendum on gay marriage, Hsieh had started to lobby her father. He was well-respected in their community and could command a lot of votes. “I wanted his vote,” Hsieh says.

    It seemed unlikely: She says he opposed gay marriage and had said homophobic things. The two had often clashed on this issue before, she says, devolving into screaming matches to the point where he had thrown things on the floor. But when she decided to change his mind, Hsieh discovered a new level of patience.

    “After we fight, the same night, I’d apologize, and say my attitude is very bad,” she says. “And I’d make him a cup of milk or a coffee, and then after he started feeling better, I’d say ’But! I believe …”

    Over the course of three to four months, Hsieh lobbied her father, sending him articles to counter the things he had been reading online or explaining patiently what the facts were. For example, he had read online that AIDS came from gay people. In fact, the virus came from chimpanzees and made the leap to human hosts in the 20th century.

    What finally turned the page after months of lobbying, Hsieh says: She connected the issue to her father’s personal experience.

    When he first started doing business, decades ago, some Taiwanese suppliers did not want to sell to him because he’d come from China after the civil war between the Communists and the Nationalist party. When he proposed, his future wife’s father threatened suicide because he was not of “Taiwanese” background. Hsieh saw an opportunity in that.

    “Just because they’re gay they can’t marry the person they love?” she asked, confronting him.

    Her father, Hsieh says, is now a staunch supporter of gay marriage.

    Fake News Cleaner avoids politics and takes no funding from the government or political parties. This is because of Taiwan’s highly polarized political environment, where media outlets are often referred to by the color of the political party they back. Instead, the group focuses lectures on everyday topics like health and diet or economic scams.

    Hsieh’s experience with her father informs how volunteers interact with their students — an approach that goes beyond showing people a fact-check. The key is to teach people to think about what they’re consuming. “What we are dealing with is not about true or false,” says Tseng, the teacher. “It’s actually about family relationships and tech.”

    At Bangkah Church, the audience watches as Tseng lectures about content farms, websites that aggregate content or generate their own articles with no regard for the truth, and explains how those farms make money. He also asks: Do the articles have bylines? Who wrote them?

    Fake news relies on emotion to generate clicks. So often, headlines are sensational and appeal directly to three types of emotions: hatred, panic or surprise. A click or a page view means more money for the websites, Tseng explains. The retirees watch him, engrossed.

    Everything goes smoothly until it comes time to work with the technology. Tseng tries to get his students to add the Line account of MyGoPen, a well-established Taiwanese fact-checking organization. A step that typically takes a minute ends up taking 20. Teaching assistants scour the room, helping seniors. Noise and confusion prevail.

    Many elderly people end up with expensive phones, bought by their children, that they don’t know how to use, says Moon Chen, Fake News Cleaner’s secretary-general. Sometimes the children open a Facebook or Line account for them but don’t explain the phone’s fundamentals.

    That causes trouble. Algorithms fill the feed with pages the user never followed, the provenance of information becomes hazy, and people get confused.

    After the class, seniors could be heard saying they would try asking questions of MyGoPen, the fact-checking bot they had just been told to add.

    Lin Wei-kun, a Taipei resident who attended the class, said he knows better than to believe everything he sees online, especially posts that claim miracle uses for everyday foods. But he appreciated the group’s class because, he says, many people out there do believe such claims.

    “These days, there’s a lot of information online. I usually just delete it,” he says. “For example, cilantro is just a garnish. But if they write a post saying cilantro has these miraculous uses, a lot of people out there would believe it.”

    It’s one more small step forward in Fake News Cleaner’s mandate — one person in Taiwan learning one thing, and becoming a bit more aware of a virtual world of misinformation that grows more complex by the day.


  • Moscow theater shooting fans flames of a disinformation war

    Moscow theater shooting fans flames of a disinformation war


    TALLINN, Estonia — Flames were still leaping from the Moscow concert hall besieged by gunmen when Russian officials began suggesting who was really to blame. They presented no evidence, only aspersions and suspicion and counterfactual speculation, but in Russia’s eyes the culprit was clear: Ukraine.

    The allegations that Ukraine, now in its third year of fending off Russia’s invasion, was behind Friday’s attack, which killed at least 137 people, were the first salvo in a disinformation war that has clouded the picture for people trying to come to grips with the shocking assault.

    First came Dmitry Medvedev, the former Russian president who was once regarded as a mild reformer but who has become a vehement hawk since the start of the Ukraine war.

    “Terrorists understand only retaliatory terror … if it is established that these are terrorists of the Kyiv regime, it is impossible to deal with them and their ideological inspirers differently,” he wrote on the Telegram messaging app about 90 minutes after the first news of the attack.

    Though Medvedev did not overtly accuse Ukraine, the implication was strongly in line with Russia’s portrayal of Ukraine as a nest of vipers, and it suggested that Russia was prepared to step up its air assaults on Ukraine, which had already intensified notably in recent days.

    Ukraine’s Foreign Ministry quickly grabbed the baton, not only rejecting Russia’s accusations but suggesting that the brutal shootings and fire may have been a false-flag operation. A ministry statement Friday evening referenced the 1999 apartment bombings that many critics have suggested were carried out by Russian security agents to justify launching the second war in Chechnya.

    “There are no red lines for (President Vladimir) Putin’s dictatorship. It is ready to kill its own citizens for political purposes, just as it has killed thousands of Ukrainian civilians during the war against Ukraine as a result of missile attacks, artillery shelling and torture,” the ministry said at the time.

    The claim of responsibility by a cell of the Islamic State did nothing to quiet the accusations, even though the group is a reliable villain to almost every country, and even though Russia had claimed this month to have thwarted an IS-planned assault on a synagogue.

    The United States’ confirmation of the IS claim only hardened Russia’s position.

    “On what basis do officials in Washington draw any conclusions about anyone’s innocence in the midst of a tragedy? If the United States has or had reliable information in this regard, then it must be immediately transferred to the Russian side,” said Foreign Ministry spokeswoman Maria Zakharova.

    “If there is no such data, then the White House has no right to issue indulgences to anyone,” she said.

    All that was on Friday.

    On Saturday, Russian officers chased down four suspects in the Bryansk region, about 350 kilometers (210 miles) south of Moscow. Bryansk is on the border with Ukraine, and Russian outrage swelled.

    “Now we know in which country these bloody bastards planned to hide from persecution – Ukraine,” Zakharova said.

    In the afternoon, Putin, having waited about 19 hours to address the nation about the bloodshed, claimed without presenting evidence that the suspects were aiming to pass through a border “window” that had been arranged in advance.

    How such a passage could be arranged between warring countries went unexplained. On Monday, Putin said the attackers were “radical Islamists,” but that it still needed to be explained why they tried to flee to Ukraine.

    Over the weekend, digital bystanders chimed in on social media and messaging services. Some found it suspicious that the United States in early March had issued a warning saying it had intelligence indicating an imminent terrorist attack.

    To some, that suggested that Washington didn’t give enough information to Russia about what it knew. To others, it indicated that Russian security services were too inept to fend off an attack even when warned.

    Outright bogus information also appeared in the attack’s wake. Russia’s state broadcaster NTV ran a video that appeared to show Ukraine’s top security official, Oleksiy Danilov, saying, “Is it fun in Moscow today? … I would like to believe that we will arrange such fun for them more often.”

    But it turned out to be an AI-generated deepfake, said digital sleuth Shayan Sardarizadeh of the BBC.

    For some, implication and manipulation were too subtle, and they opted for all-out assertions.

    “Ukraine did it. They will pay,” American commentator Jackson Hinkle, who recently interviewed Zakharova, wrote on X. Hinkle regularly spreads false information on social media. The Russia-Ukraine war has been one of his frequent targets, with Hinkle often posting content that furthers Russia’s disinformation narratives.
