ReportWire

Tag: Misinformation

  • 3/24/2024: The Right to be Wrong; AMLO; Law of the Sea

    First, a report on the spread of misinformation on social media. Then, Mexican President Andrés Manuel López Obrador: The 60 Minutes Interview. And, U.S. fails to ratify treaty for ocean mining.


  • Misinformation spreads online as some in Congress fight what they see as censorship

    Misinformation is spreading on social media as some fight to stop what they call censorship. The Supreme Court is now grappling with how the First Amendment applies to the online world.


  • Supreme Court takes up dispute over how far government can go to combat controversial social media posts on topics like COVID-19 and election security


    The justices are hearing arguments in a lawsuit filed by Louisiana, Missouri and other parties accusing administration officials of leaning on the social media platforms to unconstitutionally squelch conservative points of view. Lower courts have sided with the states, but the Supreme Court blocked those rulings while it considers the issue.

    The high court is in the midst of a term heavy with social media issues. On Friday, the court laid out standards for when public officials can block their social media followers. Less than a month ago, the court heard arguments over Republican-passed laws in Florida and Texas that prohibit large social media companies from taking down posts because of the views they express.

    The cases over state laws and the one being argued Monday are variations on the same theme, complaints that the platforms are censoring conservative viewpoints.

    The states argue that White House communications staffers, the surgeon general, the FBI and the U.S. cybersecurity agency are among those who coerced changes in online content on Facebook, X (formerly Twitter) and other media platforms.

    “It’s a very, very threatening thing when the federal government uses the power and authority of the government to block people from exercising their freedom of speech,” Louisiana Attorney General Liz Murrill said in a video her office posted online.

    The administration responds that none of the actions the states complain about come close to problematic coercion. The states “still have not identified any instance in which any government official sought to coerce a platform’s editorial decisions with a threat of adverse government action,” wrote Solicitor General Elizabeth Prelogar, the administration’s top Supreme Court lawyer. Prelogar wrote that states also can’t “point to any evidence that the government ever imposed any sanction when the platforms declined to moderate content the government had flagged — as routinely occurred.”

    The companies themselves are not involved in the case.

    Free speech advocates say the court should use the case to draw an appropriate line between the government’s acceptable use of the bully pulpit and coercive threats to free speech.

    “The government has no authority to threaten platforms into censoring protected speech, but it must have the ability to participate in public discourse so that it can effectively govern and inform the public of its views,” Alex Abdo, litigation director of the Knight First Amendment Institute at Columbia University, said in a statement.

    A panel of three judges on the New Orleans-based 5th U.S. Circuit Court of Appeals had ruled earlier that the administration had probably brought unconstitutional pressure on the media platforms. The appellate panel said officials cannot attempt to “coerce or significantly encourage” changes in online content. The panel had previously narrowed a more sweeping order from a federal judge, who wanted to include even more government officials and prohibit mere encouragement of content changes.

    A divided Supreme Court put the 5th Circuit ruling on hold in October, when it agreed to take up the case.

    Justices Samuel Alito, Neil Gorsuch and Clarence Thomas would have rejected the emergency appeal from the Biden administration.

    Alito wrote in dissent in October: “At this time in the history of our country, what the Court has done, I fear, will be seen by some as giving the Government a green light to use heavy-handed tactics to skew the presentation of views on the medium that increasingly dominates the dissemination of news. That is most unfortunate.”

    A decision in Murthy v. Missouri, 23-411, is expected by early summer.


    Mark Sherman, The Associated Press


  • Bill that could make TikTok unavailable in the US advances quickly in the House


    WASHINGTON — A bill that could lead to the popular video-sharing app TikTok being unavailable in the United States is quickly gaining traction in the House as lawmakers voice concerns about the potential for the platform to surveil and manipulate Americans.

    The measure gained the support of House Speaker Mike Johnson and could soon come up for a full vote in the House. The bill advanced out of committee Thursday in a unanimous bipartisan vote — 50-0.

    The White House has provided technical support in the drafting of the bill, though White House press secretary Karine Jean-Pierre said the TikTok legislation “still needs some work” to get to a place where President Joe Biden would endorse it.

    The bill takes a two-pronged approach. First, it requires ByteDance Ltd., which is based in Beijing, to divest TikTok and other applications it controls within 180 days of enactment of the bill or those applications will be prohibited in the United States. Second, it creates a narrow process to let the executive branch prohibit access to an app owned by a foreign adversary if it poses a threat to national security.

    “It’s an important, bipartisan measure to take on China, our largest geopolitical foe, which is actively undermining our economy and security,” Johnson said Thursday.

    Some lawmakers and critics of TikTok have argued the Chinese government could force the company to share data on American users. TikTok says it has never done that and wouldn’t do so if asked. The U.S. government also hasn’t provided evidence of that happening.

    Critics also claim the app could be used to spread misinformation beneficial to Beijing.

    Former President Donald Trump attempted to ban TikTok through executive order, but the courts blocked the action after TikTok sued, arguing such actions would violate free speech and due process rights.

    TikTok raised similar concerns about the legislation gaining momentum in the House.

    “This bill is an outright ban of TikTok, no matter how much the authors try to disguise it. This legislation will trample the First Amendment rights of 170 million Americans and deprive 5 million small businesses of a platform they rely on to grow and create jobs,” the company said in a prepared statement.

    The bill’s author, Rep. Mike Gallagher, the Republican chairman of a special House committee focused on China, rejected TikTok’s assertion of a ban. Rather, he said it’s an effort to force a change in TikTok’s ownership. He also took issue with TikTok urging some users to call their representatives and urge them to vote no on the bill.

    The notification urged TikTok users to “speak up now — before your government strips 170 million Americans of their Constitutional right to free expression.” The notification also warned that the “ban” of TikTok would damage millions of businesses and destroy the lives of countless creators around the country.

    TikTok users responded by flooding the offices of lawmakers with telephone calls. Some offices even shut off their phones because of the onslaught. A congressional aide not authorized to speak on the matter publicly said that lawmakers on the committee voting on the bill Thursday as well as others were inundated with calls.

    “Today, it’s about our bill and it’s about intimidating members considering that bill, but tomorrow it could be misinformation or lies about an election, about a war, about any number of things,” Gallagher said. “This is why we can’t take a chance on having a dominant news platform in America controlled or owned by a company that is beholden to the Chinese Communist Party, our foremost adversary.”

    The bill comes about one year after TikTok’s CEO was grilled for hours by skeptical lawmakers on the House Energy and Commerce Committee concerned about data security and the distribution of harmful content. That same committee met Thursday to debate and vote on the bill.

    Rep. Cathy McMorris Rodgers, the committee’s Republican chair, said TikTok’s access to so many Americans makes it a valuable propaganda tool for the Chinese government to exploit. She also noted that its parent company ByteDance is currently under investigation by the U.S. Department of Justice for surveilling American journalists.

    “Through this access, the app is able to collect nearly every data point imaginable, from people’s location, to what they search on their devices, who they are connecting with, and other forms of sensitive information,” Rodgers said.

    To assuage concerns from lawmakers, TikTok has promised to wall off U.S. user data from its parent company through a separate entity run independently from ByteDance and monitored by outside observers. TikTok says new user data is currently being stored on servers maintained by the software company Oracle.

    The American Civil Liberties Union and other free speech advocacy groups urged lawmakers to reject the TikTok bill, saying in a letter to the Energy and Commerce Committee’s leadership that “passing this legislation would trample on the constitutional right to freedom of speech of millions of people in the United States.”

    Biden’s reelection campaign has opened a TikTok account as a way to boost its appeal with young voters, even as his administration continued to raise security concerns about whether the popular social media app might be sharing user data with China’s communist government.

    Jean-Pierre said the White House welcomes lawmakers’ efforts on the TikTok legislation, but lawmakers need to continue work on it.

    “Once it gets to a place where we think ... it’s on legal standing and it’s in a place where it can get out of Congress, then the president would sign it,” she told reporters on Wednesday during the daily White House briefing.

    She also defended the White House’s efforts to limit the dangers of TikTok, even as the president engages with influencers on the social-media platform and his campaign hosts a TikTok account.

    “We are going to try to meet the American people where they are,” Jean-Pierre said. “We are trying to reach everyone. The president is the president for all Americans ... it doesn’t mean that we’re not going to try to figure out how to protect our national security.”

    ___

    Associated Press staff writer Seung Min Kim contributed to this report and staff writer Mae Anderson contributed from Brooklyn, New York. Hadero reported from Jersey City, New Jersey.


  • Tech companies sign accord to combat AI-generated election trickery


    Major technology companies signed a pact Friday to voluntarily adopt “reasonable precautions” to prevent artificial intelligence tools from being used to disrupt democratic elections around the world.

    Tech executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok gathered at the Munich Security Conference to announce a new voluntary framework for how they will respond to AI-generated deepfakes that deliberately trick voters. Twelve other companies — including Elon Musk’s X — are also signing on to the accord.

    “Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own,” said Nick Clegg, president of global affairs for Meta, the parent company of Facebook and Instagram, in an interview ahead of the summit.

    The accord is largely symbolic, but targets increasingly realistic AI-generated images, audio and video “that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can lawfully vote.”

    The companies aren’t committing to ban or remove deepfakes. Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. It notes the companies will share best practices with each other and provide “swift and proportionate responses” when that content starts to spread.

    The vagueness of the commitments and lack of any binding requirements likely helped win over a diverse swath of companies, but may disappoint pro-democracy activists and watchdogs looking for stronger assurances.

    “The language isn’t quite as strong as one might have expected,” said Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center. “I think we should give credit where credit is due, and acknowledge that the companies do have a vested interest in their tools not being used to undermine free and fair elections. That said, it is voluntary, and we’ll be keeping an eye on whether they follow through.”

    Clegg said each company “quite rightly has its own set of content policies.”

    “This is not attempting to try to impose a straitjacket on everybody,” he said. “And in any event, no one in the industry thinks that you can deal with a whole new technological paradigm by sweeping things under the rug and trying to play whack-a-mole and finding everything that you think may mislead someone.”

    Tech executives were also joined by several European and U.S. political leaders at Friday’s announcement. European Commission Vice President Vera Jourova said while such an agreement can’t be comprehensive, “it contains very impactful and positive elements.” She also urged fellow politicians to take responsibility to not use AI tools deceptively.

    She stressed the seriousness of the issue, saying the “combination of AI serving the purposes of disinformation and disinformation campaigns might be the end of democracy, not only in the EU member states.”

    The agreement at the German city’s annual security meeting comes as more than 50 countries are due to hold national elections in 2024. Some have already done so, including Bangladesh, Taiwan, Pakistan, and most recently Indonesia.

    Attempts at AI-generated election interference have already begun, such as when AI robocalls that mimicked U.S. President Joe Biden’s voice tried to discourage people from voting in New Hampshire’s primary election last month.

    Just days before Slovakia’s elections in November, AI-generated audio recordings impersonated a liberal candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false, but they were already widely shared as real across social media.

    Politicians and campaign committees also have experimented with the technology, from using AI chatbots to communicate with voters to adding AI-generated images to ads.

    Friday’s accord said in responding to AI-generated deepfakes, platforms “will pay attention to context and in particular to safeguarding educational, documentary, artistic, satirical, and political expression.”

    It said the companies will focus on transparency to users about their policies on deceptive AI election content and work to educate the public about how they can avoid falling for AI fakes.

    Many of the companies have previously said they’re putting safeguards on their own generative AI tools that can manipulate images and sound, while also working to identify and label AI-generated content so that social media users know if what they’re seeing is real. But most of those proposed solutions haven’t yet rolled out and the companies have faced pressure from regulators and others to do more.

    That pressure is heightened in the U.S., where Congress has yet to pass laws regulating AI in politics, leaving AI companies to largely govern themselves. In the absence of federal legislation, many states are considering ways to put guardrails around the use of AI, in elections and other applications.

    The Federal Communications Commission recently confirmed AI-generated audio clips in robocalls are against the law, but that doesn’t cover audio deepfakes when they circulate on social media or in campaign advertisements.

    Misinformation experts warn that while AI deepfakes are especially worrisome for their potential to fly under the radar and influence voters this year, cheaper and simpler forms of misinformation remain a major threat. The accord noted this too, acknowledging that “traditional manipulations (‘cheapfakes’) can be used for similar purposes.”

    Many social media companies already have policies in place to deter deceptive posts about electoral processes — AI-generated or not. For example, Meta says it removes misinformation about “the dates, locations, times, and methods for voting, voter registration, or census participation” as well as other false posts meant to interfere with someone’s civic participation.

    Jeff Allen, co-founder of the Integrity Institute and a former data scientist at Facebook, said the accord seems like a “positive step” but he’d still like to see social media companies taking other basic actions to combat misinformation, such as building content recommendation systems that don’t prioritize engagement above all else.

    Lisa Gilbert, executive vice president of the advocacy group Public Citizen, argued Friday that the accord is “not enough” and AI companies should “hold back technology” such as hyper-realistic text-to-video generators “until there are substantial and adequate safeguards in place to help us avert many potential problems.”

    In addition to the major platforms that helped broker Friday’s agreement, other signatories include chatbot developers Anthropic and Inflection AI; voice-clone startup ElevenLabs; chip designer Arm Holdings; security companies McAfee and TrendMicro; and Stability AI, known for making the image-generator Stable Diffusion.

    Notably absent from the accord is another popular AI image-generator, Midjourney. The San Francisco-based startup didn’t immediately return a request for comment Friday.

    The inclusion of X — not mentioned in an earlier announcement about the pending accord — was one of the biggest surprises of Friday’s agreement. Musk sharply curtailed content-moderation teams after taking over the former Twitter and has described himself as a “free speech absolutist.”

    But in a statement Friday, X CEO Linda Yaccarino said “every citizen and company has a responsibility to safeguard free and fair elections.”

    “X is dedicated to playing its part, collaborating with peers to combat AI threats while also protecting free speech and maximizing transparency,” she said.

    __

    The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP’s democracy initiative here. The AP is solely responsible for all content.


  • How Taiwan beat back disinformation and preserved the integrity of its election


    WASHINGTON — The rumors about vote fraud started swirling as the ballots in Taiwan’s closely watched presidential election were tallied on Jan. 13. There were baseless claims that people had fabricated votes and that officials had miscounted and skewed the results.

    In a widely shared video, a woman recording votes mistakenly enters one in the column for the wrong candidate. The message was clear: The election could not be trusted. The results were faked.

    It could have been Taiwan’s Jan. 6 moment. But it wasn’t.

    Worries that China would use disinformation to undermine the integrity of Taiwan’s vote dogged the recent election, a key moment in the young democracy’s development that highlighted tensions with its much larger neighbor.

    In repelling disinformation, Chinese and domestic, Taiwan offers an example to other democracies holding elections this year.

    This year, more than 50 countries that are home to half the planet’s population are due to hold national elections. From India to Mexico, the U.K. to Russia, the outcomes of the elections will test the strengths of democracies and countries with authoritarian leaders.

    In Taiwan, the response to disinformation was swift. Fact-checking groups debunked the rumors, while the Central Election Commission held a news conference to push back on claims of electoral discrepancies. Influencers like @FroggyChiu, who has more than 600,000 subscribers, also put out YouTube explainers on how votes are tallied.

    The video showing the election worker miscounting votes had been selectively edited, fact-checkers found. Voters at the voting station spotted the woman’s error and election workers quickly corrected the count, according to MyGoPen, an independent Taiwanese fact-checking chatbot.

    It was just one of dozens of videos that fact checkers had to debunk.

    “I believe some people genuinely believed this. And when the election results came out, they thought something was up,” said Eve Chiu, the editor-in-chief of Taiwan’s FactCheck Center, a nonprofit journalism organization.

    Supporters of the Taiwan People’s Party presidential candidate Ko Wen-je, many of whom are young, had shared the videos widely on TikTok, from which they spread to Facebook. Before the results came in, many thought a Ko upset was possible, given how much online attention the candidate had drawn. Taiwan’s FactCheck Center debunked multiple videos of alleged voter fraud, including another in which voting officials make a human error caught on camera. The source of these videos is unclear.

    Notably, Taiwan has resisted calls for tougher laws that would require social media platforms to police their sites; a proposal to institute such rules was withdrawn in 2022 after free speech concerns were raised.

    China, which claims Taiwan as its own, targeted the island with a stream of disinformation ahead of its election, according to research from DoubleThink Lab.

    Much of it sought to undermine faith in the incumbent Democratic Progressive Party and cast it as belligerent and likely to start a war that Taiwan can’t win. Other narratives targeted U.S. support for Taiwan, arguing that America was an untrustworthy partner interested only in Taiwan’s semiconductor exports and would not support the island if it came to war with China.

    Taiwan has been able to effectively respond to Chinese disinformation in part because of how seriously the threat is perceived there, according to Kenton Thibaut, a senior resident fellow and expert on Chinese disinformation at the Atlantic Council’s Digital Forensic Research Lab. Instead of a piecemeal approach — focusing solely on media literacy, for instance, or relying only on the government to fact-check false rumors — Taiwan adopted a multifaceted approach, what Thibaut called a “whole of society response” that relied on government, independent fact-check groups and even private citizens to call out disinformation and propaganda.

    In an interview with The Associated Press, Alexander Tah-Ray Yui, Taipei’s economic and cultural representative to the U.S., said the government has learned it must identify and debunk false information as quickly as possible in order to counter false narratives. Yui is Taiwan’s de facto ambassador to the U.S.

    “Find it early, like a tumor or cancer. Cut it before it spreads,” Yui said of foreign disinformation.

    Taiwan’s civil society groups like MyGoPen and the Taiwan FactCheck Center, which received $1 million in funding from Google, have focused on raising public awareness through debunking individual rumors that members of the public report.

    The island has a strong civil society. Many of the fact-checker groups were founded by dedicated individuals, such as MyGoPen, whose founder Charles Yeh started the chatbot service because he found his relatives would get confused by online rumors. Others, like the Taiwan FactCheck Center, are careful not to take government money so as to preserve their independence, said Chiu.

    Media literacy on fake news and the digital environment is growing, those on the front lines say, but slowly.

    “It’s like in the past when everyone dumped bottles and cans in the garbage and now they sort them, that was done through a period of societal education,” said Chiu. “Everyone needs to slowly develop this awareness, and this needs time.”

    In the U.S., government efforts to call out disinformation have themselves become politicized and criticized as government censorship or thought control.

    With a population more than 10 times the size of Taiwan and years of growing polarization, the U.S. has deep, internal political and social fault lines that create good conditions for disinformation to take root — and make it harder for the government to push back without being accused of censoring legitimate political views.

    In the United States, many of the narratives spread by Russia, for instance, are eagerly adopted by domestic groups that distrust the government. Donald Trump, the former president, and other Republicans have repeatedly made similar claims about the U.S. as those carried by Russian state media, for example.

    “We have a dynamic in American politics where if you’re Russia, China or Iran, you don’t have to inject divisive topics, because they’re already here,” said Jim Ludes, a former national defense analyst who now leads the Pell Center for International Relations at Salve Regina University.

    “The call is coming from inside the house,” he said, using a popular horror film metaphor.

    That dynamic can also be seen in Taiwan. Although Ko, the presidential candidate, said publicly he didn’t believe there was election fraud, legislators from the TPP held a conference Wednesday at which they shared already-debunked videos of miscounting that had spread online and called for greater adherence to voting regulations.

    Though the election passed without a major crisis, the challenge continues to evolve. Chinese efforts at disinformation have become increasingly localized and sophisticated, according to DoubleThink Lab’s post-election analysis.

    In one example, a Chinese-run Facebook page called C GaChuDao made a video describing an affair that it said a DPP legislator had with a woman from China. Unlike in years past, where Chinese disinformation was easily recognized and mocked for its use of simplified characters and vocabulary from China, this video featured a man speaking with a Taiwanese accent and in a way that appeared completely local.

    “In picking topics, they’d pick something that exists in your society, and then it’s relatively more convincing,” said Wu.

    ___

    Wu reported from Bangkok. Associated Press writer Didi Tang contributed to this report.


  • Online rumors partially to blame for drop in water pressure in Mississippi capital, manager says


    JACKSON, Miss. — Law enforcement agencies are investigating whether social media rumors about a potential water outage prompted people to quickly fill bathtubs with tap water in Mississippi’s capital during a cold snap, causing a drop in pressure that temporarily made faucets run dry for thousands of customers of the city’s long-troubled system.

    Taps ran dry Wednesday and Thursday for almost a quarter of Jackson’s 52,000 water customers as icy conditions strained local infrastructure. Officials for JXN Water, the private corporation that has been under a federal order to run Jackson’s system since late 2022, said a “deliberate misinformation campaign” was partially to blame. People responded to social media posts by filling bathtubs with water in a short period, causing demand to spike beyond what the water system could support, water manager Ted Henifin said.

    JXN Water said in a statement Friday that U.S. District Judge Henry Wingate authorized the release of information about the investigation and advised the corporation on what to communicate to the public.

    The organization did not specify which law enforcement agencies are involved or what charges might be brought if people are found to have spread false information on social media.

    JXN Water identified one specific social media post, but a spokesperson said the organization had not traced its origin.

    “Just got word they are about to shut off water in Jackson,” the post said. “If you’re in Jackson, fill up your tubs and jugs! Get prepared for not having water.”

    The water woes began as an arctic blast kept temperatures below freezing in Jackson for nearly three days. The temperature rose on Thursday, but the National Weather Service warned that dangerously cold air would return this weekend.

    Jackson residents and officials were already concerned that frigid conditions could disrupt the water system. Cold snaps in 2021 and 2022 caused frozen pipes and drops in water pressure across the city of nearly 150,000 residents. People had been told to prepare for past disasters by keeping jugs or bathtubs full of water.

    Maintenance crews had restored water to all but about 1,000 customers Friday.

    Ameerah Palacios, a spokesperson for JXN Water, said the news release about an investigation was partially written by Wingate, who is overseeing a federal intervention to improve the water system.

    “Judge Wingate, that’s a man who chooses his words very carefully,” Palacios told The Associated Press in an interview. “The way that he worded it was, all of ‘the appropriate law enforcement agencies,’ so definitely more than one at play.”

    A court clerk took a phone message for Wingate on Friday, but the judge did not immediately return a call to the AP.

    It was unclear how many Jackson residents saw the social media posts or were influenced by them.

    Although JXN Water did not release names of anyone who shared the post it cited, AP identified a Facebook post from Wednesday that had the exact wording. The Facebook account belongs to Bob Hickingbottom of Jackson, who ran unsuccessfully for governor as a Constitution Party candidate in 2019 and tried to run for governor in 2023 before the state Democratic Party removed him from its primary ballot.

    In one phone interview with the AP, Hickingbottom said somebody might have put the post on his page.

    “Something like that would be outside the realm of civilized behavior,” Hickingbottom said.

    In a second phone call moments later, Hickingbottom said he put the water post on his page and he thought he was sharing information to help people.

    “I’m a flamethrower when it comes to politics, but this is not politics,” Hickingbottom said of Jackson’s water system.

    The latest disruption in Jackson water service came a week after Mississippi health officials issued and then quickly lifted a health advisory after tests identified E. coli in the water supplies of Jackson and a suburb. Henifin said he believed the tests were false positives caused by lab contamination, but the state health department stood by its tests.

    Wingate appointed Henifin in November 2022 to oversee reforms to Jackson’s water system after infrastructure breakdowns during the late summer of that year caused many city residents to go days without safe running water.


  • Conflict, climate change and AI get top billing for meeting in Davos


    DAVOS, Switzerland — The Earth is heating up, as is conflict in the Middle East. The world economy and Ukraine’s defense against Russia are sputtering along. Artificial intelligence could upend all our lives.

    The to-do list of global priorities has grown for this year’s edition of the World Economic Forum’s gabfest of business, political and other elites in the Alpine snows of Davos, Switzerland, which runs Tuesday through Friday.

    Over 60 heads of state and government, including Israeli President Isaac Herzog and Ukrainian President Volodymyr Zelenskyy, will be heading to town to hold both public appearances and closed-door talks. They’ll be among more than 2,800 attendees, which also include academics, artists and international organization leaders.

    The gathering is mostly high-minded ambition — think business innovation, aims for peace-making and security cooperation, or life-changing improvements in health care — and a venue for decision-makers in an array of fields and industries to connect.

    It is also regularly panned by critics as an emblem of the yawning gap between rich and poor: Young Swiss Socialists staged a rally Sunday to blast the forum and brand attendees as “the richest and most powerful, who are responsible for today’s wars and crises.”

    “Davos is easily mocked. But in current times it is hard to get people together to talk in a room on shared global issues and the value of face-to-face conversations is very real, as the COVID-19 pandemic showed,” Bronwen Maddox, director of the Chatham House think tank, said in an e-mail.

    Here’s what to watch for:

    While Davos is generally big-picture, regional conflict can cast a long shadow — like Ukraine’s war did a year ago, prompting organizers to exclude any Russian delegation.

    This year, Israel’s three-month war with Hamas in Gaza, and recently U.S. and British airstrikes on Houthi militants in Yemen who have fired missiles into Red Sea shipping lanes, are looming large.

    Herzog, the Israeli president, whose job is more ceremonial than is Prime Minister Benjamin Netanyahu’s, will be on hand for a Davos session Thursday, and the prime ministers of Qatar, Jordan and Lebanon will also be attending.

    A “humanitarian briefing on Gaza” session gets a half-hour slot Tuesday.

    In a testament to how technology has taken a large and growing slice of attention in Davos, the theme of artificial intelligence “as a driving force for the economy and society” will get about 30 separate sessions this year.

    The dizzying emergence of OpenAI’s ChatGPT over a year ago, and of rivals since then, has elevated the power, promise and portent of artificial intelligence into greater public view. OpenAI chief Sam Altman will be in Davos along with top executives from Microsoft, which helped bankroll his company’s rise.

    AI in education, transparency about AI, its ethics and impact on creativity are all part of the menu — and the Davos Promenade is swimming in advertisements and displays pointing to the new technology.

    Forum organizers warned last week that misinformation generated by AI, such as through the creation of synthetic content, is the world’s greatest short-term threat.

    Such misinformation could surge this year, and one session explores the threat of “bots and plots” on democracies.

    Forum organizers say elections in countries whose populations together total 4.2 billion people will take place this year, and many will be contested. (Few doubt that Russian President Vladimir Putin will get a new term.)

    It comes against the backdrop of talk about a new Cold War, the widening rift between dictatorships — or at least autocracies — and democratic countries.

    Back-to-back addresses Tuesday morning by Prime Minister Li Qiang of China and Ursula von der Leyen, the president of the European Commission, will highlight the contrast. President Joe Biden’s national security adviser, Jake Sullivan, gives a speech later in the day.

    French President Emmanuel Macron and U.S. Secretary of State Antony Blinken will speak Wednesday, as will Argentina’s new president, Javier Milei, a libertarian who has already announced plans to slash the government workforce.

    Davos corridors were already abuzz about whether former U.S. President Donald Trump — who made two trips to Davos during his term — could be inaugurated again around this time next year, after November’s election. Biden was once a regular at Davos, but has not attended as president.

    Of all the lofty hopes in Davos, the perennial one of late has been the search for creative and promising ways to fight climate change.

    This year is no different: Top climate scientists from around the world reported this month that average global temperatures last year obliterated the record highs — raising the urgency level.

    John Kerry, who is stepping down as Biden’s climate adviser, takes part in a panel discussion on a U.S.-backed initiative that aims to draw the private sector into development of low-carbon technologies.

    Chatham House’s Maddox said plans to transition away from fossil fuels agreed during the U.N. climate conference in Dubai last month mean climate finance will face a big year in 2024.

    “Davos is a powerful combination potentially, of a lot of concern about the environment, and a lot of high-powered finance present,” she said.


  • Lab leak is not a conspiracy theory, Anthony Fauci concedes


    Former White House coronavirus advisor Anthony Fauci doesn’t believe the lab leak explanation of COVID-19’s origins is a conspiracy theory. He admitted as much during a closed-door grilling session before the House Select Subcommittee on the Coronavirus Pandemic on Monday. Legislators did not release a transcript of his testimony, but Rep. Brad Wenstrup (R–Ohio), the chairman of the subcommittee, published some highlights on X (formerly Twitter).

    In recent months, Fauci has denied he ever categorically rejected the possibility that COVID-19 accidentally escaped from a laboratory. But he faces very serious allegations that he deterred scientific experts from considering it. At issue is “The Proximal Origin of SARS-CoV-2,” a paper that appeared in Nature Medicine, a scientific journal, in March 2020 at the very start of the global pandemic. Fauci—who was then head of the National Institute of Allergy and Infectious Diseases (NIAID)—and Francis Collins—then director of the National Institutes of Health—participated in a conference call with the authors, whose initial openness to a lab leak explanation changed significantly prior to publication. The paper ultimately ruled out a lab leak as not just “unlikely”—the phrasing used in an early draft of the paper—but “improbable.”

    More recently, Fauci has contended that he always remained open to the idea, but was persuaded by scientific arguments—including those in the proximal origin paper—that a zoonotic spillover was more likely. This claim would be more persuasive if Fauci had not stated over and over and over and over again, in media interviews, that he “strongly favored” the zoonotic origin theory; his subsequent suggestion that he did not lean in either direction is flatly contradicted by his literal words.

    It was certainly in Fauci’s interest to downplay the possibility that human experimentation on viruses accidentally unleashed COVID-19 upon the world; during his career, Fauci remained one of the foremost advocates of public funding for gain-of-function research, in which scientists manipulate viruses in order to make them deadlier and more transmissible. Fauci and other public health experts have straightforwardly denied that the U.S. funded such research in Wuhan, China, but critics say this is an exercise in semantics. Indeed, EcoHealth Alliance—a U.S. nonprofit that obtained public funding to conduct research on bat coronaviruses in Wuhan, China—was caught actively misleading Pentagon officials about the nature of the experimentation: Peter Daszak, the head of EcoHealth Alliance, advised colleagues to deceive regulators about the fact that the research would be conducted in China under laxer lab safety standards.

    A cadre of elite scientists deliberately lied to U.S. security officials in order to spend American tax dollars performing risky experiments under substandard laboratory conditions in a notoriously secretive and authoritarian foreign country. Maybe those experiments created COVID-19, and maybe they didn’t. In any case, it’s clearly not a conspiracy theory; good of Fauci to recognize the obvious, however belatedly it might be.

    One can debate the extent of Fauci’s wrongdoing here—but it’s the mainstream media that really dropped the ball in terms of lab leak discourse. The Washington Post was an early offender, accusing Sen. Tom Cotton (R–Ark.) of “repeating a coronavirus theory that was already debunked.” The article explicitly applied the phrase “conspiracy theory” to the lab leak idea; The New York Times did the same, noting that the lab leak had been “dismissed by scientists.” In fact, The Times‘ lead coronavirus reporter, Apoorva Mandavilli, went a step further, calling lab leak a racist theory.

    Mandavilli’s tone toward the lab leak was broadly representative of a whole host of mainstream journalists, media commentators, and so-called fact-checkers and misinformation experts. Following this flawed consensus, social media sites—including Facebook—brutally suppressed any and all discussion of the lab leak theory on their platforms. As recently as August 2023, The Journal of the American Medical Association was still counting lab leak discourse online as evidence of the unstoppable spread of misinformation online. And the Global Disinformation Index—a British non-profit that received funding from the State Department, and tarred Reason as an unsafe news website—warned that blaming the pandemic on a lab leak could lead to racist attacks on Asian people.

    That’s a long way of saying that self-appointed misinformation cops went to great efforts to censor and stigmatize this topic of conversation, on grounds that it was either racist, or a conspiracy theory, or both. Yet it is neither; even Fauci says so. One might hope that this would prompt some self-reflection within media circles. The anti-misinformation crowd wasn’t just wrong—they were militant that it was of vital importance to stop everyone from even contemplating the possibility of a lab leak theory.

    There’s a perniciousness underlying this attitude, one that clearly threatens free speech, as many U.S. political figures—including President Joe Biden and Sen. Elizabeth Warren (D–Mass.)—have decided that the federal government should do more to combat purported misinformation. They might consider whether they themselves have been misinformed.


    Robby Soave


  • 1/2: Prime Time with John Dickerson

    John Dickerson reports on the death of a senior Hamas leader in Lebanon, the resignation of Harvard’s president, and how the brain processes misinformation.


  • 1/1: CBS Evening News

    Israel begins pulling thousands of troops out of Gaza; Airlines race to get new pilots amid shortage


  • How social media can spread misinformation on Israel-Hamas war

    An investigation by CBS News found misinformation about the Israel-Hamas war can be easily found and spread on social media. One school in Dallas has a media literacy course to help teach teens, who often get their news from social media, how to separate fact from fiction. Tom Hanson reports.


  • As social media guardrails fade and AI deepfakes go mainstream, experts warn of impact on elections


    NEW YORK — Nearly three years after rioters stormed the U.S. Capitol, the false election conspiracy theories that drove the violent attack remain prevalent on social media and cable news: suitcases filled with ballots, late-night ballot dumps, dead people voting.

    Experts warn it will likely be worse in the coming presidential election contest. The safeguards that attempted to counter the bogus claims the last time are eroding, while the tools and systems that create and spread them are only getting stronger.

    Many Americans, egged on by former President Donald Trump, have continued to push the unsupported idea that elections throughout the U.S. can’t be trusted. A majority of Republicans (57%) believe Democrat Joe Biden was not legitimately elected president.

    Meanwhile, generative artificial intelligence tools have made it far cheaper and easier to spread the kind of misinformation that can mislead voters and potentially influence elections. And social media companies that once invested heavily in correcting the record have shifted their priorities.

    “I expect a tsunami of misinformation,” said Oren Etzioni, an artificial intelligence expert and professor emeritus at the University of Washington. “I can’t prove that. I hope to be proven wrong. But the ingredients are there, and I am completely terrified.”

    Manipulated images and videos surrounding elections are nothing new, but 2024 will be the first U.S. presidential election in which sophisticated AI tools that can produce convincing fakes in seconds are just a few clicks away.

    The fabricated images, videos and audio clips known as deepfakes have started making their way into experimental presidential campaign ads. More sinister versions could easily spread without labels on social media and fool people days before an election, Etzioni said.

    “You could see a political candidate like President Biden being rushed to a hospital,” he said. “You could see a candidate saying things that he or she never actually said. You could see a run on the banks. You could see bombings and violence that never occurred.”

    High-tech fakes already have affected elections around the globe, said Larry Norden, senior director of the elections and government program at the Brennan Center for Justice. Just days before Slovakia’s recent elections, AI-generated audio recordings impersonated a liberal candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false, but they were shared as real across social media regardless.

    These tools might also be used to target specific communities and hone misleading messages about voting. That could look like persuasive text messages, false announcements about voting processes shared in different languages on WhatsApp, or bogus websites mocked up to look like official government ones in your area, experts said.

    Faced with content that is made to look and sound real, “everything that we’ve been wired to do through evolution is going to come into play to have us believe in the fabrication rather than the actual reality,” said misinformation scholar Kathleen Hall Jamieson, director of the Annenberg Public Policy Center at the University of Pennsylvania.

    Republicans and Democrats in Congress and the Federal Election Commission are exploring steps to regulate the technology, but they haven’t finalized any rules or legislation. That’s left states to enact the only restrictions so far on political AI deepfakes.

    A handful of states have passed laws requiring deepfakes to be labeled or banning those that misrepresent candidates. Some social media companies, including YouTube and Meta, which owns Facebook and Instagram, have introduced AI labeling policies. It remains to be seen whether they will be able to consistently catch violators.

    It was just over a year ago that Elon Musk bought Twitter and began firing its executives, dismantling some of its core features and reshaping the social media platform into what’s now known as X.

    Since then, he has upended its verification system, leaving public officials vulnerable to impersonators. He has gutted the teams that once fought misinformation on the platform, leaving the community of users to moderate itself. And he has restored the accounts of conspiracy theorists and extremists who were previously banned.

    The changes have been applauded by many conservatives who say Twitter’s previous moderation attempts amounted to censorship of their views. But pro-democracy advocates argue the takeover has shifted what once was a flawed but useful resource for news and election information into a largely unregulated echo chamber that amplifies hate speech and misinformation.

    Twitter used to be one of the “most responsible” platforms, showing a willingness to test features that might reduce misinformation even at the expense of engagement, said Jesse Lehrich, co-founder of Accountable Tech, a nonprofit watchdog group.

    “Obviously now they’re on the exact other end of the spectrum,” he said, adding that he believes the company’s changes have given other platforms cover to relax their own policies. X didn’t answer emailed questions from The Associated Press, only sending an automated response.

    In the run-up to 2024, X, Meta and YouTube have together removed 17 policies that protected against hate and misinformation, according to a report from Free Press, a nonprofit that advocates for civil rights in tech and media.

    In June, YouTube announced that while it would still regulate content that misleads about current or upcoming elections, it would stop removing content that falsely claims the 2020 election or other previous U.S. elections were marred by “widespread fraud, errors or glitches.” The platform said the policy was an attempt to protect the ability to “openly debate political ideas, even those that are controversial or based on disproven assumptions.”

    Lehrich said even if tech companies want to steer clear of removing misleading content, “there are plenty of content-neutral ways” platforms can reduce the spread of disinformation, from labeling months-old articles to making it more difficult to share content without reviewing it first.

    X, Meta and YouTube also have laid off thousands of employees and contractors since 2020, some of whom have included content moderators.

    The shrinking of such teams, which many blame on political pressure, “sets the stage for things to be worse in 2024 than in 2020,” said Kate Starbird, a misinformation expert at the University of Washington.

    Meta explains on its website that it has some 40,000 people devoted to safety and security and that it maintains “the largest independent fact-checking network of any platform.” It also frequently takes down networks of fake social media accounts that aim to sow discord and distrust.

    “No tech company does more or invests more to protect elections online than Meta – not just during election periods but at all times,” the posting says.

    Ivy Choi, a YouTube spokesperson, said the platform is “heavily invested” in connecting people to high-quality content on YouTube, including for elections. She pointed to the platform’s recommendation and information panels, which provide users with reliable election news, and said the platform removes content that misleads voters on how to vote or encourages interference in the democratic process.

    The rise of TikTok and other, less regulated platforms such as Telegram, Truth Social and Gab, also has created more information silos online where baseless claims can spread. Some apps that are particularly popular among communities of color and immigrants, such as WhatsApp and WeChat, rely on private chats, making it hard for outside groups to see the misinformation that may spread.

    “I’m worried that in 2024, we’re going to see similar recycled, ingrained false narratives but more sophisticated tactics,” said Roberta Braga, founder and executive director of the Digital Democracy Institute of the Americas. “But on the positive side, I am hopeful there is more social resilience to those things.”

    Trump’s front-runner status in the Republican presidential primary is top of mind for misinformation researchers who worry that it will exacerbate election misinformation and potentially lead to election vigilantism or violence.

    The former president still falsely claims to have won the 2020 election.

    “Donald Trump has clearly embraced and fanned the flames of false claims about election fraud in the past,” Starbird said. “We can expect that he may continue to use that to motivate his base.”

    Without evidence, Trump has already primed his supporters to expect fraud in the 2024 election, urging them to intervene to “guard the vote” to prevent vote rigging in diverse Democratic cities. Trump has a long history of suggesting elections are rigged if he doesn’t win and did so before voting in 2016 and 2020.

    That continued wearing away of voter trust in democracy can lead to violence, said Bret Schafer, a senior fellow at the nonpartisan Alliance for Securing Democracy, which tracks misinformation.

    “If people don’t ultimately trust information related to an election, democracy just stops working,” he said. “If a misinformation or disinformation campaign is effective enough that a large enough percentage of the American population does not believe that the results reflect what actually happened, then Jan. 6 will probably look like a warm-up act.”

    Election officials have spent the years since 2020 preparing for the expected resurgence of election denial narratives. They’ve dispatched teams to explain voting processes, hired outside groups to monitor misinformation as it emerges and beefed up physical protections at vote-counting centers.

    In Colorado, Secretary of State Jena Griswold said informative paid social media and TV campaigns that humanize election workers have helped inoculate voters against misinformation.

    “This is an uphill battle, but we have to be proactive,” she said. “Misinformation is one of the biggest threats to American democracy we see today.”

    Minnesota Secretary of State Steve Simon’s office is spearheading #TrustedInfo2024, a new online public education effort by the National Association of Secretaries of State to promote election officials as a trusted source of election information in 2024.

    His office also is planning meetings with county and city election officials and will update a “Fact and Fiction” information page on its website as false claims emerge. A new law in Minnesota will protect election workers from threats and harassment, bar people from knowingly distributing misinformation ahead of elections and make it a crime to share deepfake images without consent in order to hurt a political candidate or influence an election.

    “We hope for the best but plan for the worst through these layers of protections,” Simon said.

    In a rural Wisconsin county north of Green Bay, Oconto County Clerk Kim Pytleski has traveled the region giving talks and presentations to small groups about voting and elections to boost voters’ trust. The county also offers equipment tests in public so residents can observe the process.

    “Being able to talk directly with your elections officials makes all the difference,” she said. “Being able to see that there are real people behind these processes who are committed to their jobs and want to do good work helps people understand we are here to serve them.”

    ___

    Fernando reported from Chicago. Associated Press writer Christina A. Cassidy in Atlanta contributed to this report.

    ___

    The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP’s democracy initiative here. The AP is solely responsible for all content.

  • Poland's new Cabinet moves to free state media from previous government's political control

    Poland's new Cabinet moves to free state media from previous government's political control

    WARSAW, Poland — Poland’s new pro-European Union government has begun to wrestle control of the country’s state media and some other state agencies from the conservative party that consolidated its grip on them during eight years in power.

    The Cabinet of Prime Minister Donald Tusk, which took office last week, said Wednesday it had fired the directors of the state television and radio outlets and the government-run news agency. It seeks to reestablish independent media in Poland in a legally binding and lasting way.

    Tusk’s government has made it a priority to restore objectivity and free expression in state media, which the previous government, under the Law and Justice party, used as aggressive propaganda tools, attacking Tusk and the opposition and spreading its euroskeptic views. During its rule, Law and Justice cut corners and ignored some procedures to gain control of the media supervisory bodies and of the key appointments as it tightened its grip.

    The new government’s first steps toward a return to media freedom were met with protest by Law and Justice. Party leader Jarosław Kaczyński, top party figures and many of its lawmakers occupied buildings housing the offices and studios of state-run television TVP in the hopes that their supporters would come out to demonstrate in big numbers. A rally was called for later Wednesday and a few dozen people gathered.

    “The (party) instructions are that all Law and Justice parliament members come here (to the TVP building),” said Law and Justice Senator Marek Pek. “We must show through our presence that we are deeply against these lawless and brutal actions.”

    Law and Justice issued a statement saying the actions of the new government were “illegal” and that the change of the leaderships of the media was done “unlawfully.”

    The statement quoted Kaczyński, Poland’s most powerful politician until recently, insisting that the protest is a “defense of democracy because there is no democracy without media pluralism or a strong anti-government media. In any democracy, there must be strong anti-government media.”

    But for years the Law and Justice government actively sought to discredit TVN, a broadcaster highly critical of it, and to push the station out of the market.

    There was no brutality involved Wednesday and the change of management was done in line with the law. Some police were deployed in front of the building, but their goal was to ensure calm, according to Warsaw police spokesman Sylwester Marczak.

    On Tuesday, Polish lawmakers adopted a resolution presented by Tusk’s government calling for the restoration of “legal order, objectivity and fairness” of TVP, Polish Radio and the PAP news agency.

    Following the resolution, Poland’s new culture minister, Bartłomiej Sienkiewicz, replaced the heads and the supervisory boards of state media, which chose new management.

    The new head of TVP’s supervisory board, lawyer Piotr Zemła, came to the broadcaster’s headquarters on Wednesday.

    In the first sign of change, the all-news TVP INFO channel, one of the previous government’s main propaganda tools, ceased to broadcast on air and over the internet on Wednesday morning.

    Earlier this week, the previous ruling team called a rally at the state television building to protest any planned changes, but only a few hundred people turned up.

    President Andrzej Duda, who was an ally of the previous government, has warned that he won’t accept moves that he believes to be against the law. However, his critics have long accused him of violating the Polish Constitution and other laws as he tried to support the policies of the Law and Justice party. Some of the party’s policies, especially in the judicial sector, drew strong criticism and financial sanctions from the EU, which saw them as undemocratic.

    In another setback for Law and Justice, a Warsaw court handed two-year prison terms to former interior minister Mariusz Kaminski and his deputy. The court found them guilty of abuse of power in 2007, when they served in the previous Law and Justice government. This means they will be stripped of their immunity as current lawmakers and should be replaced.

    The government took office last week and began reversing policies of the previous administration that many in Poland found divisive. In one such move, Tusk had new heads of the security, intelligence and anti-corruption offices appointed on Tuesday.

    Parties that make up the new government collectively won a majority of votes in the Oct. 15 election. They have vowed to jointly govern under the leadership of Tusk, who served as prime minister from 2007 to 2014 and was head of the European Council from 2014 to 2019.

  • Teens struggle to identify misinformation about Israel-Hamas conflict — the world’s second “social media war”

    Teens struggle to identify misinformation about Israel-Hamas conflict — the world’s second “social media war”

    Decimated neighborhoods. Injured children. Terrorized festivalgoers running for their lives. Since the brutal war between Israel and Hamas began nearly three months ago, Maddy Miller, a 17-year-old high school senior in Dallas, Texas, has been trying to make sense of the horrific scenes unfolding daily on her phone. 

    “I’ll just open TikTok or Instagram and it’s like, ‘here’s a clip from inside Israel or inside Palestine,’” Miller said. “Sometimes I just need to sit down for like 10 minutes and actually figure out what’s happening. It’s hard to know what’s real and what’s fake.” 

    In February 2022, the war in Ukraine began to play out on TikTok and Instagram. The conflict in the Middle East is now the second war to be viewed in vivid, and often intimate, vignettes on social media, where 51% of younger Gen Z teens get their news, according to a Deloitte survey. The war between Israel and Hamas has also sparked a tidal wave of misinformation and disinformation, which is reaching American teens like Miller. 

    In a packed classroom at Highland Park High School, Miller and about 30 other students study media literacy, a course many teens across the United States are not required to take. Texas is one of only four states in the U.S. that mandate a media literacy curriculum in all public schools beginning in kindergarten. Fourteen other states offer some form of media literacy education or online resources to public school students.  

    Media literacy classes

    As part of every lesson, Brandon Jackson teaches students the tools needed to spot misinformation, which is false or misleading, and disinformation, which is deliberately deceptive. He also tests his students using real-world examples of fake videos that circulate on social media.  

    “The whole point of this is to analyze large international news events,” Jackson told his students. “How does information change when you’re looking at it on social media? Is it manipulative?” 

    Despite the technological edge young Americans have over older generations, Stanford University researchers Sam Wineburg and Joel Breakstone say teenagers’ ability to identify misinformation on social media is concerningly low. 

    “Video has a kind of immediacy, but we need to help people understand how to evaluate a video,” Wineburg said. “Is the person who’s providing the video an objective source? Does that person, are there reputational costs if that person is wrong, or are they some ‘rando’ that has sensationalist footage and is a rage merchant?” 


    (Video: “Stanford research shows tech-savvy teens are still falling for fake videos,” 01:15)

    Wineburg and Breakstone tested the ability of high schoolers to identify misinformation on social media. They chose more than 3,000 students, whose backgrounds reflected the demographics of the U.S., and asked them to determine whether an anonymous video was real or fake. 

    “The video purported to claim to show voter fraud in the United States,” Breakstone explained. “If you did a quick internet search, within 30 seconds you could discover that the video actually showed voter fraud in Russia. However, out of those more than 3,000 students, how many students actually discovered the link to Russia? Three. That’s less than one-tenth of 1%.” 

    The experiment 

    A CBS News investigation revealed how quickly mis- and disinformation is reaching teenage accounts on social media. In an experiment, a team of journalists set up three different profiles on Instagram and TikTok.

    One account searched simple terms on Israel; another searched simple Palestinian terms; and the last account searched both. Each alias also followed several accounts with more than 1,000 followers and “liked” a handful of posts for each one.  

    While the faux-teen accounts on TikTok and Instagram were initially fed typical teenage content, like posts about getting ready for high school and makeup tutorials, the algorithms also took the searches into account. Not long after the search terms were entered, each feed was flooded with war-related content, including misinformation.  


    In one widely debunked video, a person who claimed to work at a hospital in Gaza alleged Hamas had overrun the facility. She said she had to perform surgery on a child without morphine. An analysis revealed the video was staged and even the explosions were manufactured. Another now-debunked video claimed to show an Iranian warplane landing on an Israeli aircraft carrier.  

    “It looks like a video game to me,” said Dan Evon at the News Literacy Project, a nonpartisan group that advocates for media literacy in schools.

    Evon has spent his career deciphering fact from fiction on social media. He also teaches young people how to spot mis- and disinformation. Key to that is what he calls “pre-bunking”: equipping them with the tools to help identify misinformation before they fall for it.

    “The same tip that I give every single time is to slow down,” said Evon. “Look for authenticity; look for the source; look for evidence; look for reasoning and to look for the context.”

    “More dangerous paths”

    From the highly publicized resignation of the president of the University of Pennsylvania, to high school walkouts in San Francisco and New York City, the war has undeniably created a tense climate in schools nationwide. Reports of antisemitic and Islamophobic threats and violence have soared.

    “It doesn’t feel like we’re living in 2023. Feels like we’re living in Nazi Germany,” one student said. 

    Experts like Evon, Breakstone and Wineburg said false or misleading information can intensify the already heated debates about this conflict.  

    “When young people are developing their views about the world, false claims alter that,” Evon said. “They drag people down more dangerous paths.”

    The students at Highland Park High School agree.

    “It can just be really dangerous if we don’t seek out the real information,” Miller said. “I hope that people in our generation start to become more educated about issues.” 

    Response from TikTok

    CBS News discussed the experiment findings with spokespeople from TikTok. After the team sent the company links to examples of misinformation, those posts were removed.

    “TikTok works relentlessly to remove harmful misinformation, and partners with independent fact-checkers who assess the accuracy of content in more than 50 languages,” a TikTok spokesperson said. “We’ve removed more than 131,000 videos for misinformation since the start of the Israel-Hamas war and direct people searching for content related to the conflict to Reuters.”

    TikTok spokespeople also said:

    • Our Community Guidelines are clear that we do not allow inaccurate, misleading, or false content that may cause significant harm to individuals or society, regardless of intent. We reviewed content sent to us by CBS and have removed those that violate our policies.
    • We use a combination of technology and human moderation to enforce those guidelines, and we review content at multiple stages including initial upload, when content is reported to us and as it rises in popularity. 
    • We have over 40,000 talented safety professionals dedicated to keeping TikTok safe. We also rely on independent fact-checking partners and our database of previously fact-checked claims to help assess the accuracy of content. We work with 17 fact-checking partners globally, covering more than 50 languages.
    • We surface authoritative information at the very top of search results to provide access to facts. For example, searching for “Israel” on TikTok directs people to resources from Reuters.

    Response from Meta about Instagram

    “We’ve taken significant steps to fight the spread of misinformation using a three-part strategy – remove content that violates our Community Standards, flag and reduce distribution of stories marked as false by third-party fact-checkers,” a Meta spokesperson said. “We also label content and inform people so they can decide what to read, trust and share.”

    The Meta spokesperson also said:

    • We’re working with third-party fact-checkers in the region to debunk false claims. Meta has the largest third-party fact checking network of any platform, with coverage in both Arabic and Hebrew, through AFP, Reuters and Fatabyyano. When they rate something as false, we move this content lower in Feed so fewer people see it. 
    • We recognize the importance of speed in moments like this, so we’ve made it easier for fact-checkers to find and rate content related to the war, using keyword detection to group related content in one place.
    • We’re also giving people more information to decide what to read, trust, and share by adding warning labels on content rated false by third-party fact-checkers and applying labels to state-controlled media publishers. 

  • Harvard Scholar Joan Donovan Claims Facebook Pushed Her Out | Entrepreneur

    Harvard Scholar Joan Donovan Claims Facebook Pushed Her Out | Entrepreneur

    Dr. Joan Donovan, a former Harvard disinformation scholar, is claiming in a new disclosure that the university’s cozy relationship with alumni Mark Zuckerberg and his wife Priscilla Chan led to her termination.

    Donovan is regarded as one of the world’s leading social media disinformation experts. In 2021, she testified before the House and Senate subcommittees about misinformation and social media.

    In the whistleblower declaration made public on Monday, Donovan claims her studies on media manipulation campaigns were restricted following a $500 million donation from the Chan Zuckerberg Initiative to fund an artificial intelligence center in 2021.

    “From that very day forward, I was treated differently by the university to the point where I lost my job,” Donovan told The Logic.

    The disclosure was sent on Donovan’s behalf to Harvard and U.S. Education Secretary Miguel Cardona by Whistleblower Aid last week.

    The Chan Zuckerberg Initiative is a philanthropic organization run by Zuckerberg and Chan.

    Donovan claims she was terminated in 2022 after Harvard shut down her research. She had worked at the university since 2018 running the Technology and Social Change Research Project for the Shorenstein Center at Harvard University’s John F. Kennedy School of Government.

    The disclosure calls for an investigation into the Kennedy School and “all appropriate corrective action.”

    Harvard, meanwhile, has rejected Donovan’s allegations and says she wasn’t fired.

    “Allegations of unfair treatment and donor interference are false. The narrative is full of inaccuracies and baseless insinuations, particularly the suggestion that Harvard Kennedy School allowed Facebook to dictate its approach to research,” said Harvard spokesperson James Francis Smith in a statement to CNN.

    “By longstanding policy to uphold academic standards, all research projects at Harvard Kennedy School need to be led by faculty members. Joan Donovan was hired as a staff member (not a faculty member) to manage a media manipulation project. When the original faculty leader of the project left Harvard, the School tried for some time to identify another faculty member who had time and interest to lead the project. After that effort did not succeed, the project was given more than a year to wind down. Joan Donovan was not fired, and most members of the research team chose to remain at the School in new roles,” he said.

    Entrepreneur reached out to Meta for comment.

    The disclosure notes that the Chan Zuckerberg donation came shortly after the 2021 “Facebook Papers” whistleblower complaint from former Facebook employee Frances Haugen.

    Harvard made the papers public with the help of Donovan, who archived the documents for public research.

    Following her departure from Harvard, Donovan announced in August that she is joining Boston University’s College of Communication as an assistant professor.

    Sam Silverman

  • Elon Musk’s X platform fueled far-right riots in Ireland, experts say

    Elon Musk’s X platform fueled far-right riots in Ireland, experts say

    Elon Musk’s social media platform X has fueled far-right disinformation in Ireland and played a key role in riots last month in the country’s capital, Dublin, experts tell CBS News. The violent clashes erupted on Nov. 23 between about 200 civilians and riot police in central Dublin as demonstrators vented rage after a stabbing incident earlier in the day that left multiple people wounded, including a 5-year-old girl who was hospitalized with serious injuries. 

    False reports circulating on social media had suggested the stabbings were carried out by an illegal immigrant. The assailant was in fact a naturalized Irish citizen originally from Algeria. 

    The violence, which saw a tram and a bus set on fire and stores looted, was partially incited by far-right local actors with significant followings on X, which was called Twitter before Musk bought the platform. 

    (Photo: Police at the scene in Dublin as riots broke out following a stabbing incident in which five people were injured, including three young children, Nov. 23, 2023. Brian Lawless/PA Images via Getty Images)

    “What we saw at the beginning of the riot was what started out to be a protest, you know, either organized by the far-right or if it wasn’t organized by the far-right, the far-right wasn’t far behind,” Matthew Donoghue, an assistant professor in social policy at University College Dublin, told CBS News. 

    “The fact that we saw attacks on the [police] cordon and the crime scene, these are clearly organized and orchestrated activities which need quite a lot of background organization… this is where we see the far-right’s use of X,” he said. “They were able to get a lot of people there very quickly to basically take control of that situation, direct it.” 

    Eileen Culloty, a deputy director of the Institute for Media, Democracy and Society at Dublin City University, told CBS News the riots had been plotted by “a core group” of prominent right-wing influencers on X who “have a relatively high profile within that kind of alternative, right-wing world. Some of them will be alternative media outlets, some of them are right-wing anti-immigration activists.”

    “They went into overdrive in the lead-up to the riots,” Culloty told CBS News. “They were posting lots of public messages on Twitter [X], but also on Telegram and other platforms from lunchtime onwards and urging people to act. A lot of the hashtags they used were promoting this ethno-nationalist idea that Ireland is full, that Ireland belongs to the Irish.” 

    A study conducted by the Institute for Strategic Dialogue, an independent nonprofit think-tank that studies and offers policy advice on extremism and disinformation, just days before the riots in Ireland had also found that Twitter (X) is “used by virtually all of the most prominent actors in the Irish mis- and disinformation ecosystem.”

    The study focused on the growing online influence of the far-right in Ireland over the past three years, analyzing 13,180,820 posts from 1,640 accounts across 12 online platforms. X had the highest number of far-right accounts of those analyzed by the researchers.

    Since his October 2022 takeover, tech billionaire Musk has dismantled core features of the platform, including its verification system and its Trust and Safety advisory group, as well as broader content moderation and hate speech enforcement.

    As the Associated Press reported in October, experts who study disinformation have said that X has deteriorated under Musk to the point that it’s not merely failing to detect and remove misinformation, but is favoring posts by accounts that pay for the platform’s blue-check subscription service, regardless of who’s running them.

    Crucially, according to Culloty, the core group of far-right accounts suspected of inciting the violence in Dublin had previously been removed from the platform for violating the company’s safety policies, but were reinstated after Musk took over the company.

    “They were able to move back to X and a lot of people who had been banned were able to come back,” she said. “It’s notable that there are more people not trying to conceal their identity [in the aftermath of Musk’s takeover]. So they now feel quite comfortable making these incendiary statements.” 

    In the aftermath of the riots, other prominent figures from the right wing of American politics have pushed a conspiratorial, anti-immigration narrative on X in an attempt to justify the violence in Ireland.

    Former Fox News anchor Tucker Carlson, who now streams his own show on X, told his millions of followers last week that “the Irish government is trying to replace the population of Ireland with people from the third world.”

    Carlson’s interviewee on the show, former White House adviser and Trump ally Steve Bannon, called Ireland “a powder keg.” 

    Musk himself weighed in on the violence in Ireland on X and took aim at the Irish government last month. 

    In a post the day after the scenes played out in Dublin, Musk said Irish Prime Minister Leo Varadkar “hates the Irish people,” after the Irish government announced that it would aim to pass new laws against hate crimes and hate speech in response to the riots. 

    Speaking to the Irish parliament last week, Justice Minister Helen McEntee said X had refused to comply with requests from the Garda Síochána, Ireland’s national police force, to take down inflammatory posts in real time as violence flared in Dublin. 

    McEntee said she’d spoken with a detective “who was actively engaged with the social media companies” throughout the evening of the riots, Irish state broadcaster RTÉ reported. 

    Other social media companies including TikTok and Meta, which owns Facebook and Instagram, “were responding, they were engaging with gardaí and they were taking down these vile posts as they came up,” McEntee said. “X were not. They didn’t engage. They did not fulfill their own community standards.” 

    (Photo: X owner Elon Musk speaks during the New York Times annual DealBook summit on Nov. 29, 2023, in New York City. Michael M Santiago/Getty)

    Musk and X are facing a major advertising withdrawal as brands like Disney, Apple, Coca-Cola, CBS News parent company Paramount Global and other large companies have removed paid ads from the platform after Musk endorsed an antisemitic post on X that claimed Jews fomented hatred against White people. Musk’s comment on the post called it “the actual truth.” 

    While the controversial billionaire has subsequently apologized for his comment, he has criticized companies that have suspended advertising on X. 

    At the 2023 DealBook Summit in New York on Wednesday, Musk told the audience: “If somebody’s going to try to blackmail me with advertising? Blackmail me with money? Go f— yourself. Go. F— yourself. Is that clear?” 

    The decline in advertising could deprive X of up to $75 million in revenue, according to a New York Times report.

    Responding to Musk’s comments, X CEO Linda Yaccarino said in a post on X last week that Musk’s remarks were an “explicit point of view about our position” and added: “We’re a platform that allows people to make their own decisions… And here’s my perspective when it comes to advertising: X is standing at a unique and amazing intersection of Free Speech and Main Street — and the X community is powerful and is here to welcome you.”

    CBS News has reached out to X for comment but had not received a response at the time of publication.

  • Kamala Harris at climate summit: World must ‘fight’ those stalling action

    Kamala Harris at climate summit: World must ‘fight’ those stalling action

    DUBAI — The vast, global efforts to arrest rising temperatures are imperiled and must accelerate, U.S. Vice President Kamala Harris told the world climate summit on Saturday. 

    “We must do more,” she implored an audience of world leaders at the COP28 climate talks in Dubai. And the headwinds are only growing, she warned.

    “Continued progress will not be possible without a fight,” she told the gathering, which has drawn more than 100,000 people to this Gulf oil metropolis. “Around the world, there are those who seek to slow or stop our progress. Leaders who deny climate science, delay climate action and spread misinformation. Corporations that greenwash their climate inaction and lobby for billions of dollars in fossil fuel subsidies.” 

    Her remarks — less than a year before an election that could return Donald Trump to the White House — challenged leaders to cooperate and spend more to keep the goal of containing global warming to 1.5 degrees Celsius within reach. So far, the planet has warmed about 1.3 degrees since preindustrial times.

    “Our action collectively, or worse, our inaction will impact billions of people for decades to come,” Harris said.

    The vice president, who frequently warns about climate change threats in speeches and interviews, is the highest-ranking face of the Biden White House at the Dubai negotiations.

    She used her conference platform to push that image, announcing several new U.S. climate initiatives, including a record-setting $3 billion pledge for the so-called Green Climate Fund, which aims to help countries adapt to climate change and reduce emissions. The commitment echoes an identical pledge Barack Obama made in 2014 — of which only $1 billion was delivered. The U.S. Treasury Department later specified that the updated commitment was “subject to the availability of funds.”

    Meanwhile, back in D.C., the Biden administration strategically timed the release of new rules to crack down on planet-warming methane emissions from the oil and gas sector — a significant milestone in its plan to prevent climate catastrophe.

    The trip allows Harris to bolster her credentials on a policy issue critical to the young voters key to President Joe Biden’s re-election campaign — and potentially to a future Harris White House run. 

    “Given her knowledge base with the issue, her passion for the issue, it strikes me as a smart move for her to broaden that message out to the international audience,” said Roger Salazar, a California political strategist and former aide to then-Vice President Al Gore, a lifetime climate campaigner. 

    Yet sending Harris also presents political peril. 

    Biden has taken flak from critics for not attending the talks himself after representing the United States at the last two U.N. climate summits since taking office. And climate advocates have questioned the Biden administration’s embrace of the summit’s leader, Sultan al-Jaber, given he also runs the United Arab Emirates’ state-owned oil giant. John Kerry, Biden’s climate envoy, has argued the partnership can help bring fossil fuel megaliths to the table.

    Harris has been on a climate policy roadshow in recent months, discussing the issue during a series of interviews at universities and other venues packed with young people and environmental advocates. The administration said it views Harris — a former California senator and attorney general — as an effective spokesperson on climate. 

    “The vice president’s leadership on climate goes back to when she was the district attorney of San Francisco, as she established one of the first environmental justice units in the nation,” a senior administration official told reporters on a call previewing her trip. 

    Joining Harris in Dubai are Kerry, White House climate adviser Ali Zaidi and John Podesta, who’s leading the White House effort to implement Biden’s signature climate law. 

    Biden officials are leaning on that climate law — dubbed the Inflation Reduction Act — to prove the U.S. is doing its part to slash global emissions. Yet climate activists remain skeptical, chiding Biden for separately approving a series of fossil fuel projects, including an oil drilling initiative in Alaska and an Appalachian natural gas pipeline.

    Similarly, the Biden administration’s opening COP28 pledge of $17.5 million for a new international climate aid fund frustrated advocates for developing nations combating climate threats. The figure lagged well behind other allies, several of whom committed $100 million or more.

    Nonetheless, Harris called for aggressive action in her speech, which was followed by a session with other officials on renewable energy. The vice president committed the U.S. to doubling its energy efficiency and tripling its renewable energy capacity by 2030, joining a growing list of countries. The U.S. also said Saturday it was joining a global alliance dedicated to divorcing the world from coal-based energy. 

    Like other world leaders, Harris also used her trip to conduct a whirlwind of diplomacy over the war between Israel and Hamas, which has flared back up after a brief truce.

    U.S. National Security Council spokesperson John Kirby said Harris would be meeting with “regional leaders” to discuss “our desire to see this pause restored, our desire to see aid getting back in, our desire to see hostages get out.”

    The war has intruded into the proceedings at the climate summit, with Israeli President Isaac Herzog and Palestinian Authority leader Mahmoud Abbas both skipping their scheduled speaking slots on Friday. Iran’s delegation also walked out of the summit, objecting to Israel’s presence.

    Kirby said Harris will convey “that we believe the Palestinian people need a vote and a voice in their future, and then they need governance in Gaza that will look after their aspirations and their needs.”

    Although Biden won’t be going to Dubai, the administration said these climate talks are “especially” vital, given countries will decide how to respond to a U.N. assessment that found the world’s climate efforts are falling short. 

    “This is why the president has made climate a keystone of his administration’s foreign policy agenda,” the senior administration official said.

    Robin Bravender reported from Washington, D.C. Zia Weise and Charlie Cooper reported from Dubai. 

    Sara Schonhardt contributed reporting from Washington, D.C.

    Robin Bravender, Zia Weise and Charlie Cooper

  • Musk threatens ‘thermonuclear lawsuit’ as X ad boycott gathers pace

    Musk threatens ‘thermonuclear lawsuit’ as X ad boycott gathers pace

    Elon Musk said on Saturday that he will file a “thermonuclear lawsuit” against non-profit watchdog Media Matters and others, as companies including Disney, Apple and IBM reportedly have paused advertising on X amid an antisemitism storm around the social media platform.

    Media Matters, a U.S. group that describes itself as “a progressive research and information center” that monitors “media outlets for conservative misinformation,” published research earlier this week showing that ads on X appeared next to pro-Nazi posts.

    X CEO Linda Yaccarino previously said that brands are now “protected from the risk of being next to” potentially toxic content on the platform.

    “The split second court opens on Monday, X Corp will be filing a thermonuclear lawsuit against Media Matters and ALL those who colluded in this fraudulent attack on our company,” Musk said in a post on X on Saturday.

    Musk also posted a statement with the headline “Stand with X to protect free speech,” in which he said that Media Matters “completely misrepresented the real user experience on X.” He also said that “for speech to be truly free, we must also have the freedom to see or hear things that some people may consider objectionable” and added that “we will not allow agenda driven activists, or even our profits, to deter our vision.”

    Musk, owner of Tesla and SpaceX, who bought Twitter last year and renamed it X, was already under fire for tolerating and even encouraging antisemitism on the social media platform. The latest episode came this week when Musk endorsed an antisemitic post on X as “the actual truth” of what Jewish people were doing.

    The antisemitic post said that “Jewish communties (sic) have been pushing the exact kind of dialectical hatred against whites that they claim to want people to stop using against them.” The post also referenced “hordes of minorities” flooding Western countries, a popular antisemitic conspiracy theory.

    The White House condemned the post, recalling that the post Musk was responding to referred to a conspiracy theory that motivated the man who killed 11 people at a Pittsburgh synagogue in 2018.

    The companies suspending advertising on X include Disney, IBM, Apple, Paramount, NBCUniversal, Comcast, Lionsgate and Warner Bros. Discovery, according to media reports.

    In Brussels, the European Commission’s communications department has asked all EU executive services to stop running ads on X over “widespread concerns relating to the spread of disinformation,” according to an internal note seen by POLITICO’s Playbook.

    Jacopo Barigazzi

  • Poll shows most US adults think AI will add to election misinformation in 2024

    Poll shows most US adults think AI will add to election misinformation in 2024

    NEW YORK — The warnings have grown louder and more urgent as 2024 approaches: The rapid advance of artificial intelligence tools threatens to amplify misinformation in next year’s presidential election at a scale never seen before.

    Most adults in the U.S. feel the same way, according to a new poll from The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy.

    The poll found that nearly 6 in 10 adults (58%) think AI tools — which can micro-target political audiences, mass produce persuasive messages, and generate realistic fake images and videos in seconds — will increase the spread of false and misleading information during next year’s elections.

    By comparison, 6% think AI will decrease the spread of misinformation while one-third say it won’t make much of a difference.

    “Look what happened in 2020 — and that was just social media,” said 66-year-old Rosa Rangel of Fort Worth, Texas.

    Rangel, a Democrat who said she had seen a lot of “lies” on social media in 2020, said she thinks AI will make things even worse in 2024 — like a pot “brewing over.”

    Just 30% of American adults have used AI chatbots or image generators, and fewer than half (46%) have heard or read at least something about AI tools. Still, there’s a broad consensus that candidates shouldn’t be using AI.

    When asked whether it would be a good or bad thing for 2024 presidential candidates to use AI in certain ways, clear majorities said it would be bad for them to create false or misleading media for political ads (83%), to edit or touch-up photos or videos for political ads (66%), to tailor political ads to individual voters (62%) and to answer voters’ questions via chatbot (56%).

    The sentiments are supported by majorities of Republicans and Democrats, who agree it would be a bad thing for the presidential candidates to create false images or videos (85% of Republicans and 90% of Democrats) or to answer voter questions (56% of Republicans and 63% of Democrats).

    The bipartisan pessimism toward candidates using AI comes after it already has been deployed in the Republican presidential primary.

    In April, the Republican National Committee released an entirely AI-generated ad meant to show the future of the country if President Joe Biden is reelected. It used fake but realistic-looking photos showing boarded-up storefronts, armored military patrols in the streets and waves of immigrants creating panic. The ad disclosed in small lettering that it was generated by AI.

    Ron DeSantis, the Republican governor of Florida, also used AI in his campaign for the GOP nomination. He promoted an ad that used AI-generated images to make it look as if former President Donald Trump was hugging Dr. Anthony Fauci, an infectious disease specialist who oversaw the nation’s response to the COVID-19 pandemic.

    Never Back Down, a super PAC supporting DeSantis, used an AI voice-cloning tool to imitate Trump’s voice, making it seem like he narrated a social media post.

    “I think they should be campaigning on their merits, not their ability to strike fear into the hearts of voters,” said Andie Near, a 42-year-old from Holland, Michigan, who typically votes for Democrats.

    She has used AI tools to retouch images in her work at a museum, but she said she thinks politicians using the technology to mislead can “deepen and worsen the effect that even conventional attack ads can cause.”

    College student Thomas Besgen, a Republican, also disagrees with campaigns using deepfake sounds or imagery to make it seem as if a candidate said something they never said.

    “Morally, that’s wrong,” the 21-year-old from Connecticut said.

    Besgen, a mechanical engineering major at the University of Dayton in Ohio, said he is in favor of banning deepfake ads or, if that’s not possible, requiring them to be labeled as AI-generated.

    The Federal Election Commission is currently considering a petition urging it to regulate AI-generated deepfakes in political ads ahead of the 2024 election.

    While skeptical of AI’s use in politics, Besgen said he is enthusiastic about its potential for the economy and society. He is an active user of AI tools such as ChatGPT to help explain history topics he’s interested in or to brainstorm ideas. He also uses image-generators for fun — for example, to imagine what sports stadiums might look like in 100 years.

    He said he typically trusts the information he gets from ChatGPT and will likely use it to learn more about the presidential candidates, something that just 5% of adults say they are likely to do.

    The poll found that Americans are more likely to consult the news media (46%), friends and family (29%), and social media (25%) for information about the presidential election than AI chatbots.

    “Whatever response it gives me, I would take it with a grain of salt,” Besgen said.

    The vast majority of Americans are similarly skeptical toward the information AI chatbots spit out. Just 5% say they are extremely or very confident that the information is factual, while 33% are somewhat confident, according to the survey. Most adults (61%) say they are not very or not at all confident that the information is reliable.

    That’s in line with many AI experts’ warnings against using chatbots to retrieve information. The artificial intelligence large language models powering chatbots work by repeatedly selecting the most plausible next word in a sentence, which makes them good at mimicking styles of writing but also prone to making things up.
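    As an illustrative aside, the next-word mechanism described above can be sketched in a few lines of Python. The bigram table, the words in it and the function names below are hypothetical stand-ins invented for this example; real chatbots use neural networks trained on vast text collections rather than a hand-built lookup table, but the loop of repeatedly picking a plausible continuation is conceptually similar, which is also why fluent output can still be confidently wrong.

        # Toy sketch only: pick "plausible next words" from a tiny, made-up bigram table.
        # Real large language models learn these probabilities with neural networks.
        import random

        # Hypothetical counts of how often one word followed another in some training text.
        bigram_counts = {
            "the": {"election": 5, "results": 3, "vote": 2},
            "election": {"results": 6, "was": 4},
            "results": {"were": 7, "show": 3},
            "was": {"secure": 5, "rigged": 1},
            "were": {"certified": 6, "contested": 2},
        }

        def next_word(word):
            """Choose a next word in proportion to how often it followed `word`."""
            candidates = bigram_counts.get(word)
            if not candidates:
                return None  # no known continuation: stop generating
            words = list(candidates)
            weights = list(candidates.values())
            return random.choices(words, weights=weights, k=1)[0]

        def generate(start, max_words=6):
            """Repeatedly append a plausible next word, as the article describes."""
            out = [start]
            for _ in range(max_words):
                nxt = next_word(out[-1])
                if nxt is None:
                    break
                out.append(nxt)
            return " ".join(out)

        print(generate("the"))  # e.g. "the election results were certified"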

    Adults associated with both major political parties are generally open to regulations on AI. They responded more positively than negatively toward various ways to ban or label AI-generated content that could be imposed by tech companies, the federal government, social media companies or the news media.

    About two-thirds favor the government banning AI-generated content that contains false or misleading images from political ads, while a similar number want technology companies to label all AI-generated content made on their platforms.

    Biden set in motion some federal guidelines for AI on Monday when he signed an executive order to guide the development of the rapidly progressing technology. The order requires the industry to develop safety and security standards and directs the Commerce Department to issue guidance to label and watermark AI-generated content.

    Americans largely see preventing AI-generated false or misleading information during the 2024 presidential elections as a shared responsibility. About 6 in 10 (63%) say a lot of the responsibility falls on the technology companies that create AI tools, but about half give a lot of that duty to the news media (53%), social media companies (52%), and the federal government (49%).

    Democrats are somewhat more likely than Republicans to say social media companies have a lot of responsibility, but generally agree on the level of responsibility for technology companies, the news media and the federal government.

    ____

    The poll of 1,017 adults was conducted Oct. 19-23, 2023, using a sample drawn from NORC’s probability-based AmeriSpeak Panel, designed to represent the U.S. population. The margin of sampling error for all respondents is plus or minus 4.1 percentage points.

    ____

    O’Brien reported from Providence, Rhode Island. Associated Press writer Linley Sanders in Washington, D.C., contributed to this report.

    ____

    The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP’s democracy initiative here. The AP is solely responsible for all content.
