WASHINGTON — A bill that could lead to the popular video-sharing app TikTok being unavailable in the United States is quickly gaining traction in the House as lawmakers voice concerns about the potential for the platform to surveil and manipulate Americans.
The measure gained the support of House Speaker Mike Johnson and could soon come up for a full vote in the House. The bill advanced out of committee Thursday in a unanimous bipartisan vote — 50-0.
The White House has provided technical support in the drafting of the bill, though White House press secretary Karine Jean-Pierre said the TikTok legislation “still needs some work” to get to a place where President Joe Biden would endorse it.
The bill takes a two-pronged approach. First, it requires ByteDance Ltd., which is based in Beijing, to divest TikTok and other applications it controls within 180 days of enactment of the bill or those applications will be prohibited in the United States. Second, it creates a narrow process to let the executive branch prohibit access to an app owned by a foreign adversary if it poses a threat to national security.
“It’s an important, bipartisan measure to take on China, our largest geopolitical foe, which is actively undermining our economy and security,” Johnson said Thursday.
Some lawmakers and critics of TikTok have argued the Chinese government could force the company to share data on American users. TikTok says it has never done that and wouldn’t do so if asked. The U.S. government also hasn’t provided evidence of that happening.
Critics also claim the app could be used to spread misinformation beneficial to Beijing.
Former President Donald Trump attempted to ban TikTok through executive order, but the courts blocked the action after TikTok sued, arguing such actions would violate free speech and due process rights.
TikTok raised similar concerns about the legislation gaining momentum in the House.
“This bill is an outright ban of TikTok, no matter how much the authors try to disguise it. This legislation will trample the First Amendment rights of 170 million Americans and deprive 5 million small businesses of a platform they rely on to grow and create jobs,” the company said in a prepared statement.
The bill’s author, Rep. Mike Gallagher, the Republican chairman of a special House committee focused on China, rejected TikTok’s assertion of a ban. Rather, he said it’s an effort to force a change in TikTok’s ownership. He also took issue with TikTok urging some users to call their representatives and urge them to vote no on the bill.
The notification urged TikTok users to “speak up now — before your government strips 170 million Americans of their Constitutional right to free expression.” The notification also warned that the “ban” of TikTok would damage millions of businesses and destroy the lives of countless creators around the country.
TikTok users responded by flooding the offices of lawmakers with telephone calls. Some offices even shut off their phones because of the onslaught. A congressional aide not authorized to speak on the matter publicly said that lawmakers on the committee voting on the bill Thursday as well as others were inundated with calls.
“Today, it’s about our bill and it’s about intimidating members considering that bill, but tomorrow it could be misinformation or lies about an election, about a war, about any number of things,” Gallagher said. “This is why we can’t take a chance on having a dominant news platform in America controlled or owned by a company that is beholden to the Chinese Communist Party, our foremost adversary.”
The bill comes about one year after TikTok’s CEO was grilled for hours by skeptical lawmakers on the House Energy and Commerce Committee concerned about data security and the distribution of harmful content. That same committee met Thursday to debate and vote on the bill.
Rep. Cathy McMorris Rodgers, the committee’s Republican chair, said TikTok’s access to so many Americans makes it a valuable propaganda tool for the Chinese government to exploit. She also noted that its parent company ByteDance is currently under investigation by the U.S. Department of Justice for surveilling American journalists.
“Through this access, the app is able to collect nearly every data point imaginable, from people’s location, to what they search on their devices, who they are connecting with, and other forms of sensitive information,” Rodgers said.
To assuage concerns from lawmakers, TikTok has promised to wall off U.S. user data from its parent company through a separate entity run independently from ByteDance and monitored by outside observers. TikTok says new user data is currently being stored on servers maintained by the software company Oracle.
The American Civil Liberties Union and other free speech advocacy groups urged lawmakers to reject the TikTok bill, saying in a letter to the Energy and Commerce Committee’s leadership that “passing this legislation would trample on the constitutional right to freedom of speech of millions of people in the United States.”
Biden’s reelection campaign has opened a TikTok account as a way to boost its appeal with young voters, even as his administration continues to raise security concerns about whether the popular social media app might be sharing user data with China’s communist government.
Jean-Pierre said the White House welcomes lawmakers’ efforts on the TikTok legislation, but lawmakers need to continue work on it.
“Once it gets to a place where we think ... it’s on legal standing and it’s in a place where it can get out of Congress, then the president would sign it,” she told reporters on Wednesday during the daily White House briefing.
She also defended the White House’s efforts to limit the dangers of TikTok, even as the president engages with influencers on the social-media platform and his campaign hosts a TikTok account.
“We are going to try to meet the American people where they are,” Jean-Pierre said. “We are trying to reach everyone. The president is the president for all Americans ... it doesn’t mean that we’re not going to try to figure out how to protect our national security.”
___
Associated Press staff writer Seung Min Kim contributed to this report and staff writer Mae Anderson contributed from Brooklyn, New York. Hadero reported from Jersey City, New Jersey.

Major technology companies signed a pact Friday to voluntarily adopt “reasonable precautions” to prevent artificial intelligence tools from being used to disrupt democratic elections around the world.
Tech executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok gathered at the Munich Security Conference to announce a new voluntary framework for how they will respond to AI-generated deepfakes that deliberately trick voters. Twelve other companies — including Elon Musk’s X — are also signing on to the accord.
“Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own,” said Nick Clegg, president of global affairs for Meta, the parent company of Facebook and Instagram, in an interview ahead of the summit.
The accord is largely symbolic, but targets increasingly realistic AI-generated images, audio and video “that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can lawfully vote.”
The companies aren’t committing to ban or remove deepfakes. Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. It notes the companies will share best practices with each other and provide “swift and proportionate responses” when that content starts to spread.
The vagueness of the commitments and lack of any binding requirements likely helped win over a diverse swath of companies, but may disappoint pro-democracy activists and watchdogs looking for stronger assurances.
“The language isn’t quite as strong as one might have expected,” said Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center. “I think we should give credit where credit is due, and acknowledge that the companies do have a vested interest in their tools not being used to undermine free and fair elections. That said, it is voluntary, and we’ll be keeping an eye on whether they follow through.”
Clegg said each company “quite rightly has its own set of content policies.”
“This is not attempting to try to impose a straitjacket on everybody,” he said. “And in any event, no one in the industry thinks that you can deal with a whole new technological paradigm by sweeping things under the rug and trying to play whack-a-mole and finding everything that you think may mislead someone.”
Tech executives were also joined by several European and U.S. political leaders at Friday’s announcement. European Commission Vice President Vera Jourova said while such an agreement can’t be comprehensive, “it contains very impactful and positive elements.” She also urged fellow politicians to take responsibility to not use AI tools deceptively.
She stressed the seriousness of the issue, saying the “combination of AI serving the purposes of disinformation and disinformation campaigns might be the end of democracy, not only in the EU member states.”
The agreement at the German city’s annual security meeting comes as more than 50 countries are due to hold national elections in 2024. Some have already done so, including Bangladesh, Taiwan, Pakistan, and most recently Indonesia.
Attempts at AI-generated election interference have already begun, such as when AI robocalls that mimicked U.S. President Joe Biden’s voice tried to discourage people from voting in New Hampshire’s primary election last month.
Just days before Slovakia’s elections in November, AI-generated audio recordings impersonated a liberal candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false, but they were already widely shared as real across social media.
Politicians and campaign committees also have experimented with the technology, from using AI chatbots to communicate with voters to adding AI-generated images to ads.
Friday’s accord said in responding to AI-generated deepfakes, platforms “will pay attention to context and in particular to safeguarding educational, documentary, artistic, satirical, and political expression.”
It said the companies will focus on transparency to users about their policies on deceptive AI election content and work to educate the public about how they can avoid falling for AI fakes.
Many of the companies have previously said they’re putting safeguards on their own generative AI tools that can manipulate images and sound, while also working to identify and label AI-generated content so that social media users know if what they’re seeing is real. But most of those proposed solutions haven’t yet rolled out and the companies have faced pressure from regulators and others to do more.
That pressure is heightened in the U.S., where Congress has yet to pass laws regulating AI in politics, leaving AI companies to largely govern themselves. In the absence of federal legislation, many states are considering ways to put guardrails around the use of AI, in elections and other applications.
The Federal Communications Commission recently confirmed AI-generated audio clips in robocalls are against the law, but that doesn’t cover audio deepfakes when they circulate on social media or in campaign advertisements.
Misinformation experts warn that while AI deepfakes are especially worrisome for their potential to fly under the radar and influence voters this year, cheaper and simpler forms of misinformation remain a major threat. The accord noted this too, acknowledging that “traditional manipulations (‘cheapfakes’) can be used for similar purposes.”
Many social media companies already have policies in place to deter deceptive posts about electoral processes — AI-generated or not. For example, Meta says it removes misinformation about “the dates, locations, times, and methods for voting, voter registration, or census participation” as well as other false posts meant to interfere with someone’s civic participation.
Jeff Allen, co-founder of the Integrity Institute and a former data scientist at Facebook, said the accord seems like a “positive step” but he’d still like to see social media companies taking other basic actions to combat misinformation, such as building content recommendation systems that don’t prioritize engagement above all else.
Lisa Gilbert, executive vice president of the advocacy group Public Citizen, argued Friday that the accord is “not enough” and AI companies should “hold back technology” such as hyper-realistic text-to-video generators “until there are substantial and adequate safeguards in place to help us avert many potential problems.”
In addition to the major platforms that helped broker Friday’s agreement, other signatories include chatbot developers Anthropic and Inflection AI; voice-clone startup ElevenLabs; chip designer Arm Holdings; security companies McAfee and TrendMicro; and Stability AI, known for making the image-generator Stable Diffusion.
Notably absent from the accord is another popular AI image-generator, Midjourney. The San Francisco-based startup didn’t immediately return a request for comment Friday.
The inclusion of X — not mentioned in an earlier announcement about the pending accord — was one of the biggest surprises of Friday’s agreement. Musk sharply curtailed content-moderation teams after taking over the former Twitter and has described himself as a “free speech absolutist.”
But in a statement Friday, X CEO Linda Yaccarino said “every citizen and company has a responsibility to safeguard free and fair elections.”
“X is dedicated to playing its part, collaborating with peers to combat AI threats while also protecting free speech and maximizing transparency,” she said.
__
The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP’s democracy initiative here. The AP is solely responsible for all content.

WASHINGTON — The rumors about vote fraud started swirling as the ballots in Taiwan’s closely watched presidential election were tallied on Jan. 13. There were baseless claims that people had fabricated votes and that officials had miscounted and skewed the results.
In a widely shared video, a woman recording votes mistakenly enters one in the column for the wrong candidate. The message was clear: The election could not be trusted. The results were faked.
It could have been Taiwan’s Jan. 6 moment. But it wasn’t.
Worries that China would use disinformation to undermine the integrity of Taiwan’s vote dogged the recent election, a key moment in the young democracy’s development that highlighted tensions with its much larger neighbor.
In repelling disinformation, both Chinese and domestic, Taiwan offers an example to other democracies holding elections this year.
This year, more than 50 countries that are home to half the planet’s population are due to hold national elections. From India to Mexico, the U.K. to Russia, the outcomes of the elections will test the strengths of democracies and countries with authoritarian leaders.
In Taiwan, the response to disinformation was swift. Fact-checking groups debunked the rumors, while the Central Election Commission held a news conference to push back on claims of electoral discrepancies. Influencers like @FroggyChiu, who has more than 600,000 subscribers, also put out YouTube explainers on how votes are tallied.
The video showing the election worker miscounting votes had been selectively edited, fact-checkers found. Voters at the voting station spotted the woman’s error and election workers quickly corrected the count, according to MyGoPen, an independent Taiwanese fact-checking chatbot.
It was just one of dozens of videos that fact checkers had to debunk.
“I believe some people genuinely believed this. And when the election results came out, they thought something was up,” said Eve Chiu, the editor-in-chief of Taiwan’s FactCheck Center, a nonprofit journalism organization.
Supporters of Taiwan People’s Party presidential candidate Ko Wen-je, many of them young, had shared the videos widely on TikTok, and the clips were then reposted to Facebook. Before the results came in, many thought there was a chance of a Ko upset, given how much online attention the candidate had drawn. Taiwan’s FactCheck Center debunked multiple videos of alleged voter fraud, including another one in which voting officials make a human error caught on camera. The source of these videos is unclear.
Notably, Taiwan has resisted calls for tougher laws that would require social media platforms to police their sites; a proposal to institute such rules was withdrawn in 2022 after free speech concerns were raised.
China, which claims Taiwan as its own, targeted the island with a stream of disinformation ahead of its election, according to research from DoubleThink Lab.
Much of it sought to undermine faith in the incumbent Democratic Progressive Party and cast it as belligerent and likely to start a war that Taiwan can’t win. Other narratives targeted U.S. support for Taiwan, arguing that America was an untrustworthy partner only interested in Taiwan’s semiconductor exports that wouldn’t support the island if it came to war with China.
Taiwan has been able to effectively respond to Chinese disinformation in part because of how seriously the threat is perceived there, according to Kenton Thibaut, a senior resident fellow and expert on Chinese disinformation at the Atlantic Council’s Digital Forensic Research Lab. Instead of a piecemeal approach — focusing solely on media literacy, for instance, or relying only on the government to fact-check false rumors — Taiwan adopted a multifaceted approach, what Thibaut called a “whole of society response” that relied on government, independent fact-check groups and even private citizens to call out disinformation and propaganda.
In an interview with The Associated Press, Alexander Tah-Ray Yui, Taipei’s economic and cultural representative to the U.S., said the government has learned it must identify and debunk false information as quickly as possible in order to counter false narratives. Yui is Taiwan’s de facto ambassador to the U.S.
“Find it early, like a tumor or cancer. Cut it before it spreads,” Yui said of foreign disinformation.
Taiwan’s civil society groups like MyGoPen and the Taiwan FactCheck Center, which received $1 million in funding from Google, have focused on raising public awareness through debunking individual rumors that members of the public report.
The island has a strong civil society. Many of the fact-checking groups were founded by dedicated individuals, such as MyGoPen, whose founder, Charles Yeh, started the chatbot service because he found his relatives would get confused by online rumors. Others, like the Taiwan FactCheck Center, are careful not to take government money so as to preserve their independence, Chiu said.
Media literacy on fake news and the digital environment is growing, those on the front lines say, but slowly.
“It’s like in the past when everyone dumped bottles and cans in the garbage and now they sort them, that was done through a period of societal education,” said Chiu. “Everyone needs to slowly develop this awareness, and this needs time.”
In the U.S., government efforts to call out disinformation have themselves become politicized and criticized as government censorship or thought control.
With a population more than 10 times the size of Taiwan and years of growing polarization, the U.S. has deep, internal political and social fault lines that create good conditions for disinformation to take root — and make it harder for the government to push back without being accused of censoring legitimate political views.
In the United States, many of the narratives spread by Russia, for instance, are eagerly adopted by domestic groups that distrust the government. Donald Trump, the former president, and other Republicans have repeatedly made similar claims about the U.S. as those carried by Russian state media, for example.
“We have a dynamic in American politics where if you’re Russia, China or Iran, you don’t have to inject divisive topics, because they’re already here,” said Jim Ludes, a former national defense analyst who now leads the Pell Center for International Relations at Salve Regina University.
“The call is coming from inside the house,” he said, using a popular horror film metaphor.
That dynamic can also be seen in Taiwan. Although Ko, the presidential candidate, said publicly he didn’t believe there was election fraud, legislators from the TPP held a news conference Wednesday at which they shared already debunked videos of alleged miscounting that had spread online, calling for greater adherence to voting regulations.
Though the election passed without a major crisis, the challenge continues to evolve. Chinese efforts at disinformation have become increasingly localized and sophisticated, according to DoubleThink Lab’s post-election analysis.
In one example, a Chinese-run Facebook page called C GaChuDao made a video describing an affair that it said a DPP legislator had with a woman from China. Unlike in years past, where Chinese disinformation was easily recognized and mocked for its use of simplified characters and vocabulary from China, this video featured a man speaking with a Taiwanese accent and in a way that appeared completely local.
“In picking topics, they’d pick something that exists in your society, and then it’s relatively more convincing,” said Wu.
___
Wu reported from Bangkok. Associated Press writer Didi Tang contributed to this report.

JACKSON, Miss. — Law enforcement agencies are investigating whether social media rumors about a potential water outage prompted people in Mississippi’s capital to quickly fill bathtubs with tap water during a cold snap, causing a drop in pressure that temporarily made faucets run dry for thousands of customers of the city’s long-troubled system.
Taps ran dry Wednesday and Thursday for almost a quarter of Jackson’s 52,000 water customers as icy conditions strained local infrastructure. Officials for JXN Water, the private corporation that has been under a federal order to run Jackson’s system since late 2022, said a “deliberate misinformation campaign” was partially to blame. People responded to social media posts by filling bathtubs with water in a short period, causing demand to spike beyond what the water system could support, water manager Ted Henifin said.
JXN Water said in a statement Friday that U.S. District Judge Henry Wingate authorized the release of information about the investigation and advised the corporation on what to communicate to the public.
The organization did not specify which law enforcement agencies are involved or what charges might be brought if people are found to have spread false information on social media.
JXN Water identified one specific social media post, but spokesperson Ameerah Palacios said the organization had not traced its origin.
“Just got word they are about to shut off water in Jackson,” the post said. “If you’re in Jackson, fill up your tubs and jugs! Get prepared for not having water.”
The water woes began as an arctic blast kept temperatures below freezing in Jackson for nearly three days. The temperature rose on Thursday, but the National Weather Service warned that dangerously cold air would return this weekend.
Jackson residents and officials were already concerned that frigid conditions could disrupt the water system. Cold snaps in 2021 and 2022 caused frozen pipes and drops in water pressure across the city of nearly 150,000 residents. People had been told to prepare for past disasters by keeping jugs or bathtubs full of water.
Maintenance crews had restored water to all but about 1,000 customers Friday.
Palacios said the news release about the investigation was partially written by Wingate, who is overseeing a federal intervention to improve the water system.
“Judge Wingate, that’s a man who chooses his words very carefully,” Palacios told The Associated Press in an interview. “The way that he worded it was, all of ‘the appropriate law enforcement agencies,’ so definitely more than one at play.”
A court clerk took a phone message for Wingate on Friday, but the judge did not immediately return a call to the AP.
It was unclear how many Jackson residents saw the social media posts or were influenced by them.
Although JXN Water did not release names of anyone who shared the post it cited, AP identified a Facebook post from Wednesday that had the exact wording. The Facebook account belongs to Bob Hickingbottom of Jackson, who ran unsuccessfully for governor as a Constitution Party candidate in 2019 and tried to run for governor in 2023 before the state Democratic Party removed him from its primary ballot.
In one phone interview with the AP, Hickingbottom said somebody might have put the post on his page.
“Something like that would be outside the realm of civilized behavior,” Hickingbottom said.
In a second phone call moments later, Hickingbottom said he put the water post on his page and he thought he was sharing information to help people.
“I’m a flamethrower when it comes to politics, but this is not politics,” Hickingbottom said of Jackson’s water system.
The latest disruption in Jackson water service came a week after Mississippi health officials issued and then quickly lifted a health advisory after tests identified E. coli in the water supplies of Jackson and a suburb. Henifin said he believed the tests were false positives caused by lab contamination, but the state health department stood by its tests.
Wingate appointed Henifin in November 2022 to oversee reforms to Jackson’s water system after infrastructure breakdowns during the late summer of that year caused many city residents to go days without safe running water.

DAVOS, Switzerland — The Earth is heating up, as is conflict in the Middle East. The world economy and Ukraine’s defense against Russia are sputtering along. Artificial intelligence could upend all our lives.
The to-do list of global priorities has grown for this year’s edition of the World Economic Forum’s gabfest of business, political and other elites in the Alpine snows of Davos, Switzerland, which runs Tuesday through Friday.
Over 60 heads of state and government, including Israeli President Isaac Herzog and Ukrainian President Volodymyr Zelenskyy, will be heading to town to hold both public appearances and closed-door talks. They’ll be among more than 2,800 attendees, who also include academics, artists and international organization leaders.
The gathering is mostly high-minded ambition — think business innovation, aims for peace-making and security cooperation, or life-changing improvements in health care — and a venue for decision-makers in an array of fields and industries to connect.
It is also regularly panned by critics as an emblem of the yawning gap between rich and poor: Young Swiss Socialists staged a rally Sunday to blast the forum and brand attendees as “the richest and most powerful, who are responsible for today’s wars and crises.”
“Davos is easily mocked. But in current times it is hard to get people together to talk in a room on shared global issues and the value of face-to-face conversations is very real, as the COVID-19 pandemic showed,” Bronwen Maddox, director of the Chatham House think tank, said in an e-mail.
Here’s what to watch for:
While Davos is generally big-picture, regional conflict can cast a long shadow — like Ukraine’s war did a year ago, prompting organizers to exclude any Russian delegation.
This year, Israel’s three-month war with Hamas in Gaza, and recent U.S. and British airstrikes on Houthi militants in Yemen who have fired missiles into Red Sea shipping lanes, are looming large.
Herzog, the Israeli president, whose job is more ceremonial than Prime Minister Benjamin Netanyahu’s, will be on hand for a Davos session Thursday, and the prime ministers of Qatar, Jordan and Lebanon will also be attending.
A “humanitarian briefing on Gaza” session gets a half-hour slot Tuesday.
In a testament to how technology has taken a large and growing slice of attention in Davos, the theme of artificial intelligence "as a driving force for the economy and society" will get about 30 separate sessions this year.
The dizzying emergence of OpenAI’s ChatGPT over a year ago and rivals since then have elevated the power, promise and portent of artificial intelligence into greater public view. OpenAI chief Sam Altman will be in Davos along with top executives from Microsoft, which helped bankroll his company’s rise.
AI in education, transparency about AI, its ethics and impact on creativity are all part of the menu — and the Davos Promenade is swimming in advertisements and displays pointing to the new technology.
Forum organizers warned last week that misinformation generated by AI, such as through the creation of synthetic content, is the world’s greatest short-term threat.
Such misinformation could surge this year, and one session explores the threat that "bots and plots" pose to democracies.
Forum organizers say elections in countries whose populations together total 4.2 billion people will take place this year, and many will be contested. (Few doubt that Russian President Vladimir Putin will win a new term.)
It comes against the backdrop of talk about a new Cold War, the widening rift between dictatorships — or at least autocracies — and democratic countries.
Back-to-back addresses Tuesday morning by Prime Minister Li Qiang of China and Ursula von der Leyen, the president of the European Commission, will highlight the contrast. President Joe Biden’s national security adviser, Jake Sullivan, gives a speech later in the day.
French President Emmanuel Macron and U.S. Secretary of State Antony Blinken will speak Wednesday, as will Argentina’s new president, Javier Milei, a libertarian who has already announced plans to slash the government workforce.
Davos corridors were already abuzz about whether former U.S. President Donald Trump — who made two trips to Davos during his term — could be inaugurated again around this time next year, after November’s election. Biden was once a regular at Davos, but has not attended as president.
Of all the lofty hopes in Davos, the perennial one of late has been the search for creative and promising ways to fight climate change.
This year is no different: Top climate scientists from around the world reported this month that average global temperatures last year obliterated the record highs — raising the urgency level.
John Kerry, who is stepping down as Biden’s climate adviser, takes part in a panel discussion on a U.S.-backed initiative that aims to draw the private sector into development of low-carbon technologies.
Chatham House’s Maddox said the plan to transition away from fossil fuels agreed to during the U.N. climate conference in Dubai last month means climate finance will face a big year in 2024.
“Davos is a powerful combination potentially, of a lot of concern about the environment, and a lot of high-powered finance present,” she said.

Former White House coronavirus advisor Anthony Fauci doesn’t believe the lab leak explanation of COVID-19’s origins is a conspiracy theory. He admitted as much during a closed-door grilling session before the House Select Subcommittee on the Coronavirus Pandemic on Monday. Legislators did not release a transcript of his testimony, but Rep. Brad Wenstrup (R–Ohio), the chairman of the subcommittee, published some highlights on X (formerly Twitter).
✔️Dr. Fauci acknowledged that the lab-leak hypothesis is not a conspiracy theory.
This comes nearly four years after prompting the publication of the now infamous “Proximal Origin” paper that attempted to vilify and disprove the lab-leak hypothesis.
— Select Subcommittee on the Coronavirus Pandemic (@COVIDSelect) January 10, 2024
In recent months, Fauci has denied he ever categorically rejected the possibility that COVID-19 accidentally escaped from a laboratory. But he faces very serious allegations that he deterred scientific experts from considering it. At issue is “The Proximal Origin of SARS-CoV-2,” a paper that appeared in Nature Medicine, a scientific journal, in March 2020 at the very start of the global pandemic. Fauci—who was then head of the National Institute of Allergy and Infectious Diseases (NIAID)—and Francis Collins—then director of the National Institutes of Health—participated in a conference call with the authors, whose initial openness to a lab leak explanation changed significantly prior to publication. The paper ultimately ruled out a lab leak as not just “unlikely”—the phrasing used in an early draft of the paper—but “improbable.”
More recently, Fauci has contended that he always remained open to the idea, but was persuaded by scientific arguments—including those in the proximal origin paper—that a zoonotic spillover was more likely. This claim would be more persuasive if Fauci had not stated over and over and over and over again, in media interviews, that he “strongly favored” the zoonotic origin theory; his subsequent suggestion that he did not lean in either direction is flatly contradicted by his literal words.
“I wasn’t leaning totally strongly one way or the other”
-Dr. Fauci pic.twitter.com/77hjmamKtc
— Matt Orfalea (@0rf) January 10, 2024
It was certainly in Fauci’s interest to downplay the possibility that human experimentation on viruses accidentally unleashed COVID-19 upon the world; during his career, Fauci remained one of the foremost advocates of public funding for gain-of-function research, in which scientists manipulate viruses in order to make them deadlier and more transmissible. Fauci and other public health experts have straightforwardly denied that the U.S. funded such research in Wuhan, China, but critics say this is an exercise in semantics. Indeed, EcoHealth Alliance—a U.S. nonprofit that obtained public funding to conduct research on bat coronaviruses in Wuhan, China—was caught actively misleading Pentagon officials about the nature of the experimentation: Peter Daszak, the head of EcoHealth Alliance, advised colleagues to deceive regulators about the fact that the research would be conducted in China under laxer lab safety standards.
A cadre of elite scientists deliberately lied to U.S. security officials in order to spend American tax dollars performing risky experiments under substandard laboratory conditions in a notoriously secretive and authoritarian foreign country. Maybe those experiments created COVID-19, and maybe they didn’t. In any case, it’s clearly not a conspiracy theory; good of Fauci to recognize the obvious, however belatedly it might be.
One can debate the extent of Fauci’s wrongdoing here—but it’s the mainstream media that really dropped the ball in terms of lab leak discourse. The Washington Post was an early offender, accusing Sen. Tom Cotton (R–Ark.) of “repeating a coronavirus theory that was already debunked.” The article explicitly applied the phrase “conspiracy theory” to the lab leak idea; The New York Times did the same, noting that the lab leak had been “dismissed by scientists.” In fact, The Times‘ lead coronavirus reporter, Apoorva Mandavilli, went a step further, calling lab leak a racist theory.
Mandavilli’s tone toward the lab leak was broadly representative of a whole host of mainstream journalists, media commentators, and so-called fact-checkers and misinformation experts. Following this flawed consensus, social media sites—including Facebook—brutally suppressed any and all discussion of the lab leak theory on their platforms. As recently as August 2023, The Journal of the American Medical Association was still counting lab leak discourse online as evidence of the unstoppable spread of misinformation online. And the Global Disinformation Index—a British non-profit that received funding from the State Department, and tarred Reason as an unsafe news website—warned that blaming the pandemic on a lab leak could lead to racist attacks on Asian people.
That’s a long way of saying that self-appointed misinformation cops went to great efforts to censor and stigmatize this topic of conversation, on grounds that it was either racist, or a conspiracy theory, or both. Yet it is neither; even Fauci says so. One might hope that this would prompt some self-reflection within media circles. The anti-misinformation crowd wasn’t just wrong—they were militant that it was of vital importance to stop everyone from even contemplating the possibility of a lab leak theory.
There’s a perniciousness underlying this attitude, one that clearly threatens free speech, as many U.S. political figures—including President Joe Biden and Sen. Elizabeth Warren (D–Mass.)—have decided that the federal government should do more to combat purported misinformation. They might consider whether they themselves have been misinformed.
Robby Soave

WARSAW, Poland — Poland’s new pro-European Union government has begun to wrest control of the country’s state media and some other state agencies from the conservative party that consolidated its grip on them during eight years in power.
The Cabinet of Prime Minister Donald Tusk, which took office last week, said Wednesday it had fired the directors of the state television and radio outlets and the government-run news agency. It seeks to reestablish independent media in Poland in a legally binding and lasting way.
Tusk’s government has made it a priority to restore objectivity and free expression in state media, which the previous government, under the Law and Justice party, used as aggressive propaganda tools, attacking Tusk and the opposition and spreading its euroskeptic views. During its rule, Law and Justice cut corners and ignored some procedures to gain control of the media supervisory bodies and of the key appointments as it tightened its grip.
The new government’s first steps toward a return to media freedom were met with protest by Law and Justice. Party leader Jarosław Kaczyński, top party figures and many of its lawmakers occupied buildings housing the offices and studios of state-run television TVP in the hopes that their supporters would come out to demonstrate in big numbers. A rally was called for later Wednesday and a few dozen people gathered.
“The (party) instructions are that all Law and Justice parliament members come here (to the TVP building,)” said Law and Justice Senator Marek Pek. “We must show through our presence that we are deeply against these lawless and brutal actions.”
Law and Justice issued a statement saying the actions of the new government were “illegal” and that the change of the leaderships of the media was done “unlawfully.”
The statement quoted Kaczyński, Poland’s most powerful politician until recently, insisting that the protest is a “defense of democracy because there is no democracy without media pluralism or a strong anti-government media. In any democracy, there must be strong anti-government media.”
But for years the Law and Justice government actively sought to discredit and eliminate from the market the TVN station that was highly critical of it.
There was no brutality involved Wednesday and the change of management was done in line with the law. Some police were deployed in front of the building, but their goal was to ensure calm, according to Warsaw police spokesman Sylwester Marczak.
On Tuesday, Polish lawmakers adopted a resolution presented by Tusk’s government calling for the restoration of “legal order, objectivity and fairness” of TVP, Polish Radio and the PAP news agency.
Following the resolution, Poland’s new culture minister, Bartłomiej Sienkiewicz, replaced the heads and the supervisory boards of state media, which chose new management.
The new head of TVP’s supervisory board, lawyer Piotr Zemła, came to the broadcaster’s headquarters on Wednesday.
In the first sign of change, the all-news TVP INFO channel, one of the previous government’s main propaganda tools, ceased to broadcast on air and over the internet on Wednesday morning.
Earlier this week, the previous ruling team called a rally at the state television building to protest any planned changes, but only a few hundred people turned up.
President Andrzej Duda, who was an ally of the previous government, has warned that he won’t accept moves that he believes to be against the law. However, his critics have long accused him of violating the Polish Constitution and other laws as he tried to support the policies of the Law and Justice party. Some of the party’s policies, especially in the judicial sector, drew strong criticism and financial sanctions from the EU, which saw them as undemocratic.
In another setback for Law and Justice, a Warsaw court handed two-year prison terms to former interior minister Mariusz Kaminski and his deputy. The court found them guilty of abuse of power in 2007, when they served in the previous Law and Justice government. This means they will be stripped of their immunity as current lawmakers and should be replaced.
The government took office last week and began reversing policies of the previous administration that many in Poland found divisive. In one such move, Tusk had new heads of the security, intelligence and anti-corruption offices appointed on Tuesday.
Parties that make up the new government collectively won the majority of votes in the Oct. 15 election. They have vowed to jointly govern under the leadership of Tusk, who served as prime minister from 2007-2014 and was head of the European Council from 2014-2019.

NEW YORK — The warnings have grown louder and more urgent as 2024 approaches: The rapid advance of artificial intelligence tools threatens to amplify misinformation in next year’s presidential election at a scale never seen before.
Most adults in the U.S. feel the same way, according to a new poll from The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy.
The poll found that nearly 6 in 10 adults (58%) think AI tools — which can micro-target political audiences, mass produce persuasive messages, and generate realistic fake images and videos in seconds — will increase the spread of false and misleading information during next year’s elections.
By comparison, 6% think AI will decrease the spread of misinformation while one-third say it won’t make much of a difference.
“Look what happened in 2020 — and that was just social media,” said 66-year-old Rosa Rangel of Fort Worth, Texas.
Rangel, a Democrat who said she had seen a lot of “lies” on social media in 2020, said she thinks AI will make things even worse in 2024 — like a pot “brewing over.”
Just 30% of American adults have used AI chatbots or image generators and fewer than half (46%) have heard or read at least some about AI tools. Still, there’s a broad consensus that candidates shouldn’t be using AI.
When asked whether it would be a good or bad thing for 2024 presidential candidates to use AI in certain ways, clear majorities said it would be bad for them to create false or misleading media for political ads (83%), to edit or touch-up photos or videos for political ads (66%), to tailor political ads to individual voters (62%) and to answer voters’ questions via chatbot (56%).
The sentiments are supported by majorities of Republicans and Democrats, who agree it would be a bad thing for the presidential candidates to create false images or videos (85% of Republicans and 90% of Democrats) or to answer voter questions (56% of Republicans and 63% of Democrats).
The bipartisan pessimism toward candidates using AI comes after it already has been deployed in the Republican presidential primary.
In April, the Republican National Committee released an entirely AI-generated ad meant to show the future of the country if President Joe Biden is reelected. It used fake but realistic-looking photos showing boarded-up storefronts, armored military patrols in the streets and waves of immigrants creating panic. The ad disclosed in small lettering that it was generated by AI.
Ron DeSantis, the Republican governor of Florida, also used AI in his campaign for the GOP nomination. He promoted an ad that used AI-generated images to make it look as if former President Donald Trump was hugging Dr. Anthony Fauci, an infectious disease specialist who oversaw the nation’s response to the COVID-19 pandemic.
Never Back Down, a super PAC supporting DeSantis, used an AI voice-cloning tool to imitate Trump’s voice, making it seem like he narrated a social media post.
“I think they should be campaigning on their merits, not their ability to strike fear into the hearts of voters,” said Andie Near, a 42-year-old from Holland, Michigan, who typically votes for Democrats.
She has used AI tools to retouch images in her work at a museum, but she said she thinks politicians using the technology to mislead can “deepen and worsen the effect that even conventional attack ads can cause.”
College student Thomas Besgen, a Republican, also disagrees with campaigns using deepfake sounds or imagery to make it seem as if a candidate said something they never said.
“Morally, that’s wrong,” the 21-year-old from Connecticut said.
Besgen, a mechanical engineering major at the University of Dayton in Ohio, said he is in favor of banning deepfake ads or, if that’s not possible, requiring them to be labeled as AI-generated.
The Federal Election Commission is currently considering a petition urging it to regulate AI-generated deepfakes in political ads ahead of the 2024 election.
While skeptical of AI’s use in politics, Besgen said he is enthusiastic about its potential for the economy and society. He is an active user of AI tools such as ChatGPT to help explain history topics he’s interested in or to brainstorm ideas. He also uses image-generators for fun — for example, to imagine what sports stadiums might look like in 100 years.
He said he typically trusts the information he gets from ChatGPT and will likely use it to learn more about the presidential candidates, something that just 5% of adults say they are likely to do.
The poll found that Americans are more likely to consult the news media (46%), friends and family (29%), and social media (25%) for information about the presidential election than AI chatbots.
“Whatever response it gives me, I would take it with a grain of salt,” Besgen said.
The vast majority of Americans are similarly skeptical toward the information AI chatbots spit out. Just 5% say they are extremely or very confident that the information is factual, while 33% are somewhat confident, according to the survey. Most adults (61%) say they are not very or not at all confident that the information is reliable.
That’s in line with many AI experts’ warnings against using chatbots to retrieve information. The artificial intelligence large language models powering chatbots work by repeatedly selecting the most plausible next word in a sentence, which makes them good at mimicking styles of writing but also prone to making things up.
Adults associated with both major political parties are generally open to regulations on AI. They responded more positively than negatively toward various ways to ban or label AI-generated content that could be imposed by tech companies, the federal government, social media companies or the news media.
About two-thirds favor the government banning AI-generated content that contains false or misleading images from political ads, while a similar number want technology companies to label all AI-generated content made on their platforms.
Biden set in motion some federal guidelines for AI on Monday when he signed an executive order to guide the development of the rapidly progressing technology. The order requires the industry to develop safety and security standards and directs the Commerce Department to issue guidance to label and watermark AI-generated content.
Americans largely see preventing AI-generated false or misleading information during the 2024 presidential elections as a shared responsibility. About 6 in 10 (63%) say a lot of the responsibility falls on the technology companies that create AI tools, but about half give a lot of that duty to the news media (53%), social media companies (52%), and the federal government (49%).
Democrats are somewhat more likely than Republicans to say social media companies have a lot of responsibility, but generally agree on the level of responsibility for technology companies, the news media and the federal government.
____
The poll of 1,017 adults was conducted Oct. 19-23, 2023, using a sample drawn from NORC’s probability-based AmeriSpeak Panel, designed to represent the U.S. population. The margin of sampling error for all respondents is plus or minus 4.1 percentage points.
____
O’Brien reported from Providence, Rhode Island. Associated Press writer Linley Sanders in Washington, D.C., contributed to this report.
____
The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP’s democracy initiative here. The AP is solely responsible for all content.