ReportWire

Tag: iab-social networking

  • Chinese city proposes lockdowns for flu — and faces a backlash | CNN


    Hong Kong (CNN) —

    A Chinese city has sparked a backlash on social media after saying it would consider the use of lockdowns in the event of an influenza outbreak.

    The city of Xi’an – a tourism hotspot in Shaanxi province that is home to the famous terracotta warriors – revealed an emergency response plan this week that would enable it to shut schools, businesses and “other crowded places” in the event of a severe flu epidemic.

    That prompted a mixture of anxiety and anger among many users on China’s social media websites, who said the plan sounded uncomfortably similar to the strict zero-Covid measures China had enforced throughout the pandemic and only recently abandoned.

    “Vaccinate the public rather than using such time to create a sense of panic,” one user wrote on Weibo, China’s equivalent of Twitter.

    “How will people not panic given that Xi’an’s proposal to suspend work and business activities was issued without clear instruction on the national level to classify the disease?” asked another.

    While cases of Covid in China are falling, there has been a spike in flu cases across the country and some pharmacies are struggling to meet demand for flu remedies.

    However, Xi’an’s emergency response plan will not necessarily be used. Rather, it outlines how the city of almost 13 million people would respond to any future outbreak based on four levels of severity.

    At the first and highest level, it says, “the city can lock down infected areas, carry out traffic quarantines and suspend production and business activities. Shopping malls, theaters, libraries, museums, tourist attractions and other crowded places will also be closed.”

    “At this emergency level, schools and nurseries at all levels would be shut down and be made responsible for tracking students’ and infants’ health conditions.”

    The backlash comes as the central government in Beijing has emphasized the need to open the country back up following the removal of all Covid restrictions in January.

    Throughout the pandemic, China had enforced some of the world’s most severe Covid restrictions, including lockdowns that stretched into months in some cities. It was also one of the last countries in the world to end measures such as mass testing and strict border quarantine periods, even amid growing evidence of the damage being done to its economy.

    Xi’an itself was subject to a draconian lockdown between December 2021 and January 2022, with 13 million residents confined to their homes for weeks on end – and many left short of food and other essential supplies. Access to medical services was also affected. In an incident that shocked and angered the nation, a heavily pregnant woman was turned away from a hospital on New Year’s Day because she didn’t have a valid Covid-19 test, and suffered a miscarriage after she was finally admitted two hours later.

    Residents take nucleic acid tests in a closed community in Xi'an in January 2022.

    Shortly before China removed its pandemic-era restrictions, the country had been rocked by a series of demonstrations against its zero-Covid policy.

    Memories of being confined at home, and of panic buying that led to food shortages in some areas, remain fresh in people’s minds, and the idea of a return to Covid-style measures appears to have hit a nerve.

    However, some voices called for calm.

    Epidemiologist Ben Cowling, from the University of Hong Kong’s School of Public Health, said he saw the rationale for the move.

    “I think it’s quite rational to make contingency plans. I wouldn’t expect a lockdown to be needed for flu, but presumably there are different response levels,” he said.

    One user on Weibo expressed a similar sentiment: “It is merely the revelation of a proposal, not putting it in place. It is quite normal to take precautions given this wave of flu is coming at us very strong.”


  • Everyone hates switching the clocks for Daylight Saving Time. So why is it so hard to get rid of? | CNN Business

    CNN’s Harry Enten tells “Nightcap’s” Jon Sarlin why Americans switch the clocks back and forth twice a year, even though the time change is pretty universally hated. Plus, Los Angeles Times columnist LZ Granderson on how legal sports betting has changed March Madness. And CNN’s Clare Duffy explains why the FTC’s investigation of Twitter could be a real problem for Elon Musk. To get the day’s business headlines sent directly to your inbox, sign up for the Nightcap newsletter.


  • Elon Musk thinks he can fix Twitter’s advertising business after derailing it | CNN Business



    (CNN) —

    Elon Musk on Tuesday offered an optimistic picture for how Twitter can improve the advertising business he helped derail and boost its bottom line while also admitting that keeping the social network running is proving to be a challenge after multiple rounds of layoffs.

    In remarks at a Morgan Stanley Conference, Musk laid out his vision to boost Twitter’s core advertising business by adopting the standard strategy of most of the company’s peers: improving the relevance of the ads it serves.

    “The advertising relevance is the most gigantic thing,” Musk said. “And this is going to sound totally bizarre, but Twitter did not consider relevance in advertising until three months ago.”

    With that change, and larger cost cuts across the organization, Musk said he believes Twitter has “got a shot at being cash flow positive next quarter.”

    “Going forward, Twitter will have very relevant, useful advertising,” Musk said. “And because it is useful, because it is relevant, there will be a massive increase in revenue, because it is now useful. So I’m very optimistic about the future. It’s been a very difficult four months, but I’m optimistic about the future.”

    Since taking over the platform in late October, Twitter has suffered a mass exodus of top brands as Musk relaxed some content moderation policies, restored incendiary accounts and made a number of erratic remarks concerning politics and world affairs. Musk, who has previously tweeted about his hatred for advertising, made a quick bet on bolstering a paid subscription offering instead, but it has reportedly struggled to gain traction.

    He also took the time to thank advertisers that have stuck with Twitter throughout his rocky takeover, including Disney and Apple.

    But even as Musk looks to grow Twitter’s ad business, which has long made up nearly all of the company’s revenue, there are serious doubts about whether the platform can even stay online.

    Twitter has been inundated with outages, including a significant service disruption on Monday, and other user headaches since Musk took over, likely linked to the multiple rounds of mass layoffs that occurred under his watch. On Tuesday, he blamed the “overly complex” underlying technology for some of the recent service disruptions.

    “The code base is like a Rube Goldberg machine, and when you zoom in on one part of the Rube Goldberg machine, there’s another Rube Goldberg machine, and then there’s another one,” Musk said at the event on Tuesday. “So it’s quite difficult to keep this thing running, and then also difficult to advance the product because it is really overly complex, to say the least.”

    “We’ll make a change, what appears to be a small change somewhere, that then causes a massive disruption,” he said. Musk said Monday’s outage was the result of “what was supposed to be a small change to 1% of the Twitter user base [that] ended up being a catastrophic change to 100% of the Twitter user base.”

    At the same time, Musk continues to make controversial remarks that may give brands pause about returning to, or increasing their spending on, the platform. Musk was criticized by some this week after he publicly mocked a Twitter worker with a disability who asked the Twitter owner whether he had been laid off.

    At Tuesday’s event, Musk went on a series of unrelated tangents, including repeatedly taking aim at legacy media organizations. “What I’d say to advertisers and brands is, you know, use Twitter yourself and believe what you see on Twitter, not what you read in the newspapers,” Musk said. “Because what you see on Twitter is the real thing, and what you read in newspapers is not.”


  • Facebook tests bringing back in-app messaging features as it competes with TikTok | CNN Business


    New York (CNN) —

    Nearly a decade after Facebook angered some users by splitting off messaging features from its flagship social networking application and forcing people to download a separate app to chat with friends, the company is now testing out reversing the move.

    In an interview with CNN, Facebook head Tom Alison said the platform is testing bringing messaging capabilities back to the Facebook app so users can more easily share content without having to use the separate Messenger app. The test comes as Facebook looks to beat back competition from TikTok by bolstering its position both as a platform to discover new content and as a place to discuss it.

    “We believe that content feeds into not just you consuming it but being conversation starters and starting that message thread with your friends or being something that you can share into a group of people who share your same interests,” Alison said. “I think the thing that will differentiate Facebook and Instagram from TikTok and others is just the depth of being able to start a conversation with your friends from this content and have that kind of social dimension.”

    The move, which Alison also announced in a blog post Tuesday, comes after Facebook revised its strategy last year amid concerns about a stagnant and aging user base. No longer would the platform simply be about connecting friends and family. Instead, founder Mark Zuckerberg wanted Facebook to become a “discovery engine.”

    Facebook redesigned its home feed to surface more entertaining posts from across the platform, with AI-powered content recommendations, rather than just showing posts from those specifically in a user’s network. (A new, separate tab fulfilled the desire for the latter.) The goal was clear: to keep users engaged longer and help the platform better compete with TikTok and its steady stream of recommended content.

    Nine months later, that shift has begun to pay off, Alison told CNN. The platform last month reported that it hit 2 billion daily active users in the December quarter.

    “A lot of the narrative leading up to this has been that Facebook is in decline or Facebook’s best days are behind it,” Alison said, “and part of what we’re trying to do with this milestone is say, ‘hey, look, that’s actually not true.’”

    There has been no shortage of rumors of Facebook’s demise over the years, from its admission of having a “teen problem” a decade ago to the more recent series of PR debacles for the social network and its parent company, Meta. TikTok’s rapid rise and even the success of Facebook’s sister service, Instagram, have also taken some of the shine off the aging social network Zuckerberg launched in a dorm room nearly 20 years ago. But its audience has resumed growing, for now.

    Alison, who has been in charge of the Facebook app since July 2021, said the introduction of the “discovery engine” strategy is just the beginning of a larger shift for the platform, as Facebook works to forge a path to continued growth and relevance over the next two decades.

    “For the last almost 20 years … we’ve been really known for friends and family, but over the next 20 years, what we’re really working toward is being known for social discovery,” he said. “It’s going to be about helping you connect with the people that you know, the people that you want to know and the people that you should know.”

    While Facebook and Instagram have struggled in their attempts to keep pace with TikTok, including through copycat features like Reels, Alison argues Facebook has a leg up on TikTok thanks to its roots in helping people connect with their networks.

    For some creators, for example, Facebook has become a place to create groups of fans and hold conversations beyond the content they share to Instagram and TikTok, Alison said. “I think it’s helping them get closer to their fans on Facebook in a way they can’t do on other platforms.”

    As Facebook plots its evolution, it will have to contend with what Zuckerberg has called the company’s “year of efficiency,” an effort to cut costs after a broader reckoning in the tech industry and investor skepticism around its pricey plan to center its business model around the future version of the internet it calls the metaverse.

    “One of the things that we are embracing with the year of efficiency is prioritization and, frankly, just focusing more effort on some of our bigger bets,” Alison said. The platform has over the past year shuttered some smaller efforts, such as its Bulletin newsletter subscription service, in favor of investing in key areas like AI. “That’s a lot of the culture that we’re kind of instituting across Meta is just like, how do we do fewer things better? And how do we do them, sometimes, more quickly? Efficiency is not just about cost savings.”


  • Twitter hit with one of the biggest outages since Elon Musk took over | CNN Business


    New York (CNN) —

    Twitter’s website was inaccessible for many users on Monday while others reported issues seeing photos and clicking through links in the app, marking one of the most wide-ranging service disruptions to date under new owner Elon Musk.

    Some users who attempted to load Twitter.com or TweetDeck, a service that allows users to organize their Twitter feed into lists, were met with an error message: “your current API plan does not include access to this endpoint.” Other users were able to access the site (although it appeared to load slowly), but they were met with the same error message when clicking on links.

    Outage tracker site DownDetector showed more than 8,000 Twitter outage reports around noon on Monday. For users who were able to access the platform, “Twitter API” was trending as people tweeted about the issues.

    “Some parts of Twitter may not be working as expected right now,” the company said in a tweet. “We made an internal change that had some unintended consequences. We’re working on this now and will share an update when it’s fixed.”

    In a separate tweet on Monday, Musk said: “This platform is so brittle (sigh). Will be fixed shortly.”

    Within about an hour, the issues appeared to have largely resolved. “Things should now be working as normal,” the company tweeted around 1 pm ET.

    Monday’s outage marked the second Twitter glitch in less than a week and the third in under a month. Last Wednesday, some Twitter users who opened up their “for you” timeline were greeted with a blank screen and a message saying, “welcome to your timeline,” encouraging them to follow other users to get tweets to show up even if they already followed various accounts. Other users were met with a “Welcome to Twitter!” message as if they had just joined the platform.

    Three weeks ago, Twitter users encountered various issues with the platform, including the inability to tweet, send direct messages or follow new accounts.

    Twitter has experienced a range of technical glitches since Musk took over the company and laid off more than half its staff late last year. Users have previously reported issues with the app’s two-factor authentication tool, seeing replies listed above a tweet rather than below it and seeing old tweets show up repeatedly in their feed or mentions.

    Some former employees raised concerns that the mass layoffs under Musk could cause the platform to break in big or small ways, after workers with knowledge of Twitter’s key systems were ousted. But Musk has continued to cut staff in an effort to boost Twitter’s bottom line.

    The latest service disruptions come after Twitter reportedly laid off another 10% of its staff late last month, including some engineers responsible for site reliability, according to a report from the New York Times.


  • Sen. Mike Lee says his personal Twitter account was suspended | CNN Business



    (CNN) —

    A personal Twitter account belonging to Utah Republican Sen. Mike Lee was suspended without warning or explanation, according to the senator.

    Tweeting from his official Senate account, Lee said he has reached out to Twitter “seeking answers.”

    “My personal Twitter account – @BasedMikeLee – has been suspended,” Lee tweeted. “Twitter did not alert me ahead of time, nor have they yet offered an explanation for the suspension.”

    CNN confirmed the suspension Wednesday afternoon by visiting the affected profile, which displayed a suspension message from Twitter. As of 2:30 pm ET, an hour after his tweet, the account appeared to be restored.

    Twitter, which has cut much of its public relations team, did not immediately respond to a request for comment from CNN. In a tweet, Twitter owner Elon Musk said the account was “incorrectly flagged as impersonation.”

    The suspension marks the second time in a month that Twitter has briefly suspended a sitting US senator. In February, Twitter temporarily suspended Montana Republican Sen. Steve Daines’ account over a profile photo that Twitter said violated its policies. Musk later personally reached out to Daines by phone and restored his account.


  • Twitter users were unable to view tweets in latest service disruption under Musk | CNN Business


    New York (CNN) —

    Twitter’s timeline page appeared to experience a prolonged outage on Wednesday morning, marking the latest service disruption for the platform under new owner Elon Musk.

    Some Twitter users who opened up their “for you” timeline were greeted with a blank screen and a message saying, “welcome to your timeline,” encouraging them to follow other users to get tweets to show up even if they already followed various accounts. Other users were met with a “Welcome to Twitter!” message as if they had just joined the platform. The “following” page also failed to load.

    There were more than 4,000 user reports of issues on DownDetector, an outage tracker, as of 5:30 a.m. ET on Wednesday. Within about two hours, the issue appeared to have largely resolved. #TwitterDown was trending on the platform Wednesday morning.

    The outage marked the second major glitch the platform has experienced in less than a month. Three weeks ago, Twitter users encountered various issues with the platform, including the inability to tweet, send direct messages or follow new accounts.

    Twitter has experienced a range of technical glitches since Musk took over the company and laid off more than half its staff late last year. Users have previously reported issues with the app’s two-factor authentication tool, seeing replies listed above a tweet rather than below it and seeing old tweets show up repeatedly in their feed or mentions.

    Some former employees raised concerns that the mass layoffs under Musk could cause the platform to break in big or small ways, after workers with knowledge of Twitter’s key systems were ousted. But Musk has continued to cut staff in an effort to boost Twitter’s bottom line.

    Wednesday’s service disruption comes after Twitter reportedly laid off another 10% of its staff earlier this week, including some engineers responsible for site reliability, according to a report from the New York Times.

    It’s not clear what caused Wednesday’s apparent outage. Twitter, which eliminated much of its media relations staff last year, did not immediately respond to a request for comment about the issue.


  • Fact check: Republicans at CPAC make false claims about Biden, Zelensky, the FBI and children | CNN Politics


    Washington (CNN) —

    The Conservative Political Action Conference is underway in Maryland. And the members of Congress, former government officials and conservative personalities who spoke at the conference on Thursday and Friday made false claims about a variety of topics.

    Rep. Jim Jordan of Ohio uttered two false claims about President Joe Biden. Rep. Marjorie Taylor Greene of Georgia repeated a debunked claim about Ukrainian President Volodymyr Zelensky. Sen. Tommy Tuberville of Alabama used two inaccurate statistics as he lamented the state of the country. Former Trump White House official Steve Bannon repeated his regular lie about the 2020 election having been stolen from Trump, this time baselessly blaming Fox for Trump’s defeat.

    Rep. Kat Cammack of Florida incorrectly said a former Obama administration official had encouraged people to harass Supreme Court Justice Brett Kavanaugh. Rep. Ralph Norman of South Carolina inaccurately claimed Biden had laughed at a grieving mother and inaccurately insinuated that the FBI tipped off the media to its search of former President Donald Trump’s Florida residence. Two other speakers, Rep. Scott Perry of Pennsylvania and former Trump administration official Sebastian Gorka, inflated the number of deaths from fentanyl.

    And that’s not all. Here is a fact check of 13 false claims from the conference, which continues on Saturday.

    Marjorie Taylor Greene said the Republican Party has a duty to protect children. Listing supposed threats to children, she said, “Now whether it’s like Zelensky saying he wants our sons and daughters to go die in Ukraine…” Later in her speech, she said, “I will look at a camera and directly tell Zelensky: you’d better leave your hands off of our sons and daughters, because they’re not dying over there.”

    Facts First: Greene’s claim is false. Ukrainian President Volodymyr Zelensky didn’t say he wants American sons and daughters to fight or die for Ukraine. The false claim, which was debunked by CNN and others earlier in the week, is based on a viral video that clipped Zelensky’s comments out of context.


    In reality, Zelensky predicted at a press conference in late February that if Ukraine loses the war against Russia because it does not receive sufficient support from elsewhere, Russia will proceed to attack North Atlantic Treaty Organization member countries in the Baltics (Latvia, Lithuania and Estonia), which the US would be obligated to send troops to defend. Under the treaty that governs NATO, an attack on one member is considered an attack on all. Ukraine is not a NATO member, and Zelensky didn’t say Americans should fight there.

    Greene is one of the people who shared the out-of-context video on Twitter this week. You can read a full fact-check, with Zelensky’s complete quote, here.

    Right-wing commentator and former Trump White House chief strategist Steve Bannon criticized right-wing cable channel Fox at length for, he argued, being insufficiently supportive of Trump’s 2024 presidential campaign. Among other things, Bannon claimed that, on the night of the election in November 2020, “Fox News illegitimately called it for the opposition and not Donald J. Trump, of which our nation has never recovered.” Later, he said Trump is running again after “having it stolen, in broad daylight, of which they [Fox] participate in.”

    Facts First: This is nonsense. On election night in 2020, Fox accurately projected that Biden had won the state of Arizona. This projection did not change the outcome of the election; all of the votes are counted regardless of what media outlets have projected, and the counting showed that Biden won Arizona, and the election, fair and square. The 2020 election was not “stolen” from Trump.

    Former Trump White House chief strategist Steve Bannon speaks at the Conservative Political Action Conference in National Harbor, Maryland, on March 3, 2023.

    Fox, like other major media outlets, did not project that Biden had won the presidency until four days later. Fox personalities went on to repeatedly promote lies that the election was stolen from Trump – even as they privately dismissed and mocked these false claims, according to court filings from a voting technology company that is suing Fox for defamation.

    Rep. Jim Jordan claimed that Biden, “on day one,” made “three key changes” to immigration policy. Jordan said one of those changes was this: “We’re not going to deport anyone who comes.” He proceeded to argue that people knowing “we’re not going to get deported” was a reason they decided to migrate to the US under Biden.

    Facts First: Jordan inaccurately described the 100-day deportation pause that Biden attempted to impose immediately after he took office on January 20, 2021. The policy did not say the US wouldn’t deport “anyone who comes.” It explicitly did not apply to anyone who arrived in the country after the end of October 2020, meaning people who arrived under the Biden administration or in the last months of the Trump administration could still be deported.

    Biden did say during the 2020 Democratic primary that “no one, no one will be deported at all” in his first 100 days as president. But Jordan claimed that this was the policy Biden actually implemented on his first day in office; Biden’s actual first-day policy was considerably narrower.

    Biden’s attempted 100-day pause also did not apply to people who engaged in or were suspected of terrorism or espionage, were seen to pose a national security risk, had waived their right to remain in the US, or whom the acting director of Immigration and Customs Enforcement determined the law required to be removed.

    The pause was supposed to be in effect while the Department of Homeland Security conducted a review of immigration enforcement practices, but it was blocked by a federal judge shortly after it was announced.

    Rep. Ralph Norman strongly suggested the FBI had tipped off the media to its August search of Trump’s Mar-a-Lago home and resort in Florida for government documents in the former president’s possession – while concealing its subsequent document searches of properties connected to Biden.

    Norman said: “When I saw the raid at Mar-a-Lago – you know, the cameras, the FBI – and compare that to when they found Biden’s, all of the documents he had, where was the media, where was the FBI? They kept it quiet early on, didn’t let it out. The job of the next president is going to be getting rid of the insiders that are undermining this government, and you’ve gotta clean house.”

    Facts First: Norman’s narrative is false. The FBI did not tip off the media to its search of Mar-a-Lago; CNN reported the next day that the search “happened so quietly, so secretly, that it wasn’t caught on camera at all.” Rather, media outlets belatedly sent cameras to Mar-a-Lago because Peter Schorsch, publisher of the website Florida Politics, learned of the search from non-FBI sources and tweeted about it either after it was over or as it was just concluding, and because Trump himself made a public statement less than 20 minutes later confirming that a search had occurred. Schorsch told CNN on Thursday: “I can, unequivocally, state that the FBI was not one of my two sources which alerted me to the raid.”

    Brian Stelter, then CNN’s chief media correspondent, wrote in his article the day after the search: “By the time local TV news cameras showed up outside the club, there was almost nothing to see. Websites used file photos of the Florida resort since there were no dramatic shots of the search.”

    It’s true that the public didn’t find out until late January about the FBI’s November search of Biden’s former think tank office in Washington, which was conducted with the consent of Biden’s legal team. But the belated presence of journalists at Mar-a-Lago on the day of the Trump search in August is not evidence of a double standard.

    And it’s worth noting that media cameras were on the scene when Biden’s beach home in Delaware was searched by the FBI in February. News outlets had set up a media “pool” to make sure any search there was recorded.

    Sen. Tommy Tuberville, a former college and high school football coach, said, “Going into thousands of kids’ homes and talking to parents every year recruiting, half the kids in this country – I’m not talking about race, I’m just talking about – half the kids in this country have one or no parent. And it’s because of the attack on faith. People are losing faith because, for some reason, because the attack [on] God.”

    Facts First: Tuberville’s claim that half of American children don’t have two parents is incorrect. Official figures from the Census Bureau show that, in 2021, about 70% of US children under the age of 18 lived with two parents and about 65% lived with two married parents.

    About 22% of children lived with only a mother, about 5% with only a father, and about 3% with no parent. But the Census Bureau has explained that even children who are listed as living with only one parent may have a second parent; children are listed as living with only one parent if, for example, one parent is deployed overseas with the military or if their divorced parents share custody of them.

    It is true that the percentage of US children living in households with two parents has been declining for decades. Still, Tuberville’s statistic significantly exaggerated the current situation. His spokesperson told CNN on Thursday that the senator was speaking “anecdotally” from his personal experience meeting with families as a football coach.

    Tuberville claimed that today’s children are being “indoctrinated” in schools by “woke” ideology and critical race theory. He then said, “We don’t teach reading, writing and arithmetic anymore. You know, half the kids in this country, when they graduate – think about this: half the kids in this country, when they graduate, can’t read their diploma.”

    Facts First: This is false. While many Americans do struggle with reading, there is no basis for the claim that “half” of high school graduates can’t read a basic document like a diploma. “Mr. Tuberville does not know what he’s talking about at all,” said Patricia Edwards, a Michigan State University professor of language and literacy who is a past president of the International Literacy Association and the Literacy Research Association. Edwards said there is “no evidence” to support Tuberville’s claim. She also said that people who can’t read at all are highly unlikely to finish high school and that “sometimes politicians embellish information.”

    Tuberville could have accurately said that a significant number of American teenagers and adults have reading trouble, though there is no apparent basis for connecting these struggles with supposed “woke” indoctrination. The organization ProLiteracy pointed CNN to 2017 data that found 23% of Americans age 16 to 65 have “low” literacy skills in English. That’s not “half,” as ProLiteracy pointed out, and it includes people who didn’t graduate from high school and people who are able to read basic text but struggle with more complex literacy tasks.

    The Tuberville spokesperson said the senator was speaking informally after having been briefed on other statistics about Americans’ struggles with reading, like a report that half of adults can’t read a book written at an eighth-grade level.

    Rep. Jim Jordan claimed of Biden: “The president of the United States stood in front of Independence Hall, called half the country fascists.”

    Facts First: This is not true. Biden did not denounce even close to “half the country” in this 2022 speech at Independence Hall in Philadelphia. He made clear that he was speaking about a minority of Republicans.

    In the speech, in which he never used the word “fascists,” Biden warned that “MAGA Republicans” like Trump are “extreme,” “do not respect the Constitution” and “do not believe in the rule of law.” But he also emphasized that “not every Republican, not even the majority of Republicans, are MAGA Republicans.” In other words, he made clear that he was talking about far less than half of Americans.

    Trump earned fewer than 75 million votes in 2020 in a country of more than 258 million adults, so even a hypothetical criticism of every single Trump voter would not amount to criticism of “half the country.”

    Rep. Scott Perry claimed that “average citizens need to just at some point be willing to acknowledge and accept that every single facet of the federal government is weaponized against every single one of us.” Perry said moments later, “The government doesn’t have the right to tell you that you can’t buy a gas stove but that you must buy an electric vehicle.”

    Facts First: This is nonsense. The federal government has not told people that they can’t buy a gas stove or must buy an electric vehicle.

    The Biden administration has tried to encourage and incentivize the adoption of electric vehicles, but it has not tried to forbid the manufacture or purchase of traditional vehicles with internal combustion engines. Biden has set a goal of electric vehicles making up half of all new vehicles sold in the US by 2030.

    There was a January controversy about a Biden appointee to the United States Consumer Product Safety Commission, Richard Trumka Jr., saying that gas stoves pose a “hidden hazard,” as they emit air pollutants, and that “any option is on the table. Products that can’t be made safe can be banned.” But the commission as a whole has not shown support for a ban, and White House press secretary Karine Jean-Pierre said at a January press briefing: “The president does not support banning gas stoves. And the Consumer Product Safety Commission, which is independent, is not banning gas stoves.”

    Rep. Ralph Norman claimed that Biden had just laughed at a mother who lost two sons to fentanyl.

    “I don’t know whether y’all saw, I just saw it this morning: Biden laughing at the mother who had two sons – to die, and he’s basically laughing and saying the fentanyl came from the previous administration. Who cares where it came from? The fact is it’s here,” Norman said.

    Facts First: Norman’s claim is false. Biden did not laugh at the mother who lost her sons to fentanyl, the anti-abortion activist Rebecca Kiessling; in a somber tone, he called her “a poor mother who lost two kids to fentanyl.” What he laughed about was how Republican Rep. Marjorie Taylor Greene had baselessly blamed the Biden administration for the young men’s deaths even though the tragedy happened in mid-2020, during the Trump administration.

    Kiessling has demanded an apology from Biden. She is entitled to her criticism of Biden’s remarks and his chuckle – but the video clearly shows Norman was wrong when he claimed Biden was “laughing at the mother.”

    Rep. Kat Cammack told a story about the first hearing of the new Republican-led House select subcommittee on the supposed “weaponization” of the federal government. Cammack claimed she had asked a Democratic witness at this February hearing about his “incredibly vitriolic” Twitter feed in which, she claimed, he not only repeatedly criticized Supreme Court Justice Brett Kavanaugh but even went “so far as to encourage people to harass this Supreme Court justice.”

    Facts First: This story is false. The witness Cammack questioned in this February exchange at the subcommittee, former Obama administration deputy assistant attorney general Elliot Williams, did not encourage people to harass Kavanaugh. In fact, it’s not even true that Cammack accused him at the February hearing of having encouraged people to harass Kavanaugh. Rather, at the hearing, she merely claimed that Williams had tweeted numerous critical tweets about Kavanaugh but had been “unusually quiet” on Twitter after an alleged assassination attempt against the justice. Clearly, not tweeting about the incident is not the same thing as encouraging harassment.

    Williams, now a CNN legal analyst (he appeared at the subcommittee hearing in his personal capacity), said in a Thursday email that he had “no idea” what Cammack was looking at on his innocuous Twitter feed. He said: “I used to prosecute violent crimes, and clerked for two federal judges. Any suggestion that I’ve ever encouraged harassment of anyone – and particularly any official of the United States – is insulting and not based in reality.”

    Cammack’s spokesperson responded helpfully on Thursday to CNN’s initial queries about the story Cammack told at CPAC, explaining that she was referring to her February exchange with Williams. But the spokesperson stopped responding after CNN asked if Cammack was accurately describing this exchange with Williams and if they had any evidence of Williams actually having encouraged the harassment of Kavanaugh.

    Sen. John Kennedy of Louisiana boasted about the state of the country “when Republicans were in charge.” Among other claims about Trump’s tenure, he said that “in four years,” Republicans “delivered 3.5% unemployment” and “created 8 million new jobs.”

    Facts First: This is inaccurate in two ways. First, the economic numbers for the full “four years” of Trump’s tenure are much worse than these numbers Kennedy cited; Kennedy was actually referring to Trump’s first three years while ignoring the fourth, which was marred by the Covid-19 pandemic. Second, there weren’t “8 million new jobs” created even in Trump’s first three years.

    Kennedy could have correctly said there was a 3.5% unemployment rate after three years of the Trump administration, but not after four. The unemployment rate skyrocketed early in Trump’s fourth year, on account of the pandemic, before coming down again, and it was 6.3% when Trump left office in early 2021. (It fell to 3.4% this January under Biden, better than in any month under Trump.)

    And while the economy added about 6.7 million jobs under Trump before the pandemic-related crash of March and April 2020, that’s not the “8 million jobs” Kennedy claimed – and the economy ended up shedding millions of jobs in Trump’s fourth year. Over the full four years of Trump’s tenure, the economy netted a loss of about 2.7 million jobs.

    Lara Trump, Donald Trump’s daughter-in-law and an adviser to his 2020 campaign, claimed that the last time a CPAC crowd was gathered at this venue in Maryland, in February 2020, “We had the lowest unemployment in American history.” After making other boasts about Donald Trump’s presidency, she said, “But how quickly it all changed.” She added, “Under Joe Biden, America is crumbling.”

    Facts First: Lara Trump’s claim about February 2020 having “the lowest unemployment in American history” is false. The unemployment rate was 3.5% at the time – tied for the lowest since 1969, but not the all-time lowest on record, which was 2.5% in 1953. And while Lara Trump didn’t make an explicit claim about unemployment under Biden, it’s not true that things are worse today on this measure; again, the most recent unemployment rate, 3.4% for January 2023, is better than the rate at the time of CPAC’s 2020 conference or at any other time during Donald Trump’s presidency.

    Multiple speakers at CPAC decried the high number of fentanyl overdose deaths. But some of the speakers inflated that number while attacking Biden’s immigration policy.

    Sebastian Gorka, a former Trump administration official, claimed that “in the last 12 months in America, deaths by fentanyl poisoning totaled 110,000 Americans.” He blamed “Biden’s open border” for these deaths.

    Rep. Scott Perry claimed: “Meanwhile over on this side of the border, where there isn’t anybody, they’re running this fentanyl in; it’s killing 100,000 Americans – over 100,000 Americans – a year.”

    Facts First: It’s not true that there are more than 100,000 fentanyl deaths per year. That is the total number of deaths from all drug overdoses in the US; there were 106,699 such deaths in 2021. But the number of overdose deaths involving synthetic opioids other than methadone, primarily fentanyl, is smaller – 70,601 in 2021.

    Fentanyl-related overdoses are clearly a major problem for the country and by far the biggest single contributor to the broader overdose problem. Nonetheless, claims of “110,000” and “over 100,000” fentanyl deaths per year are significant exaggerations. And while the number of overdose deaths and fentanyl-related deaths increased under Biden in 2021, it was also troubling under Trump in 2020 – 91,799 total overdose deaths and 56,516 for synthetic opioids other than methadone.

    It’s also worth noting that fentanyl is largely smuggled in by US citizens through legal ports of entry rather than by migrants sneaking past other parts of the border. Contrary to frequent Republican claims, the border is not “open”; border officers have seized thousands of pounds of fentanyl under Biden.


  • Mark Zuckerberg looks to ‘turbocharge’ Meta’s AI tools after viral success of ChatGPT | CNN Business




    CNN
     — 

    Mark Zuckerberg said Meta is creating a new “top-level product group” to “turbocharge” the company’s work on AI tools, as it attempts to keep pace with a renewed AI arms race among Big Tech companies.

    In a Facebook post late Monday, Zuckerberg said the elite new group will initially be formed by pulling together teams across the company currently working on generative AI, the technology that underpins the viral AI chatbot, ChatGPT. This group will be “focused on building delightful experiences around this technology into all of our different products,” Zuckerberg said, starting with “creative and expressive tools.”

    “Over the longer term, we’ll focus on developing AI personas that can help people in a variety of ways,” Zuckerberg said. Those AI features may include new Instagram filters as well as chat tools in WhatsApp and Messenger, he said.

    The planned efforts come amid a heightened AI frenzy in the tech world, kicked off in late November when Microsoft-backed OpenAI released ChatGPT publicly. The tool quickly went viral for its ability to generate compelling, human-sounding responses to user prompts. Microsoft later announced it was incorporating the tech behind ChatGPT into its search engine Bing. A day before Microsoft’s announcement, Google unveiled its own AI-powered tool called Bard.

    Meta, by comparison, has been quiet so far. Yann LeCun, Meta’s chief AI scientist, has expressed some skepticism surrounding the ChatGPT hype. “It’s not a particularly big step towards, you know, more like human level intelligence,” LeCun said in one interview late last month. “From the scientific point of view, ChatGPT is not a particularly interesting scientific advance,” he added.

    Generative AI tools are built on large language models that have been trained on vast troves of online data to create written and visual responses to user prompts. But these systems also have the potential to perpetuate biases and misinformation. Already, both Microsoft and Google’s AI tools have run into controversies for producing some inaccurate or uncanny responses.

    As with Microsoft and Google, there are some risks for Meta in embracing this technology. Last year, before the ChatGPT hype, Meta publicly released an AI-powered chatbot dubbed “BlenderBot 3.” It didn’t take long, however, for the chatbot to start making offensive comments.

    In his post Monday, Zuckerberg said: “We have a lot of foundational work to do before getting to the really futuristic experiences, but I’m excited about all of the new things we’ll build along the way.”


  • New York Times: Twitter lays off another 10% of staff | CNN Business



    New York
    CNN
     — 

    Twitter’s massive job cuts continued this weekend, as the company cut about 10% of its remaining staff, according to a report in the New York Times.

    The latest axing of about 200 jobs takes the company’s headcount down to under 2,000 staffers, according to the Times. That’s down from the 7,500 who worked for the social media platform before Elon Musk bought the company last fall for $44 billion.

    The paper reported that the cuts hit product managers, data scientists and engineers who worked on machine learning and site reliability, which, it said, helps keep Twitter’s various features online. The “monetization infrastructure team,” which maintains the services through which Twitter makes money, was reduced to fewer than eight people from 30, according to the report.

    Twitter did not respond to a request for comment from CNN on the Times report.

    Twitter has been losing advertisers since Musk took over. Ad revenue had been responsible for more than 90% of company revenue. Musk’s plan to raise revenue directly from Twitter users by selling account verification has thus far not worked out as hoped.


  • South Korean diplomats dance into Indian hearts in ‘Naatu Naatu’ viral video | CNN




    CNN
     — 

    Dancing South Korean diplomats have won the hearts of millions of Indians with their viral video performance of Oscar-nominated song “Naatu Naatu,” reinforcing Seoul’s soft power diplomacy and even earning a nod of approval from India’s leader.

    In a video clip posted to Twitter on Sunday, staff from South Korea’s embassy in India’s capital New Delhi – many wearing traditional clothing from both countries – dance to the popular song from Telugu-language movie “RRR.”

    The 53-second clip, which features South Korean Ambassador Chang Jae-bok, has gone viral on social media, garnering more than 4 million views on Twitter as of Tuesday – and much praise in India.

    “Lively and adorable team effort,” Prime Minister Narendra Modi wrote on Twitter on Sunday.

    “Love you for this!” author Kulpreet Yadav wrote, while another fan of the clip, Bhargav Mitra, called it “an excellent initiative.”

    “A fitting tribute to bilateral relations. How well can a song & dance sequence unite,” he wrote on Twitter.

    India’s positive response to the video reflects the growing popularity of South Korean culture in the country, where millions have embraced K-pop and K-dramas in recent years.

    Indians are also making inroads in South Korea’s entertainment industry. Singer Shreya Lenka became India’s first homegrown K-pop star when she joined girl group Blackswan last year, while Indian actor Anupam Tripathy starred in award-winning South Korean Netflix show, “Squid Game.”

    “Naatu Naatu,” which translates to “dance dance,” is composed by M.M. Keeravani, with lyrics from Chandrabose.

    Praised for its buoyant choreography and catchy tune, “Naatu Naatu” won India’s first ever Golden Globe in the best original song category last month and is the favorite to win best original song at the 95th Academy Awards on March 12.

    The original song features Telugu superstars Ram Charan and N. T. Rama Rao Jr., known as Jr NTR, who dance in perfect synchronization to the lyrics. The video has more than 122 million views on YouTube.

    The Indian film industry produces tens of thousands of movies every year in multiple languages, and “RRR,” which stands for Rise Roar Revolt, is the country’s fourth-highest grossing picture, according to IMDb, earning nearly $155 million worldwide.

    It is set during India’s struggle for independence from British colonial rule and became Netflix’s most watched non-English movie last June.


  • New Meta platform aims to prevent sextortion of teens on Facebook and Instagram | CNN Business




    CNN
     — 

    Meta is taking steps to crack down on the spread of intimate images of teenagers on Facebook and Instagram.

    A new tool, called Take It Down, takes aim at a practice commonly referred to as “revenge porn,” where someone posts an explicit picture of an individual without their consent to publicly embarrass or cause them distress. The practice has skyrocketed in the last few years on social media, particularly among young boys.

    Take It Down, which is operated and run by the National Center for Missing and Exploited Children, will allow minors for the first time to anonymously attach a hash – or digital fingerprint – to intimate images or videos directly from their own devices, without having to upload them to the new platform. To create a hash of an explicit image, a teen can visit the website TakeItDown.NCMEC.org to install software onto their device. The anonymized number, not the image, will then be stored in a database linked to Meta so that if the photo is ever posted to Facebook or Instagram, it will be matched against the original, reviewed and potentially removed.
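
    The match-without-upload idea described above can be sketched in a few lines. This is a simplified illustration, not NCMEC’s or Meta’s actual implementation (which has not been published and would use a robust image-hashing technique rather than a plain cryptographic digest); the image bytes and hash values here are hypothetical.

    ```python
    import hashlib

    def fingerprint(image_bytes: bytes) -> str:
        # A cryptographic digest stands in for the robust hash a real
        # system would use; only this string ever leaves the device.
        return hashlib.sha256(image_bytes).hexdigest()

    # Hashes submitted anonymously by teens and stored by the
    # clearinghouse (hypothetical example data).
    blocked_hashes = {fingerprint(b"example-intimate-image")}

    def should_review(uploaded: bytes) -> bool:
        # Platforms hash each unencrypted upload and compare it against
        # the database; matches are routed to review and possible removal.
        return fingerprint(uploaded) in blocked_hashes

    assert should_review(b"example-intimate-image")
    assert not should_review(b"unrelated-photo")
    ```

    The key privacy property is that the database stores only the anonymized numbers, never the images themselves.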

    “This issue has been incredibly important to Meta for a very, very long time because the damage done is quite severe in the context of teens or adults,” said Antigone Davis, Meta’s global safety director. “It can do damage to their reputation and familial relationships, and puts them in a very vulnerable position. It’s important that we find tools like this to help them regain control of what can be a very difficult and devastating situation.”

    The tool works for any image shared across Facebook and Instagram, including Messenger and direct messages, as long as the pictures are unencrypted.

    People under 18 years old can use Take It Down, and parents or trusted adults can also use the platform on behalf of a young person. The effort is fully funded by Meta and builds off a similar platform it launched in 2021 alongside more than 70 NGOs, called StopNCII, to prevent revenge porn among adults.

    Since 2016, NCMEC’s cyber tip line has received more than 250,000 reports of online enticement, including sextortion, and the number of those reports more than doubled between 2019 and 2021. In the last year, 79% of the offenders were seeking money to keep photos offline, according to the nonprofit. Many of these cases played out on social media.

    Meta’s efforts come nearly a year and a half after Davis was grilled by senators about the impact its apps have on younger users, after an explosive report indicated the company was aware that Facebook-owned Instagram could have a “toxic” effect on teen girls. Although the company has rolled out a handful of new tools and protections since then, some experts say it has taken too long and more needs to be done.

    Meanwhile, President Biden demanded in his latest State of the Union address more transparency about tech companies’ algorithms and how they impact their young users’ mental health.

    In response, Davis told CNN that Meta “welcomes efforts to introduce standards for the industry on how to ensure that children can safely navigate and enjoy all that online services have to offer.”

    In the meantime, she said the company continues to double down on efforts to help protect its young users, particularly when it comes to keeping explicit photos off its site.

    “Sextortion is one of the biggest growing crimes we see at the National Center for Missing and Exploited Children,” said Gavin Portnoy, vice president of communications and branding at NCMEC. “We’re calling it the hidden pandemic, and nobody is really talking about it.”

    Portnoy said there’s also been an uptick in youth dying by suicide as a result of sextortion. “That is the driving force behind creating Take It Down, along with our partners,” he said. “It really gives survivors an opportunity to say, look, I’m not going to let you do this to me. I have the power over my images and my videos.”

    In addition to Meta’s platforms, OnlyFans and Pornhub’s parent company MindGeek are also adding this technology into their services.

    But limitations do exist. To get around the hashing technology, people can alter the original images, such as by cropping, adding emojis or doctoring them. Some changes, such as adding a filter to make the photo sepia or black and white, will still be flagged by the system. Meta recommends teens who have multiple copies of the image or edited versions make a hash for each one.
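
    The evasion problem is easy to demonstrate with an exact hash: even a one-byte edit produces a completely different digest, which is why each edited copy needs its own hash. (Production systems use perceptual hashes that tolerate some transformations — hence sepia or black-and-white filters are still flagged — but larger edits can still defeat the match.) The image bytes below are hypothetical stand-ins.

    ```python
    import hashlib

    original = b"example-image-bytes"
    edited = b"example-image-bytes!"  # a tiny change, like a crop or emoji overlay

    h1 = hashlib.sha256(original).hexdigest()
    h2 = hashlib.sha256(edited).hexdigest()

    # The exact hashes no longer match, so the edited copy would slip
    # past a naive exact-match filter unless it is hashed separately.
    assert h1 != h2
    ```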

    “There’s no one panacea for the issue of sextortion or the issue of the non-consensual sharing of intimate images,” Davis said. “It really does take a holistic approach.”

    The company has rolled out a series of updates to help teens have an age-appropriate experience on its platforms, such as adding new supervision tools for parents, an age-verification technology and defaulting teens into the most private settings on Facebook and Instagram.

    This is not the first time a major tech company has poured resources into cracking down on explicit imagery of minors. In 2022, Apple abandoned its plans to launch a controversial tool that would check iPhones, iPads and iCloud photos for child sexual abuse material following backlash from critics who decried the feature’s potential privacy implications.

    “Children can be protected without companies combing through personal data, and we will continue working with governments, child advocates, and other companies to help protect young people, preserve their right to privacy, and make the internet a safer place for children and for us all,” the company said in a statement provided to Wired at the time.

    Davis did not comment on whether Meta expects criticism of its approach, but noted “there were significant differences between the tool that Apple launched and the tool that NCMEC is launching today.” She emphasized that Meta will not be checking for images on users’ phones.

    “I do welcome any member of the industry trying to invest in efforts to prevent this kind of terrible crime from happening on their apps,” she added.


  • Tommy Fury defeats social media influencer Jake Paul by split decision | CNN




    CNN
     — 

    YouTuber Jake Paul suffered the first defeat of his fledgling boxing career on Sunday night as pro boxer turned reality TV star Tommy Fury edged him out in a split decision in Saudi Arabia.

    Two judges scored the fight 76-73 for Fury, with the third ruling 75-74 in favor of Paul at the end of their eight-round cruiserweight bout at the Diriyah Arena near the Saudi capital, Riyadh.

    “In my first main event, 23 years old, I had the world on my shoulders, and I came through,” Fury said after the fight. “This, to me, is a world title fight. I trained so hard for this. This was my destiny.”

    Paul, a 26-year-old American with more than 20 million YouTube subscribers, is now 6-1 while Fury, the British younger half-brother of heavyweight champion Tyson Fury, remains undefeated with nine wins after taking time out of his boxing career in 2019 to star in reality show “Love Island.”

    Millions of fans were expected to tune into the fight, the latest pay-per-view bout involving a YouTuber to generate a social media buzz as promoters played up the bad blood between the two fighters. And just like a genuine world title fight, there were plenty of stars ringside, including former champ Mike Tyson, soccer superstar Cristiano Ronaldo and comedian Kevin Hart.

    Fury asserted his jab early to keep Paul at a distance but the YouTuber gained momentum in the fifth round, appearing to stun Fury.

    Both had points deducted – Paul in the fifth for pushing his opponent’s head down while clinching and Fury in the sixth for too much clinching, a defensive tactic that’s often only allowed for short periods of time.

    Mike Tyson is seen before the fight in Riyadh's Diriyah Arena, Saudi Arabia.

    Fury was knocked down in the eighth round by a short left hand from Paul, but it came too late, as the judges ruled in Fury’s favor.

    Paul said he intended to exercise the rematch clause attached to Sunday’s fight, which he could invoke in the event of a Fury win.

    “All respect to Tommy, he won, don’t judge me by my wins, judge me by my losses. I’ll come back, I thought I deserve that rematch, it was a great fight, a close fight,” Paul said after the fight.

    Paul only began boxing in 2018 but had recorded six straight wins – including four knockouts – prior to Sunday. Fury, however, was the first professional boxer Paul had faced.


  • Opinion: The one critical step Congress could take to protect kids online | CNN


    Editor’s Note: Patrick T. Brown is a fellow at the Ethics and Public Policy Center, a conservative think tank and advocacy group based in Washington, DC. He is also a former senior policy adviser to Congress’ Joint Economic Committee. Follow him on Twitter. The views expressed in this piece are his own. View more opinion on CNN.



    CNN
     — 

    This week the US Supreme Court heard oral arguments in a case that raised thorny questions over algorithms and free speech on the Internet. In Gonzalez v. Google, lawyers for the parents of a teenager killed in an Islamic State attack are arguing that YouTube should be held liable for promoting content from the group.

    The political debate over how far online speech protections should extend to Big Tech firms has inflamed the right for years. In the oral arguments, at least, the justices seemed uncertain about how best to proceed with the complex issues at play.

    But new research shows some issues surrounding tech don’t have a political divide. A new report I wrote for the Ethics and Public Policy Center and the Institute for Family Studies shows widespread concern about kids’ safety online. And a set of policy proposals aimed at restoring parents’ ability to shepherd their kids through the wild west that is the web all recorded high levels of support among parents on both sides of the political aisle.

    This issue is something that nearly every parent has to navigate. A recent report from Common Sense Media found that the average age of first exposure to pornography is now 12, and that three-quarters of teens had seen porn online by age 17.

    But parents have plenty more to worry about online beyond early exposure to pornography. All manner of online content can impact a child’s life. As this week’s Supreme Court case reminds us, youth can be lured into extremism or self-harm via online content. Parents might want to know if their child is becoming increasingly drawn toward figures who share racist or misogynistic views online.

    Documents released by a whistleblower indicated that internal data at Facebook (now known as Meta, Instagram’s parent company) showed the site made “body image issues worse for 1 in 3 teen girls,” and also led to more severe and self-destructive thoughts. While the company disputed the claims, it also postponed an “Instagram for Kids” offering. Cyberbullying and non-consensual nude photo sharing have plagued high schools.

    These concerns are resonating with policymakers. Current law and decades of Supreme Court precedent establish much more leeway for Congress to protect kids online without having to hash out the complexities of more wide-ranging free speech concerns.

    A bipartisan effort to take modest steps to protect kids online might bear fruit. Republican Sen. Marsha Blackburn of Tennessee and Connecticut Democrat Sen. Richard Blumenthal have been pushing their colleagues to pass their Kids Online Safety Act (KOSA), which would update the framework for how tech companies serve minors online.

    Among other things, it would require social media sites to default minors into the strongest possible privacy protections and give parents new tools to monitor harmful content. It would mandate that social media platforms mitigate harms to minors, such as by restricting or eliminating content relating to self-harm, suicide and eating disorders. And it would require an annual audit of risks to minors, including providing broader data access to researchers to study the impact of social media on kids’ development.

    The bill was opposed by some civil rights and LGBTQ groups, who worried that putting greater content restrictions on what kids may come across online could prevent them from accessing information about sexual education without their parents’ knowledge. But that concern may ring hollow with parents who believe they should have better tools to know if their 13- or 14-year-old child is searching for information about birth control.

    Some say parents should be the ultimate gatekeeper of their kids online, which is true. But we have laws relating to the minimum age to consume alcohol or drive a car precisely because we know adolescents’ brains are still developing, and the potential to cause harm to oneself or others is high. After all, unless a critical mass of families agree to move social life offline, minors who don’t have access to Instagram, TikTok or Facebook may be missing out on crucial information or opportunities to socialize.

    Moreover, while some tools exist to help keep kids safe online, they are often easily circumvented. Asking each individual parent to be an expert on the plethora of user settings, filters and options for keeping age-inappropriate content away from their kids places an undue burden on families. Establishing age-based controls, and policing them effectively, would be an appropriate step for Congress to take.

    Indeed, some say the Blackburn-Blumenthal framework doesn’t go far enough. The policy solutions polled in our recent report are more aggressive than those included in KOSA, and still receive support from three in four parents.

    For example, nearly 9 in 10 Republican parents, and 77% of all parents, agreed with a proposal to require social media platforms to grant parents full access to what their children are seeing and who they are communicating with online – the most popular policy polled among that subgroup. Some 81% were in favor of a law that would require social media platforms to get parents’ permission before allowing minors to open an account. Another two-thirds of parents agreed or strongly agreed that internet service providers should be required to verify users’ ages (with a driver’s license or credit card, for example) before allowing individuals to view pornography.

    Future action will likely take up these concerns. Just last week, Republican Sen. Josh Hawley of Missouri introduced a bill that would bar users under age 16 from opening a social media account. While the implementation mechanism would likely need to be improved on – relying on Big Tech companies to keep copies of every American’s driver’s license safe may not work out – the direction of the legislation is laudable, recognizing that American parents are looking for bold action when it comes to keeping kids safe online.

    The battles over Big Tech and accusations of algorithmic bias may be what gets the Republican base riled up. But in a divided Congress, both parties should listen to the parents who make up their base – giving families more tools to protect their kids online is not only long overdue, it’s a political winner.


  • Video: Four-day work week, cracking down on junk travel fees, and more on CNN Nightcap | CNN Business


    The Points Guy Founder Brian Kelly tells “Nightcap’s” Jon Sarlin how consumers can avoid paying junk hotel and airline fees. Plus, EZPR’s Ed Zitron says the ad-based model of social media is dying. And Bloomberg Commentator and author of “The Nowhere Office” Julia Hobsbawm explains why the largest 4-day work week trial ever conducted could change the future of work. To get the day’s business headlines sent directly to your inbox, sign up for the Nightcap newsletter.


  • Chinese apps remove ChatGPT as global AI race heats up | CNN Business



    Hong Kong (CNN) —

    Several popular Chinese apps have removed access to ChatGPT, the artificial intelligence chatbot that has taken the world by storm even as major Chinese tech companies race to develop their own equivalent.

    ChatGPT, developed by the American research lab OpenAI, is not officially available in China, but several apps on the Chinese social media platform WeChat had previously allowed access to the chatbot without the use of a VPN or foreign mobile number.

    Those doors now appear shut. Earlier this week, the apps ChatGPTRobot and AIGC Chat Robot said their programs had been suspended due to “violation of relevant laws and regulations,” without specifying which laws.

    Two other apps, ChatgptAiAi and Chat AI Conversation, said their ChatGPT services went offline due to “relevant business changes” and policy changes.

    The app Shenlan BL was even more vague, citing “various reasons” for the shutdown.

    Though it’s unclear what prompted these closures, there are other signs China may be souring on ChatGPT. On Monday, state-run media released a video claiming the chatbot could be used by US authorities to “spread disinformation and manipulate public opinion,” pointing to its responses regarding Xinjiang as supposed evidence of bias.

    When prompted on Xinjiang, ChatGPT describes the Chinese government’s alleged human rights abuses against ethnic minorities in the far western region, including mass detentions and forced labor. Beijing has repeatedly denied these accusations, claiming detention camps are “vocational education and training centers” that have since been dismantled.

    Other recent state media articles have voiced criticism and skepticism toward ChatGPT, with China Daily declaring that its rise highlights the need for “strict regulations.”

    Several Chinese tech companies saw their shares drop on Thursday after news spread that WeChat apps had removed ChatGPT services. Beijing Haitian Ruisheng Science Technology, which develops and produces AI data products, closed 8.4% lower.

    Meanwhile, Hanwang Technology and Beijing Deep Glint Technology, both developers of AI products and services, closed 10% and 5.5% lower respectively.

    ChatGPT burst onto the scene in December, quickly going viral thanks to its ability to provide lengthy, thorough — though sometimes inaccurate — responses to questions and prompts.

    Since its release, the tool has been used to write articles for at least one news publication, drafted research paper abstracts that fooled some scientists and even passed graduate-level law and business exams (albeit with low marks).

    It has also prompted alarm about its unknown long-term consequences, such as its impact on education and students’ ability to cheat on assignments.

    Despite these concerns, the success of ChatGPT has spurred a global AI race.

    Microsoft plans to invest billions in the San Francisco-based OpenAI and unveiled its AI-powered Bing chatbot last week, though it made headlines for veering into darker, sometimes disturbing conversation. Earlier this month, Google announced it will soon roll out Bard, its own answer to ChatGPT.

    China’s government has previously sought to restrict major Western websites and apps, such as Google, Facebook and Amazon, leading to accusations from some of digital protectionism.

    In the absence of foreign competition within the domestic market, Chinese tech companies have since grown into major international players — many of which are now revving their engines with an eye toward AI.

    In early February, Chinese behemoth Alibaba said it was testing its own ChatGPT-style tool, though it didn’t provide details on when it would launch.

    A team at China’s Fudan University developed their own version called MOSS, which instantly went viral, causing the platform to crash this week due to too many users.

    And on Wednesday, tech giant Baidu said its AI chatbot ERNIE Bot, slated for a March release, will be used across various platforms such as its search engine, voice assistant for smart devices and even its autonomous driving technology.

    The rollout will “create a new entry point for the next-generation internet,” Baidu CEO Robin Li said in an earnings call, adding that the company expects “more and more business owners and entrepreneurs to build their own models and applications on our AI Cloud.”


  • Capitol rioter who tweeted threat to Rep. Ocasio-Cortez sentenced to 38 months in prison | CNN Politics




    (CNN) —

    A Texas man was sentenced to more than three years in prison Wednesday for assaulting police officers during the US Capitol riot and threatening Rep. Alexandria Ocasio-Cortez on Twitter shortly after the attack.

    Garret Miller, 36, pleaded guilty in December to charges related to his conduct on January 6, 2021. He was arrested weeks after the riot – on Inauguration Day – while wearing a shirt that said: “I was there, Washington, D.C., January 6, 2021.”

    According to court documents, Miller brought gear with him to DC, including a rope, a grappling hook and a mouth guard, and prosecutors said he was “at the forefront of every barrier overturned, police line overrun, and entryway breached within his proximity that day.” Miller was detained twice during the riot, according to court documents.

    When he left the Capitol building, he took the fight to Twitter, according to court documents. In response to a tweet from Ocasio-Cortez calling for then-President Donald Trump’s impeachment, Miller responded: “Assassinate AOC.”

    “At the time that I tweeted at the Congresswoman, I intended that the communication be perceived as a serious intent to commit violence against the Congresswoman,” Miller said in court documents as part of his guilty plea. He also levied threats against the officer who shot and killed a pro-Trump rioter during the melee, according to court documents, saying that he wanted to “hug his neck with a nice rope.”

    Clint Broden, Miller’s lawyer, said in a statement to CNN that the sentence “ultimately reflects Judge Nichols’ careful consideration of the case,” and said that his client “has expressed his sincere remorse for his actions.”

    Correction: An earlier version of this story misstated the nature of Garret Miller’s guilty plea.


  • Takeaways from the Supreme Court’s hearing on Twitter’s liability for terrorist use of its platform | CNN Business




    (CNN) —

    After back-to-back oral arguments this week, the Supreme Court appears reluctant to hand down the kind of sweeping ruling about liability for terrorist content on social media that some feared would upend the internet.

    On Wednesday, the justices struggled with claims that Twitter contributed to a 2017 ISIS attack in Istanbul by hosting content unrelated to the specific incident. Arguments in that case, Twitter v. Taamneh, came a day after the court considered whether YouTube can be sued for recommending videos created by ISIS to its users.

    The closely watched cases carry significant stakes for the wider internet. An expansion of apps and websites’ legal risk for hosting or promoting content could lead to major changes at sites including Facebook, Wikipedia and YouTube, to name a few.

    For nearly three hours of oral argument, the justices asked attorneys for Twitter, the US government and the family of Nawras Alassaf – a Jordanian citizen killed in the 2017 attack – how to weigh several factors that might determine Twitter’s level of legal responsibility, if any. But while the justices quickly identified what the relevant factors were, they seemed divided on how to analyze them.

    The court’s conservatives appeared more open to Twitter’s arguments that it is not liable under the Anti-Terrorism Act, with Justice Amy Coney Barrett at one point theorizing point-by-point how such an opinion could be written and Justice Neil Gorsuch repeatedly offering Twitter what he believed to be a winning argument about how to read the statute.

    The panel’s liberals, by contrast, seemed uncomfortable with finding that Twitter should face no liability for hosting ISIS content. They pushed back on Twitter’s claims that the underlying law should only lead to liability if the help it gave to ISIS can be linked to the specific terrorist attack that ultimately harmed the plaintiffs.

    Here are the takeaways from Wednesday:

    The justices spent much of the time picking through the text of the Anti-Terrorism Act, the law that Twitter is accused of violating – especially the meaning of the words “knowingly” and “substantial.”

    The law says liability can be established for “any person who aids and abets, by knowingly providing substantial assistance, or who conspires with the person who committed such an act of international terrorism.”

    Justice Sonia Sotomayor seemed unpersuaded by Twitter attorney Seth Waxman’s arguments that Twitter could have been liable if the company were warned that specific accounts were planning a specific attack, but that those were not the facts of the case and Twitter was therefore not liable in the absence of such activity and such warnings.

    Chief Justice John Roberts grappled with the meaning of “substantial” assistance: Hypothetically, he asked, would donating $100 to ISIS suffice, or $10,000?

    “Substantial assistance” would hinge on the degree to which a terror group actually uses a platform such as Twitter to plan, coordinate and carry out a terrorist attack, Waxman said at one point. The existence of some tweets that generally benefited ISIS, he argued, should not be considered substantial assistance.

    The justices alluded to the gravity of the dilemma as they drew analogies to other industries that have grappled with related claims.

    “We’re used to thinking about banks as providing very important services to terrorists,” said Justice Elena Kagan. “Maybe we’re not so used to, but it seems to be true, that various kinds of social media services also provide very important services to terrorists,” the liberal justice said. “If you know you’re providing a very important service to terrorists, why aren’t you [said to be] providing substantial assistance and doing it knowingly?”

    Eric Schnapper, an attorney representing the Alassaf family – who had also argued on behalf of the plaintiffs in Tuesday’s Supreme Court arguments in Gonzalez v. Google – again struggled to answer justices’ questions as they sought to find some limiting principle to constrain the scope of the Anti-Terrorism Act.

    Justice Brett Kavanaugh asked Schnapper to respond to concerns that a ruling finding Twitter liable for the ISIS attack — even when the tweets it hosted had nothing to do with it — would negatively affect charities and humanitarian organizations that might incidentally assist terrorist organizations through their work.

    Schnapper suggested those groups might be insulated from liability due to the law’s “knowledge” requirement, but did not offer the justices a way to draw a bright-line distinction.

    Justice Clarence Thomas hinted at the potential expansiveness of what Schnapper was proposing in calling for Twitter to be held liable for the ISIS tweets.

    “If we’re not pinpointing cause-and-effect or proximate cause for specific things, and you’re focused on infrastructure or just the availability of these platforms, then it would seem that every terrorist attack that uses this platform would also mean that Twitter is an aider and abettor in those instances,” Thomas said.

    “I think in the way that you phrased it, that would probably be, yes,” Schnapper replied, going on to suggest a test involving “remoteness and time, weighed together with volume of activity.”

    Several justices asked the parties to respond to hypotheticals about what liability a business would have for dealing with Osama bin Laden. Their reliance on the terrorist in their examples seemed to get at the “knowing” requirement of the law.

    However, the court is being asked to issue an opinion that will guide lower courts in cases that likely will not involve such high-profile figures.

    Kagan invoked bin Laden’s name when she put forward a hypothetical for US Deputy Solicitor General Edwin Kneedler about a bank that offered services to a known terrorist that were the same services it provided its non-terrorist clients. Kneedler, arguing that Twitter should not be found liable under the anti-terrorist law in this case, said that in that scenario, the bank could be sued under the law.

    Other exchanges during the hearing revolved around the liability for a business that sold bin Laden a cell phone, with Justice Ketanji Brown Jackson asking if the business could be sued even if bin Laden did not use the cell phone for the terrorist attack that injured the plaintiff. Schnapper said that bin Laden would not need to use the cell phone in an attack for the seller to be found liable.

    Gorsuch put forward a theory for why Twitter should prevail in the case but neither Twitter nor the US Justice Department took him up on it.

    Gorsuch gave Waxman a chance to reframe his arguments for why Twitter shouldn’t be liable, based on language in the law suggesting a defendant is liable for assistance provided to a person who commits an act of international terrorism. Gorsuch noted the lawsuit against Twitter doesn’t link Twitter to the three people involved in the 2017 attack on the Istanbul nightclub.

    Waxman declined to fully adopt that view, arguing instead that the “aid and abet” language in the statute should be tied to the terrorist activity that gives rise to a suit.

    When Kneedler was at the podium, Gorsuch offered up the theory again, implying it would be a way for Twitter to avoid liability in this case.

    “It seems to me that that’s a pretty important limitation on aiding and abetting liability and conspiracy liability … that you have to aid an actual person,” Gorsuch said. “It’s not just a pedantic point. It has to do with the idea that you’re singling somebody out, and that is different than just doing your business normally, and that does help limit the scope of the act.”

    Jackson later hypothesized why Twitter and the US government were reluctant to endorse Gorsuch’s interpretation of the law, suggesting it was not the limitation Gorsuch thought it was.

    “I’m wondering whether the concern about that is, if you’re focusing on the person [who committed a terrorist act]… that it seems to take the focus away from the act itself,” she told Kneedler. “You could ‘aid and abet’ a person who committed the act, even if it’s not with respect to that act.”


    The Taamneh case is viewed as a turning point for the future of the internet, because a ruling against Twitter could expose the platform – and numerous other websites – to new lawsuits based on their hosting of terrorist content in spite of their efforts to remove such material.

    While it’s too early to tell how the justices may decide the case, the questioning on Wednesday suggested some members of the court believe Twitter should bear some responsibility for indirectly supporting ISIS in general, even if the company may not have been responsible for the specific attack in 2017 that led to the current case.

    But a key question facing the court is whether the Anti-Terrorism Act is the law that can reach that issue – or alternatively, whether the justices can craft a ruling in such a way that it does.

    Rulings in the cases heard this week are expected by late June.

    This story has been updated with Wednesday’s developments.


  • Two Supreme Court cases this week could upend the entire internet | CNN Business



    Washington (CNN) —

    The Supreme Court is set to hear back-to-back oral arguments this week in two cases that could significantly reshape online speech and content moderation.

    The outcome of the oral arguments, scheduled for Tuesday and Wednesday, could determine whether tech platforms and social media companies can be sued for recommending content to their users or for supporting acts of international terrorism by hosting terrorist content. It marks the Court’s first-ever review of a hot-button federal law that largely protects websites from lawsuits over user-generated content.

    The closely watched cases, known as Gonzalez v. Google and Twitter v. Taamneh, carry significant stakes for the wider internet. An expansion of apps and websites’ legal risk for hosting or promoting content could lead to major changes at sites, including Facebook, Wikipedia and YouTube, to name a few.

    The litigation has produced some of the most intense rhetoric in years from the tech sector about the potential impact on the internet’s future. US lawmakers, civil society groups and more than two dozen states have also jumped into the debate with filings at the Court.

    At the heart of the legal battle is Section 230 of the Communications Decency Act, a nearly 30-year-old federal law that courts have repeatedly said provides broad protections to tech platforms but that has since come under scrutiny alongside growing criticism of Big Tech’s content moderation decisions.

    The law has critics on both sides of the aisle. Many Republican officials allege that Section 230 gives social media platforms a license to censor conservative viewpoints. Prominent Democrats, including President Joe Biden, have argued Section 230 prevents tech giants from being held accountable for spreading misinformation and hate speech.

    In recent years, some in Congress have pushed for changes to Section 230 that might expose tech platforms to more liability, along with proposals to amend US antitrust rules and other bills aimed at reining in dominant tech platforms. But those efforts have largely stalled, leaving the Supreme Court as the likeliest source of change in the coming months to how the United States regulates digital services.

    Rulings in the cases are expected by the end of June.

    The case involving Google zeroes in on whether it can be sued because of its subsidiary YouTube’s algorithmic promotion of terrorist videos on its platform.

    According to the plaintiffs in the case — the family of Nohemi Gonzalez, who was killed in a 2015 ISIS attack in Paris — YouTube’s targeted recommendations violated a US antiterrorism law by helping to radicalize viewers and promote ISIS’s worldview.

    The allegation seeks to carve out content recommendations so that they do not receive protections under Section 230, potentially exposing tech platforms to more liability for how they run their services.

    Google and other tech companies have said that that interpretation of Section 230 would increase the legal risks associated with ranking, sorting and curating online content, a basic feature of the modern internet. Google has claimed that in such a scenario, websites would seek to play it safe by either removing far more content than is necessary, or by giving up on content moderation altogether and allowing even more harmful material on their platforms.

    Friend-of-the-court filings by Craigslist, Microsoft, Yelp and others have suggested that the stakes are not limited to algorithms and could also end up affecting virtually anything on the web that might be construed as making a recommendation. That might mean even average internet users who volunteer as moderators on various sites could face legal risks, according to a filing by Reddit and several volunteer Reddit moderators. Oregon Democratic Sen. Ron Wyden and former California Republican Rep. Chris Cox, the original co-authors of Section 230, argued to the Court that Congress’ intent in passing the law was to give websites broad discretion to moderate content as they saw fit.

    The Biden administration has also weighed in on the case. In a brief filed in December, it argued that Section 230 does protect Google and YouTube from lawsuits “for failing to remove third-party content, including the content it has recommended.” But, the government’s brief argued, those protections do not extend to Google’s algorithms because they represent the company’s own speech, not that of others.

    The second case, Twitter v. Taamneh, will decide whether social media companies can be sued for aiding and abetting a specific act of international terrorism when the platforms have hosted user content that expresses general support for the group behind the violence without referring to the specific terrorist act in question.

    The plaintiffs in the case — the family of Nawras Alassaf, who was killed in an ISIS attack in Istanbul in 2017 — have alleged that social media companies including Twitter had knowingly aided ISIS in violation of a US antiterrorism law by allowing some of the group’s content to persist on their platforms despite policies intended to limit that type of content.

    Twitter has said that just because ISIS happened to use the company’s platform to promote itself does not constitute Twitter’s “knowing” assistance to the terrorist group, and that in any case the company cannot be held liable under the antiterror law because the content at issue in the case was not specific to the attack that killed Alassaf. The Biden administration, in its brief, has agreed with that view.

    Twitter had also previously argued that it was immune from the suit thanks to Section 230.

    Other tech platforms such as Meta and Google have argued in the case that if the Court finds the tech companies cannot be sued under US antiterrorism law, at least under these circumstances, it would avoid a debate over Section 230 altogether in both cases, because the claims at issue would be tossed out.

    In recent years, however, several Supreme Court justices have shown an active interest in Section 230, and have appeared to invite opportunities to hear cases related to the law. Last year, Supreme Court Justices Samuel Alito, Clarence Thomas and Neil Gorsuch wrote that new state laws, such as one in Texas that would force social media platforms to host content they would rather remove, raise questions of “great importance” about “the power of dominant social media corporations to shape public discussion of the important issues of the day.”

    A number of petitions are currently pending asking the Court to review the Texas law and a similar law passed by Florida. The Court last month delayed a decision on whether to hear those cases, asking instead for the Biden administration to submit its views.


  • Twitter to charge for SMS two-factor authentication | CNN Business



    New York (CNN) —

    Twitter Blue subscribers will be the platform’s only users able to use text messages as a two-factor authentication method, Twitter announced Friday.

    The change will take place on March 20. Twitter users will have two other ways to authenticate their Twitter log-ins at no cost: an authentication mobile app and a security key.

    Two-factor authentication, or 2FA, requires users to type in their password and then enter a code or security key to access their accounts. It is one of the primary methods for users to keep their Twitter account secure.
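    For readers curious how the free authenticator-app alternative works under the hood: app-generated codes are typically time-based one-time passwords (TOTP, standardized in RFC 6238), derived from a shared secret and the current clock rather than from an SMS delivery. The sketch below, using only Python’s standard library, is illustrative — the `totp` helper and the sample secret are this article’s own example, not Twitter’s implementation.

    ```python
    import base64
    import hmac
    import struct
    import time

    def totp(secret_b32, interval=30, digits=6, now=None):
        """Derive a time-based one-time password (RFC 6238) from a shared secret."""
        key = base64.b32decode(secret_b32, casefold=True)
        # Count how many 30-second windows have elapsed since the Unix epoch.
        counter = int((time.time() if now is None else now) // interval)
        msg = struct.pack(">Q", counter)  # 8-byte big-endian counter
        digest = hmac.new(key, msg, "sha1").digest()
        offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # RFC 6238 test vector: the ASCII secret "12345678901234567890",
    # evaluated at Unix time 59, yields the 8-digit SHA-1 code "94287082".
    secret = base64.b32encode(b"12345678901234567890").decode()
    print(totp(secret, digits=8, now=59))  # → 94287082
    ```

    Because both the phone and the server compute the code locally from the shared secret, nothing travels over the carrier network — which is why authenticator apps sidestep the SMS-interception and SIM-swap abuse Twitter cites.
    
    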

    “While historically a popular form of 2FA, unfortunately we have seen phone-number based 2FA be used – and abused – by bad actors,” the company said in a blog post Friday. “So starting today, we will no longer allow accounts to enroll in the text message/SMS method of 2FA unless they are Twitter Blue subscribers.”

    Twitter Blue, which costs $11 a month for iOS and Android subscribers, adds a blue checkmark to the account of anyone willing to pay for one.

    As of 2021, only 2.6% of Twitter users had a 2FA method enabled – and of those, 74.4% used SMS authentication, a Twitter account security report said.

    Twitter said non-subscribers will have 30 days to disable the text method and enroll in another way to sign in using 2FA. Disabling text message 2FA won’t automatically disassociate the user’s phone number from their account, Twitter said.

    Twitter owner Elon Musk responded “Yup” to a tweet claiming a telecommunications company used bot accounts “to Pump 2FA SMS” and that Twitter was losing $60 million a year “on scam SMS.”
