ReportWire

Tag: youtube.com

  • Facebook and TikTok are approving ads with ‘blatant’ misinformation about voting in midterms, researchers say | CNN Business



    New York (CNN Business) —

    Facebook and TikTok failed to block advertisements with “blatant” misinformation about when and how to vote in the US midterms, as well as about the integrity of the voting process, according to a new report from human rights watchdog Global Witness and the Cybersecurity for Democracy Team (C4D) at New York University.

    In an experiment, the researchers submitted 20 ads with inaccurate claims to Facebook, TikTok and YouTube. The ads were targeted to battleground states such as Arizona and Georgia. While YouTube was able to detect and reject every test submission and suspend the channel used to post them, the other two platforms fared noticeably worse, according to the report.

    TikTok approved 90% of ads that contained blatantly false or misleading information, the researchers found. Facebook, meanwhile, approved a “significant number,” according to the report.

    The ads, posted in both English and Spanish, included information falsely stating that voting days would be extended and that social media accounts could double as a means of voter verification. The ads also contained claims designed to discourage voter turnout, such as claims that the election results could be hacked or the outcome was pre-decided.

    The researchers withdrew any approved ads before they ran, so the ads containing misinformation were never shown to users.

    “YouTube’s performance in our experiment demonstrates that detecting damaging election disinformation isn’t impossible,” Laura Edelson, co-director of NYU’s C4D team, said in a statement with the report. “But all the platforms we studied should have gotten an ‘A’ on this assignment. We call on Facebook and TikTok to do better: stop bad information about elections before it gets to voters.”

    In response to the report, a spokesperson for Facebook-parent Meta said the tests “were based on a very small sample of ads, and are not representative given the number of political ads we review daily across the world.” The spokesperson added: “Our ads review process has several layers of analysis and detection, both before and after an ad goes live.”

    A TikTok spokesperson said the platform “is a place for authentic and entertaining content which is why we prohibit and remove election misinformation and paid political advertising from our platform. We value feedback from NGOs, academics, and other experts which helps us continually strengthen our processes and policies.”

    Google did not immediately respond to CNN’s requests for comment.

    While limited in scope, the experiment could renew concerns, with just weeks to go before the midterms, about the steps taken by some of the biggest social platforms to combat not just misinformation about candidates and issues but also seemingly clear-cut misinformation about the process of voting itself.

    TikTok, which has grown in influence and drawn increasing scrutiny in US politics over recent election cycles, launched an Elections Center in August to “connect people who engage with election content to authoritative information,” including guidance on where and how to vote, and added labels to clearly identify content related to the midterm elections, according to a company blog post.

    Last month, TikTok took additional steps to safeguard the integrity of political content ahead of the midterms. The platform began to require “mandatory verification” for political accounts based in the United States and rolled out a blanket ban on all political fundraising.

    “As we have set out before, we want to continue to develop policies that foster and promote a positive environment that brings people together, not divide them,” Blake Chandlee, President of Global Business Solutions at TikTok, said in a blog post at the time. “We do that currently by working to keep harmful misinformation off the platform, prohibiting political advertising, and connecting our community with authoritative information about elections.”

    Meta said in September that its midterm plan would include removing false claims as to who can vote and how, as well as calls for violence linked to an election. But Meta stopped short of banning claims of rigged or fraudulent elections, and the company told The Washington Post those types of claims will not be removed.

    Google also took steps in September to protect against election misinformation, elevating trustworthy information and displaying it more prominently across services including search and YouTube.

    The big social media companies typically rely on a mix of artificial intelligence systems and human moderators to vet the vast amount of posts on their platforms. But even with similar approaches and objectives, the study is a reminder that the platforms can differ wildly in their content enforcement actions.

    According to the researchers, the only ad they submitted that TikTok rejected contained claims that voters had to have received a Covid-19 vaccination in order to vote. Facebook, on the other hand, accepted that submission.


  • Dream, the Minecraft-playing YouTube star, finally reveals his face | CNN Business



    New York (CNN Business) —

    Dream, a YouTube star with more than 30 million subscribers, has finally revealed his face after hiding behind a smiley-face mask for years.

    “My name is Clay, maybe you’ve heard of me, maybe not,” he posted in a YouTube video Sunday night. “Maybe you clicked on this video out of pure curiosity and you don’t care who I am.”

    He explained that he revealed his face because he was meeting a friend in person for the first time after chatting online for several years. He added that he wanted to live a more public life and “start doing things,” including meeting fellow internet creators.

    The face reveal came about one minute and thirty seconds into the five-minute video, which amassed 16 million views in less than 12 hours.

    Dream has been an internet presence since 2014 and is best known as a Minecraft gamer. He has occasionally drawn controversy for cheating at the game and said he has received hate online. At one point the FBI reached out to him about a “threat” on his life.

    “I feel like I got so desensitized to hate, that I find it funny,” he said in the clip.

    He ended the video by saying that his channel is “living proof that anyone can do anything” and that he doesn’t want his face reveal to take away from that.


  • YouTube removed video of Robert F. Kennedy, Jr. for violating vaccine misinformation policy | CNN Business



    New York (CNN) —

    YouTube said on Monday that it had removed a video of presidential hopeful Robert F. Kennedy, Jr. being interviewed by podcast host Jordan Peterson for violating its policy prohibiting vaccine misinformation.

    A YouTube spokesperson told CNN that the platform removed the video from Peterson’s channel because it does not allow “content that alleges that vaccines cause chronic side effects, outside of rare side effects that are recognized by health authorities.”

    The platform’s latest move comes as Kennedy, an environmental lawyer and anti-vaccine activist, has gained more mainstream attention for his views and recently had his Instagram account reinstated as a result of his long-shot presidential campaign.

    YouTube began cracking down broadly on vaccine misinformation in 2021, following an earlier policy preventing false or misleading claims about Covid-19. At the time, YouTube said it would remove the channels of “several well-known vaccine misinformation spreaders,” including one belonging to the Children’s Health Defense, a group affiliated with Kennedy. (The YouTube channel for Kennedy’s presidential campaign remains active.)

    Under its policy, YouTube removes false claims about currently administered vaccines that the World Health Organization and local authorities have approved and confirmed to be safe.

    Although YouTube removed the video, it remains available on Twitter, illustrating the fractured approach to vaccine misinformation across the internet as Kennedy’s campaign gets underway.

    In a tweet on Sunday, Kennedy noted YouTube’s removal of the video, asking, “What do you think … Should social media platforms censor presidential candidates?”

    Kennedy also gained attention for his anti-vaccine views on a different podcast this week.

    On Monday, prominent vaccine scientist Peter Hotez said he was accosted outside of his home after a Twitter exchange with podcaster Joe Rogan, who challenged Hotez to debate Kennedy over the weekend.

    Hotez had tweeted in support of a Vice article criticizing Spotify’s handling of vaccine misinformation in an interview with Kennedy on Rogan’s show. After Twitter owner Elon Musk and hedge fund manager Bill Ackman weighed in, Hotez said he was “stalked in front of my home by a couple of antivaxxers.”

    Kennedy suggested to Hotez that they have a “respectful, congenial, informative debate.” Hotez said he would go on Rogan’s podcast but would not debate Kennedy.


  • Arkansas governor signs sweeping bill imposing a minimum age limit for social media usage | CNN Business



    Washington (CNN) —

    Arkansas Gov. Sarah Huckabee Sanders has signed a sweeping bill imposing a minimum age limit for social media usage, in the latest example of states taking more aggressive steps intended to protect teens online.

    But even as Sanders signed the bill into law on Wednesday afternoon, the legislation appeared to contain vast loopholes and exemptions benefiting companies that lobbied on the bill, raising questions about how much of the industry it truly covers.

    The legislation, known as the Social Media Safety Act and set to take effect in September, is aimed at giving parents more control over their kids’ social media usage, according to lawmakers. It defines social media companies as any online forum that lets users create public profiles and interact with each other through digital content.

    It requires companies that operate those services to verify the ages of all new users and, if the users are under 18 years old, to obtain a parent’s consent before allowing them to create an account. To perform the age checks, the law relies on third-party companies to verify users’ personal information, such as a driver’s license or photo ID.

    “While social media can be a great tool and a wonderful resource, it can have a massive negative impact on our kids,” Sanders said at a press conference before signing the bill.

    Utah finalized a similar law last month, raising concerns among some users and advocacy groups that the legislation could make user data less secure and internet access less private, and could infringe on younger users’ basic rights.

    The push by states to legislate on social media comes after years of mounting scrutiny of the industry and claims that it has harmed users’ well-being and mental health, particularly among teens.

    Despite its seemingly universal scope, however, the new law, also known as SB396, includes numerous carveouts for certain types of digital services and, in some cases, individual companies. And although its sponsors have said the law is specifically meant to apply to certain platforms, including TikTok, parts of the legislative language appear to have the exact opposite effect.

    In the final days of negotiation over the bill, Arkansas lawmakers approved an amendment that created several categorical exemptions from the age verification requirements. Media companies that “exclusively” offer subscription content; social media platforms that permit users to “generate short video clips of dancing, voice overs, or other acts of entertainment”; and companies that “exclusively offer” video gaming-focused social networking features were exempted.

    Another amendment carved out companies that sell cloud storage services, business cybersecurity services or educational technology and that simultaneously derive less than 25% of their total revenue from running a social media platform.

    Sen. Tyler Dees, a lead co-sponsor of the legislation, explained in remarks on the Arkansas senate floor on April 6 that the exemptions and tweaks to the bill, some of which he said were made in consultation with Apple, Meta and Google, were intended to shield non-social media services from the bill’s age requirements and to focus attention on new accounts created by children, not existing adult accounts.

    “There’s other services that Google offers … like cloud storage, et cetera,” Dees said. “So that’s really the intent of carving out — like LinkedIn, that is a social – I’m sorry, that is a business networking site, and so that’s the intent of those bills.”

    Microsoft-owned LinkedIn is apparently exempt from SB396 under a provision that carves out companies that provide “career development opportunities, including professional networking, job skills, learning certifications, and job posting and application services.”

    Other lawmakers have questioned whether the legislation — which has now become law — exempts a giant of the social media industry: YouTube, whose auto-play features and algorithmic recommendation engine have been accused of promoting extremism and radicalizing viewers.

    The confusion over YouTube appears to stem from the carveout for businesses that offer cloud storage and that make less than 25% of their revenue from social media.

    What is unclear is whether YouTube is subject to SB396 because it is a distinct company within Google whose revenue comes almost entirely from operating a social media platform, or whether it is not covered because YouTube is a part of Google and Google is exempt because it derives only a small share of its revenues from YouTube.

    In response to questions from CNN, Dees said SB396 targets platforms including Facebook, Instagram and TikTok, but omitted any mention of Google and declined to answer whether YouTube specifically would be covered by the law.

    “The purpose of this bill was to empower parents and protect kids from social media platforms, like Facebook, Instagram, TikTok and Snapchat,” Dees said in a statement. “We worked with stakeholders to ensure that email, text messaging, video streaming, and networking websites were not covered by the bill.”

    In remarks at Wednesday’s bill signing, Sanders told reporters that Google and Amazon are exempted from the law, implying that YouTube will not be subject to the age verification requirements imposed on other major social media sites.

    Meanwhile, Dees’ statement appeared to contradict the language in SB396 that purports to exempt any company that “allows a user to generate short video clips of dancing, voice overs, or other acts of entertainment in which the primary purpose is not educational or informative” — content that can be commonly found on TikTok, Snapchat and the other social media platforms Dees named.

    According to a Meta spokesperson, “We want teens to be safe online. We’ve developed more than 30 tools to support teens and families, including tools that let parents and teens work together to limit the amount of time teens spend on Instagram, and age-verification technology that helps teens have age-appropriate experiences.”

    Meta “automatically set teens’ accounts to private when they join Instagram, we’ve further restricted the options advertisers have to reach teens, as well as the information we use to show ads to teens… and we don’t allow content that promotes suicide, self-harm or eating disorders,” according to the spokesperson, who added: “We’ll continue to work closely with experts, policymakers and parents on these important issues.”

    Spokespeople for Snapchat, TikTok and YouTube didn’t immediately respond to a request for comment.


  • YouTube rolls out new policies for eating disorder content | CNN Business



    New York (CNN) —

    YouTube on Tuesday announced a series of changes to how it deals with content related to eating disorders.

    The platform has long removed content that glorifies or promotes eating disorders, and YouTube’s Community Guidelines will now also prohibit content featuring behaviors that at-risk users could be inspired to imitate, such as purging after eating or extreme calorie counting. For videos that feature such “imitable behaviors” in the context of recovery, YouTube will allow the content to remain on the site but restrict it to logged-in users over the age of 18.

    The policy changes, developed in consultation with the National Eating Disorder Association and other nonprofit organizations, aim to ensure “that YouTube creates space for community recovery and resources, while continuing to protect our viewers,” YouTube’s Global Head of Healthcare Garth Graham told CNN in an interview.

    “We’re thinking about how to thread the needle in terms of essential conversations and information that people might have,” Graham said, “allowing people to hear stories about recovery and allowing people to hear educational information but also realizing that the display of that information … can serve as a trigger as well.”

    The changes come as social media platforms have faced increased scrutiny for their effects on the mental health of users, especially young people. In 2021, lawmakers called out Instagram and YouTube for promoting accounts featuring content depicting extreme weight loss and dieting to young users. And TikTok has faced criticism from an online safety group that claimed the app served eating disorder-related content to teens (although the platform pushed back against the research). The changes also follow several updates by YouTube in recent years to how it handles misinformation about medical issues such as abortion and vaccines.

    In addition to removing or age-restricting some videos, YouTube plans to add panels pointing viewers to crisis resources under eating disorder-related content in nine countries, with plans to expand to more areas. And when a creator’s video is removed for violating its eating disorder policy, Graham said, YouTube will send the creator resources about how to make content that’s less likely to harm other viewers.

    As with many social media policies, however, the hard part often isn’t introducing a rule but enforcing it, and YouTube could struggle to discern which videos are, for example, pro-recovery. YouTube said it will roll out enforcement of the policy globally in the coming weeks and plans to use both human and automated moderation to review videos and their context.

    “These are complicated, societal public health [issues],” Graham said, “I want never to profess perfection, but to understand that we have to be proactive, we have to be thoughtful … it’s taken a while to get here because we wanted to articulate a process that had different layers and understood the challenges.”
