ReportWire

Tag: iab-social networking

  • Parents urged to delete their kids’ social media accounts ahead of possible Israeli hostage videos | CNN Business

    New York (CNN) —

    Schools in Israel, the UK and the US are advising parents to delete their children’s social media apps over concerns that Hamas militants will broadcast or disseminate disturbing videos of hostages who have been seized in recent days.

    A Tel Aviv school’s parents’ association said it expects videos of hostages “begging for their lives” to surface on social media. In a message to parents, shared with CNN by a mother of children at a high school in Tel Aviv, the association asked parents to remove apps such as TikTok from their children’s phones.

    “We cannot allow our kids to watch this stuff. It is also difficult, furthermore – impossible – to contain all this content on social media,” the parents’ association said. “Thank you for your understanding and cooperation.”

    Hamas has warned that it will post murders of hostages on social media if Israel targets people in Gaza without warning.

    There are additional concerns that terrorists will exploit social media algorithms to specifically target such videos to followers of Jewish or Israeli influencers in an effort to wage psychological warfare on Israelis and Jews and their supporters globally.

    During the onslaught on Saturday, armed Hamas militants poured across the heavily fortified border into Israel and took as many as 150 hostages, including Israeli army officers, back to Gaza. The surprise attacks killed at least 1,200 people, according to the Israel Defense Forces, and injured thousands more.

    Since Israel began airstrikes on the Palestinian enclave Saturday, at least 1,055 people have been killed in Gaza, including hundreds of children, women, and entire families, according to the Palestinian health ministry. It said a further 5,184 have been injured, as of Wednesday.

    As the war rages on, some Jewish schools in the US are also asking parents not to share related videos or photos that may surface, and to prevent children – and themselves – from watching them. The schools are also advising community members to delete their social media apps during this time.

    “Together with other Jewish day schools, we are warning parents to disable social media apps such as Instagram, X, and TikTok from their children’s phones,” the head of a school in New Jersey wrote in an email. “Graphic and often misleading information is flowing freely, augmenting the fears of our students. … Parents should discuss the dangers of these platforms and ask their children on a daily basis about what they are seeing, even if they have deleted the most unfiltered apps from their phones.”

    Another school in the UK said it asked students to delete their social media apps during a safety assembly.

    TikTok, Instagram and X – formerly known as Twitter – did not immediately respond to requests for comment on how they are combating the increase of videos being posted online and for comment on schools asking parents to delete these apps.

    But X said on its platform it has experienced an increase in daily active users in the conflict area, and that its escalation teams have “actioned tens of thousands of posts for sharing graphic media, violent speech, and hateful conduct.” It did not respond to a request to comment further or to define “actioned.”

    “We’re also continuing to proactively monitor for antisemitic speech as part of all our efforts,” X’s safety team said. “Plus we’ve taken action to remove several hundred accounts attempting to manipulate trending topics.”

    The company added it remains “laser focused” on enforcing the site’s rules and reminded users they can limit sensitive media they may encounter by visiting the “Content you see” option in Settings.

    Still, misinformation continues to run rampant on social media platforms, including X.

    A post viewed more than 500,000 times – featuring the hashtag #PalestineUnderAttack – claimed to show an airplane being shot down. But the clip was from the video game Arma 3, as was later noted in a “community note” appended to the post.

    Another video, purported to show Israeli generals captured by Hamas fighters, had been viewed more than 1.7 million times by Monday. The video, however, instead shows the detention of separatists in Azerbaijan.

    On Tuesday, the European Union warned Elon Musk of “penalties” for disinformation circulating on X amid the Israel-Hamas war.

    The EU also informed Meta CEO Mark Zuckerberg on Wednesday of a disinformation surge on its platforms – which include Facebook – and demanded the company respond within 24 hours with how it plans to combat the issue.

    In an Instagram story on Tuesday, Zuckerberg called the attack “pure evil” and said his focus “remains on the safety of our employees and their families in Israel and the region.”


  • X appears to slow load times for links to several news outlets and rival platforms | CNN Business

    New York (CNN) —

    Loading times for links posted to X, the social media platform formerly known as Twitter, that pointed to some Twitter competitors and news media sites appeared to be delayed or throttled for much of Tuesday.

    Links posted to X that directed to sites including the New York Times, Reuters, Facebook, Substack and X competitors Bluesky and Threads took around 5 seconds to load — a notable slowdown from the typically near-instantaneous loading times, according to observations by CNN reporters. Many other sites, such as NBA.com, CNN and retailer Target, did not appear to be affected by the issue.

    The delays were first reported by users of the technology forum Hacker News.

    The reason for the delays in loading links to some sites was not clear, and X did not respond to multiple requests for comment from CNN. The platform has been plagued by technical issues since Musk bought it last year and laid off the majority of the staff. The issue seemed to have resolved for some users by Tuesday afternoon.

    However, the delays affected the sites of rival platforms, as well as news outlets that X owner Elon Musk has previously criticized. Musk earlier this year feuded with the New York Times over its unwillingness to pay for his platform’s new paid verification program, and he has separately called for the outlet to be “cancelled.”

    The apparent delay in visiting links to the New York Times was easy to verify with simple commands on a computer. Will Dormann, a cybersecurity researcher, plugged the New York Times website into a basic command program on his Mac and compared the loading time for that website with that of a dummy website. The load time for the New York Times site was about 4.5 seconds longer, Dormann told CNN Tuesday.

    X, like other platforms, uses a link-shortener service to collect information on users who click on links shared on the platform. When a link for a New York Times article plugged into X’s link-shortener takes far longer to load than other websites using the same link-shortening service, “this is the clear indicator that there are server-side [at the X-operated shortener] shenanigans going on,” Dormann told CNN.
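    The kind of comparison Dormann describes can be reproduced with a few lines of code that time how long a URL takes to fetch and fully download. The sketch below is illustrative only, not his exact method, and the shortened links in the final comment are hypothetical placeholders.

```python
import time
import urllib.request

def load_time(url: str, timeout: float = 30.0) -> float:
    """Return the wall-clock seconds taken to fetch and fully read a URL."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()  # force the full body to download before stopping the clock
    return time.perf_counter() - start

# Hypothetical comparison: time a shortened article link against a control
# link served by the same shortener. A consistently large gap points to a
# server-side delay at the shortener rather than at the destination site.
# delta = load_time("https://t.co/article") - load_time("https://t.co/control")
```

    Running each measurement several times and comparing medians would reduce noise from ordinary network jitter.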

    The New York Times said in a statement to CNN that it had observed the delay, but, “We have not received any explanation from the platform about this move.”

    “While we don’t know the rationale behind the application of this time delay, we would be concerned by targeted pressure applied to any news organization for unclear reasons,” it said in the statement. “The mission of The New York Times is to report the news impartially without fear or favor, and we’ll continue to do so, undeterred by any attempts to hinder this.”

    Meta, the parent company of Facebook and Threads, did not respond to a request for comment on the delay. But CEO Mark Zuckerberg responded to a post about the issue on Threads with a thinking face emoji.

    Musk and Zuckerberg have in recent weeks been making plans to take one another on in a cage fight, although Zuckerberg this week signaled that the fight may be off because he believes Musk “isn’t serious.” “Elon won’t confirm a date, then says he needs surgery, and now asks to do a practice round in my backyard instead,” Zuckerberg wrote on Threads Sunday. Musk on Monday appeared to respond by suggesting in a series of tweets that he might show up at Zuckerberg’s home to fight anyway.

    Substack cofounders Chris Best, Hamish McKenzie and Jairaj Sethi said in a statement to CNN that they hoped X would reverse the delay but that “Substack was created in direct response to this kind of behavior by social media companies.”

    “Writers cannot build sustainable businesses if their connection to their audience depends on unreliable platforms that have proven they are willing to make changes that are hostile to the people who use them,” the Substack cofounders said.

    Reuters said in a statement that it was aware of reports “of a delay in opening links to Reuters stories on X. We are looking into the matter.”

    Bluesky did not immediately respond to a request for comment about the link delay.

    X briefly sparked backlash in December over a decision to ban links to rival social media services, including Facebook, Instagram and Twitter alternatives like Mastodon, which was later reversed. The platform has also faced a series of outages and technical issues in recent months that have affected users’ ability to read tweets, view photos and click through links after Musk slashed the company’s staff and cut back on infrastructure spending.

    -CNN’s Jon Passantino and Oliver Darcy contributed to this report.


  • Elon Musk should be forced to testify on X’s ‘chaotic environment,’ US regulator tells court | CNN Business

    Washington (CNN) —

    Elon Musk should be forced to testify in an expansive US government probe of X, the company formerly known as Twitter, the US government said.

    The government said mass layoffs and other decisions Musk made raised questions about X’s ability to comply with the law and to protect users’ privacy.

    The US government’s attempt to compel Musk’s testimony is the latest turn in an investigation that predates Musk’s acquisition of X and has intensified because of Musk’s own actions, according to a court filing by the Justice Department on behalf of the Federal Trade Commission.

    The court filing dated Monday cites depositions with multiple former X executives, including its former chief information security officer and former chief privacy officer, who testified that a barrage of layoffs and resignations following Musk’s $44 billion takeover may have hindered X from meeting its security obligations under a 2011 FTC consent agreement.

    X and its outside attorney didn’t immediately respond to a request for comment.

    According to testimony cited in the filing, there were so few employees left after the departures that anywhere from 37% to 50% of the company’s security program lacked effective management and oversight, with no one available to take responsibility for those controls. Other planned upgrades to the company’s security program were “impaired,” the filing said, citing a deposition by the former chief information security officer, Lea Kissner.

    In another example, Musk personally tried to rush the rollout of Twitter Blue, the company’s paid subscription service, the filing said. That forced the company’s security team to bypass the required security and privacy checks that were a part of Twitter’s own policies and that had been mandated in the FTC order, according to the testimony of Damien Kieran, the former chief privacy officer.

    The filing also alleges that Musk’s move to grant several journalists access to internal company records — access that would culminate in the so-called Twitter Files claiming to show evidence of politically motivated censorship — initially involved a plan that could potentially have led to the exposure of private user data in violation of the FTC order.

    According to the filing, Musk’s plan originally called for providing access through a dedicated company laptop with “elevated privileges beyond just what a[n] average employee might have.”

    “Longtime information security employees intervened and implemented safeguards to mitigate the risks,” the filing said, but even then, the former employees testified, the process raised doubts about Musk’s commitment to privacy and security.

    X has moved to block Musk from being forced to testify and has asked a federal court to invalidate the entire FTC order requiring it to safeguard user privacy, accusing the FTC of asking too many questions in its probe.

    But in its filing, the US government said its interest in Musk’s testimony is well-justified based on the appearance of a “chaotic environment” at X driven by “sudden, radical changes at the company” following Musk’s acquisition.

    “The FTC had every reason to seek information about whether these developments signaled a lapse in X Corp.’s compliance” with the 2011 order, the filing said. Confirmed violations of the FTC order could lead to billions of dollars in fines for X, as well as potential legal ramifications for individual executives such as Musk if they are deemed personally responsible for them.

    The FTC investigation traces back to bombshell allegations — raised by Twitter’s former security chief Peiter “Mudge” Zatko and predating Musk’s acquisition — that for years Twitter had failed to live up to its legally binding commitments to the FTC to protect user privacy and security. Those allegations were first reported last year by CNN and The Washington Post.

    The investigation has proven politically charged as Musk — and his allies including Republicans on the House Judiciary Committee — have responded to the probe by publicly accusing the FTC of harassment and overreach.


  • EU officials warn TikTok over Israel-Hamas disinformation | CNN Business

    (CNN) —

    EU officials warned TikTok Thursday about “illegal content and disinformation” on its platform linked to the war between Hamas and Israel, calling for CEO Shou Zi Chew to respond within 24 hours.

    In a letter to Chew, European Commissioner Thierry Breton said failure to comply with European Union laws around content moderation could result in penalties.

    It is the third such letter Breton has sent to large social media platforms this week, after he sent similar warnings to X, the platform formerly known as Twitter, and Meta.

    In August, a recently passed EU law known as the Digital Services Act went into effect for large online platforms including the companies Breton addressed this week. The law sets out specific obligations for social media companies to protect user privacy and safety.

    Since the war began, Breton wrote, TikTok has reportedly been used to spread graphic videos and misleading content on the platform.

    “I therefore invite you to urgently step up your efforts and ensure your systems are effective, and report on the crisis measures taken to my team,” Breton wrote in the letter, which he shared on X.

    TikTok didn’t immediately respond to a request for comment.


  • Bill Gates, Elon Musk and Mark Zuckerberg meeting in Washington to discuss future AI regulations | CNN Business

    Washington (CNN) —

    Coming out of a three-hour Senate hearing on artificial intelligence, Elon Musk, the head of a handful of tech companies, summarized the grave risks of AI.

    “There’s some chance – above zero – that AI will kill us all. I think it’s low but there’s some chance,” Musk told reporters. “The consequences of getting AI wrong are severe.”

    But he also said the meeting “may go down in history as being very important for the future of civilization.”

    The session, organized by Senate Majority Leader Chuck Schumer, brought together high-profile tech CEOs, civil society leaders and more than 60 senators. The first of nine planned sessions, it aims to develop consensus as the Senate prepares to draft legislation regulating the fast-moving artificial intelligence industry. The group included the CEOs of Meta, Google, OpenAI, Nvidia and IBM.

    All the attendees raised their hands — indicating “yes” — when asked whether the federal government should oversee AI, Schumer told reporters Wednesday afternoon. But consensus on what that role should be and specifics on legislation remained elusive, according to attendees. 

    Benefits and risks

    Bill Gates spoke of AI’s potential to feed the hungry and one unnamed attendee called for spending tens of billions on “transformational innovation” that could unlock AI’s benefits, Schumer said.

    The challenge for Congress is to promote those benefits while mitigating the societal risks of AI, which include the potential for technology-based discrimination, threats to national security and even, as X owner Musk said, “civilizational risk.”

    “You want to be able to maximize the benefits and minimize the harm,” Schumer said. “And that will be our difficult job.”

    Senators emerging from the meeting said they heard a broad range of perspectives, with representatives from labor unions raising the issue of job displacement and civil rights leaders highlighting the need for an inclusive legislative process that provides the least powerful in society a voice.

    Most agreed that AI could not be left to its own devices, said Washington Democratic Sen. Maria Cantwell.

    “I thought Satya Nadella from Microsoft said it best: ‘When it comes to AI, we shouldn’t be thinking about autopilot. You need to have copilots.’ So who’s going to be watching this activity and making sure that it’s done correctly?”

    Other areas of agreement reflected traditional tech industry priorities, such as increasing federal investment in research and development as well as promoting skilled immigration and education, Cantwell added.

    But there was a noticeable lack of engagement on some of the harder questions, she said, particularly on whether a new federal agency is needed to regulate AI.

    “There was no discussion of that,” she said, though several in the meeting raised the possibility of assigning some greater oversight responsibilities to the National Institute of Standards and Technology, a Commerce Department agency.

    Musk told journalists after the event that he thinks a standalone agency to regulate AI is likely at some point.

    “With AI we can’t be like ostriches sticking our heads in the sand,” Schumer said, according to prepared remarks obtained by CNN. He also noted this is “a conversation never before seen in Congress.”

    The push reflects policymakers’ growing awareness of how artificial intelligence, and particularly the type of generative AI popularized by tools such as ChatGPT, could potentially disrupt business and everyday life in numerous ways — ranging from increasing commercial productivity to threatening jobs, national security and intellectual property.

    The high-profile guests trickled in shortly before 10 a.m., with Meta CEO Mark Zuckerberg pausing to chat with Nvidia CEO Jensen Huang outside the Russell Senate Office Building’s Kennedy Caucus Room. Google CEO Sundar Pichai was seen huddling with Delaware Democratic Sen. Chris Coons, while X owner Musk quickly swept by a mass of cameras with a quick wave to the crowd. Inside, Musk was seated at the opposite end of the room from Zuckerberg, in what is likely the first time that the two men have shared a room since they began challenging each other to a cage fight months ago.

    Elon Musk, CEO of X, the company formerly known as Twitter, left, and Alex Karp, CEO of the software firm Palantir Technologies, take their seats as Senate Majority Leader Chuck Schumer, D-N.Y., convenes a closed-door gathering of leading tech CEOs to discuss the priorities and risks surrounding artificial intelligence and how it should be regulated, at the Capitol in Washington, Wednesday, Sept. 13, 2023.

    The session at the US Capitol in Washington also gave the tech industry its most significant opportunity yet to influence how lawmakers design the rules that could govern AI.

    Some companies, including Google, IBM, Microsoft and OpenAI, have already offered their own in-depth proposals in white papers and blog posts that describe layers of oversight, testing and transparency.

    IBM’s CEO, Arvind Krishna, argued in the meeting that US policy should regulate risky uses of AI, as opposed to just the algorithms themselves.

    “Regulation must account for the context in which AI is deployed,” he said, according to his prepared remarks.

    Executives such as OpenAI CEO Sam Altman previously wowed some senators by publicly calling for new rules early in the industry’s lifecycle, which some lawmakers see as a welcome contrast to the social media industry that has resisted regulation.

    Clement Delangue, co-founder and CEO of the AI company Hugging Face, tweeted last month that Schumer’s guest list “might not be the most representative and inclusive,” but that he would try “to share insights from a broad range of community members, especially on topics of openness, transparency, inclusiveness and distribution of power.”

    Civil society groups have voiced concerns about AI’s possible dangers, such as the risk that poorly trained algorithms may inadvertently discriminate against minorities, or that they could ingest the copyrighted works of writers and artists without compensation or permission. Some authors have sued OpenAI over those claims, while others have asked in an open letter to be paid by AI companies.

    News publishers such as CNN, The New York Times and Disney are some of the content producers who have blocked ChatGPT from using their content. (OpenAI has said exemptions such as fair use apply to its training of large language models.)

    “We will push hard to make sure it’s a truly democratic process with full voice and transparency and accountability and balance,” said Maya Wiley, president and CEO of the Leadership Conference on Civil and Human Rights, “and that we get to something that actually supports democracy; supports economic mobility; supports education; and innovates in all the best ways and ensures that this protects consumers and people at the front end — and just not try to fix it after they’ve been harmed.”

    The concerns reflect what Wiley described as “a fundamental disagreement” with tech companies over how social media platforms handle misinformation, disinformation and speech that is either hateful or incites violence.

    American Federation of Teachers President Randi Weingarten said America can’t make the same mistake with AI that it did with social media. “We failed to act after social media’s damaging impact on kids’ mental health became clear,” she said in a statement. “AI needs to supplement, not supplant, educators, and special care must be taken to prevent harm to students.”

    Navigating those diverse interests will be Schumer, who along with three other senators — South Dakota Republican Sen. Mike Rounds, New Mexico Democratic Sen. Martin Heinrich and Indiana Republican Sen. Todd Young — is leading the Senate’s approach to AI. Earlier this summer, Schumer held three informational sessions for senators to get up to speed on the technology, including one classified briefing featuring presentations by US national security officials.

    Wednesday’s meeting with tech executives and nonprofits marked the next stage of lawmakers’ education on the issue before they get to work developing policy proposals. In announcing the series in June, Schumer emphasized the need for a careful, deliberate approach and acknowledged that “in many ways, we’re starting from scratch.”

    “AI is unlike anything Congress has dealt with before,” he said, noting the topic is different from labor, healthcare or defense. “Experts aren’t even sure which questions policymakers should be asking.”

    Rounds said hammering out the specific scope of regulations will fall to Senate committees. Schumer added that the goal — after hosting more sessions — is to craft legislation over “months, not years.”

    “We’re not ready to write the regs today. We’re not there,” Rounds said. “That’s what this is all about.”

    A smattering of AI bills have already emerged on Capitol Hill and seek to rein in the industry in various ways, but Schumer’s push represents a higher-level effort to coordinate Congress’s legislative agenda on the issue.

    New AI legislation could also serve as a potential backstop to voluntary commitments that some AI companies made to the Biden administration earlier this year to ensure their AI models undergo outside testing before they are released to the public.

    But even as US lawmakers prepare to legislate by meeting with industry and civil society groups, they are already months if not years behind the European Union, which is expected to finalize a sweeping AI law by year’s end that could ban the use of AI for predictive policing and restrict how it can be used in other contexts.

    A bipartisan pair of US senators sharply criticized the meeting, saying the process is unlikely to produce results and does not do enough to address the societal risks of AI.

    Connecticut Democratic Sen. Richard Blumenthal and Missouri Republican Sen. Josh Hawley each spoke to reporters on the sidelines of the meeting. The two lawmakers recently introduced a legislative framework for artificial intelligence that they said represents a concrete effort to regulate AI — in contrast to what was happening steps away behind closed doors.

    “This forum is not designed to produce legislation,” Blumenthal said. “Our subcommittee will produce legislation.”

    Blumenthal added that the proposed framework — which calls for setting up a new independent AI oversight body, as well as a licensing regime for AI development and the ability for people to sue companies over AI-driven harms — could lead to a draft bill by the end of the year.

    “We need to do what has been done for airline safety, car safety, drug safety, medical device safety,” Blumenthal said. “AI safety is no different — in fact, potentially even more dangerous.”

    Hawley called Wednesday’s sessions “a giant cocktail party” for the tech industry and slammed the fact that it was private.

    “I don’t know why we would invite all the biggest monopolists in the world to come and give Congress tips on how to help them make more money, and then close it to the public,” Hawley said. “I mean, that’s a terrible idea. These are the same people who have ruined social media.”

    Despite talking tough on tech, Schumer has moved extremely slowly on tech legislation, Hawley said, pointing to several major tech bills from the last Congress that never made it to a Senate floor vote.

    “It’s a little bit like antitrust the last two years,” Hawley said. “He talks about it constantly and does nothing about it. My sense is … this is a lot of song and dance that covers the fact that actually nothing is advancing. I hope I’m wrong about that.”

    Hawley is also a co-sponsor of a bill introduced Tuesday led by Minnesota Democratic Sen. Amy Klobuchar that would prohibit generative AI from being used to create deceptive political ads. Klobuchar and Hawley, along with fellow co-sponsors Coons and Maine Republican Sen. Susan Collins, said the measure is needed to keep AI from manipulating voters.

    Massachusetts Democratic Sen. Elizabeth Warren said the broad nature of the summit limited its potential.

    “They’re sitting at a big, round table all by themselves,” Warren said of the executives and civil society leaders, while all the senators sat, listened and didn’t ask questions. “Let’s put something real on the table instead of everybody agree[ing] that we need safety and innovation.”

    Schumer said that making the meeting confidential was intended to give lawmakers the chance to hear from the outside in an “unvarnished way.”


  • TikTok steps up efforts to counter misinformation about Israel-Hamas war | CNN Business

    London (CNN) —

    TikTok is stepping up efforts to counter misinformation, incitement to violence and hate relating to the Israel-Hamas war on its online platform, it announced Sunday, days after the European Union (EU) warned social media companies they risked falling foul of the bloc’s content moderation laws.

    As part of its measures, TikTok is launching a command center to coordinate the work of its “safety professionals” around the world, improving the software it uses to automatically detect and remove graphic and violent content, and hiring more Arabic and Hebrew speakers to moderate content.

    TikTok said in a statement that, following the brutal attack by Hamas on Israeli civilians on October 7, it had “immediately mobilized significant resources and personnel to help maintain the safety of [its] community and integrity of [its] platform.”

    “We do not tolerate attempts to incite violence or spread hateful ideologies,” it added. “We have a zero-tolerance policy for content praising violent and hateful organizations and individuals.”

    The firm, owned by China’s ByteDance, said it had already removed more than 500,000 videos and shut down 8,000 livestream videos from the “impacted region” since the Hamas attack.

    As the conflict escalates — Israel has blocked the provision of electricity, food, fuel and water to Gaza, and has been signaling it is preparing for a ground invasion of the area — millions have turned to social media for updates, while misinformation has proliferated on these sites.

    One recent TikTok video, seen by more than 300,000 users and reviewed by CNN, promoted conspiracy theories about the origins of the Hamas attack, including false claims that it was orchestrated by the media.

    Last week, the EU told social media companies they needed to better protect “children and teenagers from violent content and terrorist propaganda” on their platforms.

    EU Commissioner Thierry Breton wrote to TikTok Thursday, in a letter shared on X, the platform formerly known as Twitter, saying the company had 24 hours to detail the steps it was taking to comply with EU rules on content moderation. Breton has sent similar letters to X, Google and Meta, the owner of Instagram and Facebook.


  • Illinois passes a law that requires parents to compensate child influencers | CNN Business

    (CNN) —

    When 16-year-old Shreya Nallamothu from Normal, Illinois, scrolled through social media platforms to pass time during the pandemic, she became increasingly frustrated with the number of children she saw featured in family vlogs.

    She recalled the many home videos her parents filmed of her and her sister over the years: taking their first steps, going to school and other “embarrassing stuff.”

    “I’m so glad those videos stayed in the family,” she said. “It made me realize family vlogging is putting very private and intimate moments onto the internet.”

    She said reminders and lectures from her parents about how everything is permanent online intensified her reaction to the videos she saw of kid influencers. “The fact that these kids are either too young to grasp that or weren’t given the chance to grasp that is really sad.”

    Nallamothu wrote a letter last year to her state senator, Democrat Dave Koehler, urging him to consider legislation to protect young influencers. Last week, her home state became the first to pass a law that establishes safeguards for minors who are featured in online videos – and how they’re compensated.

    Illinois Gov. J.B. Pritzker on Friday signed a bill, inspired by Nallamothu’s letter, amending the state’s Child Labor Law. It will allow child influencers, once they turn 18, to take legal action against their parents if they were featured in monetized social media videos and not properly compensated, similar to the rights held by child actors.

    Starting July 1, 2024, parents in Illinois will be required to set aside a portion of a video’s earnings in a blocked trust fund for the child, scaled to the percentage of time the child is featured: half of the fraction of the video in which the child appears. For example, if a child appears in 50% of a video, they are entitled to 25% of the earnings; if they appear in 100%, they must receive 50%. The requirement applies only when the child appears on screen in more than 30% of the vlogs produced in a 12-month period.
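    The payout rule described above reduces to a one-line formula: the child's share is half the video's earnings, scaled by the fraction of the video in which the child appears. A minimal sketch of that arithmetic follows; the function name and structure are illustrative only and are not drawn from the statute's text.

```python
def trust_contribution(video_earnings: float, featured_fraction: float) -> float:
    """Earnings owed to the child's blocked trust fund under the Illinois rule
    as described: half of the video's earnings, scaled by the fraction of the
    video in which the child appears. Illustrative only -- not legal advice."""
    if not 0.0 <= featured_fraction <= 1.0:
        raise ValueError("featured_fraction must be between 0 and 1")
    return video_earnings * 0.5 * featured_fraction

# A child featured in 50% of a $1,000 video is owed 25% of the earnings;
# a child featured in 100% of it is owed 50%.
print(trust_contribution(1000.0, 0.5))  # 250.0
print(trust_contribution(1000.0, 1.0))  # 500.0
```

    Note the separate eligibility threshold in the law (the child must appear in more than 30% of the creator's content over a 12-month period) would be checked before this per-video calculation applies.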

    “We understand that parents should receive compensation too because they have equity in this, but we don’t want to forget about the child,” Koehler told CNN.

    Many YouTube parent vloggers or social media influencers post multiple videos each month or weekly, sharing intimate details about their lives, ranging from family financial troubles and the birth of a new baby to opening new toys or going through a child’s phone or report card. Although children are predominantly featured in these monetized videos, parents have had no legal obligation to give them any portion of the earnings.

    Meanwhile, kid influencer accounts, which can at times earn $20,000 or more for sponsored posts, are typically run by parents and not often set up in the child’s name due to age restrictions on social media platforms.

    “We often see with emerging technology and trends that legislation is always a reaction to that,” Koehler said. “But we know with the explosion of social media that parents are using it to monetize kids being on videos. If money is being made and nothing is set up for the children, it’s the same thing as a child actor.”

    The new law is modeled on the Coogan Law of 1936, named for Jackie Coogan, the Hollywood silent-film child star discovered by Charlie Chaplin, whose parents squandered his earnings. That California law required parents to set aside 15% of a child actor’s earnings in a blocked trust account that the actor could access after turning 18.

    Although similar bills have been proposed in California and Washington, Jessica Maddox — an assistant professor at The University of Alabama who studies the social media influencer community — said she’s hopeful other states will follow in Illinois’ footsteps.

    “Even though Illinois is the first state to pass such a law, this legislation is a long time coming,” Maddox said. “Social media labor and careers are becoming increasingly common and viable forms of income, and it’s important that the law catches up with technology to ensure minors aren’t being exploited.”

    Maddox said it also breathes new life into the long-simmering debate over what is appropriate for parents to document online and whether a child can really consent to participating.

    “I’ve seen organic conversations start to emerge between individuals who had been featured heavily in their parents’ social media content but are now of age to tell their stories and admit that had they really understood what was going on, they would have never consented for their lives to be broadcast for everyone.”

    Chris McCarty, the 19-year-old founder of Quit Clicking Kids, an advocacy and education site that combats the monetization of children on social media, is helping to develop child influencer legislation in Washington state. McCarty believes that as the kids featured in family vlogs grow up and share their stories, public pressure for stronger privacy protections will build.

    “When children are slightly older, often the narratives get increasingly personal; for example, detailing trouble with bullies, first periods, doctor’s visits, and mental health issues,” McCarty said. “A lot of consumers assume that children working in a family vlog and child actors have the same experiences. This is not the case. As difficult as it is to be a child actor, child actors are still playing a part rather than having their intimate personal details shared for entertainment and monetary purposes.”

    Nallamothu agrees that the next step is for legislation to evolve over time to include more regulations around consent.

    “I know this bill isn’t going to be perfect off the bat but I don’t want perfection to get in the way of progress because regulations have only started coming up,” she said. “I’m glad it’s getting there.”


  • Chinese artists boycott big social media platform over AI-generated images | CNN Business


    Editor’s Note: Sign up for CNN’s Meanwhile in China newsletter which explores what you need to know about the country’s rise and how it impacts the world.


    Hong Kong
    CNN
     — 

    Artists across China are boycotting one of the country’s biggest social media platforms over complaints about its AI image generation tool.

    The controversy began in August when an illustrator who goes by the name Snow Fish accused the privately owned social media site Xiaohongshu of using her work to train its AI tool, Trik AI, without her knowledge or permission.

    Trik AI specializes in generating digital art in the style of traditional Chinese paintings; it is still undergoing testing and has not yet been formally launched.

    Snow Fish, whom CNN is identifying by her Xiaohongshu username for privacy reasons, said she first became aware of the issue when friends sent her posts of artwork from the platform that looked strikingly similar to her own style: sweeping brush-like strokes, bright pops of red and orange, and depictions of natural scenery.

    “Can you explain to me, Trik AI, why your AI-generated images are so similar to my original works?” Snow Fish wrote in a post which quickly circulated online among her followers and the artist community.

    The controversy erupted just weeks after China unveiled rules for generative AI, becoming one of the first governments to regulate the technology as countries around the world wrestle with AI’s potential impact on jobs, national security and intellectual property.

    Screenshots of AI-generated artworks on Xiaohongshu, taken by the artist Snow Fish.

    Trik AI and Xiaohongshu, which says it has 260 million monthly active users, do not publicize what materials are used to train the program and have not publicly commented on the allegations.

    The companies have not responded to multiple requests from CNN for comment.

    But Snow Fish said a person using the official Trik AI account had apologized to her in a private message, acknowledging that her art had been used to train the program and agreeing to remove the posts in question. CNN has reviewed the messages.

    However, Snow Fish wants a public apology. The controversy has fueled online protests on the Chinese internet against the creation and use of AI-generated images, with several other artists claiming their works had been similarly used without their knowledge.

    Hundreds of artists have posted banners on Xiaohongshu saying “No to AI-generated images,” while a related hashtag has been viewed more than 35 million times on the Chinese Twitter-like platform Weibo.

    The boycott in China comes as debates about the use of AI in arts and entertainment are playing out globally, including in the United States, where striking writers and actors have ground most film and television production to a halt in recent months over a range of issues — including studios’ use of AI.

    Many of the artists boycotting Xiaohongshu have called for better rules to protect their work online — echoing similar complaints from artists around the world worried about their livelihoods.

    These concerns have grown as the race to develop AI heats up, with new tools developed and released almost faster than governments can regulate them — ranging from chatbots such as OpenAI’s ChatGPT to Google’s Bard.

    China’s tech giants, too, are rapidly developing their own generative artificial intelligence, from Baidu’s ERNIE Bot launched in March to SenseTime’s chatbot SenseChat.

    Besides Trik AI, Xiaohongshu has also developed a new function called “Ci Ke” which allows users to post content using AI-generated images.

    For artists like Snow Fish, the technology behind AI isn’t the problem, she said; it’s the way these tools use their work without permission or credit.

    Many AI models are trained from the work of human artists by quietly scraping images of their artwork from the internet without consent or compensation.

    Snow Fish added that these complaints had been slowly growing within the artist community but had mostly been privately shared rather than openly protested.

    “It’s an outbreak this time,” she said. “If it goes away without making a splash, people will stay silent, and those AI developers will keep harming our rights.”

    Another Chinese illustrator, Zhang, whom CNN is identifying by his last name for privacy reasons, joined the boycott in solidarity. “They’re shameless,” said Zhang. “They didn’t put in any effort themselves, they just took parts from other artists’ work and claimed it as their own. Is that appropriate?”

    “In the future, AI images will only be cheaper in people’s eyes, like plastic bags. They will become widespread like plastic pollution,” he said, adding that tech leaders and AI developers care more about their own profits than about artists’ rights.

    Tianxiang He, an associate professor of law at City University of Hong Kong, said the use of AI-generated images also raises larger questions among the artistic community about what counts as “real” art, and how to preserve its “spiritual value.”

    Similar boycotts have been seen elsewhere around the world, against popular AI image generation tools such as Stable Diffusion, released last year by London-based Stability AI, and California-based Midjourney.

    Stable Diffusion is embroiled in an ongoing lawsuit brought by stock image giant Getty Images, alleging copyright infringement.


    Despite the speed at which AI image generation tools are being developed, there is “no global consensus about how to regulate this kind of training behavior,” said He.

    He added that many such tools are developed by tech giants who own huge databases, which allows them to “do a lot of things … and they don’t care whether it’s protected by the law or not.”

    Because Trik AI has a smaller database to pull from, the similarities between its AI-generated content and artists’ original works are more obvious, making an easier legal case, he said.

    Cases of copyright infringement would be harder to detect if more works were put in a larger database, he added.

    Governments around the world are now grappling with how to set global standards for the wide-ranging technology. In June, the European Union became one of the first jurisdictions to set rules on how companies can use AI, while in the United States, discussions between lawmakers on Capitol Hill and tech companies over potential legislation are still underway.

    China was also an early adopter of AI regulation, publishing new rules that took effect in August. But the final version relaxed some of the language that had been included in earlier drafts.

    Experts say that when drafting regulations, major powers like China likely prioritize wresting power away from tech giants and pulling ahead in the global tech race over protecting individuals’ rights.

    He, the Hong Kong law professor, called the regulations a “very broad general regulatory framework” that provides “no specific control mechanisms” to regulate data mining.

    “China is very hesitant to enact anything related to say yes or no to data mining, because that will be very dangerous,” he said, adding that such a law could strike a blow to the emerging market, amid an already slow national economy.


  • Australia fines X, accusing it of ’empty talk’ on fighting child sexual abuse online | CNN Business




    CNN
     — 

    Australia issued a fine of 610,500 Australian dollars ($386,000) on Monday against the company formerly known as Twitter for “falling short” in disclosing how it tackles child sex abuse content, in yet another setback for the Elon Musk-owned social media platform.

    Just days earlier, the European Commission formally opened an investigation into X after issuing a previous warning about disinformation and illegal content on its platform linked to the Israel-Hamas war.

    Australia’s e-Safety Commission, the online safety regulator, said in a statement Monday that X had failed to adequately respond to a number of questions about the way it was dealing with the problem of child abuse materials.

    The commission accused the platform of not providing any response to some questions, leaving some sections entirely blank or providing answers that were incomplete or inaccurate.

    “Twitter/X has stated publicly that tackling child sexual exploitation is the number 1 priority for the company, but it can’t just be empty talk, we need to see words backed up with tangible action,” eSafety Commissioner Julie Inman Grant said in the statement.

    In February, Inman Grant had asked five tech firms — X, TikTok, Google (including YouTube), Discord and Twitch — about the steps they were taking to tackle the “proliferation” of crimes against children taking place on their services.

    “Their answers revealed … troubling shortfalls and inconsistencies,” Inman Grant said. X’s failure to comply was “more serious” than other companies, the commissioner added.

    The platform has 28 days to either request a withdrawal of the notice or pay up.

    X did not immediately respond to a request for comment by CNN.

    The commission said X did not respond to a number of important questions such as “the time it takes the platform to respond to reports of child sexual exploitation; the measures it has in place to detect child sexual exploitation in livestreams; and the tools and technologies it uses to detect child sexual exploitation material.”

    When asked about the measures the platform has in place to prevent grooming of children by sexual predators, X responded by saying that it is “not a service used by large number of young people,” adding that its technology was currently “not of sufficient capability or accuracy.”

    The regulator said Google also failed to answer a number of key questions on child abuse. The American tech giant has been given a formal warning to deter it from future non-compliance, it added.

    Lucinda Longcroft, Google’s director of government affairs and public policy for Australia and New Zealand, told CNN the platform has “invested heavily in the industry-wide fight to stop the spread of child sexual abuse material” and remains “committed to … collaborating constructively and in good faith with the eSafety Commissioner.”

    In an earlier report, the Australian regulator said it had uncovered “serious shortfalls” in how Apple, Meta, Microsoft, Skype, Snap, WhatsApp and Omegle tackle online child sexual exploitation.


  • Two brands suspend advertising on X after their ads appeared next to pro-Nazi content | CNN Business



    New York
    CNN
     — 

    At least two brands have said they will suspend advertising on X, the platform formerly known as Twitter, after their ads and those of other companies were run on an account promoting fascism. The issue came less than a week after X CEO Linda Yaccarino publicly affirmed the company’s commitment to brand safety for advertisers.

    The nonprofit news watchdog Media Matters for America documented in a report published Wednesday that ads for a host of mainstream brands have been run on the account, which has shared content celebrating Hitler and the Nazi Party.

    Ads for brands including Adobe, Gilead Sciences, the University of Maryland’s football team, New York University Langone Hospital and NCTA-The Internet and Television Association were run alongside tweets from the account that had garnered hundreds of thousands of views, CNN observed.

    Spokespeople for NCTA and pharmaceutical company Gilead said that they immediately paused their ad spending on X after CNN flagged their ads on the pro-Nazi account.

    “We take the responsible placement of NCTA ads very seriously and are concerned that our post about the future of broadband technology appeared next to this highly disturbing content,” NCTA spokesperson Brian Dietz said in a statement, adding that the organization had opted into X’s brand safety measures including keyword restrictions and limiting its ad placement to the “home feed of target audiences.”

    “Brand safety will remain an utmost priority for NCTA, which means suspending advertising on Twitter/X for the foreseeable future and heavily limiting NCTA’s organic presence on the platform,” Dietz said.

    A spokesperson for Gilead said the company will pause its ad spending while X investigates the issue.

    Jason Yellin, University of Maryland’s associate athletic director, expressed concern about the placement of the football team’s post on the account and said Maryland Football has not spent money on advertising on X since 2021, meaning X may have promoted the post despite it not being a paid ad.

    A spokesperson for NYU Langone said in a statement that the hospital was “completely surprised by this and are extremely concerned with any appearance of our advertising and brand next to obviously objectionable content that promotes hatred,” adding that it expects its advertising partners to “act responsibly.”

    X did not immediately respond to a request for comment from CNN. Hours after the Media Matters report was published Wednesday morning and CNN observed additional brands’ ads running on the account, the account appeared to be suspended.

    Adobe did not immediately respond to requests for comment from CNN.

    The issue comes as X has been trying to lure advertisers back to the platform after many left in the wake of Elon Musk’s takeover of the company last fall over concerns about content moderation, mass layoffs and general uncertainty over the platform’s direction. Musk said last month that the company still had negative cash flow because of a nearly 50% drop in its core advertising revenue.

    Yaccarino — who joined the company in June, just ahead of a major rebrand from Twitter to X — told CNBC in her first public interview as chief executive last week that many of the platform’s advertisers have returned and that the company is “close to break-even.” She touted the company’s “freedom of speech, not freedom of reach” policy, which aims to limit the reach of so-called lawful but awful content on the platform and to protect brands from having their ads appear alongside such content.

    X last week said it had rolled out additional brand safety controls for advertisers, including the ability to avoid having their ads show next to “targeted hate speech, sexual content, gratuitous gore, excessive profanity, obscenity, spam, drugs.” In addition to human content moderation reviewers that monitor for content that violates the platform’s rules, X says it has automated software that determines where and how ads are placed on the platform.

    “Your ads will only air next to content that is appropriate for you,” Yaccarino said during last week’s interview.

    But Wednesday’s report suggests that the company still has work to do if it wants to avoid monetizing, and placing ads alongside, objectionable content. “Media Matters and other observers have documented how X has remained a dangerous cesspool of content, especially for advertisers,” Wednesday’s report states. Media Matters says it has also documented instances of brands’ ads being placed next to content from Holocaust denial and white nationalist accounts.

    While she did not publicly comment on the ads appearing alongside pro-Nazi content, Yaccarino did post on X Wednesday that, “Sensitivity Settings is live globally in the X Ads Manager — making it even simpler for all advertisers to find the right balance between reach and suitability.”


  • ‘Where is the phone?’ Huawei keeps quiet about Mate 60 Pro but takes aim at Tesla | CNN Business


    Editor’s Note: Sign up for CNN’s Meanwhile in China newsletter which explores what you need to know about the country’s rise and how it impacts the world.


    Hong Kong
    CNN
     — 

    Huawei has disappointed legions of fans — and US officials — eager to know more about its Mate 60 Pro smartphone, which has quickly become a symbol of the tech rivalry between the United States and China since it went on sale last month.

    Huawei’s consumer chief, Richard Yu, showed off a slew of new products including a tablet, smartwatch, earphones and even a challenge to Tesla (TSLA) on Monday, without going into detail about its flagship device, which has provoked calls in Washington for more sanctions against the Chinese tech and mobile giant.

    The United States has spent years trying to hobble Huawei’s ability to access the most advanced semiconductors, and the appearance of its 5G-capable phone in August took Western observers by surprise.

    The launch event became the most discussed topic on Chinese social network Weibo, racking up six billion views and 1.6 million posts. Meanwhile, a hashtag titled “#HuaweiConferenceWithoutMentioningMobilePhones,” trended on Weibo, with 24.5 million views.

    “You’re telling me there will be no talk about the phone?” one user wrote on the social network.

    “Where is the phone?” said another.

    Huawei quietly started selling the Mate 60 Pro in August, without a formal launch event or sharing full technical specifications.

    Yu said onstage that the company was “working overtime” to urgently produce devices in the Mate 60 series “to allow more people to buy and use our products.”

    But “today, we will not introduce” those devices, he added.

    At one point, Huawei whetted viewers’ appetite by unveiling a new premium collection called Ultimate Design, introduced by Hong Kong singer and actor Andy Lau.

    The line consists of a luxury smartphone and smartwatch. Few details were released, though the company said the watch was made using bars of real gold — giving it a hefty price tag of 21,999 Chinese yuan ($3,009).

    Ben Sin, an independent tech reviewer, said he was “baffled” as to why Huawei did not discuss its smartphones.

    The company “knows everyone wants to know more about the chip [in the Mate 60 Pro], so them not talking about it is almost like defiance,” he said.

    Analysts who have examined the handset have said it includes a 5G chip, suggesting Huawei may have found a way to overcome American export controls.

    Huawei, formerly the world’s second largest maker of smartphones, has been attempting a comeback in China’s smartphone market after being hit by US export restrictions, which were first imposed in 2019.

    The company’s woes later forced it to sell off its budget mobile brand, Honor, leaving it in bad shape.

    But it is starting to find its way back.

    The firm’s smartphone sales grew in China by 58% in the second quarter of this year, compared to the same period last year, according to Counterpoint Research. Its share of the Chinese market rose from 6.9% to 11.3% over that period.

    Ivan Lam, a senior analyst at Counterpoint, said Huawei benefited from “its high brand exposure to” wealthy Chinese consumers. Because of this, Huawei’s market share in China is expected to further grow in 2024, he added.

    Huawei’s new phone is a boon for the company and may even pose a challenge to Apple’s (AAPL) market share in China, Lam said.

    The Shenzhen-based company has seen a recent “surge in sales” for its Mate 60 series, with weekly sales almost tripling to 225,000 units, according to Counterpoint.

    Yu demonstrated a number of other new products, starting with the latest version of its MatePad Pro, describing it as the lightest and thinnest tablet of its kind in the world. He said the device had been 10 years in the making.

    In addition, the company unveiled a new smart TV, wireless earphones and other gadgets.

    Huawei also took an aggressive swipe at Tesla, saying it would release its first sedan, the Luxeed S7, in November. The car will surpass Tesla’s Model S “in every specification,” said Yu.

    The company plans to release the Aito M9, an SUV, in December. Huawei has partnered with Chinese automakers to produce the two previously announced electric vehicles.

    Yu also announced Huawei was “ready to launch” an updated operating system, HarmonyOS NEXT.

    The system will include “native applications,” Yu said, without elaborating.

    Speculation has mounted that Huawei may be building an operating system that won’t be compatible with any Android apps.

    Huawei did not immediately respond to a request for comment on the matter.


  • EU asks Meta for more details on efforts to stop illegal and inaccurate content on Israel-Hamas war | CNN Business



    London
    CNN
     — 

    The European Union has told Meta it has a week to explain in greater detail how it is fighting the spread of illegal content and disinformation on its Facebook and Instagram platforms following the attacks across Israel by Hamas.

    The European Commission, the bloc’s executive arm, said it had sent the formal request for information to Meta (META) Thursday.

    The commission also asked TikTok for more information on the steps it had taken to prevent the spread of “terrorist and violent content and hate speech,” it said, but without referring to the Israel-Hamas war.

    Last week, EU Commissioner Thierry Breton wrote to several social media companies, including Meta and TikTok, giving them 24 hours to detail the measures they were taking to comply with EU rules on content moderation enshrined in the recently enacted Digital Services Act (DSA).

    On Friday, Meta said its teams had been working “around the clock” since the attacks by Hamas on October 7 to monitor its platforms and outlined some of its actions against misinformation and content that violates its policies and standards.

    And on Sunday, TikTok announced that it had, among other measures, launched a command center to coordinate the work of its “safety professionals” around the world and improve the software it uses to automatically detect and remove graphic and violent content.

    But the European Commission has made it clear it needs more information. In its Thursday announcement, the body gave both Meta and TikTok until October 25 to respond to its requests and warned that it had the power to impose financial penalties if it was not satisfied with their responses.

    Both companies also have until November 8 to detail how they intend to protect the “integrity of elections” on their platforms, the commission said.

    Both Meta and TikTok are bound by obligations set out in the DSA, a landmark piece of legislation, enacted in August, that seeks to more stringently regulate large tech companies, and protect people’s rights online.

    The commission’s formal requests come a week after it issued a similar ultimatum to X, the company formerly known as Twitter, asking for information on how it intends to stop the spread of illegal, misleading, violent and hateful content.

    The commission said it had opened an investigation into X’s compliance with the DSA. It has not announced parallel investigations into Meta or TikTok.


  • Justin Trudeau blasts Facebook for blocking news as Canada’s wildfires rage | CNN Business




    CNN
     — 

    Canadian Prime Minister Justin Trudeau blasted Facebook for “putting corporate profits ahead of people’s safety” as the social media platform continues to block news content while wildfires rage in Canada’s Northwest Territories and British Columbia.

    “It is so inconceivable that a company like Facebook is choosing to put corporate profits ahead of ensuring that local news organizations can get up-to-date information to Canadians, and reach them where Canadians spend a lot of their time; online, on social media, on Facebook,” Trudeau said during a news conference Monday.

    Some 60,000 people across the Northwest Territories and British Columbia have been placed under evacuation orders since this weekend, according to the most recent numbers from Canadian officials. Also on Monday, Trudeau described the devastation wrought by the wildfires as “apocalyptic” and praised Canadians for stepping up to support evacuees.

    Earlier this month, Facebook’s parent company Meta began to block news links from Facebook and Instagram in Canada, in response to recently passed legislation in the country that requires tech companies to negotiate payments to news organizations for hosting their content.

    A Meta spokesperson told CNN in a statement on Monday that Canadians “continue to use our technologies in large numbers to connect with their communities and access reputable information, including content from official government agencies, emergency services and non-governmental organizations.”

    The new legislation in Canada “forces us to end access to news content in order to comply with the legislation but we remain focused on making our technologies available,” the statement added, pointing to Meta’s Safety Check tool, which the company said more than 45,000 people had used as of Friday to mark themselves as safe.

    The Meta spokesperson added that 300,000 people have visited the Yellowknife and Kelowna Crisis Response pages on Facebook.

    The Canadian legislation, known as Bill C-18 or the Online News Act, was given final approval in June. It aims to support the sustainability of news organizations by regulating “digital news intermediaries with a view to enhancing fairness in the Canadian digital news marketplace.”

    Meta has previously stated, via a company blog post, that the legislation “misrepresents the value news outlets receive when choosing to use our platforms.” The ongoing controversy in Canada comes amid a global debate over the relationship between news organizations and social media companies about the value of news content, and who gets to benefit from it.

    During his remarks Monday, Trudeau said Facebook’s move to block news content is “bad for democracy” in the long run. “But right now, in an emergency situation, where up-to-date local information is more important than ever, Facebook’s putting corporate profits ahead of people’s safety,” Trudeau said.

    CNN’s Brian Fung contributed to this report.


  • Major Supreme Court cases to watch in the new term | CNN Politics




    CNN
     — 

    Looking at an upcoming Supreme Court term from the vantage point of the first Monday in October rarely tells the full story of what lies ahead, but the docket already includes major cases concerning the intersection between the First Amendment and social media, gun rights, racial gerrymandering and the power of the executive branch when it comes to regulation.

    The court will still determine if it will hear oral arguments on issues such as medication abortion and transgender rights, not to mention the possibility of a flurry of emergency requests related to the 2024 election.

    Here are some of the key cases on which the court will hear oral arguments this term:

    After the Supreme Court issued a major decision last year expanding gun rights nationwide, lower courts began reconsidering hundreds of firearms regulations across the country under the new standard crafted by Justice Clarence Thomas that a gun law passes legal muster only if it is rooted in history and tradition.

    On the heels of that decision, a federal appeals court invalidated a federal law that bars an individual who is subject to a domestic violence restraining order from possessing a firearm. That law, the 5th US Circuit Court of Appeals ruled, “is an outlier that our ancestors would never have accepted.”

    The Biden administration has appealed, saying the ruling “threatens grave harms for victims of domestic violence.”

    In 2019, nearly two-thirds of domestic homicides in the United States were committed with a gun, according to Everytown for Gun Safety.

    Lawyers for Zackey Rahimi, a man who was prosecuted under the law in 2020 after a violent altercation with his girlfriend, have urged the justices to let the lower court opinion stand, arguing in part that there is no law from the founding era comparable to the statute at hand.

    Racial gerrymandering: South Carolina congressional maps

    Justices will consider a congressional redistricting plan drawn by South Carolina’s Republican-controlled legislature in the wake of the 2020 census. Critics say it was designed with discriminatory purpose and amounts to an illegal racial gerrymander.

    The case focuses the court’s attention once again on the issue of race and map drawing and comes after the court ordered Alabama to redraw the state’s congressional map last term to account for the fact that the state is 27% Black. The decision, penned by Chief Justice John Roberts, surprised liberals who feared the court was going to make it harder for minorities to challenge maps under Section 2 of the historic Voting Rights Act.

    In the latest case, the South Carolina State Conference of the NAACP and a Black voter named Taiwan Scott are challenging the state’s congressional District 1, which runs along the southeastern coast and is anchored in Charleston County. Although the district consistently elected Republicans from 1980 to 2016, a Democrat won the seat in a political upset in 2018, and a Republican recaptured it in 2020.

    The person who devised the map testified that he was instructed to make the district “more Republican leaning” but that he did not consider race. He acknowledged, however, that he examined racial data after drafting each version and that the district’s Black voting-age population was likely viewed during the drafting process.

    A three-judge district court panel struck down the plan in January, saying that race had been the predominant motivating factor. “To achieve a target of 17% African American population,” the court said, “Charleston County was racially gerrymandered and over 30,000 African Americans were removed from their home district.”


    In the latest attack against the so-called administrative state, the justices are considering whether to overturn decades-old precedent to scale back the power of federal agencies, a move that would affect how the government tackles issues such as climate change, immigration, labor conditions and public health.

    At issue is an appeal from herring fishermen in the Atlantic who say the National Marine Fisheries Service does not have the authority to require them to pay the salaries of government monitors who ride aboard the fishing vessels.

    In agreeing to hear the case, the justices signaled they will reconsider a 1984 decision – Chevron v. Natural Resources Defense Council – that sets out the framework for determining when courts should defer to a government agency’s interpretation of the law. First, courts examine a statute to see if Congress’ intent is clear. If it is, the matter is settled. But if the statute is ambiguous, courts defer to the agency’s expertise.

    Solicitor General Elizabeth Prelogar told the justices that the agency was acting within the scope of its authority under the Magnuson-Stevens Fishery Conservation and Management Act and said the fishermen are not responsible for all the costs. The regulation was put in place to combat overfishing of the fisheries off the coasts of the US.

    Representing the fishermen, former Solicitor General Paul Clement argues that the government exceeded its authority and needs direct and clear congressional authorization to make such a demand. “The ‘net effect’ of Chevron,” Clement said, is that it “incentivizes a dynamic where Congress does far less than the Framers anticipated, and the executive branch is left to do far more by deciding controversial issues via regulatory fiat.”

    For the second time in recent years, the court is taking aim at a watchdog agency created to combat unfair and deceptive practices against consumers, in a case that could deal a fatal blow to the future of the agency and send reverberations throughout the financial services industry.

    At the center of the case at hand is the Consumer Financial Protection Bureau – an independent agency set up in the wake of the 2008 financial meltdown that works to monitor the practices of lenders, debt collectors and credit rating agencies.

    Congress chose to fund the CFPB from outside the annual appropriations process to ensure its independence. As such, the agency receives its funding each year from the earnings of the Federal Reserve System. But the conservative 5th US Circuit Court of Appeals held last year that the funding scheme violates the Appropriations Clause of the Constitution, which, the court said, ensures Congress’ “exclusive power over the federal purse.”

    According to the CFPB, the agency has obtained more than $18.9 billion in ordered relief, including restitution and canceled debts, for more than 195 million consumers, and more than $4.1 billion in penalties, in actions brought by the agency against financial institutions and individuals that have broken federal consumer financial protection laws.

    A handful of other agencies have similar funding schemes including the Federal Reserve, the Federal Deposit Insurance Corporation and the Office of the Comptroller of the Currency.

    Three years ago, the Supreme Court limited the independence of the CFPB by invalidating its leadership structure. A 5-4 court held that the structure violated the separation of powers because the president was restricted from removing the director, even if they had policy disagreements.

    Agency regulatory authority: Securities and Exchange Commission

    The justices are looking at the in-house enforcement proceedings of the US Securities and Exchange Commission in another case that invites the conservative majority to pare back the regulatory authority of federal agencies.

    The court’s decision could impact whether the SEC and other agencies can conduct enforcement proceedings in-house, using administrative courts staffed with agency employees, or whether such actions must be brought in federal court.

    On one side are critics of such agency courts who argue that they allow federal employees to serve as prosecutors, judges and jury, issuing rulings that could particularly hurt small businesses. On the other side are those who point out that several agencies, including the Social Security Administration, have such internal proceedings because the topics are often complex and the agency has more expertise than a federal judge.

    The case arose in 2013, after the SEC brought a securities fraud enforcement action against George Jarkesy, who had established two hedge funds with his advisory firm, Patriot28.

    The 5th Circuit ruled that the SEC’s proceedings deprive individuals of their Seventh Amendment right to a civil jury. In addition, the court said that Congress had improperly delegated legislative power to the SEC, which gave the agency unconstrained authority at times to choose the in-house administrative proceeding rather than filing suit in district court.

    In December, the court will examine the historic multibillion-dollar Purdue Pharma bankruptcy settlement with several states that would ultimately offer the Sackler family broad protection from OxyContin-related civil claims.

    Until recently, Purdue was controlled by the Sackler family, who withdrew billions of dollars from the company before it filed for bankruptcy. The family has now agreed to contribute up to $6 billion to Purdue’s reorganization fund on the condition that the Sacklers receive a release from civil liability.

    The Biden administration, representing the US Trustee, the executive branch agency that monitors the administration of bankruptcy cases, has called the plan “exceptional and unprecedented” in court papers, noting that lower courts have divided on when parties can be released from liability for actions that caused societal harm.

    “The plan’s release ‘absolutely, unconditionally, irrevocably, fully, finally, forever and permanently releases’ the Sacklers from every conceivable type of opioid-related civil claim – even claims based on fraud and other forms of willful misconduct that could not be discharged if the Sacklers filed for bankruptcy in their individual capacities,” Prelogar argued in court papers.

    For the second year running, the justices will leap into the online moderation debate and decide whether states can essentially control how social media companies operate.

    If upheld, laws from Florida and Texas could open the door to more state legislation requiring platforms such as Facebook, YouTube and TikTok to treat content in specific ways within certain jurisdictions – and potentially expose the companies to more content moderation lawsuits.

    It could also make it harder for platforms to remove what they determine is misinformation, hate speech or other offensive material.

    “These cases could completely reshape the digital public sphere. The question of what limits the First Amendment imposes on legislatures’ ability to regulate social media is immensely important – for speech, and for democracy as well,” said Jameel Jaffer, the executive director of Columbia University’s Knight First Amendment Institute, in a statement.

    “It’s difficult to think of any other recent First Amendment cases in which the stakes were so high,” Jaffer added.


  • Dozens of states sue Instagram-parent Meta over ‘addictive’ features and youth mental health harms | CNN Business




    CNN
     — 

    Dozens of states sued Instagram-parent Meta on Tuesday, accusing the social media giant of harming young users’ mental health through allegedly addictive features such as infinite news feeds and frequent notifications that demand users’ constant attention.

    In a federal lawsuit filed in California by 33 attorneys general, the states allege that Meta’s products have harmed minors and contributed to a mental health crisis in the United States.

    “Meta has profited from children’s pain by intentionally designing its platforms with manipulative features that make children addicted to their platforms while lowering their self-esteem,” said Letitia James, the attorney general for New York, one of the states involved in the federal suit. “Social media companies, including Meta, have contributed to a national youth mental health crisis and they must be held accountable.”

    Eight additional attorneys general sued Meta on Tuesday in various state courts around the country, making claims similar to those in the massive multistate federal lawsuit.

    And the state of Florida sued Meta in its own separate federal lawsuit, alleging that Meta misled users about potential health risks of its products.

    Tuesday’s multistate federal suit — filed in the US District Court for the Northern District of California — accuses Meta of violating a range of state-based consumer protection statutes, as well as a federal children’s privacy law known as COPPA that prohibits companies from collecting the personal information of children under 13 without a parent’s consent.

    “Meta’s design choices and practices take advantage of and contribute to young users’ susceptibility to addiction,” the complaint reads. “They exploit psychological vulnerabilities of young users through the false promise that meaningful social connection lies in the next story, image, or video and that ignoring the next piece of social content could lead to social isolation.”

    The federal complaint calls for court orders prohibiting Meta from violating the law and, in the case of many states, unspecified financial penalties.

    “We share the attorneys general’s commitment to providing teens with safe, positive experiences online, and have already introduced over 30 tools to support teens and their families,” Meta said in a statement. “We’re disappointed that instead of working productively with companies across the industry to create clear, age-appropriate standards for the many apps teens use, the attorneys general have chosen this path.”

    The wave of lawsuits is the result of a bipartisan, multistate investigation dating back to 2021, Colorado Attorney General Phil Weiser said at a press conference Tuesday. The probe began after Facebook whistleblower Frances Haugen came forward with tens of thousands of internal company documents that, she said, showed the company knew its products could have negative impacts on young people’s mental health.

    “We know that there were decisions made, a series of decisions to make the product more and more addictive,” Tennessee Attorney General Jonathan Skrmetti told reporters. “And what we want is for the company to undo that, to make sure that they are not exploiting these vulnerabilities in children, that they are not doing all the little, sophisticated, tricky things that we might not pick up on that drive engagement higher and higher and higher that allowed them to keep taking more and more time and data from our young people.”

    Tuesday’s multipronged legal assault also marks the newest attempt by states to rein in large tech platforms over fears that social media companies are fueling a spike in youth depression and suicidal ideation.

    “There’s a mountain of growing evidence that social media has a negative impact on our children,” said California Attorney General Rob Bonta, “evidence that more time on social media tends to be correlated with depression, with anxiety, body image issues, susceptibility to addiction and interference with daily life, including learning.”

    The suits follow a raft of legislation in states ranging from Arkansas to Louisiana that clamps down on social media by establishing new requirements for online platforms that wish to serve teens and children, such as mandating that they obtain a parent’s consent before creating an account for a minor or that they verify users’ ages.

    In some cases, the tech industry has challenged those laws in court — for example, by claiming that Arkansas’ social media law violates residents’ First Amendment rights to access information.

    New Hampshire Attorney General John Formella said the states expect Meta to mount a similar defense but that the company will not succeed because the multistate suit targets Meta’s conduct, not speech.

    Formella added that in addition to consumer protection claims, New Hampshire is also bringing negligence and product liability claims as part of the federal suit.

    The complaints filed in state courts allege violations of various state-specific laws. For example, the complaint from District of Columbia Attorney General Brian Schwalb accuses Meta of violating the district’s consumer protection statute by misleading the public about the safety of company platforms.

    Tuesday’s lawsuits come days before a federal judge in California is set to consider a slew of similar allegations against the wider tech industry. In a hearing Friday morning, District Judge Yvonne Gonzalez Rogers is expected to hear arguments by Google, Meta, Snap and TikTok urging her to dismiss nearly 200 complaints involving private plaintiffs that have accused the companies of addicting or harming their users.

    It is possible that Tuesday’s multistate suit could be merged with the consumers’ cases, said Weiser, adding that the main difference of the multistate case is that it could lead to nationwide relief.

    “The coordination that we bring across the AG community, we believe is invaluable to this,” Weiser said.

    Participating in Tuesday’s multistate federal suit are California, Colorado, Connecticut, Delaware, Georgia, Hawaii, Idaho, Illinois, Indiana, Kansas, Kentucky, Louisiana, Maine, Maryland, Michigan, Minnesota, Missouri, Nebraska, New Jersey, New York, North Carolina, North Dakota, Ohio, Oregon, Pennsylvania, Rhode Island, South Carolina, South Dakota, Virginia, Washington, West Virginia and Wisconsin.

    The additional suits filed in state courts were brought by the District of Columbia, Massachusetts, Mississippi, New Hampshire, Oklahoma, Tennessee, Utah and Vermont.


  • X has ditched a political misinformation reporting feature, researchers say | CNN Business




    CNN
     — 

    X, the social media company formerly known as Twitter, has scrapped a feature that let users report political misinformation on the platform, a research group says, marking the latest safety-focused guardrail that X has rolled back since billionaire Elon Musk took the helm.

    The move was first spotted by Reset Australia, an Australia-based digital policy think tank. The group sent an open letter to X warning of the potential harms, noting that the change came just weeks ahead of a major referendum on whether to amend the Australian constitution to establish an Indigenous advisory group with a direct line to government.

    “There now appears to be no channel to report electoral misinformation when discovered on your platform,” the letter from Reset Australia states. “It is extremely concerning that Australians would lose the ability to report serious misinformation weeks away from a major referendum.”

    The rollback also comes as political campaigning for the 2024 US presidential election ramps up, and concerns about the spread of misinformation online remain a central issue ahead of the vote.

    X did not immediately respond to CNN’s request for comment Wednesday morning. X users, notably, can still report content on the platform for violations in other categories — such as “Hate,” “Abuse & Harassment,” and “Violent Speech,” among other issues. Musk has also long touted the platform’s “Community Notes” feature, which lets users add context they think is missing to posts.

    The user-reporting feature initially launched as a test for a small group of users in the US, South Korea and Australia, X (then called Twitter) announced in August 2021. The feature allowed users to flag a post by selecting “it’s misleading” when they encountered problematic political content. In January 2022, the company said it was expanding the misinformation reporting feature to more countries and users.

    Musk’s rocky takeover of Twitter, meanwhile, was officially completed in October 2022.

    With Musk at the helm, the platform has also made other changes, such as reinstating controversial accounts, including those belonging to former US President Donald Trump and rapper Kanye West. Musk has long voiced concerns about perceived censorship on the platform and its need to focus on promoting what he views as “free speech.”

    In other recent changes to its approach to political content, X announced last month that it will again allow political ads on the platform — for the first time since 2019 — and said that it is hiring for its safety and election teams ahead of the 2024 US presidential vote.


  • Meta’s Threads is finally available on desktop | CNN Business



    New York
    CNN
     — 

    Threads users, rejoice: the app is rolling out its highly anticipated web version Tuesday.

    The update — perhaps the most requested by users since Threads’ mobile-only launch last month — puts the new platform one step closer to recreating the functions offered by rival X, the platform formerly known as Twitter, and could help reignite user growth following a sluggish period.

    Parent company Meta says Threads users will soon be able to log in, post, view and interact with other posts via a browser on a desktop computer, as the web version rolls out to users in the coming days. The company says it plans to add more desktop features in the future. In an early access test of some of the web-based features, CNN was able to post on the platform but could not yet scroll the home feed.

    Threads launched in early July with stunning success, garnering more than 100 million sign-ups in its first week on the back of months of chaos at Twitter. But the buzz faded somewhat as users realized the bare-bones platform still lacked many of the features that made Twitter popular, such as trending topics, robust search functions and direct messaging. Threads has been steadily rolling out smaller updates but the hotly demanded web version could help reignite stronger user engagement.

    The new web version could also raise fresh competitive concerns for X, after owner Elon Musk sparked user backlash last week by suggesting he might do away with the platform’s block feature.

    Meta employees have for weeks teased that a desktop version of Threads was in the works and being tested internally. Just last week, Instagram head Adam Mosseri, who is also leading Threads, said he had been posting from the platform’s desktop version and suggested “it’ll be ready soon but it needs more work.”

    Web access is just one of a series of recent updates to Threads as Meta continues to build out the new platform. Other features added over the past month include new “reposts” and “likes” tabs that show users the posts they have reshared and liked in their profiles, a chronological following feed and a button to share Threads posts to Instagram DMs.

    Continued updates to Threads are essential if Meta wants to maintain the early traction it had with users. Despite the app’s stunning success following its launch, by the end of July, Threads’ daily active user count had fallen 82% to around 8 million users, according to a report from market research firm Sensor Tower earlier this month. By August 16, updates to Threads had helped the app notch slight gains to 11 million daily active users, Sensor Tower said in a report Monday.

    Meta CEO Mark Zuckerberg has said he is “quite optimistic” about the app’s potential.

    “We saw unprecedented growth out of the gate and more importantly we’re seeing more people coming back daily than I’d expected,” he said last month during the company’s earnings call. “And now, we’re focused on retention and improving the basics. And then after that, we’ll focus on growing the community to the scale we think is possible.”


  • Federal appeals court extends limits on Biden administration communications with social media companies to top US cybersecurity agency | CNN Business



    Washington
    CNN
     — 

    A federal appeals court has expanded the scope of a ruling that limits the Biden administration’s communications with social media companies, saying it now also applies to a top US cybersecurity agency.

    The ruling last month from the conservative 5th Circuit US Court of Appeals severely limits the ability of the White House, the surgeon general, the Centers for Disease Control and Prevention and the FBI to communicate with social media companies about content related to Covid-19 and elections that the government views as misinformation.

    The preliminary injunction had been on pause, and a recent procedural snafu over a request from the plaintiffs in the case to broaden its scope led the court on Tuesday to withdraw its earlier opinion and issue a new one that now includes the US Cybersecurity and Infrastructure Security Agency. That agency is charged with protecting non-military networks from hacking and other homeland security threats.

    Similar to the ruling last month, in which the appeals court said the federal government had “likely violated the First Amendment” when it leaned on platforms to moderate some content, the new ruling says CISA likely violated the Constitution.

    “CISA used its frequent interactions with social media platforms to push them to adopt more restrictive policies on censoring election-related speech,” the three-judge panel wrote.

    “The platforms’ censorship decisions were made under policies that CISA has pressured them into adopting and based on CISA’s determination of the veracity of the flagged information,” they continued. “Thus, CISA likely significantly encouraged the platforms’ content-moderation decisions and thereby violated the First Amendment.”

    The plaintiffs in the suit, which include Missouri and Louisiana’s attorneys general, as well as several individual plaintiffs, had also asked the court to expand the scope in other ways, including by making it apply to some State Department officials. But the court’s new ruling was only modified to add CISA as an enjoined entity.

    The judges said they were pausing their new injunction for 10 days, and the Biden administration has the option of asking the Supreme Court to issue a more lasting pause on the modified ruling.


  • Large US tech companies face new EU rules | CNN Business




    CNN
     — 

    The world’s largest tech companies must comply with a sweeping new European law starting Friday that affects everything from social media moderation to targeted advertising and counterfeit goods in e-commerce — with possible ripple effects for the rest of the world.

    The unprecedented EU measures for online platforms will apply to companies including Amazon, Apple, Google, Meta, Microsoft, Snapchat and TikTok, among many others, reflecting one of the most comprehensive and ambitious efforts by policymakers anywhere to regulate tech giants through legislation. It could lead to fines for some companies and to changes in software affecting consumers.

    The rules seek to address some of the most serious concerns that critics of large tech platforms have raised in recent years, including the spread of misinformation and disinformation; possible harms to mental health, particularly for young people; rabbit holes of algorithmically recommended content and a lack of transparency; and the spread of illegal or fake products on virtual marketplaces.

    Although the European Union’s Digital Services Act (DSA) passed last year, companies have had until now to prepare for its enforcement. Friday marks the arrival of a key compliance deadline — after which tech platforms with more than 45 million EU users will have to meet the obligations laid out in the law.

    The EU also says the law intends “to establish a level playing field to foster innovation, growth and competitiveness both in the European Single Market and globally.” The action reinforces Europe’s position as a leader in checking the power of large US tech companies.

    For all platforms, not just the largest ones, the DSA bans data-driven targeted advertising aimed at children, as well as targeted ads to all internet users based on protected characteristics such as political affiliation, sexual orientation and ethnicity. The restrictions apply to all kinds of online ads, including commercial advertising, political advertising and issue advertising. (Some platforms had already in recent years rolled out restrictions on targeted advertising based on protected characteristics.)

    The law bans so-called “dark patterns,” or the use of subtle design cues that may be intended to nudge consumers toward giving up their personal data or making other decisions that a company might prefer. An example of a dark pattern commonly cited by consumer groups is when a company tries to persuade a user to opt into tracking by highlighting an acceptance button with bright colors, while simultaneously downplaying the option to opt out by minimizing that choice’s font size or placement.

    The law also requires all online platforms to offer ways for users to report illegal content and products and for them to appeal content moderation decisions. And it requires companies to spell out their terms of service in an accessible manner.

    For the largest platforms, the law goes further. Companies designated as Very Large Online Platforms or Very Large Online Search Engines will be required to undertake independent risk assessments focused on, for example, how bad actors might try to manipulate their platforms, or use them to interfere with elections or to violate human rights — and companies must act to mitigate those risks. And they will have to set up repositories of the ads they’ve run and allow the public to inspect them.

    Just a handful of companies are considered very large platforms under the law. But the list finalized in April includes the most powerful tech companies in the world, and, for those firms, violations can be expensive. The DSA permits EU officials to issue fines worth up to 6% of a very large platform’s global annual revenue. That could mean billions in fines for a company as large as Meta, which last year reported more than $116 billion in revenue.
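    As a rough, back-of-the-envelope sketch of the fine math described above (the 6% cap comes from the DSA; the revenue figure is Meta's reported annual total cited in this article — actual fines would depend on the violation and regulators' discretion):

    ```python
    # Illustrative DSA maximum-fine arithmetic: the law permits fines of
    # up to 6% of a very large platform's global annual revenue.
    DSA_FINE_CAP = 0.06            # 6% cap set by the Digital Services Act
    meta_revenue_2022 = 116e9      # Meta's reported revenue, per the article (> $116B)

    max_fine = DSA_FINE_CAP * meta_revenue_2022
    print(f"Theoretical maximum DSA fine for Meta: ${max_fine / 1e9:.2f} billion")
    # prints: Theoretical maximum DSA fine for Meta: $6.96 billion
    ```

    Even this simple upper bound shows why a single DSA violation could cost a company of Meta's size several billion dollars.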

    Companies have spent months preparing for the deadline. As recently as this month, TikTok rolled out a tool for reporting illegal content and said it would give EU users specific explanations when their content is removed. It also said it would stop showing ads to teens in Europe based on the data the company has collected on them, all to comply with the DSA rules.

    “We’ve been supportive of the objectives of the DSA and the creation of a regulatory regime in Europe that minimizes harm,” said Nick Clegg, Meta’s president of global affairs and a former deputy prime minister of the UK, in a statement Tuesday. He said Meta assembled a 1,000-person team to prepare for DSA requirements. He outlined several efforts by the company including limits on what data advertisers can see on teens ages 13 to 17 who use Facebook and Instagram. He said advertisers can no longer target the teens based on their activity on those platforms. “Age and location is now the only information about teens that advertisers can use to show them ads,” he said.

    In a statement, a Microsoft spokesperson told CNN the DSA deadline “is an important milestone in the fight against illegal content online. We are mindful of our heightened responsibilities in the EU as a major technology company and continue to work with the European Commission on meeting the requirements of the DSA.”

    Snapchat parent Snap told CNN that it is working closely with the European Commission to ensure the company is compliant with the new law. Snap has appointed several dedicated compliance employees to monitor whether it is living up to its obligations, the company said, and has already implemented several safeguards.

    And Apple said in a statement that the DSA’s goals “align with Apple’s goals to protect consumers from illegal and harmful content. We are working to implement the requirements of the DSA with user privacy and security as our continued North Star.”

    Google and Pinterest told CNN they have also been working closely with the European Commission.

    “We share the DSA’s goals of making the internet even more safe, transparent and accountable, while making sure that European users, creators and businesses continue to enjoy the benefits of the web,” a Google spokesperson said.

    A Pinterest spokesperson said the company would “continue to engage with the European Commission on the implementation of the DSA to ensure a smooth transition into the new legal framework.” The spokesperson added: “The wellbeing, safety and privacy of our users is a priority and we will continue to build on our efforts.”

    Many companies should be able to comply with the law, given their existing policies, teams and monitoring tools, according to Robert Grosvenor, a London-based managing director at the consulting firm Alvarez & Marsal. “Europe’s largest online service providers are not starting from ground zero,” Grosvenor said. But, he added: “Whether they are ready to become a highly regulated sector is another matter.”

    EU officials have signaled they will be scrutinizing companies for violations. Earlier this summer, European officials performed preemptive “stress tests” of X, the company formerly known as Twitter, as well as Meta and TikTok to determine the companies’ readiness for the DSA.

    For much of the year, EU Commissioner Thierry Breton has been publicly reminding X of its coming obligations as the company has backslid on some of its content moderation practices. Even as Breton concluded in June that X was taking its stress test seriously, the company had just lost a top content moderation official and had withdrawn from a voluntary EU commitment on disinformation that European officials had said would be part of any evaluation of a platform’s compliance with the DSA.

    X told CNN ahead of Friday’s deadline that it was on track to comply with the new law.

    Analysts anticipate that the EU will be watching even more closely after the deadline — and some hope the rules will either prompt tech platforms to voluntarily apply their EU practices worldwide or push policymakers in other jurisdictions to adopt similar measures.

    “We hope that these new laws will inspire other jurisdictions to act because these are, after all, global companies which apply many of the same practices worldwide,” said Agustin Reyna, head of legal and economic affairs at BEUC, a European consumer advocacy group. “Europe got the ball rolling, but we need other jurisdictions to win the match against tech giants.”

    Already, Amazon has sought to challenge the very large platform label in court, arguing that the DSA’s requirements are geared toward ad-based online speech platforms, that Amazon is a retail platform and that none of its direct rivals in Europe have likewise been labeled, despite being larger than Amazon within individual EU countries.

    The legal fights could present the first major test of the DSA’s durability in the face of Big Tech’s enormous resources. Amazon told CNN that it plans to comply with the EU General Court’s decision, either way.

    “Amazon shares the goal of the European Commission to create a safe, predictable and trusted online environment, and we invest significantly in protecting our store from bad actors, illegal content, and in creating a trustworthy shopping experience,” an Amazon spokesperson said. “We have built on this strong foundation for DSA compliance.”

    TikTok did not immediately respond to a request for comment on this story.


  • ADL says it will resume advertising on X following feud with Elon Musk | CNN Business



    New York
    CNN
     — 

    The Anti-Defamation League on Wednesday said it plans to resume advertising on X, the platform formerly known as Twitter, following a spat with owner Elon Musk.

    Musk last month threatened to sue the ADL for defamation, claiming that the nonprofit organization’s statements about rising hate speech on the social media platform had hurt X’s advertising revenue. ADL CEO Jonathan Greenblatt pushed back on the claims, saying that while the ADL was part of a coalition of groups that called on companies to pause advertising on the platform immediately following Musk’s acquisition last year, it had not been engaged in such calls in recent months.

    Musk’s statements about the group also amplified a campaign of antisemitic hate against the organization that had begun prior to Musk’s legal threat, leading to a surge of threats directed at the ADL, Greenblatt told CNN last month.

    The rights group reiterated in a statement Wednesday that “any allegation that ADL has somehow orchestrated a boycott of X or caused billions of dollars of losses to the company or is ‘pulling the strings’ for other advertisers is false.”

    “Indeed, we ourselves were advertising on the platform until the anti-ADL attacks began a few weeks ago,” the group said. “We now are preparing to do so again to bring our important message on fighting hate to X and its users.”

    Musk responded to the ADL’s statement in a post Wednesday saying, “Thank you for clarifying that you support advertising on X.”

    The statement appears to mark a resolution — for now — to weekslong tension between Musk and the ADL, which has coincided with incidents of antisemitism rising across the United States. But the group says it will continue to monitor for antisemitic content on X.

    “As we have noted in our research over the past several years, X – along with other social media platforms — has a serious issue with antisemites and other extremists using these platforms to push their hateful ideas and, in some cases, bully Jewish and other users,” it said. “A better, healthier, and safer X would be a win for the world … As we do with all platforms, we will credit X as it moves in that direction, and we also will call it out when it has not.”

    The ADL and other similar organizations, including the Center for Countering Digital Hate, have said in reports that the volume of hate speech on the website has grown dramatically under Musk’s stewardship. (Musk has criticized the findings.)

    Two brands in August paused their ad spending on X after their advertisements ran alongside an account promoting Nazism. X suspended the account after the issue was flagged and said ad impressions on the page were minimal.

    X has emphasized its new “freedom of speech, not freedom of reach” policy that aims to limit the reach of so-called lawful but awful content on the platform and to protect brands from having their ads appear alongside such content. CEO Linda Yaccarino has also promoted additional brand safety controls for advertisers, including the ability to avoid having their ads show next to “targeted hate speech, sexual content, gratuitous gore, excessive profanity, obscenity, spam, [and] drugs.”

    Asked about Musk’s threats to sue the ADL in an interview last week, Yaccarino said, “I wish that would be different … We’re looking into that.” She added that the ADL should acknowledge X’s progress on addressing antisemitism.

    It appears the platform may have more work to do. A search on Wednesday for Greenblatt’s name immediately surfaced multiple hateful and antisemitic tweets about the ADL leader.
