ReportWire

Tag: content moderation

  • The EU wants to cure your teen’s smartphone addiction 


    Glazed eyes. One-syllable responses. The steady tinkle of beeps and buzzes coming out of a smartphone’s speakers.

    It’s a familiar scene for parents around the world as they battle with their kids’ internet use. Just ask Věra Jourová: When her 10-year-old grandson is in front of a screen “nothing around him exists any longer, not even the granny,” the transparency commissioner told a European Parliament event in June.

    Countries are now taking the first steps to rein in excessive — and potentially harmful — use of big social media platforms like Facebook, Instagram, and TikTok.

    China wants to limit screen time to 40 minutes for children aged under eight, while the U.S. state of Utah has imposed a digital curfew for minors and parental consent to use social media. France has targeted manufacturers, requiring them to install a parental control system that can be activated when their device is turned on.

    The EU has its own sweeping plans. It’s taking bold steps with its Digital Services Act (DSA) that, from the end of this month, will force the biggest online platforms — TikTok, Facebook, YouTube — to open up their systems to scrutiny by the European Commission and prove that they’re doing their best to make sure their products aren’t harming kids.

    The penalty for non-compliance? A hefty fine of up to six percent of companies’ global annual revenue.

    Screen-sick 

    The exact link between social media use and teen mental health is debated. 

    These digital giants make their money from catching your attention and holding on to it as long as possible, raking in advertisers’ dollars in the process. And they’re pros at it: endless scrolling, combined with the periodic but unpredictable feedback from likes or notifications, doles out hits of stimulation that mimic the effect of slot machines on our brains’ wiring.

    It’s a craving that’s hard enough for adults to manage (just ask a journalist). The worry is that for vulnerable young people, that pull comes with very real, and negative, consequences: anxiety, depression, body image issues, and poor concentration. 

    Large mental health surveys in the U.S. — where the data is most abundant — have found a noticeable increase over the last 15 years in adolescent unhappiness, a tendency that continued through the pandemic.

    These increases cut across a number of measures: suicidal thoughts, depression, but also more mundanely, difficulties sleeping. This trend is most pronounced among teenage girls. 

    Smartphone use has exploded, with more people getting one at a younger age | Sean Gallup/Getty Images

    At the same time smartphone use has exploded, with more people getting one at a younger age. Social media use, measured as the number of times a given platform is accessed per day, is also way up. 

    There are some big caveats. The trend is most visible in the Anglophone world, although it’s also observable elsewhere in Europe. And there’s a whole range of confounding factors. Waning stigma around mental health might mean that young people are more comfortable describing what they’re going through in surveys. Changing political and socio-economic factors, as well as worries about climate change, almost certainly play a role. 

    Researchers on all sides of the debate agree that technology factors into it, but also that it doesn’t fully explain the trend. They diverge on where to put the emphasis. 

    Luca Braghieri, an assistant professor of economics at Bocconi University in Italy, said he originally thought concerns over Facebook were overblown, but he changed his mind after starting to research the topic (and has since deleted his Facebook account).

    Braghieri and his colleagues combed through U.S. college mental health surveys from 2004-2006, the period when Facebook was first rolled out in U.S. colleges, before it was available to the general public. They found that in colleges where Facebook was introduced, students’ mental health dipped in a way not seen in universities where it hadn’t yet launched.

    Braghieri said the comparison with colleges where Facebook hadn’t yet arrived allowed the researchers to rule out other, unidentified variables that might have changed at the same time.

    Faced with mounting pressure in recent years, platforms like Instagram, YouTube and TikTok have introduced various tools to assuage concerns, including parental controls | Staff/AFP via Getty Images

    Elia Abi-Jaoude, a psychiatrist and academic at the University of Toronto, said he observed the effect first-hand when working at a child and adolescent psychiatric in-patient unit starting in 2015.

    “I was basically on the front lines, witnessing the dramatic rise in struggles among adolescents,” said Abi-Jaoude, who has also published research on the topic. He noticed “all sorts of affective complaints, depression, anxiety — but for them to make it to the inpatient setting — we’re talking suicidality. And it was very striking to see.”  

    His biggest concern? Sleep deprivation — and the mood swings and worse school performance that accompany it. “I think a lot of our population is chronically sleep deprived,” said Abi-Jaoude, pointing the finger at smartphones and social media use.

    The flipside    

    New technologies have gotten caught up in panics before. Looking back, they now seem quaint, even funny.   

    “In the 1940s, there were concerns about radio addiction and children. In the 1960s it was television addiction. Now we have phone addiction. So I think the question is: Is now different? And if so, how?” asks Amy Orben, from the U.K. Medical Research Council’s Cognition and Brain Sciences Unit at the University of Cambridge.  

    She doesn’t dismiss the possible harms of social media, but she argues for a nuanced approach. That means homing in on the specific people who are most vulnerable, and the specific platforms and features that might be most risky.

    Another major ask: more data.  

    There’s a “real disconnect” between the general belief and the actual evidence that social media use is harmful, said Orben, who went on to praise the EU’s new rules. Among their various provisions, the rules will for the first time allow researchers to get their hands on data usually buried deep inside company servers.

    Orben said that while much attention has focused on the negative effects of digital media use at the expense of positive examples, research she conducted into adolescent well-being during pandemic lockdowns, for example, showed that teens with access to laptops were happier than those without.

    But when it comes to risk of harm to kids, Europe has taken a precautionary approach.

    “Not all kids will experience harm due to these risks from smartphones and social media use,” Patti Valkenburg, head of the Center for Research on Children, Adolescents and the Media at the University of Amsterdam, told a Commission event in June. “But for minors, we need to adopt the precautionary principle. The fact that harm can be caused should be enough to justify measures to prevent or mitigate potential risk.”

    Parental controls  

    Faced with mounting pressure in recent years, platforms like Instagram, YouTube and TikTok have introduced various tools to assuage concerns, including parental controls. Since 2021, YouTube and Instagram have sent teenagers using their platforms reminders to take breaks. TikTok announced in March that minors must enter a passcode after an hour on the app to continue watching videos.

    Very large online platforms will also be banned from tracking kids’ online activity to show them personalized advertisements | Lionel Bonaventure/AFP via Getty Images

    But the social media companies will soon have to go further.  

    By the end of August, very large online platforms with over 45 million users in the European Union — including companies like Instagram, Snapchat, TikTok, Pinterest and YouTube — will have to comply with the longest list of rules. 

    They will have to hand in to the Digital Services Act watchdog — the European Commission — their first yearly assessment of the major impact of their design, algorithms, advertising and terms of service on a range of societal issues such as the protection of minors and mental wellbeing. They will then have to propose and implement concrete measures under the scrutiny of an audit company, the Commission and vetted researchers.

    Measures could include ensuring that algorithms don’t recommend videos about dieting to teenage girls or turning off autoplay by default so that minors don’t stay hooked watching content.

    Platforms will also be banned from tracking kids’ online activity to show them personalized advertisements. Manipulative designs such as never-ending timelines to glue users to platforms have been connected to addictive behavior, and will be off limits for tech companies. 

    Brussels is also working with tech companies, industry associations and children’s groups on rules for how to design platforms in a way that protects minors. The Code of Conduct on Age Appropriate Design planned for 2024 would then provide an explicit list of measures that the European Commission wants to see large social media companies carry out to comply with the new law.

    Yet the EU’s new content law won’t be the magic wand parents might be looking for. The content rulebook doesn’t apply to popular entertainment like online games, nor to messaging apps or the digital devices themselves.

    It remains unclear how the European Commission will investigate and go after social media companies if it considers that they have failed to limit their platforms’ negative consequences for mental well-being. External auditors and researchers could also face obstacles wading through troves of data and lines of code to find smoking guns and challenge tech companies’ claims.

    How much companies are willing to run up against their business model in the service of their users’ mental health is also an open question, said John Albert, a policy expert at the tech-focused advocacy group AlgorithmWatch. Tech giants have made a serious effort at fighting the most egregious abuses, like cyber-bullying, or eating disorders, Albert said. And the level of transparency made possible by the new rules was unprecedented.

    “But when it comes to much broader questions about mental health and how these algorithmic recommender systems interact with users and affect them over time… I don’t know what we should expect them to change,” he explained. The back-and-forth vetting process is likely going to be drawn out as the Commission comes to grips with the complex platforms.

    “In the short term, at least, I would expect some kind of business as usual.”

    Carlo Martuscelli and Clothilde Goujard

  • TikTok to face European privacy fine by September


    TikTok is set to face a privacy fine by early September for its handling of teenagers’ and children’s data, according to three people with knowledge of the matter.

    Europe’s network of national privacy regulators, the European Data Protection Board (EDPB), on Wednesday resolved disagreements among agencies in an investigation into the popular video-sharing platform used by 125 million people in the bloc.

    Their decision kicks off a process giving TikTok’s lead privacy regulator in the EU, the Irish Data Protection Commission, a month to issue the final penalty and any potential measures. The size and details of the fine are unknown.

    The Irish data authority in 2021 started probing whether TikTok was respecting children’s privacy under the requirements of the EU’s landmark privacy rulebook, the General Data Protection Regulation (GDPR).

    The Irish regulator wanted to check whether the Chinese-owned app ensured its default settings sufficiently protected children’s privacy and if the company was transparent enough in how it processed minors’ data. One of the trickiest points has also been TikTok’s age-verification practices, intended to keep minors under 13 off its platform. TikTok is supervised by the Irish Data Protection Commission because its EU headquarters are in the country.

    The Irish DPC sent the case to the EDPB in May following disagreements with its German and Italian counterparts.

    “We’ve yet to receive the final decision so we’re not in a position to comment,” said a TikTok spokesperson.

    TikTok in 2021 received a €750,000 fine from the Dutch data protection authority for failing to protect Dutch children’s privacy by not having a privacy policy in their native language. The company is also being investigated by Ireland over the potentially unlawful shipping of European users’ data to China.

    Clothilde Goujard

  • As France burns, Macron blames social media for fanning the flames


    PARIS — French rioters have set the country on fire and Emmanuel Macron is pointing the finger at TikTok and Snapchat for pouring gasoline on the inferno.

    In the past three days, violent protests erupted across France after a police officer in a Paris suburb shot and killed 17-year-old Nahel M., who was of North African descent. Rioters targeted public buildings, transport systems and shops with projectiles and Molotov cocktails, leaving 249 members of law enforcement injured and 875 people arrested. 

    Unlike the deadly outbreak of violence in 2005, the turmoil — which has led to public transportation shutdowns, concert cancellations and armored vehicles being deployed across the country — can be documented in real time, shared online and seen by tens of thousands on social media platforms such as TikTok, Snapchat and Twitter.

    That online phenomenon is worrying France’s political leaders, who have been scurrying to find solutions as the unrest shows no sign of fizzling out.

    “We’ve seen violent gatherings organized on several [social media platforms] — but also a kind of mimicry of violence,” French President Emmanuel Macron said Friday after a government crisis meeting. He accused younger rioters of exiting reality and “living the video games that have intoxicated them.”

    The French president wants tech companies to delete violent content and provide law enforcement with the identity of protesters who use social media to stoke — and exacerbate — the disorder. “I expect these platforms to be responsible,” he said. 

    According to research by France’s most-watched news channel BFM, TikTok and Snapchat were flooded Friday morning with videos from the rioting and looting across France. On TikTok, hashtags linked to the riots were pushed by the platform’s algorithm. Police officials also told BFM some protesters coordinate and communicate in real time through messaging services on WhatsApp and Telegram via online tools that did not exist in 2005, when riots left hundreds of public buildings damaged and thousands of cars burned.

    The government is scheduled to meet with social media platforms Friday evening, where company executives will be pressed to cooperate.

    Some, however, say social media platforms are unfairly blamed by grandstanding politicians who should focus their attention elsewhere.

    On Friday, the U.N.’s human rights office weighed in, saying France needs to address “issues of racism and discrimination in law enforcement,” referring to the killing of the teenager.

    Tech has long been used to coordinate demonstrations and protests, political communications expert Philippe Moreau Chevrolet told POLITICO, adding that the government would be “terribly out of touch” to respond to the crisis by focusing on tech companies and video games.

    “Text messages used to be accused [of facilitating riots], now it’s social networks. Yellow Vests protests were blamed on Facebook,” Moreau Chevrolet said.

    Two sides of the coin

    But the role of online platforms goes beyond showcasing fires and looting, and helping rioters get organized. This week’s violent unrest began with a video that was, of course, posted on social media.

    “There’s clearly been a change, with more and more people adopting the reflex of filming the police. Above all, the activists’ community is now able to quickly and widely circulate the videos,” said Magda Boutros, a sociology scholar at the University of Washington who studied activism against police violence in France.

    When a police officer shot and killed Nahel M. (the name by which he has been identified publicly) on Tuesday, media reports originally relied on law enforcement sources claiming a driver threatened the police officer’s life. But a video, filmed by a bystander and posted on Twitter, showed a different story: Two cops stood next to a car and one shot the driver at close range.

    Another recent incident (crucially, not filmed) showed the power of social media both to hold violent police officers accountable and to set a country on fire, or not.

    Two weeks ago, a teenager died in similar circumstances as Nahel M. in the Charente region of western France. The young man was reportedly shot dead by a police officer for refusing to comply.

    That went relatively unnoticed, explained former French MP Thomas Mesnier, because Charente is in a more remote area compared to the dense banlieues of the French capital.

    It also went unnoticed, Mesnier said, because “there was no video that went viral on social networks, participating in and reinforcing people’s emotions and sense of dread.”

    Elisa Bertholomey contributed reporting.

    POLITICO Europe

  • YouTube Reverses Ban On 2020 Election Denial As 2024 Race Ramps Up


    YouTube announced Friday that it would no longer remove election lies from its platform as former President Donald Trump and the MAGA-faithful continue to deny the results of the 2020 presidential election.

    In a statement released on an official company blog, one of the world’s largest video platforms cited the “ability to openly debate political ideas, even those that are controversial or based on disproven assumptions,” as the reason for the change. A 2020 Pew Research study found that a quarter of American adults get their news from the platform. 

    “Two years, tens of thousands of video removals, and one election cycle later, we recognized it was time to reevaluate the effects of this policy in today’s changed landscape,” Google-owned YouTube said. 

    “With that in mind, and with 2024 campaigns well underway, we will stop removing content that advances false claims that widespread fraud, errors, or glitches occurred in the 2020 and other past US Presidential elections.”

    In its statement, the company clarified that it would continue to remove content that misleads voters about the voting process. 

    YouTube announced the policy in December 2020, just under a month before the January 6th attack on the U.S. Capitol. A study from the independent research group Transparency.tube found that videos peddling election lies garnered more than 137 million views during election week. Those videos frequently spread to other social media platforms, comprising about one-third of all election-related videos posted to Twitter in November 2020. But after YouTube introduced the policy, the number of election fraud videos shared on social media declined, The New York Times reported.

    In a statement responding to the change, Julie Millican, vice president of liberal watchdog Media Matters for America, noted that YouTube was “one of the last major social media platforms to keep in place a policy attempting to curb 2020 election misinformation.” Twitter stopped suspending, banning, or fact-checking users spreading election lies in March 2021, while Facebook reduced its efforts to quell the spread of misinformation in the lead-up to the 2022 midterms. This March, YouTube reinstated Trump’s account, following Meta and Twitter’s lead.

    YouTube “is now allowing people to say whatever they wish about the 2020 election,” far-right Republican congresswoman Lauren Boebert tweeted on Saturday, responding to the news. “Looks like even YouTube is ready for people to start talking TRUTH again.” 

    Jack McCordick

  • Are Bluesky Social’s Good Vibes Doomed?


    Bluesky, the hot new invite-only Twitter look-alike, was supposed to provide a much-needed reprieve from an otherwise toxic social media ecosystem. But by the time I joined Bluesky, in early May, I wondered if the party was over. For the uninitiated, Bluesky began in 2019 as a decentralized social media experiment at Twitter and separated into its own company last year, with former Twitter CEO Jack Dorsey as a board member. (By “decentralized,” the company means it’s creating an open-source protocol for building social apps — Bluesky Social being one of them.) In recent months — namely after Elon Musk’s Twitter takeover — the site has become a playground for those in media, politics, and tech deemed capable of ushering a new platform to the masses. Outlets like Wired and Rolling Stone highlighted the app as a pleasant possible alternative to Twitter. It began as a playful, punny, and free environment that looked a whole lot like Twitter, but where posts were “skeets” and ranged from quaint pictures of blue skies to nudes.

    But one exchange between Bluesky CEO Jay Graber and Bluesky users has suggested that the app has yet to actually grapple with the difficult questions around content moderation that have roiled other social media platforms. It began when multiple users called for the removal of a user with the handle @commie.cafe, who had allegedly deadnamed trans women, harassed and doxxed them, and engaged in other harmful behavior. In response to users’ concerns, Graber wrote, “We’re watching, and will take action based on behavior. Blocks prevent interaction.” That reply prompted many to question why the company wouldn’t take its users’ concerns seriously and act proactively against users accused of engaging in harmful behavior on other platforms. 

    “How many people have to directly inform you of the presence of a dangerous, toxic person before you are willing to stop watching and take action?” one user wrote in response to Graber. 

    “A lot of folks are scared/worried here, especially after years of Twitter not really dealing with this stuff well. Don’t be Twitter, be better,” another user replied to Graber. The user accused of transphobic acts appears to no longer be on the platform. Bluesky did not respond to a request for comment on the incident, nor did it answer a question regarding whether the company took any action against the user.

    This not only offered a glimpse into Bluesky’s content-moderation approach, it also called into question whether the company would take any steps to preserve its good vibes. Its nine-person team is building a platform with a wait list of 1.9 million email addresses, belonging to those seeking to join the more than 72,000 users on the invite-only beta version of the app. But as users flock to the app for its potential as a replacement for Twitter, some early users wonder whether the platform can continue being a welcome relief from harassment, hate speech, and graphic content. Or will it ultimately make mistakes similar to those of its predecessors?

    The company remains mum on its plans for dealing with these issues going forward, aside from posting some details on its Frequently Asked Questions page. I reached out to the company’s spokesperson with a detailed list of questions regarding whether the company would prioritize users of marginalized backgrounds, the degree to which it would enforce its content-moderation policies, and what investments it would make in content moderation. But a spokesperson said the company is not granting interviews, because everyone is “heads down on work.”

    On its site, Bluesky notes that it plans to use automated filtering, manual administrator actions, and community labeling to moderate the platform. In addition to its basic filtering for objectionable content, the company wants to enable users and developers to add additional filters and other moderation controls on top. In another post, Graber notes that developers running their own servers will be able to set their own content-moderation policies at the server and community levels, “but I need it to be calm enough for long enough that we can build out the rest of the system to give people more direct controls.” The company declined to say whether it plans to hire more human moderators and implement additional measures to protect users who are part of marginalized communities, especially as the user base grows.

    Twitter, Instagram, Facebook, and other prominent social media platforms made the mistake of underestimating the extent to which dangerous online rhetoric could lead to offline harm, Yoel Roth, Twitter’s former head of trust & safety and a tech policy fellow at the University of California, Berkeley, said. And while it’s not feasible for a platform to take a totally localized approach to content moderation as it expands abroad, Roth said he hopes the next generation of social platforms will take seriously what has and hasn’t worked for their veteran predecessors. “One of the promises of federated platforms like Bluesky is that it can give people more choices about what goes and what doesn’t,” Roth said, referencing Bluesky’s idea to give creators independence from the platform itself. “But you still have to draw that line somewhere, of what doesn’t go anywhere, and that’s the battlefield of content moderation.”

    As for AI helping with some content-moderation functions, Miro Dittrich, a senior researcher at the Center for Monitoring, Analysis and Strategy, said that the technology cannot be trusted to work at scale on its own, as has been true for other social platforms. Roth agreed: If Bluesky does use AI as part of its content moderation, companies should test these tools before building their whole moderation strategy around them, he said. Enabling developers to create their own interfaces to set content boundaries could have unintended consequences too. If, for example, a user doxxes someone or posts nonconsensual sexual imagery, those posts could be de-indexed so that Bluesky users can’t view them, but the images could still be available on someone’s personal server and end up on the internet; it’s not certain whether that’s a sufficient solution, said Sol Messing, a research associate professor at New York University and former discovery data science lead at Twitter.

    Tatiana Walk-Morris

  • What the hell is wrong with TikTok? 


    Western governments are ticked off with TikTok. The Chinese-owned app loved by teenagers around the world is facing allegations of facilitating espionage, failing to protect personal data, and even of corrupting young minds.

    Governments in the United States, United Kingdom, Canada, New Zealand and across Europe have moved to ban the use of TikTok on officials’ phones in recent months. If hawks get their way, the app could face further restrictions. The White House has demanded that ByteDance, TikTok’s Chinese parent company, sell the app or face an outright ban in the U.S.

    But do the allegations stack up? Security officials have given few details about why they are moving against TikTok. That may be due to sensitivity around matters of national security, or it may simply indicate that there’s not much substance behind the bluster.

    TikTok’s Chief Executive Officer Shou Zi Chew will be questioned in the U.S. Congress on Thursday and can expect politicians from all sides of the spectrum to probe him on TikTok’s dangers. Here are some of the themes they may pick up on: 

    1. Chinese access to TikTok data

    Perhaps the most pressing concern is around the Chinese government’s potential access to troves of data from TikTok’s millions of users. 

    Western security officials have warned that ByteDance could be subject to China’s national security legislation, particularly the 2017 National Intelligence Law that requires Chinese companies to “support, assist and cooperate” with national intelligence efforts. This law is a blank check for Chinese spy agencies, they say.

    TikTok’s user data could also be accessed by the company’s hundreds of Chinese engineers and operations staff, any one of whom could be working for the state, Western officials say. In December 2022, some ByteDance employees in China and the U.S. targeted journalists at Western media outlets using the app (and were later fired). 

    EU institutions banned their staff from having TikTok on their work phones last month. An internal email sent to staff of the European Data Protection Supervisor, seen by POLITICO, said the move aimed “to reduce the exposure of the Commission from cyberattacks because this application is collecting so much data on mobile devices that could be used to stage an attack on the Commission.” 

    And the Irish Data Protection Commission, TikTok’s lead privacy regulator in the EU, is set to decide in the next few months if the company unlawfully transferred European users’ data to China. 

    Skeptics of the security argument say that the Chinese government could simply buy troves of user data from little-regulated brokers. American social media companies like Twitter have had their own problems preserving users’ data from the prying eyes of foreign governments, they note. 

    TikTok says it has never given data to the Chinese government and would decline if asked to do so. Strictly speaking, ByteDance is incorporated in the Cayman Islands, which TikTok argues would shield it from legal obligations to assist Chinese agencies. ByteDance is owned 20 percent by its founders and Chinese investors, 60 percent by global investors, and 20 percent by employees. 

    There’s little hope of completely stopping European data from going to China | Alex Plavevski/EPA

    The company has unveiled two separate plans to safeguard data. In the U.S., Project Texas is a $1.5 billion plan to build a wall between the U.S. subsidiary and its Chinese owners. The €1.2 billion European version, named Project Clover, would move most of TikTok’s European data onto servers in Europe.

    Nevertheless, TikTok’s chief European lobbyist Theo Bertram also said in March that it would be “practically extremely difficult” to completely stop European data from going to China.

    2. A way in for Chinese spies

    If Chinese agencies can’t access TikTok’s data legally, they can just go in through the back door, Western officials allege. China’s cyber-spies are among the best in the world, and their job will be made easier if datasets or digital infrastructure are housed in their home territory.

    Dutch intelligence agencies have advised government officials to uninstall apps from countries waging an “offensive cyber program” against the Netherlands — including China, but also Russia, Iran and North Korea.

    Critics of the cyber espionage argument refer to a 2021 study by the University of Toronto’s Citizen Lab, which found that the app did not exhibit the “overtly malicious behavior” that would be expected of spyware. Still, the director of the lab said researchers lacked information on what happens to TikTok data held in China.

    TikTok’s Project Texas and Project Clover include steps to assuage fears of cyber espionage, as well as legal data access. The EU plan would give a European security provider (still to be determined) the power to audit cybersecurity policies and data controls, and to restrict access to some employees. Bertram said this provider could speak with European security agencies and regulators “without us [TikTok] being involved, to give confidence that there’s nothing to hide.” 

    Bertram also said the company was looking to hire more engineers outside China. 

    3. Privacy rights

    Critics of TikTok have accused the app of mass data collection, particularly in the U.S., where there are no general federal privacy rights for citizens.

    In jurisdictions that do have strict privacy laws, TikTok faces widespread allegations of failing to comply with them.

    The company is being investigated in Ireland, the U.K. and Canada over its handling of underage users’ data. Watchdogs in the Netherlands, Italy and France have also investigated its privacy practices around personalized advertising and for failing to limit children’s access to its platform. 

    TikTok has denied accusations leveled in some of the reports and argued that U.S. tech companies collect similarly large amounts of data. Meta, Amazon and others have also been given large fines for violating Europeans’ privacy.

    4. Psychological operations

    Perhaps the most serious accusation, and certainly the most legally novel one, is that TikTok is part of an all-encompassing Chinese civilizational struggle against the West. Its role: to spread disinformation and stultifying content in young Western minds, sowing division and apathy.

    Earlier this month, the director of the U.S. National Security Agency warned that Chinese control of TikTok’s algorithm could allow the government to carry out influence operations among Western populations. TikTok says it has around 300 million active users in Europe and the U.S. The app ranked as the most downloaded in 2022.

    A woman watches a video of Egyptian influencer Haneen Hossam | Khaled Desouki/AFP via Getty Images

    Reports emerged in 2019 suggesting that TikTok was censoring pro-LGBTQ content and videos mentioning Tiananmen Square. ByteDance has also been accused of pushing inane time-wasting videos to Western children, in contrast to the wholesome educational content served on its Chinese app Douyin.

    Besides accusations of deliberate “influence operations,” TikTok has also been criticized for failing to protect children from addiction to its app, dangerous viral challenges, and disinformation. The French regulator said last week that the app was still in the “very early stages” of content moderation. TikTok’s Italian headquarters was raided this week by the consumer protection regulator with the help of Italian law enforcement to investigate how the company protects children from viral challenges.

    Researchers at Citizen Lab said that TikTok doesn’t enforce obvious censorship. Other critics of this argument have pointed out that Western-owned platforms have also been manipulated by foreign countries, such as Russia’s campaign on Facebook to influence the 2016 U.S. elections. 

    TikTok says it has adapted its content moderation since 2019 and regularly releases a transparency report about what it removes. The company has also touted a “transparency center” that opened in the U.S. in July 2020 and one in Ireland in 2022. It has also said it will comply with new EU content moderation rules, the Digital Services Act, which will require platforms to give regulators and researchers access to their algorithms and data.

    Additional reporting by Laura Kayali in Paris, Sue Allan in Ottawa, Brendan Bordelon in Washington, D.C., and Josh Sisco in San Francisco.

    Clothilde Goujard

  • France aims to protect kids from parents oversharing pics online

    PARIS — French parents had better think twice before posting too many pictures of their offspring on social media.

    On Tuesday, members of the National Assembly’s law committee unanimously green-lit draft legislation to protect children’s rights to their own images.

    “The message to parents is that their job is to protect their children’s privacy,” Bruno Studer, an MP from President Emmanuel Macron’s party who put the bill forward, said in an interview. “On average, children have 1,300 photos of themselves circulating on social media platforms before the age of 13, before they are even allowed to have an account,” he added.

    The French president and his wife Brigitte have made child protection online a political priority. Lawmakers are also working on age-verification requirements for social media and rules to limit kids’ screen time.

    Studer, who was first elected in 2017, has made a career out of child safety online. In the past few years, he authored two groundbreaking pieces of legislation: one requiring smartphone and tablet manufacturers to give parents the option to control their children’s internet access, and another introducing legal protections for YouTube child stars.

    So-called sharenting (combining “sharing” and “parenting,” referring to posting sensitive pictures of one’s kids online) constitutes one of the main risks to children’s privacy, according to the bill’s explanatory statement. Half of the pictures shared by child sexual abusers were initially posted by parents on social media, according to reports by the National Center for Missing and Exploited Children, mentioned in the text.

    The legislation adopted on Tuesday adds the protection of children’s privacy to parents’ legal duties. Both parents would be jointly responsible for their offspring’s image rights and “shall involve the child … according to his or her age and degree of maturity.”

    In case of disagreement between parents, a judge can ban one of them from posting or sharing a child’s pictures without authorization from the other. And in the most extreme cases, parents can lose their parental authority over their kids’ image rights “if the dissemination of the child’s image by both parents seriously affects the child’s dignity or moral integrity.”

    The bill still needs to go through a plenary session next week and the Senate before it would become law.

    Laura Kayali

  • Thierry Breton: Brussels’ bulldozer digs in against US

    Thierry Breton is winning the war of ideas in Brussels.

    The ex-CEO is a political whirlwind with a gigantic portfolio as internal market chief, the backing of French President Emmanuel Macron and lots of proposals. He’s been touring European Union capitals to win support for plans to shield Europe’s industry from crippling energy prices, American subsidies and “naive” EU free traders.

    France’s decades-long push for more state intervention is finally finding some echo in Berlin and the 13th floor of the Berlaymont building, occupied by European Commission President Ursula von der Leyen, who largely owes her job to Macron.

    Omnipresent and ebullient, Breton is playing a key role in marshaling industry and political support for sweeping but so far vague plans to boost clean tech, secure key raw materials and overhaul EU checks on government support that he blasts as too slow to help companies.

    “Of course there is resistance; my job is precisely to manage and align everyone,” he told French TV this week of his January meetings with Spanish, Polish and Belgian leaders to flog a forthcoming industrial policy push that could be a turning point in how far European governments will finance companies.

    Time is short. Von der Leyen wants to line up proposals for a February summit. European industry is complaining that it can’t swallow far higher energy prices and tighter regulation for much longer, with at least one company announcing a European shutdown and an Asian expansion.

    Breton said governments don’t need convincing on the need for rapid action. But he’s running up against one of Europe’s sacred cows — EU state aid rules run by Executive Vice President Margrethe Vestager that curb government support with lengthy checks to make sure companies don’t get unfair help. She’s also under intense pressure to preserve a “level playing field” as smaller countries worry about German and French financial firepower.

    The French internal market commissioner’s bullish style often sees him act as if he’s got a role in subsidies. In the fall, he sent a letter to EU countries asking them to send views on emergency state aid rules to the internal market department, which is under his supervision, two EU officials recalled. 

    In a meeting with European diplomats, a Commission representative had to correct the instruction, the EU officials said, asking capitals to make sure the input went instead to the competition department overseen by Vestager. 

    Europe First

    While Breton doesn’t like to be called a protectionist, his latest mission has been to protect Europe from its transatlantic friend.

    As early as September, one Commission official said, the Frenchman was mandated by Europe’s industry to speak out against U.S. President Joe Biden’s Inflation Reduction Act, which provides tax credits for U.S.-made electric cars and support to American battery supply chains.

    U.S President Joe Biden gives remarks during an event celebrating the passage of the Inflation Reduction Act on September 13, 2022 | Anna Moneymaker/Getty Images

    His Paris-backed campaign charged ahead while EU officials and diplomats tiptoed around the subject. Some within the Commission headquarters found his bad cop routine helpful in keeping pressure on the U.S. 

    “He’s been constructive, though clearly disruptive,” said Tyson Barker, head of the technology and global affairs program at the German Council of Foreign Relations.

    The Frenchman has even pitched himself as the bloc’s “sheriff” against Silicon Valley giants, warning billionaire Elon Musk that an overhaul of the Twitter social network can only go so far since “in Europe, the bird will fly by our rules.”

    “Big Tech companies only understand balances of power,” said Cédric O, a former French digital minister who worked with Breton during the French EU Council presidency. “When [Breton and Musk] see each other, it necessarily remains cordial, but Breton shows his teeth and rightly so. It’s his job.”

    Breton can even surprise his own services, according to two EU officials. In May, the Commission’s department responsible for digital policy — DG CONNECT — was caught off guard when Breton announced in the press that he would unveil plans by year-end to make sure that technology giants forked out for telecoms networks. 

    In so doing, Breton — who was CEO of France Télécom in the early 2000s — resurrected a long-dormant and fractious policy debate that had been put to rest almost a decade ago, when erstwhile Digital Commissioner Neelie Kroes ordered Europe’s telecoms operators to “adapt or die” rather than seek money from content providers.

    After Breton’s commitments, the Commission’s services were soon scrambling to develop some sort of a coherent policy program to deliver on the Frenchman’s comments. A consultation is scheduled for early this year. 

    Carte blanche

    Breton is a rare creature in the halls of the Berlaymont, where policy is hatched slowly after extensive consultation. To a former CEO with a broad remit — his portfolio runs from the expanse of space to the tiniest of microchips — rapid reaction matters more than worries about treading on toes or singing from the same hymn sheet. This often sees him floating ideas and then pulling back.

    Last year he alarmed environmentalists by raising the prospect of a U-turn on the EU’s polluting car ban. He wagged his finger at German Chancellor Olaf Scholz for a solo trip to China. He called for nuclear energy to be considered green. He has pushed out grand projects — such as industrial alliances on batteries and cloud, or a cyber shield — that he doesn’t always follow up on.

    He’s even pushed forward a multibillion-euro EU communication satellite program dubbed Iris², a favorite of French aerospace companies, that will see the bloc build a rival to Musk’s space-based Starlink broadband constellation.

    “It’s clear that he’s been given more free rein than others,” said one EU official. “He has von der Leyen’s ear,” the official added, noting that Breton enjoys “privileged access” to the Commission president — who may be mindful that she’ll need French support for a second term.

    According to an official, Breton “has von der Leyen’s ear” and enjoys “privileged access” to the Commission president | Valeria Mongeli/AFP via Getty Images

    Indeed, Breton’s massive role was partly designed as a counterweight to a German president.

    “There is a criticism of von der Leyen for being too German,” explained Sébastien Maillard, director of the Jacques Delors Institute think tank. “There may inevitably be a division of roles between them — [where Breton is] a counterbalance.”

    He’s been called an “unguided missile,” but more often than not, the Frenchman has Paris’ backing when going off script. His October op-ed with Italian colleague Paolo Gentiloni, which called for greater European financial solidarity, was part of France’s agenda, according to one high-ranking Commission official.

    “When he went out in the press with Gentiloni against Scholz’s €200 billion, he was clearly doing the job for Macron,” the official said. 

    His November call for a rethink on the 2035 car engine ban came just a week after critical green legislation had been finalized by Commission Executive Vice President Frans Timmermans, and jarred with the EU’s own position at the COP27 climate summit in Egypt. But it aped the position of French auto industry captains, such as Stellantis CEO Carlos Tavares and Renault’s Luca de Meo, who wanted Brussels to slam the brakes on the climate drive.

    Breton had not coordinated his car comments with colleagues in advance, according to two Commission officials.

    Less than 10 days later, French Prime Minister Elisabeth Borne echoed caution about the “extremely ambitious” engine ban and warned that pivoting to electric car manufacturing was daunting.

    Going A-list

    Breton acknowledged himself that he wasn’t Macron’s first choice for the critical EU post, telling POLITICO at a live event that he was a “plan B commissioner.”

    Asked if he was targeting an A-list job for the new Commission mandate in 2024, he said he “may be able to consider a new plan B assignment — if it is a plan B.”

    “He is thinking about the future,” said one EU official. “Look at his LinkedIn posts. He is thinking past the next European elections. He definitely wants to convince Macron to get an expanded portfolio.” 

    Grabbing the Commission’s top job may be tricky, depending on how EU leaders line up, according to multiple EU and French officials. 

    There are other possibilities, including overturning the unwritten rule that no French or German candidate can hold the economically powerful competition portfolio. Another option could be becoming Europe’s official digital czar, combining the enforcement powers of the Digital Services Act and the Digital Markets Act into a supranational digital enforcement agency, one EU official said.

    Breton has shrugged off speculation on his long-term plans.

    “All my life, I have been informed of my next potential job 15 minutes before,” he said last month.

    Jakob Hanke Vela, Stuart Lau, Barbara Moens, Camille Gijs and Mark Scott contributed reporting.

    Laura Kayali, Samuel Stolton and Joshua Posaner

  • Europe turns on TikTok

    In the United States, TikTok is a favorite punching bag for lawmakers who’ve compared the Chinese-owned app to “digital fentanyl” and say it should be banned.

    Now that hostility is spreading to Europe, where fears about children’s safety and reports that TikTok spied on journalists using their IP locations are fueling a backlash against the video-sharing app used by more than 250 million Europeans.

    As TikTok Chief Executive Shou Zi Chew heads to Brussels on Tuesday to meet with top digital policymaker Margrethe Vestager amid a wider reappraisal of EU ties with China, his company faces a slew of legal, regulatory and security challenges in the bloc — as well as a rising din of public criticism.

    One of the loudest critics is French President Emmanuel Macron, who has called TikTok “deceptively innocent” and a cause of “real addiction” among users, as well as a source of Russian disinformation. Such comments have gone hand-in-hand with aggressive media coverage in France, including Le Parisien daily’s December 29 front page calling TikTok “A real danger for the brains of our children.”

    New restrictions may be in order. During a trip to the United States in November, Macron told a group of American investors and French tech CEOs that he wanted to regulate TikTok, according to two people in the room. TikTok denies it is harmful and says it has measures to protect kids on the app.

    While it wasn’t clear what rules Macron was referring to — his office declined to comment — the remarks added to a darkening tableau for TikTok. In addition to two EU-wide privacy probes that are set to wrap up in coming months, TikTok has to contend with extensive new requirements on content moderation under the bloc’s new digital rulebook, the DSA, from mid-2023 — as well as the possibility of being caught up in the bloc’s new digital competition rulebook, the Digital Markets Act.

    In answers to emailed questions, France’s digital minister Jean-Noel Barrot said that France would rely on the DSA and DMA to regulate TikTok at an EU level, though he “remained vigilant on these ever-evolving models” of ad-supported social media. Barrot added that he “never failed to maintain a level of pressure appropriate to the stakes of the DSA” in meetings with TikTok executives.

    Ahead of Chew’s visit to Brussels, Thierry Breton, the bloc’s internal market commissioner, warned him about the need to “respect the integrality of our rules,” according to comments the commissioner made in Spain, reported by Reuters. A spokesperson for Vestager said she aimed to “review how the company was preparing for complying with its (possible) obligations under our regulation.”

    That said, the probes TikTok is facing deal with suspected violations that have already taken place. If Ireland’s data regulator, which leads investigations on behalf of other EU states, finds that TikTok has broken the bloc’s privacy rulebook, the General Data Protection Regulation, fines could amount to up to 4 percent of the firm’s global turnover. Penalties can be even higher under the DSA, which starts applying to big platforms in mid-2023.

    Spying fears

    And yet, having to fork over a few million euros could be the least of TikTok’s troubles in Europe, as some lawmakers here are following their U.S. peers to call for much tougher restrictions on the app amid fears that data from TikTok will be used for spying.

    TikTok is under investigation for sending data on EU users to China — one of two probes being led by Ireland. Reports that TikTok employees in China used TikTok data to track the movements of two Western journalists only intensified spying fears, especially in privacy-conscious Germany. (TikTok acknowledged the incident and fired four employees over what it said was unauthorized access to user data.)

    One of the loudest critics is French President Emmanuel Macron, who has called TikTok “deceptively innocent” and a cause of “real addiction” among users | Pool photo by Ludovic Marin/AFP via Getty Images

    Citing a “lack of data security and data protection” as well as data transfers to China, the digital policy spokesman for Germany’s Social Democratic Party group in the Bundestag said that the U.S. ban on TikTok for federal employees’ phones was “understandable.”

    “I think it makes sense to also critically examine applications such as TikTok and, if necessary, to take measures. I would therefore advise civil servants, but also every citizen, not to install untrustworthy services and apps on their smartphones,” Jens Zimmermann added.

    Maximilian Funke-Kaiser, digital policy spokesman for the liberal FDP group in the German parliament, went even further, raising the prospect of a full ban on the use of TikTok on government phones. “In view of the privacy and security risks posed by the app and the app’s far-reaching access rights, I consider the ban on TikTok on the work phones of U.S. government officials to be appropriate. Corresponding steps should also be examined in Germany.”

    For Moritz Körner, a centrist lawmaker in European Parliament, the potential risks linked to TikTok are far greater than with Twitter due to the former’s larger user base — at least five times as many users as Twitter in Europe — and the fact that up to a third of its users are aged 13-19. 

    “The China-app TikTok should be under the special surveillance of the European authorities,” he wrote in an email. “The fight between autocratic and democratic systems will also be fought via digital platforms. Europe has to wake up.”

    In Switzerland, lawmakers called earlier this month for a ban on the app on officials’ phones.

    Call for a ban

    So far, though, no European government or public body has followed the U.S. in banning TikTok usage on officials’ phones. In response to questions from POLITICO, a spokesperson for the European Commission — which previously advised its employees against using Meta’s WhatsApp — wrote that any restriction on TikTok usage for EU civil servants would “require a political decision and will be based on the careful assessment of data protection, cybersecurity and other concerns.”

    The spokesperson also pointed out that “there are no official Commission accounts” on TikTok.

    A spokesperson for the European Parliament said its services “continuously monitor” for cybersecurity issues, but that “due to the nature of security matters, we don’t comment further on specific platforms.”

    POLITICO reached out to cybersecurity agencies for the EU, the U.K. and Germany to ask if they had or were planning any restrictions or recommendations having to do with TikTok. None flagged any specific restrictions, which doesn’t mean there aren’t any. In Germany, for example, officials who use iPhones can’t use or download TikTok in the section of their phone where confidential data can be accessed.

    The European Commission has previously advised its employees against using Meta’s WhatsApp | Kirill Kudryavtsev/AFP via Getty Images

    For Hamburg’s data protection agency, one of 16 in Germany’s federal system, restricting TikTok on official phones would be a good idea.

    “Based on what we know from the available sources, we share, among other things, the concerns of the U.S. government that you mentioned and would therefore consider it appropriate for government agencies in the EU to refrain from using TikTok,” a spokesperson said.

    This suggests that the most immediate public threat for TikTok in Europe is privacy-related. Of the two probes being conducted by Ireland’s privacy regulator, the one looking into child safety on the app is the closest to wrapping up, according to a spokesperson for the Irish Data Protection Commission.

    Depending on the outcome of discussions between EU privacy regulators — the child safety probe is likely to trigger a dispute resolution mechanism — TikTok could face new requirements to verify age in the EU. The other probe, looking into TikTok’s transfers of data to China, is likely to wrap up around mid-year or toward the end of 2023 if a dispute is triggered, the spokesperson said.

    Antoaneta Roussi contributed reporting.

    Nicholas Vinocur, Clothilde Goujard, Océane Herrero and Louis Westendarp

  • Twitter cuts workers addressing hate speech and trust and safety as Elon Musk’s chaotic revamp continues

    Twitter Inc., under new owner Elon Musk, has made deeper cuts into its already radically diminished trust and safety team handling global content moderation, as well as to the unit related to hate speech and harassment, according to people familiar with the matter. 

    At least a dozen more cuts on Friday night affected workers in the company’s Dublin and Singapore offices, according to the people, who asked not to be identified discussing non-public changes. They included Nur Azhar Bin Ayob, the head of site integrity for Twitter’s Asia-Pacific region, a relatively recent hire; and Analuisa Dominguez, Twitter’s senior director of revenue policy.

    Workers on teams handling the social network’s misinformation policy, global appeals and state media on the platform were also eliminated. 

    Ella Irwin, Twitter’s head of trust and safety, confirmed several members of the teams were cut but denied that the cuts targeted some of the areas mentioned by Bloomberg. 

    “It made more sense to consolidate teams under one leader (instead of two) for example,” Irwin said in an emailed response to a request for comment. 

    She said Twitter did eliminate roles in areas of the company that didn’t get enough “volume” to justify continued support. But she said that Twitter had increased staffing in its appeals department, and that it would continue to have a head of revenue policy and a head for the platform’s Asia-Pacific region for trust and safety.

    Musk bought Twitter for $44 billion in October, partly financing the deal with almost $13 billion of debt that entailed interest repayments of around $1.5 billion a year. He has since embarked on a frantic mission to revamp the social-media platform, which he has said is at risk of going bankrupt and was losing $4 million a day as of early November. 

    Speaking on a Twitter Spaces event last month, the mercurial entrepreneur likened the company to a “plane that is headed towards the ground at high speed with the engines on fire and the controls don’t work.”

    Since taking over the company, Musk has overseen firings or departures of roughly 5,000 of Twitter’s 7,500 employees and instituted a “hardcore” work environment for those remaining.

    Twitter faces multiple suits over unpaid bills, including for private chartered plane flights, software services and rent at one of its San Francisco offices.

    Kurt Wagner, Bloomberg

  • Europe troubled but powerless over Twitter’s journalist ban

    European politicians said they were troubled by Twitter’s suspension of U.S. journalists from its platform, but the move shows the limits of their planned new rules for online content and media freedom. 

    France’s digital affairs minister Jean-Noël Barrot said he was “dismayed” about the direction Twitter was taking under Elon Musk after the platform removed nine U.S. journalists and other high-profile accounts in a seemingly arbitrary decision.

    “Freedom of the press is the very foundation of democracy. To attack one is to attack the other,” Barrot tweeted.

    European Commission Vice President Věra Jourová called the “arbitrary” removal of journalists worrying. French industry minister Roland Lescure announced he was temporarily quitting the platform in protest.

    The Twitter ban for tech journalists from media organizations such as the New York Times, the Washington Post and CNN appeared to come after they criticized the tech billionaire and self-proclaimed free speech advocate and wrote about the suspension of more than 20 accounts for sharing publicly available information about Musk’s private jet location.

    “Talking a lot about #FreeSpeech, but stopping it as soon as one is criticized oneself: that’s a strange understanding of #FreedomOfExpression,” said Germany’s Justice Minister Marc Buschmann.

    The German Foreign Affairs Ministry’s own Twitter account said press freedom should not “be switched on and off arbitrarily.”

    Twitter has been mired in controversy since it was acquired by Musk in October and shed staff who worked on content moderation and policy affairs. The platform is now struggling to stem disinformation, potentially falling foul of commitments it made in June 2022. This week the company disbanded the board of experts advising it on its content policy.

    But restricting journalists’ access to a platform loved by the press risks a serious blow to media freedom and free speech. None of the banned journalists received an explanation of the social media platform’s decision. It was unclear if and when they would be allowed back on the platform. There have been calls to join alternatives such as Mastodon, but links to it have reportedly been blocked on Twitter. The account for the open-source platform was also blocked.

    Flying by EU rules?

    In Brussels, politicians have pointed to the European Union’s legislative arsenal as a powerful tool to curb platforms’ power, with Internal Market Commissioner Thierry Breton insisting in October that Twitter’s bird logo “will fly by our rules” in the region.

    Those laws or proposals aren’t yet ready for use and can’t yet counter Musk’s unilateral decisions for the platform he owns. The Commission is preparing to enforce the EU’s content law, the Digital Services Act (DSA), from summer 2023. The new Media Freedom Act is also being negotiated and may not become law until at least late 2024.

    The DSA — and its ability to levy hefty fines — would require lengthy investigations by a Commission team that isn’t yet fully in place. The Media Freedom Act doesn’t specifically tackle an issue such as “deplatforming” or removing a person from a social network like Twitter.

    The Commission’s Jourová warned Twitter about the possibility of future penalties under the DSA — up to 6 percent of a company’s global revenue if they restrict EU-based users and content in an arbitrary and discriminatory manner. 

    Twitter could also be penalized in the future if it doesn’t tell users why they have been sanctioned. Large online platforms with over 45 million users in the EU will have to assess and limit potential harms to freedom of expression and information, as well as media freedom and pluralism.

    “EU’s Digital Services Act requires respect of media freedom and fundamental rights. This is reinforced under our #MediaFreedomAct,” she tweeted. “@elonmusk should be aware of that. There are red lines. And sanctions, soon.”

    Politicians’ threats don’t reassure media and journalists’ organizations.

    “The European legal arsenal is not sufficient to oppose acts of arbitrary censorship,” said Ricardo Gutierrez, general secretary of the European Federation of Journalists (EFJ). 

    The draft Media Freedom Act is largely aimed at how Big Tech treats news organizations. Very large online platforms would have to inform news outlets before taking down their content. It also foresees talks between media organizations and big social media platforms to discuss content moderation problems.

    Wouter Gekiere from the European Broadcasting Union in Brussels echoed those worries, saying public service media couldn’t see how the DSA could prevent takedowns of journalists’ accounts.

    “The European Media Freedom Act would not do much more to protect the media online,” he said. “Journalists and editors need to have the ability to report on stories without fear of arbitrary platform controls.”

    Laura Kayali and Mark Scott contributed reporting.

    Clothilde Goujard

  • Elon Musk ‘wanted to punch’ Kanye West after deeming the rapper’s swastika tweet an ‘incitement to violence’ 


    Elon Musk explained on Saturday why he suspended Kanye West’s Twitter account on Friday following an antisemitic post from the rap mogul. The suspension occurred just days after Musk allowed West back on Twitter. A few months earlier, West had been locked out of his account because of hate speech toward Jews.

    Musk has described himself as a “free-speech absolutist,” vowing to be less restrictive with content moderation than Twitter’s previous leadership. Advertisers, fearful of their brands appearing alongside hateful content, paused their advertising after Musk’s $44 billion takeover of Twitter on Oct. 27.

    Musk’s reasoning regarding West’s suspension might offer insights into where he’ll draw the lines on content moderation in the future.

    West’s account was suspended Friday after he posted an image of a swastika inside a Star of David. That followed West repeatedly praising Adolf Hitler while appearing live on far-right conspiracy theorist Alex Jones’ Infowars program, where he said “I love Nazis” and insisted that Nazis “did good things too.”

    Incitement to violence ‘against the law’

    “At some point you have to say what is incitement to violence because it is against the law in the U.S.,” Musk said Saturday during a live Q&A on Twitter Spaces. “Posting swastikas in what obviously is not a good way is an incitement to violence.”

    He added, “I personally wanted to punch Kanye, so that was definitely inciting me to violence. That’s not cool.”

    Musk had earlier tweeted of West, “I tried my best. Despite that, he again violated our rule against incitement to violence. Account will be suspended.”

    Musk also said in the Q&A that Apple had “fully resumed” advertising on Twitter, adding that the iPhone maker is the platform’s largest advertiser. He thanked other advertisers for returning, too. (Amazon plans to restart advertising on Twitter to the tune of about $100 million a year, according to a tweet on Saturday from a Platformer reporter, citing anonymous sources.)

    Musk’s content moderation plans for Twitter

    Musk has fired many Twitter employees involved in content moderation, increasing concerns about hateful content running rampant on the platform.

    The company told Reuters this week that it’s leaning heavily on automation to moderate content, favoring restrictions on distribution over outright removal of certain speech. 

    “Hate speech impressions (# of times tweet was viewed) continue to decline, despite significant user growth,” Musk tweeted. “Freedom of speech doesn’t mean freedom of reach. Negativity should & will get less reach than positivity.”

    That followed the Center for Countering Digital Hate, a London nonprofit, saying on Friday that the number of daily tweets containing slurs was substantially higher compared to the monthly rate before Musk’s takeover. 

    Steve Mollman

  • Musk fires chief Brussels lobbyist in Twitter’s layoff round


    Twitter’s director for EU public policy Stephen Turner is among the thousands of employees laid off by its new owner Elon Musk, Turner announced on the platform Monday.

    “After six years I am officially retired from Twitter. From starting the office in Brussels to building an awesome team it has been an amazing ride. Privileged and honoured to have the best colleagues in the world, great partners, and never a dull moment. Onto the next adventure,” he tweeted.

    Since taking over Twitter, Musk has reportedly sacked half of the company’s workforce — including lobbyists and content moderators. The deep cuts in the policy teams have raised concern among regulators and politicians.

    On Monday morning, only two members of Twitter’s six-person policy team in Brussels still had a job, one person with first-hand knowledge of the issue told POLITICO.

    Turner spearheaded Twitter’s engagement and lobbying in Brussels at a time when the EU crafted a series of strict laws regulating privacy, content moderation, media freedom, online advertising and more.

    Laura Kayali

  • Russia, China and Islamic State jump on Musk’s Twitter bandwagon


    Elon Musk has some new super fans: Russia, China and the Islamic State.  

    After the world’s richest man bought Twitter for $44 billion last month, officials and journalists linked to Russia and China — and even some jihadists — urged him to lift restrictions on their use of the platform. 

    So far, their pleas have fallen on deaf ears. But the repeated requests — including from high-profile figures like Maria Zakharova, the spokesperson for Russia’s foreign ministry — are part of efforts by these individuals to use Musk’s takeover as a chance to make a comeback on Twitter. 

    Right-wing extremist groups in the West have already heralded Musk’s ownership as a signal that they can post hate-filled and potentially illegal content online with little, or no, resistance. 

    Now, Russian and Chinese state-backed Twitter accounts have taken up the same free speech argument, demanding the platform reinstate them, remove labels that identify these accounts as linked to Beijing or Moscow, and allow them to post more freely, including on hot-button topics like the war in Ukraine. 

    “They are doing this to jump on the bandwagon now that the right-wing community are putting pressure on Musk,” said Felix Kartte, a senior adviser at Reset, a technology accountability lobbying group. “They are pushing it because everyone else is pushing Musk, too.”

    A representative for Twitter did not respond to a request for comment. The company has previously said its policies regarding online hate content have not changed since Musk’s takeover. 

    The pressure is a crucial early test of Musk’s willingness to police his new platform. Fears are already mounting that under his leadership, Twitter could be reshaped into a more toxic place for political debate, and potentially even incite an increase in violent extremism or foreign interference within Western democracies.

    The resurgence of interest from the state-backed and jihadist accounts comes as Twitter undergoes a fundamental shift under Musk. The South African-born billionaire laid off half of the company’s employees on Friday, including many in senior public policy and content moderation roles.

    After Vladimir Putin’s forces invaded Ukraine, the European Union imposed sanctions banning content from the likes of Russia’s RT and Sputnik, a move that forced Twitter to adopt its own restrictions, which it expanded beyond the borders of the 27-country bloc. Now senior figures at RT — and Kremlin officials — are demanding Musk lift those measures. 

    Margarita Simonyan, RT’s editor-in-chief, and other prominent RT journalists, messaged Musk in the days before and after the acquisition to urge him to end the so-called shadow bans against their state-affiliated news organization. Those restrictions include RT’s content not appearing when people search on Twitter. 

    “Elon @elonmusk, since you’re all for free speech, maybe unban RT and Sputnik accounts and take the shadow ban off mine as well?” Simonyan wrote on Twitter.

    George Galloway, a former British politician who now hosts a show on RT, called on Musk to remove the “Russia state-affiliated media” label that had been placed on his account. 

    Chinese accounts also jumped on the bandwagon. While Beijing blocks Twitter for its domestic audience, the country’s officials and state media have repeatedly used the platform to spread propaganda and attack other users who criticize the Chinese Communist Party. 

    In August 2020, Twitter began labeling these accounts as state-affiliated, and since then, there has been a significant drop in engagement, including likes and shares, of those accounts, according to an analysis by the China Media Project, a research group at the University of Hong Kong.

    Ever since Musk bought Twitter, Chinese officials and state-backed journalists have been urging him to live by his free speech beliefs. He must “remove all those McCarthyist discriminatory” policies for Chinese accounts, according to a Twitter post from Chen Weihua, the European bureau chief of the state-run China Daily newspaper. 

    “Can you please free the warning to Chinese media to give us a better and pleasant experience? Thank you,” added Zhang Heqing, an official in the Chinese embassy in Pakistan in response to Musk when he said Twitter would become a bastion for free speech.

    It’s not just authoritarian governments. Islamic State supporters are also pushing to get back on the platform. 

    Within jihadist online communities, Musk’s takeover of Twitter has been welcomed as an opportunity to return. 

    Before 2015, Islamic State-related accounts had posted indiscriminately, including videos and images of beheadings and other acts of violence. Over the last seven years, Twitter’s content moderation tools had forced such activity to go underground. 

    Yet the number of Islamic State-affiliated accounts on Twitter has risen sharply since Musk’s acquisition on October 27, compared with the 11-day period before it. The activity includes jihadist-supporting accounts likening the global clampdown they face to Musk’s own statements that both the left and right of politics are attacking him. In the last week, Islamic State-related Twitter users have also held so-called Twitter Spaces, or online voice conversations, with at least one of the sessions called “The Islamic Caliphate is remaining and expanding.”

    Yoel Roth, Twitter’s head of safety and integrity, said the company’s policies toward hateful content and so-called online trolls have not changed since Musk’s takeover. Twitter’s “core moderation capabilities” have not been hampered by the recent layoffs, which saw about 15 percent of Twitter’s global trust and safety team fired, Roth added. 

    Not everyone is convinced. “Through the changing of the guard, it seems as if Islamic State accounts have gotten more brazen,” according to Moustafa Ayad, executive director for Africa, the Middle East and Asia at the Institute for Strategic Dialogue, a think tank that tracks online extremism. “If you make others feel like the group is back, it ultimately creates a sense of relief, or that it’s alright to post again as the Islamic State.”

    Shannon Van Sant and Mark Scott

  • Advertisers Flee As Twitter Lays Off Nearly Half Of Its Workforce


    The news that a reported 3,700 Twitter employees were let go via email was no surprise. However, it does call into question how badly advertising revenue will be hit as ad agencies try to figure out what the new Twitter will look like without almost half of its employees.

    Employees likely didn’t sleep well after getting an email on Thursday night telling them they would be notified of their future employment status by 9:00 a.m. PST on Friday via an email with the header “Your Role At Twitter.” The unsigned message explained that the notice would go to their personal accounts if they were being let go, or to their work accounts if they were staying.

    The company shut all offices on Friday “to help ensure the safety of each employee as well as Twitter systems and customer data.” Some employees who had been sleeping in their offices due to heavy time demands from Musk were shocked as they were escorted out of the building.

    Many employees got early warnings as their access to work platforms was shut off at about 8:00 p.m. and their email accounts were turned off at 11:00 p.m. “It’s a break-up by text,” a person affected by the layoffs said.

    “My entire team is gone,” one person impacted by the layoffs in New York told a reporter. They worked on a team of more than 30. Another employee estimated that 90% of their team was gone.

    Most of the management of the ad sales team, including Chief Marketing Officer Leslie Berland, VP of Global Client Solutions Jean-Philippe Maheu and Chief Customer Officer Sarah Personette, had already been let go by Chief Twit Elon Musk, who has been trying to placate advertisers by doing in-person and video meetings.

    Three of the worst hit teams were product and engineering for advertising, Redbird (the infrastructure team that runs data centers), and corporate communications.

    Musk’s strategy to butter up advertisers has clearly not worked. General Mills Inc., snack food manufacturer Mondelez International Inc., Pfizer Inc. and Volkswagen AG’s Audi are joining a growing list of companies “pausing” campaigns on Twitter, according to The Wall Street Journal.

    One ad agency executive told The Journal that about 20 of its clients are no longer advertising on Twitter, and that’s just one agency. Musk himself gave a clue as to how bad things are when he tweeted that the company has had a massive drop in ad revenue since he acquired it a week ago. He said it was “due to activist groups pressuring advertisers, even though nothing has changed with content moderation and we did everything we could to appease the activists.” He added, “Extremely messed up! They’re trying to destroy free speech in America.”

    What he failed to consider is that, like stock market investors, advertisers dislike uncertainty above all else. If they don’t know exactly where and when their ad is being placed, and more importantly, what the demographic will be, they will simply stop their campaigns.

    Another issue is the mood of the remaining employees, given that Musk has no hesitancy in quickly letting people go if they don’t carry out his vision. Many employees laid off under the plan, which has been dubbed “Project Tundra,” are being given only 60 days of severance pay. Twitter Chief Accounting Officer Robert Kaiden left the company after the list of layoffs was solidified, one of the last remaining Twitter executives to go.

    Musk may in fact be losing even more employees than those he has laid off. Morale is likely to plummet with the massive layoffs paired with the fact that everyone working at home is being required to return to the office. Axios reports that employees are being given as little as 60 days to relocate to a Twitter office.

    This is a complete reversal of the company policy that employees could work remotely on a permanent basis. Many took that opportunity to move somewhere cheaper, and they are unlikely to sell their homes and try to relocate to a much more expensive location such as San Francisco.

    Surprisingly, Twitter did not take down a flurry of tweets from prominent California employment attorney Lisa Bloom (@LisaBloom) late Thursday night, including:

    · Hey Twitter employees getting laid off tomorrow! IMPORTANT INFO from a CA employment attorney (me): CA’s “WARN” law requires Twitter to give you 60 days notice of a massive layoff. A layoff of 50+ employees within a 30 day period qualifies. I know you didn’t get that notice;

    · This WARN law applies to all California employers of 75+ employees, which obviously includes Twitter with its thousands of employees. Purpose of the law is to give laid off employee’s time to figure out how to handle this disruption. And Elon completely ignores it;

    · Twitter will be liable for all of these (civil penalties, lost compensation, lost medical and other benefits) & attorney’s fees for 60 days it failed to give workers notice. This flagrant violation of worker’s rights is outrageous. Who’s in for a class action? LET’S DO THIS;

    · Also, CA’s strong antidiscrimination laws apply to Twitter’s big layoff tomorrow. Are people of color, women and/or older workers disproportionately chosen for example? This was done so hastily, so slapdash, so that the world’s richest man can get even richer faster;

    · Employees laid off in violation of the WARN Act receive back pay at the employee’s final rate or a 3 year average of compensation, whichever is higher. Twitter would also be liable for workers’ medical expenses that would have been covered under an employee benefit plan;

    · Twitter employees, DO NOT SIGN ANYTHING when you’re laid off. Consult with an attorney first. Buried in the fine print may be a waiver of your rights under CA and Federal law. Those employers like Twitter who violate the WARN Act face civil penalties of $500/day for each violation. With thousands of employees this could be significant, though maybe not to Elon; and

    · We’ll see how long Twitter lets my posts stay up. If they take them down tonight, before the layoffs, that means they were on notice of the law I cite and chose to punish me rather than follow it. That’s consciousness of guilt and I’d use it as the basis for punitive damages.

    A class action lawsuit was indeed filed against Twitter for not giving employees enough notice prior to the layoffs, by Shannon Liss-Riordan, who unsuccessfully sued Tesla in June 2022 when the company cut about 10% of its workforce.

    However, Musk apparently has already thought this through by keeping laid-off people on the payroll. The New York Times received an email from a worker who was notified that her job had been “impacted” but that she would stay employed through a separation date in February.

    “During this time, you will be on a Non-Working Notice period and your access to Twitter systems will be deactivated,” read the email, which was signed “Twitter.”

    Derek Baine, Contributor

  • Shonda Rhimes tells her 1.9M Twitter followers, ‘Not hanging around for whatever Elon has planned. Bye.’


    Shonda Rhimes isn’t impressed with Elon Musk’s plans for Twitter, and she isn’t sticking around. Best known for creating and writing Grey’s Anatomy, the TV mogul shared what might be her last tweet Saturday, telling her nearly 2 million followers, “Not hanging around for whatever Elon has planned. Bye.”

    Musk, a self-described free-speech absolutist, completed his $44 billion takeover of the social media platform on Thursday and promptly fired top executives he had criticized for being too suppressive. 

    While he was quick to reassure advertisers on Thursday that the platform wouldn’t become a “free-for-all hellscape,” not everyone was convinced. General Motors said it would temporarily pause advertising on Twitter, adding, “We are engaging with Twitter to understand the direction of the platform under their new ownership.” 

    Advertisers, of course, are not keen on appearing near offensive content, and there’s been a sharp increase in that since Musk took control, with Twitter trolls flooding the platform with racial slurs and Nazi memes.

    “The danger here is that in the name of ‘free speech,’ Musk will turn back the clock and make Twitter into a more potent engine of hatred, divisiveness, and misinformation about elections, public health policy, and international affairs,” Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, told the Associated Press.

    On Friday, the Tesla and SpaceX CEO tweeted, “To be super clear, we have not yet made any changes to Twitter’s content moderation policies.” That followed him tweeting earlier: “Twitter will be forming a content moderation council with widely diverse viewpoints. No major content decisions or account reinstatements will happen before that council convenes.” 

    He also offered glimpses into his thinking about the platform’s future on Friday and early Saturday while replying to Twitter suggestions. When a user noted Facebook has something similar to the content moderation council but still angers both the left and the right, Musk replied, “Good point. Being able to select which version of Twitter you want is probably better, much as it would be for a movie maturity rating. The rating of the tweet itself could be self-selected, then modified by user feedback.”

    As Musk toys with ideas, however, an increase in hateful content may in the meantime drive some users away from the platform—including prominent ones like Rhimes.

    According to the Network Contagion Research Institute, which analyzes social media content and predicts emerging threats, instances of the N-word increased by nearly 500% in the 12 hours immediately after Musk’s takeover was finalized. 

    Rhimes, who is African-American, didn’t elaborate on why she was leaving. But until now she had been a prolific user of Twitter, building a large following since joining the platform in November 2008.

    Steve Mollman