ReportWire

Tag: Digital Services Act

  • X vs. EU: Elon Musk hit with probe over spread of toxic content

    Elon Musk just got an early, unwelcome Christmas present from Europe: the bloc’s first-ever investigation via its new social media law into X.

    The European Commission on Monday opened infringement proceedings under the Digital Services Act (DSA) into X, formerly known as Twitter, after the billionaire and his company faced repeated claims that they were not doing enough to stop disinformation and hate speech from spreading online.

    The four investigations focus on X’s suspected failure to comply with rules to counter illegal content and disinformation, as well as rules on advertising transparency and data access for researchers. They will also scrutinize whether X misled its users by changing its so-called blue checks, which were initially launched as a verification tool but now serve as an indicator that a user is paying a subscription fee.

    “The Commission will carefully investigate X’s compliance with the DSA, to ensure European citizens are safeguarded online — as the regulation mandates,” Margrethe Vestager, the Commission’s executive vice president for digital policy, said in a statement.

    “We now have clear rules, ex-ante obligations, strong oversight, speedy enforcement and deterrent sanctions and we will make full use of our toolbox to protect our citizens and democracies,” said EU Internal Market Commissioner Thierry Breton. 

    “X remains committed to complying with the Digital Services Act, and is cooperating with the regulatory process,” Joe Benarroch, an X executive, said in an email to POLITICO.

    The opening of proceedings does not in itself establish wrongdoing. It triggers a monthslong probe that could ultimately lead to fines of up to 6 percent of a company’s global revenue.

    The rulebook, which started applying in late August, represents the most widespread attempt by any region or country in the Western world to hold social media companies to account for what is posted on their platforms. That includes lengthy risk assessments and outside audits to prove to regulators these companies are clamping down on illegal content like hate speech.

    The Commission, which enforces the DSA on 19 so-called Very Large Online Platforms, or VLOPs, has already taken preliminary steps like requests for information against several other social media networks including Instagram, Facebook, TikTok, YouTube and Snapchat. The focus has been on how they handle illegal content, combat disinformation and protect minors. 

    While Europe’s new social media rules only came into full force in late summer, X has long been squarely on Brussels’ radar.

    Musk fired half of the company’s employees — including almost all of its trust and safety team — in November 2022. That included many of the company’s European Union-focused policy jobs, either in Brussels or in Dublin, where the company has its EU headquarters.

    The social networking giant also pulled out of the EU’s code of practice on disinformation in May, an industry pledge coordinated by the Commission that will soon serve as a part of the bloc’s DSA rules. 

    Musk has publicly committed X to complying with the bloc’s DSA rules, though he remains a vocal advocate for almost unfettered free speech rights for people who use his platform.

    Yet it was after Hamas militants attacked Israel on October 7 that Commission regulators upped their attention, according to four officials with direct knowledge of the matter who were granted anonymity to discuss internal discussions. Part of the investigations, linked to potentially illegal content, resulted from posts associated with the ongoing Middle East war.

    In the days and weeks following the Middle East attack, X was flooded with often gruesome images of suspected beheadings — often with few, if any, removals by the tech giant. Repeated requests for information from the company went unanswered, while discussions with X representatives, including at meetings in San Francisco with X engineers in the summer, often left Commission officials unsatisfied, according to two of the individuals who spoke to POLITICO.

    The company was the first to receive a request for information from the Commission, in October, about how it had tackled problematic content like graphic illegal content and disinformation linked to Hamas’ attack on Israel.

    The Commission on Monday said it would investigate whether X’s obligation to quickly remove illegal content, once flagged, had been respected, including “in light of X’s content moderation resources.” It said it would also examine whether X’s so-called community notes, or crowdsourced fact-checking program, and its policies to limit risks to election integrity complied with the DSA.

    Brussels will also review whether X’s so-called blue checks, markers that can be bought by accounts to show they have been verified, could trick users into thinking blue check-holding accounts are more trustworthy. Regulators will similarly look into changes to how outsiders could analyze X’s data after the company replaced free access to this data with a paid version that costs up to $240,000 (€220,000) a month. X’s mandatory publicly accessible library of ads that ran on its platform will also be part of the investigations. 

    The investigations could lead to a range of outcomes in the coming months, from a sweeping fine to orders imposing specific measures to commitments from X to make changes.

    “It is important that this process remains free of political influence and follows the law,” added Benarroch, the X executive. “X is focused on creating a safe and inclusive environment for all users on our platform, while protecting freedom of expression, and we will continue to work tirelessly toward this goal.”

    This article was updated to include new details.

    Clothilde Goujard and Mark Scott

  • Israel floods social media to shape opinion around the war

    BRUSSELS — A photo of a bloody dead baby, its face blurred, has been circulating on X for the last four days. 

    “This is the most difficult image we’ve ever posted. As we are writing this we are shaking,” the accompanying message says. 

    The image is not from a reporter covering the conflict in Israel and Gaza, or from one of the countless accounts sharing horrifying videos of the atrocities. 

    It’s a paid message from the Israeli Foreign Affairs Ministry.

    Since Hamas attacked thousands of its citizens last week, the Israeli government has started a sweeping social media campaign in key Western countries to drum up support for its military response against the group. Part of its strategy: pushing dozens of ads containing brutal and emotional imagery of the deadly militant violence in Israel across platforms such as X and YouTube, according to data reviewed by POLITICO.

    Israel’s attempt to win the online information war is part of a growing trend of governments around the world moving aggressively online in order to shape their image, especially during times of crisis. PR campaigns in and around wars are nothing new. But paying for online advertising targeted at specific countries and demographics is now one of governments’ main tools to get their messages in front of more eyeballs. 

    The Israeli government’s efforts come as Hamas has pumped out its own propaganda on platforms including Telegram and X. The group — which is designated as a terrorist organization by the European Union, United States and United Kingdom — on Monday published online a first hostage video of a young French-Israeli woman.

    The social media campaigns began shortly after Hamas militants killed more than 1,200 people and abducted nearly 200 more in a surprise assault. Israel’s military responded with retaliatory strikes and a siege of the Gaza Strip, killing more than 2,330 Palestinians to date. 

    More than 2 million Palestinians trapped in Gaza have been subjected to worsening conditions ahead of an expected upcoming offensive, and Western leaders are increasingly calling on the Israeli government to exercise restraint and respect humanitarian law. 

    A barrage of ads

    In a little over a week, Israel’s Foreign Affairs Ministry has run 30 ads that have been seen over 4 million times on X, according to the platform’s data. The paid videos and photos that started appearing on October 12 were aimed at adults over 25 in Brussels, Paris, Munich and The Hague, according to the same data. 

    The ads portrayed Hamas as a “vicious terrorist group,” similar to the Islamic State, and showed the scale and types of the abuse, including gruesome images like that of a lifeless, naked woman in a pickup truck. Another paid video posted to X, with text alternating between “ISIS” and “Hamas,” has disturbing imagery that gradually speeds up until the names of the two terrorist organizations blend into one. 

    “The world defeated ISIS. The world will defeat Hamas,” the ad ends.  

    A cyclist rides past kidnap and disappearance posters, showing recently kidnapped or missing Israelis, following the Hamas attacks on Israel, in central Paris on October 17, 2023 | Kiran Ridley/AFP via Getty Images

    Over on YouTube, the Israeli Foreign Affairs Ministry has released over 75 different ads, including some that are particularly graphic. They have been directed at viewers in Western countries — including France, Germany, the U.S. and the U.K. — and have aired between the initial Hamas attack on October 7 and Monday, according to Google’s transparency database. 

    “We would never post such graphic things before,” said a spokesperson for Israel’s Mission to the EU, who was granted anonymity to speak candidly because of security concerns. “This is something that is not part of our culture. We have a lot of respect [for] the deceased,” they said, adding that “war is not only on the ground.”

    In one ad, titled “Babies Can’t Read The Text in This Video But Their Parents Can,” a lullaby plays against a backdrop of a rainbow and a unicorn flies across the screen. The ad says, “We know that your child cannot read this,” but pleads with parents to sympathize with those whose children were killed during the attack on Israel.

    Another ad notes that “Israel will take every measure necessary to protect our citizens against these barbaric terrorists.” Yet another shows images of bloodied hostages with their faces blurred. 

    Israel has largely targeted Europe with its narrative to win over support. Nearly 50 video ads in English were directed at EU countries, while viewers in the U.S. and the U.K. were shown 10 and 13 ads, respectively. One of the videos had been seen over 3 million times as of Tuesday afternoon European time.

    Platforms’ ongoing content challenge

    The ad campaign has posed some challenges to social media companies, which have set standards for what type of content can be posted on their services.

    Google, for example, removed about 30 ads containing violent images from its public library after POLITICO reached out for a comment on Monday — meaning there is no public record that such ads ran for several days on YouTube. The company said it didn’t allow ads containing violent language, gruesome or disgusting imagery, or graphic images or accounts of physical trauma. (Some of the graphic videos are still available on the Israeli Foreign Affairs Ministry’s YouTube channel with some warnings.)

    X did not respond to a request for comment. The tech company is currently being investigated by the European Commission over whether its handling of illegal content and disinformation connected to the Hamas attack has respected the EU’s content-moderation law, the Digital Services Act (DSA). 

    Under the DSA, companies have to swiftly remove illegal content, including terrorist propaganda, and limit the spread of falsehoods — or else face sweeping fines of up to 6 percent of their global annual revenue. 

    No similar ads were running on Meta’s Instagram and Facebook, LinkedIn and TikTok, according to the platforms’ public ad libraries as of Monday. 

    Some of the ads have been met with pushback from viewers who have sought ways to stop being targeted by the foreign ministry. But experts in the field say that this is simply the new reality of PR campaigns built around wars.

    “This tactic is almost as old as war … Stirring moral outrage to build support for war is a very old practice,” said Emerson Brooking, a senior fellow at the Atlantic Council. “But I do not think it has collided with social media in quite this way before.”

    Still, amid an onslaught of disinformation and illegal content connected to the attacks, Israel’s online push may prove more complicated. The European commissioner in charge of enforcing the DSA, Thierry Breton, has warned some online platforms to step up their efforts to protect young viewers from harmful content. The EU also reminded Google’s CEO Sundar Pichai last week to be “very vigilant” to ensure that YouTube respects the DSA. 

    As Israel amps up its war online, its army’s retaliatory airstrikes have damaged Gaza’s telecommunications infrastructure, leaving millions on the verge of a total network blackout. 

    “It is difficult to imagine a robust counter-messaging effort by pro-Palestinian groups which could make use of the same advertising medium,” Brooking said. “It’s one part of the social media battlefield in which Israel has a real advantage.”

    Hailey Fuchs contributed reporting from Washington. Liv Martin and Clothilde Goujard contributed reporting from Brussels.

    Liv Martin, Clothilde Goujard and Hailey Fuchs

  • Hamas hate videos make Elon Musk Europe’s digital enemy No. 1

    Elon Musk has made himself Europe’s digital public enemy No. 1.

    Since Hamas attacked Israel on Saturday, the billionaire’s social network X has been flooded with gruesome images, politically motivated lies and terrorist propaganda that authorities say appear to violate both its own policies and the European Union’s new social media law.

    Now Musk is facing the threat of sanctions — including potentially hefty fines — as officials in Brussels start gathering evidence in preparation for a formal investigation into whether X has broken the European Union’s rules. Authorities in the U.K. and Germany have joined the criticism.

    The tussle represents a critical test for all sides. Musk will be keen to fight any claim that he’s failing to be a responsible owner of the social network formerly known as Twitter — all while upholding his commitment to free speech. The EU will want to show its new regulation, known as the Digital Services Act (DSA), has teeth.

    Thierry Breton, Europe’s commissioner in charge of social media content rules, demanded that Musk explain why graphic images and disinformation about the Middle East crisis were widespread on X.

    “I urge you to ensure a prompt, accurate and complete response to this request within the next 24 hours,” Breton wrote on X late Tuesday.

    “We will include your answer in our assessment file on your compliance with the DSA,” said Breton, who also wrote to Meta’s Mark Zuckerberg to remind him of his obligations under Europe’s rules. TikTok’s head Shou Zi Chew was also asked on October 12 to explain how his platform was dealing with misinformation and graphic content.

    “I remind you that following the opening of a potential investigation and a finding of non-compliance, penalties can be imposed,” Breton said. Those fines can total up to 6 percent of a company’s global revenue.

    In response, Linda Yaccarino, X’s chief executive, wrote to Breton Thursday to outline how the social media giant had responded to the ongoing Middle East conflict. That included removing or labelling potentially harmful content, working with law enforcement agencies and adding so-called “community notes,” or crowd-sourced fact-checks, to posts.

    The heat on Twitter did not begin with the Hamas attacks. Ever since Musk bought the platform, he’s been hit by criticism that he’s failing to stop hate speech from spreading online.

    X has cut back on its content moderation teams, in the spirit of promoting free speech; pulled out of a Brussels-backed pledge to tackle digital foreign interference; and tweaked its social media algorithms to promote often shady content over verified material from news organizations and politicians.

    Musk has responded — via his social media account with 159 million followers — with jeers and attacks on his naysayers. But the latest uproar over content apparently inciting and praising terrorism has made it a surefire bet that X will be one of the first companies to be investigated under the EU’s social media rules.

    In response to Breton’s demand, Musk asked the French commissioner to outline how X had potentially violated Europe’s content regulations. “Our policy is that everything is open source and transparent,” he added. In the U.K., Michelle Donelan, the country’s digital minister, also met with social media executives Wednesday to discuss how their firms were combatting online hate speech.

    The probe is coming

    In truth, an investigation into X’s compliance with Europe’s new content rulebook has been on the cards for months. Over the summer, Breton and senior EU officials visited the company’s headquarters in San Francisco for a so-called “stress test” to see how it was complying.

    Under the EU’s legislation, tech giants like X, TikTok and Facebook must carry out lengthy risk assessments to figure out how hate speech and other illegal content can spread on their platforms. These firms must also allow greater access to external auditors, regulators and civil society groups that will track how social media companies are complying with the new oversight.

    Investigations into potential wrongdoing under Europe’s content rules will likely involve months-long inquiries into a company’s behavior, the Commission taking a legal decision on whether to levy fines or other sanctions, and a likely appeal from the firm in response. Such cases are expected to take years to complete.

    Within Brussels, the Commission has been compiling evidence of potential wrongdoing across multiple social media companies, even before the EU’s new content legislation came into full force in August, according to five officials and other individuals with direct knowledge of the matter.

    The goal is to start at least three investigations linked to the Digital Services Act by early next year, according to three of those people. They spoke on condition of anonymity because the discussions are not public and remain ongoing.

    In recent days, Commission officials have been compiling evidence associated with Hamas’ attacks on Israel — much of which has been shared on X with little, if any, pushback from the company.

    That content included verified X accounts with ties to Russia and Iran reposting graphic footage of alleged atrocities targeting Israeli soldiers. Some of these posts have been viewed hundreds of thousands of times. Other accounts linked to Hezbollah and ISIS have similarly posted widely with few, if any, removals.

    It is unclear whether such footage will lead to a specific investigation into X’s handling of the most recent violent content. But it has reaffirmed the likelihood that Musk will soon face legal consequences for not removing such material from his social network.

    Combating violent and terrorist content requires “people sitting at a computer screen and looking at this and making judgments,” said Graham Brookie, senior director of the Atlantic Council’s Digital Forensic Research Lab, which has tracked the online footprint of Hamas’ ongoing attacks. “It used to be that there were dozens of people that do that at Twitter, and now there’s only a handful.”

    Steven Overly contributed reporting from Washington. This article has been updated.

    Mark Scott

  • Graphic videos of Hamas attacks spread on X

    Videos and images of mass shootings, kidnapped civilians and soldiers and other violence linked with Hamas’ attack on Israel are being widely shared on X, formerly known as Twitter, in violation of the company’s own rules against inciting violence.

    POLITICO’s review of Elon Musk’s social media platform in the wake of Hamas’ attacks, which began on October 7, discovered scores of videos that allegedly showed militants murdering civilians and Israeli soldiers; viral hashtags associated with the ongoing violence that praised Hamas’ activities; and social media posts that included graphic pictures of those killed and antisemitic hate speech.

    Such extremist material was also accessible on other social media platforms, most notably Telegram. But terrorist-related content circulated on X at a significantly higher level than on other platforms, according to analysis by POLITICO and two outside researchers who independently reviewed the tech companies’ response to the Middle East crisis.

    “There is a huge prevalence of extremely graphic violent material on X,” said Adam Hadley, director of Tech Against Terrorism, a nonprofit organization that works with social media platforms and governments to combat how terrorist organizations spread their propaganda online. “This doesn’t appear to be the same on other large platforms.”

    Hadley and Moustafa Ayad, executive director for Africa, the Middle East and Asia for the Institute for Strategic Dialogue, a think tank that tracks online extremism, reviewed how graphic content tied to the unfolding violence spread across social media.

    A representative for X did not respond to a request for comment. The company’s internal rules say users cannot promote violent acts or share propaganda related to terrorist activities. “There is no place on X for violent and hateful entities,” the firm’s policy says.

    Under the European Union’s new social media rules, known as the Digital Services Act, large social media platforms like X also must combat the spread of hate speech — including content related to terrorist groups — or face fines of up to 6 percent of annual global revenue. Musk said X would comply with the 27-country bloc’s rules despite the billionaire’s free speech ethos and the firing of much of X’s global content moderation team.

    Yet in the days following Hamas’ widespread attacks on Israel, which have left hundreds of people dead, POLITICO easily found graphic images and videos on X in violation of both the EU and X’s separate rules.

    The content included grainy footage of militants gunning down Israeli soldiers, other social media posts of alleged Hamas fighters desecrating the bodies of victims, and videos of beheadings that, while promoted as taken from the most recent attacks, had, in fact, been reused from earlier jihadi violence in Syria.

    Hamas-related hashtags that praised the ongoing violence had also begun to trend across X despite much of this content either including graphic imagery or promoting terrorist attacks in violation of X’s own terms of service, based on POLITICO’s review of the social media platform.

    While such gruesome material is outlawed under all the tech companies’ internal policies, these firms’ executives and European regulators still find themselves in a difficult position when deciding how to respond to the ongoing conflict in the Middle East.

    Alongside the graphic violence shared online, people across the world have similarly taken to social media to voice their support for different sides of the conflict. Much of this content represents political speech and does not meet the threshold of promoting terrorism. With the violence spreading, tech giants’ content moderation teams and regulators must determine the fine line between what represents legitimate speech and what veers into jihadi propaganda.

    The lack of moderation tools and verification systems, particularly on X, also could lead to further offline violence — both inside and outside Israel. 

    Graham Brookie, senior director of the Atlantic Council’s Digital Forensic Research Lab, which tracks online misinformation, said he had already seen spikes in antisemitism and Islamophobia correlated directly to Hamas attacks in Israel. 

    “Those [social media] platforms are already trending towards more hate speech, and this is going to exacerbate that problem even more,” he said.

    Rebecca Kern contributed reporting.

    Mark Scott

  • Musk ousts X team curbing election disinformation

    Elon Musk, the owner of X (formerly Twitter), said overnight that a global team working on curbing disinformation during elections had been dismissed — a mere two days after the platform was singled out by the EU’s digital chief as the one with the most falsehoods.

    Responding to reports about cuts, the tech mogul said on X, “Oh you mean the ‘Election Integrity’ Team that was undermining election integrity? Yeah, they’re gone.”

    Several Ireland-based staff working on a threat-disruption team — including senior manager Aaron Rodericks — were allegedly fired this week, according to tech media outlet The Information. Rodericks has, however, secured a court order halting disciplinary action over allegedly liking tweets critical of the company, according to Irish media.

    Commission Vice President Věra Jourová this week warned that EU-supported research showed that X had become the platform with the largest ratio of posts containing misinformation or disinformation. The company under Musk left the European Commission’s anti-disinformation charter in late May after failing its first test.

    Jourová also urged tech companies to prepare for numerous national and European elections in the coming months, especially given the “particularly serious” risk that Russia will seek to meddle in them. Slovakia will hold its parliamentary election on Saturday. Poland, Luxembourg and the Netherlands will also head to the polls in the coming weeks.

    X must comply with the EU’s content rules, the Digital Services Act (DSA), which requires large tech platforms with over 45 million EU users to mitigate the risks of disinformation campaigns. Failure to follow the rulebook could lead to sweeping fines of up to 6 percent of companies’ global annual revenue.

    Clothilde Goujard

  • TikTok hit with €345M fine for violating children’s privacy

    Booming social media application TikTok needs to pay up in Europe for violating children’s privacy.

    The popular Chinese-owned app failed to protect children’s personal information by making their accounts publicly accessible by default and insufficiently tackled risks that under-13 users could access its platform, the Irish Data Protection Commission (DPC) said in a decision published Friday.

    The regulator slapped TikTok with a €345 million fine for breaching the EU’s landmark privacy law, the General Data Protection Regulation (GDPR).

    The penalty comes amid high tensions between the European Union and China, following the EU’s announcement that it plans to probe Chinese state subsidies of electric cars. European Commission Vice President Věra Jourová is also set to visit China next Monday-Tuesday and meet Vice Premier Zhang Guoqing to discuss the two sides’ technology policies, amid growing concerns over Beijing’s data gathering and cyber espionage practices.

    “Alone the fine of [€345 million] is a headline sanction to impose but reflects the extent to which the DPC identified child users were exposed to risk in particular arising from TikTok’s decision at the time to default child user accounts to public settings on registration,” said Helen Dixon, the Irish data protection commissioner, in a written statement.

    The Irish privacy regulator said that, in the period from July to December 2020, TikTok had unlawfully made accounts of users aged 13 to 17 public by default, effectively making it possible for anyone to watch and comment on videos they posted. The company also did not appropriately assess the risks that users under the age of 13 could gain access to its platform. The regulator further found that TikTok is still pushing teenagers who join the platform to make their accounts and videos public through manipulative pop-ups, and ordered the firm to change these misleading designs, known as dark patterns, within the next three months.

    Minors’ accounts could be paired up with unverified adult accounts during the second half of 2020. The authority said the video platform had also previously failed to explain to teenagers the consequences of making their content and accounts public.

    “We respectfully disagree with the decision, particularly the level of the fine imposed,” said Morgan Evans, a TikTok spokesperson. “The [Data Protection Commission]’s criticisms are focused on features and settings that were in place three years ago, and that we made changes to well before the investigation even began, such as setting all under-16 accounts to private by default.”

    TikTok said it will comply with the order to change the misleading designs, extending such default-privacy settings to accounts of new users aged 16 and 17 later in September. It will also roll out changes in the next three months to the pop-up that young users see when they first post a video.

    The decision marks the largest-ever privacy fine for TikTok, which is now actively used by 134 million Europeans monthly, and the fifth-largest fine imposed on any tech company under the GDPR.

    The platform popular among teenagers has previously faced criticism for insufficiently mitigating harms it poses to its young users, including deadly viral challenges and its addictive algorithm. TikTok — like 18 other online platforms — also now has to limit risks like cyberbullying or face steep fines under the Digital Services Act (DSA).

    The costly fine adds to TikTok’s woes in Europe, after it saw a wave of new restrictions on its use earlier this year due to concerns about its connection to China.

    The social media app, whose parent company ByteDance is based in Beijing, has struggled to quash concerns over its data security. The company said this month it had started moving its European data to a center within the bloc. Yet, it is still under investigation by the Irish Data Protection Commission over the potentially unlawful transfer of European users’ data to China.

    The Irish data authority in 2021 started probing whether TikTok was respecting children’s privacy requirements. TikTok set up its legal EU headquarters in Dublin in late 2020, meaning the Irish privacy watchdog has been the company’s supervisor for the whole bloc under the GDPR.

    Other national watchdogs weighed in on the investigation over the summer via the European Data Protection Board (EDPB), after two German privacy agencies and Italy’s regulator disagreed with Ireland’s initial findings. The group instructed Ireland to sanction TikTok for nudging its users toward public accounts in its misleading pop-ups.

    The board of European regulators also had “serious doubts” that TikTok’s measures to keep under-13 users off its platform were effective in the second half of 2020. The EDPB said the mechanisms “could be easily circumvented” and that TikTok was not checking ages “in a sufficiently systematic manner” for existing users. The group said, however, that it couldn’t find an infringement because of a lack of information available during their cooperation process.

    The United Kingdom’s data regulator in April fined TikTok £12.7 million (€14.8 million) for letting children under 13 on its platform and using their data. The company also received a €750,000 fine in 2021 from the Dutch privacy authority for failing to protect Dutch children by not having a privacy policy in their native language.

    This article has been updated.


    Clothilde Goujard


  • The EU wants to cure your teen’s smartphone addiction 



    Glazed eyes. One syllable responses. The steady tinkle of beeps and buzzes coming out of a smartphone’s speakers. 

It’s a familiar scene for parents around the world as they battle with their kids’ internet use. Just ask Věra Jourová: When her 10-year-old grandson is in front of a screen “nothing around him exists any longer, not even the granny,” the transparency commissioner told a European Parliament event in June.

    Countries are now taking the first steps to rein in excessive — and potentially harmful — use of big social media platforms like Facebook, Instagram, and TikTok.

    China wants to limit screen time to 40 minutes for children aged under eight, while the U.S. state of Utah has imposed a digital curfew for minors and parental consent to use social media. France has targeted manufacturers, requiring them to install a parental control system that can be activated when their device is turned on.

    The EU has its own sweeping plans. It’s taking bold steps with its Digital Services Act (DSA) that, from the end of this month, will force the biggest online platforms — TikTok, Facebook, YouTube — to open up their systems to scrutiny by the European Commission and prove that they’re doing their best to make sure their products aren’t harming kids.

    The penalty for non-compliance? A hefty fine of up to 6 percent of companies’ global annual revenue.

    Screen-sick 

    The exact link between social media use and teen mental health is debated. 

    These digital giants make their money from catching your attention and holding on to it as long as possible, raking in advertisers’ dollars in the process. And they’re pros at it: endless scrolling, combined with the periodic but unpredictable feedback from likes or notifications, doles out hits of stimulation that mimic the effect of slot machines on our brains’ wiring.

    It’s a craving that’s hard enough for adults to manage (just ask a journalist). The worry is that for vulnerable young people, that pull comes with very real, and negative, consequences: anxiety, depression, body image issues, and poor concentration. 

    Large mental health surveys in the U.S. — where the data is most abundant — have found a noticeable increase over the last 15 years in adolescent unhappiness, a tendency that continued through the pandemic.

    These increases cut across a number of measures: suicidal thoughts, depression, but also more mundanely, difficulties sleeping. This trend is most pronounced among teenage girls. 


    At the same time, smartphone use has exploded, with more people getting one at a younger age. Social media use, measured as the number of times a given platform is accessed per day, is also way up.

    There are some big caveats. The trend is most visible in the Anglophone world, although it’s also observable elsewhere in Europe. And there’s a whole range of confounding factors. Waning stigma around mental health might mean that young people are more comfortable describing what they’re going through in surveys. Changing political and socio-economic factors, as well as worries about climate change, almost certainly play a role. 

    Researchers on all sides of the debate agree that technology factors into it, but also that it doesn’t fully explain the trend. They diverge on where to put the emphasis. 

    Luca Braghieri, an assistant professor of economics at Bocconi University in Italy, said he originally thought concerns over Facebook were overblown, but he’s changed his mind after starting to research the topic (and has since deleted his Facebook account).

    Braghieri and his colleagues combed through U.S. college mental health surveys from 2004-2006, the period when Facebook was first rolled out in U.S. colleges, before it was available to the general public. They found that in colleges where Facebook was introduced, students’ mental health dipped in a way not seen in universities where it hadn’t yet launched.

    Braghieri said the comparison with colleges where Facebook hadn’t yet arrived allowed the researchers to rule out unidentified other variables that might have been simultaneous. 


    Elia Abi-Jaoude, a psychiatrist and academic at the University of Toronto, said he observed the effect first-hand when working at a child and adolescent psychiatric in-patient unit starting in 2015.

    “I was basically on the front lines, witnessing the dramatic rise in struggles among adolescents,” said Abi-Jaoude, who has also published research on the topic. He noticed “all sorts of affective complaints, depression, anxiety — but for them to make it to the inpatient setting — we’re talking suicidality. And it was very striking to see.”  

    His biggest concern? Sleep deprivation — and the mood swings and worse school performance that accompany it. “I think a lot of our population is chronically sleep deprived,” said Abi-Jaoude, pointing the finger at smartphones and social media use.

    The flipside    

    New technologies have gotten caught up in panics before. Looking back, they now seem quaint, even funny.   

    “In the 1940s, there were concerns about radio addiction and children. In the 1960s it was television addiction. Now we have phone addiction. So I think the question is: Is now different? And if so, how?” asks Amy Orben, from the U.K. Medical Research Council’s Cognition and Brain Sciences Unit at the University of Cambridge.  

    She doesn’t dismiss the possible harms of social media, but she argues for a nuanced approach. That means homing in on the specific people who are most vulnerable, and the specific platforms and features that might be most risky.

    Another major ask: more data.  

    There’s a “real disconnect” between the general belief and the actual evidence that social media use is harmful, said Orben, who went on to praise the EU’s new rules. Among their various provisions, the rules will for the first time allow researchers to get their hands on data usually buried deep inside company servers.

    Orben said that while much attention has gone into the negative effects of digital media use at the expense of positive examples, research she conducted into adolescent well-being during pandemic lockdowns, for example, showed that teens with access to laptops were happier than those without. 

    But when it comes to risk of harm to kids, Europe has taken a precautionary approach.

    “Not all kids will experience harm due to these risks from smartphones and social media use,” Patti Valkenburg, head of the Center for Research on Children, Adolescents and the Media at the University of Amsterdam, told a Commission event in June. “But for minors, we need to adopt the precautionary principle. The fact that harm can be caused should be enough to justify measures to prevent or mitigate potential risk.”

    Parental controls  

    Faced with mounting pressure in the past years, platforms like Instagram, YouTube and TikTok have introduced various tools to assuage concerns, including parental controls. Since 2021, YouTube and Instagram have sent teenage users reminders to take breaks. TikTok announced in March that minors have to enter a passcode after an hour on the app to continue watching videos.


    But the social media companies will soon have to go further.  

    By the end of August, very large online platforms with over 45 million users in the European Union — including Instagram, Snapchat, TikTok, Pinterest and YouTube — will have to comply with the law’s most extensive set of rules.

    They will have to hand the Digital Services Act watchdog — the European Commission — their first yearly assessment of the major impact of their design, algorithms, advertising and terms of service on a range of societal issues such as the protection of minors and mental wellbeing. They will then have to propose and implement concrete measures under the scrutiny of an audit company, the Commission and vetted researchers.

    Measures could include ensuring that algorithms don’t recommend videos about dieting to teenage girls or turning off autoplay by default so that minors don’t stay hooked watching content.

    Platforms will also be banned from tracking kids’ online activity to show them personalized advertisements. Manipulative designs such as never-ending timelines to glue users to platforms have been connected to addictive behavior, and will be off limits for tech companies. 

    Brussels is also working with tech companies, industry associations and children’s groups on rules for how to design platforms in a way that protects minors. The Code of Conduct on Age Appropriate Design planned for 2024 would then provide an explicit list of measures that the European Commission wants to see large social media companies carry out to comply with the new law.

    Yet the EU’s new content law won’t be the magic wand parents might be looking for. The rulebook doesn’t apply to online games, messaging apps or the digital devices themselves.

    It remains unclear how the European Commission will investigate and go after social media companies if it considers that they have failed to limit their platforms’ negative consequences for mental well-being. External auditors and researchers could also face obstacles in wading through troves of data and lines of code to find smoking guns and challenge tech companies’ claims.

    How much companies are willing to run up against their business model in the service of their users’ mental health is also an open question, said John Albert, a policy expert at the tech-focused advocacy group AlgorithmWatch. Tech giants have made a serious effort at fighting the most egregious abuses, like cyber-bullying, or eating disorders, Albert said. And the level of transparency made possible by the new rules was unprecedented.

    “But when it comes to much broader questions about mental health and how these algorithmic recommender systems interact with users and affect them over time… I don’t know what we should expect them to change,” he explained. The back-and-forth vetting process is likely going to be drawn out as the Commission comes to grips with the complex platforms.

    “In the short term, at least, I would expect some kind of business as usual.”


    Carlo Martuscelli and Clothilde Goujard


  • Senator Ted Cruz slams US agency for ‘collusion’ with EU on Big Tech rules



    U.S. Republican Senator Ted Cruz called for details on the Federal Trade Commission’s (FTC) work with its European counterparts in a letter to FTC Chairwoman Lina Khan on Tuesday.

    The conservative Texas lawmaker criticized Khan and other FTC staff for meeting with European Commission officials to discuss incoming EU rules designed to rein in Big Tech companies, which are largely U.S.-based.

    “It is one thing for the EU to target U.S. businesses,” the letter said, but “it is altogether unthinkable that an agency of the U.S. government would actively help the EU” on its digital platform regulation.

    The FTC’s “collusion with foreign governments not only undermines U.S. sovereignty and Congress’s constitutional lawmaking authority,” Cruz’s letter said, “but also damages the competitiveness of U.S. firms and could negatively affect the savings of millions of Americans who hold stock in those companies” through pension plans.

    The letter comes just as tech giants like Meta, X (formerly Twitter) and TikTok are set to have to comply with the Commission’s Digital Services Act (DSA); they face steep fines if they don’t follow the DSA’s content-moderation rules, adopted in 2022.

    The Commission also plans to label companies with core digital services — such as Apple’s App Store and Google Search — as “gatekeepers” under the Digital Markets Act (DMA), which is designed to make it harder for them to abuse their market dominance. Seven companies — including the U.S.-headquartered Apple, Meta, Alphabet, Amazon and Microsoft — notified their own platform services to the Commission as potential gatekeepers in July.

    The senator said that the DMA and DSA “objectively discriminate against U.S. companies” through mandatory compliance costs. In the letter, Cruz asked for detailed information on the number of FTC officials who have been “sent to Europe since June 2021,” as well as their titles and monthly expenses.

    Cruz also asked for details on the Commission’s office in San Francisco, which opened last September, and the FTC officials who have met with their EU counterparts there.

    On a visit to the EU’s California office in June, Internal Market Commissioner Thierry Breton rejected accusations that the bloc’s digital rulebooks target U.S.-based companies, calling the idea an “urban legend” and noting that non-U.S. companies must also comply with the rules.

    Cruz’s letter follows a similar one from Republican U.S. Representative James Comer, chairman of the House Oversight Committee, asking that communications between the FTC and the Commission on the DMA be turned over to Congress.

    Clothilde Goujard contributed reporting.


    Edith Hancock


  • EU to Zuckerberg: Explain yourself over Instagram pedophile network



    EU Internal Market Commissioner Thierry Breton wants Meta CEO Mark Zuckerberg to explain and take “immediate” action over a recently exposed large pedophile network on Instagram.

    Instagram has been letting a vast network of accounts promoting and purchasing child sexual abuse material flourish on its platform, according to investigations by the Wall Street Journal and researchers released on June 7. The social media platform lets users search for explicit hashtags, and offenders exploit its recommendation algorithms to promote illicit content.

    “Meta’s voluntary code on child protection seems not to work,” Breton wrote Thursday on Twitter. “Mark Zuckerberg must now explain & take immediate action.”

    Breton said he will discuss the issue with Zuckerberg at the Meta headquarters on June 23 during a trip to the U.S. The politician will travel later this month to see how social media companies including Twitter are preparing to comply with the EU’s flagship content moderation law, the Digital Services Act (DSA).

    He said Meta will have to “demonstrate measures” to the European Commission after August 25, when the DSA starts applying to Big Tech platforms. Otherwise, the company could face sweeping fines of up to 6 percent of its global annual revenue. Under the DSA, platforms have to crack down on illegal content and ensure children are safe on their services. Companies also have to assess and limit how their platforms and algorithms contribute to major societal problems, such as the dissemination of illegal content and risks to the protection of minors.

    A Meta spokesperson said the company has set up an internal task force to investigate and “immediately address” the recent findings from the Wall Street Journal and researchers.

    The company works “aggressively to fight” child exploitation and supports law enforcement in tracking down criminals, the spokesperson said. Meta dismantled 27 “abusive networks” between 2020 and 2022 and disabled over 490,000 accounts for violating its child safety policies in January 2023, they added.


    Clothilde Goujard


  • EU’s Breton says Twitter ‘can’t hide’ after platform ditches disinformation code



    Twitter has abandoned the EU’s code of practice on disinformation, Thierry Breton said late Friday, but Europe’s internal markets commissioner insisted that “obligations remain” for the social networking giant.

    “You can run but you can’t hide,” Breton said in a tweet, after confirming that the platform owned by Elon Musk had left the bloc’s disinformation code, which other major social media platforms have pledged to support.

    “Beyond voluntary commitments, fighting disinformation will be a legal obligation under DSA as of August 25,” Breton said, referring to the Digital Services Act — new social media rules that include fines of up to 6 percent of a company’s annual revenue.

    “Our teams will be ready for enforcement,” the commissioner said.

    The code of practice on disinformation is a voluntary rulebook that includes obligations for platforms to track political advertising, stop the monetization of disinformation, and provide greater access to outsiders. Participation in the code is designed to help offset some of these companies’ obligations within the separate and mandatory DSA.

    Twitter is one of eight social media platforms that fall under the scope of the DSA. The others are Facebook, TikTok, YouTube, Instagram, LinkedIn, Pinterest and Snapchat.

    Breton has publicly vowed that he would personally hold Musk to account for complying with the EU’s content rules.


    Jones Hayden


  • Blocked! French minister threatens to ban Twitter if it doesn’t follow EU rules



    France’s Digital Minister Jean-Noël Barrot waded into a growing tussle between the European Union and Elon Musk’s Twitter on Monday, as he threatened the social media platform’s access to the bloc.

    In comments made on radio network France Info, the minister said that the U.S. company would be banned from the EU if it refused to follow the incoming European Digital Services Act, which goes into effect throughout the EU at the end of August.

    “Disinformation is one of the gravest threats weighing on our democracies,” said Barrot. “Twitter, if it repeatedly doesn’t follow our rules, will be banned from the EU,” the French minister added.

    The remarks mark an escalation of an ongoing fight between European politicians and Twitter, which was bought last year by Elon Musk, the controversial billionaire who also controls Tesla and SpaceX.

    Last week, POLITICO reported that the social media platform was withdrawing from the EU’s voluntary disinformation code of practice.

    The code spells out obligations for large digital platforms on tracking political advertising, clamping down on disinformation, and encouraging wider access and participation for outsiders. Other major social media platforms have pledged to support the rulebook, which is meant to pre-empt some of the measures that will become mandatory under the incoming Digital Services Act. The regulation foresees fines worth up to 6 percent of a company’s annual revenue for rule-breakers.

    Internal Markets Commissioner Thierry Breton tweeted “You can run but you can’t hide” in response to Twitter’s decision to withdraw from the code.


    Carlo Martuscelli


  • What the hell is wrong with TikTok? 




    Western governments are ticked off with TikTok. The Chinese-owned app loved by teenagers around the world is facing allegations of facilitating espionage, failing to protect personal data, and even of corrupting young minds.

    Governments in the United States, United Kingdom, Canada, New Zealand and across Europe have moved to ban the use of TikTok on officials’ phones in recent months. If hawks get their way, the app could face further restrictions. The White House has demanded that ByteDance, TikTok’s Chinese parent company, sell the app or face an outright ban in the U.S.

    But do the allegations stack up? Security officials have given few details about why they are moving against TikTok. That may be due to sensitivity around matters of national security, or it may simply indicate that there’s not much substance behind the bluster.

    TikTok’s Chief Executive Officer Shou Zi Chew will be questioned in the U.S. Congress on Thursday and can expect politicians from all sides of the spectrum to probe him on TikTok’s dangers. Here are some of the themes they may pick up on: 

    1. Chinese access to TikTok data

    Perhaps the most pressing concern is around the Chinese government’s potential access to troves of data from TikTok’s millions of users. 

    Western security officials have warned that ByteDance could be subject to China’s national security legislation, particularly the 2017 National Intelligence Law that requires Chinese companies to “support, assist and cooperate” with national intelligence efforts. This law is a blank check for Chinese spy agencies, they say.

    TikTok’s user data could also be accessed by the company’s hundreds of Chinese engineers and operations staff, any one of whom could be working for the state, Western officials say. In December 2022, some ByteDance employees in China and the U.S. used the app’s data to track journalists at Western media outlets (and were later fired).

    EU institutions banned their staff from having TikTok on their work phones last month. An internal email sent to staff of the European Data Protection Supervisor, seen by POLITICO, said the move aimed “to reduce the exposure of the Commission from cyberattacks because this application is collecting so much data on mobile devices that could be used to stage an attack on the Commission.” 

    And the Irish Data Protection Commission, TikTok’s lead privacy regulator in the EU, is set to decide in the next few months if the company unlawfully transferred European users’ data to China. 

    Skeptics of the security argument say that the Chinese government could simply buy troves of user data from little-regulated brokers. American social media companies like Twitter have had their own problems preserving users’ data from the prying eyes of foreign governments, they note. 

    TikTok says it has never given data to the Chinese government and would decline if asked to do so. Strictly speaking, ByteDance is incorporated in the Cayman Islands, which TikTok argues would shield it from legal obligations to assist Chinese agencies. ByteDance is owned 20 percent by its founders and Chinese investors, 60 percent by global investors, and 20 percent by employees. 


    The company has unveiled two separate plans to safeguard data. In the U.S., Project Texas is a $1.5 billion plan to build a wall between the U.S. subsidiary and its Chinese owners. The €1.2 billion European version, named Project Clover, would move most of TikTok’s European data onto servers in Europe.

    Nevertheless, TikTok’s chief European lobbyist Theo Bertram also said in March that it would be “practically extremely difficult” to completely stop European data from going to China.

    2. A way in for Chinese spies

    If Chinese agencies can’t access TikTok’s data legally, they can just go in through the back door, Western officials allege. China’s cyber-spies are among the best in the world, and their job will be made easier if datasets or digital infrastructure are housed in their home territory.

    Dutch intelligence agencies have advised government officials to uninstall apps from countries waging an “offensive cyber program” against the Netherlands — including China, but also Russia, Iran and North Korea.

    Critics of the cyber espionage argument refer to a 2021 study by the University of Toronto’s Citizen Lab, which found that the app did not exhibit the “overtly malicious behavior” that would be expected of spyware. Still, the director of the lab said researchers lacked information on what happens to TikTok data held in China.

    TikTok’s Project Texas and Project Clover include steps to assuage fears of cyber espionage, as well as legal data access. The EU plan would give a European security provider (still to be determined) the power to audit cybersecurity policies and data controls, and to restrict access to some employees. Bertram said this provider could speak with European security agencies and regulators “without us [TikTok] being involved, to give confidence that there’s nothing to hide.” 

    Bertram also said the company was looking to hire more engineers outside China. 

    3. Privacy rights

    Critics of TikTok have accused the app of mass data collection, particularly in the U.S., where there are no general federal privacy rights for citizens.

    In jurisdictions that do have strict privacy laws, TikTok faces widespread allegations of failing to comply with them.

    The company is being investigated in Ireland, the U.K. and Canada over its handling of underage users’ data. Watchdogs in the Netherlands, Italy and France have also investigated its privacy practices around personalized advertising and for failing to limit children’s access to its platform. 

    TikTok has denied accusations leveled in some of the reports and argued that U.S. tech companies collect similarly large amounts of data. Meta, Amazon and others have also been handed large fines for violating Europeans’ privacy.

    4. Psychological operations

    Perhaps the most serious accusation, and certainly the most legally novel one, is that TikTok is part of an all-encompassing Chinese civilizational struggle against the West. Its role: to spread disinformation and stultifying content in young Western minds, sowing division and apathy.

    Earlier this month, the director of the U.S. National Security Agency warned that Chinese control of TikTok’s algorithm could allow the government to carry out influence operations among Western populations. TikTok says it has around 300 million active users in Europe and the U.S. The app ranked as the most downloaded in 2022.


    Reports emerged in 2019 suggesting that TikTok was censoring pro-LGBTQ content and videos mentioning Tiananmen Square. ByteDance has also been accused of pushing inane time-wasting videos to Western children, in contrast to the wholesome educational content served on its Chinese app Douyin.

    Besides accusations of deliberate “influence operations,” TikTok has also been criticized for failing to protect children from addiction to its app, dangerous viral challenges, and disinformation. The French regulator said last week that the app was still in the “very early stages” of content moderation. TikTok’s Italian headquarters was raided this week by the consumer protection regulator with the help of Italian law enforcement to investigate how the company protects children from viral challenges.

    Researchers at Citizen Lab said that TikTok doesn’t enforce obvious censorship. Other critics of this argument have pointed out that Western-owned platforms have also been manipulated by foreign countries, such as Russia’s campaign on Facebook to influence the 2016 U.S. elections. 

    TikTok says it has adapted its content moderation since 2019 and regularly releases a transparency report about what it removes. The company has also touted a “transparency center” that opened in the U.S. in July 2020 and one in Ireland in 2022. It has also said it will comply with the EU’s new content moderation rules, the Digital Services Act, which will require platforms to give regulators and researchers access to their algorithms and data.

    Additional reporting by Laura Kayali in Paris, Sue Allan in Ottawa, Brendan Bordelon in Washington, D.C., and Josh Sisco in San Francisco.


    Clothilde Goujard


  • Thierry Breton: Brussels’ bulldozer digs in against US




    Thierry Breton is winning the war of ideas in Brussels.

    The ex-CEO is a political whirlwind with a gigantic portfolio as internal market chief, the backing of French President Emmanuel Macron and lots of proposals. He’s been touring European Union capitals to win support for plans to shield Europe’s industry from crippling energy prices, American subsidies and “naive” EU free traders.

    France’s decades-long push for more state intervention is finally finding some echo in Berlin and the 13th floor of the Berlaymont building, occupied by European Commission President Ursula von der Leyen, who largely owes her job to Macron.

    Omnipresent and ebullient, Breton is playing a key role in marshaling industry and political support for sweeping but so far vague plans to boost clean tech, secure key raw materials and overhaul EU checks on government support that he blasts as too slow to help companies.

    “Of course there is resistance; my job is precisely to manage and align everyone,” he told French TV this week of his January meetings with Spanish, Polish and Belgian leaders to flog a forthcoming industrial policy push that could be a turning point in how far European governments will finance companies.

    Time is short. Von der Leyen wants to line up proposals for a February summit. European industry is complaining that it can’t swallow far higher energy prices and tighter regulation for much longer, with at least one company announcing a European shutdown and an Asian expansion.

    Breton said governments don’t need convincing on the need for rapid action. But he’s running up against one of Europe’s sacred cows — EU state aid rules run by Executive Vice President Margrethe Vestager that curb government support with lengthy checks to make sure companies don’t get unfair help. She’s also under intense pressure to preserve a “level playing field” as smaller countries worry about German and French financial firepower.

    The French internal market commissioner’s bullish style often sees him act as if he’s got a role in subsidies. In the fall, he sent a letter to EU countries asking them to send views on emergency state aid rules to the internal market department, which is under his supervision, two EU officials recalled. 

    In a meeting with European diplomats, a Commission representative had to correct it, the EU officials said, asking capitals to make sure the input goes instead to the competition department overseen by Vestager. 

    Europe First

    While Breton doesn’t like to be called a protectionist, his latest mission has been to protect Europe from its transatlantic friend.

    As early as September, one Commission official said, the Frenchman was mandated by Europe’s industry to speak out against U.S. President Joe Biden’s Inflation Reduction Act, which provides tax credits for U.S.-made electric cars and support to American battery supply chains.

    U.S. President Joe Biden gives remarks during an event celebrating the passage of the Inflation Reduction Act on September 13, 2022 | Anna Moneymaker/Getty Images

    His Paris-backed campaign charged ahead while EU officials and diplomats tiptoed around the subject. Some within the Commission headquarters found his bad cop routine helpful in keeping pressure on the U.S. 

    “He’s been constructive, though clearly disruptive,” said Tyson Barker, head of the technology and global affairs program at the German Council of Foreign Relations.

    The Frenchman has even pitched himself as the bloc’s “sheriff” against Silicon Valley giants, warning billionaire Elon Musk that an overhaul of the Twitter social network can only go so far since “in Europe, the bird will fly by our rules.”

    “Big Tech companies only understand balances of power,” said Cédric O, a former French digital minister who worked with Breton during the French EU Council presidency. “When [Breton and Musk] see each other, it necessarily remains cordial, but Breton shows his teeth and rightly so. It’s his job.”

    Breton can even surprise his own services, according to two EU officials. In May, the Commission’s department responsible for digital policy — DG CONNECT — was caught off guard when Breton announced in the press that he would unveil plans by year-end to make sure that technology giants forked out for telecoms networks. 

    In so doing, Breton — who was CEO of France Télécom in the early 2000s — resurrected a long-dormant and fractious policy debate that had been put to rest almost a decade ago, when erstwhile Digital Commissioner Neelie Kroes ordered Europe’s telecoms operators to “adapt or die” rather than seek money from content providers.

    After Breton’s announcement, the Commission’s services were soon scrambling to develop a coherent policy program to deliver on the Frenchman’s comments. A consultation is scheduled for early this year. 

    Carte blanche

    Breton is a rare creature in the halls of the Berlaymont, where policy is hatched slowly after extensive consultation. To a former CEO with a broad remit — his portfolio runs from the expanse of space to the tiniest of microchips — rapid reaction matters more than worries about treading on toes or singing from the same hymn sheet. This often sees him floating ideas and then pulling back.

    Last year he alarmed environmentalists by raising the prospect of a U-turn on the EU’s polluting car ban. He wagged his finger at German Chancellor Olaf Scholz for a solo trip to China. He called for nuclear energy to be considered green. He has pushed out grand projects — such as industrial alliances on batteries and cloud, or a cyber shield — that he doesn’t always follow up on.

    He’s even pushed forward a multibillion-euro EU communication satellite program dubbed Iris², a favorite of French aerospace companies, that will see the bloc build a rival to Musk’s space-based Starlink broadband constellation.

    “It’s clear that he’s been given more free rein than others,” said one EU official. “He has von der Leyen’s ear,” the official added, noting that Breton enjoys “privileged access” to the Commission president — who may be mindful that she’ll need French support for a second term.

    According to an official, Breton “has von der Leyen’s ear” and enjoys “privileged access” to the Commission president | Valeria Mongeli/AFP via Getty Images

    Indeed, Breton’s massive role was partly designed as a counterweight to a German president.

    “There is a criticism of von der Leyen for being too German,” explained Sébastien Maillard, director of the Jacques Delors Institute think tank. “There may inevitably be a division of roles between them — [where Breton is] a counterbalance.”

    He’s been called an “unguided missile,” but more often than not, the Frenchman has Paris’ backing when going off script. His October op-ed with Italian colleague Paolo Gentiloni, which called for greater European financial solidarity, was part of France’s agenda, according to one high-ranking Commission official.

    “When he went out in the press with Gentiloni against Scholz’s €200 billion, he was clearly doing the job for Macron,” the official said. 

    His November call for a rethink on the 2035 car engine ban came just a week after critical green legislation had been finalized by Commission Executive Vice President Frans Timmermans, and jarred with the EU’s own position at the COP27 climate summit in Egypt. But it aped the position of French auto industry captains, such as Stellantis CEO Carlos Tavares and Renault’s Luca de Meo, who wanted Brussels to slam the brakes on the climate drive.

    Breton had not coordinated his car comments with colleagues in advance, according to two Commission officials.

    Less than 10 days later, French Prime Minister Elisabeth Borne echoed caution about the “extremely ambitious” engine ban and warned that pivoting to electric car manufacturing was daunting.

    Going A-list

    Breton acknowledged himself that he wasn’t Macron’s first choice for the critical EU post, telling POLITICO at a live event that he was a “plan B commissioner.”

    Asked if he was targeting an A-list job for the new Commission mandate in 2024, he said he “may be able to consider a new plan B assignment — if it is a plan B.”

    “He is thinking about the future,” said one EU official. “Look at his LinkedIn posts. He is thinking past the next European elections. He definitely wants to convince Macron to get an expanded portfolio.” 

    Grabbing the Commission’s top job may be tricky, depending on how EU leaders line up, according to multiple EU and French officials. 

    There are other jobs he could pursue, including the economically powerful competition portfolio — though that would mean overturning the unwritten rule that no French or German candidate can hold it. Another option could be becoming Europe’s official digital czar, combining the enforcement powers of the Digital Services Act and the Digital Markets Act into a supranational digital enforcement agency, one EU official said.

    Breton has shrugged off speculation on his long-term plans.

    “All my life, I have been informed of my next potential job 15 minutes before,” he said last month.

    Jakob Hanke Vela, Stuart Lau, Barbara Moens, Camille Gijs and Mark Scott contributed reporting.


    Laura Kayali, Samuel Stolton and Joshua Posaner


  • Europe troubled but powerless over Twitter’s journalist ban



    European politicians said they were troubled by Twitter’s suspension of U.S. journalists from its platform, but the move showed the limits of their planned new rules for online content and media freedom. 

    France’s digital affairs minister Jean-Noël Barrot said he was “dismayed” about the direction Twitter was taking under Elon Musk after the platform removed nine U.S. journalists and other high-profile accounts in a seemingly arbitrary decision.

    “Freedom of the press is the very foundation of democracy. To attack one is to attack the other,” Barrot tweeted.

    European Commission Vice President Věra Jourová called the “arbitrary” removal of journalists worrying. French industry minister Roland Lescure announced he was temporarily quitting the platform in protest.

    The Twitter ban for tech journalists from media organizations such as the New York Times, the Washington Post and CNN appeared to come after they criticized the tech billionaire and self-proclaimed free speech advocate, and wrote about the suspension of more than 20 accounts for sharing publicly available information about the location of Musk’s private jet.

    “Talking a lot about #FreeSpeech, but stopping it as soon as one is criticized oneself: that’s a strange understanding of #FreedomOfExpression,” said Germany’s Justice Minister Marc Buschmann.

    The German Foreign Affairs Ministry’s own Twitter account said press freedom should not “be switched on and off arbitrarily.”

    Twitter has been mired in controversy since it was acquired by Musk in October and shed staff who worked on content moderation and policy affairs. The platform is now struggling to stem disinformation, potentially falling foul of commitments it made in June 2022. This week the company disbanded the board of experts advising it on content policy.

    But restricting journalists’ access to a platform loved by the press risks a serious blow to media freedom and free speech. None of the banned journalists received an explanation of the social media platform’s decision, and it was unclear if and when they would be allowed back. There have been calls to join alternatives such as Mastodon, but links to it have reportedly been blocked on Twitter; the open-source platform’s own account was also blocked.

    Flying by EU rules?

    In Brussels, politicians have pointed to the European Union’s legislative arsenal as a powerful tool to curb platforms’ power, with Internal Market Commissioner Thierry Breton insisting in October that Twitter’s bird logo “will fly by our rules” in the region.

    Those laws and proposals aren’t yet ready for use and can’t counter Musk’s unilateral decisions for the platform he owns. The Commission is preparing to enforce the EU’s content law, the Digital Services Act (DSA), from summer 2023, while the new Media Freedom Act is still being negotiated and may not become law until at least late 2024.

    The DSA — and its ability to levy hefty fines — would require lengthy investigations by a Commission team that isn’t yet fully in place. The Media Freedom Act doesn’t specifically tackle an issue such as “deplatforming” or removing a person from a social network like Twitter.

    The Commission’s Jourová warned Twitter about the possibility of future penalties under the DSA — up to 6 percent of a company’s global revenue if it restricts EU-based users and content in an arbitrary and discriminatory manner. 

    Twitter could also be sanctioned in the future if it doesn’t tell users why they have been penalized. Large online platforms with over 45 million users in the EU will have to assess and limit potential harms to freedom of expression and information as well as media freedom and pluralism.

    “EU’s Digital Services Act requires respect of media freedom and fundamental rights. This is reinforced under our #MediaFreedomAct,” she tweeted. “@elonmusk should be aware of that. There are red lines. And sanctions, soon.”

    Politicians’ threats don’t reassure media and journalists’ organizations.

    “The European legal arsenal is not sufficient to oppose acts of arbitrary censorship,” said Ricardo Gutierrez, general secretary of the European Federation of Journalists (EFJ). 

    The draft Media Freedom Act is largely aimed at how Big Tech treats news organizations. Very large online platforms would have to inform news outlets before taking down their content. It also foresees talks between media organizations and big social media platforms to discuss content moderation problems.

    Wouter Gekiere of the European Broadcasting Union in Brussels echoed those worries, saying public media services couldn’t see how the DSA could prevent takedowns of journalists’ accounts.

    “The European Media Freedom Act would not do much more to protect the media online,” he said. “Journalists and editors need to have the ability to report on stories without fear of arbitrary platform controls.”

    Laura Kayali and Mark Scott contributed reporting.


    Clothilde Goujard


  • UK takes fresh stab at internet rules as EU framework surges ahead



    LONDON — The United Kingdom wants to police the internet. Shame the European Union got there first. 

    Brexit was supposed to let Britain do things quicker. But less than a month after the 27-member bloc’s Digital Services Act (DSA) entered into force, London is still struggling to cobble together its own version of the rulebook, known as the Online Safety Bill.

    On Monday it tried again, with Britain’s Digital Secretary Michelle Donelan presenting a tweaked bill to parliament. It got the backing of MPs, but faces fresh committee scrutiny before heading to the House of Lords. And the path to a settled law still looks far from certain. 

    The bill, which seeks to make Britain “the safest place in the world to be online,” has not only been a casualty of the country’s political instability — it has also proved a divisive issue for the country’s governing Conservative Party, where a vocal minority of backbenchers still view it as an unnecessary limit to free speech.

    “Far from being world-leading, the government has been beaten to the punch in regulating online spaces by numerous jurisdictions, including Canada, Australia and the EU,” said Lucy Powell, the opposition Labour Party’s shadow digital secretary.

    Powell said the latest version of the Online Safety Bill was also at risk of getting stuck due to “chaos in government and vested interests,” adding that it was imperative the bill pass through the legislature by April, when the current parliamentary session ends. 

    Much of the disagreement over the bill has centered on rules policing so-called legal-but-harmful content. That’s been largely dropped from the latest version of the planned law, after Prime Minister Rishi Sunak’s government bowed to pressure from right-wing MPs within his own party, who argued that the provisions threatened free speech.

    In the previous iteration of the bill, Ofcom, the country’s telecommunications and media regulator, was on the hook for enforcing rules that required social media giants to take action against potentially harmful but technically legal material like the promotion of self-harm.

    The government’s scrapping of legal-but-harmful content hasn’t been universally welcomed, however. Nadine Dorries, Donelan’s predecessor as digital secretary, proposed the provisions and has griped that they’d already passed parliamentary scrutiny before the bill was paused. 

    Long and winding road

    Britain’s attempts to regulate the internet really got going under Theresa May, who became prime minister in the wake of Britain’s vote to leave the European Union, and as lawmakers were growing more skeptical of tech.

    The Tories’ May 2017 election manifesto promised that “online rules should reflect those that govern our lives offline,” but by the time Boris Johnson published his 2019 election offering, the Conservatives were also promising to protect the most vulnerable from accessing harmful content. Under Johnson’s close ally Dorries, a version of the legislation tackling legal-but-harmful content started to make its way through Parliament, before it was put on pause after he was ousted by Tory MPs.

    Johnson, the former prime minister, often seemed caught between his own personal free speech philosophy and his populist instincts of attacking Big Tech.

    The summer Tory leadership contest to replace Johnson reignited the debate, with contenders promising to look again at the law before the legal-but-harmful content provisions were ultimately watered down. Donelan replaced Dorries, becoming the seventh culture secretary since Brexit.

    The EU’s path to its online rulebook has been quicker. In part that’s because questions over free speech haven’t become the political touchpaper in Brussels that they are in the Anglosphere. The EU also largely side-stepped the issue by keeping its own rulebook more squarely aimed at purely illegal content, and the European Commission has made clear in public that it does not want to create a so-called “Ministry of Truth.” 

    That means the EU hasn’t had to contend with the deep divisions the Online Safety Bill has prompted in the U.K., especially among the governing Tories.

    Instead, Brussels’ institutions have been mainly aligned on the key aspects of its framework, the DSA. The European Parliament and Council of the EU — representing the 27 European governments — largely supported the European Commission’s cautious approach to create rules to crack down on public-facing content illegal under EU or national laws like child sexual abuse material or terrorist propaganda. 

    When it comes to legal-but-harmful content, the EU’s approach requires very large online platforms — those with more than 45 million European users — to assess and limit the spread of content like disinformation and cyberbullying under the watch of regulators. Europe’s rules also go further than those on the other side of the Channel by including mandated risk assessments and audits for tech giants like Meta and Alphabet so that they can be held accountable for potential wrongdoing. In the U.K., the main enforcement has been left to Ofcom via investigations. 

    Disagreements, when they came in Europe, have been on the edges, rather than at the core of the debate. Rows focused on limits to targeted ads and the level of obligations for online marketplaces like Amazon to carry out random checks on dangerous products on their platforms. In another example, some EU countries like France and Germany pushed and failed to force a 24-hour deadline for online platforms to take down illegal content. 

    Not just free speech

    In the U.K., it’s not just free speech issues that have proved controversial. The EU set out separate rules aiming to clamp down on child sexual abuse material online, while the U.K. folded similar provisions into the Online Safety Bill.

    That means high-stakes questions over how and whether the monitoring requirements undermine privacy — especially in encrypted messaging apps like WhatsApp — are being dealt with separately in the EU. But in the U.K. they’ve been thrown into the same mix as wide-ranging free speech debates.

    Differences between the rulebooks also raise the prospect of costly regulatory misalignment. While the U.K. bill slaps general monitoring requirements on the tech companies themselves, that approach is explicitly banned by the EU. Last month, the British regulator and its Australian counterpart created a new Western coalition of online content regulators, but did not invite any EU counterparts to those discussions. Only Ireland’s watchdog joined as an observer.

    “This is about setting up our international engagement in expectation of setting up our rules,” Melanie Dawes, Ofcom’s chief executive, told POLITICO when announcing that initiative. “The success of this is about bringing together international partners.”

    Clothilde Goujard reported from Brussels.


    Vincent Manancourt, Annabelle Dickson, Clothilde Goujard and Mark Scott


  • Elon Musk gives Europe’s digital watchdogs their biggest test yet




    After Elon Musk bought Twitter — and fired almost anyone whose job it was to deal with regulators — the social networking giant is now facing a flood of legal challenges across the European Union.

    The question now is whether the EU’s watchdogs can live up to their ambitions to be the world’s digital policemen.

    Ireland’s privacy regulator wants to know whether the company’s data protection standards are good enough. The European Commission doesn’t know whom to ask about its upcoming online content rules. The bloc’s cybersecurity agencies are raising concerns about an increase in online trolls and potential security risks.

    Twitter’s unfolding turmoil is precisely the regulatory challenge that Brussels has said it wants to take on. The 27-country bloc has positioned itself — via a flurry of privacy, content and digital competition rules — as the de facto enforcer for the Western world, expanding its digital rulebook beyond the EU’s borders and urging other countries to follow its lead.

    Now, the world’s richest man is putting those enforcement powers to the test. 

    Europe’s regulators have the largest collective rulebook to throw at companies suspected of potential breaches. But a lack of willingness to act quickly — combined with the internal confusion engulfing Twitter — has so far hamstrung the bloc’s enforcement role when it comes to holding Musk to Europe’s standards, according to eight EU and national government officials, speaking privately to POLITICO. 

    “This will be a major test for European regulators,” said Rebekah Tromble, director of the Institute for Data, Democracy & Politics at George Washington University. She is part of the advisory board of the European Digital Media Observatory, a group helping to shape the EU’s online content rulebook, known as the Digital Services Act (DSA).

    “If Musk continues to act with intransigence, I think there’s an opportunity for European regulators to move much more quickly than normal,” she added. “These regulators will certainly be motivated to act.”

    A representative for Twitter did not return requests for comment.

    Regulatory firepower

    The bloc certainly has the firepower to bring Twitter to heel.

    Under the EU’s General Data Protection Regulation, companies can be fined up to 4 percent of their annual global revenue for failing to keep people’s personal information safe. The Irish regulator, which has responsibility for enforcing these rules against Twitter because the company’s EU headquarters are in Dublin, has already doled out a €450,000 penalty for the firm’s inability to keep data safe.

    As part of the bloc’s upcoming content rules, which will start to be enforced next year, the Commission will have powers to levy separate fines of up to 6 percent of a company’s yearly revenue if it does not take down illegal content. Brussels also has the right to ban a platform from operating in the EU after repeated serious violations.

    “In Europe, the bird will fly by our rules,” Thierry Breton, the French commissioner, told Musk — via Twitter | Kenzo Tribouillard/AFP via Getty Images

    Thierry Breton, the European internal market commissioner, reminded Musk of Twitter’s obligations under the bloc’s upcoming content rules in a call with the billionaire soon after his acquisition of the social network. Musk pledged to uphold those rules, even as he has pushed back at other content moderation practices that could hamper people’s freedom of expression on the platform.

    “In Europe, the bird will fly by our rules,” Breton, the French commissioner, told Musk — via Twitter.

    Yet over the last three weeks, European regulators and policymakers have struggled to navigate Twitter’s internal turmoil, according to four EU and national officials who spoke on the condition of anonymity to discuss internal deliberations.

    The likes of Damien Kieran, Twitter’s chief privacy officer in charge of complying with Europe’s tough data protection standards, and Stephen Turner, the company’s chief lobbyist in Brussels, were among scores of senior officials who have left since Musk took over.

    Two of the EU officials, speaking about internal discussions on condition of anonymity, told POLITICO that multiple emails to Twitter executives bounced back after those individuals were laid off. One of those policymakers said he had taken to Twitter — scrolling through the scores of posts from the company’s employees announcing their departures — in search of information about who was still working there. A third official said the current confusion could prove problematic when the company had to reveal long-guarded information about the number of its EU users early next year. 

    Others have been fostering wider connections within the company, just in case. Arcom, France’s online platform regulator, for instance, has built ties with high-level executives outside of France and still had a contact in Dublin at the company to answer its pressing questions.

    The policymaking blackholes — fueled by mass layoffs — have been felt beyond the EU. 

    Julie Inman Grant, Australia’s eSafety commissioner who previously ran Twitter’s public policy team in Asia, told POLITICO she had written to the company last week to remind them about its obligations to clamp down on child sexual exploitation on the platform. She had yet to hear back from Musk or other senior officials.

    “We did have a meeting on the books with Twitter,” Melanie Dawes, chief executive of Ofcom, the U.K.’s communications regulator, told POLITICO ahead of her trip to Silicon Valley this week to meet many of the social media companies. “It was canceled.”

    What about privacy?

    Another open question is how Twitter will comply with Europe’s tough privacy rules.

    Although the company’s chief privacy executive had been fired — and rumors swirled Twitter could pull out of Ireland in its cost-saving push — the Irish Data Protection Commission told POLITICO it had yet to open an investigation into the firm.

    A spokesman for the agency said Twitter executives had assured Irish regulators on Monday that Renato Monteiro had been appointed as the company’s acting data protection officer — because it’s a legal requirement to have one — and no changes to how Twitter handled data had been made.  

    A data protection official said it was likely that Musk would move such decision-making powers to his inner circle in the United States | Justin Sullivan/Getty Images

    A key unanswered question is whether, in the wake of the mass layoffs, Twitter’s operations in Dublin are either shuttered or cut back to an extent that regulatory decisions are made in California and not Ireland.

    Such a change would lead the company to fall foul of strict provisions within Europe’s privacy regime that require legal oversight of EU citizens’ data to be made in a firm’s headquarters within the 27-country bloc.

    A data protection official, who asked to remain anonymous to speak candidly, said it was likely that Musk would move such decision-making powers to his inner circle in the United States. That potential pullback could allow any European regulator — and not just the Irish agency — to go after Twitter for potential privacy violations under the bloc’s data protection regime, the official added.

    This story has been corrected to specify how multiple European privacy regulators may target Twitter for breaching the bloc’s rules if the company pulls out of Ireland.


    Mark Scott, Vincent Manancourt, Laura Kayali, Clothilde Goujard and Louis Westendarp


  • Musk fires chief Brussels lobbyist in Twitter’s layoff round



    Twitter’s director for EU public policy, Stephen Turner, is among the thousands of employees laid off by new owner Elon Musk, Turner announced on the platform Monday.

    “After six years I am officially retired from Twitter. From starting the office in Brussels to building an awesome team it has been an amazing ride. Privileged and honoured to have the best colleagues in the world, great partners, and never a dull moment. Onto the next adventure,” he tweeted.

    Since taking over Twitter, Musk has reportedly sacked half of the company’s workforce — including lobbyists and content moderators. The deep cuts to the policy teams have raised concern among regulators and politicians.

    On Monday morning, only two members of Twitter’s six-person policy team in Brussels still had a job, one person with first-hand knowledge of the issue told POLITICO.

    Turner spearheaded Twitter’s engagement and lobbying in Brussels at a time when the EU crafted a series of strict laws regulating privacy, content moderation, media freedom, online advertising and more.


    Laura Kayali
