ReportWire

Tag: Illegal content

  • Spain set to ban social media for children under 16

    Spain will join the growing list of countries banning access to social media for children, Prime Minister Pedro Sanchez announced Tuesday. The law will apply to users under 16 years of age and comes amid a broader push to hold social media companies accountable for hate speech, social division and illegal content.

    Speaking at the World Governments Summit in Dubai, Prime Minister Sanchez excoriated social media, calling it a “failed state” where “laws are ignored and crime is endured.” He spoke to the importance of digital governance for these platforms, highlighting recent examples like X’s AI chatbot Grok producing sexualized images of children and the myriad incidents that have taken place on Facebook.

    In light of what Sanchez called the “integral” role social media plays in the lives of young users, he said the best way to help them is to “take back control.” Next week, his government will enact a slew of new regulations, with a ban on users under 16 years of age among them. Social media companies will be required to implement what he calls “effective age verification systems” and “not just checkboxes.” A specific timeline on enforcement of the coming ban has not been announced.

    Spain will also make “algorithmic manipulation and amplification of illegal content” a new criminal offense, and Sanchez said tech CEOs will face criminal liability for hateful or illegal content on their platforms. The Prime Minister further announced that Spain has formed a coalition with five other unnamed European nations to enact stricter governance over social media platforms.

    Sanchez said children have been “exposed to a space they were never meant to navigate alone,” and that it’s the government’s job to intervene. He added that social media has fallen short of its promise to be a “tool for global understanding and cooperation.”

    Australia enacted an under-16s ban on social media last year, which has prompted many nations to follow suit. A similar ban is under consideration in the UK, while several other countries have announced plans to enact bans of their own.

    Andre Revilla

  • EU charges Meta and TikTok over failures to tackle illegal content

    The European Commission has found that Meta and TikTok violated rules under the Digital Services Act (DSA) and is now giving them the chance to comply if they want to avoid fines of up to 6 percent of their total worldwide annual turnover. According to the Commission, Facebook, Instagram and TikTok have “put in place burdensome procedures and tools” for researchers who want to request access to public data. This leaves researchers with incomplete or unreliable information when studying topics like how minors are exposed to illegal or harmful content online. “Allowing researchers access to platforms’ data is an essential transparency obligation under the DSA,” the Commission wrote.

    In addition, the Commission is charging Meta over the lack of a user-friendly mechanism that would allow users to easily report posts with illegal content, such as child sexual abuse materials. The Commission explained that Facebook and Instagram require several steps to flag a post and use “dark pattern” interface designs that make reporting confusing and discouraging. All those factors breach DSA rules that require online platforms to give EU users easy-to-use mechanisms for reporting illegal content.

    Under the DSA, users must also be able to challenge social networks’ decisions to remove their posts or suspend their accounts. The Commission found that neither Facebook nor Instagram allows users to explain their side or provide evidence to substantiate their appeals, which limits the effectiveness of the appeal process.

    Meta and TikTok will be able to examine the Commission’s investigation files and reply in writing to its findings. They’ll also have the opportunity to implement changes to comply with DSA rules, and it’s only if the Commission decides they’re non-compliant that they can be fined up to 6 percent of their global annual turnover. Meta disagreed that it had breached DSA rules, according to the Financial Times. “In the European Union, we have introduced changes to our content reporting options, appeals process, and data access tools since the DSA came into force and are confident that these solutions match what is required under the law in the EU,” it said in a statement. Meanwhile, TikTok said it was reviewing the Commission’s findings but that “requirements to ease data safeguards place the DSA and GDPR in direct tension.” It’s asking regulators for guidance on “how these obligations should be reconciled.”

    Mariella Moon

  • UK regulator fines 4chan for ignoring Online Safety Act demands

    Ofcom has slapped 4chan with a £20,000 ($26,700) fine for failing to comply with the internet and telecommunications regulator’s request for information under the UK’s Online Safety Act of 2023. The regulator has released an update for 11 of the investigations it opened after the first of its online safety codes became enforceable in March this year. Apparently, 4chan has ignored its requests for a copy of its illegal harms risk assessment and to provide information about its qualifying worldwide revenue. This is the first fine Ofcom has handed down under the new law, which was designed to prevent children from accessing harmful content online and which has prompted websites like Reddit and X to put up age verification measures.

    When the regulator launched its probe into 4chan in June, it said it had received complaints about illegal content on the anonymous online board. It doesn’t exactly come as a surprise that 4chan refused to give the regulator information about the risks of illegal content on its website: Back in August, the service filed a lawsuit against Ofcom, arguing that enforcement of the UK’s Online Safety Act violates Americans’ freedom of speech. “This fine is a clear warning to those who fail to remove illegal content or protect children from harmful material,” said Liz Kendall, the UK Secretary of State for Science, Innovation and Technology. The regulator is also imposing an additional penalty of £100 ($133) per day on 4chan until it complies with its requests for information.

    Ofcom has announced the results of other investigations, as well, such as finding “serious compliance concerns” with two file-sharing services that have now deployed an automated tool that can detect and quickly remove uploads with child sexual abuse material (CSAM). Four other file-sharing services that were also under investigation for CSAM chose to geoblock access from UK IP addresses instead, so the regulator closed their cases.

    Mariella Moon

  • X vs. EU: Elon Musk hit with probe over spread of toxic content

    Elon Musk just got an early, unwelcome Christmas present from Europe: the bloc’s first-ever investigation via its new social media law into X.

    The European Commission on Monday opened infringement proceedings under the Digital Services Act (DSA) into X, formerly known as Twitter, after the billionaire and his company faced repeated claims that they were not doing enough to stop disinformation and hate speech from spreading online.

    The four investigations focus on X’s suspected failure to comply with rules to counter illegal content and disinformation, as well as rules on transparency in advertising and data access for researchers. They will also scrutinize whether X misled its users by changing its so-called blue checks, which were initially launched as a verification tool but now serve as an indicator that a user is paying a subscription fee.

    “The Commission will carefully investigate X’s compliance with the DSA, to ensure European citizens are safeguarded online — as the regulation mandates,” Margrethe Vestager, the Commission’s executive vice president for digital policy, said in a statement.

    “We now have clear rules, ex-ante obligations, strong oversight, speedy enforcement and deterrent sanctions and we will make full use of our toolbox to protect our citizens and democracies,” said EU Internal Market Commissioner Thierry Breton. 

    “X remains committed to complying with the Digital Services Act, and is cooperating with the regulatory process,” Joe Benarroch, an X executive, said in an email to POLITICO.

    The investigations, which do not in themselves constitute a finding of wrongdoing, will lead to a monthslong probe and could result in fines of up to 6 percent of a company’s global revenue.

    The rulebook, which started applying in late August, represents the most widespread attempt by any region or country in the Western world to hold social media companies to account for what is posted on their platforms. That includes lengthy risk assessments and outside audits to prove to regulators these companies are clamping down on illegal content like hate speech.

    The Commission, which enforces the DSA on 19 so-called Very Large Online Platforms, or VLOPs, has already taken preliminary steps like requests for information against several other social media networks including Instagram, Facebook, TikTok, YouTube and Snapchat. The focus has been on how they handle illegal content, combat disinformation and protect minors. 

    While Europe’s new social media rules only came into full force in late summer, X has been squarely on Brussels’ radar.

    Musk fired half of the company’s employees — including almost all of its trust and safety team — in November 2022. That included many of the company’s European Union-focused policy jobs, either in Brussels or in Dublin, where the company has its EU headquarters.

    The social networking giant also pulled out of the EU’s code of practice on disinformation in May, an industry pledge coordinated by the Commission that will soon serve as a part of the bloc’s DSA rules. 

    Musk publicly committed X to complying with the bloc’s DSA rules, though he remains a vocal advocate for almost unfettered free speech rights for people who use his platform.

    Yet it was after Hamas militants attacked Israel on October 7 that Commission regulators upped their attention, according to four officials with direct knowledge of the matter who were granted anonymity to discuss internal discussions. Part of the investigations, linked to potentially illegal content, resulted from posts associated with the ongoing Middle East war.

    In the days and weeks following the Middle East attack, X was flooded with often gruesome images of suspected beheadings — often with few, if any, removals by the tech giant. Repeated requests for information from the company went unanswered, while discussions with X representatives, including at meetings in San Francisco with X engineers in the summer, often left Commission officials unsatisfied, according to two of the individuals who spoke to POLITICO.

    The company was the first to receive a request for information from the Commission in October about how it has tackled problematic content like graphic illegal content and disinformation linked to Hamas’ attack on Israel.

    The Commission on Monday said it would investigate whether X had respected the requirement to quickly remove illegal content once flagged, including “in light of X’s content moderation resources.” It said it would also examine whether X’s so-called community notes, or crowdsourced fact-checking program, and policies to limit risks for election integrity complied with the DSA.

    Brussels will also review whether X’s so-called blue checks, markers that can be bought by accounts to show they have been verified, could trick users into thinking blue check-holding accounts are more trustworthy. Regulators will similarly look into changes to how outsiders could analyze X’s data after the company replaced free access to this data with a paid version that costs up to $240,000 (€220,000) a month. X’s mandatory publicly accessible library of ads that ran on its platform will also be part of the investigations. 

    The investigations could lead to different outcomes in the coming months, ranging from a sweeping fine to orders imposing specific measures and commitments from X to make changes.

    “It is important that this process remains free of political influence and follows the law,” added Benarroch, the X executive. “X is focused on creating a safe and inclusive environment for all users on our platform, while protecting freedom of expression, and we will continue to work tirelessly toward this goal.”

    Clothilde Goujard and Mark Scott

  • Inside the police force scouring the internet to save abused children

    EUROPOL HEADQUARTERS, THE HAGUE — “Please knock. Do not enter,” said the sign on the door of Europe’s heavily-secured law enforcement headquarters in the Netherlands.

    Inside, detectives were staring at their computers, examining a video of a newborn girl being molested. 

    A group of international detectives was trying to identify details — a toy, a clothing label, a sound — that would allow them to rescue the girl and arrest those who sexually abused her, recorded it and then shared it on the internet.

    Even a tiny hint could help track down the country where the baby girl was assaulted, allowing the case to be transferred to the right police authority for further investigation. Such details matter when police are trying to tackle crimes carried out behind closed doors but disseminated online across the world.

    Finding and stopping child sex offenders is gruesome and frustrating most of the time — yet hugely rewarding sometimes — police officers on the international task force at the EU agency Europol told POLITICO.

    Offenders are getting better at covering their digital tracks and law enforcement officials say they don’t have the tools they need to keep up. The increasing use of encrypted communication online makes investigators’ work harder, especially as a pandemic that kept people at home and online ramped up a flood of abuse images and videos.

    In 2022, social media giant Meta Platforms found and reported 26 million images on Facebook and Instagram. Teenagers’ favorite apps Snapchat and TikTok respectively filed over 550,000 and nearly 290,000 reports to the U.S. National Center for Missing and Exploited Children, an organization acting as a clearing house under U.S. law for the child sexual abuse material (CSAM) that technology firms detect.

    The European Commission in December also ordered Meta to explain what it was doing to fight the spread of illegal sexual images taken by minors themselves and shared through Instagram, under the EU’s new content-moderation rulebook, the Digital Services Act (DSA).

    Politicians across the world are keen to act. In the European Union and the United Kingdom, legislators have drafted laws to dig up more illegal content and extend law enforcement’s powers to crack down on child sexual abuse material.

    But those efforts have ignited a fierce public debate on what takes precedence: granting police new abilities to go after offenders or preserving privacy and protections against states’ and digital platforms’ mass online surveillance.

    The scale of the problem

    The Europol task force has met twice a year since 2014 to accelerate investigations to identify victims, most recently in November. It has almost tripled in size to 33 investigators representing 26 countries including Germany, Australia and the United States. 

    “You might recognize things that are in the images or you might recognize the sounds in the background or the voices. If you do that together with multiple nationalities in one room, it can be really effective,” said Marijn Schuurbiers, head of operations at Europol’s European Cybercrime Centre (EC3).

    Still, too often detectives feel like they’re swimming against the tide, as the amount of child sexual abuse material circulating online surges.

    Europol created a database in 2016 and this system now holds 85 million unique photos and videos of children, many found on pedophile forums on the “dark web” — the part of the internet that isn’t publicly searchable and requires special software to browse.

    “We can work hours and hours on end and we’re still scratching the surface. It’s terrifying,” said Mary, a national police officer from a non-EU country with 17 years of experience. She asked that her last name not be used, to protect her identity while doing investigative work.

    The task force in November went through 432 files, each containing tens of thousands of images, and found the most likely country for 285 of the children abused in the images. Police believe it likely identified 74 of the victims, three of whom were rescued by the time of publication. Two offenders were arrested. 

    “We have some successes. But all I can see is those we can’t help,” Mary said. 

    Many Western agencies outside the U.S. are restricted by privacy provisions in the software they can use, such as facial recognition tools. They often have to make do with a mix of manual analysis and freely accessible tools from the internet.

    “If you have like thousands or hundreds of thousands or even millions of pictures, it’s basically impossible to go manually through them, one by one,” said Schuurbiers. 

    Since 2017, the agency has regularly been asking for public help to identify objects in images like plastic bags and a logo on a school uniform. Europol said it has gotten 27,000 tips from internet sleuths including investigative outlet Bellingcat, some of which led to 23 kids being identified and five offenders being prosecuted.  

    Groups on the “dark web” remain the principal place where offenders share illegal content, according to Europol.

    But police and child protection hotlines are seeing a growing number of images cropping up on popular and accessible platforms like Facebook, Instagram and Snapchat. The pandemic made this worse as more children and teenagers joined social media and gaming websites, where offenders got better at grooming victims and blackmailing them into making sexual content.

    Law enforcement agencies around the world have also sounded the alarm that offenders are connecting with minors and exchanging illegal content on encrypted messaging apps like WhatsApp, Signal and iMessage, making the content extremely challenging to find. WhatsApp, for instance, scans users’ profile photos and descriptions but is unable to monitor their highly secure messages.

    Finding more child sexual abuse material

    The crisis of child sexual abuse material proliferating online has got governments pushing through sweeping new legislation to make it possible for law enforcement to investigate more online material and use artificial intelligence tools to help them. 

    The European Commission has proposed a law that could force tech companies like Meta, Apple and Google to scan messages and content stored in the cloud for images of abuse — and even for conversations of offenders seeking to manipulate minors upon a judge’s order. The companies would have to report the content, so it could end up with Europol or other national investigators, and then remove it.

    The United Kingdom recently passed the Online Safety Act, which some legal experts say would allow the country’s platform regulator Ofcom to force companies to break encryption to find sexual abuse. Government and Ofcom officials have said companies would not currently be forced to monitor content because tools to bypass encryption and also preserve privacy do not exist at the moment.

    Both plans have sparked widespread backlash among digital rights activists, tech experts and some lawyers. They fear the laws effectively force tech firms to ditch encryption, and that indiscriminate scanning will lead to mass surveillance.

    Negotiations on the EU draft law remain on thin ice, with politicians and member countries clashing over how far to go in hunting down potential illegal child abuse. And Brussels also finalized in December a new law, the Artificial Intelligence Act, governing how law enforcement will be able to use AI tools like facial recognition software to go through footage and images. 

    Still, EU lawmakers have already significantly expanded Europol’s powers to build new artificial intelligence tools and handle more data. Under the Digital Services Act, Europol and national police will also be able to swiftly compel tech companies to remove publicly accessible illegal content and hand over information about users posting such images.

    Anne, a Europol investigator, said she doesn’t keep count of the number of kids she’s identified in her 12 years working in the field — but she remembers them. She asked that her last name not be used, to protect her investigative work.

    “The thing that I will always remember from my cases is the images,” she said. “They stay in my head.”

    Clothilde Goujard

  • Israel floods social media to shape opinion around the war

    BRUSSELS — A photo of a bloodied dead baby, its face blurred, has been circulating on X for the last four days.

    “This is the most difficult image we’ve ever posted. As we are writing this we are shaking,” the accompanying message says. 

    The footage is not from a reporter covering the conflict in Israel and Gaza, or from one of the countless accounts sharing horrifying videos of the atrocities. 

    It’s a paid message from the Israeli Foreign Affairs Ministry.

    Since Hamas attacked thousands of Israeli citizens last week, the Israeli government has started a sweeping social media campaign in key Western countries to drum up support for its military response against the group. Part of its strategy: pushing dozens of ads containing brutal and emotional imagery of the deadly militant violence in Israel across platforms such as X and YouTube, according to data reviewed by POLITICO.

    Israel’s attempt to win the online information war is part of a growing trend of governments around the world moving aggressively online in order to shape their image, especially during times of crisis. PR campaigns in and around wars are nothing new. But paying for online advertising targeted at specific countries and demographics is now one of governments’ main tools to get their messages in front of more eyeballs. 

    The Israeli government’s efforts come as Hamas has pumped out its own propaganda on platforms including Telegram and X. The group — which is designated as a terrorist organization by the European Union, United States and United Kingdom — on Monday published online a first hostage video of a young French-Israeli woman.

    The social media campaigns began shortly after Hamas militants killed more than 1,200 and abducted nearly 200 people in a surprise assault. Israel’s military responded with retaliatory strikes and a siege of the Gaza Strip, killing more than 2,330 Palestinians to date. 

    More than 2 million Palestinians trapped in Gaza have been subjected to worsening conditions ahead of an expected upcoming offensive, and Western leaders are increasingly calling on the Israeli government to exercise restraint and respect humanitarian law. 

    A barrage of ads

    In a little over a week, Israel’s Foreign Affairs Ministry has run 30 ads that have been seen over 4 million times on X, according to the platform’s data. The paid videos and photos that started appearing on October 12 were aimed at adults over 25 in Brussels, Paris, Munich and The Hague, according to the same data. 

    The ads portrayed Hamas as a “vicious terrorist group,” similar to the Islamic State, and showed the scale and types of the abuse, including gruesome images like that of a lifeless, naked woman in a pickup truck. Another paid video posted to X, with text alternating between “ISIS” and “Hamas,” has disturbing imagery that gradually speeds up until the names of the two terrorist organizations blend into one. 

    “The world defeated ISIS. The world will defeat Hamas,” the ad ends.  

    Over on YouTube, the Israeli Foreign Affairs Ministry has released over 75 different ads, including some that are particularly graphic. They have been directed at viewers in Western countries — including France, Germany, the U.S. and the U.K. — and have aired between the initial Hamas attack on October 7 and Monday, according to Google’s transparency database. 

    “We would never post such graphic things before,” said a spokesperson for Israel’s Mission to the EU, who was granted anonymity because of security concerns to speak candidly. “This is something that is not part of our culture. We have a lot of respect [for] the deceased,” they said, adding that “war is not only on the ground.”

    In one ad, titled “Babies Can’t Read The Text in This Video But Their Parents Can,” a lullaby plays against a backdrop of a rainbow and a unicorn flies across the screen. The ad says, “We know that your child cannot read this,” but pleads with parents to sympathize with those whose children were killed during the attack on Israel.

    Another ad notes that “Israel will take every measure necessary to protect our citizens against these barbaric terrorists.” Yet another shows images of bloodied hostages with their faces blurred. 

    Israel has largely targeted Europe with its narrative to win over support. Nearly 50 video ads in English were directed to EU countries, while viewers in the U.S. and the U.K. were pushed 10 and 13 ads, respectively. One of the videos had been seen over 3 million times as of Tuesday afternoon European time.

    Platforms’ ongoing content challenge

    The ad campaign has posed some challenges for social media companies, which have set standards for what type of content can be posted on their platforms.

    Google, for example, removed about 30 ads containing violent images from its public library after POLITICO reached out for a comment on Monday — meaning there is no public record that such ads ran for several days on YouTube. The company said it didn’t allow ads containing violent language, gruesome or disgusting imagery, or graphic images or accounts of physical trauma. (Some of the graphic videos are still available on the Israeli Foreign Affairs Ministry’s YouTube channel with some warnings.)

    X did not respond to a request for comment. The tech company is currently being investigated by the European Commission over whether its handling of illegal content and disinformation connected to the Hamas attack has respected the EU’s content-moderation law, the Digital Services Act (DSA). 

    Under the DSA, companies have to swiftly remove illegal content, including terrorist propaganda, and limit the spread of falsehoods — or else face sweeping fines of up to 6 percent of their global annual revenue. 

    No similar ads were running on Meta’s Instagram and Facebook, LinkedIn and TikTok, according to the platforms’ public ad libraries as of Monday. 

    Some of the ads have been met with pushback from viewers who have sought ways to stop being targeted by the foreign ministry. But experts in the field say this is simply the new reality of PR campaigns built around wars.

    “This tactic is almost as old as war … Stirring moral outrage to build support for war is a very old practice,” said Emerson Brooking, a senior fellow at the Atlantic Council. “But I do not think it has collided with social media in quite this way before.”

    Still, amid an onslaught of disinformation and illegal content connected to the attacks, Israel’s online push may prove more complicated. The European commissioner in charge of enforcing the DSA, Thierry Breton, has warned some online platforms to step up their efforts to protect young viewers from harmful content. The EU also reminded Google’s CEO Sundar Pichai last week to be “very vigilant” to ensure that YouTube respects the DSA. 

    As Israel amps up its war online, its army’s retaliatory airstrikes have damaged Gaza’s telecommunications infrastructure, leaving millions on the verge of a total network blackout. 

    “It is difficult to imagine a robust counter-messaging effort by pro-Palestinian groups which could make use of the same advertising medium,” Brooking said. “It’s one part of the social media battlefield in which Israel has a real advantage.”

    Hailey Fuchs contributed reporting from Washington. Liv Martin and Clothilde Goujard contributed reporting from Brussels.

    Liv Martin, Clothilde Goujard and Hailey Fuchs

  • Hamas hate videos make Elon Musk Europe’s digital enemy No. 1

    Elon Musk has made himself Europe’s digital public enemy No. 1.

    Since Hamas attacked Israel on Saturday, the billionaire’s social network X has been flooded with gruesome images, politically motivated lies and terrorist propaganda that authorities say appear to violate both its own policies and the European Union’s new social media law.

    Now Musk is facing the threat of sanctions — including potentially hefty fines — as officials in Brussels start gathering evidence in preparation for a formal investigation into whether X has broken the European Union’s rules. Authorities in the U.K. and Germany have joined the criticism.

    The tussle represents a critical test for all sides. Musk will be keen to fight any claim that he’s failing to be a responsible owner of the social network formerly known as Twitter — all while upholding his commitment to free speech. The EU will want to show its new regulation, known as the Digital Services Act (DSA), has teeth.

    Thierry Breton, Europe’s commissioner in charge of social media content rules, demanded that Musk explain why graphic images and disinformation about the Middle East crisis were widespread on X.

    “I urge you to ensure a prompt, accurate and complete response to this request within the next 24 hours,” Breton wrote on X late Tuesday.

    “We will include your answer in our assessment file on your compliance with the DSA,” said Breton, who also wrote to Meta’s Mark Zuckerberg to remind him of his obligations under Europe’s rules. TikTok’s head Shou Zi Chew was also asked on October 12 to explain how his platform was dealing with misinformation and graphic content.

    “I remind you that following the opening of a potential investigation and a finding of non-compliance, penalties can be imposed,” Breton said. Those fines can total up to 6 percent of a company’s global revenue.

    In response, Linda Yaccarino, X’s chief executive, wrote to Breton Thursday to outline how the social media giant had responded to the ongoing Middle East conflict. That included removing or labelling potentially harmful content, working with law enforcement agencies and adding so-called “community notes,” or crowd-sourced fact-checks, to posts.

    The heat on Twitter did not begin with the Hamas attacks. Ever since Musk bought the platform, he’s been hit by criticism that he’s failing to stop hate speech from spreading online.

    X has cut back on its content moderation teams, in the spirit of promoting free speech; pulled out of a Brussels-backed pledge to tackle digital foreign interference; and tweaked its social media algorithms to promote often shady content over verified material from news organizations and politicians.

    Musk has responded — via his social media account with 159 million followers — with jeers and attacks on his naysayers. But the latest uproar over content apparently inciting and praising terrorism has made it a surefire bet that X will be one of the first companies to be investigated under the EU’s social media rules.

    In response to Breton’s demand, Musk asked the French commissioner to outline how X had potentially violated Europe’s content regulations. “Our policy is that everything is open source and transparent,” he added. In the U.K., Michelle Donelan, the country’s digital minister, also met with social media executives Wednesday to discuss how their firms were combatting online hate speech.

    The probe is coming

    In truth, an investigation into X’s compliance with Europe’s new content rulebook has been on the cards for months. Over the summer, Breton and senior EU officials visited the company’s headquarters in San Francisco for a so-called “stress test” to see how it was complying.

    Under the EU’s legislation, tech giants like X, TikTok and Facebook must carry out lengthy risk assessments to figure out how hate speech and other illegal content can spread on their platforms. These firms must also allow greater access to external auditors, regulators and civil society groups that will track how social media companies are complying with the new oversight.

    Investigations into potential wrongdoing under Europe’s content rules will likely involve months-long inquiries into a company’s behavior, the Commission taking a legal decision on whether to levy fines or other sanctions, and a likely appeal from the firm in response. Such cases are expected to take years to complete.

    Within Brussels, the Commission has been compiling evidence of potential wrongdoing across multiple social media companies, even before the EU’s new content legislation came into full force in August, according to five officials and other individuals with direct knowledge of the matter.

    The goal is to start at least three investigations linked to the Digital Services Act by early next year, according to three of those people. They spoke on condition of anonymity because the discussions are not public and remain ongoing.

    In recent days, Commission officials have been compiling evidence associated with Hamas’ attacks on Israel — much of which has been shared on X with little, if any, pushback from the company.

    That content included verified X accounts with ties to Russia and Iran reposting graphic footage of alleged atrocities targeting Israeli soldiers. Some of these posts have been viewed hundreds of thousands of times. Other accounts linked to Hezbollah and ISIS have similarly posted widely with few, if any, removals.

    It is unclear whether such footage will lead to a specific investigation into X’s handling of the most recent violent content. But it has reaffirmed the likelihood Musk will soon face legal consequences for not removing such material from his social network.

    Combating violent and terrorist content requires “people sitting at a computer screen and looking at this and making judgments,” said Graham Brookie, senior director of the Atlantic Council’s Digital Forensic Research Lab, which has tracked the online footprint of Hamas’ ongoing attacks. “It used to be that there were dozens of people that do that at Twitter, and now there’s only a handful.”

    Steven Overly contributed reporting from Washington.

    Mark Scott

  • The EU wants to cure your teen’s smartphone addiction 

    Glazed eyes. One syllable responses. The steady tinkle of beeps and buzzes coming out of a smartphone’s speakers. 

    It’s a familiar scene for parents around the world as they battle with their kids’ internet use. Just ask Věra Jourová: When her 10-year-old grandson is in front of a screen “nothing around him exists any longer, not even the granny,” the transparency commissioner told a European Parliament event in June.

    Countries are now taking the first steps to rein in excessive — and potentially harmful — use of big social media platforms like Facebook, Instagram, and TikTok.

    China wants to limit screen time to 40 minutes for children aged under eight, while the U.S. state of Utah has imposed a digital curfew for minors and requires parental consent for social media use. France has targeted manufacturers, requiring them to install a parental control system that can be activated when a device is turned on.

    The EU has its own sweeping plans. It’s taking bold steps with its Digital Services Act (DSA) that, from the end of this month, will force the biggest online platforms — TikTok, Facebook, YouTube — to open up their systems to scrutiny by the European Commission and prove that they’re doing their best to make sure their products aren’t harming kids.

    The penalty for non-compliance? A hefty fine of up to 6 percent of companies’ global annual revenue.

    Screen-sick 

    The exact link between social media use and teen mental health is debated. 

    These digital giants make their money from catching your attention and holding on to it as long as possible, raking in advertisers’ dollars in the process. And they’re pros at it: endless scrolling, combined with the periodic but unpredictable feedback of likes and notifications, doles out hits of stimulation that mimic the effect of slot machines on our brains’ wiring.

    It’s a craving that’s hard enough for adults to manage (just ask a journalist). The worry is that for vulnerable young people, that pull comes with very real, and negative, consequences: anxiety, depression, body image issues, and poor concentration. 

    Large mental health surveys in the U.S. — where the data is most abundant — have found a noticeable increase over the last 15 years in adolescent unhappiness, a tendency that continued through the pandemic.

    These increases cut across a number of measures: suicidal thoughts, depression, but also more mundanely, difficulties sleeping. This trend is most pronounced among teenage girls. 

    At the same time smartphone use has exploded, with more people getting one at a younger age. Social media use, measured as the number of times a given platform is accessed per day, is also way up. 

    There are some big caveats. The trend is most visible in the Anglophone world, although it’s also observable elsewhere in Europe. And there’s a whole range of confounding factors. Waning stigma around mental health might mean that young people are more comfortable describing what they’re going through in surveys. Changing political and socio-economic factors, as well as worries about climate change, almost certainly play a role. 

    Researchers on all sides of the debate agree that technology factors into it, but also that it doesn’t fully explain the trend. They diverge on where to put the emphasis. 

    Luca Braghieri, an assistant professor of economics at Bocconi University in Italy, said he originally thought concerns over Facebook were overblown, but he changed his mind after starting to research the topic (and has since deleted his Facebook account).

    Braghieri and his colleagues combed through U.S. college mental health surveys from 2004 to 2006, the period when Facebook was first rolled out in U.S. colleges, before it was available to the general public. He found that in colleges where Facebook was introduced, students’ mental health dipped in a way not seen in universities where it hadn’t yet launched.

    Braghieri said the comparison with colleges where Facebook hadn’t yet arrived allowed the researchers to rule out unidentified other variables that might have been simultaneous. 

    Elia Abi-Jaoude, a psychiatrist and academic at the University of Toronto, said he observed the effect first-hand when working at a child and adolescent psychiatric in-patient unit starting in 2015.

    “I was basically on the front lines, witnessing the dramatic rise in struggles among adolescents,” said Abi-Jaoude, who has also published research on the topic. He noticed “all sorts of affective complaints, depression, anxiety — but for them to make it to the inpatient setting — we’re talking suicidality. And it was very striking to see.”  

    His biggest concern? Sleep deprivation — and the mood swings and worse school performance that accompany it. “I think a lot of our population is chronically sleep deprived,” said Abi-Jaoude, pointing the finger at smartphones and social media use.

    The flipside    

    New technologies have gotten caught up in panics before. Looking back, they now seem quaint, even funny.   

    “In the 1940s, there were concerns about radio addiction and children. In the 1960s it was television addiction. Now we have phone addiction. So I think the question is: Is now different? And if so, how?” asks Amy Orben, from the U.K. Medical Research Council’s Cognition and Brain Sciences Unit at the University of Cambridge.  

    She doesn’t dismiss the possible harms of social media, but she argues for a nuanced approach. That means honing in on the specific people who are most vulnerable, and the specific platforms and features that might be most risky. 

    Another major ask: more data.  

    There’s a “real disconnect” between the general belief and the actual evidence that social media use is harmful, said Orben, who went on to praise the EU’s new rules. Among their various provisions, the rules will allow researchers for the first time to get their hands on data usually buried deep inside company servers.

    Orben said that while much attention has gone into the negative effects of digital media use at the expense of positive examples, research she conducted into adolescent well-being during pandemic lockdowns, for example, showed that teens with access to laptops were happier than those without. 

    But when it comes to risk of harm to kids, Europe has taken a precautionary approach.

    “Not all kids will experience harm due to these risks from smartphones and social media use,” Patti Valkenburg, head of the Center for Research on Children, Adolescents and the Media at the University of Amsterdam, told a Commission event in June. “But for minors, we need to adopt the precautionary principle. The fact that harm can be caused should be enough to justify measures to prevent or mitigate potential risk.”

    Parental controls  

    Faced with mounting pressure in the past years, platforms like Instagram, YouTube and TikTok have introduced various tools to assuage concerns, including parental controls. Since 2021, YouTube and Instagram have sent teenagers using their platforms reminders to take breaks. TikTok in March announced that minors have to enter a passcode after an hour on the app to continue watching videos.

    But the social media companies will soon have to go further.  

    By the end of August, very large online platforms with over 45 million users in the European Union — including companies like Instagram, Snapchat, TikTok, Pinterest and YouTube — will have to comply with the longest list of rules. 

    They will have to hand the Digital Services Act watchdog — the European Commission — their first yearly assessment of the major impact of their design, algorithms, advertising and terms of service on a range of societal issues such as the protection of minors and mental wellbeing. They will then have to propose and implement concrete measures under the scrutiny of an audit company, the Commission and vetted researchers.

    Measures could include ensuring that algorithms don’t recommend videos about dieting to teenage girls or turning off autoplay by default so that minors don’t stay hooked watching content.

    Platforms will also be banned from tracking kids’ online activity to show them personalized advertisements. Manipulative designs such as never-ending timelines to glue users to platforms have been connected to addictive behavior, and will be off limits for tech companies. 

    Brussels is also working with tech companies, industry associations and children’s groups on rules for how to design platforms in a way that protects minors. The Code of Conduct on Age Appropriate Design planned for 2024 would then provide an explicit list of measures that the European Commission wants to see large social media companies carry out to comply with the new law.

    Yet the EU’s new content law won’t be the magic wand parents might be looking for. The content rulebook doesn’t apply to popular entertainment like online games, messaging apps or the digital devices themselves.

    It remains unclear how the European Commission will investigate and go after social media companies if it considers that they have failed to limit their platforms’ negative consequences for mental well-being. External auditors and researchers could also face obstacles wading through troves of data and lines of code to find smoking guns and challenge tech companies’ claims.

    How much companies are willing to run up against their business model in the service of their users’ mental health is also an open question, said John Albert, a policy expert at the tech-focused advocacy group AlgorithmWatch. Tech giants have made a serious effort at fighting the most egregious abuses, like cyber-bullying, or eating disorders, Albert said. And the level of transparency made possible by the new rules was unprecedented.

    “But when it comes to much broader questions about mental health and how these algorithmic recommender systems interact with users and affect them over time… I don’t know what we should expect them to change,” he explained. The back-and-forth vetting process is likely going to be drawn out as the Commission comes to grips with the complex platforms.

    “In the short term, at least, I would expect some kind of business as usual.”

    Carlo Martuscelli and Clothilde Goujard

  • EU to Zuckerberg: Explain yourself over Instagram pedophile network

    EU Internal Market Commissioner Thierry Breton wants Meta CEO Mark Zuckerberg to explain and take “immediate” action over a recently exposed large pedophile network on Instagram.

    Instagram has been letting a vast network of accounts promoting and purchasing child sexual abuse material flourish on its platform, according to investigations by the Wall Street Journal and researchers released on June 7. The social media platform lets users search for explicit hashtags, and offenders have exploited its recommendation algorithms to promote illicit content.

    “Meta’s voluntary code on child protection seems not to work,” Breton wrote Thursday on Twitter. “Mark Zuckerberg must now explain & take immediate action.”

    Breton said he will discuss the issue with Zuckerberg at the Meta headquarters on June 23 during a trip to the U.S. The politician will travel later this month to see how social media companies including Twitter are preparing to comply with the EU’s flagship content moderation law, the Digital Services Act (DSA).

    He said Meta will have to “demonstrate measures” to the European Commission after August 25, when the DSA starts applying to Big Tech platforms. Otherwise, the company could face sweeping fines of up to 6 percent of its global annual revenue. Under the DSA, platforms have to crack down on illegal content and ensure children are safe on their services. Companies also have to assess and limit how their platforms and algorithms contribute to major societal problems such as the dissemination of illegal content and threats to the safety of minors.

    A Meta spokesperson said the company has set up an internal task force to investigate and “immediately address” the recent findings from the Wall Street Journal and researchers.

    The company works “aggressively to fight” child exploitation and to support law enforcement in tracking down criminals, the spokesperson said. Meta dismantled 27 “abusive networks” between 2020 and 2022 and, in January 2023 alone, disabled over 490,000 accounts for violating its child safety policies, they added.

    Clothilde Goujard

  • UK locks horns with WhatsApp over threat to break encryption

    LONDON — Britain’s tough new plan to police the internet has left politicians in a stand-off with WhatsApp and other popular encrypted messaging services. De-escalating that row will be easier said than done.

    The Online Safety Bill, the United Kingdom’s landmark effort to regulate social media giants, gives regulator Ofcom the power to require tech companies to identify child sex abuse material in private messages.

    But the proposals have prompted Will Cathcart, boss of the Meta-owned messaging app, whose encrypted service is widely used in Westminster’s own corridors of power, to say it would rather be blocked in the U.K. than compromise on privacy.

    “The core of what we do is a private messaging service for billions of people around the world,” Cathcart told POLITICO in March when he jetted in to London to lobby ministers over the upcoming bill. “When the U.K., a liberal democracy, says, ‘Oh, it is okay to scan everyone’s private communication for illegal content,’ that emboldens countries around the world that have very different definitions of illegal content to propose the same thing,” he added.

    WhatsApp’s smaller rival, Signal, has also said it could stop providing services in the U.K. if the bill requires it to scan messages — echoing claims from the tech industry that date back more than a decade that they can’t create backdoors in encrypted digital services, even to protect kids online, because to do so opens the products up to vulnerabilities from bad actors, including foreign governments.

    “We can’t just let thousands of pedophiles get away with it. That wouldn’t be responsible or proportionate for a government to do,” Science and Technology Secretary Michelle Donelan told POLITICO in February.

    Ministers are keen to lower the temperature. But doing so will prove challenging, two former ministers told POLITICO on the condition of anonymity, given the likelihood of pushback from MPs, the complexity of the technology and the emotiveness of the issue.

    Easier said than done

    Finding a compromise is unlikely to be easy — and the row mirrors similar debates that are underway in the European Union and Australia over just how accountable tech platforms should be for potentially harmful content on encrypted services. 

    The debate over whether the requirements of the bill can be met while protecting privacy centers around “client-side scanning.” 

    While leaders at Britain’s National Cyber Security Centre and security agency GCHQ said last July they believe such technology can simultaneously protect children and privacy, other experts dispute their findings.

    A raft of cryptographers criticized the technique in a 2021 report called Bugs in Our Pockets, prompting tech giant Apple to abandon plans to introduce client-side scanning on its services. In Australia, the country’s eSafety Commissioner recently published a report highlighting how the likes of Microsoft and Apple had few, if any, mechanisms to track child sexual abuse material, including via their encrypted services.

    “This is not only companies really taking a blind eye to live crime scenes happening on their platforms, but they’re also failing to properly harden their systems and storage against abuse,” Australian eSafety Commissioner Julie Inman Grant told POLITICO. “It’s akin to leaving a home open to an intruder. Once that bad actor is inside the house, good luck getting them out.”

    Hacking risk

    Cybersecurity experts agree the U.K. bill’s demands are incompatible with a desire to protect encryption. They claim that privacy is not a fungible issue — services either have it or they don’t. And they warn that politicians should be wary of undermining such protections in ways that would make people’s online experiences potentially open to abuse or hacking.

    “In essence, end-to-end encryption involves not having a door, or if you want to use a postal analogy, not having a sorting office for the state to search. Client-side scanning, despite the claims of its proponents, does seem to involve some kind of level of access, some kind of ability to sort and scan, and therefore there’s no way of confining that to good use by lawful credible authorities and liberal democracies,” said Ciaran Martin, the former chief executive of the government’s National Cyber Security Centre.

    Ministers insist that they support strong encryption and privacy, but say it cannot come at the cost of public safety. 

    Tech companies should be researching technology to identify child sex abuse before messages are encrypted, Donelan said. But the government also appears to be searching for a way to cool the row, and Donelan insisted the measure would be a “last resort.”

    “That element of the bill is like a safety mechanism that can be enacted, should it ever be needed to. It might never be needed because there might be other solutions in place,” she said.

    One official in the Department for Science, Innovation and Technology (DSIT), not authorized to speak on the record but familiar with government discussions, said DSIT wanted to find a way through and is having talks “with anyone that wants to discuss this with us.”

    Melanie Dawes, Ofcom’s chief executive, told POLITICO that any efforts to break encryption in the name of safety would have to meet stringent rules, and such requests would be made in only the most extreme situations. 

    “There’s a high bar for Ofcom to be able to require the use of a technology in order to secure safety,” she said.

    Lords debate

    Peers in the unelected House of Lords, the U.K. parliament’s revising chamber, waded into the issue Thursday.

    Richard Allan, a Lib Dem peer who was Facebook’s chief lobbyist in Europe until 2019, led the charge, saying tech companies will feel they’re “unable to offer their products in the UK under the bill.” He said undermining encryption opened the doors to hostile states and accused the government of playing a “high stakes game of chicken” with tech companies.

    But Beeban Kidron, a crossbench peer who has been leading much of the work in the Lords around child safety, said although she had some sympathy for Allan’s arguments, Big Tech companies had to do more to protect users’ privacy themselves.

    Wilf Stevenson, who is managing Labour’s response to the bill in the Lords, said he was not convinced the government’s plans were “right for the present day, let alone the future.” He added that under the bill “Ofcom is expected to be both gamekeeper and poacher,” with power to regulate tech companies and inspect private messages.

    But Stephen Parkinson, who is guiding the bill through the Lords on behalf of the government, defended the legislation. “The bill contains strong safeguards for privacy,” he said, echoing Donelan’s statement that powers to inspect messages were a “last resort” designed to be used only in cases of suspected terrorism and child sexual exploitation.

    Convincing ministers

    Messaging services including Signal and WhatsApp are hoping for a ministerial climbdown — but few see one coming.

    There is little prospect of large swathes of MPs, who will have the final say on the bill, riding to their rescue, according to two former ministers who have worked on the legislation. 

    “People are scared if they go in and fight over this, even for very genuine reasons, it could be very easily portrayed that they’re trying to block protecting kids,” one former Cabinet minister, a party loyalist, who worked on an earlier draft of the bill, said. 

    The second former minister said MPs “haven’t engaged with it terribly much on a very practical level” because it is “really hard.” 

    “Tech companies have made significant efforts to frame this issue in the false binary that any legislation that impacts private messaging will damage end-to-end encryption and will mean that encryption will not work or is broken. That argument is completely false,” opposition Labour frontbencher Alex Davies-Jones said in a debate last June. 

    The widespread leaking of MPs’ WhatsApp messages has also undermined perceptions of the platform’s privacy credentials, the former Cabinet minister quoted above suggested. 

    “If you are sharing stuff on WhatsApp with people that’s inappropriate, there’s a good chance it’s going to end up in the public domain anyway. The encryption doesn’t stop that because somebody screenshots it and copies it and sends it on,” they lamented. 

    WhatsApp does have one ally in the former Brexit secretary and long-time civil liberties campaigner David Davis, though.

    “Right across the board there are a whole series of weaknesses the government hasn’t taken on board,” he told POLITICO of the bill.

    And on WhatsApp and Signal’s threats to leave the U.K., Davis thinks a point could be made.

    “Well, I sort of hope they do. The truth is their model depends on complete privacy,” he said.

    Update: This article has been updated to include comments from the latest House of Lords debate on the Online Safety Bill.

    [ad_2]

    Annabelle Dickson, Mark Scott and Tom Bristow

    Source link

  • France aims to protect kids from parents oversharing pics online

    [ad_1]

    PARIS — French parents had better think twice before posting too many pictures of their offspring on social media.

    On Tuesday, members of the National Assembly’s law committee unanimously green-lit draft legislation to protect children’s rights to their own images.

    “The message to parents is that their job is to protect their children’s privacy,” Bruno Studer, an MP from President Emmanuel Macron’s party who put the bill forward, said in an interview. “On average, children have 1,300 photos of themselves circulating on social media platforms before the age of 13, before they are even allowed to have an account,” he added.

    The French president and his wife Brigitte have made child protection online a political priority. Lawmakers are also working on age-verification requirements for social media and rules to limit kids’ screen time.

    Studer, who was first elected in 2017, has made a career out of child safety online. In the past few years, he authored two groundbreaking pieces of legislation: one requiring smartphone and tablet manufacturers to give parents the option to control their children’s internet access, and another introducing legal protections for YouTube child stars.

    So-called sharenting (combining “sharing” and “parenting,” referring to posting sensitive pictures of one’s kids online) constitutes one of the main risks to children’s privacy, according to the bill’s explanatory statement. Half of the pictures shared by child sexual abusers were initially posted by parents on social media, according to reports by the National Center for Missing and Exploited Children cited in the text.

    The legislation adopted on Tuesday adds protecting children’s privacy to parents’ legal duties. Both parents would be jointly responsible for their offspring’s image rights and “shall involve the child … according to his or her age and degree of maturity.”

    In case of disagreement between parents, a judge can ban one of them from posting or sharing a child’s pictures without authorization from the other. And in the most extreme cases, parents can lose their parental authority over their kids’ image rights “if the dissemination of the child’s image by both parents seriously affects the child’s dignity or moral integrity.”

    The bill still needs to go through a plenary session next week and the Senate before it can become law.

    [ad_2]

    Laura Kayali

    Source link

  • UK takes fresh stab at internet rules as EU framework surges ahead

    [ad_1]

    LONDON — The United Kingdom wants to police the internet. Shame the European Union got there first. 

    Brexit was supposed to let Britain do things quicker. But less than a month after the 27-member bloc’s Digital Services Act (DSA) came into force, London is still struggling to cobble together its own version of the rulebook, known as the Online Safety Bill. 

    On Monday it tried again, with Britain’s Digital Secretary Michelle Donelan presenting a tweaked bill to parliament. It got the backing of MPs, but faces fresh committee scrutiny before heading to the House of Lords. And the path to a settled law still looks far from certain. 

    The bill, which seeks to make Britain “the safest place in the world to be online,” has not only been a casualty of the country’s political instability — it has also proved a divisive issue for the country’s governing Conservative Party, where a vocal minority of backbenchers still view it as an unnecessary limit to free speech.

    “Far from being world-leading, the government has been beaten to the punch in regulating online spaces by numerous jurisdictions, including Canada, Australia and the EU,” said Lucy Powell, the opposition Labour Party’s shadow digital secretary.

    Powell said the latest version of the Online Safety Bill was also at risk of getting stuck due to “chaos in government and vested interests,” adding that it was imperative the bill pass through the legislature by April, when the current parliamentary session ends. 

    Much of the disagreement over the bill has centered on rules policing so-called legal-but-harmful content. That’s been largely dropped from the latest version of the planned law, after Prime Minister Rishi Sunak’s government bowed to pressure from right-wing MPs within his own party, who argued that the provisions threatened free speech.

    In the previous iteration of the bill, Ofcom, the country’s telecommunications and media regulator, was on the hook for enforcing rules that required social media giants to take action against potentially harmful but technically legal material like the promotion of self-harm.

    The government’s scrapping of legal-but-harmful content hasn’t been universally welcomed, however. Nadine Dorries, Donelan’s predecessor as digital secretary, proposed the provisions and has griped that they’d already passed parliamentary scrutiny before the bill was paused. 

    Long and winding road

    Britain’s attempts to regulate the internet really got going under Theresa May, who became prime minister in the wake of Britain’s vote to leave the European Union, as lawmakers were becoming more skeptical of the tech industry.

    The Tories’ May 2017 election manifesto promised that “online rules should reflect those that govern our lives offline,” but by the time Boris Johnson published his 2019 election offering, the Conservatives were also promising to protect the most vulnerable from accessing harmful content. Under Johnson’s close ally Dorries, a version of the legislation tackling legal-but-harmful content started to make its way through Parliament, before it was put on pause after he was ousted by Tory MPs.

    Johnson, the former prime minister, often seemed caught between his own personal free speech philosophy and his populist instincts of attacking Big Tech.

    The summer Tory leadership contest to replace Johnson reignited the debate, with contenders promising to look again at the law before the legal-but-harmful content provisions were ultimately watered down. Donelan replaced Dorries, becoming the seventh culture secretary since Brexit.

    The EU’s path to its online rulebook has been quicker. In part that’s because questions over free speech haven’t yet become the political touchpaper that they now are in the Anglosphere. Nevertheless, the EU mostly side-stepped the issue by keeping its own rulebook squarely aimed at purely illegal content, and the European Commission has made it clear in public that it does not want to create a so-called “Ministry of Truth.” 

    That means the EU hasn’t had to contend with the deep divisions the Online Safety Bill has prompted in the U.K., especially among the governing Tories.

    Instead, Brussels’ institutions have been mainly aligned on the key aspects of its framework, the DSA. The European Parliament and Council of the EU — representing the 27 European governments — largely supported the European Commission’s cautious approach to create rules to crack down on public-facing content illegal under EU or national laws like child sexual abuse material or terrorist propaganda. 

    When it comes to legal-but-harmful content, the EU’s approach requires very large online platforms — those with more than 45 million European users — to assess and limit the spread of content like disinformation and cyberbullying under the watch of regulators. Europe’s rules have also gone further than those on the other side of the Channel by mandating risk assessments and audits for tech giants like Meta and Alphabet so that they can be held accountable for potential wrongdoing. In the U.K., the main enforcement has been left to Ofcom via investigations. 

    Disagreements, when they came in Europe, have been on the edges, rather than at the core of the debate. Rows focused on limits to targeted ads and the level of obligations for online marketplaces like Amazon to carry out random checks on dangerous products on their platforms. In another example, some EU countries like France and Germany pushed and failed to force a 24-hour deadline for online platforms to take down illegal content. 

    Not just free speech

    In the U.K., it’s not just free speech issues that have proved controversial. The EU set out separate rules aiming to clamp down on child sexual abuse material online, but the U.K. poured similar provisions into the Online Safety Bill.

    That means high-stakes questions over how and whether the monitoring requirements undermine privacy — especially in encrypted messaging apps like WhatsApp — are being dealt with separately in the EU. But in the U.K. they’ve been thrown into the same mix as wide-ranging free speech debates.

    Differences between the rulebooks also raise the prospect of costly regulatory misalignment. While the U.K. bill slaps general monitoring requirements on the tech companies themselves, that approach is explicitly banned by the EU. Last month, the British regulator and its Australian counterpart created a new Western coalition of online content regulators, but failed to invite any EU counterparts to those discussions. Only Ireland’s watchdog joined as an observer.

    “This is about setting up our international engagement in expectation of setting up our rules,” Melanie Dawes, Ofcom’s chief executive, told POLITICO when announcing that initiative. “The success of this is about bringing together international partners.”

    Clothilde Goujard reported from Brussels.

    [ad_2]

    Vincent Manancourt, Annabelle Dickson, Clothilde Goujard and Mark Scott

    Source link

  • Elon Musk gives Europe’s digital watchdogs their biggest test yet

    [ad_1]

    After Elon Musk bought Twitter — and fired almost anyone whose job it was to deal with regulators — the social networking giant is now facing a flood of legal challenges across the European Union.

    The question now is whether the EU’s watchdogs can live up to their ambitions to be the world’s digital policemen.

    Ireland’s privacy regulator wants to know whether the company’s data protection standards are good enough. The European Commission doesn’t know who to ask about its upcoming online content rules. The bloc’s cybersecurity agencies raise concerns about an increase in online trolls and potential security risks.

    Twitter’s unfolding turmoil is precisely the regulatory challenge that Brussels has said it wants to take on. The 27-country bloc has positioned itself — via a flurry of privacy, content and digital competition rules — as the de facto enforcer for the Western world, expanding its digital rulebook beyond the EU’s borders and urging other countries to follow its lead.

    Now, the world’s richest man is putting those enforcement powers to the test. 

    Europe’s regulators have the largest collective rulebook to throw at companies suspected of potential breaches. But a lack of willingness to act quickly — combined with the internal confusion engulfing Twitter — has so far hamstrung the bloc’s enforcement role when it comes to holding Musk to Europe’s standards, according to eight EU and national government officials, speaking privately to POLITICO. 

    “This will be a major test for European regulators,” said Rebekah Tromble, director of the Institute for Data, Democracy & Politics at George Washington University. She is part of the advisory board of the European Digital Media Observatory, a group helping to shape the EU’s online content rulebook, known as the Digital Services Act (DSA).

    “If Musk continues to act with intransigence, I think there’s an opportunity for European regulators to move much more quickly than normal,” she added. “These regulators will certainly be motivated to act.”

    A representative for Twitter did not return requests for comment.

    Regulatory firepower

    The bloc certainly has the firepower to bring Twitter to heel.

    Under the EU’s General Data Protection Regulation, companies can be fined up to 4 percent of their annual global revenue for failing to keep people’s personal information safe. The Irish regulator, which has responsibility for enforcing these rules against Twitter because the company’s EU headquarters are in Dublin, has already doled out a €450,000 penalty for the firm’s inability to keep data safe.

    As part of the bloc’s upcoming content rules, which will start to be enforced next year, the Commission will have powers to levy separate fines of up to 6 percent of a company’s yearly revenue if it does not take down illegal content. Brussels also has the right to ban a platform from operating in the EU after repeated serious violations.
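
    For a sense of what those percentages mean in practice, here is a back-of-the-envelope calculation. The 4 and 6 percent ceilings are the ones described above; the revenue figure is invented purely for illustration.

    ```python
    # Hypothetical annual global revenue, purely illustrative.
    annual_revenue_eur = 5_000_000_000  # €5 billion

    gdpr_ceiling = 0.04 * annual_revenue_eur  # up to 4% under the GDPR
    dsa_ceiling = 0.06 * annual_revenue_eur   # up to 6% under the DSA

    print(f"GDPR maximum fine: €{gdpr_ceiling:,.0f}")  # €200,000,000
    print(f"DSA maximum fine:  €{dsa_ceiling:,.0f}")   # €300,000,000
    ```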

    Thierry Breton, the European internal market commissioner, reminded Musk of Twitter’s obligations under the bloc’s upcoming content rules in a call with the billionaire soon after his acquisition of the social network. Musk pledged to uphold those rules, even as he has pushed back at other content moderation practices that could hamper people’s freedom of expression on the platform.

    “In Europe, the bird will fly by our rules,” Breton, the French commissioner, told Musk — via Twitter.

    Yet over the last three weeks, European regulators and policymakers have struggled to navigate Twitter’s internal turmoil, according to four EU and national officials who spoke on the condition of anonymity to discuss internal deliberations.

    The likes of Damien Kieran, Twitter’s chief privacy officer in charge of complying with Europe’s tough data protection standards, and Stephen Turner, the company’s chief lobbyist in Brussels, were among scores of senior officials who have left since Musk took over.

    Two of the EU officials, speaking about internal discussions on condition of anonymity, told POLITICO that multiple emails to Twitter executives bounced back after those individuals were laid off. One of those policymakers said he had taken to Twitter — scrolling through the scores of posts from the company’s employees announcing their departures — in search of information about who was still working there. A third official said the current confusion could prove problematic when the company had to reveal long-guarded information about the number of its EU users early next year. 

    Others have been fostering wider connections within the company, just in case. Arcom, France’s online platform regulator, for instance, has built ties with high-level executives outside of France and still had a contact in Dublin at the company to answer its pressing questions.

    The policymaking black holes — fueled by mass layoffs — have been felt beyond the EU. 

    Julie Inman Grant, Australia’s eSafety commissioner who previously ran Twitter’s public policy team in Asia, told POLITICO she had written to the company last week to remind them about its obligations to clamp down on child sexual exploitation on the platform. She had yet to hear back from Musk or other senior officials.

    “We did have a meeting on the books with Twitter,” Melanie Dawes, chief executive of Ofcom, the U.K.’s communications regulator, told POLITICO ahead of her trip to Silicon Valley this week to meet many of the social media companies. “It was canceled.”

    What about privacy?

    Another open question is how Twitter will comply with Europe’s tough privacy rules.

    Although the company’s chief privacy executive had been fired — and rumors swirled Twitter could pull out of Ireland in its cost-saving push — the Irish Data Protection Commission told POLITICO it had yet to open an investigation into the firm.

    A spokesman for the agency said Twitter executives had assured Irish regulators on Monday that Renato Monteiro had been appointed as the company’s acting data protection officer — because it’s a legal requirement to have one — and no changes to how Twitter handled data had been made.  

    A key unanswered question is whether, in the wake of the mass layoffs, Twitter’s operations in Dublin are either shuttered or cut back to an extent that regulatory decisions are made in California and not Ireland.

    Such a change would lead the company to fall foul of strict provisions within Europe’s privacy regime that require legal oversight of EU citizens’ data to be exercised from a firm’s headquarters within the 27-country bloc.

    A data protection official, who asked to remain anonymous to speak candidly, said it was likely that Musk would move such decision-making powers to his inner circle in the United States. That potential pullback could allow any European regulator — and not just the Irish agency — to go after Twitter for potential privacy violations under the bloc’s data protection regime, the official added.

    This story has been corrected to specify how multiple European privacy regulators may target Twitter for breaching the bloc’s rules if the company pulls out of Ireland.

    [ad_2]

    Mark Scott, Vincent Manancourt, Laura Kayali, Clothilde Goujard and Louis Westendarp

    Source link

  • Musk fires chief Brussels lobbyist in Twitter’s layoff round

    [ad_1]

    Twitter’s director for EU public policy Stephen Turner is among the thousands of employees laid off by its new owner Elon Musk, Turner announced on the platform Monday.

    “After six years I am officially retired from Twitter. From starting the office in Brussels to building an awesome team it has been an amazing ride. Privileged and honoured to have the best colleagues in the world, great partners, and never a dull moment. Onto the next adventure,” he tweeted.

    Since taking over Twitter, Musk has reportedly sacked half of the company’s workforce — including lobbyists and content moderators. The deep cuts in the policy teams have raised concern among regulators and politicians.

    On Monday morning, two members of Twitter’s six-person policy team in Brussels still had jobs, one person with first-hand knowledge of the issue told POLITICO.

    Turner spearheaded Twitter’s engagement and lobbying in Brussels at a time when the EU crafted a series of strict laws regulating privacy, content moderation, media freedom, online advertising and more.

    [ad_2]

    Laura Kayali

    Source link

  • Russia, China and Islamic State jump on Musk’s Twitter bandwagon

    [ad_1]

    Elon Musk has some new super fans: Russia, China and the Islamic State.  

    After the world’s richest man bought Twitter for $44 billion last month, officials and journalists linked to Russia and China — and even some jihadists — urged him to lift restrictions on their use of the platform. 

    So far, their pleas have fallen on deaf ears. But the repeated requests — including from high-profile figures like Maria Zakharova, the spokesperson for Russia’s foreign ministry — are part of efforts by these individuals to use Musk’s takeover as a chance to make a comeback on Twitter. 

    Right-wing extremist groups in the West have already heralded Musk’s ownership as a signal that they can post hate-filled and potentially illegal content online with little, or no, resistance. 

    Now, Russian and Chinese state-backed Twitter accounts have taken up the same free speech argument, demanding the platform reinstate them, remove labels that identify these accounts as linked to Beijing or Moscow, and allow them to post more freely, including on hot-button topics like the war in Ukraine. 

    “They are doing this to jump on the bandwagon now that the right-wing community are putting pressure on Musk,” said Felix Kartte, a senior adviser at Reset, a technology accountability lobbying group. “They are pushing it because everyone else is pushing Musk, too.”

    A representative for Twitter did not respond to a request for comment. The company has previously said its policies regarding online hate content have not changed since Musk’s takeover. 

    The pressure is a crucial early test of Musk’s willingness to police his new platform. Fears are already mounting that under his leadership, Twitter could be reshaped into a more toxic place for political debate, potentially even inciting a rise in violent extremism or foreign interference within Western democracies.

    The resurgence of interest from the state-backed and jihadist accounts comes as Twitter undergoes a fundamental shift under Musk. The South African-born billionaire laid off half of the company’s employees on Friday, including many in senior public policy and content moderation roles.

    After Vladimir Putin’s forces invaded Ukraine, the European Union imposed sanctions banning content from the likes of Russia’s RT and Sputnik, a move that forced Twitter to adopt its own restrictions, which it expanded beyond the borders of the 27-country bloc. Now senior figures at RT — and Kremlin officials — are demanding Musk lift those measures. 

    Margarita Simonyan, RT’s editor-in-chief, and other prominent RT journalists messaged Musk in the days before and after the acquisition to urge him to end the so-called shadow bans against their state-affiliated news organization. Those restrictions include RT’s content not appearing when people search on Twitter. 

    “Elon @elonmusk, since you’re all for free speech, maybe unban RT and Sputnik accounts and take the shadow ban off mine as well?” Simonyan wrote on Twitter.

    George Galloway, a former British politician who now hosts a show on RT, called on Musk to remove the “Russia state-affiliated media” label that had been placed on his account. 

    Chinese accounts also jumped on the bandwagon. While Beijing blocks Twitter for its domestic audience, the country’s officials and state media have repeatedly used the platform to spread propaganda and attack other users who criticize the Chinese Communist Party. 

    In August 2020, Twitter began labeling these accounts as state-affiliated, and since then, there has been a significant drop in engagement, including likes and shares, of those accounts, according to an analysis by the China Media Project, a research group at the University of Hong Kong.

    Ever since Musk bought Twitter, Chinese officials and state-backed journalists have been urging him to live by his free speech beliefs. He must “remove all those McCarthyist discriminatory” policies for Chinese accounts, according to a Twitter post from Chen Weihua, the European bureau chief of the state-run China Daily newspaper. 

    “Can you please free the warning to Chinese media to give us a better and pleasant experience? Thank you,” added Zhang Heqing, an official in the Chinese embassy in Pakistan in response to Musk when he said Twitter would become a bastion for free speech.

    It’s not just authoritarian governments. Islamic State supporters are also pushing to get back on the platform. 

    Within jihadist online communities, Musk’s takeover of Twitter has been welcomed as an opportunity to return. 

    Before 2015, Islamic State-related accounts had posted indiscriminately, including videos and images of beheadings and other acts of violence. Over the last seven years, Twitter’s content moderation tools have forced such activity underground. 

    Yet the number of Islamic State-affiliated accounts on Twitter has risen sharply in the days since Musk’s acquisition on October 27, compared with the preceding 11-day period. The activity includes jihadist-supporting accounts likening the global clampdown they face to Musk’s own statements that both the left and right of politics are attacking him. In the last week, Islamic State-related Twitter users have also held so-called Twitter Spaces, or online voice conversations, with at least one of the sessions called “The Islamic Caliphate is remaining and expanding.”

    Yoel Roth, Twitter’s head of safety and integrity, said the company’s policies toward hateful content and so-called online trolls have not changed since Musk’s takeover. Twitter’s “core moderation capabilities” have not been hampered by the recent layoffs, which saw about 15 percent of Twitter’s global trust and safety team fired, Roth added. 

    Not everyone is convinced. “Through the changing of the guard, it seems as if Islamic State accounts have gotten more brazen,” according to Moustafa Ayad, executive director for Africa, the Middle East and Asia at the Institute for Strategic Dialogue, a think tank that tracks online extremism. “If you make others feel like the group is back, it ultimately creates a sense of relief, or that it’s alright to post again as the Islamic State.”

    [ad_2]

    Shannon Van Sant and Mark Scott

    Source link