ReportWire

Tag: Data Protection

  • Cybersecurity Awareness Month highlights new threats | Long Island Business News


    In Brief:
    • Cybercrime losses surged to $16 billion in 2024, a one-third jump from the previous year, according to the FBI.
    • AI-driven phishing, deepfakes, and voice cloning are fueling new waves of cyberattacks against businesses.
    • Experts warn that supply-chain vulnerabilities and the rise of quantum computing pose long-term cybersecurity challenges.
    • Organizations are urged to adopt stronger governance, MFA, vendor oversight, and event logging for proactive defense.

    October is Cybersecurity Awareness Month. Established in 2004 by the U.S. Department of Homeland Security (DHS) and the National Cyber Security Alliance (NCSA), the initiative aims to educate the public and businesses about cyber threats and equip them with the knowledge and tools needed to stay secure.

    The 21st Annual Cybersecurity Awareness Month comes at a particularly crucial time. First and foremost, cybercrime is on the rise. In fact, the Federal Bureau of Investigation (FBI) reported that cybercrime costs rose to $16 billion in 2024—a one-third increase from 2023.

    Furthermore, the Cybersecurity and Infrastructure Security Agency recently furloughed the majority of its already-downsized staff at the start of the ongoing government shutdown. Many fear this will leave Americans more vulnerable to escalating cyber threats. Additionally, the 2015 Cybersecurity Information Sharing Act expired at the start of the month, raising concerns about diminished collaboration between the public and private sectors.

    As a result, the need for organizations to remain vigilant and informed about cybersecurity risks is greater than ever. Among the top threats businesses should be aware of are:

     

    AI-driven attacks

    While artificial intelligence (AI) has improved efficiency and productivity for many, it has also introduced new risks related to privacy and information security. However, businesses aren’t the only ones using AI. Cybercriminals are, too.

    According to a 2025 KnowBe4 report, more than 80% of phishing emails analyzed showed evidence of AI usage. AI is also behind increasingly convincing deepfakes, which led to one company losing $25 million after an employee was tricked into sending funds to fraudsters posing as the CFO. Similarly, AI-powered voice cloning is on the rise, forcing 91% of surveyed banks to reconsider their voice authentication systems.

     

    Supply-chain attacks

    These attacks exploit vulnerabilities in third-party vendors to gain access to sensitive customer data. Research from the Ponemon Institute and Mastercard’s RiskRecon found that more than half of breaches in the past 12 months were caused by third-party vendors.

    Alarmingly, the research also found that only 34% of organizations are confident their suppliers would notify them of a breach involving their sensitive information. Despite this, fewer than half of organizations regularly review the security and privacy controls of their suppliers.

     

    Quantum computing

    Quantum computing leverages quantum mechanics to solve complex problems far beyond the capabilities of traditional computers. The concern is that adversaries may steal encrypted data today with the intent to decrypt it later using advanced quantum technologies.

    The National Institute of Standards and Technology (NIST) has already released encryption algorithms resistant to quantum attacks; however, transitioning to post-quantum cryptography could take years and prove especially challenging for smaller institutions.

    In light of these and other emerging threats, businesses should adopt the following cybersecurity best practices:

     

    Governance and board oversight

    Escalating cyber threats demand informed and active involvement at the board level. Boards and executives should take an active role in cybersecurity oversight by requiring regular updates, ensuring incident response plans exist and treating cybersecurity as a core business risk rather than just a technical issue.

     

    Multi-factor authentication

    Most regulations require the use of multi-factor authentication (MFA) for any user accessing an information system. However, not all types of MFA are created equal. Organizations should implement strong, phishing-resistant MFA (such as FIDO/WebAuthn or Public Key Infrastructure) for all users accessing sensitive information and phase out weaker methods like SMS or voice codes.
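    What makes FIDO/WebAuthn phishing-resistant is origin binding: the authenticator's response is tied to the site it actually sees, so credentials phished on a lookalike domain are useless at the real one. A minimal conceptual sketch of that idea (using a shared-secret HMAC as a stand-in for WebAuthn's real public-key signatures, with a hypothetical per-site key):

```python
import hashlib
import hmac

# Conceptual sketch only -- NOT the real FIDO/WebAuthn protocol, which uses
# public-key signatures and a key that never leaves the hardware authenticator.
# It illustrates why origin binding resists phishing: the authenticator signs
# the origin it actually sees, so a response produced on a lookalike domain
# fails verification at the legitimate site.
DEVICE_KEY = b"per-site secret provisioned at registration"  # hypothetical

def authenticator_sign(origin: str, challenge: bytes) -> bytes:
    # The authenticator mixes the observed origin into every response.
    return hmac.new(DEVICE_KEY, origin.encode() + challenge, hashlib.sha256).digest()

def server_verify(expected_origin: str, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(DEVICE_KEY, expected_origin.encode() + challenge,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = b"random-server-nonce"
genuine = server_verify("https://bank.example", challenge,
                        authenticator_sign("https://bank.example", challenge))
phished = server_verify("https://bank.example", challenge,
                        authenticator_sign("https://bank-example.evil", challenge))
print(genuine, phished)  # True False
```

    By contrast, an SMS or voice code carries no origin information at all, which is why a convincing fake login page can relay it in real time.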

     

    End of operating life

    Unsupported and legacy systems continue to pose significant risk, especially for smaller organizations. Companies should maintain an inventory of systems, track vendor support timelines and proactively plan upgrades or replacements before software and hardware reach EOL to avoid exploitable vulnerabilities.

     

    Vendor management

    As noted above, third-party vendors pose a significant threat. Organizations should therefore maintain a documented vendor management program and regularly conduct due diligence audits.

     

    Event logging and threat detection

    Organizations should deploy comprehensive cybersecurity event logging solutions. These provide visibility into system performance and security, help detect incidents and support response efforts, and enable forensic investigation and threat attribution.
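    Event logs are most useful when each event is a structured, machine-parsable record with a timestamp. A minimal sketch using only Python's standard library (field names such as "event_type" and "source_ip" are illustrative, not a prescribed schema):

```python
import json
import logging
import time
from io import StringIO

# Minimal sketch of structured security-event logging: one JSON object per
# line, so a log collector or SIEM can parse events without custom regexes.
def make_security_logger(stream):
    handler = logging.StreamHandler(stream)
    handler.setFormatter(logging.Formatter("%(message)s"))
    logger = logging.getLogger("security-demo")
    logger.handlers = [handler]
    logger.setLevel(logging.INFO)
    logger.propagate = False
    return logger

def log_event(logger, event_type, **fields):
    # Each event carries a timestamp so incidents can be reconstructed later.
    record = {"ts": time.time(), "event_type": event_type, **fields}
    logger.info(json.dumps(record))

buf = StringIO()
log = make_security_logger(buf)
log_event(log, "login_failure", user="alice", source_ip="203.0.113.7")
print(buf.getvalue().strip())
```

    In production the stream would be a file or a forwarder rather than an in-memory buffer, but the principle is the same: consistent, structured records are what make detection and forensic attribution practical.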

    As cyber threats grow in scale and sophistication, Cybersecurity Awareness Month serves as a timely reminder that proactive defense is no longer optional—it’s essential. With rising risks from AI-driven attacks, supply-chain vulnerabilities and the looming impact of quantum computing, organizations must prioritize cybersecurity as a strategic imperative. By embracing strong governance, modern authentication, lifecycle management, vendor oversight, and robust event logging, businesses can better safeguard their systems, data and stakeholders.

    Charlie Wood is a partner and practice lead with the Information Risk Management Division of The Bonadio Group.



    LIBN Staff


  • Record French fines for Google and Shein over cookies


    France’s data protection authority on Wednesday issued record fines against search giant Google and fast-fashion platform Shein for failing to respect the law on internet cookies.

    The two groups, each with tens of millions of users in France, received two of the heaviest penalties ever imposed by the CNIL watchdog: 150 million euros ($175 million) for Shein and 325 million euros for Google.

    Both firms failed to secure users’ free and informed consent before setting advertising cookies on their browsers, the authority found in a decision the companies can still appeal.

    Cookies are small files saved to browsers by websites that can collect data about users’ online activity, making them essential to online advertising and the business models of many large platforms.
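    The mechanics are simple: a server attaches a Set-Cookie header to its response, and the browser returns the value on subsequent requests, which is what lets an advertising cookie link a user's activity across visits. A minimal sketch with Python's standard library (the cookie name "ad_id" and its value are hypothetical):

```python
from http.cookies import SimpleCookie

# Illustrative only: build the Set-Cookie header a server would send to plant
# a persistent advertising identifier in the user's browser.
cookie = SimpleCookie()
cookie["ad_id"] = "u-12345"                    # hypothetical tracking ID
cookie["ad_id"]["domain"] = "example.com"
cookie["ad_id"]["path"] = "/"
cookie["ad_id"]["max-age"] = 180 * 24 * 3600   # persists for ~6 months

header = cookie["ad_id"].OutputString()
print("Set-Cookie:", header)
```

    The long Max-Age is the point: the identifier survives across sessions, so activity can be linked over months, which is why EU rules require free and informed consent before such cookies are set.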

    The CNIL has stepped up its scrutiny of their use, part of “a general strategy of bringing (market players) into line over the past five years, targeting especially sites and services that receive a lot of traffic,” the authority said.

    Shein had amassed “massive” amounts of data from the cookies it placed on 12 million monthly users’ computers in France, it added.

    The Asian low-cost clothing firm failed to secure users’ consent or inform them adequately, and offered inadequate options for withdrawing consent.

    Shein has updated its systems to comply with the CNIL’s requirements under French and European law since the investigation.

    It told AFP that it would appeal the fine, which it said was “totally disproportionate given the nature of the alleged grievances” and its “current compliance” with the legislation.

    Google said it would study the decision, and that it has complied with earlier CNIL demands.

    – ‘Cookie wall’ –

    Wednesday’s fine against Google is the third issued by the CNIL over the search giant’s use of cookies, following penalties of 100 million euros in 2020 and 150 million in 2021.

    Prosecutors had requested an even heavier penalty this time, of 520 million euros.

    Authorities have justified the size of the punishments with reference to the sheer number of Google users in France and the broad array of “negligence” the CNIL says it is guilty of.

    They especially highlight the case of a so-called “cookie wall” when creating a Google account, which requires users to accept the tracking software before proceeding.

    While not in itself illegal, the implications were not sufficiently explained to users, who could therefore not provide informed consent, the CNIL found.

    Some 53 million French people were also affected by Google’s practice of inserting adverts between inbox items in its popular Gmail email service.

    Such “direct canvassing” of users requires prior consent by users under European legal precedent, which Google did not secure according to the CNIL.

    On top of the fines, Google has been ordered to bring its systems into compliance within six months.

    Failure to comply would draw further penalties of 100,000 euros per day for both Google and its Irish subsidiary.




  • Brussels spyware bombshell: Surveillance software found on officials’ phones



    The European Parliament is on high alert for cyberattacks and foreign interference in the run-up to the EU election in June.

    POLITICO reported in December that an internal review showed that the institution’s cybersecurity “has not yet met industry standards” and is “not fully in-line with the threat level” posed by state-sponsored hackers and other threat groups.

    One member of the security and defense subcommittee went in for a routine check on Tuesday, which resulted in the discovery of traces of spyware on their phone. The member told POLITICO it wasn’t immediately clear why they were targeted with hacking software.

    Parliament’s Deputy Spokesperson Delphine Colard said in a statement that “traces found in two devices” prompted the email calling on members to have their phones checked.

    “In the given geopolitical context and given the nature of the files followed by the subcommittee on security and defence, a special attention is dedicated to the devices of the members of this subcommittee and the staff supporting its work,” the statement said.

    The new revelations follow previous incidents with other European Parliament members targeted with spyware. Researchers revealed in 2022 that the phones of members of the Catalan independence movement, including EU politicians, were infected with Pegasus and Candiru, two types of hacking tools. That same year, Greek member of the EU Parliament and opposition leader Nikos Androulakis was among a list of Greek political and public figures found to have been targeted with Predator, another spyware tool. Parliament’s President Roberta Metsola previously also faced an attempted hacking using spyware.

    European Parliament members in 2022 set up a special inquiry committee to investigate the issue. It investigated a series of scandals in countries including Spain, Greece, Hungary and Poland and said at least four governments in the EU had abused the hacking tools for political gain.

    Parliament’s IT service launched a system to check members’ phones for spyware in April last year. It has run “hundreds of operations” since the program started, the statement said.


    Antoaneta Roussi


  • This Data Recovery Software Keeps Your Business Safe, and Now It's $45.97 for Life | Entrepreneur



    Disclosure: Our goal is to feature products and services that we think you’ll find interesting and useful. If you purchase them, Entrepreneur may get a small share of the revenue from the sale from our commerce partners.

    According to TrueList, 94% of companies that face a major data loss don’t end up recovering. And while you may not think about data loss on a daily basis as an entrepreneur, it’s something to seriously consider. Fortunately, there are now tools ready to help in the event it happens, like EaseUS Data Recovery Wizard.

    Recover lost data easily and efficiently anytime with a lifetime subscription to EaseUS Data Recovery Wizard. Though it usually sets you back $149, you can currently score it for just $45.97 — no coupon code required — right here through January 21.

    Cover your bases in the event of a data loss with help from EaseUS Data Recovery Wizard. This powerful software is ready to recover your precious info after any type of data loss scenario, working to retrieve deleted, formatted, or lost files from PCs, laptops, hard drives, SSDs, USB drives, and more. The software has an impressive 99.7% success rate, so you can sleep easy knowing you’re in good hands.

    There are an impressive 2,000 recoverable device types and 1,000 file types supported. And with its user-friendly interface and quick scanning process, EaseUS Data Recovery Wizard is straightforward to use. There are only three steps to take — you scan for lost data in the location where you lost files, preview the lost data filtered by path or type, and then sit back and recover your lost data by selecting the files you want to recover.

    Safeguard your business with a lifetime subscription to EaseUS Data Recovery Wizard, now just $45.97 (reg. $149) with no coupon code required now through January 21 at 11:59 p.m. PT.

    StackSocial prices subject to change.


    Entrepreneur Store


  • Britain’s got some of Europe’s toughest surveillance laws. Now it wants more




    LONDON — The U.K. already has some of the most far-reaching surveillance laws in the democratic world. Now it’s rushing to beef them up even further — and tech firms are spooked.

    Britain’s government wants to build on its landmark Investigatory Powers Act, a controversial piece of legislation dubbed the “snooper’s charter” by critics when introduced back in 2016.

    That law — introduced in the wake of whistleblower Edward Snowden’s revelations of mass state surveillance — attempted to introduce more accountability into the U.K. intelligence agencies’ sprawling snooping regime by formalizing wide-ranging powers to intercept emails, texts, web history and more.

    Now new legislation is triggering a fresh outcry among both industry execs and privacy campaigners — who say it could hobble efforts to protect user privacy.

    Industry body TechUK has written to Home Secretary James Cleverly airing its complaints. The group’s letter warns that the Investigatory Powers (Amendment) Bill threatens technological innovation; undermines the sovereignty of other nations; and could unleash dire consequences if it sets off a domino effect overseas.

    Tech companies are most concerned by a change that would allow the Home Office to issue notices preventing them from making technical updates that might impede information-sharing with U.K. intelligence agencies. 

    TechUK argues that, combined with pre-existing powers, the changes would “grant a de facto power to indefinitely veto companies from making changes to their products and services offered in the U.K.” 

    “Using this power, the government could prevent the implementation of new end-to-end encryption, or stop developers from patching vulnerabilities in code that the government or their partners would like to exploit,” Meredith Whittaker, president of secure messaging app Signal, told POLITICO when the bill was first unveiled. 

    The Home Office, Britain’s interior ministry, remains adamant it’s a technical and procedural set of tweaks. Home Office Minister Andrew Sharpe said at the bill’s committee stage in the House of Lords that the law was “not going to … ban end-to-end encryption or introduce a veto power for the secretary of state … contrary to what some are incorrectly speculating.”

    “We have always been clear that we support technological innovation and private and secure communications technologies, including end-to-end encryption,” a government spokesperson said. “But this cannot come at a cost to public safety, and it is critical that decisions are taken by those with democratic accountability.”

    Encryption threat

    Despite the protestations of industry and campaigners, the British government is whisking the bill through parliament at breakneck speed — risking the ire of lawmakers.

    Ministers have so far blocked efforts to refine the bill in the House of Lords, the U.K.’s upper chamber. But more opportunities to contest the legislation are coming, and industry is already appealing to MPs in the hope of paring it back in the House of Commons.


    “We stress the critical need for adequate time to thoroughly discuss these changes, highlighting that rigorous scrutiny is essential given the international precedent they will set and their very serious impacts,” the TechUK letter states.

    The backdrop to the row is the fraught debate on encryption that unfolded during the passage of the earlier Online Safety Act, which companies and campaigners argued could compel companies to break encryption in the name of online safety. 

    The final law ultimately says the government can call for the implementation of such scanning technology only when it is “technically feasible” and simultaneously preserves privacy. 

    Apple, WhatsApp and Signal have threatened to pull their services from the U.K. if asked to undermine encryption under U.K. laws. 

    Since the Online Safety Act passed in November, Meta announced that it had begun its rollout of end-to-end encryption on its Messenger service.

    In response, Cleverly issued a statement saying he was “disappointed” that the company had gone ahead with the move despite repeated government warnings that it would make identifying child abusers on the platform more difficult. 

    Critics see a pincer movement. “Taken together, it appears that the Online Safety Bill’s Clause 122 is intended to undermine existing encryption, while the updates to the IPA are intended to block further rollouts of encryption,” said Whittaker.  

    Beyond encryption 

    In addition to the notice regime, rights campaigners are worried that the bill allows for the more permissive use of bulk data where there are “low or no” expectations of privacy, for wide-ranging purposes including training AI models.

    Lib Dem peer Christopher Fox argued in the House of Lords that this “creates an essentially new and essentially undefined category of information” which marks “a departure from existing privacy law,” notably the Data Protection Act.

    Director of campaign group Big Brother Watch, Silkie Carlo, also has issues with the newly invented category. With CCTV footage or social media posts for example, people may not have an expectation of privacy, “[but] that’s not the point, the point is that that data taken together and processed in a certain way, can be incredibly intrusive.”

    Big Brother Watch is also concerned about how the bill deals with internet connection records — i.e. individuals’ web logs for the last 12 months. These can currently be obtained by agencies when specific criteria are met, such as knowing the identity of the person of interest. Changes in the bill would broaden this for the purpose of “target discovery,” which Big Brother Watch characterizes as “generalized surveillance.”  

    Members of the House of Lords are also worried about the bill’s proposal to expand the number of people who can sanction spying on parliamentarians themselves. Right now, this requires the PM’s sign-off, but under the bill, the PM would be able to designate deputies for when he is not “available.” The change was inspired by the period in which former PM Boris Johnson was incapacitated with COVID-19.


    “The purpose of this bill is to give the intelligence agencies a bit of extra agility at the margins, where the existing Rolls Royce regime is proving a bit clunky and bureaucratic,” argues David Anderson, crossbench peer and author of a review that served as a blueprint for the bill. “If you start throwing in too many safeguards, you will negate that purpose, and you will not solve the problem that bill is addressing.” 

    Anderson argued that the changes relating to spying on MPs and peers are necessary “if the prime minister has got COVID, or if they’re in a foreign country where they have no access to secure communications.” 

    This could even apply in cases where there’s a conflict of interest because spies want to snoop on the PM’s relatives or the PM himself, he added.

    Amendments proposed by peers at the committee stage were uniformly rejected by the government. 

    The bill will return to the House of Lords for the next stage of the legislative process on January 23, before heading to the House of Commons to be debated by MPs.

    “Our overarching concern is that the significance of the proposed changes to the notices regime are presented by the Home Office as minor adjustments and as such are being downplayed,” reads the TechUK letter.

    “What we’re seeing across these different bills is a continual edging further towards … turning private tech companies into arms of a surveillance state,” says Carlo.


    Laurie Clarke


  • Inside the police force scouring the internet to save abused children



    EUROPOL HEADQUARTERS, THE HAGUE — “Please knock. Do not enter,” said the sign on the door of Europe’s heavily secured law enforcement headquarters in the Netherlands.

    Inside, detectives were staring at their computers, examining a video of a newborn girl being molested. 

    A group of international detectives was trying to identify details — a toy, a clothing label, a sound — that would allow them to rescue the girl and arrest those who sexually abused her, recorded it and then shared it on the internet.

    Even a tiny hint could help track down the country where the baby girl was assaulted, allowing the case to be transferred to the right police authority for further investigation. Such details matter when police are trying to tackle crimes carried out behind closed doors but disseminated online across the world.

    Finding and stopping child sex offenders is gruesome and frustrating most of the time — yet hugely rewarding sometimes — police officers on the international task force at the EU agency Europol told POLITICO. 

    Offenders are getting better at covering their digital tracks and law enforcement officials say they don’t have the tools they need to keep up. The increasing use of encrypted communication online makes investigators’ work harder, especially as a pandemic that kept people at home and online ramped up a flood of abuse images and videos.

    In 2022, social media giant Meta Platforms found and reported 26 million images on Facebook and Instagram. Teenagers’ favorite apps Snapchat and TikTok respectively filed over 550,000 and nearly 290,000 reports to the U.S. National Center for Missing and Exploited Children, an organization acting as a clearing house under U.S. law for the child sexual abuse material (CSAM) that technology firms detect.

    The European Commission in December also ordered Meta to explain what it was doing to fight the spread of illegal sexual images taken by minors themselves and shared through Instagram, under the EU’s new content-moderation rulebook, the Digital Services Act (DSA).

    Politicians across the world are keen to act. In the European Union and the United Kingdom, legislators have drafted laws to dig up more illegal content and extend law enforcement’s powers to crack down on child sexual abuse material.

    But those efforts have ignited a fierce public debate on what takes precedence: granting police new abilities to go after offenders or preserving privacy and protections against states’ and digital platforms’ mass online surveillance.

    The scale of the problem

    The Europol task force has met twice a year since 2014 to accelerate investigations to identify victims, most recently in November. It has almost tripled in size to 33 investigators representing 26 countries including Germany, Australia and the United States. 

    “You might recognize things that are in the images or you might recognize the sounds in the background or the voices. If you do that together with multiple nationalities in one room, it can be really effective,” said Marijn Schuurbiers, head of operations at Europol’s European Cybercrime Centre (EC3).

    Still, too often detectives feel like they’re swimming against the tide, as the amount of child sexual abuse material circulating online surges.

    Europol created a database in 2016 and this system now holds 85 million unique photos and videos of children, many found on pedophile forums on the “dark web” — the part of the internet that isn’t publicly searchable and requires special software to browse.

    “We can work hours and hours on end and we’re still scratching the surface. It’s terrifying,” said Mary, a national police officer from a non-EU country with 17 years of experience. She asked that her last name not be used, to protect her identity while she does investigative work. 

    The task force in November went through 432 files, each containing tens of thousands of images, and found the most likely country for 285 of the children abused in the images. Police believe it likely identified 74 of the victims, three of whom were rescued by the time of publication. Two offenders were arrested. 

    “We have some successes. But all I can see is those we can’t help,” Mary said. 

    Many Western agencies outside the U.S. are restricted by privacy provisions in the software they can use, such as facial recognition tools. They often have to make do with a mix of manual analysis and freely accessible tools from the internet.

    “If you have like thousands or hundreds of thousands or even millions of pictures, it’s basically impossible to go manually through them, one by one,” said Schuurbiers. 

    Since 2017, the agency has regularly been asking for public help to identify objects in images like plastic bags and a logo on a school uniform. Europol said it has gotten 27,000 tips from internet sleuths including investigative outlet Bellingcat, some of which led to 23 kids being identified and five offenders being prosecuted.  

    Groups on the “dark web” remain the principal place where offenders share illegal content, according to Europol.

    But police and child protection hotlines are seeing a growing number of images cropping up on popular and accessible platforms like Facebook, Instagram and Snapchat. The pandemic made this worse as more children and teenagers joined social media and gaming websites, where offenders got better at grooming victims and blackmailing them into making sexual content.

    Law enforcement agencies around the world have also sounded the alarm that offenders are connecting with minors and exchanging illegal content on encrypted messaging apps like WhatsApp, Signal and iMessage, making the content extremely challenging to find. WhatsApp, for instance, scans users’ unencrypted profile photos and group descriptions but is unable to monitor their highly secure messages.

    Finding more child sexual abuse material

    The crisis of child sexual abuse material proliferating online has governments pushing through sweeping new legislation to let law enforcement investigate more online material and use artificial intelligence tools to help them. 

    The European Commission has proposed a law that could force tech companies like Meta, Apple and Google to scan messages and content stored in the cloud for images of abuse — and even for conversations of offenders seeking to manipulate minors upon a judge’s order. The companies would have to report the content, so it could end up with Europol or other national investigators, and then remove it.

    The United Kingdom recently passed the Online Safety Act, which some legal experts say would allow the country’s platform regulator Ofcom to force companies to break encryption to find sexual abuse. Government and Ofcom officials have said companies would not currently be forced to monitor content because tools to bypass encryption and also preserve privacy do not exist at the moment.

    Both plans have sparked widespread backlash among digital rights activists, tech experts and some lawyers. They fear the laws effectively force tech firms to ditch encryption, and that indiscriminate scanning will lead to mass surveillance.

    Negotiations on the EU draft law remain on thin ice, with politicians and member countries clashing over how far to go in hunting down potential illegal child abuse. And Brussels also finalized in December a new law, the Artificial Intelligence Act, governing how law enforcement will be able to use AI tools like facial recognition software to go through footage and images. 

    Still, EU lawmakers have already significantly expanded Europol’s powers to build new artificial intelligence tools and handle more data. Under the Digital Services Act, Europol and national police will also be able to swiftly compel tech companies to remove publicly accessible illegal content and hand over information about users posting such images.

    Anne, a Europol investigator, said she doesn’t keep count of the number of kids she’s identified in her 12 years working in the field — but she remembers them. She asked that her last name not be used, to protect her investigative work.

    “The thing that I will always remember from my cases is the images,” she said. “They stay in my head.”


    Clothilde Goujard


  • Huawei pushes back on the EU calling it ‘high-risk’



    Chinese technology giant Huawei has had it with European Union officials calling it a “high-risk” supplier.

    The firm, a leading manufacturer of telecoms equipment, filed a complaint with the European Ombudsman office last month after the bloc’s industry chief Thierry Breton described Huawei and its smaller Chinese rival ZTE as “high-risk suppliers” at a press conference on June 15.

    Breton was presenting a report reviewing the EU’s policies on secure 5G, which allow member countries to restrict or prohibit “entities considered high-risk suppliers, notably because they are subject to highly intrusive third-country laws on national intelligence and data security,” the commissioner said, naming both Huawei and ZTE in his statements.

    Huawei told POLITICO in a statement Friday that the company “strongly opposes and disagrees with the comments made by the European Commission representatives publicly naming and shaming an individual company without legal basis while lacking any justification or due process,” confirming the firm is the one behind the complaint with the EU Ombudsman.

    “We expect the European Commission to address our claims and rectify their comments for the sake of Huawei’s reputation,” the spokesperson added.

    The European Ombudsman found “insufficient grounds to open an inquiry into the comments themselves” but it has asked the Commission to send Huawei a reply to its complaints by November 3, Michal Zuk, a communication officer for the EU watchdog, told POLITICO.

    The Shenzhen-based company has been fighting restrictions on the use of its 5G kit for the past few years. It has fought and lost a court challenge in Sweden against the country’s telecoms regulator and more recently filed a lawsuit with a Lisbon court against a resolution by Portugal’s cybersecurity regulator.

    At the core of Western concerns surrounding Huawei is whether the firm can be instrumentalized, pressured or infiltrated by the Chinese government to gain access to critical data in Western countries.

    The Commission didn’t immediately respond to POLITICO’s request for comment.

    Mathieu Pollet

  • TikTok hit with €345M fine for violating children’s privacy


    Booming social media application TikTok needs to pay up in Europe for violating children’s privacy.

    The popular Chinese-owned app failed to protect children’s personal information by making their accounts publicly accessible by default and insufficiently tackled risks that under-13 users could access its platform, the Irish Data Protection Commission (DPC) said in a decision published Friday.

    The regulator slapped TikTok with a €345 million fine for breaching the EU’s landmark privacy law, the General Data Protection Regulation (GDPR).

    The penalty comes amid high tensions between the European Union and China, following the EU’s announcement that it plans to probe Chinese state subsidies of electric cars. European Commission Vice President Věra Jourová is also set to visit China next Monday-Tuesday and meet Vice Premier Zhang Guoqing to discuss the two sides’ technology policies, amid growing concerns over Beijing’s data gathering and cyber espionage practices.

    “Alone the fine of [€345 million] is a headline sanction to impose but reflects the extent to which the DPC identified child users were exposed to risk in particular arising from TikTok’s decision at the time to default child user accounts to public settings on registration,” said Helen Dixon, the Irish data protection commissioner, in a written statement.

    The Irish privacy regulator said that, in the period from July to December 2020, TikTok had unlawfully made accounts of users aged 13 to 17 public by default, effectively making it possible for anyone to watch and comment on videos they posted. The company also did not appropriately assess the risks that users under the age of 13 could gain access to its platform. The regulator also found that TikTok is still pushing teenagers joining the platform to make their accounts and videos public through manipulative pop-ups. It ordered the firm to change these misleading designs, known as dark patterns, within the next three months.

    Minors’ accounts could be paired up with unverified adult accounts during the second half of 2020. The authority said the video platform had also previously failed to explain to teenagers the consequences of making their content and accounts public.

    “We respectfully disagree with the decision, particularly the level of the fine imposed,” said Morgan Evans, a TikTok spokesperson. “The [Data Protection Commission]’s criticisms are focused on features and settings that were in place three years ago, and that we made changes to well before the investigation even began, such as setting all under-16 accounts to private by default.”

    TikTok added it will comply with the order to change misleading designs by extending such default-privacy settings to accounts of new users aged 16 and 17 later in September. It will also roll out in the next three months changes to the pop-up young users get when they first post a video.

    The decision marks the largest-ever privacy fine for TikTok, which is now actively used by 134 million Europeans monthly, and the fifth-largest fine imposed on any tech company under the GDPR.

    The platform popular among teenagers has previously faced criticism for insufficiently mitigating harms it poses to its young users, including deadly viral challenges and its addictive algorithm. TikTok — like 18 other online platforms — also now has to limit risks like cyberbullying or face steep fines under the Digital Services Act (DSA).

    The costly fine adds to TikTok’s woes in Europe, after it saw a wave of new restrictions on its use earlier this year due to concerns about its connection to China.

    The social media app, whose parent company ByteDance is based in Beijing, has struggled to quash concerns over its data security. The company said this month it had started moving its European data to a center within the bloc. Yet, it is still under investigation by the Irish Data Protection Commission over the potentially unlawful transfer of European users’ data to China.


    The Irish data authority in 2021 started probing whether TikTok was respecting children’s privacy requirements. TikTok set up its legal EU headquarters in Dublin in late 2020, meaning the Irish privacy watchdog has been the company’s supervisor for the whole bloc under the GDPR.

    Other national watchdogs weighed in on the investigation over the summer via the European Data Protection Board (EDPB), after two German privacy agencies and Italy’s regulator disagreed with Ireland’s initial findings. The group instructed Ireland to sanction TikTok for nudging its users toward public accounts in its misleading pop-ups.

    The board of European regulators also had “serious doubts” that TikTok’s measures to keep under-13 users off its platform were effective in the second half of 2020. The EDPB said the mechanisms “could be easily circumvented” and that TikTok was not checking ages “in a sufficiently systematic manner” for existing users. The group said, however, that it couldn’t find an infringement because of a lack of information available during their cooperation process.

    The United Kingdom’s data regulator in April fined TikTok £12.7 million (€14.8 million) for letting children under 13 on its platform and using their data. The company also received a €750,000 fine in 2021 from the Dutch privacy authority for failing to protect Dutch children by not having a privacy policy in their native language.

    This article has been updated.

    Clothilde Goujard

  • TikTok to face European privacy fine by September


    TikTok is set to face a privacy fine by early September for its handling of teenagers’ and children’s data, according to three people with knowledge of the matter.

    Europe’s network of national privacy regulators, the European Data Protection Board (EDPB), on Wednesday resolved disagreements among agencies in an investigation into the popular video-sharing platform used by 125 million people in the bloc.

    Their decision kicks off a process giving TikTok’s lead privacy regulator in the EU, the Irish Data Protection Commission, a month to issue the final penalty and any potential measures. The size and details of the fine are unknown.

    The Irish data authority in 2021 started probing whether TikTok was respecting children’s privacy under the requirements of the EU’s landmark privacy rulebook, the General Data Protection Regulation (GDPR).

    The Irish regulator wanted to check whether the Chinese-owned app ensured its default settings sufficiently protected children’s privacy and if the company was transparent enough in how it processed minors’ data. One of the trickiest points has also been TikTok’s age-verification practices, intended to keep minors under 13 off its platform. TikTok is supervised by the Irish Data Protection Commission because its EU headquarters are in the country.

    The Irish DPC sent the case to the EDPB in May following disagreements with its German and Italian counterparts.

    “We’ve yet to receive the final decision so we’re not in a position to comment,” said a TikTok spokesperson.

    TikTok in 2021 received a €750,000 fine from the Dutch data protection authority for failing to protect Dutch children’s privacy by not having a privacy policy in their native language. The company is also being investigated by Ireland over the potentially unlawful shipping of European users’ data to China.

    Clothilde Goujard

  • From Napoléon to Macron: How France learned to love Big Brother


    PARIS — Liberté. Egalité. But mostly: sécurité.

    It all started with Napoléon Bonaparte. Over two centuries, France cobbled together a surveillance apparatus capable of intercepting private communications; keeping traffic and localization data for up to a year; storing people’s fingerprints; and monitoring most of the territory with cameras.

    This system, which has faced pushback from digital rights organizations and United Nations experts, will get its spotlight moment at the 2024 Paris Summer Olympics. In July next year, France will deploy large-scale, real-time, algorithm-supported video surveillance cameras — a first in Europe. (Not included in the plan: facial recognition.) 

    Last month, the French parliament approved a controversial government plan to allow investigators to track suspected criminals in real-time via access to their devices’ geolocation, camera and microphone. Paris also lobbied in Brussels to be allowed to spy on reporters in the name of national security. 

    Helping France down the path of mass surveillance: a historically strong and centralized state; a powerful law enforcement community; political discourse increasingly focused on law and order; and the terrorist attacks of the 2010s. In the wake of President Emmanuel Macron’s agenda for so-called strategic autonomy, French defense and security giants, as well as innovative tech startups, have also gotten a boost to help them compete globally with American, Israeli and Chinese companies. 

    “Whenever there’s a security issue, the first reflex is surveillance and repression. There’s no attempt in either words or deeds to address it with a more social angle,” said Alouette, an activist at French digital rights NGO La Quadrature du Net who uses a pseudonym to protect her identity. 

    As surveillance and security laws have piled up in recent decades, advocates have lined up on opposite sides. Supporters argue law enforcement and intelligence agencies need such powers to fight terrorism and crime. Algorithmic video surveillance would have prevented the 2016 Nice terror attack, claimed Sacha Houlié, a prominent lawmaker from Macron’s Renaissance party.

    Opponents point to the laws’ effect on civil liberties and fear France is morphing into a dystopian society. In June, the watchdog in charge of monitoring intelligence services said in a harsh report that French legislation is not compliant with the European Court of Human Rights’ case law, especially when it comes to intelligence-sharing between French and foreign agencies.

    “We’re in a polarized debate with good guys and bad guys, where if you oppose mass surveillance, you’re on the bad guys’ side,” said Estelle Massé, Europe legislative manager and global data protection lead at digital rights NGO Access Now. 

    A history of surveillance

    Both the 9/11 and the Paris 2015 terror attacks have accelerated mass surveillance in France, but the country’s tradition of snooping, monitoring and data collection dates way back — to Napoléon Bonaparte in the early 1800s. 

    “Historically, France has been at the forefront of these issues, in terms of police files and records. During the First Empire, France’s highly centralized government was determined to square the entire territory,” said Olivier Aïm, a lecturer at Sorbonne Université Celsa who authored a book on surveillance theories. Before electronic devices, paper was the main tool of control because identification documents were used to monitor travels, he explained. 

    The French emperor revived the Paris Police Prefecture — which exists to this day — and tasked law enforcement with new powers to keep political opponents in check. 

    In the 1880s, Alphonse Bertillon devised a method of identifying suspects and criminals using biometric features | Peter Macdiarmid/Getty Images

    In the 1880s, Alphonse Bertillon, who worked for the Paris Police Prefecture, introduced a new way of identifying suspects and criminals using biometric features — the forerunner of facial recognition. The Bertillon method would then be emulated across the world.

    Between 1870 and 1940, under the Third Republic, the police kept a massive file — dubbed the National Security’s Central File — with information about 600,000 people, including anarchists and communists, certain foreigners, criminals, and people who requested identification documents. 

    After World War II ended, a bruised France moved away from hard-line security discourse until the 1970s. And in the early days of the 21st century, the 9/11 attacks in the United States marked a turning point, ushering in a steady stream of controversial surveillance laws — under both left- and right-wing governments. In the name of national security, lawmakers started giving intelligence services and law enforcement unprecedented powers to snoop on citizens, with limited judiciary oversight. 

    “Surveillance covers a history of security, a history of the police, a history of intelligence,” Aïm said. “Security issues have intensified with the fight against terrorism, the organization of major events and globalization.” 

    The rise of technology

    In the 1970s, before the era of omnipresent smartphones, French public opinion initially pushed back against using technology to monitor citizens.

    In 1974, as ministries started using computers, Le Monde revealed a plan to merge all citizens’ files into a single computerized database, a project known as SAFARI.

    The project, abandoned amid the resulting scandal, led lawmakers to adopt robust data protection legislation — creating the country’s privacy regulator CNIL. France then became one of the few European countries with rules to protect civil liberties in the computer age. 

    However, the mass spread of technology — and more specifically video surveillance cameras in the 1990s — allowed politicians and local officials to come up with new, alluring promises: security in exchange for surveillance tech. 

    In 2020, there were about 90,000 video surveillance cameras operated by the police and the gendarmerie in France. The state helps local officials finance them via a dedicated public fund. After France’s violent riots in early July — which also saw Macron float social media bans during periods of unrest — Interior Minister Gérald Darmanin announced he would swiftly allocate €20 million to repair broken video surveillance devices. 

    In parallel, the rise of tech giants such as Google, Facebook and Apple in everyday life has led to so-called surveillance capitalism. And for French policymakers, U.S. tech giants’ data collection has over the years become an argument to explain why the state, too, should be allowed to gather people’s personal information. 

    “We give Californian startups our fingerprints, face identification, or access to our privacy from our living room via connected speakers, and we would refuse to let the state protect us in the public space?” Senator Stéphane Le Rudulier from the conservative Les Républicains said in June to justify the use of facial recognition on the street. 

    Strong state, strong statesmen

    Resistance to mass surveillance does exist in France at the local level — especially against the development of so-called safe cities. Digital rights NGOs can boast a few wins: In the south of France, La Quadrature du Net scored a victory in an administrative court, blocking plans to test facial recognition in high schools. 

    Some grassroots movements have opposed surveillance schemes at the local level, but the nationwide legislative push has continued | Ludovic Marin/AFP via Getty Images

    At the national level, however, security laws are too powerful a force, despite a few ongoing cases before the European Court of Human Rights. For example, France has de facto ignored multiple rulings from the EU top court that deemed mass data retention illegal. 

    Often at the center of France’s push for more state surveillance: the interior minister. This influential office, whose constituency includes the law enforcement and intelligence community, is described as a “stepping stone” toward the premiership — or even the presidency. 

    “Interior ministers are often powerful, well-known and hyper-present in the media. Each new minister pushes for new reforms, new powers, leading to the construction of a never-ending security tower,” said Access Now’s Massé.

    Under Socialist François Hollande, Manuel Valls and Bernard Cazeneuve both went from interior minister to prime minister in, respectively, 2014 and 2016. Nicolas Sarkozy, Jacques Chirac’s interior minister from 2005 to 2007, was then elected president. All shepherded new surveillance laws under their tenure.

    In the past year, Darmanin has been instrumental in pushing for the use of police drones, even going against the CNIL.

    For politicians, even at the local level, there is little to gain electorally by arguing against expanded snooping and the monitoring of public space. “Many on the left, especially in complicated cities, feel obliged to go along, fearing accusations of being soft [on crime],” said Noémie Levain, a legal and political analyst at La Quadrature du Net. “The political cost of reversing a security law is too high,” she added.

    It’s also the case that there’s often little pushback from the public. In March, on the same day a handful of French MPs voted to allow AI-powered video surveillance cameras at the 2024 Paris Olympics, about 1 million people took to the streets to protest against … Macron’s pension reform. 

    Sovereign cameras

    For politicians, France’s industrial competitiveness is also at stake. The country is home to defense giants that dabble in both the military and civilian sectors, such as Thalès and Safran. Meanwhile, Idemia specializes in biometrics and identification. 

    “What’s accelerating legislation is also a global industrial and geopolitical context: Surveillance technologies are a Trojan horse for artificial intelligence,” said Caroline Lequesne Rot, an associate professor at the Côte d’Azur University, adding that French policymakers are worried about foreign rivals. “Europe is caught between the stranglehold of China and the U.S. The idea is to give our companies access to markets and allow them to train.”

    In 2019, then-Digital Minister Cédric O told Le Monde that experimenting with facial recognition was needed to allow French companies to improve their technology. 

    France’s surveillance apparatus will be on full display at the 2024 Olympic Games | Patrick Kovarik/AFP via Getty Images

    For the video surveillance industry — which made €1.6 billion in France in 2020 — the 2024 Paris Olympics will be a golden opportunity to test their products and services and showcase what they can do in terms of AI-powered surveillance. 

    XXII — an AI startup with funding from the armed forces ministry and at least some political backing — has already hinted it would be ready to secure the mega sports event. 

    “If we don’t encourage the development of French and European solutions, we run the risk of later becoming dependent on software developed by foreign powers,” wrote lawmakers Philippe Latombe, from Macron’s allied party Modem, and Philippe Gosselin, from Les Républicains, in a parliamentary report on video surveillance released in April.

    “When it comes to artificial intelligence, losing control means undermining our sovereignty,” they added.

    Laura Kayali

  • EU hits Meta with record €1.2B privacy fine


    U.S. tech giant Meta has been hit with a record €1.2 billion fine for not complying with the EU’s privacy rulebook.

    The Irish Data Protection Commission announced on Monday that Meta violated the General Data Protection Regulation (GDPR) when it shuttled troves of personal data of European Facebook users to the United States without sufficiently protecting them from Washington’s data surveillance practices.

    It’s the largest fine imposed under the bloc’s flagship privacy law, and it comes on the eve of the fifth anniversary of the GDPR taking effect on May 25.

    Amazon was previously fined €746 million by Luxembourg, and the Irish regulator has also imposed four fines on Meta’s platforms Facebook, Instagram and WhatsApp, ranging from €225 million to €405 million, in the past two years.

    The Irish privacy watchdog said that Meta’s use of a legal instrument known as standard contractual clauses (SCCs) to move data to the U.S. “did not address the risks to the fundamental rights and freedoms” of Facebook’s European users raised by a landmark ruling from the EU’s top court.

    The European Court of Justice in 2020 struck down an EU-U.S. data flows agreement known as the Privacy Shield over fears of U.S. intelligence services’ surveillance practices. In the same judgment, the top EU court also tightened requirements to use SCCs, another legal tool widely used by companies to transfer personal data to the U.S.

    Meta — as well as other international companies — kept relying on the legal instrument as European and U.S. officials struggled to put together a new data flows arrangement and the U.S. tech giant lacked other legal mechanisms to transfer its personal data.

    The EU and U.S. are finalizing a new data flow deal that could come as early as July and as late as October. Meta has until October 12 to stop relying on SCCs for their transfers.

    The U.S. tech giant previously warned that if it were forced to stop using SCCs without a proper alternative data flow agreement in place, it could shut down services like Facebook and Instagram in Europe.

    Meta also has until November 12 to delete or move back to the EU the personal data of European Facebook users transferred and stored in the U.S. since 2020 and until a new EU-U.S. deal is reached. However, it’s unlikely the tech firm will have to delete or move data as European and U.S. negotiators are expected to finalize the new deal before early November.

    “This decision is flawed, unjustified and sets a dangerous precedent for the countless other companies transferring data between the EU and U.S.,” Meta’s President of Global Affairs Nick Clegg and Chief Legal Officer Jennifer Newstead said in a statement on Monday.

    Clegg and Newstead said the company will appeal the decision and seek a stay with the courts to pause the implementation deadlines. “There is no immediate disruption to Facebook because the decision includes implementation periods that run until later this year,” they added.

    Max Schrems, the privacy activist behind the original 2013 complaint supporting the case, said: “We are happy to see this decision after ten years of litigation … Unless U.S. surveillance laws get fixed, Meta will have to fundamentally restructure its systems.”

    The Irish Data Protection Commission said it disagreed with the fine and measures it was imposing on Meta but had been forced to adopt them by the pan-European network of national regulators, the European Data Protection Board (EDPB), after Dublin’s initial decision was challenged by four of its peer regulators in Europe, from Germany, France, Spain and Austria.

    According to internal discussions released on Monday, the Irish regulator earlier this year vehemently argued against imposing a financial penalty on the social media giant, saying that such a decision would be disproportionate for the alleged privacy abuses. Dublin also argued any such fine against Meta could be viewed as discriminatory since U.S. tech firm Google had not faced similar penalties in other transatlantic data protection cases.

    But Ireland was overruled by other European regulators. In a stinging rebuke, the pan-EU body of privacy regulators EDPB said it took the view that “Meta committed the infringement at least with the highest degree of negligence,” the discussions released Monday showed, arguing in favor of a fine. The EDPB backed claims from the four EU privacy regulators that Meta should also be forced to delete historical European data affected by the decision.

    This article was updated to include comments from Meta and Max Schrems and to add details about the decision.

    Clothilde Goujard and Mark Scott

  • Meta faces record privacy fine for data transfers to the US


    Meta is expected to face a record privacy fine on Monday when Ireland’s data protection watchdog confirms the social media platform mishandled people’s data when shipping it to the United States, according to two people with direct knowledge of the upcoming decision.

    POLITICO was not able to confirm the size of the record-setting penalty, which will likely exceed the €746 million fine that Amazon was forced to pay in 2021 for similarly flouting the European Union’s privacy standards, said the people, who spoke on condition of anonymity to discuss internal deliberations.

    Ireland’s Data Protection Commission will publish its ruling on Monday; it is also expected to include demands that Meta’s Facebook stop using complex legal instruments known as standard contractual clauses to move EU data to the U.S. in the fall. 

    The upcoming decision dates back to revelations in 2013 from Edward Snowden, the former U.S. National Security Agency contractor, who disclosed that American authorities had repeatedly accessed people’s information via tech companies like Facebook and Google.

    Max Schrems, an Austrian privacy campaigner, filed a legal challenge against Facebook for failing to protect his privacy rights, setting off a decade-long battle over the legality of moving EU data to the U.S.

    Europe’s top court has repeatedly stated Washington does not have sufficient checks in place to protect Europeans’ personal information, and the U.S. recently updated its internal legal protections to give the EU greater assurances that American intelligence agencies will follow new rules governing such data access.

    Meta declined to comment. The Irish Data Protection Commission did not respond in time for publication.

    Mark Scott and Clothilde Goujard

  • MEPs cling to TikTok for Gen Z votes


    It may come with security risks but, for European Parliamentarians, TikTok is just too good a political tool to abandon.

    Staff at the European Parliament were ordered to delete the video-sharing application from any work devices by March 20, after an edict last month from the Parliament’s President Roberta Metsola cited cybersecurity risks about the Chinese-owned platform. The chamber also “strongly recommended” that members of the European Parliament and their political advisers give up the app.

    But with European Parliament elections scheduled for late spring 2024, the chamber’s political groups and many of its members are opting to stay on TikTok to win over the hearts and minds of the platform’s user base of young voters. TikTok says around 125 million Europeans actively use the app every month on average.

    “It’s always important in my parliamentary work to communicate beyond those who are already convinced,” said Leïla Chaibi, a French far-left lawmaker who has 3,500 TikTok followers and has previously used the tool to broadcast videos from Strasbourg explaining how the EU Parliament works.

    Malte Gallée, a 29-year-old German Greens lawmaker with over 36,000 followers on TikTok, said, “There are so many young people there but also more and more older people joining there. For me as a politician of course it’s important to be where the people that I represent are, and to know what they’re talking about.”

    Finding Gen Z 

    Parliament took its decision to ban the app from staffers’ phones in late February, in the wake of similar moves by the European Commission, Council of the EU and the bloc’s diplomatic service.

    A letter from the Parliament’s top IT official, obtained by POLITICO, said the institution took the decision after seeing similar bans by the likes of the U.S. federal government and the European Commission and to prevent “possible threats” against the Parliament and its lawmakers.

    For the chamber, it was a remarkable U-turn. Just a few months earlier its top lawmakers in the institution’s Bureau, including President Metsola and 14 vice presidents, approved the launch of an official Parliament account on TikTok, according to a “TikTok strategy” document from the Parliament’s communications directorate-general dated November 18 and seen by POLITICO. 

    “Members and political groups are increasingly opening TikTok accounts,” stated the document, pointing out that teenagers then aged 16 will be eligible to vote in 2024. “The main purpose of opening a TikTok channel for the European Parliament is to connect directly with the young generation and first time voters in the European elections in 2024, especially among Generation Z,” it said.

    Another supposed benefit of launching an official TikTok account would be countering disinformation about the war in Ukraine, the document stated.  

    Most awkwardly, the only sizeable TikTok account claiming to represent the European Parliament is actually a fake one that Parliament has asked TikTok to remove.

    Dummy phones and workarounds

    Among those who stand to lose out from the new TikTok policy are the European Parliament’s political groupings. Some of these groups have sizeable reach on the Chinese-owned app.

    All political groups with a TikTok account said they will use dedicated computers in order to skirt the TikTok ban on work devices | Khaled Desouki/AFP via Getty Images

    The largest group, the center-right European People’s Party, has 51,000 followers on TikTok. Spokesperson Pedro López previously dismissed the Parliament’s move to stop using TikTok as “absurd,” vowing the EPP’s account will stay up and active. López wrote to POLITICO that “we will use dedicated computers … only for TikTok and not connected to any EP or EPP network.”

    That’s the same strategy that all other political groups with a TikTok account — The Left, Socialists and Democrats (S&D) and Liberal Renew groups — said they will use in order to skirt the TikTok ban on work devices like phones, computers or tablets, according to spokespeople. Around 30 Renew Europe lawmakers are active on the platform, according to the group’s spokesperson.

    Beyond the groups, it’s the individual members of parliament — especially those popular on the app — that are pushing back on efforts to restrict its use.

    Clare Daly, an Irish independent member who sits with the Left group, is one of the most popular MEPs on the platform, with over 370,000 followers subscribed to watch clips of her plenary speeches. Daly has gained some 80,000 extra followers in just the few weeks since Parliament’s ban was announced.

    Daly in an email railed against Parliament’s new policy: “This decision is not guided by a serious threat assessment. It is security theatre, more about appeasing a climate of geopolitical sinophobia in EU politics than it is about protecting sensitive information or mitigating cybersecurity threats,” she said.

    According to Moritz Körner, an MEP from the centrist Renew Europe group, cybersecurity should be a priority. “Politicians should think about cybersecurity and espionage first and before thinking about their elections to the European Parliament,” he told POLITICO, adding that he doesn’t have a TikTok account.

    Others are finding workarounds to have it both ways.

    “We will use a dummy phone and not our work phones anymore. That [dummy] phone will only be used for producing videos,” said an assistant to German Social-democrat member Delara Burkhardt, who has close to 2,000 followers. The assistant credited the platform with driving a friendlier, less abrasive political debate than other platforms like Twitter: “On TikTok the culture is nicer, we get more questions.”

    Eddy Wax and Clothilde Goujard

  • French surveillance system for Olympics moves forward, despite civil rights campaign

    PARIS — A controversial video surveillance system cleared a legislative hurdle Wednesday to be used during the 2024 Paris Summer Olympics amid opposition from left-leaning French politicians and digital rights NGOs, who argue it infringes upon privacy standards.

    The National Assembly’s law committee approved the system, but also voted to limit the temporary program’s duration to December 24, 2024, instead of June 2025.

    The plan pitched by the French government includes experimental large-scale, real-time camera systems supported by an algorithm to spot suspicious behavior, including unsupervised luggage and alarming crowd movements like stampedes.  

    Earlier this week, civil society groups in France and beyond — including La Quadrature du Net, Access Now and Amnesty International — penned an op-ed in Le Monde raising concerns about what they argued was a “worrying precedent” that France could set in the EU. 

    There’s a risk that the measures, pitched as temporary, could become permanent, and they likely would not comply with the EU’s Artificial Intelligence Act, the groups also argue. 

    About 90 left-leaning lawmakers signed a petition initiated by La Quadrature du Net to scrap Article 7, which includes the AI-powered surveillance system. They failed, however, to gather enough votes to have it deleted from the bill. 

    Lawmakers also voted to ensure the general public is better informed of where the cameras are and to involve the cybersecurity agency ANSSI on top of the privacy regulator CNIL. They also widened the pool of images and data that can be used to train the algorithms ahead of the Olympics.

    The bill will go to a full plenary vote on March 21 for final approval.

    Laura Kayali

  • France aims to protect kids from parents oversharing pics online

    PARIS — French parents had better think twice before posting too many pictures of their offspring on social media.

    On Tuesday, members of the National Assembly’s law committee unanimously green-lit draft legislation to protect children’s rights to their own images.

    “The message to parents is that their job is to protect their children’s privacy,” Bruno Studer, an MP from President Emmanuel Macron’s party who put the bill forward, said in an interview. “On average, children have 1,300 photos of themselves circulating on social media platforms before the age of 13, before they are even allowed to have an account,” he added.

    The French president and his wife Brigitte have made child protection online a political priority. Lawmakers are also working on age-verification requirements for social media and rules to limit kids’ screen time.

    Studer, who was first elected in 2017, has made a career out of child safety online. In the past few years, he authored two groundbreaking pieces of legislation: one requiring smartphone and tablet manufacturers to give parents the option to control their children’s internet access, and another introducing legal protections for YouTube child stars.

    So-called sharenting (combining “sharing” and “parenting,” referring to posting sensitive pictures of one’s kids online) constitutes one of the main risks to children’s privacy, according to the bill’s explanatory statement. Half of the pictures shared by child sexual abusers were initially posted by parents on social media, according to reports by the National Center for Missing and Exploited Children, mentioned in the text.

    The legislation adopted on Tuesday adds protecting children’s privacy to parents’ legal duties. Both parents would be jointly responsible for their offspring’s image rights and “shall involve the child … according to his or her age and degree of maturity.”

    In case of disagreement between parents, a judge can ban one of them from posting or sharing a child’s pictures without authorization from the other. And in the most extreme cases, parents can lose their parental authority over their kids’ image rights “if the dissemination of the child’s image by both parents seriously affects the child’s dignity or moral integrity.”

    The bill still needs to go through a plenary session next week and the Senate before it would become law.

    Laura Kayali

  • French privacy chief warns against using facial recognition for 2024 Olympics

    PARIS — The French data protection authority’s president Marie-Laure Denis warned Tuesday against using facial recognition as part of the 2024 Paris Summer Olympics security toolkit.

    “The members of the CNIL’s college call on parliamentarians not to introduce facial recognition, that is to say the identification of people on the fly in the public space,” she told Franceinfo.

    The French government is seeking to ramp up France’s arsenal of surveillance powers to ensure the safety of the millions of tourists expected for the 2024 Paris Summer Olympics. The plans include AI-powered cameras for the first time — but not facial recognition.

    The Senate begins voting on the law introducing the new powers in a plenary session today. Senators are divided between those who want to add privacy safeguards and those who want to push the surveillance and security arsenal further, mainly by introducing facial recognition.

    “The amendment [to include facial recognition] was rejected in the Senate’s law committee, but it can come back [in the plenary session],” the CNIL’s chief cautioned.

    Civil liberties NGOs such as La Quadrature du Net and the Human Rights League are currently campaigning against the experimental AI-powered surveillance cameras. Denis however tried to assuage concerns.

    The CNIL will monitor algorithmic training to ensure there is no bias and that footage of people is deleted in due time, she said. The experiment will “not necessarily” become permanent, she added.

    Laura Kayali

  • France plots surveillance power grab for Paris 2024 Olympics

    PARIS — France is seeking to massively expand its arsenal of surveillance powers and tools to secure the millions of tourists expected for the 2024 Paris Summer Olympics.

    Among the plans are large-scale, real-time camera systems supported by an algorithm to spot suspicious behavior, including unsupervised luggage and alarming crowd movements like stampedes. Senators on Wednesday will vote on a law introducing the new powers, which are supposed to be temporary, with some lawmakers pushing to allow controversial facial-recognition technology.

    The stakes are high: The government badly wants to avoid “failures” like the ones that dented its reputation during the Champions League final last summer, and the trauma of the 2015 Paris terror attacks still looms large over the country.

    But the plans are already causing an uproar among privacy campaigners. “The Olympic Games are used as a pretext to pass measures the [security technology] industry has long been waiting for,” said Bastien Le Querrec from digital rights NGO La Quadrature du Net, who’s leading a campaign against algorithmic video surveillance.

    The French government already backtracked on deploying facial recognition after lawmakers within President Emmanuel Macron’s majority party raised concerns. It was also forced by the country’s data protection authority and top administrative court to build in more privacy safeguards.

    For now, the law would allow for “experimentation” with the surveillance systems, and the trial is supposed to end in June 2025 — 10 months after the sports competition wraps up.

    Critics, however, fear the law will lead to unwanted surveillance in the long term.

    One key question is what will happen to the AI-powered devices once the Olympic Games are over, especially since the legislation mentions not only sports events but also “festive” and “cultural” gatherings. In the past, Le Querrec warned, security measures initially designed to be temporary — for example, under the state of emergency that followed the 2015 attacks — ended up becoming permanent.

    Whether the tech survives the Olympics will depend on how the final law is written, according to Francisco Klauser, a professor at the University of Neuchâtel, who has written about surveillance and sporting events. 

    “In the history of mega-events, there is always a legacy,” he said. Countries staging major events are under “extraordinary circumstances and time pressure” that often mean systems get deployed that otherwise “would have been debated much more heavily,” he added.

    Case in point: IBM helped Rio de Janeiro install a “control room” in view of the 2016 Olympics, and the tech is still operational to this day, Klauser said.

    For the 2024 Olympics, France already has the cameras but will need to buy the software to analyze footage, an official from the interior ministry told POLITICO.

    Philippe Latombe, an MP from the centrist Macron-allied party Modem, said that French companies such as Atos, Idemia, XXII and Datakalab, among others, would be able to provide such tech. The lawmaker is co-chairing a fact-finding mission on video surveillance in public spaces.

    After the Senate votes on the law to allow “experimentations” with the surveillance systems, the legislation will go to the National Assembly, and lawmakers in both chambers are expected to fight over the balance between privacy and security.

    Time is already running out, Latombe warned, as algorithms will need to be trained on datasets for months before the Olympics kick off.

    Elisa Braun contributed reporting.

    Laura Kayali

  • Europe turns on TikTok

    In the United States, TikTok is a favorite punching bag for lawmakers who’ve compared the Chinese-owned app to “digital fentanyl” and say it should be banned.

    Now that hostility is spreading to Europe, where fears about children’s safety and reports that TikTok spied on journalists using their IP locations are fueling a backlash against the video-sharing app used by more than 250 million Europeans.

    As TikTok Chief Executive Shou Zi Chew heads to Brussels on Tuesday to meet with top digital policymaker Margrethe Vestager amid a wider reappraisal of EU ties with China, his company faces a slew of legal, regulatory and security challenges in the bloc — as well as a rising din of public criticism.

    One of the loudest critics is French President Emmanuel Macron, who has called TikTok “deceptively innocent” and a cause of “real addiction” among users, as well as a source of Russian disinformation. Such comments have gone hand-in-hand with aggressive media coverage in France, including Le Parisien daily’s December 29 front page calling TikTok “A real danger for the brains of our children.”

    New restrictions may be in order. During a trip to the United States in November, Macron told a group of American investors and French tech CEOs that he wanted to regulate TikTok, according to two people in the room. TikTok denies it is harmful and says it has measures to protect kids on the app.

    While it wasn’t clear what rules Macron was referring to — his office declined to comment — the remarks added to a darkening tableau for TikTok. In addition to two EU-wide privacy probes that are set to wrap up in coming months, TikTok has to contend with extensive new requirements on content moderation under the bloc’s new digital rulebook, the DSA, from mid-2023 — as well as the possibility of being caught up in the bloc’s new digital competition rulebook, the Digital Markets Act.

    In answers to emailed questions, France’s digital minister Jean-Noel Barrot said that France would rely on the DSA and DMA to regulate TikTok at an EU level, though he “remained vigilant on these ever-evolving models” of ad-supported social media. Barrot added that he “never failed to maintain a level of pressure appropriate to the stakes of the DSA” in meetings with TikTok executives.

    Ahead of Chew’s visit to Brussels, Thierry Breton, the bloc’s internal market commissioner, warned him about the need to “respect the integrality of our rules,” according to comments the commissioner made in Spain, reported by Reuters. A spokesperson for Vestager said she aimed to “review how the company was preparing for complying with its (possible) obligations under our regulation.”

    That said, the probes TikTok is facing deal with suspected violations that have already taken place. If Ireland’s data regulator, which leads investigations on behalf of other EU states, finds that TikTok has broken the bloc’s privacy rulebook, the General Data Protection Regulation, fines could amount to up to 4 percent of the firm’s global turnover. Penalties can be even higher under the DSA, which starts applying to big platforms in mid-2023.
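    The penalty ceiling mentioned above scales with company revenue. As a rough sketch of how that cap works (the turnover figures below are hypothetical; the €20 million fixed floor comes from GDPR Article 83(5), which applies whichever amount is higher, and is not stated in this article):

    ```python
    def gdpr_max_fine(annual_turnover_eur: float) -> float:
        """Upper bound for a GDPR Article 83(5) fine: the higher of
        EUR 20 million or 4% of worldwide annual turnover."""
        return max(20_000_000, 0.04 * annual_turnover_eur)

    # Hypothetical turnover figures, for illustration only.
    small_firm = gdpr_max_fine(100_000_000)     # 4% is only 4M, so the 20M floor applies
    large_firm = gdpr_max_fine(80_000_000_000)  # 4% of 80B, i.e. 3.2B
    ```

    The "whichever is higher" design means the ceiling only bites proportionally for very large firms; for small companies, the fixed floor dominates.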

    Spying fears

    And yet, having to fork over a few million euros could be the least of TikTok’s troubles in Europe, as some lawmakers here are following their U.S. peers to call for much tougher restrictions on the app amid fears that data from TikTok will be used for spying.

    TikTok is under investigation for sending data on EU users to China — one of two probes being led by Ireland. Reports that TikTok employees in China used TikTok data to track the movements of two Western journalists only intensified spying fears, especially in privacy-conscious Germany. (TikTok acknowledged the incident and fired four employees over what it said was unauthorized access to user data.)

    Citing a “lack of data security and data protection” as well as data transfers to China, the digital policy spokesman for Germany’s Social Democratic Party group in the Bundestag said that the U.S. ban on TikTok for federal employees’ phones was “understandable.”

    “I think it makes sense to also critically examine applications such as TikTok and, if necessary, to take measures. I would therefore advise civil servants, but also every citizen, not to install untrustworthy services and apps on their smartphones,” Jens Zimmermann added.

    Maximilian Funke-Kaiser, digital policy spokesman for the liberal FDP group in German parliament, went even further raising the prospect of a full ban on use of TikTok on government phones. “In view of the privacy and security risks posed by the app and the app’s far-reaching access rights, I consider the ban on TikTok on the work phones of U.S. government officials to be appropriate. Corresponding steps should also be examined in Germany.”

    For Moritz Körner, a centrist lawmaker in European Parliament, the potential risks linked to TikTok are far greater than with Twitter due to the former’s larger user base — at least five times as many users as Twitter in Europe — and the fact that up to a third of its users are aged 13-19. 

    “The China-app TikTok should be under the special surveillance of the European authorities,” he wrote in an email. “The fight between autocratic and democratic systems will also be fought via digital platforms. Europe has to wake up.”

    In Switzerland, lawmakers called earlier this month for a ban on the app on officials’ phones.

    Call for a ban

    So far, though, no European government or public body has followed the U.S. in banning TikTok usage on officials’ phones. In response to questions from POLITICO, a spokesperson for the European Commission — which previously advised its employees against using Meta’s WhatsApp — wrote that any restriction on TikTok usage for EU civil servants would “require a political decision and will be based on the careful assessment of data protection, cybersecurity concerns, and others.”

    The spokesperson also pointed out that “there are no official Commission accounts” on TikTok.

    A spokesperson for the European Parliament said its services “continuously monitor” for cybersecurity issues, but that “due to the nature of security matters, we don’t comment further on specific platforms.”

    POLITICO reached out to cybersecurity agencies for the EU, the U.K. and Germany to ask if they had or were planning any restrictions or recommendations having to do with TikTok. None flagged any specific restrictions, which doesn’t mean there aren’t any. In Germany, for example, officials who use iPhones can’t use or download TikTok in the section of their phone where confidential data can be accessed.

    For Hamburg’s data protection agency, one of 16 in Germany’s federal system, restricting TikTok on official phones would be a good idea.

    “Based on what we know from the available sources, we share, among other things, the concerns of the U.S. government that you mentioned and would therefore consider it appropriate for government agencies in the EU to refrain from using TikTok,” a spokesperson said.

    This suggests that the most immediate public threat for TikTok in Europe is privacy-related. Of the two probes being conducted by Ireland’s privacy regulator, the one looking into child safety on the app is the closest to wrapping up, according to a spokesperson for the Irish Data Protection Commission.

    Depending on the outcome of discussions between EU privacy regulators — the child safety probe is likely to trigger a dispute resolution mechanism — TikTok could face new requirements to verify age in the EU. The other probe, looking into TikTok’s transfers of data to China, is likely to wrap up around mid-year or toward the end of 2023 if a dispute is triggered, the spokesperson said.

    Antoaneta Roussi contributed reporting.

    Nicholas Vinocur, Clothilde Goujard, Océane Herrero and Louis Westendarp

  • Germany is (still) a Huawei hotspot in Europe

    Europe’s largest economy Germany hasn’t kicked its habit of using Chinese kit for its 5G telecoms networks yet.

    A new study analyzing Huawei’s market share in Europe estimates that Germany relies on Chinese technology for 59 percent of its 5G networks. Other key markets including Italy and the Netherlands are also among eight countries where over half of 5G networks run on Chinese equipment.

    The study, by Copenhagen-based telecoms consultancy Strand Consult, offers a rare glimpse of how some telecoms operators have relied on Chinese vendors Huawei and ZTE in the early stage of Europe’s 5G rollout. The figures also underline one of Western officials’ fears: that Europe’s pushback against Chinese technology for communications networks was slow to wean operators off Huawei.

    “It’s easier to preach than to practice,” said John Strand, founder of the consultancy, of EU governments’ hesitance to throw up clear barriers to using Chinese telecoms equipment.

    “It is more dangerous to be dependent on Chinese telecoms networks than to be dependent on Russian gas. Digital infrastructure is the fundament of society,” Strand said.

    The study matches a warning by the European Commission’s digital chief Margrethe Vestager, who said last month that “a number of countries have passed legislation but they have not put it into effect … Making it work is even better.”

    “It is not only Germany, but it is also Germany,” Vestager said in November.

    Germany’s ministries of digital affairs, interior and economic affairs didn’t immediately respond to a request for comment.

    Huawei also didn’t immediately respond to a request for comment.

    Clinging to Huawei kit

    European governments in the past two years have imposed security policies on the telecoms industry to cut down on Chinese kit.

    In some countries, this has led to a full stop on using Huawei and its smaller Chinese rival ZTE. Strand’s study estimates that nine EU countries, as well as Norway and the Faroe Islands, have no Chinese equipment in new 5G networks at all. France (17 percent) and Belgium (30 percent) have a much lower presence of Chinese kit in 5G than was the case in their 4G and 3G networks.

    But the EU regime on using Chinese technology in 5G is a patchwork. In other EU countries, those policies either allow operators to keep relying on Huawei for parts of their networks or require the government to actively step in to stop deals.

    The Berlin government in the past two years was criticized for being slow in setting up the legal framework that now allows it to intervene on contracts between operators and vendors if ministers choose to do so. Olaf Scholz’s government has taken a more critical stance on Chinese technology and just last month blocked Chinese investors from buying a German chip plant over potential security threats.

    But Germany’s largest operator, Deutsche Telekom, has maintained a strategic partnership with Huawei for years, and it and other operators worked with Huawei on the early stages of rolling out 5G, Strand’s report suggests.

    In Italy, the government has “golden powers” to stop contracts with Huawei. The former government led by Mario Draghi, seen as close to the U.S., intervened on a couple of deals but it is still unclear how the current government led by far-right Prime Minister Giorgia Meloni will position itself.

    In other, smaller countries like the Netherlands, operators were quick to launch 5G networks and some did so using Huawei, especially in “radio access network” (RAN) parts — effectively preempting EU and national decisions to cut down on Chinese kit.

    The EU in the past few months repeatedly slammed countries’ slow pace in adopting its common “5G security toolbox” guidelines to mitigate security risks in networks, according to several legislative texts.

    Huawei’s headwinds

    Strand’s data, gathered from European industry players in the past months, show Huawei was quick to provide operators with 5G gear in the first stages of Europe’s rollout.

    But another boutique telecoms consultancy, Dell’Oro, compiled data recently that showed the firm in the past year started running into serious obstacles in selling its kit.

    As of early last year — right as European officials were changing direction on 5G security — Sweden’s Ericsson overtook Huawei in market share of new European sales of radio access network (RAN) equipment for 3G, 4G and 5G, according to updated figures Dell’Oro compiled this summer, shared with POLITICO by an industry official. Radio access networks make up the largest chunk of network investment and include base stations and antennas.

    For 5G RAN specifically, Huawei lost its initial position as a market leader at the start of the rollout; it now provides 22 percent of sales, with Ericsson at 42 percent and Nokia at 32 percent in Europe, Dell’Oro estimated.

    A POLITICO investigation last month revealed how the Chinese tech giant was consolidating its operations in Europe and scaling down its lobbying and branding operations across a series of important markets, including France, the United Kingdom and its European representation in Brussels.

    Pressed by the United States and increasingly shunned on a continent it once considered its most strategic overseas market, Huawei is pivoting back toward the Chinese market, focusing its remaining European attention on just a few countries, among them Germany.

    China hawks, however, fear that Huawei could continue to supply 5G equipment because of the loopholes and political considerations of national governments.

    The new figures could serve as “an eye opener for a lot of governments and regulators in Europe,” Strand said.

    Sarah Wheaton contributed reporting.

    Laurens Cerulus

  • Meta faces record EU privacy fines

    This Christmas is bound to be an expensive one for U.S. tech giant Meta.

    The Big Tech firm looks set to soon face a huge regulatory bill for all three of its social networks, Facebook, WhatsApp and Instagram. Europe’s privacy regulator body, the European Data Protection Board, is expected to issue decisions on Monday that target the three platforms, after which Meta’s lead regulator in Ireland will issue a final decision within a month.

    The detail and possible value of the monetary penalty will remain under wraps until then, but the triplet of fines could add up to over €2 billion, financial statements by Meta indicate — setting a new record for the highest fines under the European Union’s feared General Data Protection Regulation (GDPR) received by a single company in one go.

    According to filings in Ireland, Meta has set aside €3 billion for EU privacy fines in 2022 and 2023. Its platform Instagram already got slapped with a €405 million fine in September for violating kids’ privacy, and Facebook so far has accumulated €282 million in penalties for data breaches as well as a €60 million hit from the French regulator. That leaves well over €2 billion earmarked by the firm for regulatory action.

    That’s a substantial hit for Meta, which announced last month it was laying off 11,000 employees globally amid lower sales and major costs linked to the firm’s pivot to the metaverse.

    Beyond hitting Meta’s pocket, the three fines expected within weeks could also put a bomb under its broader business model. The decisions stem from complaints filed by Austrian activist Max Schrems accusing the company of failing to have proper legal grounds to process millions of Europeans’ data. If the final decisions invalidate Meta’s argument that it’s processing data as part of a contract with users, the company would have to seek another legal basis for its data-fuelled ad targeting model.

    The cases have also revealed deep fissures between Europe’s data watchdogs.

    Ireland’s data protection commission largely backed Meta’s argument that it could claim it needs data to fulfill a “contract” with its users to provide personalized ads, in its draft decision issued a year ago. But that reasoning has long put Ireland in the minority amongst its colleagues. The Norwegian data protection authority said the Irish interpretation would render European data protection law “pointless,” according to a document obtained by POLITICO last year. The Irish regulator was also alone in voting against EU guidelines that banned companies from using the contract legal basis to use data to target ads.

    The three decisions are likely to lay into the Irish regulator’s initial position and, more worryingly for Meta, amp up the pressure for the company to go scrambling for new legal ways to gather and process data on Europeans.

    Meta also still faces an ongoing, high-profile probe into the company’s transfers of Europeans’ data to the U.S.

    Meta declined to comment. It can still appeal any fines resulting from the coming decisions.

    Vincent Manancourt