ReportWire

Tag: Privacy

  • Germany detains 2nd man over fatal arson attack on refugee shelter in 1991

    BERLIN — German authorities detained a second man Tuesday in connection with a racist arson attack on a shelter for asylum-seekers 32 years ago in which a Ghanaian man was killed.

    Federal prosecutors said Peter St., whose full surname wasn’t released due to privacy rules, was detained by police in the western state of Saarland on suspicion of being an accessory to murder and accessory to attempted murder.

    Prosecutors said the suspect, who holds neo-Nazi and racist views, is alleged to have met with other far-right extremists at a bar in the town of Saarlouis on Sept. 18, 1991, and called for attacks on migrant homes.

Peter St., who had a prominent role in the regional skinhead scene, is alleged to have praised attacks occurring in eastern Germany at the time and to have said that “something should burn or happen here too.”

    Another man who was present in the bar, identified only as Peter S., is then alleged to have gone to a nearby building housing asylum-seekers, poured gasoline on the staircase and set it alight. A 27-year-old Ghanaian resident, Samuel Kofi Yeboah, died after suffering smoke inhalation and severe burns. Two other residents suffered broken bones after jumping out of windows, while 18 people escaped unhurt.

    Peter S. was arrested last year and is currently on trial for murder, attempted murder and fatal arson.

    Authorities in Saarland have apologized for police failures in the immediate aftermath of the attack that allowed the suspects to remain free for decades.

    [ad_2]

    Source link

  • Microsoft Fined $20 Million For ‘Illegally’ Collecting Children’s Information On Xbox

    The Federal Trade Commission just announced that Microsoft has been fined $20 million “over charges it illegally collected personal information from children who signed up for its Xbox gaming system without their parents’ consent”.

    The ruling follows a larger one from December 2022, when Epic Games, developers of Fortnite, were hit with a $550 million fine for using “privacy-invasive default settings and deceptive interfaces that tricked Fortnite users, including teenagers and children”.

    In this instance, the FTC says the issue centred around the creation of children’s accounts on an Xbox console, a process that until late 2021 would allow a child to enter a certain amount of personal information before requiring a parent’s assistance and permission. Microsoft had been keeping that data (sometimes for “years”), even if the account wasn’t created, which is a violation of the Children’s Online Privacy Protection Rule (COPPA).

    Microsoft have already responded to the ruling with a post on the official Xbox blog, with Dave McCarthy, CVP Xbox Player Services, saying the violation was a result of a “glitch”, and that Microsoft will “continue improving” going forwards:

    We recently entered into a settlement with the U.S. Federal Trade Commission (FTC) to update our account creation process and resolve a data retention glitch found in our system. Regrettably, we did not meet customer expectations and are committed to complying with the order to continue improving upon our safety measures. We believe that we can and should do more, and we’ll remain steadfast in our commitment to safety, privacy, and security for our community.

    McCarthy goes on to explain the details of this “glitch”, and how it led to retention of children’s data despite this being “inconsistent with our policy to save that information for only 14 days”:

    During the investigation, we identified a technical glitch where our systems did not delete account creation data for child accounts where the account creation process was started but not completed. This was inconsistent with our policy to save that information for only 14 days to make it easier for gamers to pick up where they left off to complete the process. Our engineering team took immediate action: we fixed the glitch, deleted the data, and implemented practices to prevent the error from recurring. The data was never used, shared, or monetized.

    The FTC’s statement, meanwhile, says:

    Microsoft will pay $20 million to settle Federal Trade Commission charges that it violated the Children’s Online Privacy Protection Act (COPPA) by collecting personal information from children who signed up to its Xbox gaming system without notifying their parents or obtaining their parents’ consent, and by illegally retaining children’s personal information.

    “Our proposed order makes it easier for parents to protect their children’s privacy on Xbox, and limits what information Microsoft can collect and retain about kids,” said Samuel Levine, Director of the FTC’s Bureau of Consumer Protection. “This action should also make it abundantly clear that kids’ avatars, biometric data, and health information are not exempt from COPPA.”

    As part of a proposed order filed by the Department of Justice on behalf of the FTC, Microsoft will be required to take several steps to bolster privacy protections for child users of its Xbox system. For example, the order will extend COPPA protections to third-party gaming publishers with whom Microsoft shares children’s data. In addition, the order makes clear that avatars generated from a child’s image, and biometric and health information, are covered by the COPPA Rule when collected with other personal data. The order must be approved by a federal court before it can go into effect.

    Luke Plunkett

  • What to know as Prince Harry prepares to take on a British tabloid publisher in court

    LONDON — Prince Harry is going where other British royals haven’t for over a century: to a courtroom witness stand.

    The Duke of Sussex is set to testify in the first of his five pending legal cases largely centered around battles with British tabloids. Opening statements are scheduled Monday in his case.

    Harry said in court documents that the royal family had assiduously avoided the courts to prevent testifying about matters that might be embarrassing.

    His frustration and anger at the press, however, impelled him to buck convention by suing newspaper owners — allegedly against the wishes of his father, now King Charles III.

    If Harry testifies as scheduled Tuesday in his lawsuit against the publisher of the Daily Mirror, he’ll be the first member of the royal family to do so since the late 19th century, when Queen Victoria’s eldest son, Prince Albert Edward, testified twice in court.

    The man who would go on to become King Edward VII testified in the divorce proceedings of a woman he was accused of having an affair with (he denied it) and in a slander case involving a man who cheated at cards. Edward VII was the great-grandfather of Queen Elizabeth II, Harry’s grandmother.

A look at Prince Harry’s legal battles:

    HARRY’S HISTORY WITH PHONE HACKING AND PAPARAZZI

    The Daily Mirror case is one of three Harry has brought alleging phone hacking and other invasions of his privacy, dating back to when he was a boy.

In court documents, he described his relationship with the press as “uneasy,” but it runs much deeper than that. The prince blames paparazzi for causing the car crash that killed his mother, the late Princess Diana.

He also cites harassment and intrusion by the British press and “vicious, persistent attacks” on his wife, Meghan, including racist articles, as the reason the couple left royal life and fled to the U.S. in 2020. Reforming the news media has become one of his life’s missions.

    News that British journalists hacked phones for scoops first emerged in 2006 with the arrest of a private investigator and the royals reporter at the now-defunct News of the World. The two were jailed, and the reporter apologized for hacking phones used by aides of Harry, his older brother, Prince William, and their father.

    A full-blown hacking scandal erupted five years later when it was revealed that the Rupert Murdoch-owned tabloid eavesdropped on voicemails on the phone of a slain girl, forcing the paper to close and launching a public inquiry.

    Since that time, other newspapers have been accused of illegal intrusions that extended to tapping phones, bugging homes and using deception to obtain phone, bank and medical records.

    WHO IS HARRY SUING?

    The duke is taking on three of Britain’s best-known tabloid publishers.

    In addition to Mirror Group Newspapers, he is suing Murdoch’s News Group Newspapers, publisher of The Sun, and Associated Newspapers Ltd., which owns the Daily Mail and Mail on Sunday.

    The claims are similar: that journalists and people they employed listened to phone messages and committed other unlawful acts to snoop on Harry and invade his privacy.

    In a sign of how much the cases matter to him, Harry attended several days of hearings in March in the case against the Mail publisher.

    Several celebrities with similar allegations have also filed claims being heard alongside Harry’s, including Hugh Grant in the News Group case, and Elton John and Elizabeth Hurley in the Associated Newspapers case.

    Associated Newspapers “vigorously denies” the claims. News Group has apologized for News of the World’s hacking but The Sun does not accept liability or admit to any of the allegations, according to spokespeople.

    Both publishers argued during High Court hearings this spring that the lawsuits should be thrown out because Harry and the others failed to bring them within a six-year time limit.

    The lawyer representing Harry and other claimants said they should be granted an exception because the publishers lied and concealed evidence that prevented them from learning of the covert acts in time to meet the deadlines.

    WHAT’S THE CURRENT TRIAL ABOUT?

    At the outset of the proceedings, Mirror Group appeared to fall on its sword, acknowledging instances when its newspapers unlawfully gathered information. It apologized in court papers and said Harry and two of the other three claimants in the case were due compensation.

    But the admission involving Harry — the hiring of a private eye to dig up unspecified dirt for an article about his nightclubbing — wasn’t among the nearly 150 articles between 1995 and 2011 for which he claimed Mirror Group reporters used phone hacking and other illegal methods to gather material. The trial is focusing on 33 of those stories.

    Harry’s lawyer, David Sherborne, said unlawful acts by reporters and editors at the Daily Mirror, Sunday Mirror and Sunday People were “widespread and habitual” and carried out on “an industrial scale.” He pointed the finger at management, in particular TV personality Piers Morgan, a former Daily Mirror editor.

    Morgan has publicly denied involvement in phone hacking, as has Mirror Group in its court submissions. Mirror lawyer Andrew Green said a substantial proportion of the articles at issue involved a “breathtaking level of triviality” and that with the exception of a few instances of unlawful information gathering, the company’s reporters had used public records and sources to legally obtain information.

    The trial is a test case involving four claimants, including two members of Britain’s longest-running soap opera, “Coronation Street.” But the verdict could determine the outcome of hacking claims also made against Mirror Group by the estate of the late singer George Michael, former Girls Aloud member Cheryl and former soccer player Ian Wright.

The case is broken into two parts: a generic phase, lasting nearly three weeks, in which Harry’s lawyer laid out evidence of alleged skullduggery at the newspapers; and a second phase, starting Monday, in which the four claimants testify about specific acts targeting them.

    WHAT ARE THE OTHER LAWSUITS ABOUT?

    Harry’s fear and loathing of the press intersects with two active cases that center around the government’s decision to stop protecting him after he abandoned royal duties.

    Harry argued his security is compromised when he visits the U.K., saying that aggressive paparazzi chased him after a charity event in 2021. He sued the British government for withdrawing his security detail.

    With that lawsuit pending, he unsuccessfully tried to challenge the government’s subsequent rejection of his offer to pay for his own police protection.

    A judge is weighing whether Harry’s libel suit against Associated Newspapers for reporting that he tried to hide his legal efforts to get the British government to provide security should go to trial.

    “How Prince Harry tried to keep his legal fight with the government over police bodyguards a secret… then — just minutes after the story broke — his PR machine tried to put a positive spin on the dispute,” the Mail on Sunday wrote in its headline.

Separately, Meghan won an invasion-of-privacy case in 2021 against the Mail on Sunday for printing a private letter she wrote to her father. That led to a 1-pound settlement for violating her privacy and an undisclosed sum for copyright infringement.

    The couple has also settled lawsuits against photo agencies for flying a drone over their California home and a helicopter over a home where they were living in England.

  • Amazon charged with privacy violations over Alexa and Ring cameras

    The Associated Press

    WASHINGTON — Amazon will pay more than $30 million to settle alleged privacy violations involving its voice assistant Alexa and its doorbell camera Ring.

    The Federal Trade Commission voted to file charges in two separate cases Wednesday that could also force the company to delete certain data collected by its popular internet-connected devices.

In the Alexa case, the FTC said Amazon had deceived users of the voice assistant for years. It retained children’s recordings indefinitely unless a parent requested the information be deleted, the agency said, and even when it deleted those recordings, Amazon often kept the transcripts.

    The FTC ordered the company to delete inactive child accounts as well as certain voice information and geolocation data.

In imposing a $25 million fine, the agency said Amazon had violated the Children’s Online Privacy Protection Act, and FTC Consumer Protection Chief Samuel Levine accused the tech giant of sacrificing “privacy for profits” in “flouting parents’ deletion requests.”

    FTC Commissioner Alvaro Bedoya said Amazon kept kids’ data indefinitely to refine its voice recognition algorithm. In a separate statement, he said the Alexa ruling sends a message to all tech companies who are “sprinting to do the same” amid fierce competition in developing AI datasets.

    “Nothing is more visceral to a parent than the sound of their child’s voice,” tweeted Bedoya, the father of two small children.

In the Ring case, the FTC accuses Amazon’s home security camera subsidiary of allowing its employees and contractors to access consumers’ private videos, and of lax security practices that enabled hackers to take control of some accounts.

    Amazon bought California-based Ring in 2018, and many of the violations cited by the FTC predate the acquisition. The FTC’s order would require Ring to pay $5.8 million that would be used for consumer refunds.

    Amazon said it disagreed with the FTC’s claims about the two devices and denied violating the law. But it said the settlements “put these matters behind us.”

    “Our devices and services are built to protect customers’ privacy, and to provide customers with control over their experience,” the Seattle-based company said.

    The proposed orders must be approved by federal judges.

    FTC commissioners unanimously voted to file the charges against Amazon in both cases. In addition to the fine in the Alexa case, the proposed order prohibits Amazon from using deleted geolocation and voice information to create or improve any data product. The order also requires Amazon to create a privacy program for its use of geolocation information.

  • 8 tips for parents and teens on social media use — from the US surgeon general

    The U.S. surgeon general is calling for tech companies and lawmakers to take “immediate action” to protect kids’ and adolescents’ mental health on social media.

    But after years of insufficient action by both social media platforms and policymakers, parents and young people still bear most of the burden in navigating the fast-changing, often harmful world of secretive algorithms, addictive apps and extreme and inappropriate content found on platforms such as Instagram, TikTok and Snapchat.

    So what can parents and young people do now? Surgeon General Vivek Murthy has some tips.

    “Our children and adolescents don’t have the luxury of waiting years until we know the full extent of social media’s impact,” Murthy said in an advisory released Tuesday. “Their childhoods and development are happening now.”

    TIPS FOR YOUNG PEOPLE

    — Reach out for help: If you or someone you know is being negatively affected by social media, reach out to a trusted friend or adult for help. Check the American Academy of Pediatrics’ guidance on social media.

    — Create boundaries: Limit the use of phones, tablets, and computers for at least one hour before bedtime and through the night to make sure you get enough sleep. Keep mealtimes and in-person gatherings device‑free to help build social bonds and engage in two‑way conversations with others. Connect with people in person and make unplugged interactions a daily priority.

    — Be cautious about what you share: Personal information about you has value. Be selective with what you post and share online and with whom, as it is often public and can be stored permanently. If you aren’t sure if you should post something, it’s usually best if you don’t.

    — Don’t keep harassment or abuse a secret: Reach out to at least one person you trust, such as a close friend, family member, counselor, or teacher, who can give you the help and support you deserve. Visit stopbullying.gov for tips on how to report cyberbullying. If you have experienced online harassment and abuse by a dating partner, contact an expert at Love is Respect for support. If your private images have been taken and shared online without your permission, visit Take It Down to help get them removed.

    TIPS FOR PARENTS AND CAREGIVERS

— Create a family media plan: Agreed-upon expectations can help establish healthy technology boundaries at home, including social media use. A family media plan can promote open family discussion and rules about media use and include topics such as balancing screen/online time, content boundaries, and not disclosing personal information.

    — Create tech-free zones: Restrict the use of electronics at least one hour before bedtime and through the night. Keep meal times and other in-person gatherings tech-free. Help children develop social skills and nurture their in‑person relationships by encouraging unstructured and offline connections with others.

    — Model responsible behavior: Parents can set a good example of what responsible and healthy social media use looks like by limiting their own use, being mindful of social media habits (including when and how parents share information or content about their child), and modeling positive behavior on your social media accounts.

    — Empower kids: Teach kids about technology and empower them to be responsible online participants at the appropriate age. Discuss with children the benefits and risks of social media as well as the importance of respecting privacy and protecting personal information in age-appropriate ways. Have conversations with children about who they are connecting with, their privacy settings, their online experiences, and how they are spending their time online.

  • Meta fined record $1.3 billion and ordered to stop sending European user data to US

    LONDON — The European Union slapped Meta with a record $1.3 billion privacy fine Monday and ordered it to stop transferring users’ personal information across the Atlantic by October, the latest salvo in a decadelong case sparked by U.S. cybersnooping fears.

    The penalty of 1.2 billion euros is the biggest since the EU’s strict data privacy regime took effect five years ago, surpassing Amazon’s 746 million euro fine in 2021 for data protection violations.

    Meta, which had previously warned that services for its users in Europe could be cut off, vowed to appeal and ask courts to immediately put the decision on hold.

    The company said “there is no immediate disruption to Facebook in Europe.” The decision applies to user data like names, email and IP addresses, messages, viewing history, geolocation data and other information that Meta — and other tech giants like Google — use for targeted online ads.

    “This decision is flawed, unjustified and sets a dangerous precedent for the countless other companies transferring data between the EU and U.S.,” Nick Clegg, Meta’s president of global affairs, and chief legal officer Jennifer Newstead said in a statement.

    It’s yet another twist in a legal battle that began in 2013 when Austrian lawyer and privacy activist Max Schrems filed a complaint about Facebook’s handling of his data following former National Security Agency contractor Edward Snowden’s revelations of electronic surveillance by U.S. security agencies. That included the disclosure that Facebook gave the agencies access to the personal data of Europeans.

The saga has highlighted the clash between Washington and Brussels over the differences between Europe’s strict view on data privacy and the comparatively lax regime in the U.S., which lacks a federal privacy law. The EU has been a global leader in reining in the power of Big Tech with a series of regulations forcing them to police their platforms more strictly and protect users’ personal information.

    An agreement covering EU-U.S. data transfers known as the Privacy Shield was struck down in 2020 by the EU’s top court, which said it didn’t do enough to protect residents from the U.S. government’s electronic prying. Monday’s decision confirmed that another tool to govern data transfers — stock legal contracts — was also invalid.

    Brussels and Washington signed a deal last year on a reworked Privacy Shield that Meta could use, but the pact is awaiting a decision from European officials on whether it adequately protects data privacy.

    EU institutions have been reviewing the agreement, and the bloc’s lawmakers this month called for improvements, saying the safeguards aren’t strong enough.

Ireland’s Data Protection Commission handed down the fine as Meta’s lead privacy regulator in the 27-nation bloc because the Silicon Valley tech giant’s European headquarters is based in Dublin.

    The Irish watchdog said it gave Meta five months to stop sending European user data to the U.S. and six months to bring its data operations into compliance “by ceasing the unlawful processing, including storage, in the U.S.” of European users’ personal data transferred in violation of the bloc’s privacy rules.

    In other words, Meta has to erase all that data, which could be a bigger problem than the fine, said Johnny Ryan, senior fellow at the Irish Council for Civil Liberties, a nonprofit rights group that has worked on digital and data issues.

    “This order to delete data is really a headache for Meta,” Ryan said. If the company has to scrub data for hundreds of millions of European Union users going back 10 years, “it is very hard to see how it will be able to comply with that order.”

    If a new transatlantic privacy agreement does take effect before the deadlines, “our services can continue as they do today without any disruption or impact on users,” Meta said.

    Schrems predicted that Meta has “no real chance” of getting the decision materially overturned. And a new privacy pact might not mean the end of Meta’s troubles, because there’s a good chance it could be tossed out by the EU’s top court, he said.

    “Meta plans to rely on the new deal for transfers going forward, but this is likely not a permanent fix,” Schrems said in a statement. “Unless U.S. surveillance laws gets fixed, Meta will likely have to keep EU data in the EU.”

    Schrems said a possible solution could be a “federated” social network, where European data stays in Meta’s data centers in Europe, “unless users for example chat with a U.S. friend.”

    Meta warned in its latest earnings report that without a legal basis for data transfers, it will be forced to stop offering its products and services in Europe, “which would materially and adversely affect our business, financial condition, and results of operations.”

    The social media company might have to carry out a costly and complex revamp of its operations if it’s ultimately forced to stop the transfers. Meta has a fleet of 21 data centers, according to its website, but 17 of them are in the United States. Three others are in the European nations of Denmark, Ireland and Sweden. Another is in Singapore.

    Other social media giants are facing pressure over their data practices. TikTok has tried to soothe Western fears about the Chinese-owned short video sharing app’s potential cybersecurity risks with a $1.5 billion project to store U.S. user data on Oracle servers.

  • TikTok files lawsuit to overturn Montana’s 1st-in-nation ban on the video sharing app

    HELENA, Mont. — Social media company TikTok Inc. filed a lawsuit Monday seeking to overturn Montana’s first-in-the-nation ban on the video sharing app, arguing the law is an unconstitutional violation of free speech rights and is based on “unfounded speculation” that the Chinese government could access users’ data.

The lawsuit by TikTok, owned by Chinese tech company ByteDance, follows one filed last week by five content creators. They made similar arguments, including that the state of Montana has no authority to take action on matters of national security. Both lawsuits were filed in federal court in Missoula.

    Republican Gov. Greg Gianforte signed the bill Wednesday and the content creators’ lawsuit was filed hours later. The law is scheduled to take effect on Jan. 1, but cybersecurity experts say it could be difficult to enforce.

    TikTok says it has not shared and would not share U.S. user data with the Chinese government and has taken measures to protect the privacy and security of its users, including storing all U.S. user data in the United States, according to the lawsuit.

    Some lawmakers, the FBI and officials at other agencies are concerned that the video-sharing app could be used to allow the Chinese government to access information on U.S. citizens or push pro-Beijing misinformation that could influence the public.

    Chinese law compels Chinese companies to share data with the government for whatever purposes it deems to involve national security. TikTok says this has never happened.

    “The Chinese Communist Party is using TikTok as a tool to spy on Americans by collecting personal information, keystrokes, and even the locations of its users — and by extension, people without TikTok who affiliate with users may have information about themselves shared without even knowing it,” Emily Flower, a spokesperson for the Montana Department of Justice, said in a statement.

“We expected legal challenges and are fully prepared to defend the law that helps protect Montanans’ privacy and security,” she wrote.

    The federal government and about half the U.S. states, including Montana, have banned TikTok from government-owned devices.

    Montana’s new law prohibits downloads of TikTok in the state. It would fine any “entity” — an app store or TikTok — $10,000 per day for each time someone “is offered the ability” to access the social media platform or download the app. The penalties would not apply to users.

    Chatter about a TikTok ban has been around since 2020, when then-President Donald Trump attempted to bar the company from operating in the U.S. through an executive order that was halted in federal courts. Congress has also considered banning the app over security concerns.

    [ad_2]

    Source link

  • Meta fined record $1.3 billion and ordered to stop sending European user data to US

    Meta fined record $1.3 billion and ordered to stop sending European user data to US

    [ad_1]

    LONDON — The European Union slapped Meta with a record $1.3 billion privacy fine Monday and ordered it to stop transferring users’ personal information across the Atlantic by October, the latest salvo in a decadelong case sparked by U.S. cybersnooping fears.

    The penalty of 1.2 billion euros is the biggest since the EU’s strict data privacy regime took effect five years ago, surpassing Amazon’s 746 million euro fine in 2021 for data protection violations.

    Meta, which had previously warned that services for its users in Europe could be cut off, vowed to appeal and ask courts to immediately put the decision on hold.

    The company said “there is no immediate disruption to Facebook in Europe.” The decision applies to user data like names, email and IP addresses, messages, viewing history, geolocation data and other information that Meta — and other tech giants like Google — use for targeted online ads.

    “This decision is flawed, unjustified and sets a dangerous precedent for the countless other companies transferring data between the EU and U.S.,” Nick Clegg, Meta’s president of global affairs, and chief legal officer Jennifer Newstead said in a statement.

    It’s yet another twist in a legal battle that began in 2013 when Austrian lawyer and privacy activist Max Schrems filed a complaint about Facebook’s handling of his data following former National Security Agency contractor Edward Snowden’s revelations of electronic surveillance by U.S. security agencies. That included the disclosure that Facebook gave the agencies access to the personal data of Europeans.

    The saga has highlighted the clash between Washington and Brussels over the differences between Europe’s strict view on data privacy and the comparatively lax regime in the U.S., which lacks a federal privacy law. The EU has been a global leader in reining in the power of Big Tech with a series of regulations forcing them police their platforms more strictly and protect users’ personal information.

    An agreement covering EU-U.S. data transfers known as the Privacy Shield was struck down in 2020 by the EU’s top court, which said it didn’t do enough to protect residents from the U.S. government’s electronic prying. Monday’s decision confirmed that another tool to govern data transfers — stock legal contracts — was also invalid.

    Brussels and Washington signed a deal last year on a reworked Privacy Shield that Meta could use, but the pact is awaiting a decision from European officials on whether it adequately protects data privacy.

    EU institutions have been reviewing the agreement, and the bloc’s lawmakers this month called for improvements, saying the safeguards aren’t strong enough.

    Ireland’s Data Protection Commission handed down the fine as Meta’s lead privacy regulator in the 27-nation bloc because the Silicon Valley tech giant’s European headquarters is based in Dublin.

    The Irish watchdog said it gave Meta five months to stop sending European user data to the U.S. and six months to bring its data operations into compliance “by ceasing the unlawful processing, including storage, in the U.S.” of European users’ personal data transferred in violation of the bloc’s privacy rules.

    In other words, Meta has to erase all that data, which could be a bigger problem than the fine, said Johnny Ryan, senior fellow at the Irish Council for Civil Liberties, a nonprofit rights group that has worked on digital and data issues.

    “This order to delete data is really a headache for Meta,” Ryan said. If the company has to scrub data for hundreds of millions of European Union users going back 10 years, “it is very hard to see how it will be able to comply with that order.”

    If a new transatlantic privacy agreement does take effect before the deadlines, “our services can continue as they do today without any disruption or impact on users,” Meta said.

    Schrems predicted that Meta has “no real chance” of getting the decision materially overturned. And a new privacy pact might not mean the end of Meta’s troubles, because there’s a good chance it could be tossed out by the EU’s top court, he said.

    “Meta plans to rely on the new deal for transfers going forward, but this is likely not a permanent fix,” Schrems said in a statement. “Unless U.S. surveillance laws get fixed, Meta will likely have to keep EU data in the EU.”

    Schrems said a possible solution could be a “federated” social network, where European data stays in Meta’s data centers in Europe, “unless users for example chat with a U.S. friend.”

    Meta warned in its latest earnings report that without a legal basis for data transfers, it will be forced to stop offering its products and services in Europe, “which would materially and adversely affect our business, financial condition, and results of operations.”

    The social media company might have to carry out a costly and complex revamp of its operations if it’s ultimately forced to stop the transfers. Meta has a fleet of 21 data centers, according to its website, but 17 of them are in the United States. Three others are in the European nations of Denmark, Ireland and Sweden. Another is in Singapore.

    Other social media giants are facing pressure over their data practices. TikTok has tried to soothe Western fears about the Chinese-owned short video sharing app’s potential cybersecurity risks with a $1.5 billion project to store U.S. user data on Oracle servers.

  • EU hits Meta with record €1.2B privacy fine

    U.S. tech giant Meta has been hit with a record €1.2 billion fine for not complying with the EU’s privacy rulebook.

    The Irish Data Protection Commission announced on Monday that Meta violated the General Data Protection Regulation (GDPR) when it shuttled troves of personal data of European Facebook users to the United States without sufficiently protecting them from Washington’s data surveillance practices.

    It’s the largest fine imposed under the bloc’s flagship privacy law, and it comes on the eve of the fifth anniversary of the law taking effect on May 25.

    Amazon was previously fined €746 million by Luxembourg, and the Irish regulator has also imposed four fines on Meta’s platforms Facebook, Instagram and WhatsApp, ranging between €225 million and €405 million, in the past two years.

    The Irish privacy watchdog said that Meta’s use of a legal instrument known as standard contractual clauses (SCCs) to move data to the U.S. “did not address the risks to the fundamental rights and freedoms” of Facebook’s European users raised by a landmark ruling from the EU’s top court.

    The European Court of Justice in 2020 struck down an EU-U.S. data flows agreement known as the Privacy Shield over fears of U.S. intelligence services’ surveillance practices. In the same judgment, the top EU court also tightened requirements to use SCCs, another legal tool widely used by companies to transfer personal data to the U.S.

    Meta — as well as other international companies — kept relying on the legal instrument as European and U.S. officials struggled to put together a new data flows arrangement and the U.S. tech giant lacked other legal mechanisms to transfer its personal data.

    The EU and U.S. are finalizing a new data flow deal that could come as early as July and as late as October. Meta has until October 12 to stop relying on SCCs for their transfers.

    The U.S. tech giant previously warned that if it were forced to stop using SCCs without a proper alternative data flow agreement in place, it could shut down services like Facebook and Instagram in Europe.

    Meta also has until November 12 to delete or move back to the EU the personal data of European Facebook users transferred and stored in the U.S. since 2020 and until a new EU-U.S. deal is reached. However, it’s unlikely the tech firm will have to delete or move data as European and U.S. negotiators are expected to finalize the new deal before early November.

    “This decision is flawed, unjustified and sets a dangerous precedent for the countless other companies transferring data between the EU and U.S.,” Meta’s President of Global Affairs Nick Clegg and Chief Legal Officer Jennifer Newstead said in a statement on Monday.

    Clegg and Newstead said the company will appeal the decision and seek a stay with the courts to pause the implementation deadlines. “There is no immediate disruption to Facebook because the decision includes implementation periods that run until later this year,” they added.

    Max Schrems, the privacy activist behind the original 2013 complaint supporting the case, said: “We are happy to see this decision after ten years of litigation … Unless U.S. surveillance laws get fixed, Meta will have to fundamentally restructure its systems.”

    The Irish Data Protection Commission said it disagreed with the fine and measures it was imposing on Meta but had been forced to act by the pan-European network of national regulators, the European Data Protection Board (EDPB), after Dublin’s initial decision was challenged by four of its peer regulators in Europe, from Germany, France, Spain and Austria.

    According to internal discussions released on Monday, the Irish regulator earlier this year vehemently argued against imposing a financial penalty on the social media giant, saying that such a decision would be disproportionate for the alleged privacy abuses. Dublin also argued any such fine against Meta could be viewed as discriminatory since U.S. tech firm Google had not faced similar penalties in other transatlantic data protection cases.

    But Ireland was overruled by other European regulators. In a stinging rebuke, the pan-EU body of privacy regulators EDPB said it took the view that “Meta committed the infringement at least with the highest degree of negligence,” the discussions released Monday showed, arguing in favor of a fine. The EDPB backed claims from the four EU privacy regulators that Meta should also be forced to delete historical European data affected by the decision.

    This article was updated to include comments from Meta and Max Schrems and to add details about the decision.

    Clothilde Goujard and Mark Scott

  • Stock market today: Wall Street drifts higher amid debt ceiling talks

    Wall Street is drifting between small gains and losses early Monday as a deadline nears to reach a deal to avoid a federal default.

    Dow and S&P futures rose less than 0.1% before the bell.

    There’s a pivotal meeting set for later in the day at the White House between President Joe Biden and House Speaker Kevin McCarthy on the debt ceiling. A default on the U.S. debt would almost surely cause a recession in the American economy, which would have damaging effects on economies worldwide.

    “It seems pretty likely that a full-fledged deal will be reached before early June, but the timing is hard to predict,” Stephen Innes, managing partner at SPI Asset Management, said of the U.S. efforts to avoid a potentially disastrous default on its debt.

    “While negotiation strategy and political incentives imply a last-minute deal, we will soon find out if it’s baked beans or lobster during the Memorial Day holiday.”

    The White House and House Republicans wrapped up another round of talks over the weekend.

    Washington needs to strike a budget compromise along with a deal to raise the nation’s borrowing limit to avoid a federal default. Democrats and Republicans face a June 1 deadline, which is when the U.S. government could run out of cash to pay its bills unless Congress allows it to borrow more.

    On the positive side, U.S. Federal Reserve Chair Jerome Powell made comments Friday indicating the Fed may leave interest rates alone at its next meeting in June.

    The majority of companies in the S&P 500 have been reporting stronger earnings for the start of the year than analysts expected. But they’re still on track to report a second consecutive quarter of profit declines from year-ago levels.

    Facebook parent company Meta lost 1.4% in premarket trading after the European Union slapped the social media giant with a record $1.3 billion privacy fine Monday. The EU ordered Meta to stop transferring user data across the Atlantic by October, the latest salvo in a decadelong case sparked by U.S. cybersnooping fears.

    Meta, which had previously warned that services for its users in Europe could be cut off, vowed to appeal and ask courts to immediately put the decision on hold.

    Shares of Micron slumped more than 4% in premarket after China’s government on Sunday told users of computer equipment deemed sensitive to stop buying products from the biggest U.S. memory chipmaker. Micron products have unspecified “serious network security risks” that pose hazards to China’s information infrastructure and affect national security, the Cyberspace Administration of China said on its website.

    In Japan, data for machinery orders in March, released Monday, highlighted a slowdown in the world’s third-largest economy, with the key indicator falling 3.9%, a second straight month of declines. But analysts think a recovery is coming during this quarter, as domestic manufacturing gradually rebounds from the various negative effects related to the pandemic.

    Japan’s benchmark Nikkei 225 gained 0.9% to finish at 31,086.82. Australia’s S&P/ASX 200 slid 0.2% to 7,263.30. South Korea’s Kospi gained 0.8% to 2,557.08. Hong Kong’s Hang Seng jumped 1.2% to 19,678.17, while the Shanghai Composite edged up 0.4% to 3,296.47.

    France’s CAC 40 and Germany’s DAX each slipped 0.3%, while Britain’s FTSE 100 was unchanged.

    In energy trading, benchmark U.S. crude picked up 15 cents to $71.70 a barrel. Brent crude, the international standard, gained 22 cents to $75.80 a barrel.

    In currency trading, the U.S. dollar rose to 138.28 yen from 137.88 yen. The euro cost $1.0824, up from $1.0808.

    ___

    Kageyama reported from Tokyo; Ott reported from Silver Spring, Md.

  • ChatGPT makes its debut as a smartphone app on iPhones

    ChatGPT is now a smartphone app, which could be good news for people who like to use the artificial intelligence chatbot and bad news for all the clone apps that have tried to profit off the technology.

    The free app became available on iPhones and iPads in the U.S. on Thursday and will later be coming to Android devices. Unlike the desktop web version, the mobile version on Apple’s iOS operating system also enables users to speak to it using their voice.

    The company that makes it, OpenAI, said it will remain ad-free but “syncs your history across devices.”

    “We’re starting our rollout in the U.S. and will expand to additional countries in the coming weeks,” said a blog post announcing the new app, which is described in the App Store as the “official app” by OpenAI.

    It’s been more than five months since OpenAI released ChatGPT to the public, sparking excitement and alarm at its ability to generate convincingly human-like essays, poems, form letters and conversational answers to almost any question. But the San Francisco startup never seemed to be in a hurry to get it onto phones — where most people access the internet.

    “We’re not trying to get people to use it more and more,” OpenAI CEO Sam Altman told U.S. senators this week in a hearing over how to regulate AI systems such as those built by his company.

    The delay in getting the product on phones helped fuel a rise of clones built on similar technology, some of which the security firm Sophos described as “fleeceware” in a report this week because they push unsuspecting users toward enrolling in a free trial that converts into a recurring subscription, or use intrusive advertising techniques.

    Another privacy researcher, Simon Migliano, said the official ChatGPT app might eventually starve similar-sounding apps of new users, but that could take a while because many of those apps were given names deliberately intended to confuse people into thinking they already have the official app. They were also “hyper-optimized” to rank highly in Apple’s App Store search results, said Migliano, head of research at Top10VPN.com.

    “For many of those who have already downloaded a clone, it’s likely they will simply stick with the ChatGPT apps they already have and continue to have their personal data harvested and sold,” Migliano said.

    Altman told Congress this week that his company doesn’t try to maximize engagement because it doesn’t have an advertising-based business, and because it’s costly to train and run its AI models on computer chips known as graphics processing units.

    “In fact, we’re so short on GPUs, the less people use our products, the better,” Altman said.

    The new app does include an option to pay for a premium version of ChatGPT with additional features. Along with those subscriptions, the company makes money from developers and corporations that pay to integrate its AI models into their own apps and products.

    Its chief partner, Microsoft, has invested billions of dollars into the startup and has integrated ChatGPT-like technology into its own products, including a chatbot for its search engine Bing.

    The ChatGPT app will now compete for attention with the Bing chatbot already available on iPhones, and could eventually compete with a mobile version of rival Google’s chatbot, called Bard. Versions of OpenAI’s chatbot technology can also be found in other apps, such as the “My AI” feature on Snapchat.

  • Meta faces record privacy fine for data transfers to the US

    Meta is expected to face a record privacy fine on Monday when Ireland’s data protection watchdog confirms the social media platform mishandled people’s data when shipping it to the United States, according to two people with direct knowledge of the upcoming decision.

    POLITICO was not able to confirm the size of the record-setting penalty, which will likely be more than the €746 million fine that Amazon was forced to pay in 2021 for similarly flouting the European Union’s privacy standards, said the people, who spoke on condition of anonymity to discuss internal deliberations.

    Ireland’s Data Protection Commission will publish its ruling on Monday; it is also expected to demand that Meta’s Facebook stop using the complex legal instruments known as standard contractual clauses to move EU data to the U.S. by the fall.

    The upcoming decision dates back to revelations in 2013 from Edward Snowden, the former U.S. National Security Agency contractor, who disclosed that American authorities had repeatedly accessed people’s information via tech companies like Facebook and Google.

    Max Schrems, an Austrian privacy campaigner, filed a legal challenge against Facebook for failing to protect his privacy rights, setting off a decade-long battle over the legality of moving EU data to the U.S.

    Europe’s top court has repeatedly stated Washington does not have sufficient checks in place to protect Europeans’ personal information, and the U.S. recently updated its internal legal protections to give the EU greater assurances that American intelligence agencies will follow new rules governing such data access.

    Meta declined to comment. The Irish Data Protection Commission did not respond in time for publication.

    Mark Scott and Clothilde Goujard

  • TSA is testing facial recognition at more airports, raising privacy concerns

    BALTIMORE (AP) — A passenger walks up to an airport security checkpoint, slips an ID card into a slot and looks into a camera atop a small screen. The screen flashes “Photo Complete” and the person walks through — all without having to hand over their identification to the TSA officer sitting behind the screen.

    It’s all part of a pilot project by the Transportation Security Administration to assess the use of facial recognition technology at a number of airports across the country.

    “What we are trying to do with this is aid the officers to actually determine that you are who you say you are,” said Jason Lim, identity management capabilities manager, during a demonstration of the technology to reporters at Baltimore-Washington International Thurgood Marshall Airport.

    The effort comes at a time when the use of various forms of technology to enhance security and streamline procedures is only increasing. TSA says the pilot is voluntary and accurate, but critics have raised concerns about questions of bias in facial recognition technology and possible repercussions for passengers who want to opt out.

    The technology is currently in 16 airports. In addition to Baltimore, it’s being used at Reagan National near Washington, D.C., airports in Atlanta, Boston, Dallas, Denver, Detroit, Las Vegas, Los Angeles, Miami, Orlando, Phoenix, Salt Lake City, San Jose, and Gulfport-Biloxi and Jackson in Mississippi. However, it’s not at every TSA checkpoint, so not every traveler going through those airports would necessarily experience it.

    Travelers put their driver’s license into a slot that reads the card or place their passport photo against a card reader. Then they look at a camera on a screen about the size of an iPad, which captures their image and compares it to their ID. The technology is both checking to make sure the people at the airport match the ID they present and that the identification is in fact real. A TSA officer is still there and signs off on the screening.

    A small sign alerts travelers that their photo will be taken as part of the pilot and that they can opt out if they’d like. It also includes a QR code for them to get more information.

    Since its launch, the pilot has come under scrutiny from some elected officials and privacy advocates. In a February letter to TSA, five senators — four Democrats and an Independent who is part of the Democratic caucus — demanded the agency stop the program, saying: “Increasing biometric surveillance of Americans by the government represents a risk to civil liberties and privacy rights.”

    As various forms of technology that use biometric information like face IDs, retina scans or fingerprint matches have become more pervasive in both the private sector and the federal government, it’s raised concerns among privacy advocates about how this data is collected, who has access to it and what happens if it gets hacked.

    Meg Foster, a justice fellow at Georgetown University’s Center on Privacy and Technology, said there are concerns about bias within the algorithms of various facial recognition technologies. Some have a harder time recognizing faces of minorities, for example. And there’s the concern of outside hackers figuring out ways to hack into government systems for nefarious aims.

    With regard to the TSA pilot, Foster said that while the agency says it isn’t currently storing the biometric data it collects, she worries that could change in the future. And while people are allowed to opt out, she said it’s not fair to put the onus on harried passengers who might be worried about missing their flight if they do.

    “They might be concerned that if they object to face recognition, that they’re going to be under further suspicion,” Foster said.

    Jeramie Scott, with the Electronic Privacy Information Center, said that while it’s voluntary now it might not be for long. He noted that David Pekoske, who heads TSA, said during a talk in April that eventually the use of biometrics would be required because they’re more effective and efficient, although he gave no timeline.

    Scott said he’d prefer TSA not use the technology at all. At the least, he’d like to see an outside audit to verify that the technology isn’t disproportionally affecting certain groups and that the images are deleted immediately.

    TSA says the goal of the pilot is to improve the accuracy of the identity verification without slowing down the speed at which passengers pass through the checkpoints — a key issue for an agency that sees 2.4 million passengers daily. The agency said early results are positive and have shown no discernible difference in the algorithm’s ability to recognize passengers based on things like age, gender, race and ethnicity.

    Lim said the images aren’t being compiled into a database, and that photos and IDs are deleted. Since this is an assessment, in limited circumstances some data is collected and shared with the Department of Homeland Security’s Science and Technology Directorate. TSA says that data is deleted after 24 months.

    Lim said the camera only turns on when a person puts in their ID card — so it’s not randomly gathering images of people at the airport. That also gives passengers control over whether they want to use it, he said. And he said that research has shown that while some algorithms do perform worse with certain demographics, it also shows that higher-quality algorithms, like the one the agency uses, are much more accurate. He said using the best available cameras also is a factor.

    “We take these privacy concerns and civil rights concerns very seriously, because we touch so many people every day,” he said.

    Retired TSA official Keith Jeffries said the pandemic greatly accelerated the rollout of various types of this “touchless” technology, whereby a passenger isn’t handing over a document to an agent. And he envisioned a “checkpoint of the future” where a passenger’s face can be used to check their bags, go through the security checkpoints and board the plane — all with little to no need to pull out a boarding card or ID documents.

    He acknowledged the privacy concerns and lack of trust many people have when it comes to giving biometric data to the federal government, but said in many ways the use of biometrics is already deeply embedded in society through the use of privately owned technology.

    “Technology is here to stay,” he said.

    __

    Follow Santana on Twitter @ruskygal.

  • ChatGPT’s chief testifies before Congress as concerns grow about artificial intelligence risks

    The head of the artificial intelligence company that makes ChatGPT told Congress on Tuesday that government intervention “will be critical to mitigate the risks of increasingly powerful” AI systems.

    “As this technology advances, we understand that people are anxious about how it could change the way we live. We are too,” OpenAI CEO Sam Altman testified at a Senate hearing Tuesday.

    His San Francisco-based startup rocketed to public attention after it released ChatGPT late last year. ChatGPT is a free chatbot tool that answers questions with convincingly human-like responses.

    What started out as a panic among educators about ChatGPT’s use to cheat on homework assignments has expanded to broader concerns about the ability of the latest crop of “generative AI” tools to mislead people, spread falsehoods, violate copyright protections and upend some jobs.

    And while there’s no immediate sign that Congress will craft sweeping new AI rules, as European lawmakers are doing, the societal concerns brought Altman and other tech CEOs to the White House earlier this month and have led U.S. agencies to promise to crack down on harmful AI products that break existing civil rights and consumer protection laws.

    Sen. Richard Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology and the law, opened the hearing with a recorded speech that sounded like the senator but was actually a voice clone trained on Blumenthal’s floor speeches, reciting remarks written by ChatGPT after he asked the chatbot how he would open the hearing.

    The result was impressive, said Blumenthal, but he added, “What if I had asked it, and what if it had provided, an endorsement of Ukraine surrendering or (Russian President) Vladimir Putin’s leadership?”

    Blumenthal said AI companies ought to be required to test their systems and disclose known risks before releasing them.

    Founded in 2015, OpenAI is also known for other AI products including the image-maker DALL-E. Microsoft has invested billions of dollars into the startup and has integrated its technology into its own products, including its search engine Bing.

    Altman is also planning to embark on a worldwide tour this month to national capitals and major cities across six continents to talk about the technology with policymakers and the public. On the eve of his Senate testimony, he dined with dozens of U.S. lawmakers, several of whom told CNBC they were impressed by his comments.

    Also testifying were IBM’s chief privacy and trust officer, Christina Montgomery, and Gary Marcus, a professor emeritus at New York University who was among a group of AI experts who called on OpenAI and other tech firms to pause their development of more powerful AI models for six months to give society more time to consider the risks. The letter was a response to the March release of OpenAI’s latest model, GPT-4, described as more powerful than ChatGPT.

    “Artificial intelligence will be transformative in ways we can’t even imagine, with implications for Americans’ elections, jobs, and security,” said the panel’s ranking Republican, Sen. Josh Hawley of Missouri. “This hearing marks a critical first step towards understanding what Congress should do.”

    Altman and other tech industry leaders have said they welcome some form of AI oversight but have cautioned against what they see as overly heavy-handed rules. In a copy of her prepared remarks, IBM’s Montgomery asked Congress to take a “precision regulation” approach.

    “This means establishing rules to govern the deployment of AI in specific use-cases, not regulating the technology itself,” Montgomery said.

  • Hong Kong leader says China’s sentencing of US citizen exposes national security threats

    HONG KONG — Hong Kong’s leader on Tuesday said the sentencing on spying charges of a U.S. citizen in China, who was also a permanent resident of the semi-autonomous city, illustrated that the territory should “stay vigilant to national security risks hidden in society.”

    The government said mainland Chinese authorities had informed Hong Kong of the arrest of John Shing-Wan Leung in 2021. He was sentenced Monday to life in prison. Hong Kong’s government was prepared to provide assistance to anyone arrested by mainland authorities upon request but had not yet received any such request in Leung’s case, authorities said.

    Neither Hong Kong Chief Executive John Lee nor the court in the eastern Chinese city of Suzhou that tried Leung, 78, has released details of his alleged crime.

    Since taking office last year, Lee has taken a hard line toward any signs of dissent, backed up by the uncompromising attitude adopted by Chinese leaders from authoritarian Communist Party head Xi Jinping on down.

    “This incident showed us that national security risks could be hidden in society. That’s why we repeatedly stressed that, although Hong Kong’s situation appears to be largely stabilized, we can’t let down our guard over national security risks,” said Lee, a former police officer and head of security in the city.

    A longtime Beijing loyalist, he was effectively appointed to the top position after running unopposed in an election choreographed by Beijing last year.

    In an echo of party propaganda, Lee referred to the 2019 pro-democracy protests that triggered a crackdown as “black violence” and Hong Kong’s version of “color revolution,” a phrase used by China and Russia to describe political movements seeking to overturn authoritarian regimes. Lee said the protest movement was an alarm bell that reminds the city to keep monitoring such risks.

    Leung’s sentencing threatens to further exacerbate already strained ties between Beijing and Washington.

    Leung was detained April 15, 2021, by the local bureau of China’s counterintelligence agency in Suzhou, according to a statement posted by the city’s intermediate court on its social media site. His detention came after China had closed its borders and imposed tight domestic travel restrictions and lockdowns affecting tens of millions to fight the spread of COVID-19.

    Such investigations and trials are held behind closed doors and little information is generally released.

    The harsh sentence given Leung was especially notable because of his previous affiliations with pro-Communist Party organizations, including one seeking overseas support for Beijing’s goal of unification with self-governing Taiwan.

    Relations between Washington and Beijing are at their lowest in decades amid disputes over trade, technology, human rights and China’s increasingly aggressive territorial claims toward Taiwan, the South China Sea and elsewhere.

    High-level government exchanges between the sides have been placed on hold and U.S. companies are delaying major investments amid mixed messaging from Beijing. Many Chinese firms, most notably telecoms giant Huawei, have been effectively shut out of the U.S. market due to legal bans and high tariffs.

    The sentencing comes as U.S. President Joe Biden is traveling to Hiroshima, Japan this weekend for the summit of the Group of Seven major industrial nations, followed by a visit to Papua New Guinea, a Pacific island nation in a region where China has sought to expand its economic, military and diplomatic influence.

    While the Suzhou court offered no indication of a link between Leung’s case and overall China-U.S. relations, spying charges in China often appear highly selective and evidence backing them up is held in secret. The party’s rigid control over courts, civil society and the media effectively blocks efforts to gain further information or mount legal appeals.

    The U.S. Embassy in Beijing said Monday it was aware of Leung’s case, but could not comment further due to privacy concerns. “The Department of State has no greater priority than the safety and security of U.S. citizens overseas,” the embassy said.

    A former British colony, Hong Kong was promised it would retain its financial, social and political liberties when returned to China in 1997. Beijing has since torn up that commitment through progressively harsher restrictions on public gatherings, free speech and political participation, while still promoting the city as an efficient and corruption-free center for trade and finance.

    Meanwhile, on the mainland, Chinese national security agencies have raided the offices of foreign business consulting firms in Beijing and other cities as part of a crackdown on foreign businesses that provide sensitive economic data.

    The pressure on foreign companies appears to clash with attempts by Beijing to lure back foreign investors after draconian COVID-19 pandemic restrictions were lifted at the beginning of the year.

    It wasn’t clear who represented Leung at his trial and his family has not commented on the sentence. Friends and former colleagues declined The Associated Press’ requests for comment.

    Long pretrial detentions are not unusual in China and prosecutors have broad powers to hold people charged in national security cases, regardless of their citizenship status.

    Two Chinese-Australians, Cheng Lei, who formerly worked for China’s state broadcaster, and writer Yang Jun, have been held since 2020 and 2019 respectively, without word on their sentencing.

    Government suspicion is particularly focused on Chinese-born foreign citizens and people from Taiwan and Hong Kong, especially if they have political contacts or work in academia or publishing.

    Under Xi, the party has launched multiple campaigns against what it calls foreign efforts to sabotage its rule, without showing evidence. Online commentary and independent information sources have been muzzled and universities ordered to censor discussions of human rights, modern Chinese history and ideas that could prompt questions about total Communist Party control.

    Xi’s government has also taken a hard line on foreign relations, most recently ordering a Canadian diplomat to leave at short notice in retaliation for Ottawa’s expulsion of a staffer at the Chinese Embassy who was accused of threatening a member of the Canadian Parliament and his family members living in Hong Kong.

    China’s leader for a decade who faces no term limits, Xi has taken a highly confrontational stance toward the U.S. and other democracies, while backing Russian President Vladimir Putin in his invasion of Ukraine and supporting other autocratic governments from Nicaragua to Myanmar.

    [ad_2]

    Source link

  • TSA is testing facial recognition at more airports, raising privacy concerns

    TSA is testing facial recognition at more airports, raising privacy concerns

    [ad_1]

    BALTIMORE — A passenger walks up to an airport security checkpoint, slips an ID card into a slot and looks into a camera atop a small screen. The screen flashes “Photo Complete” and the person walks through — all without having to hand over their identification to the TSA officer sitting behind the screen.

    It’s all part of a pilot project by the Transportation Security Administration to assess the use of facial recognition technology at a number of airports across the country.

    “What we are trying to do with this is aid the officers to actually determine that you are who you say who you are,” said Jason Lim, identity management capabilities manager, during a demonstration of the technology to reporters at Baltimore-Washington International Thurgood Marshall Airport.

    The effort comes at a time when the use of various forms of technology to enhance security and streamline procedures is only increasing. TSA says the pilot is voluntary and accurate, but critics have raised concerns about questions of bias in facial recognition technology and possible repercussions for passengers who want to opt out.

    The technology is currently in 16 airports. In addition to Baltimore, it’s being used at Reagan National near Washington, D.C., airports in Atlanta, Boston, Dallas, Denver, Detroit, Las Vegas, Los Angeles, Miami, Orlando, Phoenix, Salt Lake City, San Jose, and Gulfport-Biloxi and Jackson in Mississippi. However, it’s not at every TSA checkpoint so not every traveler going through those airports would necessarily experience it.

    Travelers put their driver’s license into a slot that reads the card or place their passport photo against a card reader. Then they look at a camera on a screen about the size of an iPad, which captures their image and compares it to their ID. The technology is both checking to make sure the people at the airport match the ID they present and that the identification is in fact real. A TSA officer is still there and signs off on the screening.

    A small sign alerts travelers that their photo will be taken as part of the pilot and that they can opt out if they’d like. It also includes a QR code for them to get more information.

    Since its launch, the pilot has come under scrutiny from some elected officials and privacy advocates. In a February letter to TSA, five senators — four Democrats and an Independent who is part of the Democratic caucus — demanded the agency stop the program, saying: “Increasing biometric surveillance of Americans by the government represents a risk to civil liberties and privacy rights.”

    As various forms of technology that use biometric information like face IDs, retina scans or fingerprint matches have become more pervasive in both the private sector and the federal government, it’s raised concerns among privacy advocates about how this data is collected, who has access to it and what happens if it gets hacked.

    Meg Foster, a justice fellow at Georgetown University’s Center on Privacy and Technology, said there are concerns about bias within the algorithms of various facial recognition technologies. Some have a harder time recognizing faces of minorities, for example. And there’s the concern of outside hackers figuring out ways to hack into government systems for nefarious aims.

    With regard to the TSA pilot, Foster said she has concerns that while the agency says it’s not currently storing the biometric data it collects, what if that changes in the future? And while people are allowed to opt out, she said it’s not fair to put the onus on harried passengers who might be worried about missing their flight if they do.

    “They might be concerned that if they object to face recognition, that they’re going to be under further suspicion,” Foster said.

    Jeramie Scott, with the Electronic Privacy Information Center, said that while it’s voluntary now it might not be for long. He noted that David Pekoske, who heads TSA, said during a talk in April that eventually the use of biometrics would be required because they’re more effective and efficient, although he gave no timeline.

    Scott said he’d prefer TSA not use the technology at all. At the least, he’d like to see an outside audit to verify that the technology isn’t disproportionately affecting certain groups and that the images are deleted immediately.

    TSA says the goal of the pilot is to improve the accuracy of the identity verification without slowing down the speed at which passengers pass through the checkpoints — a key issue for an agency that sees 2.4 million passengers daily. The agency said early results are positive and have shown no discernible difference in the algorithm’s ability to recognize passengers based on things like age, gender, race and ethnicity.

    Lim said the images aren’t being compiled into a database, and that photos and IDs are deleted. Since this is an assessment, in limited circumstances some data is collected and shared with the Department of Homeland Security’s Science and Technology Directorate. TSA says that data is deleted after 24 months.

    Lim said the camera only turns on when a person puts in their ID card — so it’s not randomly gathering images of people at the airport. That also gives passengers control over whether they want to use it, he said. And he said that research has shown that while some algorithms do perform worse with certain demographics, it also shows that higher-quality algorithms, like the one the agency uses, are much more accurate. He said using the best available cameras also is a factor.

    “We take these privacy concerns and civil rights concerns very seriously, because we touch so many people every day,” he said.

    Retired TSA official Keith Jeffries said the pandemic greatly accelerated the rollout of various types of this “touchless” technology, whereby a passenger isn’t handing over a document to an agent. And he envisioned a “checkpoint of the future” where a passenger’s face can be used to check their bags, go through the security checkpoints and board the plane — all with little to no need to pull out a boarding card or ID documents.

    He acknowledged the privacy concerns and lack of trust many people have when it comes to giving biometric data to the federal government, but said in many ways the use of biometrics is already deeply embedded in society through the use of privately owned technology.

    “Technology is here to stay,” he said.

    __

    Follow Santana on Twitter @ruskygal.

    [ad_2]

    Source link

  • Private detective appeals to Nevada Supreme court in Reno mayor’s lawsuit over GPS tracking device

    Private detective appeals to Nevada Supreme court in Reno mayor’s lawsuit over GPS tracking device

    [ad_1]

    RENO, Nev. — A private investigator who used GPS devices to secretly track the vehicles of Reno Mayor Hillary Schieve and a county commissioner ahead of the 2022 election asked the Nevada Supreme Court late Friday to overturn a judge’s order that he identify the client who hired him.

    Schieve filed suit in December seeking damages from private detective David McNeely for a violation of her privacy after a mechanic alerted her to the clandestine GPS tracking device.

    Sparks police determined the device was purchased by McNeely, and ex-Washoe County Commissioner Vaughn Hartung joined the suit in February under similar circumstances.

    Lawyers for McNeely said in Friday’s appeal to the state’s high court that divulging the name of a client who paid him to spy on the politicians would violate the long-accepted and expected confidentiality of a “private investigator-client relationship.”

    The attorneys said Washoe District Judge David Hardy had erroneously rejected McNeely’s argument earlier this month that the client’s name was a “trade secret” protected under Nevada law. They likened the stealth nature of the relationship to the “secret sauce” in a prized recipe.

    “Clients of private investigators expect confidentiality,” attorney Ryan Gormley wrote in a 31-page appeal filed Friday.

    “Without that confidentiality, the business will fail. Thus, the protection of client identity creates significant economic value for both defendants and the private investigation industry as a whole,” he said.

    Hardy had ordered McNeely to identify his client by Friday. But he noted in his ruling earlier this month that he was inclined to stay the case if an appeal was in the works because there would be no way to reverse the harm McNeely suffered from the disclosure of his client’s identity if an appellate court later decided he had a right to keep it secret.

    Another lawyer filed a motion this week to halt the proceedings on behalf of an anonymous John Doe who said he hired McNeely in an effort to combat corruption in government.

    The document attorney Jeffery Barr filed said John Doe has a First Amendment right to anonymously investigate elected officials. It said Doe had not broken any laws or disseminated any of the information gathered on his behalf, and that he was never aware that McNeely placed GPS trackers on vehicles, nor had he instructed him to do so.

    Judge Hardy on Thursday agreed to put the case on hold while McNeely pursued appeal avenues, a stipulation to which all the parties had agreed.

    The tracking device was on Schieve’s vehicle for at least several weeks and on Hartung’s vehicle for several months, their lawsuit says.

    Schieve said McNeely trespassed onto her property to install the device, which a mechanic noticed while working on her vehicle last year in the thick of campaign season, about two weeks before she won re-election for mayor in November.

    Hartung also won re-election but since has resigned to accept an appointment as chairman of the Nevada Transportation Commission.

    Hardy said in his May 4 ruling that the use of a GPS tracking device to monitor the movements of a person could be “a tortious invasion of privacy.”

    McNeely’s appeal said the Supreme Court’s intervention is necessary to provide industry-wide clarity to state law.

    “In the context of the private investigator-client relationship, the secrecy of the relationship between the private investigator and client is precisely what makes the relationship valuable to the business,” the appeal said.

    “Because, without the secrecy, there would be no relationship,” it said. “The confidentiality is the secret sauce.”

    [ad_2]

    Source link

  • Delaware judge refuses to dismiss Facebook shareholder suit over user data privacy breaches

    Delaware judge refuses to dismiss Facebook shareholder suit over user data privacy breaches

    [ad_1]

    DOVER, Del. — A Delaware judge on Wednesday refused to dismiss a shareholder lawsuit alleging that Facebook officers and directors violated both the law and their fiduciary duties in failing for years to protect the privacy of user data.

    Vice Chancellor J. Travis Laster rejected arguments that the complaint should be dismissed because the plaintiffs did not first demand that Facebook’s board take legal action before filing litigation themselves. Under Delaware law, shareholders must make such a demand or demonstrate that doing so would be futile because a majority of directors were self-interested, lacked independence or faced a substantial likelihood of liability.

    Laster agreed with the plaintiffs that demand would be futile because there is reasonable doubt that a majority of the relevant Facebook board members, many with close personal and business ties to Mark Zuckerberg, would be willing to confront the CEO and founder of the company now known as Meta Platforms Inc., over its privacy failures.

    Meta has said in filings with securities regulators that it believes the lawsuit is without merit.

    In refusing to dismiss the lawsuit, the judge noted that he was required to accept the allegations in the complaint, which he described as “encyclopedic and specific,” as true for purposes of ruling on the motion.

    “It tells a story of directors who were on notice of the law breaking, and who either affirmatively went along with it or consciously disregarded it,” Laster said. “What we don’t have is a little lawbreaking, what we don’t have is isolated lawbreaking, what we don’t have are immaterial violations. … This is a case involving alleged wrongdoing on a truly colossal scale.”

    The complaint alleges that Facebook officials repeatedly and continually violated a 2012 consent order with the Federal Trade Commission under which the company agreed to stop collecting personal data on platform users and friends without their consent, and to stop sharing it with third-party applications.

    Facebook later sold user data to commercial partners in direct violation of the consent order and removed disclosures from privacy settings that were required under the order, the lawsuit alleges. The company’s conduct resulted in significant fines from regulators in Europe and culminated in the Cambridge Analytica scandal in 2018. That case involved a British political consulting firm, hired by Donald Trump’s 2016 presidential campaign, that paid a Facebook app developer for the personal information of tens of millions of Facebook users.

    The fallout led to Facebook agreeing to pay an unprecedented $5 billion penalty to settle Federal Trade Commission charges that the company violated the 2012 consent order by deceiving users about their ability to protect their personal information.

    While allowing the plaintiffs to pursue their claims that Zuckerberg and several others breached their fiduciary duties to the company, Laster dismissed insider trading claims against several defendants, with the exception of Zuckerberg. The plaintiffs are seeking damages awarded to the company, disgorgement of profits allegedly made through insider trading and corporate governance reforms.

    [ad_2]

    Source link