ReportWire

Tag: Privacy

  • TikTok Sued by US Justice Department for Alleged Violations of Kids’ Privacy

    In 2019, TikTok agreed to a US federal court order barring the social media giant from collecting personal information from its youngest users without their parents’ consent. According to a new lawsuit filed by US authorities, TikTok immediately breached that order and now faces penalties of up to $51,744 per violation per day.

    TikTok “knowingly allowed children under 13 to create accounts in the regular TikTok experience and collected extensive personal information from those children without first providing parental notice or obtaining verifiable parental consent,” the US Department of Justice alleged on behalf of the Federal Trade Commission in a complaint lodged on Friday in federal court in California.

    TikTok spokesperson Michael Hughes says the company strongly disagrees with the allegations. He reiterates a statement the company issued in June, when the FTC had voted to sue, that many of the issues raised relate to “practices that are factually inaccurate or have been addressed.” Hughes adds that TikTok is “proud of our efforts to protect children, and we will continue to update and improve the platform.”

    Lawsuits over alleged violations of children’s privacy are almost a rite of passage for social platforms these days, with companies such as Google, Microsoft, and Epic Games collectively having paid hundreds of millions of dollars in penalties.

    But the case against TikTok is also part of the US government’s escalating battle with the service, whose ownership by China-based ByteDance has drawn national security concerns. Some US officials and lawmakers have said they worry about China exploiting TikTok to spread propaganda and gather data on vulnerable Americans. TikTok has rejected the concerns as baseless fear-mongering and is fighting a law that requires it to seek new ownership.

    The complaint filed on Friday alleges that as of 2020, TikTok wouldn’t let users sign up on their own if they entered a birthdate that showed they were under 13 years old. But it allowed those same users to go back, edit their birthdate, and sign up without parental permission.

    TikTok also wouldn’t remove accounts purporting to belong to children unless the user made an explicit admission of their age on their account, according to the lawsuit. TikTok’s hired content moderators allegedly spent just five to seven seconds on average reviewing accounts for age violations. “Defendants actively avoid deleting the accounts of users they know to be children,” the lawsuit states. Additionally, millions of accounts flagged as potentially belonging to children allegedly were never removed because of a bug in TikTok’s internal tools.

    The lawsuit acknowledges that TikTok improved some policies and processes over the years but alleges that it still held on to and used personal information of children that it should never have had in the first place.

    Authorities also took issue with TikTok’s dedicated Kids Mode. The lawsuit alleges that TikTok gathered and shared information about children’s usage of the service and built profiles on them while misleading parents about the data collection. When parents tried to have data on their kids deleted, TikTok forced them to jump through unnecessary hoops, the lawsuit further alleges.

    TikTok should have known better, according to the government, because of the 2019 court order, which stemmed from TikTok’s predecessor—a service known as Musical.ly—allegedly violating a number of rules aimed at protecting children’s privacy. Those rules largely come from the Children’s Online Privacy Protection Act, a law dating to the late-1990s dotcom era that tried to create a safer environment for children on the web.

    Lawmakers in the US this year have been weighing a major update in the form of the Kids Online Safety Act, or KOSA. The proposed measure, which passed the Senate earlier this week, would require services like TikTok to better control kids’ usage. Detractors have said it would unfairly cut off some young populations, such as transgender kids, from vital support networks. KOSA’s fate remains uncertain. But as the allegations against TikTok suggest, stricter rules may do little to stop companies from falling back on familiar tactics.

    [ad_2]

    Paresh Dave

    Source link

  • Justice Department sues TikTok, accusing the company of illegally collecting children’s data

    The Justice Department sued TikTok on Friday, accusing the company of violating a children’s online privacy law and running afoul of a settlement it had reached with another federal agency.

    The complaint, filed together with the Federal Trade Commission in a California federal court, comes as the U.S. and the prominent social media company are embroiled in yet another legal battle that will determine if – or how – TikTok will continue to operate in the country.

    The latest lawsuit focuses on allegations that TikTok, a trend-setting platform popular among young users, and its China-based parent company ByteDance violated a federal law that requires kid-oriented apps and websites to get parental consent before collecting personal information of children under 13. It also says the companies failed to honor requests from parents who wanted their children’s accounts deleted, and chose not to delete accounts even when the firms knew they belonged to kids under 13.

    “This action is necessary to prevent the defendants, who are repeat offenders and operate on a massive scale, from collecting and using young children’s private information without any parental consent or control,” Brian M. Boynton, head of the Justice Department’s Civil Division, said in a statement.

    TikTok said it disagreed with the allegations, “many of which relate to past events and practices that are factually inaccurate or have been addressed.”

    “We offer age-appropriate experiences with stringent safeguards, proactively remove suspected underage users and have voluntarily launched features such as default screentime limits, Family Pairing, and additional privacy protections for minors,” the company said in a statement.

    The U.S. decided to file the lawsuit following an investigation by the FTC that looked into whether the companies were complying with a previous settlement involving TikTok’s predecessor, Musical.ly.

    In 2019, the federal government sued Musical.ly, alleging it violated the Children’s Online Privacy Protection Act, or COPPA, by failing to notify parents about its collection and use of personal information for kids under 13.

    That same year, Musical.ly — acquired by ByteDance in 2017 and merged with TikTok — agreed to pay $5.7 million to resolve those allegations. The two companies were also subject to a court order requiring them to comply with COPPA, which the government says hasn’t happened.

    In the complaint, the Justice Department and the FTC allege TikTok has knowingly allowed children to create accounts and retained their personal information without notifying their parents. This practice extends to accounts created in “Kids Mode,” a version of TikTok for children under 13. The feature allows users to view videos but bars them from uploading content.

    The two agencies allege the information collected included activities on the app and other identifiers used to build user profiles. They also accuse TikTok of sharing the data with other companies – such as Meta’s Facebook and an analytics company called AppsFlyer – to persuade “Kids Mode” users to be on the platform more, a practice TikTok called “re-targeting less active users.”

    The complaint says TikTok also allowed children to create accounts without having to provide their age, or obtain parental approval, by using credentials from third-party services. It classified these as “age unknown” accounts, which the agencies say have grown into the millions.

    After parents discovered some of their children’s accounts and asked for them to be deleted, federal officials said TikTok asked them to go through a convoluted process to deactivate them and frequently did not honor their requests.

    Overall, the government said TikTok employed deficient policies that failed to prevent children’s accounts from proliferating on its app, and suggested the company was not taking the issue seriously. The complaint said that in at least some periods since 2019, TikTok’s human moderators spent an average of five to seven seconds reviewing accounts flagged as potentially belonging to a child. It also said TikTok and ByteDance have technology that can identify and remove children’s accounts but do not use it for that purpose.

    The alleged violations have resulted in millions of children under 13 using the regular TikTok app, allowing them to interact with adults and access adult content, the complaint said.

    In March, a person familiar with the matter told the AP the FTC’s investigation was also looking into whether TikTok violated a portion of federal law that prohibits “unfair and deceptive” business practices by denying that individuals in China had access to U.S. user data.

    Those allegations were not included in the complaint, which is asking the court to fine the companies and enter a preliminary injunction to prevent future violations.

    Other social media companies have also come under fire for how they’ve handled children’s data.

    In 2019, Google and YouTube agreed to pay a $170 million fine to settle allegations that the popular video site had illegally collected personal information on children without their parents’ consent.

    And last fall, dozens of U.S. states sued Meta Platforms Inc., which owns Facebook and Instagram, for harming young people and contributing to the youth mental health crisis by knowingly and deliberately designing features on Instagram and Facebook that addict children to its platforms. A lawsuit filed by 33 states claims that Meta routinely collects data on children under 13 without their parents’ consent, in violation of COPPA. Nine attorneys general are also filing lawsuits in their respective states, bringing the total number of states taking action to 41 plus Washington, D.C.

  • Justice Department sues TikTok, alleging the social media company violated a children’s online privacy law

  • Local hospital network data breach may affect over 500

    SALEM, N.H. — A data breach at a local hospital network caused more than 500 patients’ personal information to be leaked.

    Northeast Rehabilitation Hospital Network, 70 Butler St., announced on its website that between May 13 and May 22, there was unauthorized access to the company’s network and files containing sensitive information may have been accessed.

    Information was accessed from Neuro Rehab Associates Inc., a subsidiary founded in 1983, according to the data breach portal for the U.S. Department of Health and Human Services’ Office for Civil Rights.

    The breach was reported to the Department of Health and Human Services on July 17.

    Although NRHN described the incident as unauthorized access, the department categorized the breach as a hacking and IT incident and noted the information was found on network servers.

    NRHN said it is investigating the breach’s severity, will notify only those who have been affected, and has reported the incident to a federal law enforcement agency.

    NRHN has four inpatient hospitals in New Hampshire and more than 25 outpatient rehabilitation clinics across Massachusetts and New Hampshire.

    The company said while it is still investigating the breach’s extent, the information that could have been stolen includes patients’ names, contact information, dates of birth, Social Security numbers, driver’s license and ID numbers, financial account information, diagnoses, treatments and health insurance information.

    NRHN has asked patients to remain vigilant and, if they believe they are victims of this breach, to contact it by email at NRHNCyberInfo@northeastrehab.com.

    By Katelyn Sahagian | ksahagian@northofboston.com

  • Senate passes bill to protect kids online, make tech companies accountable for harmful content

    WASHINGTON — The Senate overwhelmingly passed legislation Tuesday that is designed to protect children from dangerous online content, pushing forward with what would be the first major effort by Congress in decades to hold tech companies more accountable for the harm that they cause.

    The bill, which passed 91-3, has been pushed by parents of children who died by suicide after online bullying or have otherwise been harmed by online content. It would force companies to take reasonable steps to prevent harm on online platforms frequently used by minors, requiring them to exercise “duty of care” and ensure that they generally default to the safest settings possible.

    The House has not yet acted on the bill, but Speaker Mike Johnson, R-La., has said he is “committed to working to find consensus.” Supporters are hoping that the strong Senate vote will push the House to act before the end of the congressional session in January.

    The legislation is about allowing children, teens and parents “to take back control of their lives online,” said Democratic Sen. Richard Blumenthal of Connecticut, who wrote the bill with Republican Sen. Marsha Blackburn of Tennessee. He said that the message to big tech companies is that “we no longer trust you to make decisions for us.”

    The bill would be the first major tech regulation package to move in years, and it could potentially pave the way for other bills that would strengthen online privacy laws or set parameters for the growing use of artificial intelligence, among others. While there has long been bipartisan support for the idea that the biggest technology companies should face more government scrutiny, there has been little consensus on how it should be done. Congress passed legislation earlier this year that would force China-based social media company TikTok to sell or face a ban, but that law only targets one company.

    “This is a good first step, but we have more to go,” said Senate Majority Leader Chuck Schumer, D-N.Y.

    If the child safety bill becomes law, companies would be required to mitigate harm to children, including bullying and violence, the promotion of suicide, eating disorders, substance abuse, sexual exploitation and advertisements for illegal products such as narcotics, tobacco or alcohol.

    To do that, social media platforms would have to provide minors with options to protect their information, disable addictive product features and opt out of personalized algorithmic recommendations. They would also be required to limit other users from communicating with children and limit features that “increase, sustain, or extend the use” of the platform — such as autoplay for videos or platform rewards.

    The idea, Blumenthal and Blackburn say, is for the platforms to be “safe by design.”

    “The message we are sending to big tech is that kids are not your product,” Blackburn said at a news conference as the Senate passed the bill. “Kids are not your profit source. And we are going to protect them in the virtual space.”

    Several tech companies, including Microsoft, X and Snap, have supported the legislation. But NetChoice, a tech industry group that represents X and Snap, along with Google, TikTok and Meta Platforms, called it unconstitutional.

    Carl Szabo, a vice president and counsel for the group, said in a statement that the law’s “cybersecurity, censorship, and constitutional risks remain unaddressed.” He did not elaborate.

    Blumenthal and Blackburn have said they worked to find a balance between forcing companies to become more responsible for what children see online while also ensuring that Congress does not go too far in regulating what individuals post — an effort to head off potential legal challenges and win over lawmakers who worry that regulation could impose on freedom of expression.

    In addition to First Amendment concerns, some critics have said the legislation could harm kids who wouldn’t be able to access information on LGBTQ+ issues or reproductive rights — although the bill has been revised to address many of those criticisms, and major LGBTQ+ groups have decided to support the proposed legislation.

    The bill also includes an update to child privacy laws that prohibit online companies from collecting personal information from users under 13, raising that age to 17. It would also ban targeted advertising to teenagers and allow teens or guardians to delete a minor’s personal information.

    Massachusetts Sen. Ed Markey, who sponsored the original legislation in 1998 — the last time Congress passed a child online safety law — worked with Republican Sen. Bill Cassidy of Louisiana on the update. Markey said that the online space “has come a long way” since the first bill and new tools are needed for parents as teens have struggled with mental health.

    As their bill stalled for several months, Blumenthal and Blackburn worked closely with the parents of children who have been harmed by social media — either by cyberbullying or social media challenges, extortion attempts, eating disorders, drug deals or other potential dangers. At an emotional news conference last week, the parents said they were pleased that the Senate is finally moving ahead with the legislation.

    Maurine Molak, the mother of a 16-year-old who died by suicide after “months of relentless and threatening cyberbullying,” said she believes the bill can save lives. She urged every senator to vote for it.

    “Anyone who believes that children’s well-being and safety should come before big tech’s greed ought to put their mark on this historic legislation,” Molak said.

    ___

    Ortutay reported from San Francisco.

  • Meta agrees to $1.4B settlement with Texas in privacy lawsuit over facial recognition

    AUSTIN, Texas — Meta has agreed to a $1.4 billion settlement with Texas in a privacy lawsuit over claims that the tech giant used biometric data of users without their permission, officials said Tuesday.

    Texas Attorney General Ken Paxton said the settlement is the largest secured by a single state. In 2021, a judge approved a $650 million settlement with the company, formerly known as Facebook, over similar claims of users in Illinois.

    “This historic settlement demonstrates our commitment to standing up to the world’s biggest technology companies and holding them accountable for breaking the law and violating Texans’ privacy rights,” Paxton, a Republican, said in a statement.

    Meta said in a statement: “We are pleased to resolve this matter, and look forward to exploring future opportunities to deepen our business investments in Texas, including potentially developing data centers.”

    Filed in 2022, the Texas lawsuit alleged that Meta was in violation of a state law that prohibits capturing or selling a resident’s biometric information, such as their face or fingerprint, without their consent.

    The company announced in 2021 that it was shutting down its face-recognition system and deleting the faceprints of more than 1 billion people amid growing concerns about the technology and its misuse by governments, police and others.

    At the time, more than a third of Facebook’s daily active users had opted in to have their faces recognized by the social network’s system. Facebook introduced facial recognition more than a decade earlier but gradually made it easier to opt out of the feature as it faced scrutiny from courts and regulators.

    Facebook in 2019 stopped automatically recognizing people in photos and suggesting people “tag” them, and instead of making that the default, asked users to choose if they wanted to use its facial recognition feature.

    ___

    Lathan is a corps member for The Associated Press/Report for America Statehouse News Initiative. Report for America is a nonprofit national service program that places journalists in local newsrooms to report on undercovered issues.

  • Two former FBI officials settle lawsuits with Justice Department over leaked text messages

    WASHINGTON (AP) — Two former FBI officials settled lawsuits with the Justice Department on Friday, resolving claims that their privacy was violated when the department leaked to the news media text messages that they had sent one another that disparaged former President Donald Trump.

    Peter Strzok, a former top counterintelligence agent who played a crucial role in the investigation into Russian election interference in 2016, settled his case for $1.2 million. Attorneys for Lisa Page, an FBI lawyer who exchanged text messages with Strzok, also confirmed that she had settled but did not disclose an amount.

    The two had sued the Justice Department over a 2017 episode in which officials shared copies with reporters of text messages they had sent each other, including ones that described Trump as an “idiot” and a “loathsome human” and that called the prospect of a Trump victory “terrifying.”

    Strzok, who also investigated former Secretary of State Hillary Clinton’s use of a private email server, was fired after the text messages came to light. Page resigned.

    “This outcome is a critical step forward in addressing the government’s unfair and highly politicized treatment of Pete,” Strzok’s lawyer, Aitan Goelman, said in a statement Friday announcing the settlement.

    “As important as it is for him, it also vindicates the privacy interests of all government employees. We will continue to litigate Pete’s constitutional claims to ensure that, in the future, public servants are protected from adverse employment actions motivated by partisan politics,” he added.

    A spokesman for the Justice Department did not have an immediate comment Friday.

    Strzok also sued the department over his termination, alleging that the FBI caved to “unrelenting pressure” from Trump when it fired him and that his First Amendment rights were violated. Those constitutional claims have not been resolved by the tentative settlement.

    “While I have been vindicated by this result, my fervent hope remains that our institutions of justice will never again play politics with the lives of their employees,” Page said in a statement. Her attorneys said that “the evidence was overwhelming that the release of text messages to the press in December 2017 was for partisan political purposes and was against the law.”

  • Melania Trump to tell her story in memoir, ‘Melania,’ scheduled for this fall

    NEW YORK — Former first lady Melania Trump has a memoir coming out this fall, “Melania,” billed by her office as “a powerful and inspiring story of a woman who has carved her own path, overcome adversity and defined personal excellence.” It’s the first memoir by Trump, who has been mostly absent as her husband, former President Donald Trump, seeks to return to the White House.

    “Melania” will be released by Skyhorse Publishing, which has published such Donald Trump supporters as former New York City Mayor Rudolph Giuliani and attorney Alan Dershowitz. Skyhorse also has worked with third-party candidate Robert F. Kennedy Jr. and former Trump insider Michael Cohen, who later became one of his harshest critics. Some Skyhorse books include forewords by Trump ally Steve Bannon.

    Melania Trump’s memoir was announced Thursday by her office, which neither provided a specific release date nor mentioned whether it would come out before Election Day in November. Trump has been the subject of other books, including one by former adviser Stephanie Winston Wolkoff, but she has never told her own story at length before.

    The former first lady “invites readers into her world, offering an intimate portrait of a woman who has lived an extraordinary life,” the announcement reads in part. “‘Melania’ includes personal stories and family photos she has never before shared with the public.”

    A spokesperson said no information was available beyond what was included in the release, which made no reference to financial terms, promotional plans or if she worked with a co-author.

    Melania Trump, Donald Trump’s third wife, has been an enigmatic figure since her husband announced he was running in the 2016 election. She has sought to maintain her privacy even as she served as first lady, focusing on raising their son, Barron, and promoting her “Be Best” initiative to support the “social, emotional, and physical health of children.” While she appeared at her husband’s campaign launch event for 2024 and attended the closing night of last week’s Republican National Convention, she has otherwise stayed off the campaign trail. Her decision not to deliver a speech at this year’s convention marked a departure from tradition for candidates’ wives, and from the 2016 and 2020 Republican gatherings.

    According to her office, the memoir will come in two versions: a $150 “Collector’s Edition,” 256 pages, “in full color throughout, with each copy signed by the author,” and a “Memoir Edition,” 304 pages, including 48 pages of never-before-seen photographs, listed at $40, with signed copies going for $75.

    Both editions are available for pre-order exclusively through the former first lady’s website, MelaniaTrump.com. A spokesperson did not have any immediate comment on when or whether it could be ordered elsewhere.

    Unlike other former presidents and first ladies, Donald and Melania Trump have not released any post-White House books through mainstream New York publishers. Donald Trump published numerous books before his presidency, working with Random House and Simon & Schuster among others, but many shunned him after the siege of the U.S. Capitol on Jan. 6, 2021.

    He has released two books since leaving Washington, a picture book commemorating his time at the White House and a compilation of letters from world leaders and celebrities. Both came out through Winning Team Publishing, co-founded in 2021 by Donald Trump Jr. and former Trump campaign staffer Sergio Gor.

  • J.D. Vance Left His Venmo Public. Here’s What It Shows

    Despite his anti-elite stance, Vance’s connections reveal a more complex relationship with establishment figures. At the same time, as the former president distances himself from Project 2025—a right-wing policy roadmap that aims to purge the federal government, reshape the executive branch, and turn the US into what critics characterize as a Christian nationalist autocratic state—Vance’s Venmo network reveals his ties not just to Halikias but to others associated with a maximalist interpretation of MAGA. Gladden Pappin, for instance—president of the Hungarian Institute of International Affairs and a figure with close ties to the intellectual wing of the far right—shows up as one of Vance’s friends.

    Senator Vance’s office declined to comment on the record for this story. In an interview with Newsmax earlier this month, he said that the Project 2025 document has good ideas in it, as well as things he disagrees with. Vance did not elaborate on what exactly those good or bad ideas are. At the time of publication, Vance’s Venmo account remains fully public.

    Vance’s friends have an average of 277 friends each. This wider network of associates shows an extended web of accounts who share names with high-profile political figures like Cohen, Nick Ayers, Todd Ricketts, and Michael Flynn Jr., as well as far-right activists like Project Veritas founder James O’Keefe, Laura Loomer, and Ali Alexander.

    “What you guys need to realize is that Vance is influenceable,” wrote Andrew Torba on X. Torba is the founder of Gab, a social network popular with conspiracy theorists and Christian nationalists. He has long promoted antisemitic content on his social media accounts. “We have plenty of people in his orbit. Plenty of our guys can be put into positions of power because he’s there.”

    “This appears to be his actual personal contacts,” says Jordan Libowitz, the vice president of communications for Citizens for Responsibility and Ethics in Washington, or CREW. He notes that the data found on Venmo is much more personal than what campaigns typically share through official channels, warning that “the more personal data that is public about someone the more points of pressure or influence there are on that person.”

    Few of Vance’s transactions are public, and those that are seem mundane, like a payment to a staff member for doughnuts in January. WIRED also uncovered the Venmo account of his former Senate campaign manager, Jordan Wiggins, which shows a more extensive and occasionally eyebrow-raising transaction history, including more than 50 payments from as early as 2015, some labeled for things like “Back waxing & Happy Ending,” and “adult 🎥”. While these descriptions are likely jokes between friends, Wiggins didn’t respond to a request for comment.

    After WIRED reached out to Vance’s Senate office on Wednesday, Wiggins made his account transactions private.

    Experts say that the visibility of Vance’s account could create problems for the high-profile individuals connected to it. “Access to anyone’s social connections can reveal sensitive private information and expose them to security risks,” Jennifer Lynch, general counsel at civil liberties nonprofit the Electronic Frontier Foundation, tells WIRED. High-profile politicians like Vance, Lynch argues, may be especially prone to social engineering attacks and impersonation. “If someone who is a candidate for vice president hasn’t changed his privacy settings, I don’t know how a company can expect the rest of us to stay on top of this.”

    Dhruv Mehrotra, Tim Marchman, Andrew Couts

  • AT&T Paid a Hacker $370,000 to Delete Stolen Phone Records

    Despite the payment and deletion, some AT&T customers and those who communicated with them may still be at risk, given that others may have samples of the data that were not deleted.

    The hacker who spoke with WIRED obtained payment from AT&T instead of Binns because, he says, in an odd twist to the case, Binns was arrested in Turkey in May for an unrelated breach dating back to 2021. That one involved a massive theft of data from T-Mobile. AT&T said in its SEC filing that it believed “at least one person” associated with the breach had already been apprehended, but didn’t identify him. 404 Media was first to report on Friday that Binns is allegedly that person.

    Binns was indicted in 2022 on 12 counts related to the 2021 hack of T-Mobile “and theft and sale of sensitive files and information” that involved data on more than 40 million people. Binns, however, had moved from the US to Turkey in 2018 with his Turkish mother, according to an interview he gave three years ago to The Wall Street Journal. The indictment remained sealed until this year. Last September, the US learned he could possibly be arrested in Turkey and extradited to the US because he didn’t have Turkish citizenship. Prosecutors in Seattle, near where T-Mobile is based, asked a US court in December to unseal parts of the indictment so they could give it and an arrest warrant to Turkish authorities who were making the final decision on whether Binns could be extradited legally under Turkish law. The court granted the request to unseal in January.

    The hacker who received payment from AT&T tells WIRED he believes Binns was arrested in Turkey around May 5, since Binns hasn’t responded to any attempts by him and others to contact him. WIRED contacted the Seattle public defender representing Binns in the T-Mobile case but did not receive a reply.

    Binns has had contact with US authorities on a number of occasions and has accused the CIA and other agencies of wild conspiracies to harm and entrap him. As part of a 2020 FOIA lawsuit against the FBI, CIA, and US Special Operations Command to obtain records he claimed they held about him, Binns claimed that CIA contractors spied on him, experimented on him, harassed him, and that one of them pointed a “psychotronic weapon” at his head and used a microwave oven to shock him, among other allegations. He later filed a motion to dismiss his FOIA case, claiming he had filed some documents while “experiencing a psychological episode brought on by intoxication.”

    Last October, in the T-Mobile case, Binns wrote to the US District Court in Seattle and said he believed his actions were affected by a chip that had been implanted in his brain when he was an infant. In a certified letter sent to the court and viewed by WIRED, Binns told the judge that he believed a “wireless brain (basal gangliea) stimulation implant or device implanted” shortly after he was born was responsible for “erratic behavior to include irresistible impulses, artificial neurological problems, and the possible commission of crimes.”

    The timeline suggests that if Binns is responsible for the AT&T breach, he allegedly did it when he was likely already aware that he was under indictment for the T-Mobile hack and could face arrest for it.

    Kim Zetter

  • The Sweeping Danger of the AT&T Phone Records Breach

    The Sweeping Danger of the AT&T Phone Records Breach


    From targeted wiretaps to bulk surveillance dragnets, phone companies have been at the center of privacy concerns for decades—and their time in the limelight isn’t over yet. On Friday, telecom giant AT&T announced that it recently suffered a data breach impacting call and text messaging records of “nearly all” its customers. The company is in the process of notifying about 110 million people that they were affected.

    AT&T said in a US Securities and Exchange Commission filing that it learned about the data breach on April 19. Attackers exfiltrated data between April 14 and April 25. The company said in its SEC submission that the US Justice Department authorized delayed disclosure of the breach on May 9 and again on June 5, pending investigation. AT&T added that it is “working with law enforcement in its efforts to arrest those involved in the incident.” So far, “at least one person has been apprehended.”

    “Yeah, this is really bad,” says Jake Williams, vice president of research and development at the cybersecurity consultancy Hunter Strategy. “What the threat actors stole here are essentially call data records. These are a gold mine in intelligence analysis because they allow someone to understand networks—who is talking to whom and when. And threat actors have data from previous compromises to map phone numbers to identities. But even without identifying data for a phone number, closed networks—where numbers only communicate with others in the same network—are almost always interesting.”
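    Williams’ point about call data records is easy to see in miniature. The sketch below, with hypothetical numbers and a simplified record layout (not AT&T’s actual schema), builds a who-calls-whom graph from a handful of records:

```python
from collections import defaultdict

# Hypothetical call-detail records: (caller, callee, duration in seconds).
# Numbers and field layout are illustrative, not AT&T's actual schema.
cdrs = [
    ("555-0101", "555-0202", 120),
    ("555-0101", "555-0202", 300),
    ("555-0101", "555-0303", 45),
    ("555-0202", "555-0303", 60),
]

def contact_graph(records):
    """Aggregate per-pair call counts and total talk time."""
    graph = defaultdict(lambda: {"calls": 0, "seconds": 0})
    for caller, callee, seconds in records:
        graph[(caller, callee)]["calls"] += 1
        graph[(caller, callee)]["seconds"] += seconds
    return dict(graph)

graph = contact_graph(cdrs)
# The most frequent pair stands out even with no names attached.
top_pair = max(graph, key=lambda pair: graph[pair]["calls"])
```

    Even three fields per record are enough to surface the most-contacted pair; at the scale of 110 million customers, the same aggregation maps entire social networks.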

    The incident is significant not only because of its sheer scale and reach but because AT&T says it is the latest in a staggering spate of data thefts that resulted from attackers compromising organizations’ Snowflake cloud accounts. Snowflake is a data warehousing platform, and attackers collected its customers’ account credentials in recent months to steal hundreds of millions of records from about 165 Snowflake clients, including Ticketmaster, Santander bank, and LendingTree’s QuoteWizard.

    The AT&T data is from both landline and cellular accounts and spans May 1, 2022, to October 31, 2022. A smaller, undisclosed number of people also had records from January 2, 2023, stolen in the breach. The company said on Friday that the data trove “does not contain the content of calls or texts” and does not include the date and time of communications. But attackers did make off with phone numbers and a massive amount of so-called “metadata” about calls and texts, including who contacted whom, call durations, and tallies of a customer’s total calls and texts. The trove also includes some cell site identification numbers—essentially cell tower data that can be used to approximate a cellphone’s location when it made or received a call or text.

    The data includes some records of people who are customers of phone carriers—known as “mobile virtual network operators”—that contract with AT&T to use the larger company’s networks and infrastructure for their service. And, crucially, the stolen trove exposes people who have no relationship with AT&T when they communicated with an AT&T customer during the relevant time spans.

    Lily Hay Newman

  • Report finds most subscription services manipulate customers with ‘dark patterns’

    Report finds most subscription services manipulate customers with ‘dark patterns’


    Most subscription sites use “dark patterns” to influence customer behavior around subscriptions and personal data, according to a pair of new reports from global consumer protection groups. Dark patterns are “practices commonly found in online user interfaces [that] steer, deceive, coerce or manipulate consumers into making choices that often are not in their best interests.” The international research efforts were conducted by the International Consumer Protection and Enforcement Network (ICPEN) and the Global Privacy Enforcement Network (GPEN).

    The ICPEN conducted a review of 642 websites and mobile apps with a subscription component. The assessment revealed at least one dark pattern in use at almost 76 percent of the platforms, and multiple dark patterns at play in almost 68 percent of them. One of the most common dark patterns discovered was sneaking, where a company makes potentially negative information difficult to find. ICPEN said 81 percent of the platforms with automatic subscription renewal kept the ability for a buyer to turn off auto-renewal out of the purchase flow. Other dark patterns for subscription services included interface interference, where the actions a company prefers are made easier to perform, and forced action, where customers have to provide information to access a particular function.

    The companion report from GPEN examined dark patterns that could encourage users to compromise their privacy. In this review, nearly all of the more than 1,000 websites and apps surveyed used a deceptive design practice. More than 89 percent of them used complex and confusing language in their privacy policies. Interface interference was another key offender here, with 57 percent of the platforms making the least protective privacy option the easiest to choose and 42 percent using emotionally charged language that could influence users.

    Even the most savvy of us can be influenced by these subtle cues to make suboptimal decisions. Those decisions might be innocuous ones, like forgetting that you’ve set a service to auto-renew, or they might put you at risk by encouraging you to reveal more personal information than needed. The reports didn’t specify whether the dark patterns were used in illicit or illegal ways, only that they were present. The dual release is a stark reminder that digital literacy is an essential skill.

    Anna Washenko

  • Google’s Nonconsensual Explicit Images Problem Is Getting Worse

    Google’s Nonconsensual Explicit Images Problem Is Getting Worse


    In early 2022, two Google policy staffers met with a trio of women victimized by a scam that resulted in explicit videos of them circulating online—including via Google search results. The women were among the hundreds of young adults who responded to ads seeking swimsuit models only to be coerced into performing in sex videos distributed by the website GirlsDoPorn. The site shut down in 2020, and a producer, a bookkeeper, and a cameraman subsequently pleaded guilty to sex trafficking, but the videos kept popping up on Google search faster than the women could request removals.

    The women, joined by an attorney and a security expert, presented a bounty of ideas for how Google could keep the criminal and demeaning clips better hidden, according to five people who attended or were briefed on the virtual meeting. They wanted Google search to ban websites devoted to GirlsDoPorn and videos with its watermark. They suggested Google could borrow the 25-terabyte hard drive on which the women’s cybersecurity consultant, Charles DeBarber, had saved every GirlsDoPorn episode, take a mathematical fingerprint, or “hash,” of each clip, and block them from ever reappearing in search results.
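    The fingerprinting idea the women proposed is a standard blocklisting technique. A minimal sketch follows; it uses an exact SHA-256 digest, whereas production systems typically use perceptual hashes that survive re-encoding and cropping, and the sample byte strings stand in for real video files:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Exact-match fingerprint of a clip's bytes. Production systems use
    # perceptual hashes that survive re-encoding and cropping; SHA-256
    # here only illustrates the blocklist mechanism.
    return hashlib.sha256(data).hexdigest()

blocklist: set[str] = set()

def register(clip: bytes) -> None:
    """Add a known abusive clip's fingerprint to the blocklist."""
    blocklist.add(fingerprint(clip))

def is_blocked(candidate: bytes) -> bool:
    """Check an uploaded or crawled clip against the blocklist."""
    return fingerprint(candidate) in blocklist

register(b"bytes of a known clip")  # stand-in for a video file's contents
```

    Because only hashes are stored and compared, such a system never needs to retain or redistribute the abusive content itself.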

    The two Google staffers in the meeting hoped to use what they learned to win more resources from higher-ups. But the victims’ attorney, Brian Holm, left feeling dubious. The policy team was in “a tough spot” and “didn’t have authority to effect change within Google,” he says.

    His gut reaction was right. Two years later, none of those ideas brought up in the meeting have been enacted, and the videos still come up in search.

    WIRED has spoken with five former Google employees and 10 victims’ advocates who have been in communication with the company. They all say that they appreciate that because of recent changes Google has made, survivors of image-based sexual abuse such as the GirlsDoPorn scam can more easily and successfully remove unwanted search results. But they are frustrated that management at the search giant hasn’t approved proposals, such as the hard drive idea, which they believe will more fully restore and preserve the privacy of millions of victims around the world, most of them women.

    The sources describe previously unreported internal deliberations, including Google’s rationale for not using an industry tool called StopNCII that shares information about nonconsensual intimate imagery (NCII) and the company’s failure to demand that porn websites verify consent to qualify for search traffic. Google’s own research team has published steps that tech companies can take against NCII, including using StopNCII.

    The sources believe such efforts would better contain a problem that’s growing, in part through widening access to AI tools that create explicit deepfakes, including ones of GirlsDoPorn survivors. Overall reports to the UK’s Revenge Porn hotline more than doubled last year, to roughly 19,000, as did the number of cases involving synthetic content. Half of over 2,000 Brits in a recent survey worried about being victimized by deepfakes. The White House in May urged swifter action by lawmakers and industry to curb NCII overall. In June, Google joined seven other companies and nine organizations in announcing a working group to coordinate responses.

    Right now, victims can demand prosecution of abusers or pursue legal claims against websites hosting content, but neither of those routes is guaranteed, and both can be costly due to legal fees. Getting Google to remove results can be the most practical tactic and serves the ultimate goal of keeping violative content out of the eyes of friends, hiring managers, potential landlords, or dates—who almost all likely turn to Google to look up people.

    A Google spokesperson, who requested anonymity to avoid harassment from perpetrators, declined to comment on the call with GirlsDoPorn victims. She says combating what the company refers to as nonconsensual explicit imagery (NCEI) remains a priority and that Google’s actions go well beyond what is legally required. “Over the years, we’ve invested deeply in industry-leading policies and protections to help protect people affected by this harmful content,” she says. “Teams across Google continue to work diligently to bolster our safeguards and thoughtfully address emerging challenges to better protect people.”

    Paresh Dave

  • How Apple Intelligence’s Privacy Stacks Up Against Android’s ‘Hybrid AI’

    How Apple Intelligence’s Privacy Stacks Up Against Android’s ‘Hybrid AI’


    Yet Google and its hardware partners argue privacy and security are a major focus of the Android AI approach. VP Justin Choi, head of the security team, mobile eXperience business at Samsung Electronics, says its hybrid AI offers users “control over their data and uncompromising privacy.”

    Choi describes how features processed in the cloud are protected by servers governed by strict policies. “Our on-device AI features provide another element of security by performing tasks locally on the device with no reliance on cloud servers, neither storing data on the device nor uploading it to the cloud,” Choi says.

    Google says its data centers are designed with robust security measures, including physical security, access controls, and data encryption. When processing AI requests in the cloud, the company says, data stays within secure Google data center architecture and the firm is not sending your information to third parties.

    Meanwhile, Galaxy’s AI engines are not trained with user data from on-device features, says Choi. Samsung “clearly indicates” which AI functions run on the device with its Galaxy AI symbol, and the smartphone maker adds a watermark to show when content has used generative AI.

    The firm has also introduced a new security and privacy option called Advanced Intelligence settings to give users the choice to disable cloud-based AI capabilities.

    Google says it “has a long history of protecting user data privacy,” adding that this applies to its AI features powered on-device and in the cloud. “We utilize on-device models, where data never leaves the phone, for sensitive cases such as screening phone calls,” Suzanne Frey, vice president of product trust at Google, tells WIRED.

    Frey describes how Google products rely on its cloud-based models, which she says ensures “consumer’s information, like sensitive information that you want to summarize, is never sent to a third party for processing.”

    “We’ve remained committed to building AI-powered features that people can trust because they are secure by default and private by design, and most importantly, follow Google’s responsible AI principles that were first to be championed in the industry,” Frey says.

    Apple Changes the Conversation

    Rather than simply matching the “hybrid” approach to data processing, experts say Apple’s AI strategy has changed the nature of the conversation. “Everyone expected this on-device, privacy-first push, but what Apple actually did was say, it doesn’t matter what you do in AI—or where—it’s how you do it,” Doffman says. He thinks this “will likely define best practice across the smartphone AI space.”

    Even so, Apple hasn’t won the AI privacy battle just yet: The deal with OpenAI—which sees Apple uncharacteristically opening up its iOS ecosystem to an outside vendor—could put a dent in its privacy claims.

    Apple disputes Musk’s claims that the OpenAI partnership compromises iPhone security, pointing to “privacy protections built in for users who access ChatGPT.” The company says you will be asked permission before your query is shared with ChatGPT, while IP addresses are obscured and OpenAI will not store requests—but ChatGPT’s data use policies still apply.

    Partnering with another company is a “strange move” for Apple, but the decision “would not have been taken lightly,” says Jake Moore, global cybersecurity adviser at security firm ESET. While the exact privacy implications are not yet clear, he concedes that “some personal data may be collected on both sides and potentially analyzed by OpenAI.”

    Kate O’Flaherty

  • Proton Is Launching Encrypted Documents to Take On Google Docs

    Proton Is Launching Encrypted Documents to Take On Google Docs


    Yen says Proton has been internally using the system for the last month and is now ready to roll it out to consumers. “I feel it is relatively polished,” Yen says. To compete with other online document editors, he says, the team also built in collaboration functionality from the beginning. This includes real-time editing by multiple people, commenting, and showing when someone else is viewing the document.

    In April, Proton acquired encrypted note-taking app Standard Notes, which is a separate product from Docs. “It’s actually not ‘take Standard Notes and stick it into Proton,’” Yen says, adding that the encryption architectures of the two were different, and Proton Docs is “more or less a ground-up, clean build in Proton’s ecosystem on our software stack.” (WIRED was unable to test Docs before it launched.)

    The big difference between Proton Docs and Google Docs is the encryption—something that is challenging to do at scale and harder still when a document has multiple people editing it at the same time. Yen says it’s not just the contents of documents that are encrypted; so are other elements, like keystrokes, mouse movements, and file names and paths.
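    The client-side principle behind end-to-end encryption can be shown in a few lines. This toy sketch uses a one-time-pad-style XOR, not Proton’s actual cipher (a real E2EE editor would use an authenticated scheme such as AES-GCM with proper key exchange), to illustrate why the server only ever sees ciphertext:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy XOR cipher, purely to show the client-side step; a real
    # end-to-end encrypted editor uses an authenticated scheme
    # such as AES-GCM with proper key exchange.
    return bytes(b ^ k for b, k in zip(data, key))

# The client generates the key locally and never sends it to the server.
document = b"draft: quarterly plan"
key = secrets.token_bytes(len(document))

ciphertext = xor_cipher(document, key)  # all the server ever stores
restored = xor_cipher(ciphertext, key)  # only key holders can decrypt
```

    The hard part Yen alludes to is not this step but distributing and synchronizing keys among multiple simultaneous editors without ever exposing them to the server.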

    The company, which last month announced it is moving toward a nonprofit status, uses open source encryption, and Yen says building the Docs system required encryption key exchange and synchronization to happen across multiple users. Part of this was possible, Yen says, because last year the company added version history for documents stored in its Drive system, which the Docs are built on top of.

    There are relatively few—if any—major end-to-end encrypted document editors online. Other existing services, which WIRED has not tried, include CryptPad and various note-taking or notepad-style apps. There are also apps that encrypt files locally on your machine, such as Cryptee and Anytype.

    Recently, Proton has been moving quickly to launch new encrypted products—adding cloud storage, a VPN, a password manager, and a calendar alongside its original ProtonMail email service. The company has also faced scrutiny over some information it has provided to law enforcement, such as recovery emails that have been added to accounts. It changed some of its policies in 2021 after being ordered to collect some user metadata. While the company is based outside of the US and EU, it still responds to thousands of Swiss law enforcement requests.

    Ultimately, Yen says, the company is trying to offer as many private alternatives to Big Tech services, particularly Google, as it can. “Everything Google’s got, we’ve got to build as well. That’s the road map. But the challenge, of course, is the order in which you do it,” Yen says. “In some sense, taking privacy to a more mainstream audience also requires going further afield, trying different things, and being a bit more adventurous in the things that we build and things that we launch.”

    Matt Burgess

  • The Tech Crash Course That Trains US Diplomats to Spot Threats

    The Tech Crash Course That Trains US Diplomats to Spot Threats


    By the time the Senate unanimously confirmed Nate Fick to be America’s cyber ambassador in September 2022, tech diplomacy headaches were impossible to ignore, and Fick quickly tasked his team with creating a modern training program and embedding it in the FSI’s regular curriculum.

    “He understood that we needed to do more and better in terms of preparing our people in the field,” Hop says.

    The training program fit neatly into secretary of state Antony Blinken’s vision of an American diplomatic corps fully versed in modern challenges and nimble enough to confront them. “Elevating our tech diplomacy” is one of Blinken’s “core priorities,” Fick says.

    As they developed a curriculum, Fick and his aides had several big goals for the new training program.

    The first priority was to make sure diplomats understood what was at stake as the US and its rivals compete for global preeminence on tech issues. “Authoritarian states and other actors have used cyber and digital tools to threaten national security, international peace and security, economic prosperity, [and] the exercise of human rights,” says Kathryn Fitrell, a senior cyber policy adviser at State who helps run the course.

    Equally critical was preparing diplomats to promote the US tech agenda from their embassies and provide detailed reports back to Washington on how their host governments were approaching these issues.

    “It’s important to us that tech expertise [in] the department not sit at headquarters alone,” Fick says, “but instead that we have people everywhere—at all our posts around the world, where the real work gets done—who are equipped with the tools that they need to make decisions with a fair degree of autonomy.”

    Foreign Service officers are America’s eyes and ears on the ground in foreign countries, studying the landscape and alerting their bosses back home to risks and opportunities. They are also the US government’s most direct and regular interlocutors with representatives of other nations, forming personal bonds with local officials that can sometimes make the difference between unity and discord.

    When these diplomats need to discuss the US tech agenda, they can’t just read monotonously off a piece of paper. They need to actually understand the positions they’re presenting and be prepared to answer questions about them.

    “You can’t be calling back to someone in Washington every time there’s a cyber question,” says Sherman.

    But some issues will still require help from experts at headquarters, so Fick and his team also wanted to use the course to deepen their ties with diplomats and give them friendly points of contact at the cyber bureau. “We want to be able to support officers in the field as they confront these issues,” says Melanie Kaplan, a member of Fick’s team who took the class and now helps run it.

    Inside the Classroom

    After months of research, planning, and scheduling, Fick’s team launched the Cyberspace and Digital Policy Tradecraft course at the Foreign Service Institute with a test run in November 2022. Since then, FSI has taught the class six more times—once in London for European diplomats, once in Morocco for diplomats in the Middle East and Africa, and four times in Arlington—and trained 180 diplomats.

    The program begins with four hours of “pre-work” to prepare students for the lessons ahead. Students must document that they’ve completed the pre-work—which includes experimenting with generative AI—before taking the class. “That has really put us light-years ahead in ensuring that no one is lost on day one,” Hop says.

    Eric Geller

  • The Supreme Court casts doubt on Florida and Texas laws to regulate social media platforms

    The Supreme Court casts doubt on Florida and Texas laws to regulate social media platforms


    WASHINGTON (AP) — The Supreme Court on Monday kept on hold efforts by Texas and Florida to limit how Facebook, TikTok, X, YouTube and other social media platforms regulate content posted by their users, in a ruling that strongly defended the platforms’ free speech rights.

    Writing for the court, Justice Elena Kagan said the platforms, like newspapers, deserve protection from governments’ intrusion in determining what to include or exclude from their space. “The principle does not change because the curated compilation has gone from the physical to the virtual world,” Kagan wrote in an opinion signed by five justices. All nine justices agreed on the overall outcome.

    The justices returned the cases to lower courts for further review in broad challenges from trade associations for the companies.

    While the details vary, both laws aimed to address long-standing conservative complaints that the social media companies were liberal-leaning and censored users based on their viewpoints, especially on the political right. The cases are among several this term in which the justices are wrestling with standards for free speech in the digital age.

    The Florida and Texas laws were signed by Republican governors in the months following decisions by Facebook and Twitter, now X, to cut then-President Donald Trump off over his posts related to the Jan. 6, 2021, attack on the U.S. Capitol by his supporters.

    Trade associations representing the companies sued in federal court, claiming that the laws violated the platforms’ speech rights. One federal appeals court struck down Florida’s statute, while another upheld the Texas law. But both were on hold pending the outcome at the Supreme Court.

    While the cases are complicated, said First Amendment expert and Notre Dame Law School professor Richard W. Garnett, the justices were clear on two things:

    “First, the First Amendment protects what we choose to say, but also what we choose not to say, support, or endorse. That is, the freedom of speech includes editorial judgment. This is true whether the speaker is a lone individual or a large media company,” he said. “Second, the government is not permitted to regulate speakers simply to produce what the government thinks would be a better, or more diverse, marketplace of ideas. What’s on offer in that marketplace is, in the end, up to us.”

    In a statement when he signed the Florida measure into law, Gov. Ron DeSantis said it would be “protection against the Silicon Valley elites.”

    When Gov. Greg Abbott signed the Texas law, he said it was needed to protect free speech in what he termed the new public square. Social media platforms “are a place for healthy public debate where information should be able to flow freely — but there is a dangerous movement by social media companies to silence conservative viewpoints and ideas,” Abbott said. “That is wrong, and we will not allow it in Texas.”

    But much has changed since then. Elon Musk purchased Twitter and, besides changing its name, eliminated teams focused on content moderation, welcomed back many users previously banned for hate speech and used the site to spread conspiracy theories.

    President Joe Biden’s administration sided with the challengers, though it cautioned the court to seek a narrow ruling that maintained governments’ ability to impose regulations to ensure competition, preserve data privacy and protect consumer interests. Lawyers for Trump filed a brief in the Florida case that had urged the Supreme Court to uphold the state law.

    Free speech advocates hailed the ruling as a victory.

    “The court’s recognition that the government cannot control social media in an effort to impose its own vision of what online speech should look like is crucial to protecting all of our right to speak our minds and access information on the internet,” said Vera Eidelman, staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

    Nora Benavidez, senior counsel at the nonprofit media advocacy group Free Press, said that while the decision “rests on procedural grounds, Justice Kagan’s comprehensive opinion for the Court explains in very clear terms why the Florida and Texas laws will have a tough time ever passing First Amendment muster. That’s a very good thing.”

    But it’s a “bumpy win,” noted Gus Hurwitz, academic director of the Center for Technology, Innovation & Competition at the University of Pennsylvania Carey Law School. He said the justices were “clearly frustrated” that the case came to them as a facial challenge — where the plaintiff argues that the law is unconstitutional in all its applications — vacating both cases and sending them back to be “more fully developed.”

    “Five of the justices sign on to the direct statement that ‘Texas does not like the way those platforms are selecting and moderating content, and wants them to create a different expressive product, communicating different values and priorities. But under the First Amendment, that is a preference Texas may not impose,’” Hurwitz said. “It is hard to see how this doesn’t dictate the ultimate resolution of the case, and clearly foreshadows a rocky road ahead for these statutes if Texas and Florida continue to press forward with them.”

    The cases are among several the justices have grappled with over the past year involving social media platforms, including one decided last week in which the court threw out a lawsuit from Louisiana, Missouri and other parties accusing federal officials of pressuring social media companies to silence conservative points of view.

    During arguments in February, the justices seemed inclined to prevent the laws from taking effect. Several justices suggested then that they viewed the platforms as akin to newspapers that have broad free-speech protections, rather than like telephone companies, known as common carriers, that are susceptible to broader regulation.

    But two justices, Samuel Alito and Clarence Thomas, appeared more ready to embrace the states’ arguments. Thomas raised the idea that the companies are seeking constitutional protection for “censoring other speech.” Alito also equated the platforms’ content moderation to censorship.

    The justices also worried about too broad a ruling that might affect businesses that are not the primary targets of the laws, including e-commerce sites like Uber and Etsy and email and messaging services.

    ___

    AP Technology Writer Barbara Ortutay contributed to this story.

    Follow the AP’s coverage of the U.S. Supreme Court at https://apnews.com/hub/us-supreme-court.


  • Meta’s Pay for Privacy Model Is Illegal, Says EU

    Meta’s Pay for Privacy Model Is Illegal, Says EU


    For the past eight months, Europeans uncomfortable with the way Meta tracks their data for personalized advertising have had another option: They can pay the tech giant up to €12.99 ($14) per month for their privacy instead.

    Meta introduced its “pay or consent” subscription model in November 2023, as fines, legal cases, and regulatory attention pressured the company to change the way it asks users to consent to targeted advertising. On Monday, however, the European Commission rejected its latest solution, arguing its “pay or consent” subscription is illegal under the bloc’s new Digital Markets Act (DMA).

    “Our preliminary view is that Meta’s ‘Pay or Consent’ business model is in breach of the DMA,” Thierry Breton, Commissioner for the EU’s Internal Market, said in a statement. “The DMA is there to give back to the users the power to decide how their data is used and ensure innovative companies can compete on equal footing with tech giants on data access.”

    Meta denied its subscription model broke the rules. “Subscription for no ads follows the direction of the highest court in Europe and complies with the DMA,” Meta spokesperson Matt Pollard told WIRED, referring to a Court of Justice of the European Union (CJEU) decision in July that said that Meta needed to offer users an alternative to ads, if necessary for an appropriate fee. “We look forward to further constructive dialogue with the European Commission to bring this investigation to a close.”

    In a press briefing on Monday morning, Commission officials said their concern was not that the company was charging for an ad-free service. “This is perfectly fine for us, as long as we have the middle option,” they said, explaining that there should be a third option that may still contain ads, just less-targeted ones. There are different, less-specific ways of serving advertising to users, they added, such as contextual advertising. “The consumer needs to be in a position to choose an alternative version of the service which relies on non personalization of the ads.”

    Under the DMA, very large tech platforms must ask users for consent if they want to share their personal data with other parts of their businesses. In Meta’s case, the Commission said it is particularly concerned about the competitive advantage Meta receives over its rivals by being able to combine the data from platforms like Instagram and its advertising business.

    Meta has a chance to respond to the charges issued on Monday. However, if the company cannot reach an agreement with regulators before March 2025, Brussels has the power to levy fines of up to 10 percent of the company’s global turnover.

    In the past week, the EU has issued a series of reprimands to US tech giants. The Commission warned Apple that its App Store is in breach of EU rules for preventing app developers from offering promotions directly to their users. Brussels also accused Microsoft of abusing its dominance in the office-software market, following a complaint from rival Slack.

    Morgan Meaker

  • War Crime Prosecutions Enter a New Digital Age


    A custom platform developed by SITU Research aided the International Criminal Court’s prosecution in a war crimes trial for the first time. It could change how justice is enacted on an international scale.

    Vittoria Elliott

  • The Mystery of AI Gunshot-Detection Accuracy Is Finally Unraveling


    This week, New York City’s comptroller published a similar audit of the city’s ShotSpotter system showing that only 13 percent of the alerts the system generated over an eight-month period could be confirmed as gunfire. The auditors noted that while the NYPD has the information necessary to publish data about ShotSpotter’s accuracy, it does not do so. They described the department’s accountability measures as “inadequate” and “not sufficient to demonstrate the effectiveness of the tool.”

    Champaign and Chicago have since canceled their contracts with Flock Safety and SoundThinking, respectively.

    “Raven is over 90 percent accurate at detecting gunshots with around the same accuracy percentage at detecting fireworks,” Josh Thomas, Flock Safety senior vice president of policy and communications, tells WIRED in a statement. “And critically, Raven alerts officers to gun violence incidents they never would have been aware of. In the San Jose report, for example, of the 111 true positive gunshot alerts, SJPD states that only 6 percent were called in to 911.”

    Eric Piza, a professor of criminology at Northeastern University, has conducted some of the most thorough studies available on gunshot detection systems. In a recent study of shooting incidents in Chicago and Kansas City, Missouri, his team’s analysis showed that police responded faster to shooting incidents, stopped their vehicles closer to the scene of shootings, and collected more ballistic evidence when responding to automated gunshot alerts compared to 911 calls. However, there was no reduction in gun-related crimes, and police were no more likely to solve gun crimes in areas with gunshot sensors than in areas without them. That study only examined confirmed shootings; it did not include false-positive incidents where the systems incorrectly identified gunfire.

    In another study in Kansas City, Piza found that shots-fired reports in areas with gunshot sensors were 15 percent more likely to be classified as unfounded compared to shots-fired reports in areas without the systems, where police would have relied on calls to 911 and other reporting methods.

    “If you look at the different goals of the system, research shows that [gunshot detection technology] typically tends to result in quicker police response times,” Piza says. “But research consistently has shown that gun violence victimization doesn’t reduce after gunshot detection technology has been introduced.”

    The New York City comptroller recommended the NYPD not renew its current $22 million contract with SoundThinking without first conducting a more thorough performance evaluation. In its response to the audit, the NYPD wrote that “non-renewal of ShotSpotter services may endanger the public.”

    In its report, San Jose’s Digital Privacy Office recommended that the police department continue looking for ways to improve accuracy if it intends to keep using the Raven system.

    Pointing to the report’s finding that only 6 percent of the confirmed gunshots detected by the system were reported to police via 911 calls or other means, police spokesperson Sergeant Jorge Garibay tells WIRED the SJPD will continue to use the technology. “The system is still proving useful in providing supplementary evidence for various violent gun crimes,” he says. “The hope is to solve more crime and increase apprehension efforts desirably leading to a reduction in gun violence.”

    Todd Feathers