ReportWire

Tag: end-to-end encryption

  • Cindy Cohn Is Leaving the EFF, but Not the Fight for Digital Rights

    After a quarter century defending digital rights, Cindy Cohn announced on Tuesday that she is stepping down as executive director of the Electronic Frontier Foundation. Cohn, who has led the San Francisco–based nonprofit since 2015, says she will leave the role later this year, concluding a chapter that helped define the modern fight over online freedom.

    Cohn first rose to prominence as lead counsel in Bernstein v. Department of Justice, the 1990s case that overturned federal restrictions on publishing encryption code. As EFF’s legal director and later executive director, she guided the group through legal challenges to government surveillance, reforms to computer crime laws, and efforts to hold corporations accountable for data collection. Over the past decade, EFF has expanded its influence, becoming a central force in shaping the debate over privacy, security, and digital freedom.

    In an interview with WIRED, Cohn reflected on EFF’s foundational encryption victories, its unfinished battles against National Security Agency (NSA) surveillance, and the organization’s work protecting independent security researchers. She spoke about the shifting balance of power between corporations and governments, the push for stronger state-level privacy laws, and the growing risks posed by artificial intelligence.

    Though stepping down from leadership, Cohn tells WIRED she plans to remain active in the fight against mass surveillance and government secrecy. Describing herself as “more of a warrior than a manager,” she says her intent is to return to frontline advocacy. She is also at work on a forthcoming book, Privacy’s Defender, due out next spring, which she hopes will inspire a new generation of digital rights advocates.

    This interview has been edited for length and clarity.

    WIRED: Tell us about the fights you won, and the ones that still feel unfinished after 25 years.

    CINDY COHN: The early fight that we made to free up encryption from government regulation still stands out as setting the stage for a potentially secure internet. We’re still working on turning that promise into a reality, but we’re in such a different place than we would’ve been in had we lost that fight. Encryption protects anybody who buys anything online, anyone who uses Signal as a whistleblower or journalist, or just regular people who want privacy and use WhatsApp or Signal. Even the backend certificates provided by Let’s Encrypt—which make sure that when you think you’re going to your bank, you’re actually going to your bank’s website—are all made possible because of encryption. These are all things that would’ve been at risk if we hadn’t won that fight. I think that win was foundational, even though the fights aren’t over.
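
    The certificate check Cohn alludes to can be seen in a few lines of code. Here is a minimal sketch using Python's standard ssl module; example.com is a placeholder host, and the point is simply that a modern TLS client refuses to proceed unless the server's certificate chains to a trusted authority (such as Let's Encrypt) and matches the requested hostname.

    ```python
    # A minimal sketch of the certificate check described above: the client
    # verifies that the certificate the server presents chains to a trusted
    # CA (often Let's Encrypt) and matches the requested hostname.
    # "example.com" is a placeholder; substitute any HTTPS host.
    import socket
    import ssl

    def verify_tls_certificate(hostname: str, port: int = 443) -> dict:
        # The default context loads the system's trusted CA roots and enables
        # hostname checking, so a mismatched or untrusted certificate raises
        # ssl.SSLCertVerificationError instead of silently connecting.
        context = ssl.create_default_context()
        with socket.create_connection((hostname, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                return tls.getpeercert()  # parsed fields of the verified cert

    if __name__ == "__main__":
        cert = verify_tls_certificate("example.com")
        print(cert["subject"], cert["issuer"])
    ```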

    The fights that we’ve had around the NSA and national security, those are still works in progress. We were not successful with our big challenge to the NSA spying in Jewel v. NSA, although over the long arc of that case and the accompanying legislative fights, we managed to claw back quite a bit of what the NSA started doing after 9/11.

    Dell Cameron

  • Telegram CEO Pavel Durov’s Arrest Linked to Sweeping Criminal Investigation

    French prosecutors gave preliminary information in a press release on Monday about the investigation into Telegram CEO Pavel Durov, who was arrested suddenly on Saturday at Paris’ Le Bourget airport. Durov has not yet been charged with any crime, but officials said that he is being held as part of an investigation “against person unnamed” and can be held in police custody until Wednesday.

    The investigation began on July 8 and involves wide-ranging charges related to alleged money laundering, violations of rules on the import and export of encryption tools, refusal to cooperate with law enforcement, and “complicity” in drug trafficking, the possession and distribution of child pornography, and more.

    The investigation was initiated by “Section J3” cybercrime prosecutors and has involved collaboration with France’s Centre for the Fight against Cybercrime (C3N) and Anti-Fraud National Office (ONAF), according to the press release. “It is within this procedural framework in which Pavel Durov was questioned by the investigators,” Paris prosecutor Laure Beccuau wrote in the statement.

    Telegram did not respond to multiple requests for comment about the investigation but asserted in a statement posted to the company’s news channel on Sunday that Durov has “nothing to hide.”

    “Given the existence of several preliminary investigations in France concerning Telegram in relation to the protection of minors’ rights and in cooperation with other French investigation units—for instance, on cyber harassment—the arrest of Durov does not seem to me like a highly exceptional move,” says Cannelle Lavite, a French lawyer who specializes in free-speech matters.

    Lavite notes that Durov is a French citizen who was arrested in French territory with an arrest warrant issued by French judges. She adds that the list of charges involved in the investigation is “extensive,” a wide net that she says is not entirely surprising in the context of “France’s ambiguous legislative arsenal” meant to balance content moderation and free speech.

    Durov is a controversial figure for his leadership of Telegram, in large part because he has not typically cooperated with calls to moderate the platform’s content. In some ways, this has positioned him as a free-speech defender against government censorship, but it has also made Telegram a haven for hate speech, criminal activity, and abuse. Additionally, the platform is often billed as a secure communication tool, but much of it is open and accessible by default.

    “Telegram is not primarily an encrypted messenger; most people use it almost as a social network, and they’re not using any of its features that have end-to-end encryption,” says John Scott-Railton, senior researcher at Citizen Lab. “The implication there is that Telegram has a wide range of abilities and access to potentially do content moderation and respond to lawful requests. This puts Pavel Durov very much in the center of all kinds of potential governmental pressure.”

    On top of all of this, many researchers have questioned whether Telegram’s end-to-end encryption is durable when users do elect to enable it.

    French president Emmanuel Macron said in a social media post on Monday that “France is deeply committed to freedom of expression and communication … The arrest of the president of Telegram on French soil took place as part of an ongoing judicial investigation. It is in no way a political decision.”

    News of Durov’s arrest is fueling concerns, though, that the move could threaten Telegram’s stability and undermine the platform. The case seems poised, too, to have implications in long-standing debates around the world about social media moderation, government influence, and use of privacy-preserving end-to-end encryption.

    Lavite says the case certainly invokes debates about “the balance between the right to encrypted communication and free speech on the one hand, and users’ protection—content moderation—on the other hand.” But she notes that there is a lot of information about the investigation that is unknown and “a lot of blurry zones still.”

    On Monday afternoon, Telegram seemed to be receiving a download boost from the situation, moving from 18th to 8th place in Apple’s US App Store apps ranking. Global iOS downloads were up by 4 percent, and in France the app was number one in the App Store social network category and number three overall.

    Lily Hay Newman

  • How Apple Intelligence’s Privacy Stacks Up Against Android’s ‘Hybrid AI’

    Yet Google and its hardware partners argue that privacy and security are a major focus of the Android AI approach. Justin Choi, vice president and head of the security team for Samsung Electronics’ Mobile eXperience business, says its hybrid AI offers users “control over their data and uncompromising privacy.”

    Choi describes how features processed in the cloud are protected by servers governed by strict policies. “Our on-device AI features provide another element of security by performing tasks locally on the device with no reliance on cloud servers, neither storing data on the device nor uploading it to the cloud,” Choi says.
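
    The split Choi describes can be pictured as a simple router that keeps certain tasks on the device and sends the rest to the cloud. The sketch below is a hypothetical illustration only; the task names, the size threshold, and the toggle are invented stand-ins, not Samsung's or Google's actual logic.

    ```python
    # A hypothetical sketch of hybrid-AI routing: some tasks are pinned to
    # on-device models (data never leaves the phone), while heavier requests
    # go to cloud models. Task names and the threshold are invented; this is
    # not Samsung's or Google's actual policy.
    from dataclasses import dataclass

    ON_DEVICE_TASKS = {"call_screening", "dictation", "short_summary"}

    @dataclass
    class AIRequest:
        task: str
        payload: str
        cloud_allowed: bool = True  # e.g., a user-facing cloud-AI toggle

    def route(request: AIRequest) -> str:
        if request.task in ON_DEVICE_TASKS or len(request.payload) < 2_000:
            return run_on_device(request)  # processed locally, nothing uploaded
        if not request.cloud_allowed:
            raise PermissionError("cloud-based AI disabled in settings")
        return run_in_cloud(request)       # sent to the provider's data center

    def run_on_device(request: AIRequest) -> str:
        return f"[on-device] {request.task} done"

    def run_in_cloud(request: AIRequest) -> str:
        return f"[cloud] {request.task} done"

    print(route(AIRequest(task="call_screening", payload="caller transcript")))
    ```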

    Google says its data centers are designed with robust security measures, including physical security, access controls, and data encryption. When processing AI requests in the cloud, the company says, data stays within secure Google data center architecture, and the firm does not send your information to third parties.

    Meanwhile, Galaxy’s AI engines are not trained with user data from on-device features, says Choi. Samsung “clearly indicates” which AI functions run on the device with its Galaxy AI symbol, and the smartphone maker adds a watermark to show when content has used generative AI.

    The firm has also introduced a new security and privacy option called Advanced Intelligence settings to give users the choice to disable cloud-based AI capabilities.

    Google says it “has a long history of protecting user data privacy,” adding that this applies to its AI features powered on-device and in the cloud. “We utilize on-device models, where data never leaves the phone, for sensitive cases such as screening phone calls,” Suzanne Frey, vice president of product trust at Google, tells WIRED.

    Frey describes how Google products rely on its cloud-based models, which she says ensures that “consumers’ information, like sensitive information that you want to summarize, is never sent to a third party for processing.”

    “We’ve remained committed to building AI-powered features that people can trust because they are secure by default and private by design, and most importantly, follow Google’s responsible AI principles that were first to be championed in the industry,” Frey says.

    Apple Changes the Conversation

    Rather than simply matching the “hybrid” approach to data processing, experts say Apple’s AI strategy has changed the nature of the conversation. “Everyone expected this on-device, privacy-first push, but what Apple actually did was say, it doesn’t matter what you do in AI—or where—it’s how you do it,” Doffman says. He thinks this “will likely define best practice across the smartphone AI space.”

    Even so, Apple hasn’t won the AI privacy battle just yet: The deal with OpenAI—which sees Apple uncharacteristically opening up its iOS ecosystem to an outside vendor—could put a dent in its privacy claims.

    Apple disputes Elon Musk’s claims that the OpenAI partnership compromises iPhone security, pointing to “privacy protections built in for users who access ChatGPT.” The company says you will be asked permission before your query is shared with ChatGPT, while IP addresses are obscured and OpenAI will not store requests—but ChatGPT’s data use policies still apply.

    Partnering with another company is a “strange move” for Apple, but the decision “would not have been taken lightly,” says Jake Moore, global cybersecurity adviser at security firm ESET. While the exact privacy implications are not yet clear, he concedes that “some personal data may be collected on both sides and potentially analyzed by OpenAI.”

    Kate O’Flaherty

  • Proton Is Launching Encrypted Documents to Take On Google Docs

    Yen says Proton has been internally using the system for the last month and is now ready to roll it out to consumers. “I feel it is relatively polished,” Yen says. To compete with other online document editors, he says, the team also built in collaboration functionality from the beginning. This includes real-time editing by multiple people, commenting, and showing when someone else is viewing the document.

    In April, Proton acquired encrypted note-taking app Standard Notes, which is a separate product from Docs. “It’s actually not ‘take Standard Notes and stick it into Proton,’” Yen says, adding that the encryption architectures of the two products were different and that Proton Docs is “more or less a ground-up, clean build in Proton’s ecosystem on our software stack.” (WIRED was unable to test Docs before it launched.)

    The big difference between Proton Docs and Google Docs is the encryption—something that is challenging to do at scale and harder still when multiple people are editing a document at the same time. Yen says it’s not just the contents of documents that are encrypted; so are other elements, such as keystrokes, mouse movements, and file names and paths.
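
    To make the idea concrete, here is a minimal sketch of client-side encryption of both a document's body and its file name, so the server only ever stores ciphertext. AES-GCM from the widely used Python cryptography package is purely a stand-in; Proton's actual open source implementation differs in detail.

    ```python
    # A minimal sketch, not Proton's implementation: encrypt a document's
    # contents and its file name on the client so the server stores only
    # ciphertext. AES-GCM from the "cryptography" package is a stand-in for
    # whatever scheme the real product uses.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_field(key: bytes, plaintext: bytes) -> bytes:
        nonce = os.urandom(12)  # AES-GCM needs a unique nonce per encryption
        return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

    doc_key = AESGCM.generate_key(bit_length=256)  # held only by the clients
    record = {
        "name": encrypt_field(doc_key, b"quarterly-plan.docx"),
        "body": encrypt_field(doc_key, b"Draft: revenue targets for Q3..."),
    }
    # `record` is all the server would ever see: opaque bytes for both fields.
    ```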

    The company, which last month announced it is moving toward nonprofit status, uses open source encryption, and Yen says building the Docs system required encryption key exchange and synchronization to happen across multiple users. Part of this was possible, Yen says, because last year the company added version history for documents stored in its Drive system, which Docs is built on top of.
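
    The key-exchange step Yen mentions can be sketched as wrapping one symmetric document key separately for each collaborator, so every editor can decrypt while the server never holds a usable key. The sketch below uses RSA-OAEP from the Python cryptography package as a stand-in for Proton's actual exchange; the names and keys are illustrative.

    ```python
    # A sketch of multi-user key exchange: one symmetric document key,
    # wrapped once per collaborator's public key, so the server only relays
    # opaque blobs. RSA-OAEP is a stand-in for Proton's real scheme.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    def wrap_doc_key(doc_key: bytes, collaborator_public_keys: dict) -> dict:
        # Only the matching private key can unwrap each entry.
        return {user: pub.encrypt(doc_key, OAEP)
                for user, pub in collaborator_public_keys.items()}

    # Usage: each collaborator later unwraps with their own private key.
    alice = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    bob = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    doc_key = b"\x00" * 32  # stand-in 256-bit document key
    wrapped = wrap_doc_key(doc_key, {"alice": alice.public_key(),
                                     "bob": bob.public_key()})
    assert alice.decrypt(wrapped["alice"], OAEP) == doc_key
    ```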

    There are relatively few—if any—major end-to-end encrypted document editors online. Other existing services, which WIRED has not tried, include CryptPad and various note-taking or notepad-style apps. There are also apps that encrypt files locally on your machine, such as Cryptee and Anytype.

    Recently, Proton has been moving quickly to launch new encrypted products—adding cloud storage, a VPN, a password manager, and a calendar alongside its original ProtonMail email service. The company has also faced scrutiny over some information it has provided to law enforcement, such as recovery emails that have been added to accounts. It changed some of its policies in 2021 after being ordered to collect some user metadata. While the company is based outside of the US and EU, it still responds to thousands of Swiss law enforcement requests.

    Ultimately, Yen says, the company is trying to offer as many private alternatives to Big Tech services, particularly Google, as it can. “Everything Google’s got, we’ve got to build as well. That’s the road map. But the challenge, of course, is the order in which you do it,” Yen says. “In some sense, taking privacy to a more mainstream audience also requires going further afield, trying different things, and being a bit more adventurous in the things that we build and things that we launch.”

    Matt Burgess

  • Apple’s iMessage Encryption Puts Its Security Practices in the DOJ’s Crosshairs

    The argument is one that some Apple critics have made for years, as spelled out in an essay in January by Cory Doctorow, the science fiction writer, tech critic, and co-author of Chokepoint Capitalism. “The instant an Android user is added to a chat or group chat, the entire conversation flips to SMS, an insecure, trivially hacked privacy nightmare that debuted 38 years ago—the year Wayne’s World had its first cinematic run,” Doctorow writes. “Apple’s answer to this is grimly hilarious. The company’s position is that if you want to have real security in your communications, you should buy your friends iPhones.”

    In a statement to WIRED, Apple says it designs its products to “work seamlessly together, protect people’s privacy and security, and create a magical experience for our users,” and adds that the DOJ lawsuit “threatens who we are and the principles that set Apple products apart” in the marketplace. The company also says it hasn’t released an Android version of iMessage because it couldn’t ensure that third parties would implement it in ways that met the company’s standards.

    “If successful, [the lawsuit] would hinder our ability to create the kind of technology people expect from Apple—where hardware, software, and services intersect,” the statement continues. “It would also set a dangerous precedent, empowering government to take a heavy hand in designing people’s technology. We believe this lawsuit is wrong on the facts and the law, and we will vigorously defend against it.”

    Apple has, in fact, not only declined to build iMessage clients for Android or other non-Apple devices, but actively fought against those who have. Last year, a service called Beeper launched with the promise of bringing iMessage to Android users. Apple responded by tweaking its iMessage service to break Beeper’s functionality, and the startup called it quits in December.

    Apple argued in that case that Beeper had harmed users’ security—in fact, it did compromise iMessage’s end-to-end encryption by decrypting and then re-encrypting messages on a Beeper server, though Beeper had vowed to change that in future updates. Beeper cofounder Eric Migicovsky argued that Apple’s heavy-handed move to reduce Apple-to-Android texts to traditional text messaging was hardly a more secure alternative.
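
    The technical nub of that dispute is easy to state: any relay that decrypts and re-encrypts traffic necessarily handles the plaintext, which is exactly what end-to-end encryption is supposed to rule out. The toy model below illustrates the general concern; it is not Beeper's architecture.

    ```python
    # A toy model of why decrypt-and-re-encrypt relaying breaks end-to-end
    # encryption: the relay holds keys for both legs, so it sees plaintext.
    # This illustrates the general concern, not Beeper's actual design.
    from cryptography.fernet import Fernet

    sender_leg = Fernet(Fernet.generate_key())    # key shared: sender <-> relay
    receiver_leg = Fernet(Fernet.generate_key())  # key shared: relay <-> receiver

    ciphertext_in = sender_leg.encrypt(b"meet at noon")

    # The relay recovers the plaintext before re-encrypting; that visibility
    # is the E2EE violation, whatever the relay then does with the message.
    plaintext_at_relay = sender_leg.decrypt(ciphertext_in)
    ciphertext_out = receiver_leg.encrypt(plaintext_at_relay)

    assert receiver_leg.decrypt(ciphertext_out) == b"meet at noon"
    ```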

    “It’s kind of crazy that we’re now in 2024 and there still isn’t an easy, encrypted, high-quality way for something as simple as a text between an iPhone and an Android,” Migicovsky told WIRED in January. “I think Apple reacted in a really awkward, weird way—arguing that Beeper Mini threatened the security and privacy of iMessage users, when in reality, the truth is the exact opposite.”

    Even as Apple has faced accusations of hoarding iMessage’s security properties to the detriment of smartphone owners worldwide, it’s only continued to improve those features: In February it upgraded iMessage to use new cryptographic algorithms designed to be immune to quantum codebreaking, and last October it added Contact Key Verification, a feature designed to prevent man-in-the-middle attacks that spoof intended contacts to intercept messages. Perhaps more importantly, it’s vowed to adopt the RCS standard to allow for improvements in messaging with Android users—although the company did not say whether those improvements would include end-to-end encryption.
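
    At a high level, verification features of this kind work by letting two people derive a short code from the public keys their devices are actually using and compare those codes over a separate channel; a key substituted by an attacker produces a mismatch. The sketch below shows that general fingerprint-comparison pattern using SHA-256, which is only an illustrative stand-in; Apple's actual construction and encoding differ.

    ```python
    # A general sketch of out-of-band key verification, the idea behind
    # features like Contact Key Verification: both parties derive a short
    # code from the public key they see and compare codes over another
    # channel. SHA-256 here is illustrative, not Apple's construction.
    import hashlib

    def fingerprint(public_key_bytes: bytes) -> str:
        digest = hashlib.sha256(public_key_bytes).hexdigest()
        return " ".join(digest[i:i + 4] for i in range(0, 24, 4))  # short code

    alice_sees = fingerprint(b"bob-public-key")  # key Alice's device received
    bob_has = fingerprint(b"bob-public-key")     # key actually on Bob's device

    # Matching codes mean no interception; a man-in-the-middle who swapped
    # in a different key would produce a mismatch here.
    assert alice_sees == bob_has
    ```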

    Andy Greenberg, Andrew Couts

  • Signal Finally Rolls Out Usernames, So You Can Keep Your Phone Number Private

    The third new feature, which is not enabled by default and which Signal recommends mainly for high-risk users, allows you to turn off not just your number’s visibility but its discoverability. That means no one can find you in Signal unless they have your username, even if they already know your number or have it saved in their address book. That extra safeguard might be important if you don’t want anyone to be able to tie your Signal profile to your phone number, but it will also make it significantly harder for people who know you to find you on Signal.

    The new phone number protections should now make it possible to use Signal to communicate with untrusted people in ways that would have previously presented serious privacy risks. A reporter can now post a Signal username on a social media profile to allow sources to send encrypted tips, for instance, without also sharing a number that allows strangers to call their cell phone in the middle of the night. An activist can discreetly join an organizing group without broadcasting their personal number to people in the group they don’t know.

    In the past, using Signal without exposing a private number in either of those situations would have required setting up a new Signal number on a burner phone—a difficult privacy challenge for people in many countries that require identification to buy a SIM card—or with a service like Google Voice. Now you can simply set a username instead, which can be changed or deleted at any time. (Any conversations you’ve started with the old username will switch over to the new one.) To avoid storing even those usernames, Signal is also using a cryptographic function called a Ristretto hash, which allows it to instead store a list of unique strings of characters that encode those handles.
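
    The storage trick is worth spelling out: the service keeps only a hash of each username and recomputes the hash at lookup time, so plaintext handles never sit in its database. Signal hashes usernames to the Ristretto group; the SHA-256 stand-in below is only meant to show the lookup pattern, with invented example values.

    ```python
    # A simplified sketch of hashed-username lookup: the server stores only
    # a hash and recomputes it at query time, so plaintext handles are never
    # in the database. Signal hashes to the Ristretto group; SHA-256 below
    # is a stand-in to show the pattern. Example values are invented.
    import hashlib

    def username_hash(username: str) -> bytes:
        return hashlib.sha256(username.lower().encode("utf-8")).digest()

    directory = {}  # hash -> account ID; the server never stores "maya.64"

    def register(username: str, account_id: str) -> None:
        directory[username_hash(username)] = account_id

    def lookup(username: str) -> str | None:
        return directory.get(username_hash(username))

    register("maya.64", "account-123")
    assert lookup("maya.64") == "account-123"
    assert lookup("someone.else") is None
    ```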

    Amid these new features designed to calibrate exactly who can learn your phone number, however, one key role for that number hasn’t changed: There’s still no way to avoid sharing your phone number with Signal itself when you register. The persistence of that requirement, even after this upgrade, will no doubt rankle critics who have pushed Signal’s developers to better cater to users seeking more complete anonymity, so that even Signal’s own staff couldn’t see a phone number that might identify a user, or hand that number over to a surveillance agency wielding a court order.

    Signal president Meredith Whittaker says that, for better or worse, a phone number remains necessary as the identifier Signal privately collects from its users. That’s partly because phone numbers are scarce, which prevents spammers from creating endless accounts. Phone numbers are also what allow anyone to install Signal and have it immediately populate with contacts from their address book, a key element of its usability.

    In fact, designing a system that prevents spam accounts and imports the user’s address book without requiring a phone number is “a deceptively hard problem,” says Whittaker. “Spam prevention and actually being able to connect with your social graph on a communications app—those are existential concerns,” she says. “That’s the reason that you still need a phone number to register, because we still need a thing that does that work.”
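
    The address-book population Whittaker refers to is a contact-discovery problem: the client must learn which of its contacts use the service without handing the server a readable list. The naive sketch below hashes each number before matching; Signal's production system is far more protective (matching runs inside secure enclaves, since phone-number hashes are easy to brute-force), so treat this only as an illustration of why a phone-number identifier makes discovery straightforward.

    ```python
    # A naive sketch of contact discovery: the client uploads hashes of its
    # address-book numbers and learns which ones are registered. Signal's
    # production system is far more protective; phone-number hashes alone
    # are easy to brute-force. Numbers below are invented examples.
    import hashlib

    def h(number: str) -> bytes:
        return hashlib.sha256(number.encode("utf-8")).digest()

    registered_users = {h("+15550100"), h("+15550101")}  # server-side set

    def discover(address_book: list[str]) -> list[str]:
        # Returns the subset of the user's contacts that have accounts.
        return [n for n in address_book if h(n) in registered_users]

    print(discover(["+15550100", "+15550199"]))  # -> ['+15550100']
    ```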

    Andy Greenberg