ReportWire

Tag: Internet privacy

  • California Finalizes 2025 CCPA Rules on Data & AI Oversight

    The flags fly in front of Sacramento’s Capitol Building
    Credit: Christopher Boswell via Adobe Stock

    If you’ve ever been rejected for a job by an algorithm, denied an apartment by a software program, or had your health coverage questioned by an automated system, California just voted to change the rules of the game. On July 24, 2025, the California Privacy Protection Agency (CPPA) voted to finalize one of the most consequential privacy rulemakings in U.S. history. The new regulations—covering cybersecurity audits, risk assessments, and automated decision-making technology (ADMT)—are the product of nearly a year of public comment, political pressure, and industry lobbying. 

    They represent the most ambitious expansion of U.S. privacy regulation since voters approved the California Privacy Rights Act (CPRA) in 2020 and its provisions took effect in 2023. For the first time, businesses face binding obligations around automated decision-making, cybersecurity audits, and ongoing risk assessments.

    How We Got Here: A Contentious Rulemaking

    The CPPA formally launched the rulemaking process in November 2024. At stake was how California would regulate technologies often grouped under the “AI” umbrella. The CPPA opted to focus narrowly on automated decision-making technology (ADMT) rather than attempting to define AI in general, a move that generated both relief and frustration among stakeholders. The groups weighing in ranged from Silicon Valley giants to labor unions and gig workers, reflecting the many corners of the economy that automated decision-making touches.

    Early drafts had explicitly mentioned “artificial intelligence” and “behavioral advertising.” By the time the final rules were adopted, those references were stripped out. Regulators said they wanted to avoid ambiguity and keep the rules from sweeping in too many technologies. Critics said the changes weakened the rules.

    The comment period drew over 575 pages of submissions from more than 70 organizations and individuals, including tech companies, civil society groups, labor advocates, and government officials. Gig workers described being arbitrarily deactivated by opaque algorithms. Labor unions argued the rules should have gone further to protect employees from automated monitoring. On the other side, banks, insurers, and tech firms warned that the regulations created duplicative obligations and legal uncertainty.

    The CPPA staff defended the final draft as one that “strikes an appropriate balance,” while acknowledging the need to revisit these rules as technology and business practices evolve. After the July 24 vote, the agency formally submitted the package to the Office of Administrative Law, which has 30 business days to review it for procedural compliance before the rules take effect.

    Automated Decision-Making Technology (ADMT): Redefining AI Oversight

    The centerpiece of the regulations is the framework for ADMT. The rules define ADMT as “any technology that processes personal information and uses computation to replace human decisionmaking, or substantially replace human decisionmaking.”

    The CPPA applies these standards to what it calls “significant decisions”: choices that determine whether someone gets a job or contract, qualifies for a loan, secures housing, is admitted to a school, or receives healthcare. In practice, that means résumé-screening algorithms, tenant-screening apps, loan approval software, and healthcare eligibility tools all fall within the law’s scope.

    Companies deploying ADMT for significant decisions will face several new obligations. They must provide plain-language pre-use notices so consumers understand when and how automated systems are being applied. Individuals must also be given the right to opt out or, at minimum, appeal outcomes to a qualified human reviewer with real authority to reverse the decision. Businesses are further required to conduct detailed risk assessments, documenting the data inputs, system logic, safeguards, and potential impacts. In short, if an algorithm decides whether you get hired, approved for a loan, or accepted into housing, the company has to tell you up front, offer a meaningful appeal, and prove that the system isn’t doing more harm than good. Liability also cannot be outsourced: responsibility stays with the business itself, and firms remain on the hook even when they rely on third-party vendors.

    Some tools are excluded—like firewalls, anti-malware, calculators, and spreadsheets—unless they are actually used to make the decision. Additionally, the CPPA tightened what counts as “meaningful human review.” Reviewers must be able to interpret the system’s output, weigh other relevant information, and have genuine authority to overturn the result.

    Compliance begins on January 1, 2027.

    Cybersecurity Audits: Scaling Expectations

    Another pillar of the new rules is the requirement for annual cybersecurity audits. For the first time under state law, companies must undergo independent assessments of their security controls.

    The audit requirement applies broadly to larger data-driven businesses. It covers companies with annual gross revenue exceeding $26.6 million that process the personal information of more than 250,000 Californians, as well as firms that derive half or more of their revenue from selling or sharing personal data.
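
    To make the thresholds concrete, here is a minimal sketch, assuming a plain reading of the figures summarized above, of how a business might check whether the audit requirement applies. The function name and parameters are invented for illustration; this is not the regulation’s text and it is not legal advice.

    ```python
    # Illustrative sketch only: a simplified reading of the audit thresholds
    # described above, not the regulation's actual text and not legal advice.

    def audit_required(annual_revenue_usd: float,
                       ca_consumers_processed: int,
                       revenue_share_from_selling_data: float) -> bool:
        """Rough check of whether the annual cybersecurity audit would apply."""
        large_data_processor = (
            annual_revenue_usd > 26_600_000
            and ca_consumers_processed > 250_000
        )
        data_broker_like = revenue_share_from_selling_data >= 0.5
        return large_data_processor or data_broker_like

    # Example: a $40M company processing data on 300,000 Californians
    print(audit_required(40_000_000, 300_000, 0.1))  # True
    ```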

    To ensure objectivity, audits must be conducted by independent professionals who do not report to a Chief Information Security Officer (CISO) or other executives directly responsible for cybersecurity.

    The audits cover a comprehensive list of controls, from encryption and multifactor authentication to patch management and employee training, and must be certified annually to the CPPA or Attorney General if requested.

    Deadlines are staggered:

    • April 1, 2028: $100M+ businesses
    • April 1, 2029: $50–100M businesses
    • April 1, 2030: <$50M businesses

    By codifying these requirements, California is effectively setting a de facto national cybersecurity baseline, one that may exceed federal NIST standards. For businesses, these audits won’t just be about checking boxes: they could become the new cost of entry for doing business in California. And because companies can’t wall off California users from the rest of their customer base, the standards are likely to spread nationally through vendor contracts and compliance frameworks.

    Privacy Risk Assessments: Accountability in High-Risk Processing

    The regulations also introduce mandatory privacy risk assessments, required annually for companies engaged in high-risk processing.

    Triggering activities include:

    • Selling or sharing personal information
    • Processing sensitive personal data (including neural data, newly classified as sensitive)
    • Deploying ADMT for significant decisions
    • Profiling workers or students
    • Training ADMT on personal data 

    Each assessment must document the categories of personal information processed, explain the purpose and benefits, identify potential harms and safeguards, and be submitted annually to the CPPA starting April 21, 2028, with attestations under penalty of perjury. That last clause is designed to prevent “paper compliance”: unlike voluntary risk assessments, California’s system requires executives to sign off under penalty of perjury, tying accountability directly to the personal liability of the signatories if their systems mishandle sensitive data.

    Other Notable Provisions

    Beyond these headline rules, the CPPA also addressed sector-specific issues and tied in earlier reforms. For the insurance industry, the regulations clarify how the CCPA applies to companies that routinely handle sensitive personal and health data—an area where compliance expectations were often unclear. The rules also fold in California’s Delete Act, which takes effect on August 1, 2026. That law will give consumers a single, one-step mechanism to request deletion of their personal information across all registered data brokers, closing a major loophole in the data marketplace and complementing the broader CCPA framework. Together, these measures reinforce California’s role as a privacy trendsetter, creating tools that other states are likely to copy as consumers demand similar rights.

    Implications for California

    California has long served as the nation’s privacy laboratory, pioneering protections that often ripple across the country. This framework makes California one of the first U.S. jurisdictions to directly regulate automated decision-making, positioning the state alongside the EU AI Act and the Colorado AI Act with one of the world’s most demanding compliance regimes.

    However, the rules also set up potential conflict with the federal government. America’s AI Action Plan, issued earlier this year, emphasizes innovation over regulation and warns that restrictive state-level rules could jeopardize federal AI funding decisions. This tension may play out in future policy disputes.

    For California businesses, the impact is immediate. Companies must begin preparing governance frameworks, reviewing vendor contracts, and updating consumer-facing disclosures now. These compliance efforts build on earlier developments in California privacy law, including the creation of a dedicated Privacy Law Specialization for attorneys. This specialization will certify legal experts equipped to navigate the state’s intricate web of statutes and regulations, from ADMT disclosures to phased cybersecurity audits. Compliance will be expensive, but it will also drive demand for new privacy officers, auditors, and legal specialists. Mid-sized firms may struggle, while larger companies may gain an edge by showing early compliance. For businesses outside California, the ripple effects may be unavoidable because national companies will have to standardize around the state’s higher bar.

    The CPPA’s finalized regulations mark a structural turning point in U.S. privacy and AI governance. Obligations begin as early as 2026 and accelerate through 2027–2030, giving businesses a narrow window to adapt. For consumers, the rules promise greater transparency and the right to challenge opaque algorithms. For businesses, they establish California as the toughest compliance environment in the country, forcing firms to rethink how they handle sensitive data, automate decisions, and manage cybersecurity. California is once again setting the tone for global debates on privacy, cybersecurity, and AI. Companies that fail to keep pace will not only face regulatory risk but could also lose consumer trust in the world’s fifth-largest economy. Just as California’s auto emissions standards reshaped national car design, its privacy rules are likely to shape national policy on data and AI. Other states will borrow from California, and Washington will eventually have to decide whether to match it or rein it in.

    What starts in Sacramento rarely stays there. From Los Angeles to Silicon Valley, California just set the blueprint for America’s data and AI future.

    Hillah Greenberg


  • Apple’s AI Cloud System Makes Big Privacy Promises, but Can It Keep Them?


    Apple’s new Apple Intelligence system is designed to infuse generative AI into the core of iOS. The system offers users a host of new services, including text and image generation as well as organizational and scheduling features. Yet while the system provides impressive new capabilities, it also brings complications. For one thing, the AI system relies on a huge amount of iPhone users’ data, presenting potential privacy risks. At the same time, the AI system’s substantial need for increased computational power means that Apple will have to rely increasingly on its cloud system to fulfill users’ requests.

    Apple has historically offered iPhone customers unparalleled privacy; it’s a big part of the company’s brand. Part of those privacy assurances has been the option to choose when mobile data is stored locally and when it’s stored in the cloud. While an increased reliance on the cloud might ring some privacy alarm bells, Apple has anticipated these concerns and created a startling new system that it calls its Private Cloud Compute, or PCC. This is really a cloud security system designed to keep users’ data away from prying eyes while it’s being used to help fulfill AI-related requests.

    On paper, Apple’s new privacy system sounds really impressive. The company claims to have created “the most advanced security architecture ever deployed for cloud AI compute at scale.” But what looks like a massive achievement on paper could ultimately cause broader issues for user privacy down the road. And it’s unclear, at least at this juncture, whether Apple will be able to live up to its lofty promises.

    How Apple’s Private Cloud Compute Is Supposed to Work

    In many ways, cloud systems are just giant databases. If a bad actor gets into that system/database, they can look at the data contained within. However, Apple’s Private Cloud Compute (PCC) brings a number of unique safeguards that are designed to prevent that kind of access.

    Apple says it has implemented its security system at both the software and hardware levels. The company created custom servers that will house the new cloud system, and those servers go through a rigorous process of screening during manufacturing to ensure they are secure.  “We inventory and perform high-resolution imaging of the components of the PCC node,” the company claims. The servers are also being outfitted with physical security mechanisms such as a tamper-proof seal. iPhone users’ devices can only connect to servers that have been certified as part of the protected system, and those connections are end-to-end encrypted, meaning that the data being transmitted is pretty much untouchable while in transit.

    Once the data reaches Apple’s servers, there are more protections to ensure that it stays private. Apple says its cloud is leveraging stateless computing to create a system where user data isn’t retained past the point at which it is used to fulfill an AI service request. So, according to Apple, your data won’t have a significant lifespan in its system. The data will travel from your phone to the cloud, interact with Apple’s high-octane AI algorithms—thus fulfilling whatever random question or request you’ve submitted (“draw me a picture of the Eiffel Tower on Mars”)—and then the data (again, according to Apple) will be deleted.

    Apple has instituted an array of other security and privacy protections that can be read about in more detail on the company’s blog. These defenses, while diverse, all seem designed to do one thing: prevent any breach of the company’s new cloud system.

    But Is This Really Legit?

    Companies make big cybersecurity promises all the time and it’s usually impossible to verify whether they’re telling the truth or not. FTX, the failed crypto exchange, once claimed it kept users’ digital assets in air-gapped servers. Later investigation showed that was pure bullshit. But Apple is different, of course. To prove to outside observers that it’s really securing its cloud, the company says it will launch something called a “transparency log” that involves full production software images (basically copies of the code being used by the system). It plans to publish these logs regularly so that outside researchers can verify that the cloud is operating just as Apple says.
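
    To make that verification idea concrete, here is a rough conceptual sketch of what a client-side check against such a transparency log could look like. This is not Apple’s code or API; the log format, hashes, and function names are assumptions made purely for illustration.

    ```python
    # Conceptual sketch only: not Apple's implementation or API. It illustrates
    # the general idea of refusing to talk to a server whose software image
    # does not appear in a published transparency log.
    import hashlib

    # Hypothetical published log: hashes of software images the operator
    # has made public for outside researchers to inspect.
    PUBLISHED_MEASUREMENTS = {
        hashlib.sha256(b"pcc-release-build-1").hexdigest(),
        hashlib.sha256(b"pcc-release-build-2").hexdigest(),
    }

    def server_is_trusted(reported_measurement: str) -> bool:
        """Only trust servers whose software appears in the public log."""
        return reported_measurement in PUBLISHED_MEASUREMENTS

    def send_request(payload: bytes, reported_measurement: str) -> None:
        if not server_is_trusted(reported_measurement):
            raise RuntimeError("Server software not in transparency log; refusing to send data.")
        # In the real system the payload would also be end-to-end encrypted.
        print(f"Sending {len(payload)} bytes to an attested server.")

    send_request(b"draw the Eiffel Tower on Mars",
                 hashlib.sha256(b"pcc-release-build-1").hexdigest())
    ```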

    What People Are Saying About the PCC

    Apple’s new privacy system has notably polarized the tech community. While the sizable effort and unparalleled transparency that characterize the project have impressed many, some are wary of the broader impacts it may have on mobile privacy in general. Most notably—aka loudly—Elon Musk immediately began proclaiming that Apple had betrayed its customers.

    Simon Willison, a web developer and programmer, told Gizmodo that the “scale of ambition” of the new cloud system impressed him.

    “They are addressing multiple extremely hard problems in the field of privacy engineering, all at once,” he said. “The most impressive part I think is the auditability—the bit where they will publish images for review in a transparency log which devices can use to ensure they are only talking to a server running software that has been made public. Apple employs some of the best privacy engineers in the business, but even by their standards this is a formidable piece of work.”

    But not everybody is so enthused. Matthew Green, a cryptography professor at Johns Hopkins University, expressed skepticism about Apple’s new system and the promises that went along with it.

    “I don’t love it,” said Green with a sigh. “My big concern is that it’s going to centralize a lot more user data in a data center, whereas right now most of that is on people’s actual phones.”

    Historically, Apple has made local data storage a mainstay of its mobile design, because cloud systems are known for their privacy deficiencies.

    “Cloud servers are not secure, so Apple has always had this approach,” Green said. “The problem is that, with all this AI stuff that’s going on, Apple’s internal chips are not powerful enough to do the stuff that they want it to do. So they need to send the data to servers and they’re trying to build these super protected servers that nobody can hack into.”

    He understands why Apple is making this move, but doesn’t necessarily agree with it, since it means a higher reliance on the cloud.

    Green says Apple also hasn’t made it clear whether it will explain to users what data remains local and what data will be shared with the cloud. This means that users may not know what data is being exported from their phones. At the same time, Apple hasn’t made it clear whether iPhone users will be able to opt out of the new PCC system. If users are forced to share a certain percentage of their data with Apple’s cloud, it may signal less autonomy for the average user, not more. Gizmodo reached out to Apple for clarification on both of these points and will update this story if the company responds.

    To Green, Apple’s new PCC system signals a shift in the phone industry to a more cloud-reliant posture. This could lead to a less secure privacy environment overall, he says.

    “I have very mixed feelings about it,” Green said. “I think enough companies are going to be deploying very sophisticated AI [to the point] where no company is going to want to be left behind. I think consumers will probably punish companies that don’t have great AI features.”

    Lucas Ropek


  • There’s One Big Problem With the New Federal Data Privacy Bill


    Americans have wanted a federal privacy law for years, but intensive lobbying by the tech industry and general incompetence by our federal legislators have repeatedly thwarted that desire. Well, in 2024, it’s possible that we may finally get a strong federal privacy law.

    I’ll say it again: It’s possible. It’s also technically possible that frogs could rain from the sky over lower Manhattan, coating New Yorkers in a spring shower of amphibious guts, but is that actually likely to happen?

    The American Privacy Rights Act of 2024, recently introduced by Cathy McMorris Rodgers (R-WA) and Maria Cantwell (D-WA), would create basic digital privacy protections for Americans. The law, if enacted, would create a variety of protections and rights for consumers, including the ability to access, control, and delete information collected by companies.

    While that may sound like a good thing, there’s one aspect of the legislation that privacy advocates seem concerned about: the proposed law would eliminate potentially stronger state-level protections that currently exist. While privacy rights groups remain cautiously optimistic about the APRA’s potential, they are also wary of its proposed preemption of state laws. And even if the currently proposed protections look strong, the legislative process is just beginning, and there’s no telling what the federal law may look like after what is sure to be a long, combative policymaking process.

    Here’s a quick look at what the legislation currently promises, and what privacy advocates are saying about it.

    The right to access, control, and delete

    The American Privacy Rights Act would create broad protections for Americans’ data, giving consumers the ability to access, control, and delete data covered by the legislation. The policy would give all Americans the power to request information from entities that have collected data about them. Businesses that fall under the law would need to comply with consumers’ requests within “specified timeframes,” the bill states. The bill allows certain exemptions from these mandates, including small businesses (which are defined as companies making “$40,000,000 or less in annual revenue” or that collect, process, retain, or transfer “the covered data of 200,000 or fewer individuals”), as well as governments, and “entities working on behalf of governments.”

    Data minimization

    The bill would also mandate something called “data minimization.” The idea here is to reduce the overall amount of information that companies can collect about web users. Bill backers say that companies covered by the legislation will not be able to “collect, process, retain, or transfer data beyond what is necessary, proportionate, or limited to provide or maintain a product or service requested by an individual, or provide a communication reasonably anticipated in the context of the relationship, or a permitted purpose.” Again, while that sounds good, the devil is in the details here, and it’s not totally clear yet what this sort of data minimization would look like in real life.

    What is covered data?

    The bill defines the data covered by the legislation as follows:

    …information that identifies or is linked or reasonably linkable to an individual or device. It does not include de-identified data, employee data, publicly available information, inferences made from multiple sources of publicly available information that do not meet the definition of sensitive covered data and are not combined with covered data, and information in a library, archive, or museum collection subject to specific limitations.

    Empowering the FTC

    Enforcement of the law would take place at both the federal and state levels. Most notably, the Federal Trade Commission would be tasked with developing regulations and technical specifications for a “centralized mechanism for individuals to exercise” their opt-out rights, as well as other technical issues surrounding the execution of the legislation, the bill states. At the same time, the bill gives authority to “State attorneys general, chief consumer protection officers, and other officers of a State in Federal district court” to pursue enforcement actions against companies that violate the law.

    Taking aim at the data broker industry

    The bill also targets data brokers. Under the new legislation, the FTC would be mandated to establish a data broker registry that could be used by consumers to identify which companies are brokers and to opt out of data collection by those firms. All data brokers that collect data on more than 5,000 people would be forced to re-register with the federal registry every year. At the same time, brokers would also be forced to maintain their own websites that identify them as data brokers and include a tool for consumers to opt out.

    Private right of action

    A longstanding goal of privacy advocates has been a private right of action: a mechanism allowing individual consumers to sue companies that have violated their rights. A number of state privacy laws have failed to include one. Under the current version of the APRA, consumers would be given a private right of action, allowing them to file litigation against companies that have demonstrably violated their digital privacy rights.

    Privacy advocates remain cautiously optimistic

    Given years of inaction on privacy policy by federal regulators, state governments have passed a number of strong privacy laws over the past decade. Some of those laws, like California’s CCPA, have been quite strong. The newly proposed federal law openly acknowledges that it would eliminate “the existing patchwork of state comprehensive data privacy laws” and establish in its place “robust enforcement mechanisms to hold violators accountable.” The fact that the APRA would pre-empt state laws worries some privacy advocates who fear the potential for a watered-down federal law. The fact that the APRA may seem strong now doesn’t mean much, since it could easily be neutered by lobbyists during the legislative process.

    Caitriona Fitzgerald, the deputy director at the Electronic Privacy Information Center, said that the federal law’s preemption of state-level regulation is only appropriate if it ends up being a strong law. “From our perspective—in an ideal world—it would not preempt state laws, it would allow states to pass stronger laws,” said Fitzgerald. “We recognize that compromise is necessary and that this is a big sticking point. If it’s going to preempt state laws, it needs to be stronger than existing state laws and regulations. We’re still evaluating the bill to determine whether that’s the case.”

    Other privacy advocates, like the Surveillance Technology Oversight Project (STOP), expressed similar concerns. “The ADPPA does offer strong privacy protections, especially data minimization rules,” said STOP Communications Director Will Owen. “But the bill falls short by preempting states from taking even stronger action, should they so choose. Worst of all, the ADPPA preempts states from enforcing protections, leaving it solely up to the U.S. executive branch, which has been fickle in enforcing Americans’ privacy rights.”

    Cody Venzke, senior policy counsel at the ACLU, said his organization remained “concerned this bill’s broad preemption of state laws will freeze our ability to respond to evolving challenges posed by technology.”

    Lucas Ropek


  • Baldur’s Gate 3 Lets You Hide Your Sexy Times From Co-Op Friends


    Screenshot: Larian Studios / Kotaku

    There’s a lot of sex in Baldur’s Gate 3. Some of it’s pretty tame, with typical relations between two humanoid characters. Some of it gets a little weirder, like the druid bear sex scene. But if you’re playing a cooperative campaign with your friends, you might not want them to see your avatar get down with other party members. Luckily, if you’re not looking to put on a show—unless you are, and if that’s the case, more power to you—Larian Studios has included an option to hide these scenes from your co-op friends.

    The privacy protection is on by default. In the Gameplay tab in the options menu, you’ll see “Share Private Moments” under “User Options.” The description reads:

    By default, certain scenes are private. This means in multiplayer, other players cannot witness your private moments. If you leave this option disabled, you can toggle each dialogue’s privacy setting. Enabling this option means that you will share everything: all scenes are public, and other players can listen in on your private moments and dreams.

    So already you can keep some of your more intimate moments, whether that be with a romantic partner or just having a conversation, away from prying eyes. But if you want to just lay it all out there, you can disable this protection, too.

    Personally, I’m playing through the game alone the first time before I delve into a co-op campaign. But I also don’t think I’d mind certain scenes, such as just regular conversations with party members, being audible to other players who just happened to be around in a future co-op session.

    The sex scenes I’d probably keep the privacy settings on for, but part of what makes a cooperative campaign interesting is the shared world you and your friends are experiencing together. Finding moments for privacy and recognizing when it isn’t an option is just part of being around other people. So I like the idea that this aspect of the game can be toggled and play into a role-playing experience. It’s neat. Plus, it means you can fuck the bear druid without anyone being the wiser.

    If you want some more ideas on settings worth tweaking in Baldur’s Gate 3, check out some of our early-game tips.

    Kenneth Shepard


  • Microsoft Fined $20 Million For ‘Illegally’ Collecting Children’s Information On Xbox


    The Federal Trade Commission just announced that Microsoft has been fined $20 million “over charges it illegally collected personal information from children who signed up for its Xbox gaming system without their parents’ consent”.

    The ruling follows a larger one from December 2022, when Epic Games, developers of Fortnite, were hit with a $520 million settlement for using “privacy-invasive default settings and deceptive interfaces that tricked Fortnite users, including teenagers and children”.

    In this instance, the FTC says the issue centred around the creation of children’s accounts on an Xbox console, a process that until late 2021 would allow a child to enter a certain amount of personal information before requiring a parent’s assistance and permission. Microsoft had been keeping that data (sometimes for “years”), even if the account wasn’t created, which is a violation of the Children’s Online Privacy Protection Rule (COPPA).

    Microsoft have already responded to the ruling with a post on the official Xbox blog, with Dave McCarthy, CVP Xbox Player Services, saying the violation was a result of a “glitch”, and that Microsoft will “continue improving” going forwards:

    We recently entered into a settlement with the U.S. Federal Trade Commission (FTC) to update our account creation process and resolve a data retention glitch found in our system. Regrettably, we did not meet customer expectations and are committed to complying with the order to continue improving upon our safety measures. We believe that we can and should do more, and we’ll remain steadfast in our commitment to safety, privacy, and security for our community.

    McCarthy goes on to explain the details of this “glitch”, and how it led to retention of children’s data despite this being “inconsistent with our policy to save that information for only 14 days”:

    During the investigation, we identified a technical glitch where our systems did not delete account creation data for child accounts where the account creation process was started but not completed. This was inconsistent with our policy to save that information for only 14 days to make it easier for gamers to pick up where they left off to complete the process. Our engineering team took immediate action: we fixed the glitch, deleted the data, and implemented practices to prevent the error from recurring. The data was never used, shared, or monetized.
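
    For a sense of what a retention rule like that might look like in code, here is a generic sketch of a cleanup job that drops incomplete sign-up records older than 14 days. It is purely illustrative; the names and data structures are invented and have no connection to Microsoft’s actual systems.

    ```python
    # Generic illustration of a 14-day retention rule for incomplete sign-ups.
    # Not Microsoft's code; the record format is invented for the example.
    from datetime import datetime, timedelta, timezone

    RETENTION_WINDOW = timedelta(days=14)

    def purge_stale_signups(pending_signups: list[dict], now: datetime) -> list[dict]:
        """Keep only incomplete sign-up records started within the last 14 days."""
        cutoff = now - RETENTION_WINDOW
        return [rec for rec in pending_signups if rec["started_at"] >= cutoff]

    now = datetime.now(timezone.utc)
    pending = [
        {"account": "incomplete-a", "started_at": now - timedelta(days=3)},   # kept
        {"account": "incomplete-b", "started_at": now - timedelta(days=40)},  # purged
    ]
    print(purge_stale_signups(pending, now))
    ```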

    The FTC’s statement, meanwhile, says:

    Microsoft will pay $20 million to settle Federal Trade Commission charges that it violated the Children’s Online Privacy Protection Act (COPPA) by collecting personal information from children who signed up to its Xbox gaming system without notifying their parents or obtaining their parents’ consent, and by illegally retaining children’s personal information.

    “Our proposed order makes it easier for parents to protect their children’s privacy on Xbox, and limits what information Microsoft can collect and retain about kids,” said Samuel Levine, Director of the FTC’s Bureau of Consumer Protection. “This action should also make it abundantly clear that kids’ avatars, biometric data, and health information are not exempt from COPPA.”

    As part of a proposed order filed by the Department of Justice on behalf of the FTC, Microsoft will be required to take several steps to bolster privacy protections for child users of its Xbox system. For example, the order will extend COPPA protections to third-party gaming publishers with whom Microsoft shares children’s data. In addition, the order makes clear that avatars generated from a child’s image, and biometric and health information, are covered by the COPPA Rule when collected with other personal data. The order must be approved by a federal court before it can go into effect.

    Luke Plunkett


  • Discord Announces Forced Name Changes, Pisses Everyone Off


    Discord is a pretty good product. It’s an easy way to communicate with friends and find realtime communities around topics of mutual interest, and it’s crucial for voice chat across most online multiplayer games. And now Discord’s decided to muck it all up by forcing everyone to switch to a new username in a giant migration no one seems to understand the reasoning for.

    As things stand, every Discord username is case sensitive and has four digits at the end of it. This lets multiple people adopt the same name and also makes it harder to search for people unless you have their exact handle—a virtue in a world where online harassment has become the norm. The system is occasionally annoying but overall befits the platform’s greater intimacy and privacy, and it has helped the service become a great hangout space, especially for gaming. Sony and Microsoft recently integrated it directly into the PlayStation 5 and Xbox Series X/S. And of course it’s also become a hotbed for leaks lately, including classified military reports.

    Image: Discord

    Not content with that successful status quo, Discord now plans to massively shake things up. “We wanted to make it easier for you to identify and add your friends while preserving your ability to use your preferred name across Discord,” the company announced this week. “So, we are removing discriminators and introducing new, unique usernames (@username) and display names.”
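
    For anyone unfamiliar with the two schemes, here is a toy sketch of the difference: the old format pairs a name with a four-digit discriminator, while the new one is a single globally unique handle plus a separate display name. The code is illustrative only and is not Discord’s API.

    ```python
    # Toy illustration of the old vs. new Discord naming schemes; not Discord's API.

    def parse_legacy_tag(tag: str) -> tuple[str, str]:
        """Old scheme: case-sensitive name plus a four-digit discriminator."""
        name, discriminator = tag.rsplit("#", 1)
        return name, discriminator

    # Many accounts could share the same name under the old scheme:
    print(parse_legacy_tag("WizardFan#0042"))  # ('WizardFan', '0042')
    print(parse_legacy_tag("WizardFan#7781"))  # same name, different account

    # New scheme: one globally unique, lowercase handle per account,
    # with a separate display name shown in chat.
    account = {"username": "wizardfan", "display_name": "WizardFan"}
    print(f"@{account['username']} appears in chat as {account['display_name']}")
    ```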

    These changes will arrive in the coming weeks and will initially be voluntary. Eventually, however, everyone will have to move over to the new system. Display names will still exist and be the primary way people are identified in chat, but the underlying username will become similar to the kind used everywhere else, complete with plenty of potential naming conflicts once everyone is forced to change. Many of the initial reactions have not been kind.

    Aside from the fact that many Discord users seem to have adopted the platform precisely because it’s not easily searchable like Twitter, Facebook, and Instagram, there are plenty of other concerns as well. The move could open up more possibilities for fraud and impersonation, as we’ve seen with the recent hellfire on Twitter. There’s also been speculation that some people will now start camping on high profile usernames that belong to streamers and influencers on other platforms. But the biggest issue is that there’s no clear benefit to users with the change.

    Discord, on the other hand, is a for-profit startup that needs to continually scale in order to get bought or eventually go public. Like Slack, it can’t just be really good at private messaging and voice channels; it seemingly needs to be a huge social platform all its own. Bleh. There are already genuine concerns about how the company harvests user data and might potentially exploit it to train AI chat tools. Many of the better features, meanwhile, are locked behind the service’s monthly Nitro subscription.

    The platform has been great in recent years, and was a lifeline for many when the pandemic shut everyone inside. Who knows what it will become in the future, though, and changes like this are never reassuring. In the meantime, game companies keep moving their internet forums to Discord, leaving entire online communities at the mercy of the Silicon Valley growth mindset.


    Ethan Gach
