ReportWire

Tag: tech policy and law

  • Cindy Cohn Is Leaving the EFF, but Not the Fight for Digital Rights

    After a quarter century defending digital rights, Cindy Cohn announced on Tuesday that she is stepping down as executive director of the Electronic Frontier Foundation. Cohn, who has led the San Francisco–based nonprofit since 2015, says she will leave the role later this year, concluding a chapter that helped define the modern fight over online freedom.

    Cohn first rose to prominence as lead counsel in Bernstein v. Department of Justice, the 1990s case that overturned federal restrictions on publishing encryption code. As EFF’s legal director and later executive director, she guided the group through legal challenges to government surveillance, reforms to computer crime laws, and efforts to hold corporations accountable for data collection. Over the past decade, EFF has expanded its influence, becoming a central force in shaping the debate over privacy, security, and digital freedom.

    In an interview with WIRED, Cohn reflected on EFF’s foundational encryption victories, its unfinished battles against National Security Agency (NSA) surveillance, and the organization’s work protecting independent security researchers. She spoke about the shifting balance of power between corporations and governments, the push for stronger state-level privacy laws, and the growing risks posed by artificial intelligence.

    Though stepping down from leadership, Cohn tells WIRED she plans to remain active in the fight against mass surveillance and government secrecy. Describing herself as “more of a warrior than a manager,” she says her intent is to return to frontline advocacy. She is also at work on a forthcoming book, Privacy’s Defender, due out next spring, which she hopes will inspire a new generation of digital rights advocates.

    This interview has been edited for length and clarity.

    WIRED: Tell us about the fights you won, and the ones that still feel unfinished after 25 years.

CINDY COHN: The early fight that we made to free up encryption from government regulation still stands out as setting the stage for a potentially secure internet. We’re still working on turning that promise into a reality, but we’re in such a different place than we would’ve been in had we lost that fight. Encryption protects anybody who buys anything online, whistleblowers and journalists who use Signal, or just regular people who want privacy and use WhatsApp or Signal. Even the certificates provided by authorities like Let’s Encrypt—which make sure that when you think you’re going to your bank, you’re actually going to your bank’s website—are made possible because of encryption. These are all things that would’ve been at risk if we hadn’t won that fight. I think that win was foundational, even though the fights aren’t over.

    The fights that we’ve had around the NSA and national security, those are still works in progress. We were not successful with our big challenge to the NSA spying in Jewel v. NSA, although over the long arc of that case and the accompanying legislative fights, we managed to claw back quite a bit of what the NSA started doing after 9/11.

Dell Cameron

  • UK Considers New Smartphone Bans for Children

    We know surprisingly little about the impact of smartphone bans in schools, says Sonia Livingstone, a professor at the London School of Economics who studies how digital technologies affect young people. There are relatively few good studies in this area, and those studies that have been done often point in contradictory directions. There is just about enough evidence to suggest that preventing children from accessing their phones improves concentration, says Livingstone, but it’s much harder to say that banning phones leads to less bullying or more play. “The research is just really insufficient for that,” she says.

    Separating out how specific issues like bullying, mental health, sleep time, exercise, and concentration are impacted by smartphones is extremely tricky, says Livingstone. She points to the lack of mental health services for young people and poor pay and conditions for teachers as other potential issues that get overlooked in favor of smartphone bans. Phones might be part of the problem, she says, but they’re also seized upon as an all-purpose solution. “They seem the bit we can do something about,” she says, “and they seem the most obvious new thing.”

The proposed new bill would also raise the age at which children can consent to allow social media companies to use their data from 13 to 16. “If we can create a version of those apps and a version of smartphones effectively for U16s, it will make it easier for them to clock out and go do real-world activities,” MacAllister told the Today show. The UK already passed a law in 2023—the Online Safety Act—that is supposed to protect children from some kinds of content, but most parts of the act have yet to come into force.

    Rather than focusing on bans, legislators should think more about how to teach children to have healthier relationships with technology and hold tech companies to account, says Pete Etchells, a professor at Bath Spa University and author of Unlocked: The Real Science of Screen Time. “We need to think about how we design [digital technologies] better, and support people in understanding how to use them,” he says.

    And getting there, according to Etchells, means moving past simplistic narratives like assuming that restricting screen time will lead to more outdoor play. He points to a 2011 law in South Korea that banned children from playing online games between midnight and 6 in the morning. After four years, the ban had made no meaningful difference in terms of internet use or sleeping hours. The law was dropped in 2021.

    “If you talk to any mental health professional, any researcher in this area, they will tell you there’s no such thing as a single root cause for things getting worse or better,” Etchells says. Looking to smartphone restrictions as the main response to the problems facing young people might turn out to be the easy answer rather than the right one.

Matt Reynolds

  • A Lawsuit Against Perplexity Calls Out Fake News Hallucinations

    Perplexity did not respond to requests for comment.

    In a statement emailed to WIRED, News Corp chief executive Robert Thomson compared Perplexity unfavorably to OpenAI. “We applaud principled companies like OpenAI, which understands that integrity and creativity are essential if we are to realize the potential of Artificial Intelligence,” the statement says. “Perplexity is not the only AI company abusing intellectual property and it is not the only AI company that we will pursue with vigor and rigor. We have made clear that we would rather woo than sue, but, for the sake of our journalists, our writers and our company, we must challenge the content kleptocracy.”

    OpenAI is facing its own accusations of trademark dilution, though. In New York Times v. OpenAI, the Times alleges that ChatGPT and Bing Chat will attribute made-up quotes to the Times, and accuses OpenAI and Microsoft of damaging its reputation through trademark dilution. In one example cited in the lawsuit, the Times alleges that Bing Chat claimed that the Times called red wine (in moderation) a “heart-healthy” food, when in fact it did not; the Times argues that its actual reporting has debunked claims about the healthfulness of moderate drinking.

    “Copying news articles to operate substitutive, commercial generative AI products is unlawful, as we made clear in our letters to Perplexity and our litigation against Microsoft and OpenAI,” says NYT director of external communications Charlie Stadtlander. “We applaud this lawsuit from Dow Jones and the New York Post, which is an important step toward ensuring that publisher content is protected from this kind of misappropriation.”

    If publishers prevail in arguing that hallucinations can violate trademark law, AI companies could face “immense difficulties” according to Matthew Sag, a professor of law and artificial intelligence at Emory University.

    “It is absolutely impossible to guarantee that a language model will not hallucinate,” Sag says. In his view, the way language models operate by predicting words that sound correct in response to prompts is always a type of hallucination—sometimes it’s just more plausible-sounding than others.

    “We only call it a hallucination if it doesn’t match up with our reality, but the process is exactly the same whether we like the output or not.”

Kate Knibbs

  • The Meteoric Rise of Temu and Pinduoduo—and What Might Finally Slow Them Down

    Tsai didn’t mention Pinduoduo by name, but from its beginnings, the shopping platform has never made the merchant its focus like Alibaba did: It has always prioritized getting the user the lowest price online.

    “In retail ecommerce, price wars are continuous and will never stop,” says Zhuang Shuai, retail analyst and founder of Bailian Consulting. “They’re effective in the short term but not a long-term effective way to compete.”

Pinduoduo has even instituted policies that favor customers to the detriment of merchants. Since 2021, Pinduoduo has allowed consumers to get refunds without returning the item if what they received didn’t match the seller’s description. Douyin, the Chinese counterpart to TikTok, introduced a similar policy in September 2023, as did Taobao and JD at the end of the year.

    The platform is also edging into territory traditionally occupied by its competitors by welcoming dealers for established brands like Apple and Louis Vuitton.

Competitors like JD, which banked on being the destination for quality products and fast logistics, risk having their users poached. “JD is worried it can’t retain its existing users, and also won’t be able to attract price-sensitive users,” says one former mid-level JD manager, who asked for anonymity because of potential professional repercussions, about Pinduoduo’s rise. On its app homepage, JD has begun aping Pinduoduo by emphasizing discounts.

Pinduoduo has also made international expansion a priority by launching Temu for international markets, a step that many Chinese retail companies haven’t taken. It used to be fine for a Chinese brand to stay within the Chinese market—after all, the consumer base is huge. Rather than treat international expansion as an afterthought, Pinduoduo spent a reported $21 million on ads at the Super Bowl earlier this year; The Wall Street Journal also reported that Temu was Meta’s single biggest advertiser in 2023, racking up $2 billion in spend. That push has paid off; in the first half of this year, Temu spent more days ranked first for downloads on both the iOS App Store and Google Play Store in the US than any other app.

The company is facing headwinds, though. In addition to the potential US curbs on cheap shipments, other countries and regions are moving in a similar protective direction. In June, Brazil passed a law levying a 20 percent tax on purchases of up to $50. The EU has considered scrapping its $150 duty-free threshold. In August, South Africa announced it would introduce a value-added tax on imported low-value goods, which had previously enjoyed a concession.

Jason Yu, managing director of CTR Market Research, says it’s “very likely” that Temu would take a hit if the US goes through with the curbs. “Competing on lower price will not be a sustainable strategy for companies like Temu or Shein in the long run,” he says. “With the change of law, their advantage in price will be less obvious.”

    It all adds up to “a gloomy outlook for cross-border online shopping in 2025,” says Tendolkar, the research analyst.

    At least on the surface, Pinduoduo isn’t worried. A Pinduoduo spokesperson tells WIRED, “If their [policy change is] fair, we believe they won’t tilt the competitive landscape.”

Lavender Au

  • Pilots Are Dying of Tiredness. Tech Can’t Save Them

    In May 2023, Air India launched safety management software called Coruson, as well as BAM (Boeing Alertness Model), a fatigue-mitigation tool integrated into its rostering system, which is used by airlines to create and manage pilot schedules. Coruson, developed by cloud software company Ideagen, centralizes, analyzes, and reports on safety-related data—such as incidents, hazards, and risk assessments. BAM, developed jointly by Boeing and the software company Jeppesen, predicts and manages pilot fatigue by analyzing flight schedules and performance data. These tools were designed to prevent the creation of fatiguing rosters and pairings, Air India CEO Campbell Wilson noted in an internal message to employees. The carrier also introduced two new digital tools for its crew—the Pilot Sector Report app, to help pilots easily submit information on flight performance, incidents, and observations post-flight; and DocuNet, a digital management system that facilitates the storage, retrieval, and sharing of documents (such as flight manuals, training records, and compliance documents).

Despite these measures, the airline was fined by the DGCA in March this year for violating flight duty time limitation (FDTL) and fatigue management rules. This May, Air India Express cabin staff called in sick en masse to protest against “mismanagement.” This followed a similar protest from the crew, mostly pilots, at Vistara airlines. Both Air India and Vistara are now owned by one of India’s largest conglomerates, the Tata Group, which took over the former from the Indian government in January 2022.

Twenty-five of those who called in sick at Air India Express were terminated. Others were reportedly served an ultimatum. Those sacked were later reinstated by the airline following an intervention by the chief labor commissioner. Nearly a week before, the regional labor commissioner of Delhi had allegedly written to the Tata Group chairman pointing to “blatant violations of labour laws” and insisting that the legitimate concerns of the cabin crew be looked into. According to CNBC, Vistara employees said the agitation at their end had to do with recent salary updates, which fixed pilot pay at 40 flight hours—down from 70. Protesting first officers claimed that the new salary structure would result in an almost 57 percent pay cut. Under the new terms they would also have to fly up to 76 hours to earn what they were previously earning at 70 hours.

    To placate the pilots and get them back to work, management had assured them that salaries for the “extra working hours” would be credited once Vistara was integrated with Air India. At the time, two Air India pilots unions had written to the chairman of the company, saying that such issues were not isolated but systemic. Burnout was the other related issue, with many pilots complaining of inadequate rest and being pushed to their limits.

    Captain Singh, a former senior manager at AirAsia, tells WIRED that such effects significantly increase the risk of accidents, but also adversely affect pilot health in the long run. Tail swaps—rushing between different types of aircraft to take off immediately after disembarking from another—have become more prevalent under the 13-hour rules, and can further contribute to exhaustion, as do hasty acclimatization and, most significantly, landing three, four, or more flights consecutively, which Captain Randhawa described as a “severe energy management challenge.”

In the 2024 “Safety Culture Survey” conducted by Singh’s Safety Matters Foundation in July, 81 percent of 530 respondents, primarily medium- to short-haul pilots, stated that bufferless rosters contribute to their fatigue. As many as 84 percent indicated concerns with the speed and direction of shift rotation. “That’s the problem with the new rostering software the operators are introducing,” a pilot from a private airline, who requested anonymity, says. “They’re optimizers designed to make pilots work every second of their 13-hour schedule, leaving no breathing room.” The buffer-deficient timetables push pilots to their limits, so any additional pressure—like unpredictable weather—can easily overwhelm them.

Solving this issue with wrist-worn fatigue-measuring devices is contentious. But that isn’t the only problem. A year after they were hyped, the buzz around fatigue-management tech has all but fizzled out. There have been no updates from IndiGo about the wrist device. Neither IndiGo nor the Thales Group responded to requests for comment.

Parni Ray

  • Content Creators in the Adult Industry Want a Say in AI Rules

    A group of sex industry professionals and advocates issued an open letter to EU regulators on Thursday, claiming that their views are being overlooked in vital discussions on policing AI technology despite also being implicated in AI’s momentous rise.

    In response to European internet regulations, a collective of adult industry members—including sex workers, erotic filmmakers, sex tech enterprises, and sex educators—urged the European Commission to include them in future negotiations shaping AI regulations, according to the letter, seen by WIRED.

The group includes erotic filmmaker Erika Lust’s company as well as the European Sex Workers’ Rights Alliance campaign group, and the letter is signed by the Open Mind AI initiative. The group aims to alert the commission to what it says is a “critical gap” in discussions on AI regulation. Those coordinating the campaign say the current approach to those discussions risks excluding first-hand perspectives on adult content and overregulating an already-marginalized community.

    “AI is evolving every day [and] we see new developments at every corner,” said Ana Ornelas, a Berlin-based erotic author and educator who goes by the pseudonym Pimenta Cítrica, and who is one of the leaders of the initiative. “It is natural that people will turn to this new technology to satisfy their fantasies.”

But deepfakes are now a major AI threat. Ninety-six percent of them feature nonconsensual “porn,” mostly of women and girls. That is “extremely harmful” to those targeted, as well as to porn performers, says Ornelas. “It’s a threat both to their human integrity and their livelihood,” she adds. “But the way the landscape is posed, adult content creators, sex workers, and educators are getting the shorter end of the stick on both sides of the spectrum.” She says she fears that banning all adult content would sweep away legitimately created content along with nonconsensual material and push people to AI models with no filters at all.

    On August 1, the European Commission introduced what it called the world’s first comprehensive legislation on AI. The aim, it said, is to cultivate responsible use of AI across the bloc. It followed earlier EU legislation policing illegal and harmful activities on digital platforms. But the initiative’s organizers say regulators don’t understand the adult industry, risking censorship, draconian measures, and misunderstandings.

    “We can offer the right insight to policymakers so they can regulate in a way that safeguards fundamental rights, freedom, and fosters a more sex-positive online environment,” says Ornelas. The European Commission did not immediately respond to a WIRED request for comment.

    Sex workers and porn performers have already reported censorship and discrimination linked to global legislation clamping down on sex trafficking and banks limiting their services. Adult industry members, including sex educators, have also had to grapple with suspensions and removals from tech platforms.

    “There’s a lack of awareness of how policies impact our livelihoods,” says Paulita Pappel, an adult filmmaker and an organizer of the initiative. “We are facing discrimination, and if regulators are trying to protect the rights of people, it would be nice if they could protect the digital rights of everyone.”

Lydia Morrish

  • Tether Was Playing a Risky Game, a New Celsius Suit Reveals

    “Ensuring that a stablecoin retains its peg even under stressed market conditions is a solvable problem,” Catalini says. In an optimal scenario, he says, reserves would be made up of exclusively “high-quality, liquid assets,” like short-term US government bonds, and providers would maintain an “adequate capital buffer.”

    In the two years since Celsius filed for bankruptcy, Tether has voluntarily both increased the size of its USDT reserve buffer and slightly reduced the proportion of the reserve made up of secured loans—from 6.76 to 5.55 percent. But Tether “does not operate under a framework that would limit what the directors of the company can and cannot do,” says Catalini. “This is where regulation is required.”

There have been a handful of attempts to regulate the stablecoin industry in major markets. Earlier this year, rules for stablecoin issuers came into force in the EU under the Markets in Crypto-Assets (MiCA) regulation, including requirements regarding the amount of cash a stablecoin issuer must hold, the types of assets that can comprise a stablecoin reserve, the safe custody of reserve assets, and more.

    In April, US senators Cynthia Lummis and Kirsten Gillibrand proposed a bill under which stablecoin issuers would not be permitted to lend out reserve assets. The bill is unlikely to pass through Congress before the upcoming presidential election, says Cooper, but “there is recognition on both sides of the aisle that some level of regulation is necessary.”

    By and large, though, stablecoin businesses have been left to figure out how to police themselves. “We’re dealing with a new asset class that, as of right now, is run by a group of people looking around for guidance as to what is and isn’t allowed—and they are not getting it,” says Cooper. “In an industry that thrives on risk-taking—and there is a lot of that in crypto—it’s not surprising that some outfits are pushing the boundaries.”

    The difficulty for the first handful of regulators that institute stablecoin regimes will be in limiting the threat of a de-peg without driving issuers away. The appetite for risk among stablecoin providers—whose profitability is tied to some degree to the risks they are permitted to take with reserve assets—could lead them to retreat from jurisdictions that impose the most stringent restrictions. “The problem of regulatory arbitrage is as old as time,” Cooper adds.

Since the introduction of MiCA, Tether reportedly has yet to seek a license to operate in the EU. In an interview with WIRED earlier this month, Tether CEO Paolo Ardoino said the company is still “formalizing our strategy for the European market,” but he expressed misgivings about some of the reserve requirements imposed under MiCA, which he described as unsafe.

    Meanwhile, although Ardoino considers stablecoins a potential threat to traditional banks, he balked in the interview at the prospect of Tether being asked to abide by a similarly stringent set of regulations, citing the freedom for banks to lend out the majority of deposits they receive, unlike a stablecoin company.

    But the window for regulatory arbitrage, whatever the motivation, will close, says Catalini, as an international consensus forms around the appropriate controls to be placed on stablecoin issuers. “Regulatory arbitrage is a temporary phenomenon,” he says. “It’s only a matter of time before any stablecoin with significant scale will be required to comply.”

Joel Khalili

  • TikTok Sued by US Justice Department for Alleged Violations of Kids’ Privacy

    In March 2019, TikTok agreed to a US federal court order barring the social media giant from collecting personal information from its youngest users without their parents’ consent. According to a new lawsuit filed by US authorities, TikTok immediately breached that order and now faces penalties of $51,744 per violation per day.

    TikTok “knowingly allowed children under 13 to create accounts in the regular TikTok experience and collected extensive personal information from those children without first providing parental notice or obtaining verifiable parental consent,” the US Department of Justice alleged on behalf of the Federal Trade Commission in a complaint lodged on Friday in federal court in California.

    TikTok spokesperson Michael Hughes says the company strongly disagrees with the allegations. He reiterates a statement the company issued in June, when the FTC had voted to sue, that many of the issues raised relate to “practices that are factually inaccurate or have been addressed.” Hughes adds that TikTok is “proud of our efforts to protect children, and we will continue to update and improve the platform.”

    Lawsuits over alleged violations of children’s privacy are almost a rite of passage for social platforms these days, with companies such as Google, Microsoft, and Epic Games collectively having paid hundreds of millions of dollars in penalties.

But the case against TikTok also falls into the US government’s escalating battle with the service, whose ownership by China-based ByteDance has drawn national security concerns. Some US officials and lawmakers have said they worry about China exploiting TikTok to spread propaganda and gather data on vulnerable Americans. TikTok has rejected the concerns as baseless fear-mongering and is fighting a law that requires it to seek new ownership.

    The complaint filed on Friday alleges that as of 2020, TikTok wouldn’t let users sign up on their own if they entered a birthdate that showed they were under 13 years old. But it allowed those same users to go back, edit their birthdate, and sign up without parental permission.

    TikTok also wouldn’t remove accounts purporting to belong to children unless the user made an explicit admission of their age on their account, according to the lawsuit. TikTok’s hired content moderators allegedly spent just five to seven seconds on average reviewing accounts for age violations. “Defendants actively avoid deleting the accounts of users they know to be children,” the lawsuit states. Additionally, millions of accounts flagged as potentially belonging to children allegedly were never removed because of a bug in TikTok’s internal tools.

The lawsuit acknowledges that TikTok improved some policies and processes over the years but alleges that it still held on to and used personal information of children that it shouldn’t have had in the first place.

    Authorities also took issue with TikTok’s dedicated Kids Mode. The lawsuit alleges that TikTok gathered and shared information about children’s usage of the service and built profiles on them while misleading parents about the data collection. When parents tried to have data on their kids deleted, TikTok forced them to jump through unnecessary hoops, the lawsuit further alleges.

    TikTok should have known better, according to the government, because of the 2019 court order, which stemmed from TikTok’s predecessor—a service known as Musical.ly—allegedly violating a number of rules aimed at protecting children’s privacy. Those rules largely come from the Children’s Online Privacy Protection Act, a law dating to the late-1990s dotcom era that tried to create a safer environment for children on the web.

    Lawmakers in the US this year have been weighing a major update in the form of the Kids Online Safety Act, or KOSA. The proposed measure, which passed the Senate earlier this week, would require services like TikTok to better control kids’ usage. Detractors have said it would unfairly cut off some young populations, such as transgender kids, from vital support networks. KOSA’s fate remains uncertain. But as the case against TikTok allegedly shows, stricter rules may do little to stop companies from pursuing familiar tactics.

Paresh Dave

  • Amazon Has to Recall More Than 400,000 Dangerous Products

    Amazon failed to adequately alert more than 300,000 customers to serious risks—including death and electrocution—that US Consumer Product Safety Commission (CPSC) testing found with more than 400,000 products that third parties sold on its platform.

    The CPSC unanimously voted to hold Amazon legally responsible for third-party sellers’ defective products. Now, Amazon must make a CPSC-approved plan to properly recall the dangerous products—including highly flammable children’s pajamas, faulty carbon monoxide detectors, and unsafe hair dryers that could cause electrocution—which the CPSC fears may still be widely used in homes across America.

    While Amazon scrambles to devise a plan, the CPSC summarized the ongoing risks to consumers:

    If the [products] remain in consumers’ possession, children will continue to wear sleepwear garments that could ignite and result in injury or death; consumers will unwittingly rely on defective [carbon monoxide] detectors that will never alert them to the presence of deadly carbon monoxide in their homes; and consumers will use the hair dryers they purchased, which lack immersion protection, in the bathroom near water, leaving them vulnerable to electrocution.

    Instead of recalling the products, which were sold between 2018 and 2021, Amazon sent messages to customers that the CPSC said “downplayed the severity” of hazards.

In these messages—sent despite what the CPSC described as “conclusive testing that the products were hazardous”—Amazon warned customers only that the products “may fail” to meet federal safety standards and only “potentially” posed risks of “burn injuries to children,” “electric shock,” or “exposure to potentially dangerous levels of carbon monoxide.”

    Typically, a distributor would be required to specifically use the word “recall” in the subject line of these kinds of messages, but Amazon dodged using that language entirely. Instead, Amazon opted to use much less alarming subject lines that said, “Attention: Important safety notice about your past Amazon order” or “Important safety notice about your past Amazon order.”

    Amazon then left it up to customers to destroy products and explicitly discouraged them from making returns. The ecommerce giant also gave every affected customer a gift card without requiring proof of destruction or adequately providing public notice or informing customers of actual hazards, as can be required by law to ensure public safety.

    Further, Amazon’s messages did not include photos of the defective products, as required by law, and provided no way for customers to respond. The commission found that Amazon “made no effort” to track how many items were destroyed or even do the minimum of monitoring the “number of messages that were opened.”

    Amazon still thinks these messages were appropriate remedies, though. An Amazon spokesperson told Ars that Amazon plans to appeal the ruling.

    “We are disappointed by the CPSC’s decision,” Amazon’s spokesperson said. “We plan to appeal the decision and look forward to presenting our case in court. When we were initially notified by the CPSC three years ago about potential safety issues with a small number of third-party products at the center of this lawsuit, we swiftly notified customers, instructed them to stop using the products, and refunded them.”

    Amazon’s “Sidestepped” Safety Obligations

    The CPSC has additional concerns about Amazon’s “insufficient” remedies. It is particularly concerned that anyone who received the products as a gift or bought them on the secondary market likely was not informed of serious known hazards. The CPSC found that Amazon resold faulty hair dryers and carbon monoxide detectors, proving that secondary markets for these products exist.

    “Amazon has made no direct attempt to reach consumers who obtained the hazardous products as gifts, hand-me-downs, donations, or on the secondary market,” the CPSC said.

    Ashley Belanger, Ars Technica

  • The Affordable Connectivity Program Died—and Thousands of Households Have Already Lost Their Internet

    The death of the US government’s Affordable Connectivity Program (ACP) is starting to result in disconnection of internet service for Americans with low incomes. On Friday, Charter Communications reported a net loss of 154,000 internet subscribers that it said was mostly driven by customers canceling after losing the federal discount. About 100,000 of those subscribers were reportedly getting the discount, which in some cases made internet service free to the consumer.

    The $30 monthly broadband discounts provided by the ACP ended in May after Congress failed to allocate more funding. The Biden administration requested $6 billion to fund the ACP through December 2024, but Republicans called the program “wasteful.”

    Republican lawmakers’ main complaint was that most of the ACP money went to households that already had broadband before the subsidy was created. FCC Chairwoman Jessica Rosenworcel warned that killing the discounts would reduce internet access, saying an FCC survey found that 77 percent of participating households would change their plan or drop internet service entirely once the discounts expired.

    Charter’s Q2 2024 earnings report provides some of the first evidence of users dropping internet service after losing the discount. “Second quarter residential Internet customers decreased by 154,000, largely driven by the end of the FCC’s Affordable Connectivity Program subsidies in the second quarter, compared to an increase of 70,000 during the second quarter of 2023,” Charter said.

    Across all ISPs, there were 23 million US households enrolled in the ACP. Research released in January 2024 found that Charter was serving over 4 million ACP recipients and that up to 300,000 of those Charter customers would be “at risk” of dropping internet service if the discounts expired. Given that ACP recipients must meet low-income eligibility requirements, losing the discounts could put a strain on their overall finances even if they choose to keep paying for internet service.

    “The Real Question Is the Customers’ Ability to Pay”

    Charter, which offers service under the brand name Spectrum, has 28.3 million residential internet customers in 41 states. The company’s earnings report said Charter made retention offers to customers that previously received an ACP subsidy. The customer loss apparently would have been higher if not for those offers.

    Light Reading reported that Charter attributed about 100,000 of the 154,000 customer losses to the ACP shutdown. Charter said it retained most of its ACP subscribers so far, but that low-income households might not be able to continue paying for internet service without a new subsidy for much longer:

    “We’ve retained the vast majority of ACP customers so far,” Charter CEO Chris Winfrey said on [Friday’s] earnings call, pointing to low-cost internet programs and the offer of a free mobile line designed to keep those customers in the fold. “The real question is the customers’ ability to pay—not just now, but over time.”

    The ACP only lasted a couple of years. The FCC implemented the $30 monthly benefit in early 2022, replacing a previous $50 monthly subsidy from the Emergency Broadband Benefit Program that started enrolling users in May 2021.

    Separately, the FCC Lifeline program that provides $9.25 monthly discounts is in jeopardy after a court ruling last week. Lifeline is paid for by the Universal Service Fund, which was the subject of a constitutional challenge.

    The US Court of Appeals for the 5th Circuit found that Universal Service fees on phone bills are a “misbegotten tax” that violate the Constitution. But in similar cases, the 6th and 11th circuit appeals courts ruled that the fund is constitutional. The circuit split increases the chances that the Supreme Court will take up the case.

    Disclosure: The Advance/Newhouse Partnership, which owns 12.4 percent of Charter, is part of Advance Publications, which also owns Ars Technica and WIRED parent Condé Nast.

    This story originally appeared on Ars Technica.

    Jon Brodkin, Ars Technica

  • California Supreme Court Rules That Uber and Lyft Drivers Will Remain Independent Contractors

    The California Supreme Court on Thursday ruled unanimously that drivers for app-based companies including Uber, Lyft, and DoorDash will remain independent contractors, as opposed to employees. The decision, upholding a state ballot measure called Proposition 22, was considered a major victory for the gig-economy companies.

    The question of whether those who drive for the companies should be treated as employees or contractors has spurred a yearslong legal battle in the state. In 2020, California voters approved Proposition 22, allowing app-based companies to continue to treat their workers as independent contractors. That vote reversed an earlier court ruling that found such companies controlled too many of their drivers’ working conditions to treat them as contractors. The ballot measure campaign cost its advocates, including Uber, Lyft, Postmates, Instacart, and DoorDash, some $200 million, breaking state records for spending.

    Driver advocates have long argued that those behind the wheel were due the same sort of benefits offered to full-time employees, including health care, sick pay, and workers’ compensation. The companies have said that gig work is an entirely new and flexible form of work, and that treating drivers as employees would reshape their businesses. One 2020 analysis suggested that treating drivers as employees in California would cost Uber and Lyft nearly $800 million annually in just payroll taxes and benefits.

    The 2020 ballot measure required the app-based companies to institute a wage floor, at least for the time drivers spend with passengers in the car, and to pay out health care stipends for workers who drive enough monthly hours.

    “Today’s decision was supposed to bring justice, to confirm that even as workers who are managed by apps on our phone, by algorithms, by AI, that we are indeed workers with robot managers,” Nicole Moore, president of Rideshare Drivers United and a part-time driver in Los Angeles, said during a briefing with reporters following the decision. “And we deserve the same rights and benefits as all other workers in our state. But that did not happen today.” Moore called on lawmakers in the state to find a “creative pathway” to ensure that drivers are protected and paid fairly.

    In a statement, Uber said the ruling put “an end to misguided attempts to force [drivers] into an employment model that they overwhelmingly do not want.” Lyft also praised the decision: “We are pleased to continue to bring Californians closer to their friends, family, and neighbors, and provide drivers with access to flexible earnings opportunities and benefits while preserving their independence.”

    On a call for reporters hosted by proponents of Proposition 22, some drivers said they were glad that app-based companies would maintain their flexibility. “I’m just so grateful right now,” said driver Stephanie Whitfield, who works in the Coachella Valley.

    The ruling won’t have a direct effect on other states’ gig worker laws, but could influence policy in other places. Minnesota and Colorado both recently passed laws instituting better pay standards for app-based drivers, though neither resolved whether workers should be treated as contractors or employees. The Biden administration has taken aim at worker misclassification in the gig economy through new labor rules, though app-based companies say those rules don’t affect their businesses.

    Aarian Marshall, Amanda Hoover

  • Google’s Nonconsensual Explicit Images Problem Is Getting Worse

    In early 2022, two Google policy staffers met with a trio of women victimized by a scam that resulted in explicit videos of them circulating online—including via Google search results. The women were among the hundreds of young adults who responded to ads seeking swimsuit models only to be coerced into performing in sex videos distributed by the website GirlsDoPorn. The site shut down in 2020, and a producer, a bookkeeper, and a cameraman subsequently pleaded guilty to sex trafficking, but the videos kept popping up on Google search faster than the women could request removals.

    The women, joined by an attorney and a security expert, presented a bounty of ideas for how Google could keep the criminal and demeaning clips better hidden, according to five people who attended or were briefed on the virtual meeting. They wanted Google search to ban websites devoted to GirlsDoPorn and videos with its watermark. They suggested Google could borrow the 25-terabyte hard drive on which the women’s cybersecurity consultant, Charles DeBarber, had saved every GirlsDoPorn episode, take a mathematical fingerprint, or “hash,” of each clip, and block them from ever reappearing in search results.
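    The hard-drive proposal amounts to standard hash-matching: compute a digest of every archived clip once, then suppress any newly surfaced file whose digest matches one on the blocklist. Here is a minimal Python sketch of that idea (the use of SHA-256 and the file layout are illustrative assumptions, not details from the meeting):

    ```python
    import hashlib
    from pathlib import Path

    def fingerprint(path: Path) -> str:
        """Compute a SHA-256 digest of a file's bytes, streaming in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def build_blocklist(archive_dir: Path) -> set[str]:
        """Hash every archived clip once; the digests form the blocklist."""
        return {fingerprint(p) for p in archive_dir.rglob("*") if p.is_file()}

    def is_blocked(candidate: Path, blocklist: set[str]) -> bool:
        """A newly crawled file is suppressed if its digest is already known."""
        return fingerprint(candidate) in blocklist
    ```

    One caveat: an exact cryptographic hash only catches byte-for-byte copies, so a real deployment would more likely use perceptual hashing, which still matches re-encoded, resized, or watermark-stripped versions of the same clip.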

    The two Google staffers in the meeting hoped to use what they learned to win more resources from higher-ups. But the victims’ attorney, Brian Holm, left feeling dubious. The policy team was in “a tough spot” and “didn’t have authority to effect change within Google,” he says.

    His gut reaction was right. Two years later, none of those ideas brought up in the meeting have been enacted, and the videos still come up in search.

    WIRED has spoken with five former Google employees and 10 victims’ advocates who have been in communication with the company. They all say that they appreciate that because of recent changes Google has made, survivors of image-based sexual abuse such as the GirlsDoPorn scam can more easily and successfully remove unwanted search results. But they are frustrated that management at the search giant hasn’t approved proposals, such as the hard drive idea, which they believe will more fully restore and preserve the privacy of millions of victims around the world, most of them women.

    The sources describe previously unreported internal deliberations, including Google’s rationale for not using an industry tool called StopNCII that shares information about nonconsensual intimate imagery (NCII) and the company’s failure to demand that porn websites verify consent to qualify for search traffic. Google’s own research team has published steps that tech companies can take against NCII, including using StopNCII.

    The sources believe such efforts would better contain a problem that’s growing, in part through widening access to AI tools that create explicit deepfakes, including ones of GirlsDoPorn survivors. Overall reports to the UK’s Revenge Porn hotline more than doubled last year, to roughly 19,000, as did the number of cases involving synthetic content. Half of over 2,000 Brits in a recent survey worried about being victimized by deepfakes. The White House in May urged swifter action by lawmakers and industry to curb NCII overall. In June, Google joined seven other companies and nine organizations in announcing a working group to coordinate responses.

    Right now, victims can demand prosecution of abusers or pursue legal claims against websites hosting content, but neither of those routes is guaranteed, and both can be costly due to legal fees. Getting Google to remove results can be the most practical tactic and serves the ultimate goal of keeping violative content out of the eyes of friends, hiring managers, potential landlords, or dates—nearly all of whom are likely to turn to Google to look people up.

    A Google spokesperson, who requested anonymity to avoid harassment from perpetrators, declined to comment on the call with GirlsDoPorn victims. She says combating what the company refers to as nonconsensual explicit imagery (NCEI) remains a priority and that Google’s actions go well beyond what is legally required. “Over the years, we’ve invested deeply in industry-leading policies and protections to help protect people affected by this harmful content,” she says. “Teams across Google continue to work diligently to bolster our safeguards and thoughtfully address emerging challenges to better protect people.”

    Paresh Dave

  • US Record Labels Sue AI Music Generators Suno and Udio for Copyright Infringement

    The music industry has officially declared war on Suno and Udio, two of the most prominent AI music generators. A group of music labels including Universal Music Group, Warner Music Group, and Sony Music Group has filed lawsuits in US federal court on Monday morning alleging copyright infringement on a “massive scale.”

    The plaintiffs seek damages of up to $150,000 per work infringed. The lawsuit against Suno was filed in Massachusetts, while the case against Udio’s parent company, Uncharted Inc., was filed in New York. Suno and Udio did not immediately respond to a request for comment.

    “Unlicensed services like Suno and Udio that claim it’s ‘fair’ to copy an artist’s life’s work and exploit it for their own profit without consent or pay set back the promise of genuinely innovative AI for us all,” Recording Industry Association of America chair and CEO Mitch Glazier said in a press release.

    The companies have not publicly disclosed what they trained their generators on. Ed Newton-Rex, a former AI executive who now runs the ethical AI nonprofit Fairly Trained, has written extensively about his experiments with Suno and Udio; Newton-Rex found that he could generate music that “bears a striking resemblance to copyright songs.” In the complaints, the music labels state that they were independently able to prompt Suno into producing outputs that “match” copyrighted work from artists ranging from ABBA to Jason Derulo.

    One example provided in the lawsuit describes how the labels generated songs extremely similar to Chuck Berry’s 1958 rock hit “Johnny B. Goode” in Suno by using prompts like “1950s rock and roll, rhythm & blues, 12 bar blues, rockabilly, energetic male vocalist, singer guitarist,” along with snippets of the song’s lyrics. One song almost exactly replicated the “Go, Johnny, go” chorus; the plaintiffs attached side-by-side transcriptions of the scores and argued that such overlap was only possible because Suno had trained on copyrighted work.

    The Udio lawsuit offers similar examples, noting that the labels were able to generate a dozen outputs resembling Mariah Carey’s perennial hit “All I Want for Christmas Is You.” It also offers a side-by-side comparison of music and lyrics, and notes that Mariah Carey soundalikes generated by Udio have already caught the attention of the public.

    RIAA chief legal officer Ken Doroshow says Suno and Udio are trying to conceal “the full scope of their infringement.” According to the complaint against Suno, the AI company did not deny that it used copyrighted materials in its training data when asked in prelitigation correspondence, but instead said that the training data is “confidential business information.”

    “Our technology is transformative; it is designed to generate completely new outputs, not to memorize and regurgitate pre-existing content. That is why we don’t allow user prompts that reference specific artists,” said Suno CEO Mikey Schulman in a statement. “We would have been happy to explain this to the corporate record labels that filed this lawsuit (and in fact, we tried to do so), but instead of entertaining a good faith discussion, they’ve reverted to their old lawyer-led playbook.”

    Kate Knibbs

  • OpenAI-Backed Nonprofits Have Gone Back on Their Transparency Pledges

    Neither database mandates nor generally contains up-to-date versions of the records that UBI Charitable and OpenResearch had said they provided in the past.

    The original YC Research conflict-of-interest policy that Das did share calls for company insiders to be upfront about transactions in which their impartiality could be questioned and for the board to decide how to proceed.

    Das says the policy “may have been amended since OpenResearch’s policies changed (including when the name was changed from YC Research), but the core elements remain the same.”

    No Website

    UBI Charitable launched in 2020 with $10 million donated from OpenAI, as first reported by TechCrunch last year. UBI Charitable’s aim, according to its government filings, is to put the more than $31 million it received by the end of 2022 toward initiatives that try to offset “the societal impacts” of new technologies and ensure no one is left behind. It has donated largely to CitySquare in Dallas and Heartland Alliance in Chicago, both of which work on a range of projects to fight poverty.

    UBI Charitable doesn’t appear to have a website but shares a San Francisco address with OpenResearch and OpenAI, and OpenAI staff have been listed on UBI Charitable’s government paperwork. Its three Form 990 filings since launching all state that records including governing documents, financial statements, and a conflict-of-interest policy were available upon request.

    Rick Cohen, chief operating and communications officer for National Council of Nonprofits, an advocacy group, says “available upon request” is a standard answer plugged in by accounting firms. OpenAI, OpenResearch, and UBI Charitable have always shared the same San Francisco accounting firm, Fontanello Duffield & Otake, which didn’t respond to a request for comment.

    Miscommunication or poor oversight could lead to the standard answer about access to records getting submitted, “even if the organization wasn’t intending to make them available,” Cohen says.

    The disclosure question ended up on what’s known as the Form 990 as part of an effort in 2008 to help the increasingly complex world of nonprofits showcase their adherence to governance best practices, at least as implied by the IRS, says Kevin Doyle, senior director of finance and accountability at Charity Navigator, which evaluates nonprofits to help guide donors’ giving decisions. “Having that sort of transparency story is a way to indicate to donors that their money is going to be used responsibly,” Doyle says.

    OpenResearch solicits donations on its website, and UBI Charitable stated on its most recent IRS filing that it had received over $27 million in public support. Doyle says Charity Navigator’s data show donations tend to flow to organizations it rates higher, with transparency among the measured factors.

    It’s certainly not unheard of for organizations to share a wide range of records. Charity Navigator has found that most of the roughly 900 largest US nonprofits reliant on individual donors publish financial statements on their websites. It doesn’t track disclosure of bylaws or conflict-of-interest policies.

    Charity Navigator publishes its own audited financial statements and at least eight nonstandard policies it maintains, including ones on how long it retains documents, how it treats whistleblower complaints, and which gifts staff can accept. “Donors can look into what we’re doing and make their own judgment rather than us operating as a black box, saying, ‘Please give us money, but don’t ask any questions,’” Doyle says.

    Cohen of the National Council of Nonprofits cautions that over-disclosure could create vulnerabilities. Posting a disaster-recovery plan, for example, could offer a roadmap to computer hackers. He adds that just because organizations have a policy on paper doesn’t mean they follow it. But knowing what they were supposed to do to evaluate a potential conflict of interest could still allow for more public accountability than otherwise possible, and if AI could be as consequential as Altman envisions, the scrutiny may very well be needed.

    Paresh Dave

  • The Pirate Party Survived Mutiny and Scandal. Now It’s Trying to Rewrite the Rules of the Web

    Outside the skatepark in Prague, on a scrubby patch of grass, Bartoš leans back into his deck chair as he tries to impress on me that Pirates are not your regular stiff politicians. From the campaign launch unfolding behind us, that’s pretty obvious. Yes, there are long speeches and polite rounds of applause. But there are also gangs of shirtless skateboarders, a blue-haired rapper, rainbow banners showing our solar-powered future, and references to the online forums where party members can vote on new policies or demand new leadership.

    He disagrees that the broadening of the Pirates’ focus has diluted its identity. “We cannot be a single issue party,” he insists. Instead, he compares the Pirates’ evolution to Europe’s Greens, which started as a grassroots movement built around a single issue: the environment. Now the Greens are applying their original values to everything from housing to energy, as they sit in coalition governments in Germany, Luxembourg, Ireland, and Austria. Although the Pirates “don’t preach” like the Greens, he says, “we’re doing the same journey they did a while ago.”

    The Czech branch demonstrates the Pirates’ potential—how an internet-first ideology can be woven into national politics—but it is also a microcosm of the party’s problems. Like other Pirate parties before it, the Czech branch suffers from internal bickering, factionalism, and claims of sexual harassment. Former campaign manager Šárka Václavíková has spoken publicly about her decision to leave the party and her police complaint against a fellow party member for what she describes as stalking and psychological abuse. Over Zoom from her new home in Italy, she says sexual harassment of women was systemic before she left last year—a claim the party strongly denies. “Isolated incidents can, of course, happen, just as in society or any other party. However, if we had any information about such incidents, we would take immediate action,” party spokesperson Lucie Švehlíková told WIRED.

    But Václavíková says she’s also disappointed with the direction of the party as a whole. “There are two factions in the Pirate Party,” she declares. There are the centrists, the people who want to appeal to everyone and are disowning the party’s Pirate Bay roots in the process. Václavíková says she identified with the other faction, whom she calls “the real pirates.” “For us,” she says, “the ideology of transparent policy and privacy, and also human rights, are more important than just gaining more power for our own profit.”

    So far, Bartoš has prevented these issues from tearing the party apart. Part of why he has lasted so long, surviving a series of leadership challenges (including from Gregorová), is because he can clearly describe what makes the Pirates’ outlook different. Across Europe, other Pirates are still struggling to define what a better future—with more technology, not less—would actually look like. When I sign into a Zoom call with Tommy Klein, political adviser to the Pirates in Luxembourg, he is sitting in front of a poster emblazoned with the phrase “Save Our Internet.” When I ask how exactly the internet needs saving, he replies without enthusiasm that the poster is old. “It’s from the 2018 election,” he says.

    Under Bartoš, however, the Czech Pirates have found a way to articulate a utopian vision of a technology-infused future that means more than just reducing Big Tech’s influence on the European internet. Like the Pirate Bureau 20 years ago, the Czech Pirates also have a bus—really more of a camper van—that carries illustrations of their message. There is a sun, with rays resembling internet nodes. Wind turbines and solar farms grow out of rolling pink hills. Slogans like “Girl Power” and “Tolerance” hover over people doing peace signs and smiling through heart-shaped glasses. In Bartoš, the original Pirate vision for an alternative technology-enabled future still lingers. “I believe that we can save the planet and society through technology,” he declares from his deck chair. Whether that optimism is still applicable, 20 years later, is up to the voters to decide.

    Morgan Meaker

  • Judge Hints at Plans to Rein In Google’s Illegal Play Store Monopoly

    A jury in December found that Google broke US antitrust laws through deals and billing rules that gave an unfair boost to its Google Play app store. On Thursday, a judge began laying out how Google could be forced to change its business as a penalty. The remedies under consideration could drive the most consequential shakeup ever to Google’s dominance over the Android universe.

    Fortnite video game developer Epic Games, which beat Google in the trial that saw a jury declare the Play store an illegal monopoly, is demanding that federal judge James Donato ban Google from contracts that deter competition. Epic also wants Google to be forced to help competing stores list more apps, giving them a competitive boost. The changes could enable Epic to realize its long-held plan to increase revenue by processing in-game purchases in Fortnite and other titles without using Google’s payment system, and marketing games via its own app store.

    Google contends that Epic’s demands would threaten its users’ security and impair the businesses of partners, including Android device makers and app developers. The search company is appealing the jury’s verdict, which could delay the rollout of any penalties for many years—or void them altogether. But Google over the past few years already has had to make some costly changes in Europe and Asia due to court losses and new laws affecting the Play store, and a trial with Epic is currently underway in Australia.

    “I want to be clear: Google as an illegal monopolist will have to pay some penalties,” Donato told Epic and Google at a hearing in San Francisco on Thursday. He explained that Google’s loss requires him to pry open the company’s grip on the Android ecosystem in a way that ends Google’s illegal monopoly and also removes its ill-gotten gains from years of unfair dominance.

    That would mean major changes for the industry that has built up around Google’s Android operating system—and potentially more choices for consumers. It could require Google to invest cash in new projects to make things right, Donato said.

    Donato expressed frustration with Google’s claims that any changes would be bad for consumers and other businesses. “To jump up and down and say the new way is going to be a world no one wants to live in, it’s unfounded,” he said. But he also spent hours in the hearing quizzing two economists, one appearing on behalf of each company, about how to craft penalties for Google without being unreasonable.

    Among Epic’s requests is that Google be barred from striking deals that prevent or discourage companies from working with alternatives to its app store. In the past the company has required hardware companies that want to offer Google Play on their devices to agree not to work with or promote alternative app stores. That prevented most consumers from ever seeing other app stores, since most device makers want to offer Google’s app store, because it is the largest.

    Rival app stores such as those from Amazon and Samsung also have struggled to persuade developers to list their apps outside of Google Play, because maintaining apps in multiple stores takes extra work. To even the playing field, Epic proposes that Google be required for six years to provide rival stores a way to list apps that are hosted on Google Play. That would allow people to browse alternative stores without feeling they are missing out on popular apps, giving those stores a better chance of success in the long term.

    Paresh Dave

  • Ethereum’s Cofounder Says SEC Is ‘Gaslighting’ Everyone About Crypto

    Joe Lubin is in a fight with the Securities and Exchange Commission. Not only is the financial regulator waging war against Ethereum, he claims, but it is also making a grab for jurisdiction over the future of the internet. So Lubin has decided to punch back.

    In 2015, Lubin was part of the team that created Ethereum, the computer network home to the world’s second largest cryptocurrency, known as ETH. Later that year, Lubin founded Consensys, with the loose ambition to support the development and adoption of Ethereum and build software products on top of the network. In April, Consensys received an unwelcome missive—known as a Wells Notice—from the SEC, informing the company that it was about to be sued. The regulator’s grievance, Consensys was told, had to do with one of the software products in its stable: MetaMask, a crypto wallet that lets users store crypto coins and interact with Ethereum-based apps.

    Consensys claims that the SEC notice, which has not been made public, alleges that through MetaMask the company has been operating as an unregistered securities broker. Specifically, the SEC takes issue with two MetaMask features: one that allows users to trade between different tokens and another that lets them lock up their tokens in exchange for a regular reward, in a process called staking.

    On April 25, Consensys filed a lawsuit of its own against the SEC. The complaint accuses the regulator of an “unlawful seizure of authority over ETH,” which “bears none of the attributes of a security”—the specific type of financial instrument over which the SEC has dominion. The SEC having its way “would spell disaster for the Ethereum network,” the complaint alleges.

    In its Wells Notice, the SEC stopped short of calling ETH itself a security, says Consensys, focusing instead on the MetaMask features. But according to Consensys, the agency has long been quietly conducting an investigation into Ethereum, in the view that ETH should be reclassified as such.

    That’s not fair, claims Consensys, because an SEC director has previously described ETH as a commodity, not a security, and the Commodity Futures Trading Commission, a separate US financial regulator, has made the same contention. “Consensys built its business against the backdrop of this regulatory consensus,” the lawsuit says.

    In bringing the lawsuit, Consensys hopes to drag itself and Ethereum out from underneath the SEC, by clarifying the limits of its jurisdiction, and embolden the rest of the crypto industry to retaliate against what it describes as “aggressive and unlawful SEC overreach.” An SEC spokesperson declined to comment on the specific allegations made by Consensys, saying only that “noncompliance with the securities laws deprives investors of critical protections, including rulebooks that prevent fraud and manipulation, proper disclosures, segregation of customer assets, safeguards against conflicts of interest, oversight by a self-regulatory organization, and routine inspection by the SEC. It’s investors who get hurt and the American financial markets that may suffer.”

    The following Q&A has been edited for brevity and clarity.


    Joel Khalili


  • A Lawsuit Argues Meta Is Required by Law to Let You Control Your Own Feed


    A lawsuit filed Wednesday against Meta argues that US law requires the company to let people use unofficial add-ons to gain more control over their social feeds.

    It’s the latest in a series of disputes in which the company has tussled with researchers and developers over tools that give users extra privacy options or that collect research data. It could clear the way for researchers to release add-ons that aid research into how the algorithms on social platforms affect their users, and it could give people more control over the algorithms that shape their lives.

The suit was filed by the Knight First Amendment Institute at Columbia University on behalf of researcher Ethan Zuckerman, an associate professor at the University of Massachusetts Amherst. It attempts to take a federal law that has generally shielded social networks and turn it into a tool for forcing transparency.

Section 230 of the Communications Decency Act is best known for allowing social media companies to evade legal liability for content on their platforms. Zuckerman’s suit argues that one of its subsections gives users the right to control how they access the internet and the tools they use to do so.

“Section 230(c)(2)(B) is quite explicit about libraries, parents, and others having the ability to control obscene or other unwanted content on the internet,” says Zuckerman. “I actually think that anticipates having control over a social network like Facebook, having this ability to sort of say, ‘We want to be able to opt out of the algorithm.’”

    Zuckerman’s suit is aimed at preventing Facebook from blocking a new browser extension for Facebook that he is working on called Unfollow Everything 2.0. It would allow users to easily “unfollow” friends, groups, and pages on the service, meaning that updates from them no longer appear in the user’s newsfeed.

    Zuckerman says that this would provide users the power to tune or effectively disable Facebook’s engagement-driven feed. Users can technically do this without the tool, but only by unfollowing each friend, group, and page individually.

There’s good reason to think Meta might change Facebook to block Zuckerman’s tool once it is released; he says he won’t launch it without a ruling on his suit. In 2020, the company argued that the browser Friendly, which had let users search and reorder their Facebook news feeds as well as block ads and trackers, violated its terms of service and the Computer Fraud and Abuse Act. In 2021, Meta permanently banned Louis Barclay, a British developer who had created a tool called Unfollow Everything, after which Zuckerman’s add-on is named.

    “I still remember the feeling of unfollowing everything for the first time. It was near-miraculous. I had lost nothing, since I could still see my favorite friends and groups by going to them directly,” Barclay wrote for Slate at the time. “But I had gained a staggering amount of control. I was no longer tempted to scroll down an infinite feed of content. The time I spent on Facebook decreased dramatically.”


    Vittoria Elliott


  • Can the First Amendment Save TikTok?


    On Wednesday, President Joe Biden signed a law that could effectively ban TikTok if the company does not divest from ByteDance, its Chinese owner, in the next 12 months. But the law, which sped through the House and Senate, could face a significant uphill battle in US courts for potentially violating the First Amendment rights of both the company and its users.

    In a statement, a TikTok spokesperson said “this unconstitutional law is a TikTok ban, and we will challenge it in court. We believe the facts and the law are clearly on our side, and we will ultimately prevail.”

    TikTok has argued that prior attempts to ban the app ran afoul of the First Amendment. Last year, the state of Montana passed a TikTok ban that was blocked by a federal judge before it could go into effect. US District Judge Donald Molloy wrote that TikTok “had established a likelihood of irreparable harm” if the ban was enacted, both to the First Amendment rights of its users and to the ability of creators to make money.

    Some experts say that the federal government could run into some of these same traps.

    “Assuming the combination that the divestiture does not go through and the app is actually banned, that means that Americans who wish to access it cannot do so,” Nadine Farid Johnson, policy director at the Knight Institute, tells WIRED. Banning the app outright would go too far, Johnson says, and “wouldn’t be a tailored response that addresses the government’s stated concerns.”

    “In all cases, I think that where this legislation is going to fail is that it’s burdening so much more speech than is necessary,” says Jenna Leventoff, senior policy counsel at the ACLU.

    If TikTok or its creators were to sue the government for violating the First Amendment, experts believe they could make a solid argument. John Morris, a principal at the Internet Society, says that the case in Montana and a 2020 case brought by users of WeChat following a Trump administration executive order to ban the Chinese chat app provide a blueprint for how the courts may view TikTok’s legal challenge.

    “In that case, what appeared to be very relevant to the court was the fact that the WeChat platform was a critical platform for communications of the users of WeChat, and they really didn’t have a good alternative,” Morris says. “If you’re looking at TikTok, many of the users of TikTok also predominantly use that platform to interact with other people.”

In both the WeChat case and the Montana case, the companies and their users were parties, meaning that both “speakers” and “listeners” claimed their speech rights had been violated.

TikTok has found itself in the crosshairs of US regulations for several years due to concerns about surveillance by the Chinese government. In 2020, former president Donald Trump issued an executive order to ban the app, calling it a threat to “the national security, foreign policy, and economy of the United States.” In 2023, Democratic senator Mark Warner introduced the Restrict Act, which would allow the office of the commerce secretary to review and ban certain apps. Lawmakers have expressed concern that TikTok could be spying on its US users on behalf of the Chinese government due to a law that allows the Chinese government to compel companies, organizations, and individuals to work with the state on matters of national intelligence.


    Vittoria Elliott, Makena Kelly


  • A Breakthrough Online Privacy Proposal Hits Congress


    Congress may be closer than ever to passing a comprehensive data privacy framework after key House and Senate committee leaders released a new proposal on Sunday.

The bipartisan proposal, titled the American Privacy Rights Act, or APRA, would limit the types of consumer data companies can collect, retain, and use to what they need to operate their services. Users would also be allowed to opt out of targeted advertising and have the ability to view, correct, delete, and download their data from online services. The proposal would also create a national registry of data brokers, and force those companies to allow users to opt out of having their data sold.

    “This landmark legislation gives Americans the right to control where their information goes and who can sell it,” Cathy McMorris Rodgers, House Energy and Commerce Committee chair, said in a statement on Sunday. “It reins in Big Tech by prohibiting them from tracking, predicting, and manipulating people’s behaviors for profit without their knowledge and consent. Americans overwhelmingly want these rights, and they are looking to us, their elected representatives, to act.”

    Congress has tried to put together a comprehensive federal law protecting user data for decades. Lawmakers have remained divided, though, on whether that legislation should prevent states from issuing tougher rules, and whether to allow a “private right of action” that would enable people to sue companies in response to privacy violations.

In an interview with The Spokesman-Review on Sunday, McMorris Rodgers claimed that the draft’s language is stronger than any active laws, seemingly in an attempt to assuage the concerns of Democrats who have long fought attempts to preempt preexisting state-level protections. APRA does allow states to pass their own privacy laws related to civil rights and consumer protections, among other exceptions.

In the previous session of Congress, the leaders of the House Energy and Commerce Committee brokered a deal with Roger Wicker, the top Republican on the Senate Commerce Committee, on a bill that would preempt state laws with the exception of the California Consumer Privacy Act and Illinois’ Biometric Information Privacy Act. That measure, titled the American Data Privacy and Protection Act, also created a weaker private right of action than most Democrats were willing to support. Maria Cantwell, the Senate Commerce Committee chair, refused to support the measure, instead circulating her own draft legislation. The ADPPA hasn’t been reintroduced, but APRA was designed as a compromise.

“I think we have threaded a very important needle here,” Cantwell told The Spokesman-Review. “We are preserving those standards that California and Illinois and Washington have.”

    APRA includes language from California’s landmark privacy law allowing people to sue companies when they are harmed by a data breach. It also provides the Federal Trade Commission, state attorneys general, and private citizens the authority to sue companies when they violate the law.

Under APRA, covered data would include “information that identifies or is linked or reasonably linkable to an individual or device,” according to a Senate Commerce Committee summary of the legislation. Small businesses—those with $40 million or less in annual revenue and limited data collection—would be exempt, with enforcement focused on businesses with $250 million or more in yearly revenue. Governments and “entities working on behalf of governments” are excluded under the bill, as are the National Center for Missing and Exploited Children and, apart from certain cybersecurity provisions, “fraud-fighting” nonprofits.

    US representative Frank Pallone, the top Democrat on the House Energy and Commerce Committee, called the draft “very strong” in a Sunday statement, but said he wanted to “strengthen” it with tighter child safety provisions.

Still, it remains unclear whether APRA will receive the necessary support for approval. On Sunday, committee aides said that conversations about other lawmakers signing onto the legislation are ongoing. The current proposal is a “discussion draft”; while there’s no official date for introducing a bill, Cantwell and McMorris Rodgers will likely shop the text around to colleagues for feedback over the coming weeks, and plan to send it to committees this month.


    Makena Kelly
