ReportWire

Tag: tech policy

  • New York State Just Put Itself on a Legal Collision Course with Trump’s AI Policy

    On Friday, New York Governor Kathy Hochul signed something called the Responsible AI Safety and Education (Raise) Act, meant, on the one hand, to establish an AI safety regime and, on the other, to troll Silicon Valley Republicans like Marc Andreessen who have been trying to dictate tech policy during the second Trump administration.

    This comes just days after President Trump issued an executive order that ostensibly blocks states from regulating AI.

    Under the new state law, AI companies with more than $500 million in annual revenue must draft, publish, and follow formalized safety procedures aimed at preventing “critical harm,” and must report safety issues within 72 hours or be hit with fines. That makes the law stricter than California’s SB 53, which gives companies 15 days to report safety issues.

    Trump’s executive order, issued about a week earlier on December 11 and titled “Ensuring a National Policy Framework for Artificial Intelligence,” framed AI as a federal priority and outlined something called an “AI Litigation Task Force” at the Department of Justice. That task force will ostensibly have the job of challenging state AI laws that the attorney general determines to be in violation of the federal program on AI (a program that, so far, amounts to basically nothing).

    Even if the executive order turns out to lack a strong legal foundation, tying state laws up in litigation is still a dreary prospect, and New York State has rushed headlong into that eventuality with this law.

    In an explainer for Axios published Friday, legal experts speaking to Maria Curi and Ashley Gold said that Trump’s executive order relies on a strange reading of parts of the Constitution, such as the Dormant Commerce Clause, which is usually interpreted as preventing states from writing self-dealing laws that are unfair to other states, not laws that simply fill a legal vacuum left by the federal government.

    Mike Pearl

  • The Controversial Kids Online Safety Act Faces an Uncertain Future

    After passing the Senate nearly unanimously last week, the future of the Kids Online Safety Act (KOSA) appears uncertain. Congress is now on a six-week recess, and reporting from Punchbowl News indicates that the House Republican leadership may not prioritize bringing the bill to the floor for a vote when legislators return.

    In response to Punchbowl’s reporting, Senate Majority Leader Chuck Schumer released a statement saying, “Just one week ago, Speaker Johnson said that he’d like to get KOSA done. I hope that hasn’t changed. Letting KOSA and [the Children and Teens’ Online Protection Act] collect dust in the House would be an awful mistake and a gut punch—a gut punch to these brave, wonderful parents who have worked so hard to reach this point.” The bill has also received support from Vice President and Democratic presidential candidate Kamala Harris.

    But the bill created a massive divide within the digital rights and tech accountability community. If passed, the legislation would require online platforms to block users under 18 from seeing certain types of content that the government considers harmful.

    Proponents of the measure, including the Tech Oversight Project, a nonprofit focused on tech accountability through antitrust legislation, saw the bill as a meaningful step toward holding tech companies accountable for the way their products impact children.

    “Too many young people, parents, and families have experienced the dire consequences that result from social media companies’ greed,” said Sacha Haworth, executive director of the Tech Oversight Project, in a statement in June. “The accountability KOSA would provide for these families is long overdue.”

    Others, like the nonprofit digital rights organization the Center for Democracy & Technology, said that, if enacted, the law could be used to prevent young users from accessing critical information about topics like sexual health and LGBTQ+ issues. As a result, some organizations that regularly lobby to hold Silicon Valley accountable found themselves siding with tech companies and their lobbyists in trying to kill the bill.

    “KOSA is not ready for a floor vote,” said Aliya Bhatia, a policy analyst with the Center for Democracy & Technology’s Free Expression Project, in a statement in July. “In its current form, KOSA can still be misused to target marginalized communities and politically sensitive information.”

    Evan Greer, director of the nonprofit advocacy group Fight for the Future, which opposed the bill, tells WIRED that KOSA and legislation like it “divides our coalition” while allowing tech companies to “keep getting away with murder and avoiding regulation.”

    “This was never really about protecting kids,” Greer says. “It was sort of about lawmakers wanting to say that they’re protecting kids, and that doesn’t actually help kids.” Instead of legislators focusing on the “flawed” legislation, Greer says that Congress could have spent that same time and energy on antitrust-focused legislation like the American Innovation and Choice Online Act and the Open App Markets Act, or on the American Privacy Rights Act.

    “When our coalition is divided in fighting each other, we’re going to get rolled every time by Big Tech,” she says.

    Meanwhile, Linda Yaccarino, CEO of X, has said that she supports KOSA, as has the Center for Countering Digital Hate, a tech accountability nonprofit that was sued by X last year for exposing hate speech on its platform.

    Although the House Republican leadership’s decision may signal the beginning of the end of KOSA itself, Gautam Hans, an associate law professor at Cornell University, says that “given the bipartisan interest in enacting this law, I suspect other proposals will follow—with hopefully more extensive safeguards against potential censorship by the state.”

    Vittoria Elliott

  • Amazon Is Investigating Perplexity Over Claims of Scraping Abuse

    Amazon’s cloud division has launched an investigation into Perplexity AI. At issue is whether the AI search startup is violating Amazon Web Services rules by scraping websites that attempted to prevent it from doing so, WIRED has learned.

    An AWS spokesperson, who talked to WIRED on the condition that they not be named, confirmed the company’s investigation of Perplexity. WIRED had previously found that the startup—which has backing from the Jeff Bezos family fund and Nvidia, and was recently valued at $3 billion—appears to rely on content from scraped websites that had forbidden access through the Robots Exclusion Protocol, a common web standard. While the Robots Exclusion Protocol is not legally binding, terms of service generally are.

    The Robots Exclusion Protocol is a decades-old web standard that involves placing a plaintext file (like wired.com/robots.txt) on a domain to indicate which pages should not be accessed by automated bots and crawlers. While companies that use scrapers can choose to ignore this protocol, most have traditionally respected it. The Amazon spokesperson told WIRED that AWS customers must adhere to the robots.txt standard while crawling websites.
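    For readers unfamiliar with the mechanics, a robots.txt file is just a set of per-crawler allow/deny rules, and a well-behaved bot checks it before fetching a page. The sketch below uses Python’s standard urllib.robotparser module to show how such a check works; the crawler names and rules here are hypothetical, not those of Perplexity or any real site.

```python
from urllib.robotparser import RobotFileParser

# A sample robots.txt of the kind a publisher might serve.
# These rules are illustrative only: they ban "ExampleBot" outright
# and keep all other crawlers out of /private/.
ROBOTS_TXT = """\
User-agent: ExampleBot
Disallow: /

User-agent: *
Disallow: /private/
"""

robots = RobotFileParser()
# In practice a crawler would call set_url(".../robots.txt") and read();
# here we parse the rules directly from the string above.
robots.parse(ROBOTS_TXT.splitlines())

# ExampleBot is banned entirely; other crawlers may fetch public pages.
print(robots.can_fetch("ExampleBot", "https://example.com/story"))    # False
print(robots.can_fetch("OtherBot", "https://example.com/story"))      # True
print(robots.can_fetch("OtherBot", "https://example.com/private/x"))  # False
```

    The key point in the dispute is that nothing technically enforces this check: a scraper that simply never consults robots.txt sees no errors, which is why the protocol depends on voluntary compliance and, in AWS’s case, on terms of service.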

    “AWS’s terms of service prohibit customers from using our services for any illegal activity, and our customers are responsible for complying with our terms and all applicable laws,” the spokesperson said in a statement.

    Scrutiny of Perplexity’s practices follows a June 11 report from Forbes that accused the startup of stealing at least one of its articles. WIRED investigations confirmed the practice and found further evidence of scraping abuse and plagiarism by systems linked to Perplexity’s AI-powered search chatbot. Engineers for Condé Nast, WIRED’s parent company, block Perplexity’s crawler across all of its websites using a robots.txt file. But WIRED found that the company had access to a server using an unpublished IP address, 44.221.181.252, which visited Condé Nast properties hundreds of times in the past three months, apparently to scrape them.

    The machine associated with Perplexity appears to be engaged in widespread crawling of news websites that forbid bots from accessing their content. Spokespeople for The Guardian, Forbes, and The New York Times also say they detected the IP address on their servers multiple times.

    WIRED traced the IP address to a virtual machine known as an Elastic Compute Cloud (EC2) instance hosted on AWS, which launched its investigation after we asked whether using AWS infrastructure to scrape websites that forbade it violated the company’s terms of service.

    Last week, Perplexity CEO Aravind Srinivas responded to WIRED’s investigation first by saying the questions we posed to the company “reflect a deep and fundamental misunderstanding of how Perplexity and the Internet work.” Srinivas then told Fast Company that the secret IP address WIRED observed scraping Condé Nast websites and a test site we created was operated by a third-party company that performs web crawling and indexing services. He refused to name the company, citing a nondisclosure agreement. When asked if he would tell the third party to stop crawling WIRED, Srinivas replied, “It’s complicated.”

    Dhruv Mehrotra, Andrew Couts

  • Microsoft Faces EU Charges Over ‘Abusive’ Bundling

    Brussels has accused Microsoft of illegally abusing its dominance in the business-software market at the expense of smaller rivals, following a complaint at the height of the pandemic by US competitor Slack.

    The European Commission said on Tuesday it found that Microsoft had been restricting competition since at least 2019 by selling its video-conferencing software Teams in bundles with the company’s other popular office tools, such as Office 365 and Microsoft 365.

    “We are concerned that Microsoft may be giving its own communication product Teams an undue advantage over competitors, by tying it to its popular productivity suites for businesses,” the EU’s competition chief Margrethe Vestager said in a statement. “If confirmed, Microsoft’s conduct would be illegal under our competition rules.” The charges announced on Tuesday are only a “preliminary view,” meaning the commission has sent a “statement of objections” to Microsoft and the company has 10 weeks once it receives all the details to respond.

    The Microsoft charges arrive in the same week that the European Commission also charged Apple with breaking the European Union’s new Digital Markets Act for failing to let app developers communicate freely with their users. Over the past decade, the EU has become the de facto Big Tech regulator, forcing US giants to alter the way they operate and issuing fines worth billions of dollars.

    In an attempt to placate Brussels, Microsoft began excluding Teams from some Office bundles in July of last year. However, the commission said today that those changes were insufficient, and it raised concerns about how difficult it remains to use rival conferencing software in tandem with Microsoft’s other tools, a capability known as interoperability.

    “Having unbundled Teams and taken initial interoperability steps, we appreciate the additional clarity provided today,” said Brad Smith, vice chair and president of Microsoft, in a statement shared with WIRED. The company plans to work to find solutions to address the commission’s remaining concerns, he added.

    If Microsoft and the EU cannot reach an agreement, the commission has the power to levy fines of up to 10 percent of the company’s annual worldwide turnover and can impose remedies on the company.

    The commission opened its investigation into Microsoft Teams following a complaint by Slack in July 2020, when there was fierce competition for the remote workers who relied on office software during pandemic lockdowns. “This is much bigger than Slack versus Microsoft,” Jonathan Prince, then vice president of communications and policy at Slack, said at the time. “This is a proxy for two very different philosophies for the future of digital ecosystems, gateways versus gatekeepers.”

    On Tuesday, Sabastian Niles, president and chief legal officer of Slack’s parent company Salesforce, described the European Commission’s position as “a win for customer choice and an affirmation that Microsoft’s practices with Teams have harmed competition.”

    German video conferencing company Alfaview, which filed a complaint to the commission following Slack, also welcomed the decision. The measures Microsoft has taken so far to unbundle Teams have been ineffective, Niko Fostiropoulos, CEO and founder of Alfaview, said in a statement. “Microsoft offers existing enterprise customers who opt out of Teams in the overall package only a minimal discount of €2 ($2.10),” he said. “This does not provide sufficient incentives to switch to another video conferencing service.”

    Morgan Meaker

  • Tech Leaders Once Cried for AI Regulation. Now the Message Is ‘Slow Down’

    The other night I attended a press dinner hosted by an enterprise company called Box. Other guests included the leaders of two data-oriented companies, Datadog and MongoDB. Usually the executives at these soirees are on their best behavior, especially when the discussion is on the record, like this one. So I was startled by an exchange with Box CEO Aaron Levie, who told us he had a hard stop at dessert because he was flying that night to Washington, DC. He was headed to a special-interest-thon called TechNet Day, where Silicon Valley gets to speed-date with dozens of Congress critters to shape what the (uninvited) public will have to live with. And what did he want from that legislation? “As little as possible,” Levie replied. “I will be single-handedly responsible for stopping the government.”

    He was joking about that. Sort of. He went on to say that while regulating clear abuses of AI like deepfakes makes sense, it’s way too early to consider restraints like forcing companies to submit large language models to government-approved AI cops, or scanning chatbots for things like bias or the ability to hack real-life infrastructure. He pointed to Europe, which has already adopted restraints on AI, as an example of what not to do. “What Europe is doing is quite risky,” he said. “There’s this view in the EU that if you regulate first, you kind of create an atmosphere of innovation. That empirically has been proven wrong.”

    Levie’s remarks fly in the face of what has become a standard position among Silicon Valley’s AI elites like Sam Altman. “Yes, regulate us!” they say. But Levie notes that when it comes to exactly what the laws should say, the consensus falls apart. “We as a tech industry do not know what we’re actually asking for,” Levie said. “I have not been to a dinner with more than five AI people where there’s a single agreement on how you would regulate AI.” Not that it matters: Levie thinks that dreams of a sweeping AI bill are doomed. “The good news is there’s no way the US would ever be coordinated in this kind of way. There simply will not be an AI Act in the US.”

    Levie is known for his irreverent loquaciousness. But in this case he’s simply more candid than many of his colleagues, whose regulate-us-please position is a form of sophisticated rope-a-dope. The single public event of TechNet Day, at least as far as I could discern, was a livestreamed panel discussion about AI innovation that included Google’s president of global affairs Kent Walker and Michael Kratsios, the most recent US Chief Technology Officer and now an executive at Scale AI. The feeling among those panelists was that the government should focus on protecting US leadership in the field. While conceding that the technology has its risks, they argued that existing laws pretty much cover the potential nastiness.

    Google’s Walker seemed particularly alarmed that some states were developing AI legislation on their own. “In California alone, there are 53 different AI bills pending in the legislature today,” he said, and he wasn’t boasting. Walker of course knows that this Congress can hardly keep the government itself afloat, and the prospect of both houses successfully juggling this hot potato in an election year is as remote as Google rehiring the eight authors of the transformer paper.

    The US Congress does have legislation pending, and the bills keep coming, some perhaps less meaningful than others. This week, Representative Adam Schiff, a California Democrat, introduced a bill called the Generative AI Copyright Disclosure Act of 2024. It mandates that large language models must present to the Copyright Office “a sufficiently detailed summary of any copyrighted works used … in the training data set.” It’s not clear what “sufficiently detailed” means. Would it be OK to say, “We simply scraped the open web”? Schiff’s staff explained to me that they were adapting a measure from the EU’s AI bill.

    Steven Levy
