ReportWire

Tag: ai policy

  • California’s new AI safety law shows regulation and innovation don’t have to clash  | TechCrunch

    SB 53, the AI safety and transparency bill that California Gov. Gavin Newsom signed into law this week, is proof that state regulation doesn’t have to hinder AI progress.  

    So says Adam Billen, vice president of public policy at youth-led advocacy group Encode AI, on today’s episode of Equity. 

    “The reality is that policy makers themselves know that we have to do something, and they know from working on a million other issues that there is a way to pass legislation that genuinely does protect innovation — which I do care about — while making sure that these products are safe,” Billen told TechCrunch. 

At its core, SB 53 is a first-in-the-nation bill that requires large AI labs to be transparent about their safety and security protocols, specifically how they prevent their models from being used to cause catastrophic harm, such as cyberattacks on critical infrastructure or the creation of bioweapons. The law also mandates that companies stick to those protocols, a requirement that will be enforced by the Office of Emergency Services.

    “Companies are already doing the stuff that we ask them to do in this bill,” Billen told TechCrunch. “They do safety testing on their models. They release model cards. Are they starting to skimp in some areas at some companies? Yes. And that’s why bills like this are important.” 

Billen also noted that some AI firms maintain policies that allow them to relax safety standards under competitive pressure. OpenAI, for example, has publicly stated that it may “adjust” its safety requirements if a rival AI lab releases a high-risk system without similar safeguards. Billen argues that policy can hold companies to their existing safety promises, preventing them from cutting corners under competitive or financial pressure.

    While public opposition to SB 53 was muted in comparison to its predecessor SB 1047, which Newsom vetoed last year, the rhetoric in Silicon Valley and among most AI labs has been that almost any AI regulation is anathema to progress and will ultimately hinder the U.S. in its race to beat China.  

    It’s why companies like Meta, VCs like Andreessen Horowitz, and powerful individuals like OpenAI president Greg Brockman are collectively pumping hundreds of millions into super PACs to back pro-AI politicians in state elections. And it’s why those same forces earlier this year pushed for an AI moratorium that would have banned states from regulating AI for 10 years.  

Encode AI led a coalition of more than 200 organizations that worked to strike down the proposal, but Billen says the fight isn’t over. Senator Ted Cruz, who championed the moratorium, is attempting a new strategy to achieve the same goal of federal preemption of state laws. In September, Cruz introduced the SANDBOX Act, which would allow AI companies to apply for waivers to temporarily bypass certain federal regulations for up to 10 years. Billen also anticipates a forthcoming bill establishing a federal AI standard that would be pitched as a middle-ground solution but would in reality override state laws.

    He warned that narrowly scoped federal AI legislation could “delete federalism for the most important technology of our time.” 

    “If you told me SB 53 was the bill that would replace all the state bills on everything related to AI and all of the potential risks, I would tell you that’s probably not a very good idea and that this bill is designed for a particular subset of things,” Billen said.  

Adam Billen, vice president of public policy, Encode AI. Image Credits: Encode AI

    While he agrees that the AI race with China matters, and that policymakers need to enact regulation that will support American progress, he says killing state bills — which mainly focus on deepfakes, transparency, algorithmic discrimination, children’s safety, and governmental use of AI — isn’t the way to go about doing that. 

    “Are bills like SB 53 the thing that will stop us from beating China? No,” he said. “I think it is just genuinely intellectually dishonest to say that that is the thing that will stop us in the race.” 

He added: “If the thing you care about is beating China in the race on AI — and I do care about that — then the things you would push for are stuff like export controls in Congress. You would make sure that American companies have the chips. But that’s not what the industry is pushing for.”

    Legislative proposals like the Chip Security Act aim to prevent the diversion of advanced AI chips to China through export controls and tracking devices, and the existing CHIPS and Science Act seeks to boost domestic chip production. However, some major tech companies, including OpenAI and Nvidia, have expressed reluctance or opposition to certain aspects of these efforts, citing concerns about effectiveness, competitiveness, and security vulnerabilities.  

    Nvidia has its reasons — it has a strong financial incentive to continue selling chips to China, which has historically represented a significant portion of its global revenue. Billen speculated that OpenAI could hold back on chip export advocacy to stay in the good graces of crucial suppliers like Nvidia. 

There’s also been inconsistent messaging from the Trump administration. Three months after expanding an export ban on advanced AI chips to China in April 2025, the administration reversed course, allowing Nvidia and AMD to sell some chips to China in exchange for 15% of the revenue.

“You see people on the Hill moving towards bills like the Chip Security Act that would put export controls on China,” Billen said. “In the meantime, there’s going to continue to be this propping up of the narrative to kill state bills that are actually quite light touch.”

    Billen added that SB 53 is an example of democracy in action — of industry and policymakers working together to get to a version of a bill that everyone can agree on. It’s “very ugly and messy,” but “that process of democracy and federalism is the entire foundation of our country and our economic system, and I hope that we will keep doing that successfully.” 

    “I think SB 53 is one of the best proof points that that can still work,” he said.

    This article was first published on October 1.

    Rebecca Bellan

    Source link

  • Silicon Valley is knowingly violating A.I. ethical principles. Society can’t respond if we let disagreements poison the debate

    With criticism of ChatGPT much in the news, we are also increasingly hearing about disagreements among thinkers who are critical of A.I. While debating about such an important issue is natural and expected, we can’t allow differences to paralyze our very ability to make progress on A.I. ethics at this pivotal time. Today, I fear that those who should be natural allies across the tech/business, policy, and academic communities are instead increasingly at each other’s throats. When the field of A.I. ethics appears divided, it becomes easier for vested interests to brush aside ethical considerations altogether.

    Such disagreements need to be understood in the context of how we reached the current moment of excitement around the rapid advances in large language models and other forms of generative A.I.

OpenAI, the company behind ChatGPT, was initially set up as a nonprofit amid much fanfare about a mission to solve the A.I. safety problem. However, as it became clear that OpenAI’s work on large language models was lucrative, OpenAI pivoted to a for-profit structure. It deployed ChatGPT and partnered with Microsoft, which has consistently sought to depict itself as the tech corporation most concerned about ethics.

Both companies knew that ChatGPT violates, for example, the globally endorsed UNESCO AI ethical principles. OpenAI even refused to publicly release a previous version of GPT, citing concerns about much the same kinds of misuse we are now witnessing. But for OpenAI and Microsoft, the temptation to win the corporate race trumped ethical considerations. This has nurtured a degree of cynicism about relying on corporate self-governance, or even governments, to put in place necessary safeguards.

We should not be too cynical about the leadership of these two companies, which are trapped between their fiduciary responsibility to shareholders and a genuine desire to do the right thing. They remain people of good intent, as are all of those raising concerns about the trajectory of A.I.

    This tension is perhaps best exemplified in a recent tweet by U.S. Senator Chris Murphy (D-CT) and the response by the A.I. community. In discussing ChatGPT, Murphy tweeted: “Something is coming. We aren’t ready.” And that’s when the A.I. researchers and ethicists piled on. They proceeded to criticize the Senator for not understanding the technology, indulging in futuristic hype, and focusing attention on the wrong issues. Murphy hit back at one critic: “I think the effect of her comments is very clear, to try to stop people like me from engaging in conversation, because she’s smarter and people like her are smarter than the rest of us.”

    I am saddened by disputes such as these. The concerns that Murphy raised are valid, and we need political leaders who are engaged in developing legal safeguards. His critic, however, is not wrong in questioning whether we are focusing attention on the right issues.

    To help us understand the different priorities of the various critics and, hopefully, move beyond these potentially damaging divisions, I want to propose a taxonomy for the plethora of ethical concerns raised about the development of A.I. I see three main baskets: 

    The first basket has to do with social justice, fairness, and human rights. For example, it is now well understood that algorithms can exacerbate racial, gender, and other forms of bias when they are trained on data that embodies those biases.

    The second basket is existential: Some in the A.I. development community are concerned that they are creating a technology that might threaten human existence. A 2022 poll of A.I. experts found that half expect A.I. to grow exponentially smarter than humans by 2059, and recent advances have prompted some to bring their estimates forward.

    The third basket relates to concerns about placing A.I. models in decision-making roles. Two technologies have provided focal points for this discussion: self-driving vehicles and lethal autonomous weapons systems. However, similar concerns arise as A.I. software modules become increasingly embedded in control systems in every facet of human life.

    Cutting across all these baskets is the potential misuse of A.I., such as spreading disinformation for political and economic gain, and the two-century-old concern about technological unemployment. While the history of economic progress has primarily involved machines replacing physical labor, A.I. applications can replace intellectual labor.

    I am sympathetic to all these concerns, though I have tended to be a friendly skeptic towards the more futuristic worries in the second basket. As with the above example of Senator Murphy’s tweet, disagreements among A.I. critics are often rooted in the fear that existential arguments will distract from addressing pressing issues about social justice and control.

    Moving forward, individuals will need to judge for themselves who they believe to be genuinely invested in addressing the ethical concerns of A.I. However, we cannot allow healthy skepticism and debate to devolve into a witch hunt among would-be allies and partners.

    Those within the A.I. community need to remember that what brings us together is more important than differences in emphasis that set us apart.

    This moment is far too important.

Wendell Wallach is a Carnegie-Uehiro Fellow at Carnegie Council for Ethics in International Affairs, where he co-directs the Artificial Intelligence & Equality Initiative (AIEI). He is Emeritus Chair of the Technology and Ethics study group at the Yale University Interdisciplinary Center for Bioethics.

    The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

    Wendell Wallach

    Source link