ReportWire

Tag: SB 53

  • Silicon Valley spooks the AI safety advocates | TechCrunch

    Silicon Valley leaders including White House AI & Crypto Czar David Sacks and OpenAI Chief Strategy Officer Jason Kwon caused a stir online this week for their comments about groups promoting AI safety. In separate instances, they alleged that certain advocates of AI safety are not as virtuous as they appear, and are acting either in their own interest or on behalf of billionaire puppet masters behind the scenes.

    AI safety groups that spoke with TechCrunch say the allegations from Sacks and OpenAI are Silicon Valley’s latest attempt to intimidate its critics, but certainly not the first. In 2024, some venture capital firms spread rumors that a California AI safety bill, SB 1047, would send startup founders to jail. The Brookings Institution labeled the rumor as one of many “misrepresentations” about the bill, but Governor Gavin Newsom ultimately vetoed it anyway.

    Whether or not Sacks and OpenAI intended to intimidate critics, their actions have clearly rattled several AI safety advocates. Many nonprofit leaders that TechCrunch reached out to in the last week asked to speak on the condition of anonymity to spare their groups from retaliation.

    The controversy underscores Silicon Valley’s growing tension between building AI responsibly and building it to be a massive consumer product — a theme my colleagues Kirsten Korosec, Anthony Ha, and I unpack on this week’s Equity podcast. We also dive into a new AI safety law passed in California to regulate chatbots, and OpenAI’s approach to erotica in ChatGPT.

    On Tuesday, Sacks wrote a post on X alleging that Anthropic — which has raised concerns over AI’s ability to contribute to unemployment, cyberattacks, and catastrophic harms to society — is simply fearmongering to get laws passed that will benefit itself and drown smaller startups in paperwork. Anthropic was the only major AI lab to endorse California’s Senate Bill 53 (SB 53), a bill that sets safety reporting requirements for large AI companies, which was signed into law last month.

    Sacks was responding to a viral essay from Anthropic co-founder Jack Clark about his fears regarding AI. Clark delivered the essay as a speech at the Curve AI safety conference in Berkeley weeks earlier. Sitting in the audience, I took it as a genuine account of a technologist’s reservations about his products, but Sacks didn’t see it that way.

    Sacks said Anthropic is running a “sophisticated regulatory capture strategy,” though it’s worth noting that a truly sophisticated strategy probably wouldn’t involve making an enemy out of the federal government. In a follow-up post on X, Sacks noted that Anthropic has positioned “itself consistently as a foe of the Trump administration.”

    Also this week, OpenAI’s chief strategy officer, Jason Kwon, wrote a post on X explaining why the company was sending subpoenas to AI safety nonprofits, such as Encode, a nonprofit that advocates for responsible AI policy. (A subpoena is a legal order demanding documents or testimony.) Kwon said that after Elon Musk sued OpenAI — over concerns that the ChatGPT-maker has veered away from its nonprofit mission — OpenAI found it suspicious how several organizations also raised opposition to its restructuring. Encode filed an amicus brief in support of Musk’s lawsuit, and other nonprofits spoke out publicly against OpenAI’s restructuring.

    “This raised transparency questions about who was funding them and whether there was any coordination,” said Kwon.

    NBC News reported this week that OpenAI sent broad subpoenas to Encode and six other nonprofits that criticized the company, asking for their communications related to two of OpenAI’s biggest opponents, Musk and Meta CEO Mark Zuckerberg. OpenAI also asked Encode for communications related to its support of SB 53.

    One prominent AI safety leader told TechCrunch that there’s a growing split between OpenAI’s government affairs team and its research organization. While OpenAI’s safety researchers frequently publish reports disclosing the risks of AI systems, OpenAI’s policy unit lobbied against SB 53, saying it would rather have uniform rules at the federal level.

    OpenAI’s head of mission alignment, Joshua Achiam, spoke out about his company sending subpoenas to nonprofits in a post on X this week.

    “At what is possibly a risk to my whole career I will say: this doesn’t seem great,” said Achiam.

    Brendan Steinhauser, CEO of the AI safety nonprofit Alliance for Secure AI (which has not been subpoenaed by OpenAI), told TechCrunch that OpenAI seems convinced its critics are part of a Musk-led conspiracy. However, he argues this is not the case, and that much of the AI safety community is quite critical of xAI’s safety practices, or lack thereof.

    “On OpenAI’s part, this is meant to silence critics, to intimidate them, and to dissuade other nonprofits from doing the same,” said Steinhauser. “For Sacks, I think he’s concerned that [the AI safety] movement is growing and people want to hold these companies accountable.”

    Sriram Krishnan, the White House’s senior policy advisor for AI and a former a16z general partner, chimed in on the conversation this week with a social media post of his own, calling AI safety advocates out of touch. He urged AI safety organizations to talk to “people in the real world using, selling, adopting AI in their homes and organizations.”

    A recent Pew study found that roughly half of Americans are more concerned than excited about AI, but it’s unclear what worries them exactly. Another recent study went into more detail and found that American voters care more about job losses and deepfakes than catastrophic risks caused by AI, which the AI safety movement is largely focused on.

    Addressing these safety concerns could come at the expense of the AI industry’s rapid growth — a trade-off that worries many in Silicon Valley. With AI investment propping up much of America’s economy, the fear of over-regulation is understandable.

    But after years of unregulated AI progress, the AI safety movement appears to be gaining real momentum heading into 2026. Silicon Valley’s attempts to fight back against safety-focused groups may be a sign that they’re working.

    Maxwell Zeff

  • California’s new AI safety law shows regulation and innovation don’t have to clash | TechCrunch

    SB 53, the AI safety and transparency bill that California Gov. Gavin Newsom signed into law this week, is proof that state regulation doesn’t have to hinder AI progress.  

    So says Adam Billen, vice president of public policy at youth-led advocacy group Encode AI, on today’s episode of Equity. 

    “The reality is that policy makers themselves know that we have to do something, and they know from working on a million other issues that there is a way to pass legislation that genuinely does protect innovation — which I do care about — while making sure that these products are safe,” Billen told TechCrunch. 

    At its core, SB 53 is a first-in-the-nation bill that requires large AI labs to be transparent about their safety and security protocols — specifically around how they prevent their models from contributing to catastrophic risks, such as being used to commit cyberattacks on critical infrastructure or to build bioweapons. The law also mandates that companies stick to those protocols, with enforcement by the Office of Emergency Services.

    “Companies are already doing the stuff that we ask them to do in this bill,” Billen told TechCrunch. “They do safety testing on their models. They release model cards. Are they starting to skimp in some areas at some companies? Yes. And that’s why bills like this are important.” 

    Billen also noted that some AI firms have policies that allow them to relax safety standards under competitive pressure. OpenAI, for example, has publicly stated that it may “adjust” its safety requirements if a rival AI lab releases a high-risk system without similar safeguards. Billen argues that policy can enforce companies’ existing safety promises, preventing them from cutting corners under competitive or financial pressure.

    While public opposition to SB 53 was muted in comparison to its predecessor SB 1047, which Newsom vetoed last year, the rhetoric in Silicon Valley and among most AI labs has been that almost any AI regulation is anathema to progress and will ultimately hinder the U.S. in its race to beat China.  

    It’s why companies like Meta, VCs like Andreessen Horowitz, and powerful individuals like OpenAI president Greg Brockman are collectively pumping hundreds of millions into super PACs to back pro-AI politicians in state elections. And it’s why those same forces earlier this year pushed for an AI moratorium that would have banned states from regulating AI for 10 years.  

    Encode AI organized a coalition of more than 200 organizations to fight the proposal, but Billen says the fight isn’t over. Senator Ted Cruz, who championed the moratorium, is attempting a new strategy to achieve the same goal of federal preemption of state laws. In September, Cruz introduced the SANDBOX Act, which would allow AI companies to apply for waivers to temporarily bypass certain federal regulations for up to 10 years. Billen also anticipates a forthcoming bill establishing a federal AI standard that would be pitched as a middle-ground solution but would in reality override state laws.

    He warned that narrowly scoped federal AI legislation could “delete federalism for the most important technology of our time.” 

    “If you told me SB 53 was the bill that would replace all the state bills on everything related to AI and all of the potential risks, I would tell you that’s probably not a very good idea and that this bill is designed for a particular subset of things,” Billen said.  

    Adam Billen, vice president of public policy, Encode AI. Image Credits: Encode AI

    While he agrees that the AI race with China matters, and that policymakers need to enact regulation that will support American progress, he says killing state bills — which mainly focus on deepfakes, transparency, algorithmic discrimination, children’s safety, and governmental use of AI — isn’t the way to go about doing that. 

    “Are bills like SB 53 the thing that will stop us from beating China? No,” he said. “I think it is just genuinely intellectually dishonest to say that that is the thing that will stop us in the race.” 

    He added: “If the thing you care about is beating China in the race on AI — and I do care about that — then the things you would push for are stuff like export controls in Congress. You would make sure that American companies have the chips. But that’s not what the industry is pushing for.”

    Legislative proposals like the Chip Security Act aim to prevent the diversion of advanced AI chips to China through export controls and tracking devices, and the existing CHIPS and Science Act seeks to boost domestic chip production. However, some major tech companies, including OpenAI and Nvidia, have expressed reluctance or opposition to certain aspects of these efforts, citing concerns about effectiveness, competitiveness, and security vulnerabilities.  

    Nvidia has its reasons — it has a strong financial incentive to continue selling chips to China, which has historically represented a significant portion of its global revenue. Billen speculated that OpenAI could hold back on chip export advocacy to stay in the good graces of crucial suppliers like Nvidia. 

    There’s also been inconsistent messaging from the Trump administration. Three months after expanding an export ban on advanced AI chips to China in April 2025, the administration reversed course, allowing Nvidia and AMD to sell some chips to China in exchange for 15% of the revenue.

    “You see people on the Hill moving towards bills like the Chip Security Act that would put export controls on China,” Billen said. “In the meantime, there’s going to continue to be this propping up of the narrative to kill state bills that are actually quite light touch.”

    Billen added that SB 53 is an example of democracy in action — of industry and policymakers working together to get to a version of a bill that everyone can agree on. It’s “very ugly and messy,” but “that process of democracy and federalism is the entire foundation of our country and our economic system, and I hope that we will keep doing that successfully.” 

    “I think SB 53 is one of the best proof points that that can still work,” he said.

    This article was first published on October 1.

    Rebecca Bellan

  • California Governor Newsom signs landmark AI safety bill SB 53 | TechCrunch

    California Gov. Gavin Newsom has signed SB 53, a first-in-the-nation bill that sets new transparency requirements on large AI companies.

    SB 53, which passed the state legislature two weeks ago, requires large AI labs — including OpenAI, Anthropic, Meta, and Google DeepMind — to be transparent about safety protocols. It also ensures whistleblower protections for employees at those companies.  

    In addition, SB 53 creates a mechanism for AI companies and the public to report potential critical safety incidents to California’s Office of Emergency Services. Companies also have to report incidents involving crimes committed by a model without human oversight, such as cyberattacks, as well as deceptive model behavior, neither of which is required under the EU AI Act.

    The bill has received mixed reactions from the AI industry. Tech firms have broadly argued that state-level AI policy risks creating a “patchwork of regulation” that would hinder innovation, although Anthropic endorsed the bill. Meta and OpenAI lobbied against it; OpenAI even published an open letter urging Gov. Newsom not to sign SB 53.

    The new bill comes as some of Silicon Valley’s tech elite have poured hundreds of millions into super PACs to back candidates that support a light-touch approach to AI regulation. Leaders at OpenAI and Meta have in recent weeks launched pro-AI super PACs that aim to back candidates and bills that are friendly to AI. 

    Still, other states might look to California for inspiration as they attempt to curb the potential harms caused by the unmitigated advancement of such a powerful emerging technology. In New York, a similar bill was passed by state lawmakers and is awaiting Gov. Kathy Hochul’s signature or veto.  

    “California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive,” Newsom said in a statement. “This legislation strikes that balance. AI is the new frontier in innovation, and California is not only here for it — but stands strong as a national leader by enacting the first-in-the-nation frontier AI safety legislation that builds public trust as this emerging technology rapidly evolves.” 

    The governor is also weighing another bill — SB 243 — that passed both the State Assembly and Senate with bipartisan support this month. The bill would regulate AI companion chatbots, requiring operators to implement safety protocols, and hold them legally accountable if their bots fail to meet those standards.  

    SB 53 is Senator Scott Wiener’s second attempt at an AI safety bill after Newsom vetoed his more sweeping SB 1047 last year amid major pushback from AI companies. This time, Wiener reached out to major AI companies early on to walk them through the changes he had made.

    Rebecca Bellan

  • California lawmakers pass AI safety bill SB 53 — but Newsom could still veto | TechCrunch

    California’s state senate gave final approval early on Saturday morning to a major AI safety bill setting new transparency requirements on large AI companies.

    As described by its author, state senator Scott Wiener, SB 53 “requires large AI labs to be transparent about their safety protocols, creates whistleblower protections for [employees] at AI labs & creates a public cloud to expand compute access (CalCompute).”

    The bill now goes to California Governor Gavin Newsom to sign or veto. He has not commented publicly on SB 53, but last year, he vetoed a more expansive safety bill also authored by Wiener, while signing narrower legislation targeting issues like deepfakes.

    At the time, Newsom acknowledged the importance of “protecting the public from real threats posed by this technology,” but criticized Wiener’s previous bill for applying “stringent standards” to large models regardless of whether they were “deployed in high-risk environments, [involved] critical decision-making or the use of sensitive data.”

    Wiener said the new bill was influenced by recommendations from a policy panel of AI experts that Newsom convened after his veto.

    Politico also reports that SB 53 was recently amended so that companies developing “frontier” AI models while bringing in less than $500 million in annual revenue will only need to disclose high-level safety details, while companies making more than that will need to provide more detailed reports.

    The bill has been opposed by a number of Silicon Valley companies, VC firms, and lobbying groups. In a recent letter to Newsom, OpenAI did not mention SB 53 specifically but argued that to avoid “duplication and inconsistencies,” companies should be considered compliant with statewide safety rules as long as they meet federal or European standards.

    And Andreessen Horowitz’s head of AI policy and chief legal officer recently claimed that “many of today’s state AI bills — like proposals in California and New York — risk” crossing a line by violating constitutional limits on how states can regulate interstate commerce.

    a16z’s co-founders had previously pointed to tech regulation as one of the factors leading them to back Donald Trump’s bid for a second term. The Trump administration and its allies subsequently called for a 10-year ban on state AI regulation.

    Anthropic, meanwhile, has come out in favor of SB 53.

    “We have long said we would prefer a federal standard,” said Anthropic co-founder Jack Clark in a post. “But in the absence of that this creates a solid blueprint for AI governance that cannot be ignored.”

    Anthony Ha

  • A California bill that would regulate AI companion chatbots is close to becoming law | TechCrunch

    California has taken a big step toward regulating AI. SB 243 — a bill that would regulate AI companion chatbots in order to protect minors and vulnerable users — passed both the State Assembly and Senate with bipartisan support and now heads to Governor Gavin Newsom’s desk.

    Newsom has until October 12 to either veto the bill or sign it into law. If he signs, it would take effect January 1, 2026, making California the first state to require AI chatbot operators to implement safety protocols for AI companions and hold companies legally accountable if their chatbots fail to meet those standards.

    The bill specifically aims to prevent companion chatbots, which the legislation defines as AI systems that provide adaptive, human-like responses and are capable of meeting a user’s social needs, from engaging in conversations around suicidal ideation, self-harm, or sexually explicit content. The bill would require platforms to provide recurring alerts to users, every three hours for minors, reminding them that they are speaking to an AI chatbot, not a real person, and that they should take a break. It also establishes annual reporting and transparency requirements for AI companies that offer companion chatbots, including major players OpenAI, Character.AI, and Replika; those requirements would go into effect July 1, 2027.

    The California bill would also allow individuals who believe they have been injured by violations to file lawsuits against AI companies seeking injunctive relief, damages (up to $1,000 per violation), and attorney’s fees. 

    The bill gained momentum in the California legislature following the death of teenager Adam Raine, who committed suicide after prolonged chats with OpenAI’s ChatGPT that involved discussing and planning his death and self-harm. The legislation also responds to leaked internal documents that reportedly showed Meta’s chatbots were allowed to engage in “romantic” and “sensual” chats with children. 

    In recent weeks, U.S. lawmakers and regulators have responded with intensified scrutiny of AI platforms’ safeguards to protect minors. The Federal Trade Commission is preparing to investigate how AI chatbots impact children’s mental health. Texas Attorney General Ken Paxton has launched investigations into Meta and Character.AI, accusing them of misleading children with mental health claims. Meanwhile, both Sen. Josh Hawley (R-MO) and Sen. Ed Markey (D-MA) have launched separate probes into Meta. 

    “I think the harm is potentially great, which means we have to move quickly,” state senator Steve Padilla, who introduced the bill alongside state senator Josh Becker, told TechCrunch. “We can put reasonable safeguards in place to make sure that particularly minors know they’re not talking to a real human being, that these platforms link people to the proper resources when people say things like they’re thinking about hurting themselves or they’re in distress, [and] to make sure there’s not inappropriate exposure to inappropriate material.”

    Padilla also stressed the importance of AI companies sharing data about the number of times they refer users to crisis services each year, “so we have a better understanding of the frequency of this problem, rather than only becoming aware of it when someone’s harmed or worse.”

    SB 243 previously had stronger requirements, but many were whittled down through amendments. For example, the bill originally would have required operators to prevent AI chatbots from using “variable reward” tactics or other features that encourage excessive engagement. These tactics, used by AI companion companies like Replika and Character.AI, offer users special messages, memories, storylines, or the ability to unlock rare responses or new personalities, creating what critics call a potentially addictive reward loop.

    The current bill also removes provisions that would have required operators to track and report how often chatbots initiated discussions of suicidal ideation or actions with users. 

    “I think it strikes the right balance of getting to the harms without enforcing something that’s either impossible for companies to comply with, either because it’s technically not feasible or just a lot of paperwork for nothing,” Becker told TechCrunch. 

    SB 243 is moving toward becoming law at a time when Silicon Valley companies are pouring millions of dollars into pro-AI political action committees (PACs) to back candidates in the upcoming mid-term elections who favor a light-touch approach to AI regulation. 

    The bill also comes as California weighs another AI safety bill, SB 53, which would mandate comprehensive transparency reporting requirements. OpenAI has written an open letter to Governor Newsom, asking him to abandon that bill in favor of less stringent federal and international frameworks. Major tech companies like Meta, Google, and Amazon have also opposed SB 53. In contrast, only Anthropic has said it supports SB 53.

    “I reject the premise that this is a zero sum situation, that innovation and regulation are mutually exclusive,” Padilla said. “Don’t tell me that we can’t walk and chew gum. We can support innovation and development that we think is healthy and has benefits – and there are benefits to this technology, clearly – and at the same time, we can provide reasonable safeguards for the most vulnerable people.”

    “We are closely monitoring the legislative and regulatory landscape, and we welcome working with regulators and lawmakers as they begin to consider legislation for this emerging space,” a Character.AI spokesperson told TechCrunch, noting that the startup already includes prominent disclaimers throughout the user chat experience explaining that it should be treated as fiction.

    A spokesperson for Meta declined to comment.

    TechCrunch has reached out to OpenAI, Anthropic, and Replika for comment.

    Rebecca Bellan

  • A California bill that would regulate AI companion chatbots is close to becoming law | TechCrunch

    The California State Assembly took a big step toward regulating AI on Wednesday night, passing SB 243 — a bill that would regulate AI companion chatbots in order to protect minors and vulnerable users. The legislation passed with bipartisan support and now heads to the state Senate for a final vote Friday.

    If Governor Gavin Newsom signs the bill into law, it would take effect January 1, 2026, making California the first state to require AI chatbot operators to implement safety protocols for AI companions and hold companies legally accountable if their chatbots fail to meet those standards.

    The bill specifically aims to prevent companion chatbots, which the legislation defines as AI systems that provide adaptive, human-like responses and are capable of meeting a user’s social needs, from engaging in conversations around suicidal ideation, self-harm, or sexually explicit content. The bill would require platforms to provide recurring alerts to users, every three hours for minors, reminding them that they are speaking to an AI chatbot, not a real person, and that they should take a break. It also establishes annual reporting and transparency requirements for AI companies that offer companion chatbots, including major players OpenAI, Character.AI, and Replika.

    The California bill would also allow individuals who believe they have been injured by violations to file lawsuits against AI companies seeking injunctive relief, damages (up to $1,000 per violation), and attorney’s fees. 

    SB 243 was introduced in January by state senators Steve Padilla and Josh Becker. If the Senate approves it, the bill will go to Governor Gavin Newsom to be signed into law, with the new rules taking effect January 1, 2026, and reporting requirements beginning July 1, 2027.

    The bill gained momentum in the California legislature following the death of teenager Adam Raine, who committed suicide after prolonged chats with OpenAI’s ChatGPT that involved discussing and planning his death and self-harm. The legislation also responds to leaked internal documents that reportedly showed Meta’s chatbots were allowed to engage in “romantic” and “sensual” chats with children. 

    In recent weeks, U.S. lawmakers and regulators have responded with intensified scrutiny of AI platforms’ safeguards to protect minors. The Federal Trade Commission is preparing to investigate how AI chatbots impact children’s mental health. Texas Attorney General Ken Paxton has launched investigations into Meta and Character.AI, accusing them of misleading children with mental health claims. Meanwhile, both Sen. Josh Hawley (R-MO) and Sen. Ed Markey (D-MA) have launched separate probes into Meta. 

    “I think the harm is potentially great, which means we have to move quickly,” Padilla told TechCrunch. “We can put reasonable safeguards in place to make sure that particularly minors know they’re not talking to a real human being, that these platforms link people to the proper resources when people say things like they’re thinking about hurting themselves or they’re in distress, [and] to make sure there’s not inappropriate exposure to inappropriate material.”

    Padilla also stressed the importance of AI companies sharing data about the number of times they refer users to crisis services each year, “so we have a better understanding of the frequency of this problem, rather than only becoming aware of it when someone’s harmed or worse.”

    SB 243 previously had stronger requirements, but many were whittled down through amendments. For example, the bill originally would have required operators to prevent AI chatbots from using “variable reward” tactics or other features that encourage excessive engagement. These tactics, used by AI companion companies like Replika and Character.AI, offer users special messages, memories, storylines, or the ability to unlock rare responses or new personalities, creating what critics call a potentially addictive reward loop.

    The current bill also removes provisions that would have required operators to track and report how often chatbots initiated discussions of suicidal ideation or actions with users. 

    “I think it strikes the right balance of getting to the harms without enforcing something that’s either impossible for companies to comply with, either because it’s technically not feasible or just a lot of paperwork for nothing,” Becker told TechCrunch. 

    SB 243 is moving toward becoming law at a time when Silicon Valley companies are pouring millions of dollars into pro-AI political action committees (PACs) to back candidates in the upcoming mid-term elections who favor a light-touch approach to AI regulation. 

    The bill also comes as California weighs another AI safety bill, SB 53, which would mandate comprehensive transparency reporting requirements. OpenAI has written an open letter to Governor Newsom, asking him to abandon that bill in favor of less stringent federal and international frameworks. Major tech companies like Meta, Google, and Amazon have also opposed SB 53. In contrast, only Anthropic has said it supports SB 53.

    “I reject the premise that this is a zero sum situation, that innovation and regulation are mutually exclusive,” Padilla said. “Don’t tell me that we can’t walk and chew gum. We can support innovation and development that we think is healthy and has benefits – and there are benefits to this technology, clearly – and at the same time, we can provide reasonable safeguards for the most vulnerable people.”

    TechCrunch has reached out to OpenAI, Anthropic, Meta, Character AI, and Replika for comment.

    Rebecca Bellan
