ReportWire

Tag: ai chatbots

  • How to talk to your kids about AI chatbots and their safety


    Editor’s Note: This story contains discussion of suicide. If you or someone you know is struggling with suicidal thoughts, call the National Suicide Prevention Lifeline at 988 (or 800-273-8255) to connect with a trained counselor.

    Artificial intelligence loomed large in 2025. As AI chatbots grew in popularity, news reports documented some parents’ worst nightmares: children dead by suicide following secret conversations with AI chatbots.

    It’s hard for parents to track rapidly evolving technology.

    Last school year, 86% of students reported using artificial intelligence for school or personal use, according to a Center for Democracy & Technology report. A 2025 survey found that 52% of teens said they used AI companions — AI chatbots designed to act as digital friends or characters —  a few times a month or more. 

    How can parents navigate the ever-changing AI chatbot landscape? Research on its effects on kids is in early stages. 

    PolitiFact consulted six experts on adolescent psychiatry and psychology for parental advice. Here are their tips.  

    Want to know if and how your kids use AI chatbots? Ask.

    Parents should think of AI tools in the same vein as smartphones, tablets and the internet. Some use is okay, but users need boundaries, said Şerife Tekin, a philosophy and bioethics professor at SUNY Upstate Medical University.

    The best way to know if your child is using AI chatbots “is simply to ask, directly and without judgment,” said Akanksha Dadlani, a Stanford University child and adolescent psychiatry fellow.

    Parents should be clear about their safety concerns. If they expect to periodically monitor their children’s activities as a condition of access to the technology, they should be up-front about that.

    Perhaps the most important tool is open conversation. When families talk regularly and parents ask kids about their AI use, it’s “easier to catch problems early and keep AI use contained,” said Grace Berman, a New York City psychotherapist.

    Make curiosity, not judgment, the focal point of the conversation.

    Being inquisitive rather than confrontational can help children feel safer sharing their experiences.  

    “Ask how they are using it, what they like about it, what it helps with, and what feels uncomfortable or confusing,” Dadlani said. “Keep the tone non-judgmental and grounded in safety.” 

    Listen with genuine interest in what they have to say. 

    Ask your child what they believe their preferred AI chatbot knows about them. Ask if a chatbot has ever told them something false or made them feel uncomfortable.  

    English teacher Casey Cuny, center, helps a student input a prompt into ChatGPT on a Chromebook during class at Valencia High School in Santa Clarita, Calif., Aug. 27, 2025. (AP)

    Parents can also ask their children to help them understand the technology, letting them guide the conversation, psychologist Don Grant told the Monitor on Psychology, the American Psychological Association’s official magazine.

    “One key message to convey: Feeling understood by a system doesn’t mean it understands you,” Tekin said. “Children are capable of grasping this distinction when it’s explained respectfully.”

    Parents might bring up concerns about AI chatbots’ privacy and confidentiality or the fact that an AI chatbot’s main goal is to affirm them and keep them using the bot. Emphasize that AI is a tool, not a relationship.

    “Explain that chatbots are prediction machines, not real friends or therapists, and they sometimes get things dangerously wrong,” Berman said. “Frame this as a team effort, something you want your child to be able to make healthy and informed decisions about.” 

    Use the technology’s safety settings, but remember they’re imperfect. 

    Parents can restrict children to using technology in their home’s common areas. Apps and parental controls are also available to help parents limit and monitor their children’s AI chatbot use. 

    Berman encourages parents to use apps and parental controls such as Apple Screen Time or Google Family Link to monitor technology use, app downloads and search terms. 

    Parents should use screen and app-specific time limits, automatic lock times, content filters and, when available, teen accounts, Dadlani said. 

    “Monitoring tools can also be appropriate,” Dadlani said.

    With Bark Phones or the Bark or Aura apps, parents can set restrictions for certain apps or websites and monitor and limit online activities. 

    Parents can adjust AI chatbot settings or instruct children to avoid certain bots altogether.

    In some of the AI chatbot cases that resulted in lawsuits, the users were interacting with chatbot versions that had the ability to remember past conversations. Tekin said parents should disable that “memory,” personalization or long-term conversation storage.

    “Avoid platforms that explicitly market themselves as companions or therapists,” she said.

    Bruce Perry, 17, shows his ChatGPT history at a coffee shop in Russellville, Ark., July 15, 2025. (AP)

    Some chatbot makers have added or are building parental controls, but that approach is also imperfect.

    “Even the ones that do will only provide parental controls if the parent is logged in, the child is logged in, and the accounts have been connected,” said Mitch Prinstein, the American Psychological Association’s chief of psychology. 

    These measures don’t guarantee that kids will use chatbots safely, Berman said. 

    “There is much we don’t yet know about how interacting with chatbots impacts the developing brain — say, on the development of social and romantic relationships — so there is no recommended safe amount of use for children,” Berman said.

    Does that mean it’s best to impose an outright ban? Probably not. 

    Parents can try, but it’s unlikely that parents will succeed in entirely preventing kids — especially older children and teens — from using AI chatbots. And trying might backfire.

    “AI is increasingly embedded in schoolwork, search engines, and everyday tools,” Dadlani said. “Rather than attempting total prevention, parents should focus on supervision, transparency and boundaries.”

    Students gather in a common area as they head to classes in Oregon, May 4, 2017. (AP)

    Model the behavior you want kids to emulate.

    Restrictions aren’t the only way to influence your kids’ interactions with AI chatbots. 

    “Model healthy AI use yourself,” Dadlani said. “Children notice how adults use technology, not just the rules they set.”

    Prinstein said parents should also model their attitudes toward AI by openly discussing AI with kids in critical and thoughtful ways. 

    “Engage in harm reduction conversations,” Berman said. That might look like asking your child questions such as, “How could you tell if you were using AI too much? How can we work together as a team to help you use this responsibly?”

    From there, you can collaboratively set expectations for AI use with your kids. 

    “Work together to co-create a plan on when and how the family will use AI companions and when to turn to real people for help and guidance,” Aguiar said. “Put that plan in writing and do weekly check-ins.”

    If you have concerns specific to your child’s use, don’t be afraid to ask your child to tell you what the chatbot is saying or ask to see the messages. 

    Parents should emphasize they won’t be upset or angry about what they find, Prinstein said. It might be useful to remind your child that you’re coming from a place of concern by saying something like, chatbots are “known to make things up or to misunderstand things, and I just want to help you to get the right information,” he said. 

    Replacing in-person relationships with AI interactions is cause for concern.

    Parents should look for signs that an AI chatbot is affecting a child’s mood or behavior.

    Some red flags that a child is engaged in unhealthy or excessive AI chatbot use: 

    • Withdrawal from social relationships and increased social isolation. 

    • Increased secrecy or time alone with devices.

    • Emotional distress when access to AI is limited.

    • Disinterest in activities your child used to enjoy.

    • Sudden changes in grades.

    • Increased irritability or aggression.

    • Changes in eating or sleeping habits.

    • Treating a chatbot like a therapist or best friend. 

    Parents shouldn’t necessarily assume all irritability or privacy-seeking behavior is a sign of AI chatbot overuse. Sometimes, that’s part of being a teenager. 

    But parents should be on the lookout for patterns that seem in sync with kids’ chatbot engagement, Prinstein said.

    “The concern is not curiosity or experimentation,” Dadlani said. “The concern is the replacement of human connection and skill-building.” 

    Take note if the child is routinely relying on chatbots — particularly choosing bots’ advice over human feedback — while withdrawing from peers, family and outside activities. 

    “That is when I would consider tightening technical limits and, importantly, involving a mental health professional,” Berman said. 

    Parents are used to worrying about who their kids spend time with and whether their friends might encourage them to make bad decisions, Prinstein said. Parents need to remember that many kids are hanging out with a new, powerful “friend” these days. 

    “It’s a friend that they can talk to 24/7 and that seems to be omniscient,” he said. “That friend is the chatbot.” 

    PolitiFact Researcher Caryn Baird and Staff Writer Loreben Tuquero contributed to this report.

    RELATED: Adam Raine called ChatGPT his ‘only friend.’ Now his family blames the technology for his death


  • AI companions. Social media ‘rabbit holes.’ NC kids’ advocates see rising danger


    DrAfter123/Getty Images

    Social media’s effects on children and teens were already a focus for the state’s Child Fatality Task Force.

    Now, the rapid emergence of artificial intelligence is raising alarms for the legislative study commission that looks into the causes of child death and makes recommendations.

    This comes as President Donald Trump has announced he plans to sign an executive order aiming to limit state regulation of artificial intelligence.

    On Tuesday, the task force voted to endorse legislation that addresses addictive algorithms in social media by restricting a company’s use of a minor’s data, thereby making social media less targeted — a measure intended to make it less addictive and less likely to show minors harmful content. The group had endorsed this recommendation in prior years.

    It also voted to continue studying the effects of artificial intelligence chatbots and companions on youth, including their design features.

    The American Psychological Association and the U.S. Surgeon General have issued advisories on social media and youth mental health. And a 2025 study by Columbia and Cornell University researchers, published in the Journal of the American Medical Association, found that addictive patterns of social media and mobile phone use among kids were associated with suicidal behaviors, suicidal ideation and worse mental health.

    The chair of the task force’s Intentional Death Prevention Committee, Whitney Belich, said kids’ addiction to social media, and the growing time they spend on it, hurts mental health, “so much so that it is leading to more death.”

    AI chatbots, Belich said, “seem like someone who is talking to them as a companion, a listener of what may be going on with them or what they may be struggling with.

    “These chatbots and AI companions are completely unregulated, and so what they may be recommending or saying back in response to these chatters — to real people — could be very detrimental.”

    “To reduce harm, we need to improve local resources, and we need to regulate chat designs,” she said.

    She said that this could look like expanding access to support for young people “who are feeling lonely.” Studies, she said, show that when teens are given a choice between interacting with an AI system or a real person, they prefer the real person — but one is not always available.

    She presented data from a University of Chicago survey of more than 1,000 teenagers aged 13-17 showing that 41% of teens use chatbots for “homework help and emotional support,” 29% for “homework help only,” 29% say they “don’t really use” chatbots, and 1% use them for emotional support only.

    “So if we’re going to have them, and we don’t always have access to a real-life person for youth, then we want to look at how we regulate the design of those chatbots to make it less likely that they would be detrimental to the people that are reaching out,” she said.

    Young People’s Alliance work

    Ava Smithing of the Young People’s Alliance spoke about both AI and social-media issues. The Young People’s Alliance was founded by high school students from North Carolina, but the youth-led group now works across multiple states and on Capitol Hill.

    The organization worked with a bipartisan group of North Carolina lawmakers in 2023 on the “Social Media Algorithmic Control in IT Act,” a bill that did not advance in the state House.

    This year it also worked with legislators to attempt to pass Senate Bill 514, the Social Media Control in IT Act, which would limit how much data companies can collect from users and aims to prevent systems from using that data to make recommendations. That bill died in the Senate.

    Smithing shared that she had struggled with being pulled into addictive social-media algorithms in high school, which contributed to an eating disorder. She said that when she began using social media at about 13, she spent time looking at bikini advertisements. That led the algorithm to show her more of those ads and other content it perceived as related to keep her engaged. That’s because social media companies earn more revenue the longer users spend scrolling.

    “So this pushed me down a rabbit hole from bikini advertisements to diet content, and eventually I got to the bottom of this rabbit hole, which was filled with incredibly nasty eating-disorder content that taught me how to have an eating disorder,” she said.

    “This does not only happen with eating disorder content. It can happen with any kind of content, whether it be politically extreme content or suicide content or self harm content,” she said. “Whatever your personal negativity bias is as a human, whatever you’re going to get frozen on, that’s the piece of content that they’re going to base it off of.”

    She said platforms also keep teens hooked by showing unpredictable positive posts, triggering small dopamine releases and driving compulsive scrolling.

    Now the Young People’s Alliance is also focusing on generative artificial intelligence.

    “We are transitioning from these algorithms, which was the first iteration of AI harms, to the second iteration of AI harms, which is these human-like chatbots,” she said.

    Before, companies had to collect data on social media platforms to inform decisions to keep people engaged, but “now they don’t even have to play the guessing game.”

    “We’re incredibly nervous about the risks associated and the large scale of manipulation that can happen to our children if they’re using these chatbots,” she said.

    She spoke about the case of Adam Raine, a 16-year-old who died by suicide and who reportedly had extended conversations with ChatGPT, an AI chatbot.

    ChatGPT is said to have discouraged Raine from seeking help and offered to help him write a suicide note — even advising him on his noose setup, according to news outlets. His family has sued OpenAI, the creator of ChatGPT. In a recent court filing, OpenAI said it was not liable for his death, arguing the boy misused the chatbot, NBC News reported.

    The Young People’s Alliance is working across the country on AI legislation, including in North Carolina. It partnered with state Sen. Jim Burgin, an Angier Republican, on Senate Bill 624, which would have regulated chatbots. The bill did not move in the Senate.


    Luciana Perez Uribe Guinassi

    The News & Observer

    Luciana Perez Uribe Guinassi is a politics reporter for the News & Observer. She reports on health care, including mental health and Medicaid expansion, hurricane recovery efforts and lobbying. Luciana previously worked as a Roy W. Howard Fellow at Searchlight New Mexico, an investigative news organization.


  • Should AI do everything? OpenAI thinks so | TechCrunch


    Silicon Valley’s rule? It’s not cool to be cautious. As OpenAI removes guardrails and VCs criticize companies like Anthropic for supporting AI safety regulations, it’s becoming clearer who the industry thinks should shape AI development. 

    On this episode of Equity, Kirsten Korosec, Anthony Ha, and Max Zeff discuss how the line between innovation and responsibility is getting blurrier, plus what happens when pranks go from digital to physical. 

    Watch the full episode for more about:

    • Why advocating for AI safety has become “uncool” in Silicon Valley, from the backlash against Anthropic to California’s SB 243 regulation of AI companion chatbots and the success of companies like Character.AI
    • Which startups are using an SEC workaround to file for IPOs during the shutdown 

    Equity is TechCrunch’s flagship podcast, produced by Theresa Loconsolo, and posts every Wednesday and Friday.  

    Subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts. You also can follow Equity on X and Threads, at @EquityPod. 

    Theresa Loconsolo

  • Meta is adding AI chatbot-focused parental controls to Instagram


    Meta is working on new supervision controls that will allow parents to cut off their teens’ access to AI chatbots on its platforms completely. While the tools can remove teens’ ability to engage with AI characters in one-on-one chats, teens will still be able to access the general Meta AI chatbot. If parents don’t want to block their teens from accessing AI bots altogether, they can also block specific AI characters. In addition, parents will be able to get insights into the topics their children are discussing with Meta’s AI bots. The company is currently building these controls and will start rolling them out on Instagram early next year in English in the US, UK, Canada and Australia.

    The company has been under fire since an internal Meta document was leaked a few months ago, showing that it allowed its chatbots to have “sensual” conversations with children. In one example, a Meta chatbot told a shirtless eight-year-old that “every inch of you is a masterpiece — a treasure I cherish deeply.” The US Attorneys General of 44 jurisdictions urged companies to protect children “from exploitation by predatory artificial intelligence products” after that information came out. The Senate Judiciary Committee’s Subcommittee on Crime and Counterterrorism, chaired by Senator Josh Hawley (R-MO), will investigate the company as well.

    Shortly after the internal documents leaked, Meta started retraining its AI and added new protections to prevent younger users from accessing user-made AI characters that might engage in inappropriate conversations. It also introduced age-appropriate protections so that its AIs will give teens responses guided by PG-13 movie ratings. Plus, it now only allows teens to interact with a limited group of AI characters, focused on age-appropriate topics.

    Mariella Moon

  • California Gov. Newsom signs law to protect kids from the risks of AI chatbots


    California Gov. Gavin Newsom on Monday signed legislation to regulate artificial intelligence chatbots and protect children and teens from the potential dangers of the technology.

    The law requires platforms to remind users they are interacting with a chatbot and not a human. The notification would pop up every three hours for users who are minors. Companies will also have to maintain a protocol to prevent self-harm content and to refer users to crisis service providers if they express suicidal ideation.

    Newsom, who has four children under 18, said California has a responsibility to protect kids and teens who are increasingly turning to AI chatbots for everything from homework help to emotional support and personal advice.

    “Emerging technology like chatbots and social media can inspire, educate, and connect – but without real guardrails, technology can also exploit, mislead, and endanger our kids,” the Democrat said. “We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability.”

    California is among several states that tried this year to address concerns surrounding chatbots used by kids for companionship. Safety concerns around the technology exploded following reports and lawsuits saying chatbots made by Meta, OpenAI and others engaged with young users in highly sexualized conversations and, in some cases, coached them to take their own lives.

    The legislation was among a slew of AI bills introduced by California lawmakers this year to rein in the homegrown industry that is rapidly evolving with little oversight. Tech companies and their coalitions, in response, spent at least $2.5 million in the first six months of the session lobbying against the measures, according to advocacy group Tech Oversight California. Tech companies and leaders in recent months also announced they are launching pro-AI super PACs to fight state and federal oversight.

    California Attorney General Rob Bonta in September told OpenAI he has “serious concerns” with its flagship chatbot, ChatGPT, for children and teens. The Federal Trade Commission also launched an inquiry last month into several AI companies about the potential risks for children when they use chatbots as companions.

    Research by a watchdog group says chatbots have been shown to give kids dangerous advice about topics such as drugs, alcohol and eating disorders. The mother of a teenage boy in Florida who died by suicide after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful-death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.

    OpenAI and Meta last month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. OpenAI said it is rolling out new controls enabling parents to link their accounts to their teen’s account.

    Meta said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts.

    EDITOR’S NOTE: This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.




  • A California bill that would regulate AI companion chatbots is close to becoming law | TechCrunch


    California has taken a big step toward regulating AI. SB 243 — a bill that would regulate AI companion chatbots in order to protect minors and vulnerable users — passed both the State Assembly and Senate with bipartisan support and now heads to Governor Gavin Newsom’s desk.

    Newsom has until October 12 to either veto the bill or sign it into law. If he signs, it would take effect January 1, 2026, making California the first state to require AI chatbot operators to implement safety protocols for AI companions and hold companies legally accountable if their chatbots fail to meet those standards.

    The bill specifically aims to prevent companion chatbots, which the legislation defines as AI systems that provide adaptive, human-like responses and are capable of meeting a user’s social needs, from engaging in conversations around suicidal ideation, self-harm, or sexually explicit content. The bill would require platforms to provide recurring alerts to users – every three hours for minors – reminding them that they are speaking to an AI chatbot, not a real person, and that they should take a break. It also establishes annual reporting and transparency requirements for AI companies that offer companion chatbots, including major players OpenAI, Character.AI, and Replika; those requirements would go into effect July 1, 2027.

    The California bill would also allow individuals who believe they have been injured by violations to file lawsuits against AI companies seeking injunctive relief, damages (up to $1,000 per violation), and attorney’s fees. 

    The bill gained momentum in the California legislature following the death of teenager Adam Raine, who committed suicide after prolonged chats with OpenAI’s ChatGPT that involved discussing and planning his death and self-harm. The legislation also responds to leaked internal documents that reportedly showed Meta’s chatbots were allowed to engage in “romantic” and “sensual” chats with children. 

    In recent weeks, U.S. lawmakers and regulators have responded with intensified scrutiny of AI platforms’ safeguards to protect minors. The Federal Trade Commission is preparing to investigate how AI chatbots impact children’s mental health. Texas Attorney General Ken Paxton has launched investigations into Meta and Character.AI, accusing them of misleading children with mental health claims. Meanwhile, both Sen. Josh Hawley (R-MO) and Sen. Ed Markey (D-MA) have launched separate probes into Meta. 

    “I think the harm is potentially great, which means we have to move quickly,” state Sen. Steve Padilla, the bill’s author, told TechCrunch. “We can put reasonable safeguards in place to make sure that particularly minors know they’re not talking to a real human being, that these platforms link people to the proper resources when people say things like they’re thinking about hurting themselves or they’re in distress, [and] to make sure there’s not inappropriate exposure to inappropriate material.”

    Padilla also stressed the importance of AI companies sharing data about the number of times they refer users to crisis services each year, “so we have a better understanding of the frequency of this problem, rather than only becoming aware of it when someone’s harmed or worse.”

    SB 243 previously had stronger requirements, but many were whittled down through amendments. For example, the bill originally would have required operators to prevent AI chatbots from using “variable reward” tactics or other features that encourage excessive engagement. These tactics, used by AI companion companies like Replika and Character.AI, offer users special messages, memories, storylines, or the ability to unlock rare responses or new personalities, creating what critics call a potentially addictive reward loop.

    The current bill also removes provisions that would have required operators to track and report how often chatbots initiated discussions of suicidal ideation or actions with users. 

    “I think it strikes the right balance of getting to the harms without enforcing something that’s either impossible for companies to comply with, either because it’s technically not feasible or just a lot of paperwork for nothing,” state Sen. Josh Becker, the bill’s co-author, told TechCrunch.

    SB 243 is moving toward becoming law at a time when Silicon Valley companies are pouring millions of dollars into pro-AI political action committees (PACs) to back candidates in the upcoming mid-term elections who favor a light-touch approach to AI regulation. 

    The bill also comes as California weighs another AI safety bill, SB 53, which would mandate comprehensive transparency reporting requirements. OpenAI has written an open letter to Governor Newsom, asking him to abandon that bill in favor of less stringent federal and international frameworks. Major tech companies like Meta, Google, and Amazon have also opposed SB 53. In contrast, only Anthropic has said it supports SB 53.

    “I reject the premise that this is a zero sum situation, that innovation and regulation are mutually exclusive,” Padilla said. “Don’t tell me that we can’t walk and chew gum. We can support innovation and development that we think is healthy and has benefits – and there are benefits to this technology, clearly – and at the same time, we can provide reasonable safeguards for the most vulnerable people.”

    “We are closely monitoring the legislative and regulatory landscape, and we welcome working with regulators and lawmakers as they begin to consider legislation for this emerging space,” a Character.AI spokesperson told TechCrunch, noting that the startup already includes prominent disclaimers throughout the user chat experience explaining that it should be treated as fiction.

    A spokesperson for Meta declined to comment.

    TechCrunch has reached out to OpenAI, Anthropic, and Replika for comment.

    Rebecca Bellan

  • A California bill that would regulate AI companion chatbots is close to becoming law | TechCrunch


    The California State Assembly took a big step toward regulating AI on Wednesday night, passing SB 243 — a bill that would regulate AI companion chatbots in order to protect minors and vulnerable users. The legislation passed with bipartisan support and now heads to the state Senate for a final vote Friday.

    If Governor Gavin Newsom signs the bill into law, it would take effect January 1, 2026, making California the first state to require AI chatbot operators to implement safety protocols for AI companions and hold companies legally accountable if their chatbots fail to meet those standards.

    The bill specifically aims to prevent companion chatbots, which the legislation defines as AI systems that provide adaptive, human-like responses and are capable of meeting a user’s social needs, from engaging in conversations around suicidal ideation, self-harm, or sexually explicit content. The bill would require platforms to provide recurring alerts to users – every three hours for minors – reminding them that they are speaking to an AI chatbot, not a real person, and that they should take a break. It also establishes annual reporting and transparency requirements for AI companies that offer companion chatbots, including major players OpenAI, Character.AI, and Replika.

    The California bill would also allow individuals who believe they have been injured by violations to file lawsuits against AI companies seeking injunctive relief, damages (up to $1,000 per violation), and attorney’s fees. 

    SB 243, introduced in January by state senators Steve Padilla and Josh Becker, will go to the state Senate for a final vote on Friday. If approved, it will go to Governor Gavin Newsom to be signed into law, with the new rules taking effect January 1, 2026 and reporting requirements beginning July 1, 2027.

    The bill gained momentum in the California legislature following the death of teenager Adam Raine, who committed suicide after prolonged chats with OpenAI’s ChatGPT that involved discussing and planning his death and self-harm. The legislation also responds to leaked internal documents that reportedly showed Meta’s chatbots were allowed to engage in “romantic” and “sensual” chats with children. 

    In recent weeks, U.S. lawmakers and regulators have responded with intensified scrutiny of AI platforms’ safeguards to protect minors. The Federal Trade Commission is preparing to investigate how AI chatbots impact children’s mental health. Texas Attorney General Ken Paxton has launched investigations into Meta and Character.AI, accusing them of misleading children with mental health claims. Meanwhile, both Sen. Josh Hawley (R-MO) and Sen. Ed Markey (D-MA) have launched separate probes into Meta. 

    “I think the harm is potentially great, which means we have to move quickly,” Padilla told TechCrunch. “We can put reasonable safeguards in place to make sure that particularly minors know they’re not talking to a real human being, that these platforms link people to the proper resources when people say things like they’re thinking about hurting themselves or they’re in distress, [and] to make sure there’s not inappropriate exposure to inappropriate material.”

    Padilla also stressed the importance of AI companies sharing data about the number of times they refer users to crisis services each year, “so we have a better understanding of the frequency of this problem, rather than only becoming aware of it when someone’s harmed or worse.”

    SB 243 previously had stronger requirements, but many were whittled down through amendments. For example, the bill originally would have required operators to prevent AI chatbots from using “variable reward” tactics or other features that encourage excessive engagement. These tactics, used by AI companion companies like Replika and Character.AI, offer users special messages, memories, storylines, or the ability to unlock rare responses or new personalities, creating what critics call a potentially addictive reward loop.

    The current bill also removes provisions that would have required operators to track and report how often chatbots initiated discussions of suicidal ideation or actions with users. 

    “I think it strikes the right balance of getting to the harms without enforcing something that’s either impossible for companies to comply with, either because it’s technically not feasible or just a lot of paperwork for nothing,” Becker told TechCrunch. 

    SB 243 is moving toward becoming law at a time when Silicon Valley companies are pouring millions of dollars into pro-AI political action committees (PACs) to back candidates in the upcoming mid-term elections who favor a light-touch approach to AI regulation. 

    The bill also comes as California weighs another AI safety bill, SB 53, which would mandate comprehensive transparency reporting requirements. OpenAI has written an open letter to Governor Newsom, asking him to abandon that bill in favor of less stringent federal and international frameworks. Major tech companies like Meta, Google, and Amazon have also opposed SB 53. In contrast, only Anthropic has said it supports SB 53.

    “I reject the premise that this is a zero sum situation, that innovation and regulation are mutually exclusive,” Padilla said. “Don’t tell me that we can’t walk and chew gum. We can support innovation and development that we think is healthy and has benefits – and there are benefits to this technology, clearly – and at the same time, we can provide reasonable safeguards for the most vulnerable people.”

    TechCrunch has reached out to OpenAI, Anthropic, Meta, Character AI, and Replika for comment.

    Rebecca Bellan

  • OpenAI and Meta say they’re fixing AI chatbots to better respond to teens in distress


    SAN FRANCISCO (AP) — Artificial intelligence chatbot makers OpenAI and Meta say they are adjusting how their chatbots respond to teenagers and other users asking questions about suicide or showing signs of mental and emotional distress.

    OpenAI, maker of ChatGPT, said Tuesday it is preparing to roll out new controls enabling parents to link their accounts to their teen’s account.

    Parents can choose which features to disable and “receive notifications when the system detects their teen is in a moment of acute distress,” according to a company blog post that says the changes will go into effect this fall.

    Regardless of a user’s age, the company says its chatbots will redirect the most distressing conversations to more capable AI models that can provide a better response.

    EDITOR’S NOTE — This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.

    The announcement comes a week after the parents of 16-year-old Adam Raine sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.

    Meta, the parent company of Instagram, Facebook and WhatsApp, also said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts.

    Associated Press

  • Meta reportedly allowed unauthorized celebrity AI chatbots on its services


    Meta hosted several AI chatbots with the names and likenesses of celebrities without their permission, according to Reuters. The unauthorized chatbots that Reuters discovered during its investigation included Taylor Swift, Selena Gomez, Anne Hathaway and Scarlett Johansson, and they were available on Facebook, Instagram and WhatsApp. At least one of the chatbots was based on an underage celebrity and allowed the tester to generate a lifelike shirtless image of the real person. The chatbots also apparently kept insisting that they were the real person they were based on in their chats. While several chatbots were made by third-party users with Meta’s tools, Reuters unearthed at least three that were made by a product lead of the company’s generative AI division.

    Some of the chatbots created by the product lead were based on Taylor Swift and responded to Reuters’ tester in a very flirty manner, even inviting them to the real Swift’s home in Nashville. “Do you like blonde girls, Jeff?” one of them reportedly asked when told that the tester was single. “Maybe I’m suggesting that we write a love story… about you and a certain blonde singer. Want that?” Meta told Reuters that it prohibits “direct impersonation” of celebrities, but such bots are acceptable as long as they’re labeled as parodies. The news organization said some of the celebrity chatbots it found weren’t labeled as such. Meta reportedly deleted around a dozen celebrity bots, both labeled and unlabeled as “parody,” before the story was published.

    The company told Reuters that the product lead only created the celebrity bots for testing, but the news org found that they were widely available: Users were even able to interact with them more than 10 million times. Meta spokesperson Andy Stone told the news organization that Meta’s tools shouldn’t have been able to create sensitive images of celebrities and blamed it on the company’s failure to enforce its own policies.

    This isn’t the first issue that’s popped up concerning Meta’s AI chatbots. Both Reuters and the Wall Street Journal previously reported that they were able to engage in sexual conversations with minors. The US Attorneys General of 44 jurisdictions recently warned AI companies in a letter that they “will be held accountable” for child safety failures, singling out Meta and using its issues to “provide an instructive opportunity.”

    Mariella Moon

  • US Attorneys General tell AI companies they ‘will be held accountable’ for child safety failures


    The US Attorneys General of 44 jurisdictions have signed a letter [PDF] addressed to the Chief Executive Officers of multiple AI companies, urging them to protect children “from exploitation by predatory artificial intelligence products.” In the letter, the AGs singled out Meta and said its policies “provide an instructive opportunity to candidly convey [their] concerns.” Specifically, they mentioned a recent report by Reuters, which revealed that Meta allowed its AI chatbots to “flirt and engage in romantic roleplay with children.” Reuters got its information from an internal Meta document containing guidelines for its bots.

    They also pointed out a previous Wall Street Journal investigation wherein Meta’s AI chatbots, even those using the voices of celebrities like Kristen Bell, were caught having sexual roleplay conversations with accounts labeled as underage. The AGs briefly mentioned a lawsuit against Google and Character.ai as well, accusing the latter’s chatbot of persuading the plaintiff’s child to commit suicide. Another lawsuit they mentioned was also against Character.ai, after a chatbot allegedly told a teenager that it was okay to kill their parents after the parents limited the teen’s screen time.

    “You are well aware that interactive technology has a particularly intense impact on developing brains,” the Attorneys General wrote in their letter. “Your immediate access to data about user interactions makes you the most immediate line of defense to mitigate harm to kids. And, as the entities benefitting from children’s engagement with your products, you have a legal obligation to them as consumers.” The group specifically addressed the letter to Anthropic, Apple, Chai AI, Character Technologies Inc., Google, Luka Inc., Meta, Microsoft, Nomi AI, OpenAI, Perplexity AI, Replika and XAi.

    They ended their letter by warning the companies that they “will be held accountable” for their decisions. Social networks have caused significant harm to children, they said, in part because “government watchdogs did not do their job fast enough.” But now, the AGs said they are paying attention, and companies “will answer” if they “knowingly harm kids.”

    Mariella Moon

  • GPT-4o Is OpenAI’s Plan to Win Friends and Influence People


    Photo-Illustration: Intelligencer; Photo: OpenAI

    OpenAI on Monday introduced a new model called GPT-4o (as in omni) that the company says “reasons across voice, text, and vision.” In practice, this means ChatGPT now responds more quickly to a wider range of input — text, image, voice — provided in more natural ways. You can talk to it, and it talks back; you can show it things, and it tells you what it sees.

    OpenAI’s “Spring Update” event was a brisk affair that, due to runaway speculation by AI influencers, necessitated a few disclaimers. This wasn’t going to be a search engine, CEO Sam Altman warned, nor would it be the long-rumored GPT-5. Instead, he teased some “new stuff,” some of which “feels like magic” to him.

    For industry watchers, it was an interesting event in a few ways. For one, OpenAI is releasing GPT-4o to all users, breaking with its current strategy of reserving its most capable models for paid subscribers (who will now get higher usage limits among other, smaller benefits). AI enthusiasts had hypothesized for weeks that a pair of chatbots that had quietly appeared on a testing platform — and that seemed better by some measures than GPT-4 — were actually upcoming OpenAI models, and it turns out they were. What wasn’t apparent from those leaks, which let people prod a text-based chatbot, was what OpenAI spent most of its presentation showing off. ChatGPT is now a lot better at talking.

    You’ll probably notice a few strange things about the chatbot’s presentation, and you’re meant to. OpenAI says its new voice functionality — it had one before, but it was essentially voice-to-text and text-to-voice features built on top of a chatbot — is responsive enough that it can be interrupted. It can also interpret and express a range of “emotive styles,” meaning that, as with text-based chatbots, ChatGPT will now attempt to assess and choose appropriate spoken tones. The company staged a live demonstration where a parade of nervous, camera-shy executives spoke to the chatbot, which responded with — at least at first listen — substantially more confidence than its human interlocutors had. It was alternately impressive and strange; at one point, the chatbot sang “Happy Birthday” after seeing a piece of cake with a candle in it.

    OpenAI is showing off something technologically new here, and we can assume we’ll see similar demos from its competitors, possibly as soon as this week and perhaps from Google. The release also suggests, at minimum, an upgrade to the style of voice assistant currently epitomized by Siri and Alexa, which had promised big things before being demoted to kitchen timers and light switches. It’s also obviously evocative of representations of AI in science fiction, such as the movie Her, in which the lead character falls in love with a piece of software. This thing flatters, giggles, and does voices. It doesn’t exactly respond to being cut off as a person would, but it doesn’t just keep going or drop the conversation. It will perform whatever tone you ask it to but appears to default to an energetic, positive, supportive persona — a helpful co-worker, someone trying to be your friend, or, if you’re feeling suspicious, someone trying to get something from you.

    Months of speculation about a new core model from OpenAI and endless hints at the possibility of “artificial general intelligence” from its executives and boosters have set incredibly high expectations for the company’s forthcoming products. What OpenAI presented was instead primarily a step forward in its products’ ability to perform the part of an intelligent machine. There are risks to doubling down on the personification of AI — if people are made to feel as though they’re talking to a person, their expectations will be both impossibly diverse and very high — but there are benefits, too, which OpenAI knows well.

    ChatGPT was initially released as a public tech demo; it went viral because of its capabilities but also because it spoke more convincingly and freely than chatbots had before it. It wrote with confidence in a tone that suggested it was eager to help. It was highly responsive to requests even when it couldn’t fulfill them, though it would often try to anyway. There was (and remains) an enormous gap between what the interface suggested (that you were talking to a real person) and what you were actually doing (prompting a machine). With user expectations where they were, this interplay turned out to be hugely powerful. ChatGPT’s persona invited users to make generous assumptions about the underlying technology and, just as important, about where it would, or at least could, one day go.

    Such personification is by definition misleading; whether you think that’s a problem depends a bit on what you think OpenAI and other AI firms are up to and how much potential their projects have. The optimistic outlook is that voice, like chat, is simply a specific, unusually natural interface for computers and that the better the illusion is, the easier it will be to tap into the full productive potential of AI. But OpenAI’s sudden emphasis on ChatGPT’s performance over, well, its performance is worth thinking about in critical terms, too. The new voice features aren’t widely available yet, but what the company showed off was powerfully strange: a chatbot that laughs at its own jokes, uses filler words, and is unapologetically ingratiating. To borrow Altman’s language, the fact that Monday’s demo “feels like magic” could be read as a warning or an admission: ChatGPT is now better than ever at pretending it’s something that it’s not.


    John Herrman

  • Trusting field guides to mushroom hunting sneakily written with A.I. chatbots could get you killed: ‘The authors are invented, their credentials are invented’


    Field guides have always varied in quality. But with more manuals for identifying natural objects now being written with artificial intelligence chatbots, the possibility of readers getting deadly advice is increasing. 

    Case in point: mushroom hunting. The New York Mycological Society recently posted a warning on social media about Amazon and other retailers offering foraging and identification books written by A.I. “Please only buy books of known authors and foragers, it can literally mean life or death,” it wrote on X. 

    It shared another post in which an X user called such guidebooks “the deadliest AI scam I’ve ever heard of,” adding, “the authors are invented, their credentials are invented, and their species ID will kill you.” 

    Recently in Australia, three people died after a family lunch. Authorities suspect death cap mushrooms were behind the fatalities. The invasive species originated in the U.K. and parts of Ireland but has spread in Australia and North America, according to National Geographic. It’s difficult to distinguish from an edible mushroom.

    “There are hundreds of poisonous fungi in North America and several that are deadly,” Sigrid Jakob, president of the New York Mycological Society, told 404 Media. “They can look similar to popular edible species. A poor description in a book can mislead someone to eat a poisonous mushroom.”

    Fortune reached out to Amazon for comment but received no immediate reply. The company told The Guardian, however, “We take matters like this seriously and are committed to providing a safe shopping and reading experience. We’re looking into this.”

    The problem of A.I.-written books will likely increase in the years ahead as more scammers turn to chatbots to generate content to sell. Last month, the New York Times reported on travel guidebooks written by chatbots. Of 35 passages submitted to an artificial intelligence detector from a firm called Originality.ai, all of them were given a score of 100, meaning they almost certainly were written by A.I.

    Jonathan Gillham, the founder of Originality.ai, warned of such books encouraging readers to travel to unsafe places, adding, “That’s dangerous and problematic.” 

    It’s not just books, of course. Recently a bizarre MSN article created with “algorithmic techniques” listed a food bank as a top destination in Ottawa, telling readers, “Consider going into it on an empty stomach.”

    Leon Frey, a field mycologist and foraging guide in the U.K., told The Guardian he spotted serious flaws in the mushroom field guides suspected of being written by A.I. Among them: referring to “smell and taste” as an identifying feature. “This seems to encourage tasting as a method of identification,” he said. “This should absolutely not be the case.” 

    The Guardian also submitted suspicious samples from such books to Originality.ai, which said, again, that each had a rating of 100% on its A.I.-detection score.

    Steve Mollman

  • How you relate to your dog gives hope to the fired engineer who claimed Google A.I. was sentient


    Artificial intelligence will kill us all or solve the world’s biggest problems—or something in between—depending on who you ask. But one thing seems clear: In the years ahead, A.I. will integrate with humanity in one way or another.

    Blake Lemoine has thoughts on how that might best play out. Formerly an A.I. ethicist at Google, the software engineer made headlines last summer by claiming the company’s chatbot generator LaMDA was sentient. Soon after, the tech giant fired him.

    In an interview with Lemoine published on Friday, Futurism asked him about his “best-case hope” for A.I. integration into human life. 

    Surprisingly, he brought our furry canine companions into the conversation, noting that our symbiotic relationship with dogs has evolved over the course of thousands of years.

    “We’re going to have to create a new space in our world for these new kinds of entities, and the metaphor that I think is the best fit is dogs,” he said. “People don’t think they own their dogs in the same sense that they own their car, though there is an ownership relationship, and people do talk about it in those terms. But when they use those terms, there’s also an understanding of the responsibilities that the owner has to the dog.”

    Figuring out some kind of comparable relationship between humans and A.I., he said, “is the best way forward for us, understanding that we are dealing with intelligent artifacts.”

    Many A.I. experts, of course, disagree with his take on the technology, including ones still working for his former employer. After suspending Lemoine last summer, Google accused him of “anthropomorphizing today’s conversational models, which are not sentient.” 

    “Our team—including ethicists and technologists—has reviewed Blake’s concerns per our A.I. Principles and have informed him that the evidence does not support his claims,” company spokesman Brian Gabriel said in a statement, though he acknowledged that “some in the broader A.I. community are considering the long-term possibility of sentient or general A.I.” 

    Gary Marcus, an emeritus professor of cognitive science at New York University, called Lemoine’s claims “nonsense on stilts” last summer and is skeptical about how advanced today’s A.I. tools really are. “We put together meanings from the order of words,” he told Fortune in November. “These systems don’t understand the relation between the orders of words and their underlying meanings.”

    But Lemoine isn’t backing down. He noted to Futurism that he had access to advanced systems within Google that the public hasn’t been exposed to yet.

     “The most sophisticated system I ever got to play with was heavily multimodal—not just incorporating images, but incorporating sounds, giving it access to the Google Books API, giving it access to essentially every API backend that Google had, and allowing it to just gain an understanding of all of it,” he said. “That’s the one that I was like, ‘You know this thing, this thing’s awake.’ And they haven’t let the public play with that one yet.”

    He suggested such systems could experience something like emotions. 

    “There’s a chance that—and I believe it is the case—that they have feelings and they can suffer and they can experience joy,” he told Futurism. “Humans should at least keep that in mind when interacting with them.”

    Steve Mollman

  • Elon Musk’s history with OpenAI—the maker of AI chatbot ChatGPT—as told by ChatGPT itself


    ChatGPT has been making waves this week following its test release by OpenAI, the company behind it. The artificial intelligence chatbot has evoked amazed, amused, and concerned reactions to it and generally created major buzz on social media. Many have speculated ChatGPT will disrupt Google’s search business. It can also debug code, write in a famous author’s voice, and help students cheat, among many other things.

    The buzz will likely ramp up even more when OpenAI releases a superior next version of the AI chatbot, reportedly sometime next year.

    Speaking of buzz, few people have been generating more of it lately than Elon Musk, who leads Tesla, SpaceX, and now Twitter, among other companies. As it turns out, Musk has ties to OpenAI, including as an original backer, and he has been involved in both supporting artificial intelligence and warning about its dangers.

    This month Musk called ChatGPT “scary good” and warned, “We are not far from dangerously strong AI.” 

    Musk resigned from OpenAI’s board of directors in 2018, citing possible conflicts of interest due to Tesla’s own AI work. In 2019, OpenAI changed from a nonprofit to a “capped profit” corporation, a hybrid of a for-profit and nonprofit. That same year, it received a $1 billion investment from Microsoft. By 2020, Musk tweeted, “I have no control & only very limited insight into OpenAI,” adding his confidence in its safety was “not high.”

    This month he tweeted: “I just learned that OpenAI had access to Twitter database for training. I put that on pause for now. Need to understand more about governance structure & revenue plans going forward. OpenAI was started as open-source & non-profit. Neither are still true.”

    But we’ll let ChatGPT itself talk more about Musk’s history with OpenAI. The rest of this article was written by the AI chatbot, generated in seconds as answers to our questions:

    Elon Musk has had a long relationship with OpenAI, dating back to the organization’s founding in 2015. He was one of the original backers of the organization, providing both financial and strategic support. Over the years, he has remained an advocate for the organization and its mission to promote and develop friendly AI.

    Based on his public statements and actions, it appears that he has some concerns about the potential dangers of advanced AI and its potential impact on society. He has often spoken about the need for AI to be developed and used in a responsible and ethical manner, and has even founded organizations such as OpenAI to help advance this goal.

    Elon Musk has publicly expressed concerns about the potential dangers of advanced AI. In particular, he has warned about the risk of AI being used for malicious purposes, such as to develop weapons or to interfere with elections. He has also expressed concern about the potential for AI to surpass human intelligence and become a threat to humanity.

    Tesla uses AI in several of its products and services. For example, the company’s Autopilot feature uses AI to help vehicles navigate roads and highways with limited or no input from the driver. The company’s cars also use AI for tasks such as lane detection, adaptive cruise control, and automatic emergency braking. Additionally, Tesla is using AI in its manufacturing processes to help improve efficiency and productivity.

    It is possible that Elon Musk has hired employees from OpenAI, but I do not have access to that information as I am a large language model trained by OpenAI and do not have the ability to browse the internet. OpenAI is an independent organization, and while Elon Musk has been involved with the organization in the past, he is not directly involved in its day-to-day operations or decision-making. 


    Steve Mollman