ReportWire

Why This AI Company Just Stopped Minors From Using Its Chatbots


“We do not take this step of removing open-ended Character chat lightly—but we do think that it’s the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology,” the company said.

In addition to removing access for users under 18, the company announced that it is working on age verification measures and establishing a nonprofit called the AI Safety Lab, which will focus on “innovating safety alignment for next-generation AI entertainment features.” Previous safety measures taken by the company include a notification directing users to the National Suicide Prevention Lifeline when self-harm or suicide is mentioned during chatbot conversations.

The decision comes after lawsuits filed against Character.AI by families and parents alleging that the company was liable for their children’s deaths. In August, Ken Paxton, the Texas attorney general, announced an investigation into the company and Meta AI Studio for “potentially engaging in deceptive trade practices and misleadingly marketing themselves as mental health tools.”


Ben Butler
