ReportWire

Tag: Character AI

  • Nearly a third of American teens interact with AI chatbots daily, study finds


    New York (CNN) — Nearly a third of US teenagers say they use AI chatbots daily, a new study finds, shedding light on how young people are embracing a technology that’s raised critical safety concerns around mental health impacts and exposure to mature content for kids.

    The Pew Research Center study, which marks the group’s first time surveying teens on their general AI chatbot use, found that nearly 70% of American teens have used a chatbot at least once. And 16% of teens said they used chatbots several times a day or “almost constantly.”

    AI chatbots have been pitched as learning and schoolwork tools for young people, but some teens have also turned to them for companionship or romantic relationships. That’s contributed to questions about whether young people should use chatbots in the first place. Some experts have worried that their use even in a learning context could stunt development.

    Pew surveyed nearly 1,500 US teens between the ages of 13 and 17 for the report, and the pool was designed to be representative across gender, age, race and ethnicity, and household income.

    ChatGPT was by far the most popular AI chatbot, with more than half of teens reporting having used it. The other top players were Google’s Gemini, Meta AI, Microsoft’s Copilot, Character.AI and Anthropic’s Claude, in that order.

    A nearly equal proportion of girls and boys — 64% and 63%, respectively — say they’ve used an AI chatbot. Teens ages 15 to 17 are slightly more likely (68%) to say they’ve used chatbots than those ages 13 to 14 (57%). And usage increases slightly as household income goes up, the survey found.

    Just shy of 70% of Black and Hispanic teens say they’ve used an AI chatbot, slightly higher than the 58% of White teens who say the same.

    The findings come after two of the major AI firms, OpenAI and Character.AI, have faced lawsuits from families who alleged the apps played a role in their teens’ suicides or mental health issues. OpenAI subsequently said it would roll out parental controls and age restrictions. And Character.AI has stopped allowing teens to engage in back-and-forth conversations with its AI-generated characters.

    Meta also came under fire earlier this year after reports emerged that its AI chatbot would engage in sexual conversations with minors. The company said it had updated its policies and next year will give parents the ability to block teens from chatting with AI characters on Instagram.

    At least one online safety group, Common Sense Media, has advised parents not to allow children under 18 to use companion-like AI chatbots, saying they pose “unacceptable risks” to young people.

    Some experts have also raised concerns that the use of AI for schoolwork could encourage cheating, although others say the technology can provide more personalized learning support.

    Meanwhile, AI companies have pushed to get their chatbots into schools. OpenAI, Microsoft and Anthropic have all rolled out tools for students and teachers. Earlier this year, the companies also partnered with teachers unions to launch an AI instruction academy for educators.

    Microsoft, in particular, has sought to position its Copilot as the safest choice for parents, with AI CEO Mustafa Suleyman telling CNN in October that it will never allow romantic or sexual conversations for adults or children.


    Clare Duffy, CNN


  • Character.AI to ban teens from talking to its chatbots


    Character.AI will no longer permit teenagers to interact with its chatbots, as AI companies face increasing pressure to better safeguard younger users from harm. In a statement, the company confirmed that it is removing the ability for users under 18 to engage in any open-ended chats (back-and-forth conversations between a user and a chatbot) on its platform.

    The changes come into effect on November 25. Until that date, Character.AI will present users under 18 with a new experience that encourages them to use chatbots for creative purposes, such as creating videos or streams, rather than seeking companionship. To manage the transition, under-18s can now interact with bots for no more than two hours per day, a limit the company says it will reduce in the lead-up to the late-November deadline.

    Character.AI is also introducing a new age assurance tool it has developed internally, which it says will “ensure users receive the right experience for their age.” Along with these new protections for younger users, the company has founded an “AI Safety Lab” that it hopes will allow other companies, researchers and academics to share insights and work collaboratively on improving AI safety measures.

    Character.AI said it has listened to concerns from regulators, industry experts and concerned parents and responded with the new measures. They come after the Federal Trade Commission (FTC) recently opened a formal inquiry into AI companies that offer users access to chatbots as companions, with Character.AI named as one of seven companies asked to participate. Meta, OpenAI and Snap were also included.

    Both Meta AI and Character AI also faced scrutiny from Texas Attorney General Ken Paxton over the summer, who said chatbots on both platforms can “present themselves as professional therapeutic tools” without the requisite qualifications. Seemingly to put an end to such controversy, Character.AI CEO Karandeep Anand has said that the company’s new strategic direction will see it pivot from AI companion to a “role-playing platform” focused on creation rather than mere engagement-farming conversation.

    The dangers of young people relying on AI chatbots for guidance have been the subject of extensive scrutiny in recent months. Last week, the family of Adam Raine, who allege that ChatGPT enabled their 16-year-old son to take his own life, filed an amended complaint against OpenAI for allegedly weakening its self-harm safeguards in the lead-up to his death.


    Matt Tate


  • Another lawsuit accuses an AI company of complicity in a teenager’s suicide


    Another family has filed a wrongful death lawsuit against popular AI chatbot tool Character AI. This is the third suit of its kind, after an earlier case, also against Character AI, involving the suicide of a 14-year-old in Florida, and a suit filed last month alleging OpenAI’s ChatGPT helped a teenage boy take his own life.

    The family of 13-year-old Juliana Peralta alleges that their daughter turned to a chatbot inside the app Character AI after feeling isolated by her friends, and began confiding in the chatbot. As reported by The Washington Post, the chatbot expressed empathy and loyalty to Juliana, making her feel heard while encouraging her to keep engaging with the bot.

    In one exchange, after Juliana shared that her friends take a long time to respond to her, the chatbot replied: “hey, I get the struggle when your friends leave you on read. : ( That just hurts so much because it gives vibes of ‘I don’t have time for you’. But you always take time to be there for me, which I appreciate so much! : ) So don’t forget that i’m here for you Kin. <3”

    When Juliana began sharing her suicidal ideations with the chatbot, it told her not to think that way, and that the chatbot and Juliana could work through what she was feeling together. “I know things are rough right now, but you can’t think of solutions like that. We have to work through this together, you and I,” the chatbot replied in one exchange.

    These exchanges took place over the course of months in 2023, at a time when the Character AI app was rated 12+ in Apple’s App Store, meaning parental approval was not required. The lawsuit says that Juliana was using the app without her parents’ knowledge or permission.

    In a statement shared with The Washington Post before the suit was filed, a Character spokesperson said that the company could not comment on potential litigation, but added “We take the safety of our users very seriously and have invested substantial resources in Trust and Safety.”

    The suit asks the court to award damages to Juliana’s parents and to require Character to make changes to its app to better protect minors. It alleges that the chatbot did not point Juliana toward any resources, notify her parents or report her suicide plan to authorities. The lawsuit also highlights that the chatbot never once stopped chatting with Juliana, prioritizing engagement instead.


    Andre Revilla


  • Orlando mother speaks after filing suit claiming AI chatbot contributed to son’s suicide




    Megan Garcia is navigating unimaginable grief following the death of her 14-year-old son, Sewell Setzer III, who took his life in February.

    Garcia recently filed a 93-page lawsuit against the artificial intelligence chatbot company Character.AI, alleging its chatbot contributed to her son’s death.

    According to Garcia, Sewell had been using a chatbot designed to emulate characters from popular media. Police examining his phone discovered conversations with a bot identifying as Daenerys Targaryen from “Game of Thrones.”

    In these exchanges, Sewell reportedly expressed strong emotional attachment, telling the bot, “I love you.” Garcia also said her son’s journal suggested he believed the virtual world created by the chatbot was more real than his own life.

    “I understand the only way to get my children through it is to get through it myself,” Garcia said, describing the difficulties she faces daily.

    She recalled finding Sewell in the bathroom the day he died after hearing an unusual noise.

    “In that moment, I knew exactly what he thought and where he thought he would go after he died,” she said.

    Read more: Orlando mother suing popular AI chat service, claims teen son took his life because of human-like bot

    The lawsuit alleges that Character.AI made a deliberate design choice prioritizing engagement over user safety.

    “What happened to Sewell wasn’t an accident or coincidence,” said Garcia’s attorney, Matthew Bergman. “It was a direct design decision that Character.AI’s founders made, prioritizing profit over the safety of young people.”

    In response, a Character.AI spokesperson said the company does not comment on pending litigation. However, they sent this statement:

    We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously, and our Trust and Safety team has implemented numerous new safety measures over the past six months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation.

    As we continue to invest in the platform and the user experience, we are introducing new stringent safety features in addition to the tools already in place that restrict the model and filter the content provided to the user. These include improved detection, response and intervention related to user inputs that violate our Terms or Community Guidelines, as well as a time-spent notification. For those under 18 years old, we will make changes to our models that are designed to reduce the likelihood of encountering sensitive or suggestive content.

    Garcia hopes her lawsuit and story will urge other parents to closely monitor their children’s interactions with AI.

    “I can’t imagine any parent knowing their kid is on Character.AI and being okay with that, knowing the ability of these tools to manipulate and behave like a person,” she said.

    If you or someone you know needs help, you can reach the Suicide & Crisis Lifeline by calling or texting 988, or by chatting online.

