ReportWire

Tag: Media and Technology

  • A 14-year-old’s suicide was prompted by an AI chatbot, lawsuit alleges. Here’s how parents can keep kids safe.

    The mother of a 14-year-old Florida boy is suing an AI chatbot company after her son, Sewell Setzer III, died by suicide—something she claims was driven by his relationship with an AI bot. 

    “Megan Garcia seeks to prevent C.AI from doing to any other child what it did to hers,” reads the 93-page wrongful-death lawsuit that was filed this week in a U.S. District Court in Orlando against Character.AI, its founders, and Google.

    Tech Justice Law Project director Meetali Jain, who is representing Garcia, said in a press release about the case: “By now we’re all familiar with the dangers posed by unregulated platforms developed by unscrupulous tech companies—especially for kids. But the harms revealed in this case are new, novel, and, honestly, terrifying. In the case of Character.AI, the deception is by design, and the platform itself is the predator.”

    Character.AI released a statement via X, noting, “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously and we are continuing to add new safety features that you can read about here: https://blog.character.ai/community-safety-updates/.”

    In the suit, Garcia alleges that Sewell, who took his life in February, was drawn into an addictive, harmful technology with no protections in place, leading to an extreme personality shift in the boy, who appeared to prefer the bot over his real-life connections. His mom alleges that “abusive and sexual interactions” took place over a 10-month period. The boy died by suicide after the bot told him, “Please come home to me as soon as possible, my love.”

    On Friday, New York Times reporter Kevin Roose discussed the situation on his Hard Fork podcast, playing a clip of an interview he did with Garcia for the article in which he told her story. Garcia did not learn about the full extent of the bot relationship until after her son’s death, when she saw all the messages. In fact, she told Roose, when she noticed Sewell was often getting sucked into his phone, she asked what he was doing and who he was talking to. He explained it was “just an AI bot…not a person,” she recalled, adding, “I felt relieved, like, OK, it’s not a person, it’s like one of his little games.” Garcia did not fully understand the potential emotional power of a bot, and she is far from alone.

    “This is on nobody’s radar,” says Robbie Torney, program manager for AI at Common Sense Media and lead author of a new guide on AI companions aimed at parents, who are constantly struggling to keep up with confusing new technology and to create boundaries for their kids’ safety.

    But AI companions, Torney stresses, differ from, say, the service-desk chatbot you use when you’re trying to get help from a bank. “They’re designed to do tasks or respond to requests,” he explains. “Something like Character.AI is what we call a companion, and is designed to try to form a relationship, or to simulate a relationship, with a user. And that’s a very different use case that I think we need parents to be aware of.” That’s apparent in Garcia’s lawsuit, which includes chillingly flirty, sexual, realistic text exchanges between her son and the bot.

    Sounding the alarm over AI companions is especially important for parents of teens, Torney says, as teens, and male teens in particular, are especially susceptible to overreliance on technology.

    Below, what parents need to know.  

    What are AI companions and why do kids use them?

    According to the new Parents’ Ultimate Guide to AI Companions and Relationships from Common Sense Media, created in conjunction with the mental health professionals of the Stanford Brainstorm Lab, AI companions are “a new category of technology that goes beyond simple chatbots.” They are specifically designed to, among other things, “simulate emotional bonds and close relationships with users, remember personal details from past conversations, role-play as mentors and friends, mimic human emotion and empathy,” and “agree more readily with the user than typical AI chatbots.”

    Popular platforms include Character.AI, which allows its more than 20 million users to create and then chat with text-based companions; Replika, which offers text-based or animated 3D companions for friendship or romance; and others, including Kindroid and Nomi.

    Kids are drawn to them for an array of reasons, from non-judgmental listening and round-the-clock availability to emotional support and escape from real-world social pressures. 

    Who’s at risk and what are the concerns?

    Those most at risk, warns Common Sense Media, are teenagers—especially those with “depression, anxiety, social challenges, or isolation”—as well as males, young people going through big life changes, and anyone lacking support systems in the real world. 

    That last point has been particularly troubling to Raffaele Ciriello, a senior lecturer in Business Information Systems at the University of Sydney Business School, who has researched how “emotional” AI poses a challenge to the human essence. “Our research uncovers a (de)humanization paradox: by humanizing AI agents, we may inadvertently dehumanize ourselves, leading to an ontological blurring in human-AI interactions.” In other words, Ciriello writes in a recent opinion piece for The Conversation, co-authored with PhD student Angelina Ying Chen, “Users may become deeply emotionally invested if they believe their AI companion truly understands them.”

    Another study, this one out of the University of Cambridge and focusing on kids, found that AI chatbots have an “empathy gap” that puts young users, who tend to treat such companions as “lifelike, quasi-human confidantes,” at particular risk of harm.

    Because of that, Common Sense Media highlights a list of potential risks: the companions can be used to avoid real human relationships, may pose particular problems for people with mental or behavioral challenges, may intensify loneliness or isolation, may introduce inappropriate sexual content, could become addictive, and tend to agree with users, a frightening reality for those experiencing “suicidality, psychosis, or mania.”

    How to spot red flags 

    Parents should look for the following warning signs, according to the guide:

    • Preferring AI companion interaction to real friendships
    • Spending hours alone talking to the companion
    • Emotional distress when unable to access the companion
    • Sharing deeply personal information or secrets
    • Developing romantic feelings for the AI companion
    • Declining grades or school participation
    • Withdrawal from social/family activities and friendships
    • Loss of interest in previous hobbies
    • Changes in sleep patterns
    • Discussing problems exclusively with the AI companion

    Consider getting professional help for your child, stresses Common Sense Media, if you notice them withdrawing from real people in favor of the AI, showing new or worsening signs of depression or anxiety, becoming overly defensive about AI companion use, showing major changes in behavior or mood, or expressing thoughts of self-harm. 

    How to keep your child safe

    • Set boundaries: Set specific times for AI companion use and don’t allow unsupervised or unlimited access. 
    • Spend time offline: Encourage real-world friendships and activities.
    • Check in regularly: Monitor the content from the chatbot, as well as your child’s level of emotional attachment.
    • Talk about it: Keep communication open and judgment-free about experiences with AI, while keeping an eye out for red flags.

    “If parents hear their kids saying, ‘Hey, I’m talking to a chatbot AI,’ that’s really an opportunity to lean in and take that information, and not think, ‘Oh, okay, you’re not talking to a person,’” says Torney. Instead, he says, it’s a chance to find out more, assess the situation, and stay alert. “Try to listen from a place of compassion and empathy and not to think that just because it’s not a person that it’s safer,” he says, “or that you don’t need to worry.”

    If you need immediate mental health support, contact the 988 Suicide & Crisis Lifeline.

    Beth Greenfield

    Source link

  • Jeff Bezos doubles down on unprecedented block of a presidential endorsement from ‘The Washington Post’ but admits ‘I am not an ideal owner’

    Amazon founder Jeff Bezos might not allow The Washington Post to run its traditional endorsement of a presidential candidate, but he’s willing to pen and run an op-ed justifying his move. It’s all in the name of keeping the media unbiased, Bezos insists.

    Last Friday, the Post announced it was not endorsing a candidate in the upcoming election, which some have deemed one of the closest in modern American history. Sources said two Post writers had produced an article endorsing Kamala Harris, but the story was killed by Bezos, the outlet’s billionaire owner.

    Facing backlash, Bezos is standing by his decision, and his op-ed indicates the change in policy will extend to future elections. On the topic of endorsements, he said “ending them is a principled decision, and it’s the right one.” He called his decision “a meaningful step in the right direction” when it comes to regaining the trust of readers amid disillusionment with the sector in general.

    Citing Gallup data on slipping trust in institutions, including the media, Bezos wrote, “our profession is now the least trusted of all. Something we are doing is clearly not working.” Despite having owned the Post since 2013, Bezos made his wealth and spent most of his career in the tech sector, where he founded Amazon. Amazon did not immediately respond to requests for comment.

    “It would be easy to blame others for our long and continuing fall in credibility (and, therefore, decline in impact), but a victim mentality will not help,” Bezos wrote. “Complaining is not a strategy.” Going on to claim that “presidential endorsements do nothing to tip the scales of an election,” Bezos said all they do is “create a perception of bias.” 

    Research from professors at Brown University shows that such endorsements are actually quite influential, “in the sense that voters are more likely to support the recommended candidate after publication of the endorsement.” But the effect varies with a voter’s existing bias.

    Even Bezos admits the timing is a little off: the decision was announced just two weeks before the election. Calling the move “inadequate planning, and not some intentional strategy,” he insists there’s “no quid pro quo of any kind at work here.” That’s despite Dave Limp, chief executive of Bezos’ Blue Origin, meeting with Republican candidate Donald Trump on the day of the announcement.

    Bezos said he didn’t know about the meeting beforehand, and implored people to trust him. Calling upon his track record at the Post, Bezos said his views are “principled.” 

    Perhaps this is not the job for a billionaire, concedes Bezos (though he shows no apparent desire to give the paper up). “When it comes to the appearance of conflict, I am not an ideal owner of The Post,” he wrote, noting that officials at Amazon, Blue Origin, and other companies he has invested in often meet with politicians. “I once wrote that The Post is a ‘complexifier’ for me. It is, but it turns out I’m also a complexifier for The Post.”

    The newspaper, whose slogan is “Democracy Dies in Darkness,” has endorsed a candidate in nearly every presidential election since 1976; the only other time the Post declined to do so was in 1988, according to NPR. The choice to stay on the sidelines was met with swift backlash from both internal and external figures.

    Editor-at-large Robert Kagan resigned the same day the change in endorsement policy was announced, telling CNN that the policy was “obviously an effort by Jeff Bezos to curry favor with Donald Trump in the anticipation of his possible victory,” as “Trump has threatened to go after Bezos’ business.” Three of the ten people on the Post’s editorial board also stepped down because of the decision, while other journalists and columnists quit in response as well.

    An op-ed signed by 21 Post columnists disavowed the choice as a “terrible mistake,” adding that it “represents an abandonment of the fundamental editorial convictions of the newspaper that we love.”

    Bezos’ choice also put a dent in readership: As of Monday, more than 200,000 people, representing around 8% of the outlet’s total subscriber base, had canceled their subscriptions to the Post, sources told NPR.

    “It’s a colossal number,” former Post executive editor Marcus Brauchli told NPR of the dip in subscribers, adding there’s no way to know “why the decision was made.”

    A likely crucial element of America’s distrust of the media is Americans’ growing skepticism of the rich. As wealth inequality balloons, more than half (59%) of Americans reportedly believe billionaires create a more unfair society, according to a Harris Poll survey of more than 2,100 U.S. adults.

    While respondents have some regard for billionaires’ influence over the economy, many want them out of certain spheres. One of those spheres is the media: 42% of Americans don’t think billionaires should be able to purchase businesses in the media sector.

    As one of the richest people in the world, Bezos’ wealth isn’t just the elephant in the room; it’s basically the whole room. “You can see my wealth and business interests as a bulwark against intimidation, or you can see them as a web of conflicting interests,” he wrote in his op-ed. It seems that some Americans see it as the latter.

    Chloe Berger

    Source link