ReportWire

Parents warned over Instagram’s new teen rules: “false sense of security”


Instagram is introducing its biggest update yet to online safety for young users by applying PG-13 content guidelines to all teen accounts, Meta announced this week. 

Under the new rules, under-18s will still be blocked from seeing sexually suggestive or explicitly violent content, but Meta said the app will now go further by not recommending posts containing strong language, risky stunts or anything that could “encourage potentially harmful behaviors.” 

Newsweek reached out to Meta’s press team via email.  

Instagram will also block searches on mature topics, such as “alcohol” or “gore”; penalize accounts that repeatedly post age-inappropriate content; and extend the curbs to Instagram’s AI features. Importantly, users under 18 will be automatically placed in the 13+ mode and cannot opt out without parental permission.  

For parents seeking greater controls, Meta is introducing a stricter “Limited Content” mode that further restricts teen access and disables comment interactions.  

These changes will begin rolling out this week in Canada, the U.S., the U.K. and Australia, with a global rollout scheduled for the end of 2025, but campaigners, parents and tech experts remain deeply skeptical about how effective the shift will be in practice. 

Campaigner Concerns 

Advocacy groups argue that these revisions are far from sufficient. A recent report by the HEAT Initiative, ParentsTogether and others found that 60 percent of 13- to 15-year-olds had encountered unsafe content or unwanted messages on Instagram in the past six months, despite existing safety tools.  

Yaron Litwin, an online safety and AI expert, told Newsweek that enforcement will determine whether these new measures succeed. 

Litwin said: “Hopefully, its age prediction model will actually prevent … some children from accessing explicit and dangerous content on their feeds.

“However, that is [a] big if, and in any case, there is much harmful content on social-media platforms, including Instagram, that [is] not obvious enough for filters to catch.” 

Meta’s age classification system detects when a user is under 18, even if they claim otherwise. It analyzes signals from their profile and behavior, such as which accounts they follow, what content they engage with and when their account was created, to estimate whether they are likely underage. 

“Whether it’s hate speech, glorification of eating disorders, content that is technically compliant although very suggestive, a young Instagram user can still be exposed to much that his or her parents would find objectionable,” Litwin added. 

Parental Perspective 

Many parents have long struggled to monitor their teens’ online experience. U.K.-based mom Faye McCann is concerned about how the new guidelines will work in practice.  

McCann, also a business strategist and social media expert, told Newsweek there is a big gap between what Meta says it’s offering and what teens will actually see. 

“I can’t help but feel this is partly a reaction to years of public pressure,” McCann said. “Meta has been criticized relentlessly about teen safety, and this feels like a step in the right direction, but it’s not the full solution parents and campaigners have been asking for.

“I fully understand their intentions, but, right now, it feels more like a box-ticking exercise than a deep commitment to genuinely protecting young people.” 

Algorithms vs. Real Life 

Other experts agreed that moderation—not messaging—is the real challenge. 

Miruna Dragomir, the chief marketing officer at Planable, a social-media management platform, said Instagram’s new rating system may make sense to parents, but it doesn’t solve the underlying problem. She added that young users are adept at outsmarting moderation systems. 

“People who use social media, especially youth, are very good at getting around limits by using code phrases, trendy lingo, and visual indicators that AI systems have trouble understanding,” Dragomir told Newsweek. “Every time a policy changes, kids come up with new ways to get around it, and they often know more about how to use the platform than adults do.”  

Dragomir said that these changes could give parents “a false sense of security.” 

“The most honest answer is that these rules are a big step toward making areas safer, but they aren’t the only thing that will work,” she added. “Parents need to be involved in their teens’ online lives on a regular basis instead of just trusting what the site says. The best way to keep teens safe is to use better platform tools and have open family talks on how to think critically and use technology.” 

For parents like McCann, transparency is a priority. “I want clear, simple ways to see what my children are being exposed to and control over that exposure,” she said. “That means tools that actually work, not just guidelines on paper. Instagram can set the rules all it wants, but unless they can make them enforceable in the real world, teens will still find a way around them—and that’s where the real risk lies.” 
