{"id":702,"date":"2025-09-26T18:33:48","date_gmt":"2025-09-26T18:33:48","guid":{"rendered":"https:\/\/globaltalentholding.com\/?p=702"},"modified":"2025-09-30T12:32:26","modified_gmt":"2025-09-30T12:32:26","slug":"instagram-design-still-makes-it-unsafe-for-teens-report","status":"publish","type":"post","link":"https:\/\/globaltalentholding.com\/index.php\/2025\/09\/26\/instagram-design-still-makes-it-unsafe-for-teens-report\/","title":{"rendered":"Instagram design still makes it unsafe for teens: Report"},"content":{"rendered":"
A new report from a Meta whistleblower and university researchers found that many of Instagram's teen safety features, introduced over the years in the face of congressional and public pressure, fall short of their aims and fail to protect younger users.

Nearly two-thirds of the social media platform's safety features were deemed ineffective or were no longer available, according to the report from former Facebook engineering director Arturo Béjar and Cybersecurity for Democracy, a research project from New York University and Northeastern University.

"Meta's new safety measures are woefully ineffective," Ian Russell and Maurine Molak, who lost children following cyberbullying and exposure to depression- and suicide-related content on Instagram, wrote in a foreword to the report.

Russell's Molly Rose Foundation and Molak's ParentsSOS were also participants in the report, alongside the kids' safety group Fairplay.

"If anyone still believes that the company will willingly change on its own and prioritize youth well-being over engagement and profits, we hope this report will put that to rest once and for all," they added.

Of the 47 safety features researchers tested, 64 percent received a "red" rating, signifying that a feature was no longer available or was "trivially easy to circumvent or evade with less than three minutes of effort."

These included features such as filtering for certain keywords or offensive content in comments, warnings on captions or chats, certain blocking capabilities and messaging restrictions between adults and teens.

Another 19 percent of Instagram's safety features received a "yellow" rating from researchers, who found that they reduced harm but still had limitations.

For instance, the report rated a feature that lets users swipe to delete inappropriate comments as "yellow" because offending accounts can simply keep commenting and users cannot provide a reason for their decision.

Parental supervision tools that can restrict teens' use or provide information about when a child makes a report were also placed in this middle category because they were unlikely to be used by many parents.

Researchers gave the remaining 17 percent of safety tools a "green" rating, including the ability to turn off comments, restrictions on who can tag or mention teens, and tools prompting parents to approve or deny changes to the default settings of their kids' accounts.

Meta, the parent company of Instagram and Facebook, called the report "misleading and dangerously speculative" and suggested it undermines the conversation around teen safety.

"This report repeatedly misrepresents our efforts to empower parents and protect teens, misstating how our safety tools work and how millions of parents and teens are using them today," the company said in a statement shared with The Hill.

"The reality is teens who were placed into these protections saw less sensitive content, experienced less unwanted contact, and spent less time on Instagram at night," the company continued. "Parents also have robust tools at their fingertips, from limiting usage to monitoring interactions. We'll continue improving our tools, and we welcome constructive feedback – but this report is not that."

Meta voiced concerns about how the report assessed the company's safety updates, noting that several of the tools worked as designed but received a yellow rating because researchers suggested they should go further.

A blocking feature received a "yellow" rating, even though the report acknowledged that it works, because users cannot provide a reason why they wish to block an account, which researchers noted would be "an invaluable signal to detect malicious accounts."

Béjar, the former Facebook employee who participated in the report, testified before Congress in 2023, accusing the company's top executives of dismissing warnings about teens experiencing unwanted sexual advances and bullying on Instagram.

His testimony came two years after Facebook whistleblower Frances Haugen came forward with allegations that the company was aware of the negative mental health impacts of its products on young girls.