Instagram announced Thursday that it will begin notifying parents when their teens repeatedly search for terms explicitly associated with suicide or self-harm, the Baltimore Chronicle reports, citing The Associated Press. The alerts will be sent only to parents enrolled in Instagram’s parental supervision program, allowing them to monitor and support their teen’s mental health more effectively.
The platform already prevents such content from appearing in teen search results and directs users to crisis helplines instead. Meta, Instagram’s parent company, emphasized that this measure is part of a broader effort to provide tools for parents without unnecessarily alarming them.
The announcement comes amid ongoing legal proceedings against Meta. In Los Angeles, a trial is examining whether Meta’s social media platforms intentionally addict minors and harm their well-being. Another trial, in New Mexico, is assessing whether the company failed to protect children from sexual exploitation on its platforms. Thousands of families, school districts, and government organizations have sued Meta and other social media companies, alleging that their platforms are designed to be addictive and fail to adequately shield children from content linked to depression, eating disorders, and suicide.
Meta executives, including CEO Mark Zuckerberg, have consistently denied that their platforms cause addiction. In testimony during the Los Angeles case, Zuckerberg reiterated that current scientific research has not proven a direct causal link between social media use and mental health harm.
Notifications will be delivered to parents via email, text message, or WhatsApp, depending on the contact information they have provided, as well as through an alert in the parent’s Instagram account.
“Our aim is to empower parents to intervene when their teen’s searches indicate potential need for support. At the same time, we are cautious not to overuse notifications, which could reduce their overall effectiveness,” Meta stated in a blog post.
In addition, Meta is developing similar alerts related to teens’ interactions with artificial intelligence on the platform. “These notifications will inform parents if a teen engages in certain AI conversations concerning suicide or self-harm,” Meta explained. “This work is ongoing, and we plan to share further updates in the coming months.”