Instagram reinforces privacy and parental control with “teen accounts”

Meta-owned Instagram has introduced “teen accounts” as it seeks to reinforce its privacy protection and parental control capabilities for younger users.

The new experience will give accounts opened by teenagers built-in protections that limit who can contact them and the type of content they see, while providing new ways for younger users to explore content that interests them.

These restrictions will be applied to teen users automatically, and those under the age of 16 will need a parent’s permission to change any of the settings.

The new framework aims to give parents greater support and “peace of mind” that their teens are safe with the right protections in place, Instagram stated.

Teens will get access to a range of features, including a sensitive content control designed to guide them towards topics they are interested in, with recommendations for positive content they might want to explore.

It will also limit the type of sensitive content teens see in areas such as Explore and Reels, for example content that shows people fighting or promotes cosmetic procedures.

The new capabilities will also place teens under the strictest messaging settings, so they can only be messaged by people they follow or are already connected to, while anti-bullying and Hidden Words features will filter offensive language from comments and DM requests.

Other settings will send teens notifications prompting them to leave the app after 60 minutes of use each day, while a sleep mode will mute notifications received after 10pm.

The new “teen accounts” are being introduced from Tuesday in the UK, US, Canada, and Australia, as social media firms face increasing pressure to make their platforms safer amid concerns that young people are not sufficiently protected from harmful online content.

UK children’s charity the NSPCC said Instagram’s announcement represents a “step in the right direction”, but added that account settings can “put the emphasis on children and parents needing to keep themselves safe”.

NSPCC’s online child safety policy manager Rani Govender highlighted that proactive measures are also needed to prevent harmful content and sexual abuse from proliferating on Instagram in the first place.

In 2022, Instagram introduced Family Centre, a supervision hub that lets parents oversee certain activities and includes an education hub with tips on discussing social media with teens. The company has also previously said it offers more than 50 tools aimed at protecting children from online threats.

Last year, social media platform Snapchat rolled out new parental Content Controls to help prevent minors from being exposed to potentially inappropriate content while using the app.

The feature allowed parents and guardians to access new content filtering capabilities through Snapchat’s Family Centre supervision tool to block “sensitive or suggestive” content from appearing on their child’s Snapchat Stories or Spotlight feed.


