
You Must Be This Old To Scroll: Lessons from Adolescence

Updated: Aug 20

In this article, Malcolm reflects on lessons from the Netflix series 'Adolescence'. He highlights how social media algorithms make it easy for youths to access age-inappropriate content, and discusses "teen accounts", or age-limited accounts, as a possible solution. Pornography is harmful, and it is part of what Jamie, the teenage protagonist of Netflix’s 'Adolescence', viewed on his Instagram feed; as the show tells it, this content helped motivate Jamie to kill his classmate.


Pictured: Screengrab of Jamie interacting with his therapist in 'Adolescence'

Why are teens like Jamie shown inappropriate content on social media? The answer lies in content moderation. Put another way, the promotion, demotion, and labelling of content can end up surfacing age-inappropriate content. Unfortunately, teenagers who view this content may come to believe its messages, and teens holding these newfound beliefs may then treat others harmfully. But there is a possible solution: Teen Accounts can minimise age-inappropriate content.


How Algorithms Curate Your Feed

Algorithms and human moderators decide what social media content we see. But both forms of content moderation may promote, or leave up, age-inappropriate content on our feeds.


Algorithms aim to capture our attention. They do so by promoting content we like, via a three-step model.


Firstly, the Inventory stage.

Here, algorithms collect a pool of content that you might be interested in. This pool comes from three sources:

  1. Directly connected content. This is content from accounts and posts that you follow. In Jamie’s case, directly connected content would come from accounts he has followed.

  2. Indirectly connected content. This is content from accounts and posts that your friends follow. In Jamie’s case, indirectly connected content arises from accounts his friends follow.

  3. Unconnected content. This is content that neither you nor your friends have followed before. This content is included because it is fresh, and you might like it. In Jamie’s case, unconnected content comes from public content creators.

Secondly, the Candidate Ranking stage.

Here, algorithms order the content from the previously collected pool. Algorithms rank content based on several factors. Different social media platforms prioritise different factors. The factors are as follows:

  • Usage intensity. This is the level of user interaction the content demands. The higher the level of interaction, the shorter the time spent on the platform, and vice versa. TikTok, with its short-form videos, demands little user interaction, yet users are encouraged to stay longer; Reddit, with its elaborate comment sections, demands more user interaction, yet users stay for shorter stretches.

  • Specificity. This is how niche the content is. YouTube recommends videos based on the specific channels you subscribe to, whereas X may recommend broader posts that many people have liked.

  • Novelty. This is how unpredictable the content is. Content is predictable if you have seen or liked it before. TikTok shows you plenty of short videos you have not seen before, whereas Facebook seems to show only posts from groups and people you follow.

  • Content timeliness. This is the preference for newer content over older content. Reddit surfaces newly written posts, whereas YouTube may still recommend decades-old videos.

Finally, the Feed Assembly stage.

Here, algorithms display the assembled feed to users, with content ordered according to the rankings from the Candidate Ranking stage. A simplified sketch of all three stages is shown below.
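
To make the three stages concrete, here is a minimal sketch in Python. The source labels, factor names, and weights are hypothetical illustrations for this article, not any platform's actual implementation.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    source: str          # "direct", "indirect", or "unconnected"
    engagement: float    # hypothetical usage-intensity signal, 0 to 1
    niche_match: float   # hypothetical specificity signal, 0 to 1
    seen_before: bool    # used to derive novelty
    age_hours: float     # used to derive content timeliness

# Hypothetical weights; each platform would prioritise the factors differently.
WEIGHTS = {"intensity": 0.4, "specificity": 0.2, "novelty": 0.3, "timeliness": 0.1}

def inventory(direct, indirect, unconnected):
    """Inventory stage: pool content from the three sources."""
    return direct + indirect + unconnected

def score(post):
    """Candidate Ranking stage: combine the four factors into one score."""
    novelty = 0.0 if post.seen_before else 1.0
    timeliness = 1.0 / (1.0 + post.age_hours)  # newer content scores higher
    return (WEIGHTS["intensity"] * post.engagement
            + WEIGHTS["specificity"] * post.niche_match
            + WEIGHTS["novelty"] * novelty
            + WEIGHTS["timeliness"] * timeliness)

def assemble_feed(pool, limit=20):
    """Feed Assembly stage: order the pool by score and show the top posts."""
    return sorted(pool, key=score, reverse=True)[:limit]

In this toy version, a feed is just the pooled content sorted by a weighted score; real systems are far more elaborate, but the three-stage shape is the same.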


How Do We End Up Seeing Inappropriate Content?

At the Inventory stage, algorithms could select age-inappropriate content. The pool of content which algorithms gather includes unconnected content. For instance, a teen searching for Taylor Swift on TikTok might find suicide-related content instead. Worryingly, unconnected content comprises any content that the user or their friends have not seen before, and that includes age-inappropriate content.


At the Candidate Ranking stage, algorithms could promote age-inappropriate content. At this stage, algorithms may prioritise novelty and usage intensity; put another way, they promote unpredictable and provocative content to attract user interaction. Age-inappropriate content, like sexual content, is both unpredictable and provocative, so it could be ranked higher at this stage. Presumably, human moderation could correct these issues.

Human Moderation: Is It An Adequate Solution?

Some platforms rely on human moderators to promote and demote content. Human moderators review content against community rules, but each moderator has different skills and opinions, and so they apply the community guidelines differently.


Take Reddit for example. Reddit relies on human moderators for individual forums. However, some Reddit moderators may not be skilled enough to identify inappropriate content. And even if they were, moderators might not view inappropriate content as inappropriate, because there is no universal threshold for what is definitely age-inappropriate.


Uneven human moderation is not a platform-specific problem. In 2018, some comments in Burmese that incited violence were left up on Facebook; at the time, Facebook appeared not to have enough Burmese-speaking moderators to identify harmful content written in the language.


Can Teen Accounts Minimise Age-Inappropriate Content Then?

Between September 2024 and April 2025, 54 million Teen accounts were created on Instagram alone. Social media platforms clearly see the safety value, which is why they are scaling up the number of Teen accounts.


Teen account content settings are set at “the strictest setting”, but what this means in practice is currently unclear. If teen accounts rely on algorithmic mechanisms, teens may receive less unconnected content at the Inventory stage, and the algorithm could demote intense and unpredictable content at the Candidate Ranking stage. If teen accounts rely on human moderation, human moderators will review content against stricter rules. Stricter rules are clearer and broader in scope; compare the stricter “no violent content” with “no graphically violent content”. Consequently, community rules are applied less leniently, more age-inappropriate content is removed, and teens see more of the safe content they are used to. A sketch of how such restrictions might slot into the earlier pipeline is shown below.
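
Building on the earlier pipeline sketch, here is one hypothetical way teen restrictions could be layered onto the Inventory and Candidate Ranking stages. The demotion threshold and factor are made up for illustration; Instagram has not published how its teen settings actually work.

def teen_inventory(direct, indirect, unconnected, is_teen_account):
    """Inventory stage with a teen restriction: withhold unconnected content."""
    pool = direct + indirect
    if not is_teen_account:
        pool = pool + unconnected
    return pool

def teen_score(post, is_teen_account):
    """Candidate Ranking with a teen restriction: demote intense or novel content."""
    base = score(post)  # reuses score() from the earlier sketch
    if is_teen_account and (post.engagement > 0.8 or not post.seen_before):
        base *= 0.5      # hypothetical demotion factor
    return base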


Importantly, teen account content settings cannot be bypassed by teens. Teen accounts identify teens using age assurance, which involves age verification and age estimation. Age verification involves the platform scanning an official ID, like your NRIC, while age estimation involves the platform or a vendor scanning your selfie. The scanner compares your facial features with the age-specific facial features it has seen before, and then estimates your age from that comparison.

Teen account content settings protect teens aged 13 to 17. If age assurance estimates that the user is under 16, the user cannot change the content settings. Moreover, if age assurance estimates that the user is 16 to 17, parents may continue to regulate the teen's account content settings. Either way, the “strictest” content settings continue to apply, and hopefully, teens continue to see less age-inappropriate content.
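
That age-based policy can be summarised in a short sketch. The thresholds follow the paragraph above; the function name and return labels are hypothetical.

def content_settings_policy(estimated_age):
    """Hypothetical mapping from an age-assurance estimate to account settings."""
    if estimated_age < 16:
        # Under 16: strictest settings, and the teen cannot change them.
        return "strictest settings, locked"
    if estimated_age <= 17:
        # 16 to 17: strictest settings by default; changes go through a parent.
        return "strictest settings, parent-supervised changes only"
    # 18 and over: a standard account with user-controlled settings.
    return "standard adult settings"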


Current content moderation has quite a way to go, and age-inappropriate content continues to plague young people's feeds. But new solutions, like age assurance, could minimise age-inappropriate content on our youths’ feeds and make the Internet a safer, kinder and healthier place to be.
