Article

Regulating Social Media: Should We Ban Kids, or Is There a Better Way?

Countries around the world are considering banning children from social media to address growing concerns about mental health and harmful online content. Yet the risks associated with platforms such as TikTok or Instagram stem less from who uses them than from how their engagement-driven algorithms shape attention, interaction, and information flows.
Summary

The debate over banning social media for children is intensifying worldwide. This is due to concerns about the mental health of minors, as well as dangerous online trends such as #SkinnyTok. However, the core of the problem lies not in the age of the users, but in the platforms themselves: their algorithmic recommendation systems deliberately drive engagement and amplify extreme content. Therefore, blanket age limits fall short and merely shift the risks. Consequently, the focus is shifting towards new regulations that hold major platforms more accountable for their design and operation — with the aim of minimising harm while preserving the opportunities offered by social media.
Published on 02.04.2026
Simon Mayer, Luka Bekavac, Alice Palmieri and Aurelia Tamò-Larrieux

Robert, a twelve-year-old, starts watching dance videos on his social media account. Within days, the platform’s recommendation engine begins suggesting extreme dieting tips, and the displayed content may even shift towards self-harm “recovery” communities that subtly normalize destructive behavior. Next door, Sophie, a thirteen-year-old with a passion for coding, begins posting short, accessible tutorials explaining how to build simple apps. At first, only a handful of classmates watch. But one video—“Build Your First Game in 60 Minutes”—catches the algorithm’s attention. A stranger from another country asks a thoughtful question; a professional software engineer casually suggests a more efficient approach. The comment section becomes a space of dialogue rather than passive consumption, boosting Sophie’s self-confidence and motivating her to carry on with her passion.

Journeys such as Robert’s are documented, for instance, as part of the 2025 #SkinnyTok trend on TikTok—a loosely connected stream of videos promoting extreme thinness and restrictive dieting, which went “beyond typical nutrition guidance and instead recommend[ed] dangerous levels of restriction, along with verbal abuse”. #SkinnyTok is not an official category, but emerged through hashtags, trends, and algorithmic clustering. At the same time, experiences such as Sophie’s demonstrate that, rather than becoming a trap, social media may be a launchpad where talent can bypass traditional gatekeepers, and where the community—and the teenagers themselves—may benefit greatly.

Banning Social Media? 

A common response to market failure is regulation, and countries around the world have started to react to the power of platforms. With respect to social media environments, both regulation and academic research struggle to keep pace with developments, which have recently given rise to more radical ideas aimed at rapid remediation, notably banning social media for children. Australia has taken the lead by implementing age-based bans, barring children under 16 (or 14 in some iterations) from social media entirely. Similar bans have been proposed in France, Austria, Germany, and the United Kingdom, as well as elsewhere worldwide; it seems only a matter of time before this discussion flares up in Switzerland.

“A common response to market failure is regulation, and countries around the world have started to react to the power of platforms.”

In Australia, “banning social media” has meant requiring “age-restricted social media platforms” to take reasonable steps to keep children under 16 off their services. The law specifically targets platforms whose main purpose is to let users connect, share content, and interact with one another—such as Instagram, YouTube, or TikTok—and exempts online gaming and messaging apps. This model, which ties platform safety for children to keeping them off platforms, is now spreading quickly worldwide, including across Europe. It is further motivated, as for instance in Italy, by the growing commercial exploitation of “kidfluencers”. The French lower house overwhelmingly approved a bill to ban under-15-year-olds from social media in January 2026. Germany is considering how such a ban could take form, and the UK is debating which age should become the mandatory minimum.

While such a binary approach to regulating social media might appeal to worried parents and politicians seeking a clear stance, it fails to consider the technical and social complexity of “Social Media” platforms. The most basic question, then, is: “What is Social Media? And what does it mean to ban ‘Social Media’ for a specific age group?”

What is Social Media, really? 

To illustrate the wide diversity of views on how ‘Social Media’ might be defined, start at the most basic level: from an infrastructure perspective, social media is a communication platform, much like an e-mail service or the Internet itself. From a communication perspective, social media is a messaging service. And from a participatory Web perspective, social media is an enabler of democratized content creation. All of these perspectives, though, miss a central aspect: the recommendation perspective. Through automated recommendations, social media has changed the very fabric of social interactions, for better and worse, as a growing body of research on the “anxious generation” posits. Specifically, this sheds light on platforms’ aggressive use of content recommendation systems that optimize for engagement as their key performance metric: keeping users on a platform for as long as possible. These systems structure the information environment users encounter and shape patterns of attention, interaction, and amplification at a massive scale.
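To make this concrete, here is a minimal sketch of how an engagement-optimized feed ranker works. All items, names, and scores are hypothetical and not taken from any actual platform; real systems use learned models over far richer signals, but the optimization target is the same: expected time on platform.

```python
# Minimal sketch of an engagement-optimized feed ranker.
# Items and scores are hypothetical, not any platform's actual code.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    p_engage: float             # predicted probability the user engages
    expected_watch_secs: float  # predicted watch time if engaged

def engagement_score(item: Item) -> float:
    # A single optimization target: expected time spent on the item.
    return item.p_engage * item.expected_watch_secs

feed = [
    Item("Beginner dance tutorial", p_engage=0.30, expected_watch_secs=45),
    Item("Extreme dieting 'tips'",  p_engage=0.55, expected_watch_secs=80),
    Item("Local news summary",      p_engage=0.20, expected_watch_secs=30),
]

# Whatever best captures attention rises to the top, regardless of
# whether it is informative, benign, or harmful.
for item in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(item):5.1f}  {item.title}")
```

Under this scoring, the extreme-dieting item outranks the benign ones simply because it is predicted to hold attention longer; the ranker is indifferent to content quality or harm.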

“Content recommendation systems shape patterns of attention, interaction, and amplification at a massive scale.”

Importantly, while much of the current debate focuses on risks to minors, the potential harms associated with aggressive recommender systems are not limited to children. Rather, many of the underlying dynamics are structural properties of contemporary social media architectures. The same design features that may expose children to harmful or addictive content also drive broader societal effects among adult users, including misinformation diffusion, polarization, attention manipulation, and radicalization pathways. This illustrates a fundamental risk of regulation: by banning social media for under-x-year-olds, we implicitly declare that it is “ok” for everyone else. However, engagement-driven recommender systems are an issue not only for children.

Beyond the difficulty of defining what ‘Social Media’ is, framing the problem primarily as one of children’s access risks misdiagnosing it. If the architecture of social media platforms systematically produces harmful dynamics, excluding minors from these environments does not resolve the underlying problem; it merely delays exposure. Once young people reach the permitted age threshold, they enter the very same environments shaped by the very same engagement-maximizing systems.

The more fundamental question, therefore, is whether society should exclude the most vulnerable group from these environments while leaving the underlying architecture unchanged, even though the associated harms continue to affect millions of users daily.

A new era of regulation?

A regulatory approach that targets the structural features producing these harms would not only better protect children but also improve the information environment for all users. To this end, a new era of social media regulation has emerged in Europe and Switzerland. The stated goal is to move toward a framework of systemically oriented accountability.

“By banning social media for under-x-year-olds, we implicitly declare that it is ‘ok’ for everyone else. However, engagement-driven recommender systems are an issue not only for children.”

To raise the accountability standards of social media platforms, the European Union has established the Digital Services Act (DSA). The DSA, as well as current legislative discussions in Switzerland on the Bundesgesetz über Kommunikationsplattformen und Suchmaschinen, proposes a tiered approach: rather than applying a one-size-fits-all rule to every local blog and global giant, the DSA’s focus is on Very Large Online Platforms and Search Engines, so-called VLOPSEs. These are platforms with systemic impact, reaching at least 45 million monthly active users in the EU. Platforms such as TikTok, Instagram, or LinkedIn fall into this category, along with the most prominent search engines and, perhaps soon, ChatGPT.
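A toy illustration of this tiered logic, with obligations scaling with reach. The 45-million threshold is the DSA’s bar for “very large” services; the platform names and user counts below are made up.

```python
# Toy illustration of the DSA's tiered approach: obligations scale
# with reach. Platform names and user counts are hypothetical.

DSA_VLOP_THRESHOLD = 45_000_000  # avg. monthly active users in the EU

platforms = {
    "local-blog.example": 12_000,
    "mid-size-forum.example": 3_500_000,
    "global-video-app.example": 150_000_000,
}

for name, monthly_users in platforms.items():
    if monthly_users >= DSA_VLOP_THRESHOLD:
        tier = "VLOPSE: systemic obligations (risk assessments, audits)"
    else:
        tier = "standard obligations"
    print(f"{name}: {tier}")
```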

Under this regulatory framework, the focus is no longer solely on posts and content, or on banning specific user groups, but rather on the platform’s overall architecture and design. Platforms must assess how their design affects, among other things, the mental health and well-being of minors, the spread of disinformation, and the amplification of illegal content. And regulators are already in the midst of enforcing such provisions, with the European Commission notably taking the lead on investigating large US social media platforms.

The DSA also encourages research into the systemic risks of platforms and mandates data access mechanisms that permit researchers to study platforms and their impact on society. Through these mechanisms and technical tools, a growing community audits platform behaviour; one example is the CoCoDa project, in which the University of St.Gallen—together with colleagues from the University of Lausanne, the University of Maastricht, and the London Open Data Institute—proposes techno-legal tools for mitigating the concentration of control and data in online platforms.

“The problem is not a lack of knowledge on the side of the platforms, but balancing harm reduction against engagement and revenue targets.”

Importantly, the platforms themselves are already aware of problematic dynamics in their ecosystems, and specifically of their effects on young users: evidence from recent litigation in the United States shows that major social media companies closely track how their products affect minors. This evidence convinced a jury there to hold Meta and YouTube accountable for their addictive design practices. Internal documents further reveal that platforms already measure problematic usage patterns, including excessive screen time and exposure to harmful content, often with very high accuracy. The problem is not a lack of knowledge on the platforms’ side, but how harm reduction is balanced against engagement and revenue targets.
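A minimal sketch of what measuring one such pattern, excessive daily screen time, could look like in principle. The threshold and log data are hypothetical; platforms’ actual systems combine many more signals and learned models.

```python
# Minimal sketch of flagging one problematic usage pattern
# (excessive daily screen time) from session logs.
# Threshold and data are hypothetical.

from collections import defaultdict

# (user_id, session_length_minutes) pairs from one day of logs
sessions = [
    ("u1", 25), ("u1", 90), ("u1", 140),
    ("u2", 15), ("u2", 20),
]

DAILY_LIMIT_MINUTES = 180  # hypothetical "excessive use" threshold

daily_minutes = defaultdict(int)
for user_id, minutes in sessions:
    daily_minutes[user_id] += minutes

flagged = {u: m for u, m in daily_minutes.items() if m > DAILY_LIMIT_MINUTES}
print(flagged)  # {'u1': 255}
```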

What we need today, also to feed into the DSA and the Swiss KompG, is clarification of the origin of the harms (and benefits) that are attributed to ‘Social Media’. Such clarity would open up a vast array of regulatory tools that could be wielded without blanket bans. Mapping out this space would yield a nuanced understanding of how the benefits and harms of social media map to specific platform features, and would permit targeted regulation of those features to tip the balance towards societally beneficial social media. Such conceptual clarity would further enable more efficient discussions on the topic, both in civil society and in politics. And it would enable us to rigorously regulate harmful phenomena such as #SkinnyTok while preserving social media’s capacity to connect and empower.

Media tips

Sander van der Linden: Social media bans for teens lack evidence (2026)

Article

In his article published in Nature Health, Sander van der Linden argues that online harms constitute an urgent societal challenge, and that governments must prepare teenagers to use digital technology responsibly.

World Happiness Report 2026

Study

The World Happiness Report is the world’s foremost publication on global wellbeing and how to improve it. It combines wellbeing data from over 140 countries with high-quality analysis by world-leading researchers from a range of academic disciplines. The report shows that life satisfaction is highest at low rates of social media use and lower at higher rates of use, according to data from the Programme for International Student Assessment (PISA) covering seven internet activities for 15-year-old students in 47 countries.

«Alpha Boys» (2025)

Podcast

The four-part SRF podcast Alpha Boys explores how young men become embroiled in the manosphere, a digital community where influential figures such as Andrew Tate amass millions of followers. The series takes an in-depth look at digital communities where the pursuit of self-improvement and strength can lead to misogyny and incitement to violence.