“Better design instead of blanket bans”

April 2, 2026 | Technical University of Munich (TUM)


US courts have ruled against platform providers for failing to protect children, and the debate over age restrictions for social media has gained momentum. An international group of experts from academia, children’s rights organizations and non-profit institutions is convinced that bans would be the wrong approach. In the journal Science they advocate for new strategies for the digital safety of children and youths aged 13 and older. Prof. Sandra Cortesi and Prof. Urs Gasser from the Technical University of Munich (TUM) explain when artificial intelligence could intervene on smartphones, what role peer groups can play and why children should be involved in shaping their digital education.

In the US, Meta and Google were ordered to pay substantial fines just a few days ago for failing to adequately protect children and youths on their social media and video platforms, respectively. What significance do these rulings hold in light of your working group’s findings?

Urs Gasser: These rulings could mark a turning point because they underscore that child safety in the digital world is not simply a matter of harmful content, but also a matter of platform design. The courts have examined how platforms are built, what kinds of risks their features generate and whether companies can be held responsible when those risks are foreseeable and insufficiently addressed. These questions strike at the heart of our working group’s recommendations: designing digital spaces to ensure the safety, agency and well-being of children and youths from the outset. In the context of the cases heard in the US, this means excluding features that can be addictive and providing protection against abuse by adults.

Several countries have banned social media for children under a certain age or are planning to do so. Why are you opposed to a ban?

Urs Gasser: Our argument is not against regulation. Legal requirements are indispensable. However, we believe that policymakers should do more than just establish red lines. Rather, they should require providers to design their platforms and products in a child-friendly manner. That is more demanding than a blanket ban, but also more promising. After all, what we really want is for children and youths to be able to learn how to use media autonomously and in a way that has a positive impact on them.

The working group proposes using AI to make the platforms safer.

Sandra Cortesi: In addition to banning clearly harmful features, new tools can empower older children and youths to act autonomously within an age-appropriate framework. Artificial intelligence can detect when adolescents are at risk and intervene. For example, AI could say: “I see that you’ve been looking at a lot of posts about weight loss lately. I see that you’re interacting with three people who support that. I’d like to recommend three posts with a different perspective.” AI could also recognize that a teenager wants to take a selfie showing a lot of bare skin and ask: “Are you sure you want to take this selfie? Think about what you want to do with it.” Similarly, if a child is contacted by someone who usually interacts only with adults, AI can infer that the person is an adult and display a corresponding warning.
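To make the mechanism concrete, here is a minimal sketch in Python of how such an on-device nudge might work. The topic label, threshold and function names are hypothetical (they are not from the Science paper), and a real system would use a proper on-device classifier rather than keyword matching:

```python
from collections import Counter

# Illustrative values only; a real system would tune these per risk area.
SENSITIVE_TOPIC = "weight_loss"
EXPOSURE_THRESHOLD = 10  # assumed number of exposures before a nudge appears

def classify_topic(post_text: str) -> str:
    """Stand-in for an on-device classifier; here a naive keyword match."""
    return SENSITIVE_TOPIC if "weight loss" in post_text.lower() else "other"

def check_feed(recent_posts: list[str]) -> str | None:
    """Return a nudge message if exposure crosses the threshold, else None.

    Everything runs locally on the device; nothing is sent to the platform.
    """
    counts = Counter(classify_topic(p) for p in recent_posts)
    if counts[SENSITIVE_TOPIC] >= EXPOSURE_THRESHOLD:
        return ("I see that you've been looking at a lot of posts about "
                "weight loss lately. Would you like to see a few posts "
                "with a different perspective?")
    return None
```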

It sounds as though the providers would learn a great deal of personal information this way.

Sandra Cortesi: Such analyses must take place exclusively on the devices themselves and must not be transmitted to the operators. But even if privacy is guaranteed, it would be ideal if families sat down together to consider: What kind of media consumption do we want? What is, so to speak, our diet plan for the digital world? The device or platform would then show all the options that can be enabled or disabled to achieve that goal. For example, I mainly want to see positive content. If the AI notices that I’m straying from that path, it supports me. Older teens might decide: “I want to have my own experiences for three months and don’t want the AI watching me or telling me anything.” At the family level, we also consider bans less effective than this kind of discussion: on the one hand, because it strengthens trust and self-efficacy; on the other, because many children and youths already know how to bypass the restrictions their parents have set on their smartphones. It is clear, however, that not all families have the time or expertise for these considerations, which is why protective default settings are very important.
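The “diet plan” Cortesi describes could be pictured as a local settings object with protective defaults that a family discussion, or an older teen, can adjust. A minimal sketch, with invented field names purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class MediaPlan:
    """Hypothetical family 'digital diet plan', enforced locally on the device."""
    prefer_positive_content: bool = True  # protective default setting
    nudge_on_drift: bool = True           # AI may remind me if I stray from the plan
    ai_monitoring_enabled: bool = True    # older teens may opt out entirely

# An older teen choosing to go without AI monitoring for three months:
teen_plan = MediaPlan(ai_monitoring_enabled=False)
```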

Even the best settings and legal requirements likely won’t completely prevent young people from encountering disturbing content or digital violence. What should be done in such cases?

Urs Gasser: Research shows that it is important for older children and youths to be able to report such content and incidents anonymously and receive immediate support. In many cases, they feel ashamed and guilty. Therefore, it is important that such reports do not go unnoticed for weeks, but rather that understanding is shown and help is offered immediately. Ideally, other young people would say: “I understand you. I’ve been through this too.” Some countries already have support services where trained young people, with professional support, serve as contact persons. Such services should become standard.

The working group also suggests that it shouldn’t just be companies that involve children and youths in the design process. Schools should also place greater emphasis on participation.

Sandra Cortesi: Many young people say they don’t feel comfortable or happy. They see a future full of dangers and feel they have no control over their own lives. By involving children and youths, schools have a tremendous opportunity, on the one hand, to show them a future in which the digital world isn’t just full of a thousand risks, and on the other hand, to empower them with a sense of self-efficacy. The message wouldn’t be: “We’ll show you how the digital world works.” Rather, it would be: “We as schools have a lot to learn from you, because we may not know all the tools, but you know exactly how to use them. As adults, we also have important contributions to make, such as our social values and experience. Let’s create learning content together.” This would go a long way toward ensuring the digital safety of children and young people.

About the interviewees:
Sandra Cortesi is a professor of Participation and Diversity in Digital Societies at the Technical University of Munich (TUM). She heads the new Youth and Media Lab at the TUM Think Tank. Previously, Cortesi conducted research at the Berkman Klein Center for Internet & Society at Harvard University, where she remains a Faculty Associate. She is also a Senior Research and Teaching Associate at the University of Zurich and holds a PhD in psychology.

Urs Gasser is a professor of Public Policy, Governance and Innovative Technology at the Technical University of Munich (TUM). He is Dean of the TUM School of Social Sciences and Technology as well as Rector of the Munich School of Politics and Public Policy (HfP) at TUM. Previously, he was Executive Director of the Berkman Klein Center for Internet & Society at Harvard University and a professor at Harvard Law School.

Further information:
The Frontiers in Digital Child Safety project brought together more than 40 experts from academia, children’s rights organizations and non-profit institutions in the fields of social sciences, technology, design, psychology and law. The project was funded by Apple Inc. The project group was coordinated at the TUM Think Tank by Prof. Sandra Cortesi and Prof. Urs Gasser in collaboration with researchers from Harvard University and the University of Zurich. The TUM Think Tank brings together academia, civil society, politics and business to jointly develop solutions and tools for pressing societal problems.

Article Information:
Journal: Science
DOI: 10.1126/science.aec7804
Article type: Commentary/editorial
Article title: Digital child safety at the frontier: From evidence to action
Publication date: 2-Apr-2026
Competing interests: The authors declare no competing interests.

Contact Information:

Klaus Becker
Technical University of Munich (TUM)
becker@zv.tum.de

How to Cite This Article:

APA:
Technical University of Munich (TUM). (2026, April 2). “Better design instead of blanket bans”. Brightsurf News. https://www.brightsurf.com/news/147P2VN1/better-design-instead-of-blanket-bans.html
MLA:
“‘Better design instead of blanket bans.’” Brightsurf News, 2 Apr. 2026, https://www.brightsurf.com/news/147P2VN1/better-design-instead-of-blanket-bans.html.