European lawmakers are urging stronger measures to protect children from the physical, mental, and social risks posed by online platforms. In a plenary vote on Wednesday, Members of the European Parliament (MEPs) adopted a non-legislative report by 483 votes to 92, with 86 abstentions, calling for an EU-wide minimum age of 16 for access to social media, video-sharing platforms, and AI companions.
The report reflects growing concern over the digital habits of minors, noting that 25% exhibit “problematic” smartphone use. Research cited in the report shows that 97% of young people go online daily, with 78% of 13- to 17-year-olds checking their devices at least hourly. The Eurobarometer 2025 survey found that more than 90% of Europeans support urgent action to protect children online, pointing to mental health risks, cyberbullying, and exposure to inappropriate content.
Protecting Children Through Age Verification and Platform Accountability
To help parents manage children’s online activity, MEPs propose that minors aged 13 to 16 be allowed to access these services only with parental consent. Lawmakers also endorsed the European Commission’s initiative to develop an EU age verification app and the European digital identity (eID) wallet, emphasizing that these systems must ensure both accuracy and privacy.
MEPs stressed that age verification will not exempt platforms from designing their products to be safe and age-appropriate. Senior platform managers could face personal liability for serious or persistent breaches of EU rules, particularly in relation to the Digital Services Act (DSA).
Bans on Harmful Features and AI Risks
The report urges stronger action against addictive online practices. Measures include banning engagement-based recommendation systems, loot boxes, infinite scrolling, autoplay, pull-to-refresh, and other manipulative “gamification” features for minors. Lawmakers also call for restrictions on targeted advertising, influencer marketing aimed at children, and commercial incentives for “kidfluencing.”
The Parliament highlighted urgent concerns over generative AI tools, including deepfakes, AI companionship chatbots, and AI nudity apps that create non-consensual manipulated images. MEPs urged clear regulations to prevent exploitation and harm.
A Unified Message from the EU
Rapporteur Christel Schaldemose (S&D, Denmark) said:
"I am proud of this parliament, that we can stand together in protecting minors online. Together with strong, consistent enforcement of the Digital Services Act, these measures will dramatically raise the level of protection for children. We are finally drawing a line. We are saying clearly to platforms: your services are not designed for children. And the experiment ends here."
With 97% of young Europeans online daily and one in four showing signs of problematic smartphone use, the Parliament’s proposals signal a landmark step toward safeguarding children’s wellbeing in the digital age. Member states are already beginning to implement measures, including age verification systems and stricter access limits, in line with the EU’s ambition to create a safer online environment for minors.