MALAYSIA is considering banning social media for children under 16, following a series of incidents involving children harming other children in schools.
Although the causes of such violence are undetermined and varied, the proposal is motivated by genuine concern about social media.
But will a ban protect children from the deeper, structural issues with social media? The reality of digital life is far more complex.
In Australia, for instance, platforms will soon be required to take “reasonable steps” to prevent under-16s from creating accounts.
But before the measures even take effect, there is already talk of young people bypassing age checks using virtual private networks, ‘old man’ masks or even parental assistance.
Verifying users' ages, whether through facial recognition or ID checks, raises privacy concerns for everyone, not just children.
And excluding children entirely may deepen social isolation in a world where social interaction increasingly happens online.
ID verification may also disenfranchise marginalised groups — stateless persons, for example, may lack IDs required to access these sites.
Even if bans were enforced perfectly, what happens when a child turns 16?
The tragic case of a 16-year-old Malaysian girl, who died by suicide after posting an Instagram poll about whether she should live or die, shows that online harm does not disappear with age.
A ban also risks creating a false sense of security, convincing us that children are safe simply because they are kept away, while the platforms themselves remain unsafe.
The underlying problem is that platforms’ business models thrive on engagement and attention, even when that means amplifying harmful or addictive content.
Instead of keeping children out, perhaps the better question is: can we make platforms safe by design?
Brazil appears to be attempting this with its new Digital ECA law, which applies to digital products and services “targeted at or likely to be accessed by minors”.
It requires the accounts of those under 16 to be linked to a parent or guardian, and mandates that platforms build in parental supervision tools, such as the ability to set time limits and restrict purchases.
Both the Brazilian and European regulations prohibit platforms from profiling minors to serve targeted ads.
In the EU, children’s accounts are private by default and cannot be publicly recommended. This responds to past abuses where predators exploited “recommended friend” algorithms to find children.
Alongside its proposed age restrictions, Australia has plans to introduce a Digital Duty of Care, requiring platforms to proactively prevent harm rather than simply react after it occurs.
These laws are still new and their efficacy will depend heavily on accompanying regulations and enforcement, but they are similar in that they attempt to regulate “upstream” features relating to platform design.
In Malaysia, however, conversations still mostly centre on downstream measures: ordering takedowns, prosecuting harmful posts, or now, proposing bans.
These steps focus on control after harm has occurred, or on keeping children away, without fixing the structural problems that allow harm to persist.
Large tech companies, especially social media platforms, have largely escaped legal oversight in Malaysia and much of Southeast Asia, despite their role in facilitating well-documented harms. This is due to several reasons:
Social media platforms are perceived as too large, complex and essential to regulate.
Platforms switch roles as it suits them — publishers when moderating content, “innocent carriers” when trying to avoid accountability.
Harms are not confined to social media platforms. They appear on gaming platforms, live-streaming sites and increasingly, in artificial intelligence-powered chatbots.
Most of the harms, however, are not new. False advertising, impersonation, gambling, fraud and misinformation have existed long before Facebook or TikTok. Miracle cures, for example, have existed in many forms — from 19th century “snake oil” cures to today’s AI hallucinations providing harmful medical advice.
Regulatory frameworks have been built over the years to protect society from such harms.
In Malaysia, this includes consumer protection laws, financial regulation, accreditation of professionals such as doctors, intellectual property protection via agencies like MyIPO, and the Penal Code for threats and incitement of violence.
Regulating giant tech platforms as a whole is daunting. But what if Malaysia reviewed its current regulatory framework, covering consumer protection, advertising standards and child protection laws, and updated it to address contemporary harms?
For example, ads targeting children under 12 could be banned across all media including streaming, gaming and social media platforms.
This would be akin to California laws disallowing kids’ meal toys linked to unhealthy food. In any event, many social media platforms already require users to be at least 13.
If social media platforms cannot guarantee that such ads won’t reach children, their services could be classified as 18+ by default.
To lift that rating, they would need to show concrete measures to prevent child-targeted advertising, with penalties for non-compliance.
Updating the regulatory framework may be challenging, but it’s certainly not unprecedented.
Malaysia reformed its laws to prepare for the Internet era and again for the digital transformation era.
There’s no reason it can’t do the same, especially when the safety and well-being of children is involved.
Ding Jo-Ann is with a global non-profit working on the impact of technology on society. Khairil Yusof is a coordinator at Sinar Project, an organisation promoting transparency and open data in Southeast Asia
The views expressed in this article are the authors' own and do not necessarily reflect those of the New Straits Times
© New Straits Times Press (M) Bhd






