Age-gating the internet: Protecting kids or invading privacy? - Ivo Vegter

Governments push age verification laws, but at what cost to privacy and freedom?

Key topics:

  • Countries push age checks online, risking privacy and inconsistent rules

  • Australia bans under-16s from social media; enforcement left to platforms

  • Experts warn age checks may harm privacy more than protect children

Many countries have enacted, or will soon enact, some sort of age verification requirement online. Doing so is a privacy nightmare, however.

A growing number of countries are implementing age verification requirements on at least some internet content. The requirements vary widely, but all pose significant technical challenges and risks to privacy and online anonymity.

The justification, as always, is to “think of the children”. The notion that parents are both responsible for, and perfectly capable of, monitoring and guiding children’s online activities is foreign to both lawmakers and the general public. They want a top-down legislated solution that keeps children away from content they deem sensitive.

What exactly constitutes “sensitive” varies from place to place. Some locations want age verification applied only to pornography, while others are making it a requirement for major social media platforms, too.

Australia and New Zealand

Late in 2024, Australia passed a world-first social media ban for children under 16, and gave major social media platforms a year to come up with a way to enforce the ban before attracting massive fines. The ban attracted widespread public support and was passed with large majorities in both houses of the Australian legislature.

Nobody knows how such a ban would be implemented, however. Somehow, parents will have to verify their children’s age, but how parents will prove their own bona fides is an exercise left up to the social media companies. The full onus for protecting children will be placed on those companies; parents or children who evade the ban will not be penalised.

The ban will apply to most large social media platforms, including Facebook, Instagram, Reddit, Snapchat, TikTok, and X. Services used for health care and education, such as Messenger Kids, WhatsApp, YouTube, Kids Helpline, and Google Classroom, will be exempt from the law.

Yes, you read that right. YouTube – which hosts the Andrew Tates of the world, along with a flood of misogynists and racists, body image influencers, dangerous health misinformation sources, political disinformation campaigns, and crank conspiracists – will be exempt.

So will lesser-known social media platforms, which is inevitably where children will turn for online social interaction.

New Zealand is likely to follow Australia’s lead.

UK, EU, and France

The UK passed a similar (and similarly vague) law, the Online Safety Act, which will come into effect later this month. It’s up to the social media companies to figure out how to implement age restrictions.

Bluesky, a microblogging platform that seeks to rival X, is implementing age verification in the UK by requiring either a facial scan, which an artificial intelligence will assess for apparent age, a state-issued ID, or a valid credit card.

In the EU, the Digital Services Act and the Audiovisual Media Services Directive both require platforms to keep minors away from harmful content. The former specifically applies to “very large online platforms” and “very large online search engines”, which each get an acronym to be better understood by bureaucrats: VLOP and VLOSE, respectively. The threshold to be considered “very large” is to have more than 45 million active users in the EU. Smaller platforms will still be permitted to welcome children with open arms.

The EU also plans to implement a “digital wallet” that will serve as an age verification agent.

In France, lawmakers have implemented robust age verification requirements on all sites that offer pornographic content. The law requires age verification to be offered by a third-party service provider that ensures “double anonymity” – that is, neither the website nor the age verification service should know both the user’s identity and the websites they visit.
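To make the “double anonymity” idea concrete, here is a rough sketch in Python of how such a split could work: the verification service sees the user’s identity document but never the destination site, and issues a token that carries nothing except an over-18 claim; the site checks the token but learns nothing about who presented it. The names and the shared-key shortcut below are my own simplifications, not the French scheme itself; real deployments would rely on blind signatures or zero-knowledge proofs so that the token cannot be linked back to the user even if verifier and site compare notes.

```python
# Minimal, purely illustrative sketch of a "double anonymity" age check.
# Assumption: a hypothetical third-party verifier checks the user's ID
# off-line, then issues a bearer token that says nothing but "over 18".
# A real scheme would use blind signatures or zero-knowledge proofs, not
# the shared HMAC key used here to keep the sketch short.
import hmac, hashlib, json, secrets

VERIFIER_KEY = secrets.token_bytes(32)  # held by the age-verification service


def issue_age_token(user_is_over_18: bool) -> dict | None:
    """Verifier side: sees the user's ID document, never the site name."""
    if not user_is_over_18:
        return None
    claim = {"over_18": True, "nonce": secrets.token_hex(16)}  # no identity fields
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim,
            "mac": hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()}


def site_accepts(token: dict) -> bool:
    """Site side: checks the token is genuine, but learns no identity.
    (In practice the site would hold a public verification key rather
    than the issuing secret.)"""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["mac"]) and token["claim"].get("over_18") is True


token = issue_age_token(user_is_over_18=True)
print(site_accepts(token))  # True: age proven, identity and site kept apart
```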

Canada and the US

A proposed measure in Canada is still in legislative limbo, but it would make offering sexually explicit material for commercial purposes (as opposed to legitimate educational, artistic, or scientific uses) a criminal offence unless the provider implements an age verification system.

In the US, several states have tabled varying and inconsistent pieces of legislation that would impose age verification requirements on porn sites, social media platforms, or both.

A Texas law requires age verification for sites on which more than a third of the content is pornographic, so interspersing your porn with enough pictures of cats and puppies will get you off the hook.

Read more:

Social Media's 'big tobacco' moment: why parents are suing to save their kids

In Tennessee, under-18s can access porn with parental consent. I’d love to be a fly on the wall for those conversations. “Hey, dad, would you please enter your Social Security number here on nastyanal.com?”

Africa

In Kenya, draft rules have been tabled to use national biometric identification cards for age verification online.

And in South Africa, the Internet Service Providers Association has, inexplicably, called for “local debate around the growing global issue of age verification on the internet”. Given South Africa’s experience with other technology-related legislation, any government action on age verification will be a confused mess.

Principle vs practice

In principle, it seems reasonable to want to gate content that is not age-appropriate. In practice, however, the attempts to do so create a lot more problems than they solve.

For a start, “age-appropriate” varies from one child to the next. Traditionally, violence has been deemed perfectly fine for younger teens to consume, while any hint of sexuality is frowned upon.

Some parents would wish that the bias lay in the other direction: that sex should not be pathologised, repressed, and made a matter of shame, while graphic and gratuitous violence should be a little less ubiquitous and be treated with moral disapproval.

Many supporters of age verification base their arguments on tearful parents who blame social media for their children’s suicide after online bullying or sextortion. Yet the suicide data is far from clear.

Overall suicide rates have been declining worldwide. Suicide rates among young people have risen recently in the UK, but only to the level at which they peaked in the 1990s, well before social media was around.

In France, the suicide rate peaked in the 1980s and has declined ever since, unlike in the UK. The same is true for Canada.

In the US, the peak happened in the 1970s, and after a decline to 2005, the rate has risen to the peak level again, which could plausibly be attributed to the rise of social media.

In Japan, the peak was in the 1950s, with only a moderate rise in the 21st century. In Australia, the peak was in the late 1990s, and despite a slow rise over the last two decades, today’s rates are nowhere near the peak.

In short, the correlation between the rise of social media and the suicide rate among young people seems weak and inconsistent.

Privacy risks

Even if age verification is conducted via a trusted third party, with the “double anonymity” requirement that the EU envisages, requiring everyone online to prove their age to use affected websites and platforms is a massive privacy risk.

Even the largest online platforms have proven to be vulnerable to hacks and data breaches, which means it is only a matter of time before such services are compromised, and their data leaked to criminal syndicates.

Such a breach could provide astonishingly effective material for blackmail and extortion, as well as more traditional identity theft.

There is also no guarantee that such verification systems will be immune to government snooping. There are many countries in which you wouldn’t trust the government to have access to material that could compromise the public reputation of activists, journalists, or members of the political opposition.

Censorship regimes of any sort have always been abused for nefarious political purposes, and there’s no reason to think age verification systems will be an exception.

Loopholes

None of the laws I have seen appear to offer a reliable means of age verification, although the EU proposal, with its double-blind third-party service system, comes the closest.

If a child requires parental consent, it is relatively trivial for them to obtain access to their parents’ identity card, driver’s licence, credit card, or other means of verification.

Facial age estimation is fraught with error. A false beard or a grey wig could easily fool it, and many people look quite young despite being over the age of majority. The technology is also notoriously poor at assessing the faces of non-white people, thus potentially perpetuating racial discrimination in access to online services.

Read more:

Children’s mental health crisis: Keeping kids off their phones – Lisa Jarvis

Ultimately, however, you’re going to have to put the entire internet behind an age verification wall if you really want to protect children.

Blocking or banning only large sites will just send children to smaller sites that are not policed. If they really want to access porn, there are many ways to find it that do not involve the large, well-known websites.

Entire paywalled porn sites have been scraped and made available via the bittorrent protocol, for example – a protocol which is decentralised, and is also used for legitimate purposes such as open source software downloads. Where there is a will, there is a way.

It is entirely likely that age verification systems will do more to prevent unsophisticated adults from accessing information to which they have constitutional rights, than they will do to protect children from harmful content.

Harms versus benefits

Campaigners for banning children from social media cite anecdotal cases of children lured by adult abusers or even murderers.

The same argument could be made, however, to put age restrictions on churches, or scouting groups, or extramural sports. Isolated cases of abuse are not a sufficient justification to impose invasive rules on everyone.

What these campaigners also fail to acknowledge is that banning social media will isolate children in vulnerable situations who now find social support online. This is especially true for LGBTQ+ children, or children with mental health challenges who are insufficiently supported by (or even rejected by) their schools, parents, or churches.

Especially in the modern world where even older children are no longer free to leave the house alone to visit friends, the risk of social isolation and depression is much greater than it was for older generations.

Parenting

The most effective way of keeping children away from online content to which parents do not want them exposed remains “nanny” software.

Such software can easily be installed on both mobile phones and computers, and cannot be trivially bypassed. It can restrict social media use to certain times of the day, limit the time spent on various websites or services, or entirely block content that is inappropriate for children.
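For the curious, the simplest version of one such rule can be sketched in a few lines of Python: redirect a list of social media domains to localhost outside permitted hours by rewriting the hosts file. The domain list, schedule, and file path below are illustrative assumptions; commercial parental-control tools enforce the same idea at the DNS, router, or operating-system level, which is far harder for a child to circumvent.

```python
# Toy sketch of the kind of rule "nanny" software enforces: block a list of
# social media domains outside an allowed time window by pointing them at
# localhost in the hosts file. Real parental-control products do this (and
# much more) at the DNS, router, or OS level, far more robustly.
# The paths, domains, and hours are assumptions; run with administrator rights.
from datetime import datetime

HOSTS_FILE = "/etc/hosts"            # C:\Windows\System32\drivers\etc\hosts on Windows
BLOCKED = ["facebook.com", "www.facebook.com", "tiktok.com", "www.tiktok.com"]
ALLOWED_HOURS = range(16, 19)        # social media permitted 16:00-18:59 only
MARKER = "# nanny-block"


def update_hosts() -> None:
    """Strip any previous block rules, then re-add them outside allowed hours."""
    with open(HOSTS_FILE) as f:
        lines = [line for line in f.read().splitlines() if MARKER not in line]
    if datetime.now().hour not in ALLOWED_HOURS:
        lines += [f"127.0.0.1 {domain} {MARKER}" for domain in BLOCKED]
    with open(HOSTS_FILE, "w") as f:
        f.write("\n".join(lines) + "\n")


if __name__ == "__main__":
    update_hosts()   # schedule hourly with cron or Task Scheduler
```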

In combination with technical blocking and monitoring solutions, parents should also educate their children about the dangers of online content: how to avoid nasty stuff, and how to evaluate and critically assess social media content. Governments could contribute to this effort by including online safety in the school curriculum.

An open and honest relationship with a child, in which they feel free to discuss matters that puzzle or worry them, produces well-adjusted, critically thinking older teens. (I’m no qualified expert, but I do speak from experience with a teenager.)

By contrast, swaddling children in cotton wool is notoriously bad at protecting them. Trust me: I grew up in a highly censored environment. It did not protect me from anything.

If you want to tempt children to rebel and seek out something harmful, just slap a “banned” sticker on it. It also makes them more vulnerable to those harms once they inevitably do come across them, because they have not learnt to view them with skepticism and discernment.

Ultimately, protecting children in a free society is the responsibility of parents, and not the state. Parents are present, and can impose conditions and restrictions that are appropriate to their individual children.

The state will always only have blunt tools at its disposal, and those tools are as likely as not to do unintended harm without doing the intended good.

*Ivo Vegter is a freelance journalist, columnist and speaker who loves debunking myths and misconceptions, and addresses topics from the perspective of individual liberty and free markets.

This article was first published by Daily Friend and is republished with permission.
