Would Banning Children from Social Media and AI Be Constitutional?
Probably not, for reasons that point to the limits of what Canada can do here.
Recently, governments in Manitoba and Ontario have signaled their support for a ban on social media for children under 16 years of age, and last month, the federal Liberals passed a resolution to this effect at a party convention. Manitoba’s Premier Wab Kinew and the national Liberal Party are also keen to ban youth access to “all AI chatbots and other potentially harmful forms of AI interaction.”
Passing a complete ban would mean following Australia’s lead: in late 2025, Australia banned social media for everyone under 16. Governments in Europe have taken a less stringent approach by imposing age-verification and parental consent rules around social media access.
It remains to be seen whether Ottawa will choose the European or Australian path. But for reasons I sketch out briefly here, the European path seems more likely, and we may see the Carney government take it when it soon re-tables the Online Harms Act.
Many commentators have been critical of the idea of Canada imposing a ban on social media or AI for young people, not least because the Charter of Rights and Freedoms guarantees everyone a right to free expression.
If a ban on social media or AI would infringe this right, could it still be legal? Yes it could, because no right in the Charter is absolute. All of our rights under the Charter are subject to reasonable limits — as decided by the courts — under section 1. (And of course, in the case of limits to free expression, we the people have the final say under section 33.)
So if a ban on access to social media or AI for those under 16 would violate section 2(b) of the Charter, would it be a reasonable limit on that right? No, it probably wouldn’t.
The aim of this post is to briefly sketch what a challenge would look like and why access with parental consent or other guardrails is likely the furthest Canada can go in restricting young people’s access to their favourite platforms or chatbots.
Who would challenge the ban and what would they argue?
Three groups would have standing to challenge a ban: users under 16, creators, and platforms themselves.
The right to free expression under section 2(b) of the Charter protects any activity that conveys or attempts to convey meaning — the sole exception being expression that takes a violent form.
The Supreme Court of Canada recognized in Ford and Irwin Toy, seminal early cases on 2(b), that free expression protects communications between speaker and audience, which encompasses the right to hear or receive expression as well as to convey it.
Scrolling TikTok or Instagram, or querying a chatbot, would convey meaning. Users and content creators would easily make out a violation of 2(b). And so would the social media platforms themselves, on the basis that feed curation itself is a form of expression — or at least, our courts are likely to agree with the US Supreme Court on this point.
What about OpenAI or Anthropic? Would language model composition or tuning be considered comparable to curation? A harder question. But speaking to a chatbot would, I think, be captured by 2(b).
Would the infringement be justified?
Government limits on rights can be valid under the Charter so long as a court decides the limitation is reasonable. To decide this, the court applies the test set out in R v Oakes (1986): the government must show (1) a pressing and substantial objective, and (2) proportionality, which has three components: a rational connection between the state’s purpose and the limit at issue; minimal impairment of the right; and proportionality of effects, meaning that the benefits of the law must outweigh the severity of the rights violation.
Would the government have a ‘pressing and substantial objective’ in banning young people from access to social media or AI?
It would argue that the point of a ban is to protect children from online harms that include mental health damage, predatory behaviour, algorithmic manipulation, and exposure to harmful content. In Irwin Toy itself, a case about restrictions on advertising to children, the Court held that protecting children as a “vulnerable group” from “media manipulation” is a pressing and substantial objective. The government would likely be safe here.
Rational connection
The ban would have to be rationally connected to the government’s aim of reducing harm. The challengers would contest this.
Research on the relationship between social media use and adolescent mental health and other harms is contested. In response to Jonathan Haidt and Jean Twenge’s well-known work on social media’s harmful impact on young people, a number of scholars have questioned the causal claims they make. Amy Orben and Andrew Przybylski’s large-dataset studies have found effect sizes too small to justify sweeping intervention, and researchers like Candice Odgers have argued the evidence for social media as a primary driver of adolescent mental illness is weak to nonexistent.
The Supreme Court has, in some cases, including Butler, shown deference to the government at this stage where social science evidence is contested. But challengers might argue that deference has limits, that a measure cannot be rationally connected to an objective on evidence that experts in the field actively dispute.
Minimal impairment
The challengers would be on their strongest ground at the minimal impairment stage. Even if the court accepted a rational connection on a balance of probabilities, the lack of strong evidence that a ban specifically, and nothing short of it, would reduce harm would likely prove fatal here.
If the evidence suggests that harms are driven by specific features of social media, such as algorithmic amplification of distressing content, infinite scroll, and engagement-maximizing recommendation systems (or in the case of AI, inadequate safeguards), then a total ban on access would not be minimally impairing. More targeted measures could address these concerns with less collateral damage.
Obvious alternative measures include parental controls, algorithmic transparency obligations, time-limit features, and content warnings. The same logic would apply to AI chatbots. Rules around use limits, parental supervision, and so on, present a viable alternative. (In April, seeing which way the wind is blowing, Meta announced child safeguards for AI along these lines.)
Proportionality of effects and objective
If the government manages to clear the rational connection and minimal impairment tests, the court would finally ask whether the salutary effects of the law outweigh its deleterious effects on the right.
The deleterious effects would be considerable. A complete ban would remove a sizable demographic from an essential forum for public discourse. Many young people also use these platforms for purposes entirely remote from the harms the law would target, including political engagement, artistic expression, peer connection, and access to information.
There is also evidence emerging from the Australian experiment that a ban would not have a measurable impact on reducing cyberbullying. And, as Michael Geist has argued, any workable ban would require mandatory age verification, which would mean tens of millions of Canadians submitting government-issued identification to third-party providers, raising privacy issues not only for youth but for the entire adult population.
On the basis that a ban’s salutary effects are, at this point, empirically contested at best, the government is unlikely to succeed at this final stage.
Lawyers behind the scenes at the Department of Justice have probably mapped all this out and are advising the government against following the Australian path of imposing a complete ban, in favour of the European model of imposing stricter rules around access.
The less time kids spend on social media the better. But an absolute ban is probably not workable or lawful.