Restricting Speech By Purportedly Protecting Children


While governments around the world have imposed speech restrictions to fight misinformation and hate speech, they also have attempted to curb free speech for a less controversial reason: protecting children. But many of these restrictions stem from vague, unspecified, or speculative harms and corral wide swaths of speech that do not harm children. Censoring speech in the name of protecting children is not a terribly new phenomenon, especially in authoritarian countries. In 2012, for instance, Russia's parliament passed a law allowing the country's media censorship agency to unilaterally blacklist websites and take them offline, without any court approval. The lawmakers' justification was protecting children from online harm, but civil liberties groups correctly predicted that the government would use these powers to curb far more speech. In recent years, such efforts have moved beyond authoritarian countries and taken hold in Western democracies.

The United States has seen repeated attempts to curb speech in the name of saving the children. Although those attempts have consistently failed in court, governments have kept trying for decades. In 1969, the US Supreme Court struck down the Des Moines, Iowa, school district's ban on black armbands worn to protest the Vietnam War, writing that "state-operated schools may not be enclaves of totalitarianism." In 1997, the Supreme Court invalidated much of the Communications Decency Act, which criminalized the online transmission of "indecent" content to minors, writing that the "interest in encouraging freedom of expression in a democratic society outweighs any theoretical but unproven benefit of censorship." And in 2011, the court struck down a California law that banned sales of "violent video games" to minors, writing that the First Amendment does not give the government "a free-floating power to restrict the ideas to which children may be exposed."

The moral panic did not stop with those cases. Across the country, states are scrambling to address the harms associated with minors' use of social media. Many high-profile commentators and politicians have criticized social media for harming the mental health of teenagers, though there is substantial debate as to whether they have presented sufficient evidence of causation. In May 2023, then-Surgeon General Vivek Murthy issued an advisory on social media and youths' mental health: "The most common question parents ask me is, 'Is social media safe for my kids?' The answer is that we don't have enough evidence to say it's safe, and in fact, there is growing evidence that social media use is associated with harm to young people's mental health."

States have stepped in to try to regulate social media. Among the highest profile recent attempts is Utah's Minor Protection in Social Media Act, which the state legislature enacted in March 2024. The Utah law requires social media companies to "implement an age assurance system to determine whether a current or prospective Utah account holder on the social media company's social media service is a minor." For minors who have accounts, social media companies must impose a number of restrictions, including setting "default privacy settings to prioritize maximum privacy," limiting direct messaging abilities, disabling search engine indexing of their profiles, and limiting a minor's ability to share content with others. Those privacy settings cannot be changed without verifiable parental consent. The law also requires social media companies to disable functions that "prolong user engagement" for minors, such as autoplay functions.

The Utah law does not apply to all platforms, however. It only restricts "social media companies," which it defines as a "public website or application" that mainly displays content created by users, permits those individuals to create public accounts, allows them to "interact socially with each other," provides them with lists of other users with whom they are connected, and lets them post content that others can see. The law explicitly excludes cloud storage and email services from the definition of "social media company."

Why did the Utah legislature see the need to impose such limits on minors' use of social media? In its findings, the legislature discussed the negative mental health impacts of "the addictive design features of certain social media services" and asserted that the platforms "are designed without sufficient tools to allow adequate parental oversight, exposing minors to risks that could be mitigated with proper parental involvement and control." The legislature rationalized that it has "enacted safeguards around products and activities that pose risks to minors," such as medications and cars. Missing from the state's justification was the acknowledgement that unlike, say, car safety regulations, Utah's social media law involves First Amendment–protected speech. Not surprisingly, the technology trade group NetChoice, along with Utah residents, sued the state, alleging that the law violates the First Amendment.

Central to NetChoice's case was the argument that the statute's definition of "social media company" would lead to over-regulation of protected speech. "Using a vague content-, speaker-, and viewpoint-based definition of 'social media company,' the Act imposes restrictions on certain websites' ability to disseminate and facilitate the speech of their users," NetChoice wrote in its motion for a preliminary injunction blocking the law. "Yet there is a fundamental mismatch between the State's putative goals in regulating certain means of disseminating speech, and the Act's haphazard regulation of certain websites. The Act does not regulate many websites across the Internet that use the same means of disseminating speech the Act restricts, while simultaneously burdening many websites that do not use those means at all."

On September 10, 2024—less than a month before the law was set to go into effect—Utah federal judge Robert J. Shelby issued a preliminary injunction blocking the law. Speech regulations are particularly difficult to justify under the First Amendment if they are "content based." And Shelby concluded that the Utah law is content based, because it only applies to platforms that the law "singles out [as] social media companies" and does not apply to other platforms.

Content-based speech regulations survive First Amendment challenges only if they are narrowly tailored to serve compelling state interests. Shelby concluded that Utah fell short of making that case, writing that although he "is sensitive to the mental health challenges many young people face," the state has not "provided evidence establishing a clear, causal relationship between minors' social media use and negative mental health impacts." And even if Utah had a compelling interest, Shelby stated, the law is not narrowly tailored to advance that goal. He suggested that parents — not the government — should be the arbiters of the content their children see and share on social media: "While Defendants present evidence suggesting parental controls are not in widespread use, their evidence does not establish parental tools are deficient. It only demonstrates parents are unaware of parental controls, do not know how to use parental controls, or simply do not care to use parental controls."

Shelby also questioned the efficacy of the Utah law, noting that it "ultimately preserves minors' ability to spend as much time as they want on social media platforms." That weakens the state's argument that the act is necessary to combat excessive use of social media. Conversely, Shelby found that the law blocks far more protected speech than necessary to achieve its goals: "Specifically, Defendants have not identified why the Act's scope is not constrained to social media platforms with significant populations of minor users, or social media platforms that use the addictive features fundamental to Defendants' well-being and privacy concerns." Utah has appealed the ruling to the Tenth Circuit.

Speech restrictions in the name of child safety are not limited to the state level. Throughout 2024, members of Congress advocated for various versions of the Kids Online Safety Act, which would impose a duty of care on online platforms to "prevent and mitigate" online harms to children, with enumerated harms including eating disorders, suicide, and substance abuse. Senator Richard Blumenthal (D-CT), the bill's sponsor, defended the duty of care as a standard requirement in many sectors. "Companies in every other industry in America are required to take meaningful steps to prevent users of their products from being hurt, and this simply extends that same kind of responsibility to social media companies, too," he said on his website.

But, like the Utah law, the federal proposal could cause platforms to over-censor legitimate educational materials about those topics, out of fear of liability. In a July 2024 letter to lawmakers, civil liberties groups, including the ACLU and the Electronic Frontier Foundation (EFF), noted the bill's free-speech problems: "One common concern among these diverse groups is the Duty of Care requirements that may cause companies to take down content to avoid liability. This could lead to aggressive filtering of content by companies preventing access to important, First Amendment–protected, educational and even lifesaving content."

Such threats to free speech are not limited to the United States. In 2023, the United Kingdom's Parliament approved the 300-page Online Safety Act, a sweeping set of mandates for online platforms. Among the most troubling, from a free-speech perspective, is a duty of care to prevent harms to children, made worse by the vagueness of the law's requirements and its delegation of broad enforcement powers to Ofcom, the UK's communications regulator.

The duty of care is not the only concerning aspect of the UK law. It also allows Ofcom to compel platforms to search for illegal content, something that the Electronic Frontier Foundation says poses a real threat to the viability of end-to-end encryption. As the EFF wrote in 2023, "Such a backdoor scanning system can and will be exploited by bad actors. It will also produce false positives, leading to false accusations of child abuse that will have to be resolved. That's why the OSB is incompatible with end-to-end encryption—and human rights."

Another troubling aspect of the UK law is its requirement that websites verify the age of users, to block "harmful" online content from minors. As the EFF noted, "To prevent minors from accessing 'harmful' content, sites will have to verify the age of visitors, either by asking for government-issued documents or using biometric data, such as face scans, to estimate their age. This will result in an enormous shift in the availability of information online, and pose a serious threat to the privacy of UK internet users." Such invasive verification practices threaten the ability of both minors and adults to access the internet anonymously.

Because the UK Online Safety Act is still being implemented, the full extent to which the government will use the law to censor speech remains unclear. But in a November 2024 policy paper, Peter Kyle, the UK's Secretary of State for Science, Innovation, and Technology, indicated plans for expansive use of the new legal powers. For instance, Kyle wrote that "the growing presence of disinformation poses a unique threat to our democratic processes and to societal cohesion in the United Kingdom and must be robustly countered. Services should also remain live to emerging information threats, with the flexibility to quickly and robustly respond, and minimize the damaging effects on users, particularly vulnerable groups." Kyle did not indicate precisely how the government might work with (or pressure) platforms to deal with misinformation. Nor did he say who determines what is "disinformation" or suggest ways to counter it. That vagueness is precisely the harm such laws pose: they empower large bureaucracies to claim sweeping mandates to decide what sorts of content are too harmful to be on the internet.

Excerpted from The Future of Free Speech: Reversing the Global Decline of Democracy's Most Essential Freedom by Jacob Mchangama and Jeff Kosseff. Copyright 2026. Published with permission of Johns Hopkins University Press.

The post Restricting Speech By Purportedly Protecting Children appeared first on Reason.com.
