How Elon Musk Can Liberate Twitter
The site needs moderation, but there are ways to do that while abolishing viewpoint discrimination.

By Vivek Ramaswamy and Jed Rubenfeld in the WSJ
Elon Musk wants Twitter to “adhere to free speech principles.” That’s easier said than done. Porn, racial slurs and spam are all protected under the First Amendment, but few users want to see them. Even for the narrow categories of speech that aren’t protected, nearly all content blocking on social media goes against the first principle of free-speech jurisprudence—the ban on prior restraint, or censorship without judicial review.
The first step to solving these conundrums is to recognize that different free-speech principles apply in different contexts, and that there are three distinct kinds of forums. Speech protection is strongest in a “public forum.” If Twitter were such a forum, almost all content blocking would be an impermissible prior restraint. But Twitter isn’t a public forum, most obviously because it isn’t run by the government (even though its censorship is sometimes at official behest). At the other end of the spectrum is private property. If you’re a visitor in someone else’s home, he’s free to kick you out simply for offending him.
Between these poles are “limited public forums”—places generally open to the public where speech can be subjected to reasonable regulation. One kind of restriction, however, is forbidden: viewpoint discrimination. That’s how Mr. Musk should think of Twitter.
Nearly everyone agrees that social-media platforms shouldn’t engage in viewpoint discrimination—including the platforms themselves, which deny they do so. But of course they do. Conservative opinions about transgenderism are censored as “attacks” on a “protected group.” Conservative views on Covid are flagged as “misinformation.”
In May 2020, Twitter censored as a “glorification of violence” President Trump’s “when the looting starts, the shooting starts” tweet, while leaving untouched Ayatollah Ali Khamenei’s tweets calling for the destruction of Israel and Colin Kaepernick’s tweets supporting the burning of police precinct houses. Claims that the Democrats stole the presidency in 2020 are censored, while claims that Russia did the same in 2016 go untouched—and of course the truthful Hunter Biden laptop story was suppressed as “misinformation.”
This is an especially challenging problem because Twitter and others smuggle viewpoint discrimination into supposedly neutral content-moderation categories—primarily misinformation, incitement and hate speech. Stopping that should be Mr. Musk’s first priority.
False speech isn’t necessarily protected, especially in a limited public forum. But even for clearly unprotected false speech, such as defamation, perjury or false advertising, the law imposes a simple yet crucial requirement in all such cases: The plaintiff or prosecutor has to prove the statement was false.
Twitter and other platforms don’t follow that principle. They and their “fact checkers” label content “misinformation” when they deem it merely “unsupported,” “unproven” or “lacking context.” Without proof of falsity, these are no more than differing opinions about the truth—and there is no such thing as a false opinion.
The Constitution also defines “incitement” narrowly. In Brandenburg v. Ohio (1969), the Supreme Court established that incitement requires proof that the speech was both intended and likely to induce “imminent lawless action.” If the rule is that speech can be banned when it “could” lead to violence in the opinion of Twitter’s employees, the category becomes broad enough to cover nearly any speech, and it will be enforced against speech they disfavor.
Bans on “hate speech” would have to end. Every Twitter user knows that countless tweets are hateful but only certain hateful speech is censored, depending on its viewpoint. Racist and sexist speech expresses an opinion, however odious, and banning opinions is the essence of viewpoint discrimination. That’s why the U.S. Constitution doesn’t allow the government to ban hate speech.
Does that mean Twitter users, already awash in snark, must be flooded with racial slurs too? No. Mr. Musk can avoid that result by changing the paradigm for content moderation.
Twitter, like every other Big Tech platform, deploys centralized top-down censorship, dictating to users what content is too offensive for anyone to see. That model should be turned upside down: Users should decide for themselves.
One way to do this is through simple opt-in buttons. Mr. Musk could keep in place all of Twitter’s offensive-speech protocols, but give every user the ability to opt in or out of them. If a user doesn’t want to see hate speech, there’s no reason he should have to. The same goes for constitutionally protected sexually explicit material.
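To make the opt-in idea concrete, here is a minimal sketch in Python of per-user moderation preferences layered on top of a platform’s existing content labels. The class, field names and label strings are hypothetical illustrations, not any real Twitter feature.

```python
# Hypothetical sketch: keep the platform's existing classifiers, but let each
# user decide which categories are filtered out of his own feed.

from dataclasses import dataclass


@dataclass
class FeedPreferences:
    """Per-user opt-in switches for the platform's existing moderation labels."""
    hide_hate_speech: bool = True
    hide_explicit_content: bool = True
    hide_spam: bool = True

    def allows(self, labels: set[str]) -> bool:
        """Return True if a post carrying these moderation labels should be shown."""
        if self.hide_hate_speech and "hate_speech" in labels:
            return False
        if self.hide_explicit_content and "explicit" in labels:
            return False
        if self.hide_spam and "spam" in labels:
            return False
        return True


# A user who opts out of the hate-speech filter changes only his own feed.
prefs = FeedPreferences(hide_hate_speech=False)
print(prefs.allows({"hate_speech"}))   # True: this user chose to see it
print(prefs.allows({"explicit"}))      # False: still filtered by this user's defaults
```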
A more ambitious option would be to harness artificial intelligence and develop an individualized filtering model. Each user would decide for himself whether to remove certain posts, and an AI algorithm would learn from his choices, creating a personalized filter. If Michael flags racial epithets or Laura deletes certain images, Twitter’s algorithms would learn not to show them similar content in the future. They’d be free to change their minds and could adjust their settings accordingly. Mr. Musk could poke fun at other Big Tech platforms for employing an outmoded centralized censorship model that is a relic of broadcast media when the technology now exists to run personalized AI models.
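Here is a minimal sketch, under the same caveat that every name and detail is hypothetical, of how such an individualized filter might learn from a user’s flags. A simplistic word-weight model stands in for the far richer AI a real platform would use.

```python
# Minimal sketch of a per-user content filter that learns from flags.
# Hypothetical throughout: no real Twitter API is involved, and a production
# system would use learned text embeddings rather than word counts.

from collections import defaultdict


class PersonalFilter:
    """Learns, per user, which posts to hide based on that user's own flags."""

    def __init__(self, threshold: float = 0.0):
        self.threshold = threshold          # score above which a post is hidden
        self.weights = defaultdict(float)   # word -> learned weight
        self.learning_rate = 1.0

    @staticmethod
    def _features(text: str) -> list[str]:
        # Crude tokenization stands in for real feature extraction.
        return text.lower().split()

    def score(self, text: str) -> float:
        return sum(self.weights[w] for w in self._features(text))

    def should_hide(self, text: str) -> bool:
        return self.score(text) > self.threshold

    def flag(self, text: str) -> None:
        """User flagged this post: push similar posts toward 'hide'."""
        for w in self._features(text):
            self.weights[w] += self.learning_rate

    def unflag(self, text: str) -> None:
        """User wants to keep seeing content like this: push toward 'show'."""
        for w in self._features(text):
            self.weights[w] -= self.learning_rate


if __name__ == "__main__":
    michael = PersonalFilter()
    michael.flag("example slur example slur")            # Michael hides this kind of post
    print(michael.should_hide("another post with slur"))  # True: filtered for Michael

    laura = PersonalFilter()
    print(laura.should_hide("another post with slur"))    # False: Laura still sees it
```

The point of the sketch is the architecture, not the model: the filter lives with the user, is trained only on that user’s choices, and can be loosened or reset at any time, in contrast to a single centralized blocklist applied to everyone.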
One objection to this approach is that it might exacerbate online echo chambers. But users who wish to see contrary viewpoints could instruct their filter to keep showing them challenging opinions and facts. Users who choose otherwise would be no worse off than cable-news viewers. Such self-siloing may be an inevitable product of 21st-century media and civic culture, beyond any company’s power to counteract. But if we’re stuck with such echo chambers, better that they be ones of our own creation rather than imposed on us by a central authority.
There is no silver bullet to resolve the irreducible challenges of operating a user-friendly social media company that also protects free speech. But these principles offer a starting point for a pragmatic path forward: Conceive Twitter as a limited public forum, stop censoring viewpoints, and promote user choice over centralized content moderation.
Mr. Ramaswamy is an entrepreneur and author of “Woke, Inc.: Inside Corporate America’s Social Justice Scam” and “Nation of Victims: Identity Politics, the Death of Merit, and the Path Back to Excellence,” forthcoming in September. Mr. Rubenfeld is a constitutional scholar and First Amendment lawyer.