The Problem With Censoring Political Speech Online – Including Trump’s

No one is required to publish politicians’ speech, but online platforms should be cautious when censoring them.
Vera Eidelman,
Staff Attorney,
ACLU Speech, Privacy, and Technology Project
Kate Ruane,
Former Senior Legislative Counsel,
June 15, 2021

In January, many online platforms decided they no longer wanted to host President Trump’s speech. Google, Twitter, Facebook, Pinterest, and other social media services announced they would no longer distribute Trump’s hateful, demeaning, outrageous speech or anything else he might have to say. Many people were pleased. Others, including the ACLU, expressed concern that a few of these companies — namely Facebook, Google, and Twitter — wield such enormous power over online speech that, if they used it against people with fewer outlets than the president of the United States, the companies could effectively silence them.

The issues are complicated. But some policymakers, inspired by factually unsupported rhetoric claiming social media platforms disproportionately silence conservative voices all the way up to the former president, have taken steps that are clearly wrong. For example, Florida enacted a new law that, among other things, prohibits online platforms from blocking or terminating the account of any candidate for political office. It also forces them to publish anything candidates write — regardless of whether what they write is protected by the First Amendment (with the sole exception of obscenity) or violates the platforms’ community standards. Florida Gov. Ron DeSantis announced this law as a way to prevent the platforms from “discriminat[ing] in favor of the dominant Silicon Valley ideology,” while Lt. Gov. Jeanette Nuñez billed it as a response to “the leftist media” that seeks to silence “views that run contrary to their radical leftist narrative.”

The Florida law is clearly unconstitutional. The Supreme Court struck down a strikingly similar law, also in Florida, nearly 50 years ago, in a case called Miami Herald v. Tornillo. The law at issue in Tornillo required newspapers that published criticisms of political candidates to then publish any reply by those candidates. In other words, it forced private publishers to carry the speech of political candidates, whether they liked it (or agreed with it) or not.

As the Supreme Court recognized in Tornillo, a government-mandated “right of access inescapably dampens the vigor and limits the variety of public debate.” It makes no difference whether the right of access is to a newspaper or an online platform. Enabling platforms to make different choices about how to treat political candidates’ speech is good for public discourse, good for users, and protected by the First Amendment. We filed a friend-of-the-court brief this week, along with the Reporters Committee for Freedom of the Press and others, making these arguments.

While the government cannot force platforms to carry certain speech, that doesn’t mean the largest platforms should engage in political censorship, either. The biggest social media companies are central actors when it comes to our collective ability to speak — and hear the speech of others — online. They blocked the accounts of a sitting President, after all, and that substantially limited the reach of his message. The Florida law reaches far beyond Facebook, Twitter, and Google, governing much smaller online communities and platforms. For the big three, though, our view is that — while the First Amendment protects whatever choice they make with respect to whether and how to publish the speech of political candidates — they should preserve as much political speech as possible, including content posted by candidates for political office.

To date, online companies have taken different approaches to political figures’ speech, as shown by their treatment of Trump’s accounts. Prior to the January 6 attack, the platforms experimented with various responses to posts by Trump that violated their community standards, from simply leaving them up, to labeling them, to restricting their distribution. On and after January 6, the platforms took action at the account level. Twitter permanently suspended Trump’s account “due to the risk of further incitement of violence.” YouTube suspended his account indefinitely pursuant to its own incitement-to-violence policy, applying a sanction that did not appear to exist in its written policies, and has since said it will end the suspension when it determines the risk of violence has sufficiently fallen.

Facebook initially also suspended Trump’s accounts indefinitely, likewise without tethering the decision to an existing sanctions policy. In response, its Oversight Board ordered Facebook to impose a clear and proportionate penalty, and to explain where it came from. Last week, Facebook announced that it would suspend Trump for two years — until Jan. 6, 2023, which, it should be noted, is shortly after the next midterm elections. At that point, the company said, it “will look to experts to assess whether the risk to public safety has receded.” In addition, Facebook stated that it would not treat politicians any differently than other users when it comes to its “newsworthiness allowance,” a policy lever it has used to keep up content that is “important to the public interest,” even if it violates the platform’s community standards. Going forward, Facebook said it would “remove the content if the risk of harm outweighs the public interest.”

As we’ve said before, we have parted company with other advocacy organizations that have been more willing to accept limitations on the speech of political leaders on social media platforms. Hatred or violence advocated by politicians may be more persuasive and more harmful than the same speech from ordinary users, but there is also a greater public interest in having access to it. At a minimum, statements of political leaders are important for government transparency: they give the electorate more information about the people running for office, and they may also reveal intent or uncover the meaning of policies in ways that matter for voters and courts alike. For example, courts considered President Trump’s tweets as evidence in several challenges to his official acts, including the transgender military ban and the Muslim ban. Much of what politicians and political leaders say is, by definition, newsworthy, and can at times have legal or political consequences.

Given the importance of protecting political speech by political figures, the biggest platforms should strive to allow as much political speech as possible and avoid account-level punishments. And, if they decide to censor candidates, they should have a consistent plan in place for preserving the offending speech for transparency, research, and historical record purposes.

In addition, all platforms should publicly explain their rules for removing the posts and accounts of political figures and all other users, and spell out the penalties that can apply. Those rules must take into account the needs of human rights advocates, researchers, journalists, and others to access rule-violating content. And — contrary to what we’ve seen from most if not all of the companies — penalties should not be imposed on an ad hoc or political basis.

We recognize that the major platforms are private entities with their own First Amendment rights to control the content they publish. But the largest platforms’ central role in online speech also means they should err on the side of preserving political speech — and, given their scale, they must also offer clarity upfront, at a minimum stick to their own rules, and offer opportunities for appeals when they (inevitably) get things wrong.