Brett M. Pinkus, Partner, Wick, Phillips, Gould & Martin, LLP
ISSUE 9 | SPRING 2023 | CONSTITUTIONAL LAW
The public seems to have a fundamental misunderstanding about the true extent of “freedom of speech” under the First Amendment. Who can or cannot restrict free speech? What types of speech can be restricted? And how do these rules apply to the speech restrictions that have become so prevalent on social media platforms?
Lawsuits alleging free speech violations against social media companies are routinely dismissed. The primary grounds for these dismissals are that social media companies are not state actors and their platforms are not public forums, and therefore they are not subject to the free speech protections of the First Amendment. Consequently, those who post on social media do not have a right to free speech on these platforms. This article will attempt to explain the relationship between social media and free speech so that we can understand why.
Who Can Restrict Free Speech - State v. Private Actors

The overarching principle of free speech under the First Amendment is that its reach is limited to protections against restrictions on speech made by the government.¹ The text of the First Amendment itself only prevents Congress (i.e., the U.S. Congress) from making laws that restrict the freedom of speech. This protection is extended to the states, and to local governments, through the State Action Doctrine and the Due Process Clause of the Fourteenth Amendment.² However, under the State Action Doctrine, First Amendment restrictions traditionally do not extend to private parties, such as individuals or private companies.³ In other words, a private person or private company (such as a social media company) cannot violate your constitutional free speech rights; only the government can do so. That is, unless the private party attempting to restrict speech qualifies for one of the three exceptions to the State Action Doctrine.
The first exception is when an action to restrict speech by a private party involves a function that is traditionally and exclusively reserved for the State, which is known as the Exclusive Public Function Doctrine.⁴ The Exclusive Public Function Doctrine is limited to extreme situations where a private party has stood in the shoes of the state. For example, when a private company has been given control of a previously public sidewalk or park, courts have found that the private company is performing municipal powers exclusively reserved to the state.⁵ Courts have repeatedly rejected efforts to characterize the provision of a news website or social media platform as a public function that was traditionally and exclusively performed by the government.⁶
The second and third exceptions, which are related to each other, are the entanglement and entwinement exceptions. The entanglement exception applies when the state has significantly involved, or entangled, itself in a private party’s action to restrict speech.⁷ This occurs when the “power, property, and prestige” of the government is behind the private action, and where there is evidence of the overt, significant assistance of state officials.⁸ The entwinement exception applies when an action of a private party can be treated as though it were an action of the government itself (i.e., overlapping identities).⁹ These exceptions are rarely used in free speech cases and apply in very limited situations, typically in cases involving the Equal Protection or Establishment Clauses, which are not relevant in most social media contexts.
Where Can Speech Be Restricted - Public v. Private Forums

When speech takes place in a public forum, that speech can qualify for protection under the First Amendment.¹⁰ This is known as the Public Forum Doctrine. While there is no constitutional right for a person to express their views in a private facility (such as a shopping center),¹¹ speech that takes place in a traditional or designated public forum for expressive activity (such as a sidewalk or park on government property) is protected, and only limited restrictions on that speech are allowed.¹² A designated public forum is created only when the government intentionally opens a nontraditional forum for public discourse.¹³ A private forum (such as a grocery store or comedy club), however, does not perform a public function by merely inviting public discourse on its property.¹⁴
Social media platforms are often characterized as a digital public square. Yet courts have repeatedly rejected arguments that social media platforms are public forums subject to the First Amendment.¹⁵ The reasoning is that these networks are private, and merely hosting speech by others does not convert a private platform into a public forum.¹⁶ Only in limited cases have courts found social media sites to qualify as a public forum. For example, in a recent case, an appellate court held that the official Twitter page operated by then-President Donald Trump was a designated public forum. As a result, government officials could not engage in viewpoint discrimination by blocking individuals from posting comments critical of the President and his policies.¹⁷ In contrast, a private person or organization’s social media page is not a public forum and is not protected by the First Amendment.
Social media platforms may also be analogized to newspapers when they attempt to exercise editorial control and judgment over the publishing of users’ posts. In this scenario, the Supreme Court has held that newspapers exercise the freedom of the press protected by the First Amendment and cannot be forced to print content they would not otherwise include.¹⁸ This is due to a newspaper’s ability to exercise editorial control and judgment, including making decisions on the size and content of the paper, along with treatment of public issues and public officials (whether such treatment is fair or unfair). This leads us next to examine what protections are afforded to social media companies for content posted by their users on their platforms.
Social Media’s Immunity for User Content - 47 U.S.C. § 230(c)

Section 230 of the Communications Decency Act (“CDA”), codified at 47 U.S.C. § 230, was enacted in response to a court decision holding that an internet service provider, Prodigy, was a “publisher” of defamatory statements that a third party had posted on a bulletin board hosted and moderated by Prodigy, and could therefore be subject to a civil lawsuit for libel.¹⁹ Sec. 230(c)(1) remedies this by providing internet service providers immunity from lawsuits that attempt to hold them liable for user content posted on their sites.²⁰ Social media companies, which are currently considered service providers under Sec. 230(c)(1), are broadly protected from responsibility for what users say while using their platforms.²¹
The next question that logically follows is whether a social media company can restrict or exercise editorial control over content on its platform. Sec. 230(c)(2) of the CDA answers this by precluding liability for decisions to remove or restrict access to content that the provider deems “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”²² Social media platforms therefore set their policies and Terms and Conditions to state that they can remove violent, obscene, or offensive content and can ban users who post or promote such content. For example, Facebook, Twitter, and YouTube have banned terrorist groups that post material promoting violence or violent extremism, and have also banned ISIS, Al Qaeda, and Hezbollah solely because of their status as U.S.-designated foreign terrorist organizations. As was recently seen following the 2020 Presidential election, Facebook, Twitter, Snapchat, YouTube (Google), Reddit, and Twitch (Amazon) also justified their suspensions of the accounts of President Trump and some of his supporters under Sec. 230(c)(2), citing continued posts of misinformation, hate speech, and inflammatory content about the election.
What are Permissible Restrictions on Speech

As discussed above, if a social media company chooses to remove content from its platform in accordance with its designated policies, that removal does not raise a First Amendment issue, and there is no civil liability as a result of Sec. 230 of the CDA. But what if precedent were reversed, and a social media platform were declared a state actor or a public forum such that the First Amendment applied to it? Or what if Sec. 230 were repealed, making social media companies liable for their users’ posts when they attempt to moderate content? If either were to happen, the type of speech being restricted would play a significant role in the permissibility of the restriction.
Restrictions on speech in a public forum are permissible if they are appropriately limited in time, place, and manner.²³ Speech can be restricted under a less demanding standard when the restriction is made without regard to the content of the speech or the speaker’s point of view.²⁴ A content-neutral restriction, for example, would prohibit all picketing within 150 feet of any school building while classes are in session, without regard to the message, whereas a content-based restriction would allow picketing only if the school is involved in a labor dispute.²⁵ Other reasonable content-neutral regulations include regulating noise by limiting decibels, or the hours and place of public discussion.²⁶ It is unlikely that content-neutral restrictions could be implemented to effectively regulate violent, obscene, or offensive content on social media platforms, which leaves content-based restrictions that would be subjected to heightened scrutiny. A content-based restriction in a public forum requires a compelling government interest and the use of the least restrictive means to further that interest.²⁷
It is important to emphasize that the First Amendment “does not guarantee the right to communicate one’s views at all times and places or in any manner that may be desired.”²⁸ For that reason, if there is an alternative channel of communication for the desired speech, it may be a suitable alternative even if it is not a perfect substitute for the preferred forum that has been denied.²⁹ For example, if a user were blocked from posting on a social media platform, alternative channels to make the desired speech might include other social media platforms or different forms of media. Other possibilities might include remedial steps for regaining posting privileges, such as imposing temporary posting suspensions that can be lifted over time or requiring the poster to agree to specific posting restraints before regaining unrestricted access.
What Types of Content-Based Restrictions are Permitted
It is also worthwhile to review which categories of speech are protected and which are not, to understand the extent of the speech protected by the First Amendment, particularly in view of the recent unrest reflected on social media following the 2020 election. Content-based restrictions on speech have been permitted within a few traditionally recognized categories of expression.³⁰
Misinformation, Defamation, Fraud, Perjury, Government Officials
Misinformation is defined as false or inaccurate information. False statements of fact about matters of public concern or about public officials are protected from censorship under the First Amendment unless the statement is made with knowledge of, or reckless disregard for, its falsity, or with intent to harm.³¹ It is not safe to assume, however, that false statements can be made on social media platforms with impunity. Civil liability can be imposed for defamatory statements, which are knowingly false statements of fact published without privilege that damage another’s reputation (libel if written, slander if spoken), and for fraud, which is a false statement of fact made with the intent to cause the hearer to alter their position.³² At the time of this writing, statements pushing claims of election fraud following the 2020 election, made by various public figures and news commentators on television and social media, are being pursued as defamation by electronic voting machine manufacturers Dominion Voting Systems and Smartmatic.
Hate Speech and Speech that Incites Imminent Lawless Action
The First Amendment generally protects even hate or racist speech from government censorship. However, speech advocating the use of force is unprotected when it incites or is likely to incite imminent lawless action.³³ Likewise, speech that is considered an incitement to riot, which creates a clear and present danger of causing a disturbance of the peace, is also not protected by the First Amendment.³⁴ “Fighting words” which “by their very utterance inflict injury or tend to incite an immediate breach of the peace” are unprotected and may be punished or prohibited.³⁵
Harassment and True Threats of Violence
Harassment refers to unwanted behavior that makes someone feel degraded, humiliated, or offended. Harassing someone for the purpose of irritating or tormenting them is protected from censorship by the First Amendment. However, harassment that goes so far as to present a “true threat of violence” is not protected by the First Amendment and is banned by all social media platforms. True threats of violence directed at a person or group of persons, made with “the intent of placing the target at risk of bodily harm or death,” are unprotected, regardless of whether the speaker actually intends to carry out the threat.³⁶ Intimidation “is a type of true threat” and is likewise unprotected by the First Amendment.³⁷
Advertisements

Advertising, which is a type of commercial speech, receives only limited protection under the First Amendment.³⁸ If an advertisement is shown to be misleading or unlawful, a restriction on that speech is permissible.³⁹ A website or social media platform, much like a newspaper, cannot be forced to print advertisements in contravention of its right of editorial control.⁴⁰
Conclusion

Current legal precedent conclusively establishes that social media users do not have a right to free speech on private social media platforms. Social media platforms are allowed to remove offending content when done in accordance with their stated policies, as permitted by Sec. 230 of the CDA, and that removal does not raise a justiciable First Amendment issue or a real risk of civil liability. Users, on the other hand, put themselves at risk of being banned for posting violent, obscene, or offensive content on social media, and may even expose themselves to civil liability for making false, misleading, or violence-inciting statements.
Sources:
¹ Matal v. Tam, 137 S. Ct. 1744, 1757 (2017).
² U.S. Const. amend. I (“Congress shall make no law . . . abridging the freedom of speech”); Gitlow v. New York, 268 U.S. 652, 666 (1925) (applying the First Amendment’s freedom of speech to the states by virtue of the Due Process Clause of the Fourteenth Amendment); Hudgens v. NLRB, 424 U.S. 507, 513 (1976) (“the constitutional guarantee of free speech is a guarantee only against abridgment by government, federal or state”).
³ Civil Rights Cases, 109 U.S. 3, 11 (1883) (“[i]t is state action of a particular character that is prohibited. Individual invasion of individual rights is not the subject-matter of the [Fourteenth] amendment.”); see also Shelley v. Kraemer, 334 U.S. 1, 13 (1948) (“[the Constitution] erects no shield against merely private conduct, however discriminatory or wrongful”).
⁴ Jackson v. Metro. Edison Co., 419 U.S. 345, 352 (1974).
⁵ Marsh v. Alabama, 326 U.S. 501, 505–09 (1946) (a private entity operating a company town is a state actor and must abide by the First Amendment); but see Lloyd Corp. v. Tanner, 407 U.S. 551, 569 (1972) (confining Marsh’s holding to the unique and rare context of “company town[s]” and other situations where the private actor “performs the full spectrum of municipal powers”); Terry v. Adams, 345 U.S. 461 (1953) (holding that conducting public elections is an exclusive public function).
⁶ See, e.g., Prager Univ. v. Google LLC, No. 17-CV-06064-LHK, 2018 U.S. Dist. LEXIS 51000, at *26 (N.D. Cal. Mar. 26, 2018), aff’d, 951 F.3d 991 (9th Cir. 2020); Quigley v. Yelp, Inc., 2017 U.S. Dist. LEXIS 103771, at *4 (“The dissemination of news and fostering of debate cannot be said to have been traditionally the exclusive prerogative of the government.”); see Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1930 (2019) (“merely hosting speech by others is not a traditional, exclusive public function”); Howard v. Am. Online Inc., 208 F.3d 741, 754 (9th Cir. 2000) (providing internet service, web portal, and emails was not “an instrument or agent of the government”).
⁷ Moose Lodge No. 107 v. Irvis, 407 U.S. 163, 173 (1972).
⁸ Burton v. Wilmington Parking Authority, 365 U.S. 715 (1961).
⁹ Brentwood Acad. v. Tenn. Secondary Sch. Athletic Ass’n, 531 U.S. 288, 298, 303 (2001).
¹⁰ Perry Education Association v. Perry Local Educators’ Association, 460 U.S. 37 (1983) (There are three categories of government property for purposes of access for expressive activities: (1) traditional, or quintessential, public forums (such as a sidewalk or park on government property), in which content-based restrictions on speech are highly suspect; (2) limited, or designated, public forums, in which reasonable time, place, and manner regulations are permissible and content-based prohibitions must be narrowly drawn to effectuate a compelling state interest; and (3) nonpublic forums, in which the government can reserve the forum for its intended purposes with reasonable regulations on speech that do not discriminate based on opposition to the speaker’s viewpoint.).
¹¹ PruneYard Shopping Ctr. v. Robins, 447 U.S. 74, 81 (1980); Prager, 951 F.3d at 998.
¹² Perry, 460 U.S. at 37.
¹³ Cornelius v. NAACP Legal Def. & Educ. Fund, Inc., 473 U.S. 788, 802 (1985).
¹⁴ Prager Univ., 951 F.3d at 998.
¹⁵ See, e.g., Prager, 2018 U.S. Dist. LEXIS 51000, at *25–26 (“Defendants do not appear to be at all like, for example, a private corporation . . . that has been given control over a previously public sidewalk or park . . . .”); Estavillo v. Sony Comput. Entm’t Am. Inc., No. C-09-03007 RMW, 2009 U.S. Dist. LEXIS 86821, at *3–4 (N.D. Cal. Sept. 22, 2009); Nyabwa v. Facebook, Civil Action No. 2:17-CV-24, 2018 U.S. Dist. LEXIS 13981, at *2 (S.D. Tex. Jan. 26, 2018) (dismissing a lawsuit filed by a private individual against Facebook and explaining that “the First Amendment governs only governmental limitations on speech”); Freedom Watch, Inc. v. Google, Inc., 368 F. Supp. 3d 30, 40 (D.D.C. 2019) (“Facebook and Twitter . . . are private businesses that do not become ‘state actors’ based solely on the provision of their social media networks to the public.”), aff’d, 816 F. App’x 497 (D.C. Cir. 2020); see also Halleck, 139 S. Ct. at 1930 (“merely hosting speech by others . . . does not alone transform private entities into state actors subject to First Amendment constraints.”).
¹⁶ See cases cited supra note 15.
¹⁷ Knight First Amendment Institute v. Trump, 928 F.3d 226 (2d Cir. 2019), vacated as moot sub nom. Biden v. Knight First Amendment Inst. at Columbia Univ., 141 S. Ct. 1220 (2021).
¹⁸ Miami Herald Publishing Co. v. Tornillo, 418 U.S. 241, 258 (1974) (In a challenge to a law giving political candidates a right to reply to criticism, the newspaper was found to exercise editorial control and to be more than a passive receptacle or conduit for news, comment, and advertising. The law violated the function of editors by forcing them to print content that they would not otherwise include.).
¹⁹ Stratton Oakmont, Inc. v. Prodigy Services Co., No. 31063/94, 1995 N.Y. Misc. LEXIS 229, at *1 (N.Y. Sup. Ct. May 26, 1995); compare Cubby, Inc. v. CompuServe Inc., 776 F. Supp. 135 (S.D.N.Y. 1991) (CompuServe was found not liable for defamatory content posted by users because it allowed all content to go unmoderated and lacked editorial involvement; as such, it was considered a distributor rather than a publisher.).
²⁰ 47 U.S.C. § 230(c)(1); Zeran v. America Online, Inc., 129 F.3d 327 (4th Cir. 1997), cert. denied, 524 U.S. 937 (1998); see Barnes v. Yahoo!, Inc., 570 F.3d 1096 (9th Cir. 2009) (an internet service provider cannot be held responsible under Sec. 230(c)(1) for failure to remove objectionable content posted to its website by a third party); but see Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157 (9th Cir. 2008) (Roommates.com was considered an information content provider, rather than a service provider, because it created or augmented content, and was ineligible for protection under Sec. 230).
²¹ See Doe v. MySpace, Inc., 528 F.3d 413, 420 (5th Cir. 2008).
²² 47 U.S.C. § 230(c)(2); see Murphy v. Twitter, Inc., 60 Cal. App. 5th 12, 274 Cal. Rptr. 3d 360, 375 (2021).
²³ Perry Educ. Ass’n v. Perry Local Educators’ Ass’n, 460 U.S. 37, 45 (1983).
²⁴ Clark v. Community for Creative Non-Violence, 468 U.S. 288, 295 (1984).
²⁵ Mastrovincenzo v. City of New York, 435 F.3d 78, 101 (2d Cir. 2006).
²⁶ Saia v. New York, 334 U.S. 558, 562 (1948).
²⁷ Sable Commc’ns of Cal., Inc. v. FCC, 492 U.S. 115, 126 (1989).
²⁸ Heffron v. Int’l Soc’y for Krishna Consciousness, Inc., 452 U.S. 640, 647 (1981).
²⁹ 47 U.S.C. § 230(c)(1).
³⁰ United States v. Alvarez, 567 U.S. 709 (2012).
³¹ Id.; New York Times Co. v. Sullivan, 376 U.S. 254 (1964).
³² Id.
³³ Brandenburg v. Ohio, 395 U.S. 444, 447 (1969).
³⁴ Feiner v. New York, 340 U.S. 315, 320 (1951).
³⁵ Chaplinsky v. New Hampshire, 315 U.S. 568, 571–72 (1942).
³⁶ Virginia v. Black, 538 U.S. 343, 359–60 (2003).
³⁷ Id.
³⁸ Cent. Hudson Gas & Elec. Corp. v. Pub. Serv. Comm’n, 447 U.S. 557, 561 (1980).
³⁹ Id.; Langdon v. Google, Inc., 474 F. Supp. 2d 622, 630 (D. Del. 2007).
⁴⁰ Langdon, 474 F. Supp. 2d at 630; Zhang v. Baidu.com, Inc., 10 F. Supp. 3d 433, 443 (S.D.N.Y. 2014) (the decision to block certain search results was not commercial speech because it related to matters of public concern).