By Sam Fiske (CMC ’21)
Introduction
Social media websites that originated as platforms to connect with friends and family have evolved into some of the most intense, poorly regulated battlegrounds for political discourse. Companies that began in dorm rooms and garages have blossomed into Silicon Valley giants, wielding unprecedented power to shape public opinion and gnawing away at the deliberative core of our democracy. At the same time, false information on these platforms has skyrocketed.
In this paper, I will examine the rise of fake news, its legal protections, and its corrosive effects on democracy. My subsequent examination of potential remedies ultimately endorses solutions that target the behavior of users rather than the content of our speech. I demonstrate that though the consequences of fake news are dangerous, the consequences of regulating fake news can be far worse.
I. Defining Fake News
While social media has led to a recent surge of interest in fake news, the phenomenon of misinformation itself is not novel. Misinformation has been spread for personal gain for as long as news has been circulated. Federalists who disagreed with Thomas Jefferson used newspaper publications to spread rumors that he had died in a bid to elect John Adams instead. In 1896, a presidential candidate started his own newspaper to combat “an epidemic of fake news” in the United States. In the 1980s, the KGB was instrumental in spreading rumors that AIDS was a human-made bioweapon. False information has always been used to shape public opinion.
Modern technology, however, has completely changed the way we create and consume information. Recent technological advances in media are exactly why the current fake news phenomenon is so dangerous to our democracy. News no longer rolls off of a printing press in someone’s basement to be handed out on the streets; rather, it is now created on the internet and shared instantaneously with thousands of people. Modern misinformation is spread at an unprecedented speed to a vast audience, and fake news stories regularly reach as many readers as Fox News, CNN, or the New York Times. There is also a substantial lag between the publication of false information and its debunking, meaning that stories can become wildly popular before being flagged as untrue.
Recent technological advances not only allow anyone to create and share a fake news story, but also encourage these stories to spread like wildfire. Only a few decades ago, traditional media and academic institutions monopolized the output of information, but the internet has since completely disrupted their authority to decide what makes a newsworthy story. Though social media and the internet at large serve a purpose in sharing ideas that might have been overlooked in the past, this transformation of the modern media landscape has also provided a platform to a litany of amateur reporters, conspiracy theorists, and outright conmen. In the words of researchers Nancy L. Rosenblum and Russell Muirhead, “as priestly epistemic authority over the word of God was displaced by the fifteenth-century invention of the movable-type printing press… so contemporary authorities have been sidelined by digital technology.”
So, what is fake news? Indeed, the popularity of the phrase “Fake News” is relatively new. Donald Trump did not create what he called “one of the greatest of all terms I’ve come up with,” but he has been instrumental in bringing the phrase into the modern lexicon. Trump tweeted about fake news 180 times during the first year of his presidency. However, many of his references fail to align with the conventional definition of fake news, which Washington Post columnist Margaret Sullivan defines as “fabricated stories intended to fool you.” His attacks are mainly directed at well-established news organizations such as Time, The Washington Post, CNN, and CNBC. Thus, recently the term “fake news” has evolved to include a wide range of information, some of which accurately reflects the world around us and some of which fails to do so. In the last few years, the phrase has increasingly become weaponized jargon, used to attack the authority of media institutions and straying far from its original meaning. Modern “fake news” is concerned with genuine falsity, but it also appears concerned with who is sharing information, what their political affiliation is, and a number of other factors that do not directly contribute to truth. In some instances, the phrase is plainly used as a rebuttal of disagreeable facts. This categorization of legitimate news as “fake” is as dangerous as it is unprecedented.
At the same time that the definition of fake news has been challenged, there has been an increase in genuinely false information on social media platforms. For this reason, the importance of creating a concise, accurate definition of fake news cannot be overstated. I will define fake news as knowingly false news stories that are presented in a context in which they are expected to be believed and intended to deceive their audience. This definition has been informed by researchers, such as Hunt Allcott and Matthew Gentzkow of Stanford University and their economic analysis of fake news and the 2016 American election cycle, government reports, such as the UK’s House of Commons 2019 report describing disinformation, and philosophers, such as Neo-Kantian ethicist Seana Shiffrin. Thus, in order to be considered fake news, the story must meet three criteria:
- The creator knows what they are writing is untrue.
- The story must be presented in a context where consumers expect to receive true information.
- The story must be created for the purpose of deceiving people who consume it.
Such a definition notably excludes two major sources of false information: satirical news and honest reporting mistakes. The former category — such as articles from The Onion — is not considered fake news because such pieces are not meant to be believed. Drawing on irony and overt exaggeration, satirical news is defined by its humor. The Onion, and organizations like it, choose to forgo an accurate representation of the world in the name of comedy. The intention of the authors is not to deceive their readers, but rather to make them laugh. If someone actually believes the article, it is not because The Onion has successfully duped them, but because that person has failed to do a minimal degree of due diligence. Any legitimate deception is accidental; it’s a joke taken too seriously.
In the latter case, when Time Magazine falsely reported that the Trump administration had removed a bronze bust of Martin Luther King Jr. from the Oval Office, it did so without knowing the falsity of that statement. Such a case represents an honest reporting mistake. Although the consequence of the mistake was an untrue story, the intention was to share the truth. It fails to qualify as fake news because the deception was accidental. Although a reporting mistake from a large, reputable organization like Time Magazine may have more of an impact than many fake news stories, the key difference comes from the intention of the creator. Despite the falsity of its claims, the story was created to reflect an accurate depiction of reality. Again, although the outcome was deception, the intention was to share accurate information.
The decision to exclude unintentional deception not only aligns with the definitions established by other researchers but also resonates strongly with our conventional sense of lying. The intention of creators is important because it emphasizes the ethical issues that arise from deceit. A fake news author chooses to use their platform to manipulate and instrumentalize their readers. Conversely, satire and honest reporting mistakes are done in good faith.
A legitimate example of fake news can be seen in an article titled “‘Tens of Thousands’ of Fraudulent Clinton Votes Found in Ohio Warehouse.” This article, written by a twenty-three-year-old named Cameron Harris in 2016, reached over six million internet users. Although it is impossible to measure exactly how many people were convinced by the article, the sheer volume of engagement suggests that its headline proved compelling. After writing the article in fifteen minutes, Harris purchased the web domain “ChristianTimesNewspaper.com” to increase the piece’s legitimacy and ensure that it would be taken seriously. When asked about his intentions, Harris claimed that he only wrote the false story for advertisement money. He knew that he would earn lucrative user traffic by choosing such a sensational headline, asserting that “Given the severe distrust of the media among Trump supporters, anything that parroted Trump’s talking points people would click.” Harris created a story people wanted to believe and exploited his audience’s political bias for profit. This case falls under the umbrella of fake news, as Harris created a story that was untrue and presented it for the purpose of deceiving his audience.
Harris represents only one of many who wield social media platforms to share fake news for financial or political gain. For example, an article titled “WikiLeaks Confirms Hillary Sold Weapons to ISIS” garnered nearly eight hundred thousand engagements prior to the 2016 election. Similar to Harris, the author of this article bought a plausible web domain and preyed on the bias of his anticipated audience. Websites like nationalreport.net, endingthefed.com, thepoliticalinsider.com, and abcnews.com.co all proliferate fake news, but use domain names that seem reliable at face value to maintain plausibility, which demonstrates just how easy it is to create viral false stories. If a social media user only briefly visits these sites (or even just glances at the name of the organization) they will see something that resembles a trustworthy source. Unless they spend a few minutes digging through the website and checking its claims against other reputable sources, users can be fooled. To distort the opinion of the American electorate, authors rely on a compelling story, a deceptive domain name, and an existing narrative of bias to exploit.
This is the exact strategy utilized by Russian agents in 2016. Tapping into the existing fears and prejudices of American voters, Russians used social media as an instrument to sow discord. A single Russian firm alone created misleading or false content that reached around 150 million Facebook users. A Senate report found that this campaign not only influenced the 2016 presidential election but also “harm[ed] Hillary Clinton’s chances of success and support[ed] Donald Trump at the direction of the Kremlin.” The penetrative nature of this interference suggests that Russia, or any other foreign influence, could substantially affect elections in the future. Fake news, in addition to creating confusion and distrust, directly challenges the sanctity of democracy via foreign influence.
II. How Democracy is Threatened by Fake News
Fake news forms epistemic dissonance, creating different standards of truth among parties. Everyone has access to the same information, but that information offers different value to different people — what is true to one person may not be true to another. In other words, fake news has eroded our conception of “collectively trustworthy” information. It creates a “condition in which some inhabit a world where their common sense tells them that it is absurd to suppose Hillary Clinton’s campaign chairman is running an international child sex ring from a pizzeria in northwest Washington, DC,” while others inhabit a world in which that claim seems plausible. When the ability to agree on the realities of a situation is absent, we can no longer reach a mutually acceptable outcome through discussion. Without a basic set of facts to cement a disagreement, such differences in opinion become irreconcilable.
The collapse of productive dialogue between parties is important because the survival of democracies requires a degree of mutual tolerance. As Steven Levitsky and Daniel Ziblatt argue in How Democracies Die, unwritten rules of democracies, such as the peaceful transition of power, are essential to their sustainability. They contend that without the norm of mutual tolerance — “the idea that as long as our rivals play by constitutional rules, we accept that they have an equal right to exist, compete for power, and govern” — democracy will collapse. Fake news destroys mutual tolerance by eroding our collective understanding of issues. It is not only that people are not on the same page — they are reading totally different books. In creating these multiple realities, fake news threatens our ability to reason with each other.
The Mueller investigation demonstrates exactly how far removed average citizens are from the government agents who conduct investigations and deliberate over the significance of certain evidence. The vast majority of people will not see the legal documents or hear the testimony that experts use to come to a conclusion. Rather, they must rely on expert knowledge and assume its validity because it comes from a trusted, knowledge-producing institution. However, fake news undermines that validity. A false article claiming that the Mueller investigation proved that Trump had committed perjury, for instance, creates an alternative narrative that can undercut the authenticity of expert knowledge. To some, the Mueller investigation loses its truth value, whereas for others, it maintains its value. It is not that people can’t agree, but rather that they can’t even begin a discussion because there’s no foundation of common interpretation. Thus, the greatest destructive feature of fake news is that “we are embattled on our sense of what it means to know something.” Political questions are distorted into epistemic crises with no common answer.
In addition to eroding the deliberative core of democracy, fake news necessarily misleads our political decision-making. When fake news infiltrates the voting consciousness, certain voters make decisions based on truth while others cast their vote from falsehoods. And because the preferences of some individuals are predicated on lies, electoral results cannot be an accurate reflection of the real interests of a nation. Given that voters use information to guide their political preferences, accurate information is paramount to establishing legitimate political preferences. Fake news drives a wedge between the preferences of voters and the political expression that would allow them to best realize those preferences. In short, fake news prevents voters from consistently choosing the best candidate for them. This line of reasoning does not imply that fake news is the only explanation for why electoral results fail to accurately reflect the legitimate interests of voters, but it does contend that fake news contributes to that dissonance.
And although voters have undoubtedly been misled in the past, fake news has a new, profound ability to change the minds of voters. What is unique about the current state of fake news is both its reach and its ability to offer compelling information targeted on an individual basis. While Thomas Jefferson’s supporters may have been successful in fooling some voters with a false article, they did not have the revolutionary technology that companies like Facebook use in their political advertisements. As one Facebook employee put it, Facebook’s technology “allows politicians to weaponize our platform by targeting people who believe that content posted by political figures is trustworthy.” This “micro-targeting” advertising strategy increases the effectiveness of campaigns and allows politicians, PACs, and nonprofits to take advantage of the 1.62 billion users who visit Facebook daily. Fake news, supported by powerful social media technologies, is without historical parallel in its potential to shape the minds of its audience.
Fake news is a grave threat to democracy everywhere. It not only corrupts the epistemic foundation of democratic debate, but also skews the preferences of a voting electorate so that their vote no longer reflects their true interests. And though false information has been wielded for political ends for centuries, the recent developments in technology and social media pose a novel set of problems. As Facebook and Twitter become the most powerful modern tools for communication and political debate, they are also morphing into ideal platforms for attempts to affect U.S. electoral results undertaken from a single computer in a different country. The mutual tolerance, trust, and discourse required to sustain a liberal democracy are in peril.
III. Legal Protections of Fake News
In the United States, speech is broadly protected by the First Amendment, which forbids the government from “abridging the freedom of speech, or of the press.” Although fake news can be harmful, it falls under the same protection because it is still considered an expression of speech. As the Supreme Court has recognized, monitoring the content of speech raises difficult ethical questions of censorship. The Court has established precedent that aims to promote the free exchange of ideas, regardless of their falsity. As Justice Brennan recognized, tolerating certain falsehoods is essential to keeping democratic debate “uninhibited, robust, and wide-open.”
New York Times v. Sullivan, a 1964 Supreme Court case, affirmed such a philosophy. The libel lawsuit was brought by L. B. Sullivan, the Montgomery, Alabama city commissioner who supervised the police department, after the Times ran a full-page ad that included several false statements about police actions in response to peaceful protests. Although Sullivan won the case at the state level, the verdict was later overturned as the Times argued that it 1) had no reason to believe the ad contained false information, and 2) had no intention of damaging the reputation of the Montgomery Police Department. The Supreme Court ruled that defendants would only be required to pay damages if the plaintiff could prove that they acted with “actual malice.” Put another way, Sullivan had to demonstrate that the Times publishers either knew the statement was false at the time of publication or acted with reckless disregard as to its falsity.
New York Times v. Sullivan established an incredibly difficult legal standard, but the Court’s decision was carefully calculated. The ruling effectively created a buffer of tolerance for some false information in the media and provided media companies ample room for honest reporting mistakes. The Court reasoned that encouraging open debate about public officials was more important than policing well-intentioned falsehoods that might damage the reputation of those officials. In order to protect robust political discussion, Justice Brennan realized that Americans must tolerate “sometimes unpleasantly sharp attacks on government and public officials.” Had the Supreme Court sided with Sullivan, the ruling could have resulted in a dangerous suppression of journalism. Because reporters cannot verify every claim they publish, they would hesitate to report anything for fear of the expensive lawsuits that would follow from mistakes. Additionally, public officials would be emboldened to bring lawsuits against media companies over any disagreeable article, thus weakening the free press and impeding the free exchange of ideas.
In 2012, United States v. Alvarez further solidified protections for false speech, contributing to the protective legal framework of the modern fake news epidemic. Alvarez did not deal with media organizations, but rather addressed the content of speech directly. The case involved Xavier Alvarez, a member of the Three Valleys Municipal Water District board, who falsely claimed to have received the Congressional Medal of Honor while introducing himself to the board. After it was revealed that Mr. Alvarez had lied about his achievements, he was charged and convicted of violating the Stolen Valor Act of 2005, a federal law that prohibited people from falsely identifying themselves as recipients of military awards. However, Alvarez appealed the ruling on the basis that the Stolen Valor Act unconstitutionally violated the free speech protections granted under the First Amendment.
Eventually, the case made its way to the Supreme Court, where Alvarez prevailed. Because the Stolen Valor Act regulated the content of one’s speech, it was subjected to strict scrutiny analysis, the highest level of judicial review. This legal standard is well established by cases like Ashcroft v. American Civil Liberties Union, in which the justices argued that the First Amendment commands “that content-based restrictions on speech be presumed invalid . . . and that the Government bear the burden of showing their constitutionality.” In order to pass such scrutiny, 1) the United States government must have a clear, compelling interest in the application of the law, and 2) the application must be narrowly tailored. There are some notable categories of speech whose content-based regulation has survived constitutional challenge, such as “incitement, obscenity, defamation, speech integral to criminal conduct, so-called ‘fighting words,’ child pornography, fraud, true threats” among others, but false speech simply fell beyond the scope of legal precedent before 2012. Alvarez was unprecedented because it raised questions regarding content-based regulation on the basis of falsity alone.
The Court found that the United States government did have a compelling interest in protecting the integrity of military awards, but had no way of narrowly applying the law. In other words, the Stolen Valor Act satisfied the first requirement of strict scrutiny analysis but not the second. The majority also argued that false statements on their own do not present the grave and imminent threats created by other types of speech, such as falsely shouting fire in a crowded theater and causing panic. The key difference is the context of the speech; Alvarez established that false speech must be combined with a situation in which it directly poses danger to warrant regulation. The Stolen Valor Act had been drafted vaguely, prohibiting some false speech that posed no legitimate danger to the nation, thereby unnecessarily and unconstitutionally limiting speech. As a result, the justices reasoned that even when a statement is an intentional, obvious lie, “falsity alone may not suffice to bring the speech outside the First Amendment.” Of course, if we combine false statements with contexts that warrant regulation (such as there not actually being a fire in the theater), they may fall under the categories of speech I have mentioned that have passed strict scrutiny analysis. However, holding all else equal, United States v. Alvarez affirmed that false speech has the same legal protections as true speech.
The Communications Decency Act (CDA) has also provided tremendous protections to private technology companies for the content shared on their platforms. Originally passed in 1996 to limit minors’ access to internet pornography, the CDA is most relevant here for its clause regarding platform users and platform creators. Section 230 states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This practically “immunizes online content providers from liability for unlawful user-generated content.” Even if a plaintiff is successful in collecting damages from a fake news article, the platform on which it was created and shared cannot be held responsible. For example, though a media organization can be successfully sued for libel for publishing a knowingly false article, Facebook, Twitter, or any other website cannot be sued for their role in allowing the story to spread. As a result, the CDA has effectively upheld the internet as a safe haven for free speech. Unfortunately, private media companies have no legal obligation to combat false information because they cannot be held liable.
Recently, the CDA became a point of contention between Silicon Valley and the Trump administration when Trump issued an executive order in May 2020. The administration argued that when websites like Twitter or Facebook choose to censor some information, they are “engaged in editorial conduct” and should not be afforded the protections of a neutral publisher. However, this executive order did not have much bearing on the implementation of the CDA. Beyond granting legal immunity to platforms for the content users share, Section 230 also “guards [interactive internet service providers] when making decisions on removing objectionable content.” Platforms are shielded for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be … objectionable, whether or not such material is constitutionally protected.” Due to the intentional ambiguity surrounding the phrase “good faith,” this stipulation is rarely tested in court.
Times v. Sullivan, United States v. Alvarez, and Section 230 of the CDA combine to create several layers of legal protection for fake news publications and the websites that share them with users. Still, certain fake news articles do constitute libel and these plaintiffs have good reason to seek legal action. Libel is not a crime in most states, meaning fake news publishers will not face criminal prosecution; they can, however, be sued in civil court, an area of law called torts. Losing in a tort lawsuit will typically result in monetary compensation. Under the right conditions, these lawsuits can be successful, but fake news publishers have several defenses.
Firstly, many false news “organizations” are able to hide under the guise of parody. By “readily identif[ying] itself as a source of fiction, parody, or satire,” a website exempts itself from the scrutiny faced by legitimate news organizations. Once a fake news site is distinguished from legitimate news organizations in this way, libel claims lose efficacy. Secondly, many fake news articles may not have a specific victim. For example, the article entitled “Pope Francis Shakes World, Endorses Donald Trump” garnered almost a million engagements and likely benefited the Trump campaign, but it only indirectly harmed the Clinton campaign with false statements. Likewise, articles that generally target the government or a political party cannot support a libel claim, since torts compensate for individual harm. Public figures within a group might sue the publishers, so long as individual harm is demonstrated. But under the standard created by New York Times v. Sullivan, public officials are less likely to be successful in these lawsuits.
Consequently, this legal precedent gives media outlets more freedom to seek truth and critique our government without fear of retaliation, but it also provides less legal recourse against fake news organizations that spread misinformation without a clear target. New York Times v. Sullivan provides the sort of protections we expect in a well-functioning democracy, but it also limits our capacity to lessen the amount of false information on the internet. And as Russia demonstrated in the 2016 election, fake news campaigns can have foreign roots. Because libel is a matter of tort law, involving “non contractual harm between two private individuals,” it does not afford the same extradition powers as criminal law. In that sense, foreign powers are effectively immune from the consequences of their untruths unless Congress or the executive office chooses to issue sanctions against them.
IV. Attempted Solutions
Given the serious consequences of fake news, it has been argued that content-based restrictions on it could pass the strict scrutiny standard applied under the First Amendment. In other words, if Congress were to pass a law forbidding fake news, there is a chance that it becomes one of the rare bills that survives strict scrutiny analysis. The United States government clearly has a real interest in maintaining the integrity of our electoral process; preventing foreign influence, mitigating voter fraud, preserving a marketplace of ideas, and guaranteeing that voters are supporting appropriate candidates are all important features of a healthy democracy that are threatened by fake news. And so long as Congress is able to pragmatically apply the law, it could potentially meet the narrow tailoring requirement established by judicial precedent. If legislation were focused enough “to target only a specific breed of false speech-that which subverts the election process and has no chilling effect on protected speech,” there is a plausible path to constitutionality.
For example, instead of banning fake news outright, researchers have suggested false speech restrictions in the few months before a major election, since that is when fake news has been most prevalent in the past. However, any regulation, whether partial or full, raises serious questions about what “outlawing” fake news actually looks like. While appealing at face value, such a law would invariably create problems surrounding the legitimate arbiters of truth in the media, undermining its effectiveness and threatening democracy in new ways.
When Singapore passed a fake news law in 2019, it seriously harmed free political speech. This law not only included prison terms and hefty fines for the online publication of false information, but also afforded the government significant leeway in determining what constitutes falsehood. When the law was first invoked, it targeted a member of the opposition party who had published an online article “that questioned the governance of the city-state’s sovereign wealth funds.” After review, the government required the opposition member to include the government’s position at the top of the post and reposted a screenshot of the article with a comically large “false” written across the page. While the falsity of the article is disputed, the opposition member maintained that it did not include untrue statements, suggesting that the law may have been used to silence criticism rather than promote a healthier marketplace of ideas. This is especially problematic because “Government ministers can decide whether to order something deemed fake news to be taken down, or require a correction to be put up alongside it” with total autonomy. In an attempt to mitigate the effects of fake news, the government officials of Singapore have given themselves unprecedented power over the information their population consumes. And Facebook and Google, companies that strongly opposed the bill, were forced to abide by the law if they hoped to maintain their user base in Singapore.
While there exist legitimate reasons to lessen the influence of fake news, allowing the government to do so would undoubtedly damage the protection of free speech, especially political speech. Government actors would be incentivized to ban unfavorable publications and promote politically useful misinformation. In an even more concerning scenario, incumbent politicians might use their office to suppress their opponents, raising serious ethical questions about centralizing the operational definition of “truth” in a single government entity or committee. If governments are entrusted to address fake news without being held accountable, officials could shape the political reality of their nations.
Another possible way to address fake news would be to entrust private technology companies to ban fake news on their platforms to appease the public. Recently, companies like Facebook and Twitter have been leading the way in combating false information, especially when it comes to information regarding public health concerns. However, as each company struggles to find the best solution for its platform, an outright ban seems improbable. For example, when Facebook contracted human fact checkers to monitor articles in 2018, they simply failed to keep pace with the monstrous flow of information on the platform. Facebook has also experimented with algorithms that automatically detect clickbait articles, but these algorithms still make serious mistakes in identifying false articles. Further, though algorithms can identify machine-generated text in fake news, human-written articles are far more challenging to detect because even the engineers building the detectors can be fooled by them. Unless social media companies prioritize some sources over others when fact checking, an effective false news algorithm may never materialize. Even then, prioritizing some sources over others may lead to biased code that gives more legitimacy to certain ideologies. And importantly, because technology companies must cast a wide net to prevent the spread of misinformation, most solutions will suppress true stories in the hopes of stemming false narratives. The censorship of truth is a significant cost, one that companies should be wary to incur.
Another popular solution is to entrust users to flag information that they deem to be false. Under this model, when a user sees information they believe to be untrue, they mark it as false, and the flagged content is then sent to a third-party fact-checking organization. If certain content is never flagged for inaccuracy, it is never reviewed. Facebook recently implemented this model, though the platform does not share data regarding the efficacy of its user-based flagging system. The system ultimately relies on the good intentions of users, however. There is nothing stopping users from flagging information they simply disagree with, and if users decide to erroneously label information as “false,” the system of fact-checking falls apart. Additionally, although trusting users to flag false information limits the amount of content sent to fact-checkers, Facebook is still inundated with a continuous flow of information that it cannot fully process. Even when a post is deemed “false,” that rating can be appealed and overturned. And, somewhat counterintuitively, content labeled “false” remains viewable on Facebook. The only penalty for false information is relocation to a lower spot in the “newsfeed,” decreasing the impact of fake news but ultimately failing to eliminate the issue.
V. Proposed Solutions
The idea of banning fake news offers a quick and absolute solution to the epistemological epidemic that currently plagues social media, but it is a solution that ultimately creates more problems than it solves. The day fake news is banned will be a grim day in the history of free speech, and a serious threat to democracy everywhere. Rather than swiftly eliminating the costs of misinformation, recent attempts to limit fake news have done little beyond entering the murky philosophical waters of epistemology and paternalism. If we allow governments to classify and manage fake news with impunity, dominant political parties will be able to shape the political realities of users. Likewise, if we allow technology companies to filter fake news according to their preferences, there is little evidence to suggest that those companies will be able to present an unbiased (or even effective) way to limit the influence of false information on their websites. By centralizing the classification of falsity to technology companies or the government, Washington or Silicon Valley will become the omnipotent arbiters of truth. In doing so, the range of ideas presented to users will invariably be constrained. Until our technology is able to distinguish fact from fiction with consistency and precision, the suppression of fake news necessarily means the suppression of truth. Banning fake news has serious appeal, but the potential suppression of true speech would outweigh any gains. Because social media websites have become integral to informing citizens about the state of the world, we must ensure that the greatest amount of information is available, even if that means users may struggle through false information. Still, improvements can and should be made. Instead of restricting content, users and technology companies should share responsibility for determining the validity of online information.
Even if there is no silver bullet to address the effects of fake news, a careful combination of practices will make a substantial difference.
First, Congress could pass source disclosure requirements for all advertisements on social media. This would require organizations like Facebook and YouTube to clearly identify which organization is funding a specific advertisement, thus making users more aware of its potential political leanings. Research has indicated that “disclosure requirements for paid online content might enable viewers to make more accurate inferences about its truth” while promoting free speech. As it stands, foreign influences rely on Facebook’s laissez-faire enforcement of its own advertisement disclosure requirements. For instance, researchers from New York University found that over thirty-seven million dollars of online advertisements on Facebook, which “represent[ed] 55 percent of all pages with political ads during the study period,” simply failed to meet the source disclosure requirements established by Facebook. Researchers also found that it took Facebook an average of 210 days to shut down these pages, which often intentionally disguised themselves as clickbait pages to “secretly promote their interests.” Source disclosure requirements represent a simple, inexpensive way to mitigate the influence of misinformation without hampering the free flow of ideas online. Organizations should be identified with the ideas they are paying to promote.
While this solution is incomplete at best, it does take a step toward the sort of transparency that is sorely lacking from online advertising. Unfortunately, source disclosure requirements would not mitigate the harm done by bot accounts (fake users that automatically post false information), heavily shared articles posted under the guise of a legitimate profile, or articles written by real people. The Facebook groups, memes, and misleading videos created by the Russian agents who interfered in the 2016 election similarly fall outside the domain of advertisements. So while source disclosure would be an excellent first step in reducing the influence of false and misleading advertisements on social media, a huge range of influential content would remain entirely unaddressed.
Second, technology giants should prompt users to read articles before sharing them. Rather than focusing on inflammatory headlines, this policy redirects users’ attention to the article as a whole, challenging them to consider its validity more seriously. In June of 2020, Twitter adopted this exact policy. Recognizing the viral nature of misinformation on the platform, Twitter product lead Kayvon Beykpour said that sharing articles is “powerful but sometimes dangerous, especially if people haven’t read the content they’re spreading.” Rather than policing individual pieces of content for falsity, encouraging users to fully understand the content they share encourages self-policing. Social media companies would not need to monitor millions of posts each day themselves; they would instead decentralize that task by shifting responsibility to users. Beyond limiting the spread of false information, this approach has two serious benefits. First, there is the practical benefit of circumventing the difficulty of policing fake news: if users become more aware of their own tendency to erroneously share false information, reliance on algorithms and fact-checkers is lessened. Second, this solution has the potential to lessen the influence of fake news without resorting to content regulation. Social media companies can address the issue of false information while avoiding the suppression of free speech.
However, like source disclosure requirements, prompting users to read articles has its limits. There are no penalties for ignoring the prompt, nor is there any way for social media companies to verify that a user actually read the article; users can simply dismiss the prompt and post anyway. Additionally, for individuals who feel that social media companies are already unfairly targeting them, the request to read an article before sharing it might be seen as an unwelcome challenge. Policies like these, among other recent events, are exactly what is driving users to join fringe social media apps with less stringent policies regarding false information. Still, encouraging users to become more knowledgeable about the information they share will likely challenge them to fact-check their own sources and avoid fake news. Again, this solution is not complete, but it does show promise.
Lastly, technology giants should be required to develop a fake news identification training program for their user base. To continue using a Facebook account, for example, the user would be required to complete a brief course explaining how to verify sources and identify misleading information; new users would likewise complete the training in order to activate their accounts. This course would focus on basic research techniques and require a degree of demonstrated proficiency before allowing the user to post on a given platform. Much like earning a driver’s license to operate a motor vehicle, a basic test of internet research competency would be required to operate a social media account. Because the actions of one user necessarily affect others, a fake news course is one reasonable way to address the collective costs that come with misinformation. The responsibility to distinguish truth from fiction would ultimately fall on users rather than developers, but users would have the knowledge needed to be successful. Instead of relying on media companies to filter fake news, the focus would remain on users. Such a measure would be far less expensive than policing every piece of information that users create and share, and would allow smaller tech companies to address misinformation without the tremendous resources of their larger competitors. And again, giving users these tools would mitigate the corrosive consequences of fake news without limiting free speech; social media companies would not have to impose content-based speech restrictions to address fake news. These provisions would lessen the influence of fake news while circumventing the constitutional questions that arise when the government monitors speech.
Without centralizing the power to separate truth from lies, source disclosure requirements, prompts to read articles before sharing, and a basic education in identifying fake news together have the power to drastically reduce the corrosive effects of fake news. Most importantly, all of these solutions can be implemented today, without government regulation. If users demonstrate an interest in lessening the influence of fake news and social media companies recognize their social responsibility to improve the communities they build, users and platforms can work together to address online misinformation.
VI. Conclusion
As the coronavirus pandemic rages worldwide, false information about the virus rages online. Some users go so far as to claim that the virus is a man-made bioweapon. Alex Jones, the infamous creator of Infowars, sold toothpaste that he claimed kills COVID-19, profiting from the American public’s fear and stoking hope in a false cure. A post that reached approximately 1.5 million people claimed that Dr. Anthony Fauci, a member of the coronavirus task force, is part of a secret group opposing the president. Followers of QAnon, a far-right conspiracy movement whose adherents believe that the world is controlled by a cabal of elite Satan-worshipping pedophiles, have used their massive online presence to insist that the virus does not exist. Even though websites like Facebook, Twitter, and YouTube have made commendable efforts to promote the truth regarding this fatal virus, false information continues to thrive on their platforms. Like an insidious stain, it continues to seep into the fabric of civil debate.
Though I have argued that the greatest threat of fake news is its ability to undermine the deliberative core of democracy, the current pandemic has shown how misinformation can directly lead to the deaths of thousands of innocent people. False information about the pandemic raises serious ethical questions about the wrongs of lying and the role of social media companies in our society. The importance of mitigating the influence of false information has never been greater. But though fake news poses grave threats, the danger of censoring false information may be equally, if not more, serious. We must trust the ability of others to find truth. We must have faith in knowledge. As Justice Anthony Kennedy argued, “The remedy for speech that is false is speech that is true… The response to the unreasoned is the rational; to the uninformed, the enlightened; to the straight-out lie, the simple truth.”