Soksamnang Lim (PO’23)
The recent Iowa Caucus marks a significant milestone in America’s transition into a digital democracy, defined as the use of information and communication technology in political and electoral processes. The Caucus’ delayed results, caused by a poorly tested polling app, demonstrate both the growing role of technology within America’s election process and the precautions America should take before allowing technology to automate its elections. Adapting American politics to technology’s rapid advancement has been an arduous task, plagued by issues of security, inefficiency, bugs, and inexperience.
The expansion of technology into politics has the potential to manipulate democracies, as demonstrated by Cambridge Analytica’s involvement in the 2016 U.S. Presidential election and in an additional 100 campaigns across the globe. Yet the internet has also given citizens unprecedented access to information, which could allow the citizens of functioning democracies to hold their governments accountable. In 2018, the Pew Research Center found that 20% of the U.S. population relied on social media for their news. With such a significant portion of the population getting its information this way, social networks and individualized timelines hold considerable influence in shaping political ideology.
In a chart tracking American political values over the last two decades, the Pew Research Center found that, beginning in 2011, the share of Americans with moderate values decreased while the shares holding ideologically consistent political values (consistently liberal or consistently conservative) significantly increased. Among politically engaged Americans, this polarization is noticeably more pronounced. While political polarization cannot be attributed solely to the rise of social media over the same period, social media’s role as a platform where Americans obtain news and engage in political discussion may be a contributing factor.
Our preference to associate with people and organizations that share our ideas, in both our digital and non-digital social networks, reinforces epistemic bubbles: informational networks that leave out relevant outside voices and the information they carry. According to a study by Eytan Bakshy, a member of the Facebook Data Science Team, the algorithms that maintain user engagement by cultivating relevant content do reinforce epistemic bubbles; however, the algorithm lowered a user’s chance of seeing politically cross-cutting stories by only 1%. The same study found that for liberals, 23% of the political news shared by their friends is conservative, while only 20% of the pieces they click on are conservative. For conservatives, 34% of the political content shared by their friends is liberal, while 29% of the pieces they click on are liberal. In other words, Facebook’s algorithms play a smaller role in developing epistemic bubbles than our individual habits do: our preference to read content that shares our opinions does more to sustain a social network that filters the information on our timelines. Popping this epistemic bubble could be as easy as reading a dissenting article.
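To put the two filters side by side, here is a short Python sketch using only the figures reported above; comparing percentage-point drops this way is a simplification of the study’s analysis, and the labels are mine, not the study’s:

```python
# Share of cross-cutting (opposite-side) political content at two stages,
# per the Bakshy study figures cited above.
exposure = {
    "liberals":      {"shared_by_friends": 0.23, "clicked": 0.20},
    "conservatives": {"shared_by_friends": 0.34, "clicked": 0.29},
}
ALGORITHM_EFFECT = 0.01  # the ~1% reduction attributed to feed ranking

for group, stages in exposure.items():
    # How much cross-cutting content users filter out by their own clicks
    choice_effect = stages["shared_by_friends"] - stages["clicked"]
    print(f"{group}: {choice_effect:.0%} drop from user choice "
          f"vs. {ALGORITHM_EFFECT:.0%} from the ranking algorithm")
```

Run on these numbers, the sketch makes the article’s point plain: the drop from a user’s own clicking habits (3 to 5 points) outweighs the roughly 1% attributable to the algorithm.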
Does this mean that social networks are officially acquitted of all responsibility? Not necessarily.
Echo chambers are similar to epistemic bubbles but go a step further: they are social structures in which relevant, and often opposing, voices and opinions are actively discredited. In a digital democracy, the extreme form of discrediting is to charge media sources with deliberately attempting to misinform readers and influence public opinion (colloquially known as “fake news”). Social media platforms have grown to a size where their inability to prevent the distribution of fake news undermines the integrity of campaigns and of American democracy itself. Fake news not only misinforms audiences but also causes members of political parties to distrust ideologically different news sources and dismiss dissenting opinions, further reinforcing echo chambers and exacerbating polarization.
Large social media technology firms are taking different stances on curbing the spread of fake news. On October 30, 2019, Twitter CEO Jack Dorsey tweeted, “We believe political message reach should be earned, not bought,” and announced that political advertising would cease on the platform. While this stops campaigns from potentially distributing misinformation, it does not stop individual accounts from spreading fake news. However, starting in March, Twitter claims it will begin labeling or taking down fake videos and photos using a series of tests that scrutinize a post for fabrication, deceptiveness, and its potential to cause harm. As enforcement, Twitter will display warnings and clarifications to users before they retweet or like such a post, and will prevent the tweet from being recommended.
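These tests amount to a simple decision rule. As a rough sketch in Python, assuming hypothetical field and function names rather than Twitter’s actual system, the logic might look like this:

```python
from dataclasses import dataclass

@dataclass
class Post:
    """A media post under review; field names are illustrative only."""
    significantly_altered: bool   # is the media fabricated or manipulated?
    shared_deceptively: bool      # is it presented in a misleading context?
    likely_to_cause_harm: bool    # could it endanger safety or an election?

def moderate(post: Post) -> dict:
    """Label-or-remove rule built from the three tests the article
    describes (fabrication, deceptiveness, potential to cause harm).
    An assumption about how such a rule could look, not Twitter's code."""
    flagged = post.significantly_altered or post.shared_deceptively
    if flagged and post.likely_to_cause_harm:
        return {"action": "remove"}
    if flagged:
        return {
            "action": "label",
            "warn_before_retweet_or_like": True,   # show a warning first
            "exclude_from_recommendations": True,  # do not amplify the tweet
        }
    return {"action": "none"}

# Example: a doctored video shared deceptively but judged unlikely to
# cause immediate harm would be labeled and demoted, not removed.
print(moderate(Post(True, True, False)))
```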
Facebook CEO Mark Zuckerberg takes a different stance from Dorsey. In an earnings call he stated, “In a democracy, I don’t think it’s right for private companies to censor politicians, or the news,” arguing that fact-checking political ads would be a violation of free speech. Facebook claims it is taking active measures to prevent the spread of fake news by employing contracted fact-checkers, displaying fact-checked alternative sources alongside articles, using machine learning to limit the distribution of fake news, and demonetizing repeat offenders. Once a post has gone through these countermeasures, the company claims its distribution is reduced by 80%. However, it commonly takes three days for contracted fact-checkers to evaluate a post, by which point its virality has already made a significant impact. Furthermore, users are not notified when content they have already read and engaged with turns out to be misinformation.
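Pieced together from Facebook’s public claims, the pipeline might look roughly like the following Python sketch; every name and threshold here is hypothetical, and only the flow (machine-learning pre-screen, human fact-check, roughly 80% distribution cut, related sources, demonetizing repeat offenders) comes from the claims above:

```python
from dataclasses import dataclass, field

DEMOTION_FACTOR = 0.2        # the claimed ~80% reduction in distribution
REPEAT_OFFENDER_LIMIT = 2    # illustrative threshold, not Facebook's

@dataclass
class Publisher:
    strikes: int = 0
    monetized: bool = True

@dataclass
class Post:
    author: Publisher
    rank_score: float = 1.0                    # drives feed distribution
    related_articles: list = field(default_factory=list)

def review(post: Post, looks_false, fact_check) -> Post:
    """looks_false: machine-learning pre-screen; fact_check: contracted
    human reviewers returning (verdict, sources). Both are stand-ins."""
    if not looks_false(post):
        return post
    verdict, sources = fact_check(post)        # commonly takes ~3 days
    if verdict == "false":
        post.rank_score *= DEMOTION_FACTOR     # cut distribution by ~80%
        post.related_articles = sources        # surface fact-checked context
        post.author.strikes += 1
        if post.author.strikes >= REPEAT_OFFENDER_LIMIT:
            post.author.monetized = False      # demonetize repeat offenders
    return post

# Usage with trivial stand-ins for the ML model and the fact-checkers:
post = review(Post(author=Publisher()),
              looks_false=lambda p: True,
              fact_check=lambda p: ("false", ["fact-checked source"]))
print(post.rank_score)  # 0.2 -> distribution cut by ~80%
```

The three-day fact-checking delay the article notes would sit inside `fact_check`, which is exactly why a viral post can do most of its damage before the demotion ever applies.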
As these regulations begin to roll out, many questions arise: How should sarcasm be interpreted within the context of fake news? Is a social media platform’s choice not to regulate fake news a threat to democracy? Is regulation a violation of freedom of speech and a step toward censorship?