AI and Legal Personhood

Blake Plante (PO ‘18)

In March 2016, an artificial intelligence named Sophia—now a citizen of Saudi Arabia—was jokingly asked, “Do you want to destroy humans?” She responded, “Ok. I will destroy humans.” This does not mean that Sophia has an agenda to exterminate humanity; rather, it indicates that Sophia is not aware of what she is saying. Sophia does not have consciousness, though her creators at Hanson Robotics suggest that consciousness could come in just a few years. Fictive scenarios, à la Blade Runner and Frankenstein, give us an opportunity to anticipate the rights struggles that may take place should artificial intelligence one day acquire consciousness, while simultaneously helping us refine our understanding of fundamental concepts such as consciousness and what it means to be human.

Consciousness is slippery to define because we do not know what produces it. But, as Lawrence B. Solum notes in his landmark essay on legal personhood for artificial intelligence, “We can get consciousness out of neurons. Why not transistors?” (24). The same could be true of feelings—emotions may turn out to be a computational process. Even if the consciousness and emotional capacity of an artificial intelligence could not be biologically or scientifically confirmed, one might still entertain a scenario in which the AI has a claim to personhood. Say, for instance, that an AI files an action for emancipation from its owner based on the Thirteenth Amendment to the US Constitution. How would that case go?

The owner’s lawyer argues that while the AI acts and talks like natural persons do, it only gives the appearance of consciousness—and appearances are deceiving. The AI is not a person; it is a zombie: an unconscious machine that only acts as if it is aware. The AI’s counsel rebuts that the doubt about the AI’s consciousness is no different from doubt about the consciousness of one’s neighbor. Solum writes, “you cannot get into your neighbor’s head and prove that she is not really a zombie, feigning consciousness. One can only infer consciousness from behavior and self-reports” (25)—and this AI behaves in ordinary life as only conscious human beings do. Thus for all intents and purposes, the AI’s consciousness should be accepted as real and not feigned. This fictive scenario allows us to imagine that recognition of AI “consciousness” may rely on human observation of relevant behaviors rather than on any direct measure of a nebulous “consciousness.”

Setting aside concerns about whether or not conscious AI is possible, we can then begin to imagine what it might mean if conscious AI are created—a responsibility long held by the science fiction author.

Fritz Lang’s 1927 Metropolis opens with an image of “the day shift”: future-revolutionaries walking in columns and rows like synchronized machines, their heads down, below the surface of the earth. The film serves as a reminder that “between the mind that plans and the hands that build there must be a mediator, and this must be the heart.” Denis Villeneuve’s Blade Runner 2049 extends Metropolis’ argument with a universe where human-like, artificially-created slaves do the work that humans do not want to or cannot. Both contend that whether man or machine, there is danger in casting subjects outside the protective definitions of the law (Derrida). Indeed, a prescient 1982 U.S. Presidential committee report on genetic engineering warns of a reversal of power that will turn technological slave into master, citing Frankenstein’s monster—“You are my creator, but I am your master—obey!” (Shelley 165). The fear is one that has played out in history time and time again, that those denied protections of the law will assert their “humanity” and demand to be treated ethically.

Yet even if artificial beings are extended personhood, ethical, just, or moral treatment is not guaranteed. Samir Chopra, in “Rights for Autonomous Artificial Agents?”, concisely argues that the personification of non-humans may serve only as a tool to reproduce current power structures and the supremacy of genuine humans.

“The law has never considered humanity as a necessary or sufficient condition for being a person. For example, in 19th century England, women were not full persons; and in the modern era, the corporation has been granted legal personhood. The decision to grant legal personhood to corporations is instructive because it shows that granting personhood is a pragmatic decision taken in order to best facilitate human commerce and interests. In so doing, we did not promote or elevate corporations; we attended to the interests of humans.” (Chopra 2010: 38-40)

What matters, explains Shulamit Almog, “is not whether one is defined as a ‘Replicant’, ‘human’ or ‘animal’, but the power to decide who is protected and who is not” (186). So long as this power remains solely in the hands of humans, we may be doomed to reproduce anthropocentric power structures, meaning the exclusion of anything not defined as genuinely human.

A moral response should thus affirm two conclusions that Solum arrives at. First, if cognitive science confirms that the underlying processes producing an AI’s behaviors are sufficiently similar to the processes of the human mind, “we would have very good reason to treat AIs as persons.” Second, “in a future in which we interact with such AIs or other intelligent beings… we might be forced to refine our concept of human.”


Works Cited

Almog, Shulamit. “When a Robot Can Love—Blade Runner as a Cautionary Tale on Law and Technology.” Human Law and Computer Law: Comparative Perspectives. Eds. Mireille Hildebrandt and A. M. P. Gaakeer. London: Springer, 2013. 181-194.

Chopra, Samir. “Rights for Autonomous Artificial Agents?” Communications of the ACM 53.8 (2010): 38-40.

Derrida, Jacques. “Force de loi: Le ‘fondement mystique de l’autorité’ / Force of Law: The ‘Mystical Foundation of Authority.’” Cardozo Law Review 11.5-6 (1990): 920-1045.

Lang, Fritz, director. Metropolis. 1927. Madacy Entertainment Group, 1998.

President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research. 1982. 27-28.

Shelley, Mary Wollstonecraft. Frankenstein. Baronet Books, 2008.

Solum, Lawrence B. “Legal Personhood for Artificial Intelligences.” North Carolina Law Review 70 (1992): 1231.

Villeneuve, Denis, director. Blade Runner 2049. Warner Bros. Pictures, 2017.

Claremont Journal of Law and Public Policy