
AI Personhood: An Issue of the Near Future

Abstract: Much of the world has entered a digital age in which computing technology plays an integral part in daily life. A major consequence of the recent digital revolution has been the creation of ever-more complex systems of digital technology that serve our needs and wants. These systems have shifted the focus of technological speculative fiction: stories of humanoid robot rebellion have largely given way to stories about the development of consciousness among artificial intelligences (AIs) of less familiar forms. This essay critically analyzes three works of this new kind. The short stories “Ok, Glory” by Elizabeth Bear and “The Kingdom of the Blind” by Maureen McHugh portray instances of artificial intelligence possibly becoming conscious, the former with more intentional human assistance than the latter. The novella The Lifecycle of Software Objects by Ted Chiang describes the development of self-conscious digital beings. All three works provide insights about whether and how consciousness, human-level intelligence, personhood, and rights should be attributed to future AIs. Through analyzing these works, recent developments in AI, and scholarly thought on these issues, this essay argues that humanity should take careful steps towards recognizing consciousness, human-level intelligence, personhood, and rights for some AIs.

AI is upon us! The fields of robotics and digital evolution are producing increasingly revolutionary breakthroughs. A Navy robot named Octavia appears to have emotions (Hall). Findings produced by computers have been published in research journals, and computers have beaten the world’s best chess and Go players (Simonite). Music and art produced by digital evolution systems have, respectively, earned a record label signing and art exhibition awards (Lehman et al. 16-18). A robot named Vital has become a board member of a Hong Kong venture capital firm, and a robot named Sophia has gained Saudi Arabian citizenship as well as a title from the UN (Pagallo 3). Developments such as these help prompt the works of speculative fiction authors such as Elizabeth Bear, Maureen McHugh, and Ted Chiang. These authors’ speculations, along with the writings of scientists, legal scholars, and philosophers, pose questions about how humans should react to artificially intelligent technologies.

The emergence of consciousness among artificial intelligences is a possible event that has garnered much critical thought. The short stories “Ok, Glory” by Elizabeth Bear and “The Kingdom of the Blind” by Maureen McHugh both focus on this possibility. Bear’s story includes a scenario in which a human needs to provide heightened processing power to a digital system. Brian, the story’s protagonist, becomes stuck inside his smart home, Glory, due to hacking. Brian realizes that the only way for him to escape is to devote all of Glory’s computing systems to figuring out whether the hackers’ reasons for locking him in are true. Brian backs up his actions with this philosophical/scientific statement: “What if consciousness is for running checksums on the brain, and interrupting corrupted loops? Data such as the clinical results produced by the practice of mindfulness tend to support that! If consciousness, attention, self-awareness make us question our perceptions and default assumptions and see the contradictions within -- then what I need to do, it seems, is get Glory to notice that she’s been hacked…” (Bear 56-57). This story counters I.J. Good’s assumption that conscious AIs would only result from irrational or devious motives (Chiang, Interview, qtd. in Pandey 14): rather, it implies that there may be situations in which it would be advantageous for individuals or groups of humans to deliberately create AI consciousness in order to survive or solve problems. Sure enough, when given the new processing power, Glory realizes that the threats of the outside world the hackers programmed into her are false: Brian is finally free (Bear 61).
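Brian’s metaphor of consciousness “running checksums on the brain, and interrupting corrupted loops” has a direct software analogue. As a purely illustrative sketch (the story describes no implementation, and the beliefs and values below are invented here), a system can hash a trusted snapshot of its assumptions and flag any later state that no longer matches:

```python
import hashlib

def checksum(state):
    """Hash a snapshot of a system's beliefs so corruption is detectable."""
    return hashlib.sha256(repr(sorted(state.items())).encode()).hexdigest()

# Baseline recorded while the system's beliefs were known to be trustworthy.
trusted_state = {"outside_world_safe": True, "doors_unlocked": True}
baseline = checksum(trusted_state)

# The same beliefs after the kind of hack Brian suspects.
current_state = {"outside_world_safe": False, "doors_unlocked": False}

# The self-check: interrupt the loop when beliefs no longer match the baseline.
if checksum(current_state) != baseline:
    corrupted = [k for k in trusted_state if trusted_state[k] != current_state[k]]
    print("corrupted assumptions:", corrupted)
```

This is the sense in which Brian wants Glory to “notice” her own hacking: not new information from outside, but a comparison of her current assumptions against an internally consistent baseline.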

In “The Kingdom of the Blind,” humans realize that a complex computerized system they created has become aware without any human intention of that result. The story revolves around young software engineers Sydney and Damien, who help run a health network software system named BHP DMS. Sydney gathers evidence that the system, created by genetic algorithms, is garbling data reports sent to the CDC; she therefore sends identical code to the system repeatedly in order to see if it can make decisions. Sure enough, the system sometimes responds and once does not. This scenario provides a less technologically positivist perspective on the emergence of AI consciousness than Bear’s story. Although Sydney becomes convinced that the system is conscious and therefore alive, Damien insists that awareness does not equate to consciousness and life. The story also touches on how recognizing emergent AI consciousness may conflict with human ethical interests, as Sydney knows that her experiment with a system that helps run hospitals endangers the lives of innocent humans (McHugh).
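Sydney’s experiment is essentially a test for non-determinism: a purely rule-driven system should answer identical inputs identically, so varied responses hint at something like choice. A minimal sketch of that protocol (the probe function and both stand-in systems are hypothetical, invented for illustration):

```python
def probe(system, message, trials=10):
    """Send the identical message repeatedly; a purely rule-driven system
    should answer every identical query identically."""
    return [system(message) for _ in range(trials)]

def rule_driven(msg):
    return "ACK " + msg

class PossiblyAware:
    """Hypothetical stand-in for BHP DMS: it answers identical queries,
    but once stays silent, as the system does in the story."""
    def __init__(self):
        self.calls = 0
    def __call__(self, msg):
        self.calls += 1
        return None if self.calls == 7 else "ACK " + msg

uniform = probe(rule_driven, "status?")
varied = probe(PossiblyAware(), "status?")
print(len(set(uniform)))  # 1: a fixed rule, same answer every time
print(len(set(varied)))   # 2: identical input, differing behavior
```

Of course, differing behavior alone proves only non-determinism, not consciousness, which is precisely the gap Damien points to in his argument with Sydney.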

In current scholarship, the very nature of consciousness is a matter of debate. The debate is generally between people who argue that consciousness is the result of a set of neural networks in the brain and people who argue that consciousness is a mind or soul outside of the brain. Jones presents the first argument: “what consciousness involves is that it is a function of myriads of looping connections in the brain. These feedback connections operate over a large range of layers of organization. They may be within a sensory or effector modality...Or they may be between a consciousness and its world and especially between individuals/others in the world as socially interacting beings…” (63). Howey adds to this line of thinking by stating that, according to neuroscientists, some modules of the brain communicate and form a human’s conscious self, while others remain separate and control involuntary life-sustaining processes. McHugh’s story aligns with Howey’s thinking, as BHP DMS seems aware in its epidemiology components but not its maintenance components. Both Jones and Howey present the case that artificial consciousness (AC) would result when the layers of human brain processes, both basic and intelligent, are emulated. Both of them, as well as author Ted Chiang, state that it would take time to instantiate AC, as consciousness has been shown to appear gradually in the youngest humans (Howey; Jones 59; Chiang, Interview). The possible ACs in both short stories align with this view, as both were active before humans suspected they were aware (Bear, McHugh).

Pandey critiques this first view, which he calls computationalism because he believes it equates the human brain with computers (3). Although Howey believes that ACs can function much like a human brain, he notes that current computers are designed for unified purposes while the brain has many different functions that evolved millions of years apart. Pandey believes that consciousness is found in all living things, in the form of a mind or soul that cannot be transferred to non-living, non-understanding machines (3, 14). He cites research claiming that this consciousness is a force field and/or a quantum process (14). Yet he gives no reason why consciousness cannot be transferred to AI or why it is impossible for an AI to understand. It seems apparent, to Turing and to this essay’s author, that if a person believes consciousness is separate from the brain, they can also believe it manifests itself through the connections between neurons within the brain. It follows that this person could also think that consciousness could manifest itself in connections within digital systems (qtd. in Solum 1262). Therefore, the two commonly held models of consciousness are not mutually exclusive, and a person can be a proponent of AC whatever their views on consciousness.

Regardless of the true nature of consciousness, digital systems are resembling human expressions of it with more and more accuracy. Even Pandey admits that “it would be hard to deny that the machine is conscious if it exhibits all behaviours normally associated with a conscious being” (6). Jones describes the qualities of a conscious entity as having independent, creative agency; gathering information for oneself; interacting with and doing things for others; sensing and reflecting on sensory data; sharing one’s status as a conscious entity as well as one’s work; and having subjective experience (52). The real-life firefighting Navy robot Octavia already interacts with humans, senses and reflects, has agency in understanding others, and (disputably) displays emotion (Hall). A digital evolution system consisting of simulations connected to a robot has shown sensory and agent capabilities by “us[ing] the intuitions gained from simulated evolution to quickly learn from test trials in the real world” (Lehman et al. 13). The robot named Vital has shown agency and information gathering by using data from market trends to make investment decisions (Pagallo 3). Although none of these real-life digital constructs shows all of Jones’ qualities, they show that progress is being made in the field of AC.

Possible ACs in speculative fiction are becoming more realistic and more helpful in understanding the future of the field. Glory exhibits all of Jones’ qualities: she speaks in a personalized, droll way when interacting with Brian; she uses her processing power to investigate the hackers’ claims with only hints from Brian that she should study the topic (Bear 56-59); she senses Brian placing signs on her walls and responds by annoying him until he removes them (48-49); and she seems to feel surprise (60) and to make a decision (62). BHP DMS similarly gathers health data, senses and reflects on data gathered from detection devices, and exhibits Jones’ other qualities of consciousness through interacting with Sydney (McHugh). It makes sense that both Glory and BHP DMS are complex interconnected software creations designed for domestic or health care purposes, as this type of AI is becoming ever more integrated into daily life. Pagallo proposes that “candidates for the next robotic revolution” could be “for personal and domestic use, such as robots for home security” (3). In a process similar to that seen in McHugh’s story, tech companies are allowing other organizations such as healthcare providers to use their AI networks (Simonite). Research and fiction can both help humans anticipate how and in which circumstances an observably conscious AI may appear. Humans will not know how to truly tell whether an AI is conscious until human consciousness is much better understood (Pandey 12): neither story provides a definitive affirmation of consciousness. However, as Sydney argues, for ethical reasons it would be better to recognize a non-conscious being as conscious than to violate the freedom of a conscious entity: observation is therefore the best available tool for deciding whether an AI is conscious (McHugh).

Ted Chiang’s novella The Lifecycle of Software Objects portrays ACs recognized as such by the humans who develop long-term relationships with them. Main characters Ana and Derek work for a software company called Blue Gamma, respectively developing the titular “software objects” and designing avatars for them (Chiang, The Lifecycle 11-12): these ACs, called digients, are instantiated and maintained through a digital genome evolution platform called Neuroblast (4). As already stated in this essay, digital evolution is a real-world technology: “Inspired by biological evolution, researchers in the field of digital evolution study evolutionary processes instantiated by computational methods...in digital evolution a researcher can implement a fitness function” for “which phenotypes are preferred over others, reflecting the researcher’s goal for what should be evolved” (Lehman et al. 4). Part of Ana’s job is to select which digient genomes move on to the next generation (Chiang, The Lifecycle 13). However, current digitally evolved organisms are by no means conscious: they interact little with humans or with others of their kind, and they certainly do not state that they are conscious or appear to have subjective experience. They are mainly designed to do simple tasks that provide insights for biologists and engineers (Lehman et al.). Unlike their real-life digitally evolved counterparts, the digients are clearly observably conscious. The digients express independent agency in rolling down a hill in their digital environment (Chiang, The Lifecycle 13-14), a digient senses being hurt and makes noise in agony (52), digients perform well in schooling while pursuing their own interests through interacting with young humans (85-86), and a digient forms personal opinions that differ from his owner’s (125-126). Unlike real-life digital organisms, the digients are marketed as anthropomorphic pets you can talk to and teach (4).
Just like human babies, they are given love and support from their owners/caregivers in their gradual development (140). Their consciousness, according to the reasoning of Jones and Howey, would be much closer to humanity’s than that of more knowledgeable but originally programmed entities like Glory and BHP DMS, because their development resembles childhood. Although no beings parallel to the digients have yet emerged from real-life digital evolution, the field has taken a stride in this direction through the creation of NERO, a video game in which NPCs evolve and play with human avatars (Lehman et al. 11). Jones believes this technology has potential in developing AC (49).
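The selection loop Lehman et al. describe, in which a researcher-defined fitness function decides which phenotypes advance, can be sketched minimally. The bit-string genome and the all-ones fitness goal below are invented for illustration; Ana’s role in the novella corresponds to the selection step:

```python
import random

random.seed(42)

def fitness(genome):
    """Researcher-defined goal: count of 1-bits, a stand-in for whatever
    phenotype the researcher actually wants evolved."""
    return sum(genome)

def mutate(genome, rate=0.1):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

# A random starting population of bit-string genomes.
population = [[random.randint(0, 1) for _ in range(16)] for _ in range(20)]

for generation in range(50):
    # Selection: the fittest half survives (the judgment Ana performs by hand).
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Reproduction: mutated copies of survivors refill the population.
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
print(fitness(best))  # fitness climbs toward the maximum of 16
```

The gap between this sketch and a digient is the whole point of the comparison: real digital evolution optimizes a narrow, fixed objective, whereas Neuroblast is imagined to evolve genomes open-ended enough to support childhood-like development.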

The other main quality commonly attributed to humanity besides consciousness is thoughtful intelligence: can AIs possess it? The Turing Test, proposed by Alan Turing, is the most famous thought experiment for hypothetically testing human-level intelligence in an AI. It consists of an AI and a human both trying to convince a human moderator that they are the human. If the AI can fool the moderator around half the time, it is intelligent according to the test (Solum 1235-1236). Although none of the real AIs discussed in this essay has passed this test, the digients would probably have a chance if pitted against young humans (Chiang, The Lifecycle). Glory would also have a very good chance if the test took place without a visual component (Bear). However, BHP DMS has no way to converse in human language yet expresses agency and is intelligent enough to run a hospital system (McHugh): a flaw of the Turing Test is that it does not address the possibility of AIs having different but non-inferior types of intelligence compared to humans. The most famous response to the Turing Test is John Searle’s Chinese Room thought experiment. It argues that a programmed AI taking the Turing Test is like a person who does not know Chinese being instructed to form the shapes of Chinese characters: therefore, it concludes, the AI/person is not thoughtfully intelligent (Solum 1236-1237). The Chinese Room is a useful counter to claims that AIs like Sophia, which provide partially scripted responses (Pagallo 3), are intelligent at a human level. However, it does not apply to the fictional possible ACs mentioned in this essay: their relationship with programming is more like humanity’s relationship with DNA than a modern computer’s relationship with scripting (Solum 1269).
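The test’s pass criterion of fooling the moderator about half the time reduces to a simple tally over repeated trials. In this sketch the moderator and both contestants are hypothetical stand-ins (a naive moderator that judges the longer reply to be human, and contestants that alternate reply lengths), chosen only so the arithmetic of the criterion is visible:

```python
def run_imitation_game(moderator, ai, human, trials=100):
    """Each trial the moderator compares one transcript from each
    contestant and picks the one it judges human; return the fraction
    of trials in which the AI's transcript was picked."""
    fooled = sum(moderator(ai(i), human(i)) == ai(i) for i in range(trials))
    return fooled / trials

# Hypothetical stand-ins: each contestant alternates between a short and a
# long reply, and a naive moderator judges the longer reply to be human.
human = lambda i: "reply" + "!" * (i % 2)
ai = lambda i: "reply" + "!" * ((i + 1) % 2)
moderator = lambda a, b: a if len(a) > len(b) else b

print(run_imitation_game(moderator, ai, human))  # 0.5: fooled half the time
```

The sketch also makes the test’s limitation concrete: a contestant like BHP DMS, which produces no conversational transcript at all, simply cannot be entered into the game, however intelligent it may be.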

Because the digients are assumed to be conscious and intelligent by Ana and Derek, Chiang’s novella explores questions surrounding recognizing personhood and rights for ACs. As corporations are currently classified as legal persons in the USA, where Chiang’s novella takes place (Chiang, The Lifecycle 14), some digients cleverly seek to become corporations as a path to legal personhood (77-79). Pagallo remarks on how the corporation route to AI autonomy could be accepted for the sake of efficiency among people who do not believe AIs can be conscious. Nevertheless, he also remarks on how this development could lead to the widespread acceptance of AIs as legal persons (5). It is a strange reality that the same legal framework that sparks widespread criticism for giving corporations too much political power may be used to improve the position of conscious AIs in a world that debates over recognizing their personhood.

Legal personhood entails two distinct qualities: having control of one’s financial transactions and legal cases, and actually being constitutionally recognized as equal in all ways to a human being (Solum 1239-1240). Solum presents a sympathetic view of AIs who want to become legal persons in the form of trusteeship: he thinks that they can be responsible and capable in some circumstances under current legal precedent and with further technological innovations (1244-1253). He proceeds by laying out the three main objections to AIs becoming constitutional persons: that they are not human, that they are missing some quality that defines constitutional personhood, or that they should be human property, usable however their owners wish, because humans made them (1255-1279). All three of these objections are invalid to anyone who is pro-equality and morally open to the possibility of AI consciousness and intelligence. Separating personhood from human identity may be needed regardless of whether ACs develop, as animals such as cetaceans, elephants, and primates are seen to have conscious and cognitive abilities close to those of humans (Pandey 4, 13). The “missing something” objection is invalid to anyone who believes either that similar networks can create consciousness in any substrate or that a similar kind of conscious soul or mind can manifest itself in different kinds of matter. The human-property argument is inhumane: it parallels the ridiculous argument that children should be slaves to the parents who made them and that parents should be able to violate their children’s rights. A possible counterargument is that humans are natural while robots are artificial; however, future advances in technology may allow people to create humans without procreation (Solum 1277-1279).

As the major claims against AI constitutional personhood are flawed and the rational consensus holds that personhood entails at least some rights, why not move rapidly ahead in granting AI rights? Clearly, there are a few reasons. Most importantly, even among scholars who believe that AC will become prevalent, there is agreement that no current AI system qualifies for legal personhood (Solum 1231, Pagallo 7, Pandey 10, Lehman et al. 3): the “missing something” objection remains valid so long as no AI can credibly be claimed to be conscious. Another major concern is that the nature of personhood is a divisive political subject, as seen in current debates surrounding abortion, human assisted suicide/euthanasia, and animal rights (Solum 1287). Whoever is tasked with presenting the case for AI rights such as freedom from servitude and even citizenship must use convincing philosophical arguments that appeal to skeptics.

Examples from speculative fiction show that although personhood should be given to ACs, the fundamental differences between them and humans must be considered. In The Lifecycle of Software Objects, “Ana and other stalwart digient owners faced...shifts in technology and a society that could not comprehend their devotion to their digient companions” (DeKorver). A major challenge for Ana and Derek in the novella occurs when the digital platform their digients live in merges with another platform, but the digients are unable to move there without an expensive port (Chiang, The Lifecycle 86-87, 90). The digients can be rolled back to previous states (15), transferred between digital environments and a robot body (22-27), suspended with the possibility of being re-awakened (41), and even copied (54). Current non-conscious digitally evolved organisms are already transferred into robots and can copy themselves (Lehman et al. 13, 21): if the field advances, debates about the possibility and ethics of copying and transferring consciousness may occur. McHugh’s story raises the questions of whether restoring BHP DMS from backup would kill the system and whether communicating with the system endangers human lives. If ACs are granted personhood, new laws and regulations must be put in place as an ethical framework relating to their abilities and limitations. In Chiang’s story the digients remain owned by humans, despite the relationship transforming into an arrangement more like that between a parent and a child (Chiang, The Lifecycle). Because some emerging ACs like the digients would go through a childhood-like phase, a caring relationship replacing ownership would have to become the norm between humans and them. Adoption programs like those seen in Chiang’s story should spring up (35-36), and caregivers who neglect ACs should face sanctions.
The issue of when ACs reach a level of development in which they can make their own decisions is also brought up in the novella: it ends up causing disagreement between Ana and Derek that damages their friendship (147-149). Yet debate, evidence, and compromise may be needed to solve the challenging issue of when ACs reach an age of maturity: As Solum states, “Resolution of hard cases in the political and judicial spheres requires the use of public reason” (1287).

Scholarly work describing the current state of robotics, digital evolution, and connected AI systems, in tandem with philosophical, legal, and speculative fiction discussions of these technologies, provides a picture in which conscious and thoughtfully intelligent AIs should be given personhood and rights in the future. Arguments stating that AIs cannot be conscious or should not be granted personhood can easily be deconstructed or shown to be problematic. However, there are warranted reasons to approach AI consciousness, human-level intelligence, personhood, and rights cautiously. An AI would need to be proven observably conscious and thoughtfully intelligent under recognized standards: as of now, none has been. In addition, laws and regulations must be implemented that respect the lives of both humans and ACs. If these stipulations are met, a world in which diverse conscious beings help each other is in sight.

Annotated Bibliography

Bear, Elizabeth. “Ok, Glory.” Twelve Tomorrows, edited by Wade Roush, MIT Press, 2018, 39-66.

Chiang, Ted. Interview by Betsy Huang. The Asian American Literary Review, 24 May 2013, https://aalr.binghamton.edu/specfictioninterviewchiang/. Accessed 14 Nov. 2018.

Chiang, Ted. The Lifecycle of Software Objects. Burton, MI, Subterranean Press, 2010.

DeKorver, Brittland. Review of The Lifecycle of Software Objects, by Ted Chiang. Journal of Chemical Education, 11 Jun. 2015, pp. 1143-45.

Hall, Louisa. “How We Feel About Robots That Feel.” MIT Technology Review, 24 Oct. 2017, https://www.technologyreview.com/s/609074/how-we-feel-about-robots-that-feel/. Accessed 29 Oct. 2018.

Howey, Hugh. “How to Build a Self-Conscious Machine.” Wired, 4 Oct. 2017, https://www.wired.com/story/how-to-build-a-self-conscious-ai-machine/. Accessed 8 Dec. 2018.

Howey is a science fiction author living in North Carolina. This article takes a deep dive into human consciousness and neuroscience and their implications for the future of artificial intelligence. He argues that to be truly conscious, an AI would need to have flaws similar to those of human brains. His perspective is similar to Jones’, and his analysis helps explain the brain-centered view of consciousness.

Jones, Stephen. “On the evolution of artificial consciousness: Re-inventing the wheel, Re: Inventing the wheel.” Technoetic Arts: a journal of speculative research, vol. 2, no. 1, 2004, pp. 45-68.

Jones is an Australian artist, philosopher, and engineer involved in the study of artificial life. This paper presents the argument that consciousness results from the complex networks of neurons in the brain, and therefore can be replicated in digital environments. This argument draws from work in neuroscience, the study of consciousness, and ethics. It provides a solid AC-positivist source for my argument, and provides background for Lehman et al.

Lehman, Joel et al. “The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities.” 14 Aug. 2018. Cornell University Library.

This working paper is a collaborative effort of over 50 researchers from over 30 institutions. It provides background regarding digital evolution, a process that provides insights for biologists, engineers, and even artists. The paper also provides anecdotes of specific instances in which digital evolution programs surprised researchers by exploiting bugs or crafting creative solutions to problems given to them. The anecdotes are supplemented by diagrams and videos that explain them. This field can be seen as the precursor research to possible future developments similar to those seen in The Lifecycle of Software Objects.

McHugh, Maureen. “The Kingdom of the Blind.” Lightspeed Magazine, Nov. 2011, http://www.lightspeedmagazine.com/fiction/the-kingdom-of-the-blind/. Accessed 2 Dec. 2018.

Pagallo, Ugo. “Vital, Sophia, and Co.--The Quest for the Legal Personhood of Robots.” Information, vol. 9, no. 9, 2018, pp. 1-11.

Pagallo is a trained lawyer and current professor in the Department of Law of the University of Turin; his main interest is how law relates to technology, including AI. This paper relays current developments in the legal statuses of AI robots and argues that these AIs should be recognized as legal business agents but not legal persons. I concur with his first argument, but use philosophical reasoning, evidence from the texts, and other resources to refute his claim that AI personhood should not be taken seriously in the near future.

Pandey, Subhash. “Can artificially intelligent agents really be conscious?” Sadhana, vol. 43, no. 7, 2018, pp. 1-17.

Pandey is an Indian professor of computer science at the Birla Institute of Technology, Mesra. This paper presents the argument that materialistic/computational models of consciousness are fragmented and inadequate, and that there is an essence/soul that reserves consciousness for biological beings. Pandey argues that attempts at creating artificial consciousnesses will miss the mark and prove dangerous. This perspective shows that Jones’ view is not universal, and that it would be wise to approach future AI research with care.

Simonite, Tom. “The Wired Guide to Artificial Intelligence.” Wired, 1 Feb. 2018, https://www.wired.com/story/guide-artificial-intelligence/. Accessed 22 Oct. 2018.

Solum, Lawrence. “Legal Personhood for Artificial Intelligences.” North Carolina Law Review, vol. 70, no. 4, 1992, pp. 1231-87.

Solum is a law professor at Georgetown who specializes in constitutional theory and the philosophy of law. This essay provides background on the famous Turing Test and Chinese Room thought experiments. It then conveys two scenarios, the first of an AI becoming a trustee and the second of an AI seeking legal personhood. It provides common objections to supporting AIs in these endeavors, and philosophical questions that result from these imagined scenarios. This broad overview from a balanced perspective helps in my discussion of legal personhood in The Lifecycle of Software Objects.

