Does ‘The Image of God’ Extend to Robots, Too?

Christians believe that people—unlike robots—are made in the image of God. But shows like Westworld make us wonder: In whose image are robots made?

The answer seems to be “ourselves.” As research into artificial intelligence (AI) advances, we will keep making it in our image. But can Christian thought provide an alternative approach to how robots are made?

The original Westworld, which starred Yul Brynner as the Gunslinger, was a product of the 1970s—a time when an intelligent robot was still a far-off possibility. At the time, filmmakers and audiences treated these robots instrumentally; there was little sympathy for the robot dead.

Times, however, have changed. Christopher Orr, writing in The Atlantic, notes that there is a major philosophical shift in the newest version of Westworld: A shift from concern for the creators, made of flesh and blood, to concern for the created, made of steel and silicon.

As Orr points out, storytellers in books and film have wondered about intelligent robots for at least a century. What is changing now, though, Orr notes, is the perspective: The former instrumental view of artificial intelligence is being replaced with a much more personal one. Like the iPhone-esque tricorder first popularized by James T. Kirk in 1967, the intelligent robot—predicated on rapid advancement in AI—is no longer a figment of scientific imagination, but a developing reality poised to take center stage in our rapidly changing world. Indeed, just this month, the European Union began discussing the establishment of “electronic personhood” to ensure rights and responsibilities for the smartest AI.

Exactly what this implies legally will be controversial—but the one part of AI that is understood by all is the “artificial” part. AI is clearly “artificial” in the sense that it will always be created by human artifice, and not through natural processes or supernatural agency. We may object to the way our Creator made us and our world (at least by virtue of its current, imperfect state), but as we inch closer toward some hoped-for singularity, we may find that being a creator ourselves is much more difficult than we realize. AI won’t be the first test for humanity, but it may prove to be one of the biggest.

The Uneasy Ethics of Electronic Personhood

The philosopher Charles Taylor has popularized the idea that our identity—the “who we are” part of ourselves—is a relatively new development in the history of the world. One key aspect of this is what he calls “moral agency”—that is, the differentiation between what one should do and what one does. This moral agency is what allows people to be persons, as defined by their relationships to others.

Increasingly, however, we are designing creations that are beginning to have something at least resembling moral agency, from commonly used computer algorithms that predict what products we want to buy to the self-driving cars that make life-or-death decisions about when to stop and when to go. These machines’ moral agency may only be partial, but it nonetheless raises questions about how we ought to act toward them.

Admittedly, though, this tension isn’t new. Way back in 1994, when computers were large, square, beige, and decidedly non-people-like, studies like those by Nass, Steuer, and Tauber found, across a number of different tests, that people acted socially toward computers. This was before computers could really talk back—as in, for instance, the Spike Jonze movie Her, in which, in the not-too-distant future, a sad and lonely letter writer falls in love with his computer’s AI operating system.

Christianity Today