Opinion | Sexuality

Can Robots Be Sexist?

How artificial intelligence might reinforce and amplify gender stereotypes.

In April of this year, an engineer in China named Zheng Jiajia married a robot he had constructed himself. The robot resembles a young, slender Chinese woman. While the union is not legally recognized, Zheng carried on as if it were, organizing a ceremony and taking wedding photos with his “bride.” In a country in which men significantly outnumber women, Zheng, 31, said he was tired of searching for a human spouse and dealing with the pressure to marry. His unusual union comes more than seven years after a man in Japan married a digital avatar from a Nintendo game called “Love Plus.” In a ceremony that was broadcast live on the web, the 27-year-old man called the avatar his dream girl and claimed that he felt no need for a human girlfriend.

According to a growing body of research, such deep expressions of affection toward robots should come as no surprise. One study found that participants who viewed images of a human hand and a robot hand being cut with a knife or pair of scissors responded with essentially the same measure of horror and sympathy to both scenarios. Humans are deeply empathetic creatures, and anything we can anthropomorphize—pets, robots, or even the disembodied voice of a virtual assistant like Apple’s Siri—can engender very real emotions in us.

However, our desire to attribute humanity (and even spirituality) to machines reflects the intense complexity of artificial intelligence (AI) and the existential questions that come with it: What does it mean to be human? What characteristics define a healthy relationship with another being? If humans have a soul that is, in principle, independent of the body yet also functionally integrated, then how do we understand robots? And, as Christians, how can we best honor the imago Dei in a world where man and machine are sometimes hard to distinguish?

The way in which we answer these questions could very well determine what forms of AI we find acceptable helpmates and what forms could push us dangerously toward dehumanizing others—especially women.

Considered a critical component of what some have called the coming Fourth Industrial Revolution, artificial intelligence is one of the hottest trends among high-tech companies and research institutions today. Venture capital is pouring into this area, with its promise of increased efficiency, productivity, and performance. According to Eric Kim, cofounder and managing partner at Goodwater Capital, which invests in consumer technology companies, “The hope around AI is that we can utilize it to create better experiences than what we’re able to offer today.”

While the applications for artificial intelligence are vast and ever expanding, some of the most common forms of AI have already taken up residence in our cars, smartphones, workplaces, appliances, and homes. Coded programs operate as customer service representatives and administrative assistants, as financial advisers, dating coaches, and even therapy bots. Even with all these existing applications, Kim contends that AI is still “in its infancy.” “We’re in the top of the first inning of a very long ballgame with AI,” he said.

In these early stages, much of the power in AI lies with the architects and designers who are making decisions about the application, approach, and interface of these programs. And unlike the ultimate Creator, these individuals have very real limitations that lead them to program their prejudices, stereotypes, and biases into the code, affecting how the algorithms respond to gender, race, and other key demographics. Even with the most benign of intentions, applied AI has the potential to create a continuous feedback loop of unhealthy, and even sinful, human attitudes and behavior.

Of particular concern to ethicists and technologists is gender bias. Social scientists have known for at least 20 years that people project their existing gender biases onto computers, including machines with very minimal gender cues.

Today, the majority of AI interfaces we interact with—from the navigator voice in GPS to moving and talking sexbots—are feminized through name, form, and other characteristics. This is by no means random; studies have found that people generally respond more positively to female voices. But these interfaces can also reinforce and amplify gender stereotypes, like the subservient speech patterns of Amazon’s Alexa and other virtual assistants.

Sexbots, designed purely for male physical pleasure, perpetuate the objectification of women and habituate men into using their sexual partners without any need to develop an emotional or spiritual bond. “They are effectively reproducing real women, complete with everything, except autonomy,” writes Laura Bates in the New York Times of a recently released sexbot that comes with a rape-simulation setting. These and other sexbots, she contends, will arguably “influence societal attitudes toward women.”

One obvious explanation for why so much AI is feminized is that the high-tech industry continues to be dominated by men, many of whom cultivate a work culture that is not particularly affirming of women. (A recent study found that 60 percent of women in tech have experienced sexual harassment, and a whopping 88 percent have had clients and colleagues ask questions of male peers that should have been addressed to them.) According to AI entrepreneur Tabitha Goldstaub, the risks of predominantly male-developed AI extend far beyond flawed interfaces and will likely have other real-life implications.

As industries like healthcare and recruiting rely more and more on algorithms to determine complex, individualized services, women may be on the receiving end of decisions that are biased toward a male customer base. And, because our brains have trouble distinguishing between humans and human-like machines, how we treat actual women could be affected by how much we are exposed to feminine AI that fills only certain roles of assistance, service, and pleasure. Men, in particular, may bring unrealistic expectations to their relationships with real women—or they may be content to look to AI for a more controlled form of female companionship.

Late last year, for example, a Japanese company released a holographic anime “robot” called Gatebox, which features an animated character dressed to look like a cross between a nurse and a maid. In addition to controlling home appliances, Gatebox also functions like a stay-at-home girlfriend, sending flirty text messages to her “owner” throughout the day and celebrating with winks and twirls when he comes home. The product website calls the anime character “a comforting female character that is great to those living alone. She will always do all she can just for the owner.”

As AI becomes more sophisticated and human-like, the demarcation line between a person and a program will likely become even more blurred. A recent episode of the podcast RadioLab featured a leading researcher who was duped into believing—not once, but multiple times—that he was corresponding with an actual human through a dating website. Such scenarios will likely happen more frequently, even while we as a society attempt to debate the appropriate dividing line between acceptable human substitutes and unacceptable ones. Are we more comfortable interacting with an AI therapist who asks thoughtful questions and less comfortable with an AI sexbot that’s used as an object for self-pleasure? If so, why? And how do we prioritize AI applications that honor the humans they interact with and represent—especially women?

In response to such questions, Rob High, chief technology officer of IBM Watson, names transparency as one of the most important characteristics of ethical AI. Humans need to know when they are interacting with an algorithm, and what its purpose and expectations are. Others are calling for more high-tech professionals with a humanities education who can “grasp the whys and hows of human behavior.” Just last month, a leading Silicon Valley engineer wished she had “absorbed lessons about how to identify and interrogate privilege, power structures, structural inequality, and injustice. That I’d had opportunities to debate my peers and develop informed opinions on philosophy and morality.”

As Christians, we believe that our bodies, our personalities, and other forms of human expression are outward manifestations of the soul that resides in each of us. For Pope John Paul II, this theology of the body was so central to faith that he dedicated 129 lectures to it. “The body, and it alone, is capable of making visible what is invisible: the spiritual and the divine,” he said. “It was created to transfer into the visible reality of the world the mystery hidden since time immemorial in God, and thus to be a sign of it.”

Whether we are ready or not, we are all riding an unstoppable train toward a society in which AI will be almost as prevalent as other humans and these questions of body and soul will become all the more pressing. Despite the uncharted path before us, this is the moment for those with training, gifts, and interest in engineering and human behavior to step into the fray. The apostle Paul’s exhortation to offer our bodies as living sacrifices (Rom. 12:1) is just as relevant in this next technological revolution. With God’s wisdom and discernment, we can encourage AI that enhances our appreciation for the embodiment of the divine, rather than detracts from it.

Dorcas Cheng-Tozun is a columnist for Inc.com and the author of Start, Love, Repeat: How to Stay in Love with Your Entrepreneur in a Crazy Start-up World. She lives in the San Francisco Bay Area with her husband and their adorable hapa son.
