How Artificial Super-Intelligence Is Today’s Tower of Babel

Mimicking the human brain might be man’s search for significance in himself.

Christianity Today June 17, 2020
Illustration by Mallory Rentsch / Source Images: Wikimedia Commons / Sarah Dorweiler / Charles Deluvio / Unsplash

In May, Microsoft unveiled a new supercomputer at a developer conference, claiming it’s the fifth most powerful machine in the world. Built in collaboration with OpenAI, the computer is designed to train single massive AI models through self-supervised learning, forgoing the need for human-labeled data sets. These models are trained with distributed optimization, yielding significant gains in both speed and level of intelligence. This is a major step toward mimicking the human brain, with the ultimate goal of attaining artificial super-intelligence (ASI), and a fruitful outcome of Microsoft’s $1 billion investment in OpenAI in July 2019.

Is achieving ASI hubris? Can artificial intelligence created by humans be superior to human intelligence created by God, displaying man’s supremacy, glory, and independence in himself, apart from his Creator?

As a technologist in the field, I am intrigued by the clever designs and algorithms of the various AI disciplines advancing the world every day. However, I take issue with making super-intelligence that outperforms humans the ultimate goal of AI. First, such an agenda not only faces immense technical limitations but also grossly underestimates the intricacy of God’s design in his creation of mankind. Second, it incurs a steep opportunity cost against augmented intelligence, whose agenda is collaboration with humans rather than competition to supersede them, and which is a more realistic and practical approach to benefiting humanity.

Scientists define artificial intelligence as a machine’s ability to replicate higher-order human cognitive functions, such as learning, reasoning, problem solving, perception, and natural language processing. The engineering goal of such a system is to design machines and software capable of intelligent behavior. OpenAI’s daring goal is classified as artificial super-intelligence—a state in which machines become superior to humans across all domains of interest, exceeding human cognition. Some scientists envision ASI as a monolithic, super-intelligent machine called the “singleton,” a single decision-making agency at the highest level of technological superiority, so powerful that no other entity could threaten its existence.

Past progress has made this aspiration seem hopeful. In 2016, Google DeepMind’s AlphaGo beat South Korean Go champion Lee Se-dol. In 2011, IBM’s Watson won the US quiz show Jeopardy!, demonstrating AI’s superior performance over humans in processing speed and data volume. In 1996 and 1997, IBM’s Deep Blue challenged chess grandmaster Garry Kasparov, winning the 1997 rematch and demonstrating the machine’s superiority in searching ahead through possible move sequences to determine the best play. Do these breakthroughs mean ASI is within reach?

In Genesis 11:1–9, the people of the earth sought to build the Tower of Babel, a monolithic super-state in the land of Shinar. Today, scientists seek to build ASI, a monolithic decision-making agency as a super-intelligent singleton. The similarities between the two transcend time and space. Both are quests for the supremacy of mankind: one with a tower that reaches the heavens, the other with a singleton capable of dominating man. Both are quests for self-glory: making a name for themselves, seeking their own glory instead of seeking the glory of God. Both are quests for independence from God: People would rather trust the creations of their own hands than trust their Creator.

Unmasking the unspoken presumptions

The famed physicist Stephen Hawking believed that when we arrive at ASI, “(AI) will take off on its own, and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.” Hawking presumed that a human is only a brain, no different from a computer. Hawking’s stance, popular within the AI community, presumes (1) evolution, (2) the non-existence of God, and (3) that humans are no different from objects.

As a technologist in this field, I operate with a strikingly different set of presumptions, based on my faith in God and informed by his Word. Four biblical pillars, as I call them, anchor my presumptions and draw the perimeter within which I explore and formulate my views on AI.

First, mankind is the image bearer of God. Theologian Anthony A. Hoekema said, “The most distinctive feature of the biblical understanding of man is the teaching that man has been created in the image of God.” This presumption not only asserts the existence of God but also affirms the value of people as bearers of God’s image. It establishes the hierarchical order between God the Creator and man the creature. Any endeavor to defy this hierarchical order falls outside the set perimeter. Computer scientists such as David Poole and Alan Mackworth have themselves asked: Just as an artificial pearl is a fake pearl, is artificial intelligence real intelligence?

Second, I consider the divine mandate of mankind to subdue the earth and everything within it (Gen. 1:28), including AI. This presumption brings the AI agenda into submission to mankind and takes issue with any AI agenda that seeks to supersede mankind.

I also rest on a third pillar: Humans are integrated beings—spirit, soul, and body (1 Thess. 5:23)—a striking contrast to Hawking’s presumption on humanity. While AI attempts to digitize a fragment of human intelligence, the human spirit, the imprint of God’s image, the faculty that connects to God, is not within the reach of AI.

Last, I remember that humans are beings of love as God is love (Ps. 8:4–8; Ps. 57:10; Ps. 139:13–18), demonstrated by Jesus, who came to earth to serve (Phil. 2:6–8). Therefore, when we ask “why AI?” the answer is to see AI as a tool to serve and to bless for the benefit of the world.

The ASI singleton: Will mankind arrive?

The inception of AI can be traced back to 1950, when Alan Turing published his landmark paper “Computing Machinery and Intelligence,” asking “Can machines think?” and devising the Turing test with humans as the benchmark.

Generally speaking, most of the AI we see today has only reached the first of three stages of AI defined by computer scientists. The breakthroughs cited above, which beat humans at particular games, solved specific problems in narrow domains but cannot adapt to derive solutions for problems in diverse contexts. These machines, which rely on humans to feed them data, are called artificial narrow intelligence, or ANI. They “learn” by performing statistical analysis over training data to formulate a generalized model that can be used to predict and prescribe. As training data and generalized models accumulate, machines are able to solve problems of broader scope with more precision. But they have limitations. By design, they can only find correlations, not discover causality. And their accuracy depends on how well the training data represents the population at large: Amazon’s AI hiring engine discriminated against female candidates, illustrating how the problem of representativeness in data results in bias.
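To make that “learning” concrete, here is a minimal sketch in Python, using a hypothetical, deliberately skewed résumé-screening data set (the keywords and records are invented for illustration, not drawn from Amazon’s system). The model simply tallies correlations in its training data, so a biased history yields biased predictions.

```python
# A toy model that "learns" by tallying statistics over historical training
# data and reuses them to score new candidates. All data here is hypothetical.
from collections import defaultdict

# Hypothetical historical records: (keyword found on resume, was the person hired?)
training_data = [
    ("men's chess club", True), ("men's chess club", True),
    ("women's chess club", False), ("women's chess club", False),
    ("robotics team", True), ("robotics team", False),
]

# "Training": compute the observed hire rate for each keyword.
counts = defaultdict(lambda: [0, 0])  # keyword -> [hires, total]
for keyword, hired in training_data:
    counts[keyword][1] += 1
    if hired:
        counts[keyword][0] += 1
model = {kw: hires / total for kw, (hires, total) in counts.items()}

# "Prediction": score a new candidate by looking up the learned statistic.
def predict_hire_probability(keyword: str) -> float:
    return model.get(keyword, 0.5)  # unseen keywords get a neutral score

# The model mirrors the correlations in its training data; a biased history
# yields biased predictions, with no understanding of cause or fairness.
print(predict_hire_probability("men's chess club"))    # 1.0
print(predict_hire_probability("women's chess club"))  # 0.0
```

Real systems use far more sophisticated statistical models, but the principle is the same: the model can only generalize from whatever the training data happens to contain.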

Then, before we even get to a singleton, there is yet another stage to achieve: AI comparable to humans, or artificial general intelligence (AGI). Machines would become capable of self-adaptation and self-understanding, transferring what they have learned to problem contexts they have never encountered. With autonomous control, machine intelligence would be proactive and interactive like humans. While machines can surpass humans in certain areas, such as the retention and retrieval of knowledge and the speed and volume at which they extract insights from data, AGI is still considered ambitious, with many challenges to overcome before machines can claim to be holistically comparable to humans.

For example, emotional AI, formally known as affective computing, can only detect and simulate emotion in very limited forms, such as through facial recognition and language processing. It cannot experience emotion: the sadness that comes from compassion, or the empathy triggered by a flash of memory of a deceased loved one or the sight of a starving child.
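As a simplistic illustration of that gap, consider a hypothetical rule-based sadness “detector” (the word list below is invented): it maps text to a number, which is categorically different from grieving.

```python
# A toy emotion "detector": it scores text against a word list.
# The word list is hypothetical; real affective computing uses statistical
# models over speech, text, and facial data, but the gap is the same.
SADNESS_WORDS = {"grief", "loss", "mourning", "tears", "lonely"}

def sadness_score(text: str) -> float:
    """Return the share of words in the text that match the sadness list."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    return sum(w in SADNESS_WORDS for w in words) / len(words)

# The program assigns a number to a sentence about grief; it does not grieve.
print(sadness_score("Tears and grief at the loss of a loved one"))  # 0.3
```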

So the final echelon, artificial super-intelligence, is still far off. A survey of AI experts published in 2016 indicated a 90 percent chance of reaching AGI by 2075 and a 75 percent chance of reaching ASI by 2105. The core question remains: Is arriving at ASI a function of time, or a function of nature? Would 155 years after Turing’s landmark 1950 paper be all it takes for human-created intelligence to become superior to human intelligence created by God? Or is ASI unattainable by nature? Is ASI today’s Tower of Babel, another project of humanity waiting to fail?

Setting an alternate AI trajectory

Not everyone in the AI field shares the ASI presumptions and agenda. Some doubt the plausibility of the ASI agenda. Others recognize that humans are more than their intelligence. Wisdom, as distinct from intelligence, is uniquely human and superior to intelligence. Intelligence only addresses the what, demonstrated in efficiency, capacity, and accuracy. Wisdom addresses the why, encapsulating the moral compass, discernment, sound judgment, discretion, prudence, understanding, compassion, empathy, intuition, and more. We feel the effects of wisdom: harmony, peace, a sense of justice, respect, fruitfulness, righteousness, purity, love, prosperity. Psychologist Mark McMinn further distinguishes critical wisdom, “embedded in complexity and paradox, requiring exceptional discernment and creativity,” from conventional wisdom, which is about “living a good and effective life.” To accomplish the ASI goal of superseding mankind, surpassing human intelligence, even if it were achieved, would be insufficient. ASI would also have to surpass human wisdom, which comes from the imprint of God’s image in mankind.

Furthermore, we are free to choose a strikingly better trajectory for AI. If “better” is defined and measured by the number of people benefited and the magnitude of the benefits, then we may assert that blessing humanity is a better AI agenda than creating a singleton to supersede humanity. It is up to us to step up and subdue the earth as beings of love, by creating and applying AI technologies, such as augmented cognition, for the blessing and betterment of others: better management of the resources entrusted to us, healing the sick, offering cognitive relief to a stressed-out workforce, and more.

Setting an AI trajectory aligned with God’s prescribed hierarchical order, with his heart to love and to serve under his lordship, gives us access to his divine wisdom, so that our AI work can bring wise solutions to critical problems that are also dear to God’s heart. It is far more intriguing for mankind to be the embodiment of God’s divine wisdom than for AI to be the embodiment of mankind’s intelligence. Adding to McMinn’s critical wisdom, divine wisdom is the spirit of mankind receiving God’s revelations, the “secret and hidden wisdom” (1 Cor. 2:7, ESV), the “great and unsearchable things you do not know” (Jer. 33:3), through the Spirit of God, which God promised to grant generously to those who call on him and ask in humility.

Joanna Ng is a founder of an AI startup. A former IBMer, she headed up research at IBM Canada and is an IBM Master Inventor. She holds 44 patents, with 12 pending, and has published two computer science books and more than 20 papers.
