“How can any living thing be deemed sacred when it is just a pattern of information?”—Algeny, by Jeremy Rifkin

There is a group of professionals who, perhaps more than any other, regularly think about the questions, “What is human? What is man?” These professionals are not theologians, ministers, church leaders, psychologists, doctors, novelists, or artists. They are computer scientists, especially those building artificial intelligences that will fundamentally and irrevocably change what we call human. These scientists are revolutionizing society worldwide.

The Japanese government, for example, has made a national commitment to dominate the burgeoning computer business worldwide within the next decade. Japanese scientists are designing the fifth generation of computers, whose chief characteristic will be artificial intelligence (AI). They believe the future lies there, and they are willing to stake much of their economic lives on it (this information comes from The Fifth Generation; see review, page 70).

It is difficult to imagine the significance of what is coming. But think of what the world was like 75 years ago in the auto industry. That is about where AI is today. (The computer business is, to all intents and purposes, AI.) There is one significant difference, however. Although the auto industry changed aspects of our lives, intelligent machines will change thought, reason, and imagination in a way that has never happened previously. The only comparable change was the invention of writing and later the invention of printing technology. But even those may not have been as revolutionary as AI. Because of this, artificial intelligence has profound implications for Christians.

The union between man—old Adam—and an artificial intelligence creates a new Adam, a whole greater than the sum of its parts. What we have is a creature not made by God in his image, but made by us in our image. We will stand to this creation as God stands to us. At the same time, the machine mind will in certain ways far surpass our own—even surpass those minds that originally created it. It would be like our having the ability to surpass God, which, of course, is what we have been trying to acquire since the Garden of Eden. Now we may finally be able to try. (What will be interesting is whether our creation treats us as we have treated God, a colleague remarked to us.)

Christians, we are afraid, have little to prepare them for what is coming. Noted evangelical futurist Tom Sine does not list AI research as an area for Christians to investigate, though that technology will make possible much of what he predicts. Some Christians who have heard about the field refuse to admit that man can make intelligence artificially. They will not investigate. But we cannot afford to be foolish. Before Christians hook themselves and their churches to intelligent machines, we need to understand what this field is all about, what its implications are, and what issues AI research raises. Obviously, it presents a great challenge to those of us who define man from Genesis 1. Even with humanists, we had some commonality about man as a rational being, despite the great arguments we had with them about what that meant. With AI, that little is gone—for us, for the humanists.

What Is Artificial Intelligence?

So what is artificial intelligence? That is no simple question to answer, partly because to do so we must define intelligence, and partly because not even the researchers themselves can agree about AI or about whether they have created a machine that actually thinks. The researchers sometimes look to cognitive scientists, sometimes to behaviorists or psychologists. But now, those professionals are looking to the AI people for answers as to what intelligence is, how man learns, and what memory is. We need to understand where AI research has been, where it would like to go, and where it is now. Its grandiose claims may have some basis in fact.

Over the last 25 years, the meaning of artificial intelligence has changed and matured. At its outset, researchers like Herbert Simon, Allen Newell, and John McCarthy were impressed with themselves—that is, they judged intelligence based on what they themselves could do, how fast they could do it, and how complex an idea or problem they could solve. This separated them from the average person, who might be a competent salesman or accountant but certainly not a whiz kid. And what were these early researchers good at? Solving mathematical or linguistic puzzles, playing chess, calculating mentally, and writing long chains of logical arguments. In other words, they were good at tasks requiring strict, linear thought patterns, and they were good at solving problems algorithmically (that is, following a set procedure). This is important for understanding how software was originally written and what kind of intelligence it possessed at that time.

(We think it important to stress here that when we talk about computers we mean the software that makes the computers run, and not the nuts and bolts that are the mere body. AI really has nothing to do with machines, except insofar as the machine is the physical entity in which the software—the AI—is housed.)


Most people find algorithmic thinking difficult. Those who were attracted to the computer field in the early days did not. In their schools and careers, such thinking set intelligent people apart from the rest of us. And since software was written strictly from an algorithmic perspective (it has since changed, as we shall see), the programs also had to be considered intelligent.
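To make the idea concrete, an algorithm in this sense is a fixed, step-by-step procedure guaranteed to reach its answer. Here is a minimal sketch of one—Euclid's method for the greatest common divisor—offered as our own illustration, not an example from the early AI programs themselves:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a set procedure that always terminates
    with the greatest common divisor of a and b."""
    while b != 0:
        a, b = b, a % b  # replace the pair with (b, a mod b)
    return a

print(gcd(48, 36))  # 12
```

Every step is prescribed in advance; nothing is left to judgment. That rigidity is precisely what made such procedures easy to program—and what early researchers mistook for the whole of intelligence.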

Machines That Make Mistakes

Mental tasks that would require a lifetime of effort from a human being take a machine a matter of minutes. Even the creators of the programs could not approach the speed of the machine. But were these programs really intelligent in the broad sense of the word and not just large, expensive number crunchers? Some researchers in the field began to think so. They rejected Simon’s notion that man and his intelligence were quite simple; only his environment was complex. (This view was held by many of the early researchers, and thus their predictions of what they could create and how quickly were far too optimistic.) Once the researchers broadened their definition of intelligence and finally admitted that man was indeed complex, the significant work in AI began. Intelligence, they now admitted, was more than the ability to prove certain theorems or play championship chess. It had something to do with understanding ordinary language, with consciousness, with learning through mistakes, with applying principles from one set of circumstances to another. Yes, admitted the researchers, machines could do many tasks that people found difficult or boring, but human beings could do many more things that were enormously complicated for a machine to do.

Suddenly the ordinary intelligence that most of us have—what we might call common sense—took on new luster. The human propensity to make mistakes, to forget, to tolerate sloppiness, to live with inconsistencies, to form unscientific beliefs, to create stereotypes, and to make sweeping generalizations about everything and anything didn’t seem so bad after all. AI researchers decided that rather than qualities to be programmed out in their search for smart software, perhaps they should be programmed in. Maybe, they said, these things were even essential to the functioning of genuine intelligence. Without them, negative though they might be, human beings could not survive, let alone act rationally.


Take forgetting, for example. If every bit of information our minds process in a single day were equally clear and present to us, we would never make it to the breakfast table and that first cup of coffee. But for the blessing of forgetfulness, our brains would become totally congested, helplessly clogged. In many ways, forgetting is similar to pain—seemingly negative and yet a great spiritual blessing at the same time. Could any human relationships survive without forgetting? Could we ever forgive each other? (Psychologists know that people who don’t have the ability to forget are deeply troubled people.)

On the other hand, we remember the strangest things—those that trigger our creative endeavors. It almost seems that we need to forget the unimportant and the trivial to remember what is really important (and in this we approximate God’s nature: he forgets our unimportant acts—our sins, our shortcomings—to remember our words of confession and our pleas for grace).

This is an area that keenly interests AI research: the nature of memory and reminding. Roger Schank of Yale is a leader in this new aspect of AI. At some point, if he succeeds, the old adage about learning from mistakes will apply equally to machines and people. He and others think it is a mistake to make programs error free, so long as the errors written in are the right kinds of errors. As we’ve noted, people learn from their mistakes. We store, or remember, the contextual information surrounding our errors so that the next time a similar circumstance arises we will have “learned” something from the past. From that point, our mental associations are altered, and our reality is never the same again.

Of course, it’s not quite that simple, otherwise human beings would not keep making the same mistakes over and over again; however, that is basically the way our memories help us learn. Most small children, once burned on a stove, for example, will not repeat the mistake. That is the kind of memory and the kind of learning that Schank would like to give AI, though he hasn’t thought through all the drawbacks of giving machines what is, in a word, free will. What would happen if the program simply refused to learn its lesson? Will it need a counselor? A therapist? Jail?

Nevertheless, AI researchers are trying to write programs that will have a sense of anticipation. When circumstances don’t turn out as expected, these programs would dynamically alter the structure of their memories and create a new set of anticipations.
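The mechanism these researchers describe—expect, compare, and revise on failure—can be caricatured in a few lines. This toy sketch is our own illustration, not Schank's actual system:

```python
class Anticipator:
    """Toy sketch of expectation-based memory: remember what outcome
    a situation produced, and notice when a new observation
    violates that expectation."""
    def __init__(self):
        self.expectations = {}  # situation -> expected outcome

    def expect(self, situation):
        return self.expectations.get(situation)

    def observe(self, situation, outcome):
        """Return True on an expectation failure, then revise memory."""
        expected = self.expectations.get(situation)
        failed = expected is not None and expected != outcome
        self.expectations[situation] = outcome  # memory restructured
        return failed

bear = Anticipator()
bear.observe("touch stove", "pain")         # first experience
print(bear.expect("touch stove"))           # pain
print(bear.observe("touch stove", "pain"))  # False: expectation confirmed
```

The interesting moments are the failures: when `observe` returns `True`, the program has, in this crude sense, been surprised—and its memory is never the same again.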


Our propensity to stereotype is another example of a negative characteristic with a positive side. Behind it is the quality that enables us to make analogies and to carry over general principles from one field to another. It may even be part of our ability to create metaphors. AI researchers would be delighted to produce one—just one—superstitious, bigoted, opinionated program, a stereotyper that could effectively operate in a problem-solving environment, the kind we live in. They would love one that could handle uncertainty, inconsistency, and fuzziness. Just imagine how difficult it is to program “maybe” in a machine that operates on the true/false, right/wrong, on/off principle.
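One way researchers have tried to give a true/false machine a "maybe" is to represent truth as a degree between 0.0 (false) and 1.0 (true), as fuzzy logic does. The sketch below is our own, not anything built by the researchers discussed here:

```python
# Truth as a degree: 0.0 is false, 1.0 is true, 0.5 is "maybe."
def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)  # a conjunction is only as true as its weakest part

def fuzzy_or(a: float, b: float) -> float:
    return max(a, b)  # a disjunction is as true as its strongest part

def fuzzy_not(a: float) -> float:
    return 1.0 - a

MAYBE = 0.5
print(fuzzy_and(0.9, MAYBE))  # 0.5: "probably, but maybe not"
```

Notice that `fuzzy_not(MAYBE)` is still `MAYBE`—the machine can now hold a proposition and its denial in equal suspense, something the on/off principle forbids.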

How Machines “Think” Now

We’ve seen how AI started and what it would like to do, but what can it do now? There already are programs that exhibit, if we were talking about humans, what we would call intelligence (or at least certain aspects of intelligence). These programs are called “expert systems.” They exist for diagnosing medical problems, planning molecular genetic experiments, analyzing mass spectrographs, and inferring protein structures from electron density maps. In some cases, these expert systems can outperform their human counterparts.

These programs rely heavily on extensive data bases, the ability to make complex logical inferences, and heuristics; that is, they use general guidelines or rules of thumb to solve problems. Unlike the old algorithmic way of designing software—still practiced by some—the heuristic approach gives programs flexibility and a measure of creativity.
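The shape of such rule-of-thumb reasoning can be sketched in miniature. The rules and the medical domain below are invented for illustration; no real expert system is this simple:

```python
# A minimal sketch of heuristic if-then rules: each rule pairs a set
# of conditions with a tentative conclusion (a rule of thumb, not a
# guaranteed algorithm).
RULES = [
    ({"fever", "cough"}, "suspect influenza"),
    ({"fever", "stiff neck"}, "treat as urgent"),
    ({"sneezing"}, "suspect a common cold"),
]

def advise(symptoms: set) -> list:
    """Fire every rule whose conditions are all present in the facts."""
    return [conclusion for conditions, conclusion in RULES
            if conditions <= symptoms]

print(advise({"fever", "cough", "sneezing"}))
# ['suspect influenza', 'suspect a common cold']
```

Unlike an algorithm, nothing here guarantees the right answer; the rules merely encode the sort of guideline a human expert might state, and several may fire at once.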

Some of these programs can change and adapt, grow in knowledge, and learn. They can also be unpredictable; more comes out of them than their designers thought went into them. It seems remarkable that machines, programmed in advance, could produce anything really surprising. Yet, they do. They can prove theorems from logic and mathematics and even make new discoveries, as one program already has. Given only the most rudimentary notions of mathematics, it discovered integers, addition, subtraction, multiplication, division, prime numbers, and even an area involving maximal factors of prime numbers of which the program designer himself was unaware. In other words, starting from virtually nothing, it created arithmetic.

This may not seem like evidence for what we insisted at the outset—that artificial intelligence will change the nature of man. But no one has seen anything like this before: it is the merest beginning. As we have indicated, researchers are now trying to create programs that will in some ways recognize certain information as important for future reference. It will then store the information to be recalled at appropriate—or perhaps inappropriate—moments. Here the program begins to recognize and create analogies, the wellspring of imagination.


But the real test for AI is to understand ordinary language. You can now talk to machines. You can even buy one if you have enough money. Your conversation would be limited to what the machine knows. If you did so limit yourself, it would understand and respond intelligently. This is the crux of the matter. We use the words “understand” and “respond intelligently.” Most of the leaders in AI research concede that no program exists that can really understand ordinary language as a human being does. (Although in the computer field, “can’t” is always followed by “yet.”)

The machine can translate the sound of your voice into symbolic patterns, identify individual words and phrases, analyze their syntactic structures, contextualize them in its memory, make thousands of logical inferences from them, and respond in its owner’s native tongue. It’s so right, so fast, so canny that if you based your judgment solely on behavior you would say it was human. Yet it understands nothing. Nothing at all.

The program has merely manipulated symbols like a fancy typewriter. The person speaking to it has provided all the understanding. Some people have argued that the difference, despite the behavior, is that persons are conscious of their actions and words, while the program is not. But poets, artists, and scientists alike tell us that they do their most creative work when they are not conscious of what they do. So, again, we are left with saying that the program understands and yet it doesn’t. It is unconscious, but so are we much of the time. It listens, reasons, and communicates. There’s no doubt that it is smart about some things. But there is a missing ingredient, something we touched on earlier: it has no common sense.
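How far symbol manipulation alone can go is easy to demonstrate. The following sketch, in the spirit of the famous ELIZA program (our own toy, not the system itself), answers plausibly by pattern and template while understanding nothing:

```python
import re

# The program matches a pattern, lifts out the matched words, and
# drops them into a canned template. Every scrap of "understanding"
# comes from the person talking to it.
PATTERNS = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why are you {}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE),
     "Tell me more about feeling {}."),
]

def respond(utterance: str) -> str:
    for pattern, template in PATTERNS:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1))
    return "Please go on."  # fallback when no pattern matches

print(respond("I am worried about machines"))
# Why are you worried about machines?
```

Judged solely by behavior, the reply seems attentive; judged by mechanism, it is a fancy typewriter, exactly as described above.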

Storytelling Teddy Bears

The following is an unedited story written by a program (and cited in The Handbook of Human Intelligence):

“One day Joe Bear was hungry. He asked his friend Irving Bird where some honey was. Irving told him there was a beehive in the oak tree. Joe threatened to hit Irving if he did not tell him where some honey was.”


We know that poor Irving had already answered his question. Although this program was designed with sophisticated reasoning patterns, some obvious things were overlooked. So, the designer told the program that hives contain honey. Here is the next story:

“One day Joe Bear was hungry. He asked his friend Irving Bird where some honey was. Irving told him there was a beehive in the oak tree. Joe walked to the oak tree. He ate the beehive.”

Such a program has definite commercial value. This kind of story delights children. They appreciate the humor of such lapses in common sense, perhaps because they have so recently acquired it themselves. Just imagine a four-year-old girl playing with a teddy bear that could tell Joe Bear stories. And the teddy bear could understand and talk with her, too.

“Oh you silly little bear,” the child would shout gleefully. “He’s not supposed to eat the beehive. He eats the honey that’s in the beehive. Now tell it again.”

This toy could be manufactured today; it only needs a mass market to bring down the cost.

This small story shows how much specific, detailed knowledge we need to make common sense judgments. Natural language is filled with such hidden assumptions and covert knowledge. Does any of us know when we learned about beehives and honey? It’s almost as though we always knew it. There are many things that we are never taught, yet nevertheless learn. Despite what AI has not been able to accomplish, it is astounding what the researchers have achieved—a manmade contrivance that can reason at all.
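The missing-fact problem behind the Joe Bear stories can be caricatured in a few lines. This is our own illustration; the actual story generator was far more elaborate. Without the fact that a beehive contains honey, the toy planner serves up the beehive itself:

```python
# World knowledge the program must be told: container -> contents.
CONTAINS = {}

def what_joe_eats(goal: str, thing_found: str) -> str:
    """If the found thing is known to contain the goal, eat the
    contents; otherwise eat the thing itself."""
    if CONTAINS.get(thing_found) == goal:
        return goal
    return thing_found

print(what_joe_eats("honey", "beehive"))  # beehive (the absurd story)
CONTAINS["beehive"] = "honey"             # the designer adds the fact
print(what_joe_eats("honey", "beehive"))  # honey
```

One added fact repairs one story; the trouble is that common sense is made of millions of such facts, almost none of which we remember being taught.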

In a few short years we won’t be dealing with a handful of smart machines costing hundreds of thousands of dollars, but with an interactive network of them that costs loose change in comparison. These machines will continuously provide one another updated information and transmit anywhere a century’s worth of meticulously garnered common-sense information in a matter of minutes or seconds.

Then there are the people. Thousands of natural intelligences hooked up with thousands of artificial ones—talking, teaching, learning, discovering, working on the same problems at the same time, being monitored, directed, and organized by an amalgam of human and non-human intelligences. The new Adam.

Too fanciful? Realize that the technology is in essence already here. It is simply a matter of hooking us all up—a matter of good business. This is Japan’s vision, and we will be hearing more and more about it. Every corner of our lives will be filled with an alien blend of both kinds of intelligence. It will become increasingly difficult to tell which is which.


Think of that little girl again. Her smart little teddy bear will do a lot more than just amuse her while her parents are at work. From the time she is a year old, it will assume a significant part of her upbringing.

The next generation won’t feel and respond to this new situation as we will. It won’t think it strange. The children of that generation will have been assimilated. We are alone; no one before us or after us has stood or will ever stand where we do—between old and new Adams.

What Makes Man “Man”?

Old Adam and new Adam: these words are theologically charged. We’re not talking about lifestyle changes or how the marketplace will be different, but how man and his nature will look. If you can’t tell a machine from a man in the way it talks and acts (appearance doesn’t count), and your definition of man includes something so unprovable as a soul, who are you to declare that the machine isn’t human?

Once our society accepts behaviorism as its primary criterion for making judgments of any kind—and we know that this is pretty close to what has already happened—there is little to battle against. Christians, unfortunately, have much in their theology that is behavioristic. We don’t realize the extent to which our thinking is formed by our culture. We want to prove God by his behavior (God, of course, doesn’t care much about that). We want to prove his blessing by our behavior or by other Christians’ behavior. Truth for us comes only through what we observe; or, at least, that’s how we comfort ourselves that it is truth. Jesus understood the temptations of behaviorism when he told the people that they were foolish and obstinate to seek signs.

We have done the same with our emphasis on man’s rationality, his reason. We have said that we can reason our way to knowing God and to understanding man and his place in God’s creation. But we have a new man coming, one made of metal, another a combination of silicon chips and DNA, yet another a union of natural and artificial intelligence. If machines can reason, what does that do to our definitions, our fundamental presuppositions about life?

Look at the humanists, who are immediately set adrift with no self-image and no distinctiveness. This is a key to the entire issue. Man is not unique among creation any longer. What of those Christians who have seen in humanists a dangerous foe? Yet the day of the humanists is over. AI researchers in a few short years have so eroded the view of man from what it has been since the mid-eighteenth century that only a whisper of a once-held general belief remains. The match has moved to another arena. With the demise of humanism, Christians have lost an ally, which, though not all of us may have realized it, has stood with us against the influence of mechanistic science. The defense of humanism—the rationality of man—is gone.


Christians must come forward and take the field. Our theologians need to be keenly aware of what it means to be made in the image of God. We must promote our definition of man, one not based solely on his behavior or his reason. Unless we do, there will be no one left who understands enough even to raise the right questions, let alone answer them.

Listen to Robert Jastrow, director of NASA’s Goddard Institute for Space Studies: “In the 1990s, when the sixth generation appears, the compactness and reasoning power of an intelligence built out of silicon will begin to match that of the human brain. By that time, ultra-intelligent machines will be working in partnership with our best minds on all the serious problems of the day, in an unbeatable combination of brute reasoning power and human intuition. Dartmouth president John Kemeny, a pioneer in computer usage, sees the ultimate relation between man and computers as a symbiotic union of two living species, each completely dependent on the other for survival.… Child of man’s brain rather than his loins, it will become his salvation in a world of crushing complexity.… We can expect that a new species will arise out of man, surpassing his achievements as he has surpassed those of his predecessor, Homo Erectus. Only a carbon-chemistry chauvinist would assume that the new species must be man’s flesh-and-blood descendants, with brains housed in fragile shells of bone” (Time, Feb. 20, 1978).

Jastrow has put his finger on a number of crucial issues: A perception of the next evolutionary stage of man; the combination of biology and technology; and the disdain of “species prejudice” (that simply means the belief that man is somehow special, a belief held by Christians with good theological reason and by humanists for no particular reason at all).

Challenges At Hand

We have focused on artificial intelligence. It is only one of many allied issues confronting us, but it is the place to start. Computers make possible not only in conception but in practice the technology from which new life forms can be created and old ones manipulated—a domain hitherto the sole province of God. We will see genetic codes engineered, genes spliced, new biological structures developed that are part human and part plastic in their very genetic structure. The goal is perfect human beings. “There can be no twisted thought without a twisted molecule,” claims a prominent neurophysiologist. These are only some of the implications of AI.


All of this will be difficult to fight; the issues are so complex and our theology so entangled with non-Christian beliefs and attitudes. And Christ’s command to be in the world but not of it is going to be harder than ever to fulfill. We haven’t had—ever—such an intellectual and spiritual challenge.

Subtle and not-so-subtle arguments to refute AI have so far been unsuccessful. As soon as we say “you can’t,” they do. We don’t know how to argue against results, numbers, statistics, behavior. We’ve been beguiled by so-called scientific objectivity as the proof for truth. Christians cannot deny AI researchers the use of these arguments and then turn around and use them to validate their own witness. Christ understood this problem. He knew that by rejecting Satan’s methods once, as he did in the wilderness, he could never return to them. Unlike Christ, who was willing to live and die with the consequences, we have not been.

Our unwillingness has brought us to this impasse. Before we can confront the theological issues inherent in AI, we must stop depending on objective results, quantifiable results for our judgments. And yet, the theological issues are many. Who is man? Who is God? What does language mean? Where does the sacred fit? What is the place of salvation? AI also directly confronts the Incarnation and the Resurrection, the first because of its redefinition of man, the second because of its dualistic split between body and mind, with the body becoming an irrelevancy.

It is an enormous challenge, and time to get to work.

Tim Stafford is a free-lance writer living in Santa Rosa, California. He is a distinguished contributor to several magazines. His latest book is Do You Sometimes Feel Like a Nobody? (Zondervan, 1980).
