If a machine learns to think, what will it have in mind?

My interest in artificial intelligence began quite simply. I was concerned about the amount of time my stepchildren were spending playing video games. Although I knew very little about video games, I saw the children’s personalities change when they had been playing for some time. And it seemed to me that the pattern memorization that video games require might teach an unhealthy approach to thinking.

I was also concerned about how addictive video games can be. Even more so than television, these machines isolate children from their parents and peers, and they promote the idea that if a person knows the right steps he can succeed. The more I thought about it, the more it seemed that someone connected to a machine—in this case a video game player—became a new creature, neither person nor machine, but something different from either.

My husband was as concerned as I. But his background in computers and his training as a mathematician helped him see the issue in a much larger context. We have briefly explored this context in the preceding article. Our investigation of the greater issue, the evolution of a new kind of human being (of which video games are a small, albeit lucrative, part), led us through complex books on how computers work and fanciful explorations of the wonders of artificial intelligence (AI) to quite popular books that have captured the imaginations of the general reading public.

I suspect that most people are in the same position that I am—nontechnical, certainly not mathematically literate. Yet, I wanted to know who was predicting and planning what in this field. I also wanted to know what the responsible Christian attitude should be. In reading the literature, I was relieved to find that I could indeed understand the researchers’ positions. It also did not take me long to discover that if I wanted help deciding the appropriate Christian response I wasn’t going to get it from anything that had been written to date.

The cognitive scientists, psychologists, psycholinguists, and AI researchers discount religion entirely and are, for the most part, strict behaviorists—and dualists as well. As I read their views I realized with some shock how deeply embedded in our theology certain of their presuppositions had become. And I realized how monumental a task it was to think and judge simply as Christians and not as late-twentieth-century Americans enamored with gadgetry—gadgetry we had been led to believe by IBM and its competitors was ordinary machinery. We have been glad for the lie, because we all know what machines are: mere tools. But what the books I’ll review admit is that intelligent machines are not mere tools. Computers do not necessarily do just what we tell them to, despite what we’ve heard on television or been told by our company’s information officer.

Three Primary Books

Of the seven books, three are central to the issue of old Adam, new Adam. One is specifically about the latest theory in AI. The others provide the context for understanding our much-publicized computer revolution. (This is not intended as an exhaustive review, but merely represents the main ideas from academicians and journalists.)

Probably the most quoted of these books, and the most controversial in AI circles, comes from Joseph Weizenbaum, a computer scientist from MIT who developed the ELIZA program—a simulation of Rogerian therapy. He called his program ELIZA because, as with Shaw’s original in Pygmalion, this Miss Doolittle could learn to speak better and better. That alone is significant. How can a software program implanted in a machine become greater than it was when it was written? This is the heart of artificial intelligence—to create programs that can learn just as human beings learn.
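To make the technique concrete: ELIZA worked by matching a visitor’s sentence against a list of patterns and echoing back a transformed fragment. The following is only a toy sketch of that pattern-substitution idea, with invented rules and pronoun swaps; it is not Weizenbaum’s actual program, and real learning would require far more than this.

```python
import re

# A toy sketch of ELIZA-style pattern substitution. The rules and the
# pronoun-reflection table below are invented for illustration.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i feel (.*)", "Tell me more about feeling {0}."),
    (r"my (.*)", "Why does your {0} concern you?"),
    (r".*", "Please go on."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence: str) -> str:
    """Return a canned response built from the first matching pattern."""
    text = sentence.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.match(pattern, text)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please go on."

print(respond("I am worried about my stepchildren"))
```

Even this crude mirror can feel uncannily attentive in conversation, which is part of what alarmed Weizenbaum about his own creation.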

To Weizenbaum’s credit, but to the dismay of what he calls the artificial intelligentsia, he decided that his program was immoral (he didn’t use those words, but that is the impact of what he said). He wrote Computer Power and Human Reason: From Judgment to Calculation (W. H. Freeman, 1976) to say that human beings and machines are not synonymous. Now, that may sound obvious, but the metaphor of man as machine has so increased in potency and frequency since the early nineteenth century that people have come to accept the metaphor as literally true. Weizenbaum also emphasizes that though there are many things computers could be made to do, there are some things that computers should never be made to do.

Such a radical departure from the cheerleading squad brought Weizenbaum notoriety and disdain from most of his colleagues. It was tantamount to heresy for him to suggest that researchers were ignoring ethics in the quest to create a machine that thinks. (AI supporters would use the word “who” where I used the word “that,” an important grammatical assumption on their part.)

Much of the literature in the field is a not-too-gentlemanly debate about whether machines can think. Weizenbaum considers this a red herring. The answer depends on what thinking means, how we define intelligence, and whose definition will prevail. Weizenbaum grants that at some point machines may approximate human thought; his own ELIZA showed the possibility. His concern is with “the proper place of computers in the social order.”

Weizenbaum is concerned with power and language. Most of us are mystified by programs and by the languages in which they are written. This hocus-pocus atmosphere makes us vulnerable and weak, says Weizenbaum. The elitist core, those who understand the language—those, we might say, who are writing the dictionary—become the rulers. Not only do they understand the technical languages, they redefine our sturdy everyday tongue so that we no longer know what we mean when we speak. We are left dumb.

The solution is not for everyone to enroll in a course on Fortran or LISP (the high-level language of AI). Nor is it to discard what Weizenbaum calls “instrumental reason,” that method of thinking on which science maintains itself. Nor does Weizenbaum suggest that we replace science. Rather, he argues for reason to be melded with intuition and feeling—to recognize that more than mere rationality makes man human.

Weizenbaum’s solutions are not satisfactory, which is the weakest part of the book. But he goes far in demythologizing the computer for the layperson (he has two fine chapters in which he explains in nontechnical language how computers work). And at least he raises some crucial questions, unlike Pamela McCorduck, who is deliriously happy with the idea that man as we have defined him is now an obsolete concept. For the general reading public she is head cheerleader.

McCorduck’s Machines Who Think (Freeman, 1979) and Weizenbaum’s book are written simply, but are for computer scientists and others who have more than a casual interest in the subject. McCorduck is a journalist who teaches a workshop in science writing at Columbia University. Her most recent book, written with Edward A. Feigenbaum, a leader in AI research at Stanford University, is The Fifth Generation: Artificial Intelligence and Japan’s Computer Challenge to the World (Addison-Wesley, 1983).

Machines Who Think is more serious than The Fifth Generation, and better written. McCorduck purports to give a history of AI research. She provides some interesting anecdotes of and interviews with the names to know: Claude Shannon, Marvin Minsky, and John McCarthy, among others. Her real purpose, however, is less to give a history of AI than to convince readers of its glamour, its sparkle, its inevitability, its absolute rightness. To do this, she defends the mechanistic metaphor of man, reducing him to mere bits of information stored in his genetic code. Not only is his biological system a mere software program, but his role in life is that of information processor. Man receives inputs, which are sent through his biological, neurological data banks and result in outputs of information.

McCorduck knows that this mechanistic view of humanity offends some people, so she argues that this metaphor has been with us ever since the days of Genesis. The only difference between our era and that of the Greeks, for example, is that we have been more successful in making the metaphor literal.

Metaphors are central to changing the way people think, perhaps because most people don’t understand what a metaphor is and how it functions. The more often we hear a metaphor, the sooner it ceases to be metaphor and becomes accepted as reality. So we become comfortable with the idea that man is another kind of machine. Thus, we can now consider McCorduck’s real question, “Can a machine think?” That, she says, all depends on what you define as machine and how you define thinking.

Remember Weizenbaum’s point about who is writing the dictionary?

McCorduck’s argument runs something like this. We know that the body is a mechanism, so why not the mind? It, too, has a physical process (her definition of a mechanism). Then if another physical entity that is similar to a mind exhibits intelligent behavior, why can’t we call it a mind as well, no matter what its casing? Here we have dualism and behaviorism. As long as something acts, or behaves, with characteristics we have always called human, then why not define it as human? What difference does it make if it is metal and not skin and bones?

McCorduck, confessing herself to be Hellenic rather than Hebraic in nature, is untroubled by ethical issues. She dismisses Weizenbaum as a disgruntled, used-up scientist who is now making a career out of debunking his colleagues.

In The Fifth Generation, which focuses on the economic threat of Japan, McCorduck continues her attack on those who question AI. To say that machines can’t think is as silly as saying women can’t think. “Intelligence was a political term, defined by whoever was in charge.” (I’m not certain I follow her reasoning there; I offer it as an example of how she argues.)

Although this book is irritatingly repetitive—McCorduck and Feigenbaum keep pounding in the nail of “knowledge information processing” long after it has been embedded in the wood—it is valuable as a tract for AI. The authors know that AI will solve the world’s problems, from poverty to aging. They know that the day of learning machines is nearly here. They know how much power these machines will give the country, or business, or person who holds their secrets. As good chauvinists, they want that country to be the United States, those businesses to be ours. They admit that “the computer will change not only what we think, but how.” Although most people agree with their statement, not everyone welcomes it as they do.

The Fifth Generation also contains helpful appendixes—summaries of current experimental and operational expert systems, names and addresses of companies working in AI, and a brief glossary that defines some key terms such as heuristics, expert systems, and knowledge base management system.

A More Technical Approach

Authors McCorduck and Weizenbaum perhaps unintentionally hint that all is not as unanimous among AI researchers as they would like us to think (we must discount Weizenbaum here, who is really no longer in the field). Roger Schank, for example, is working to develop machines that learn as a child does, machines that can be bored; until then, he says, no one can claim they think. This idea greatly disturbs McCorduck, who counts it a benefit that a machine can never become bored. Schank, more sophisticated in his goals, intends to create machines that are just like humans, even down to their feelings, intentions, and errors.

To do this, Schank has until recently concentrated on natural language processing, which is central to the success of AI and one of the things the Japanese are stressing. Now, though, he has shifted focus to memory, because he believes that giving machines “dynamic memory” will make them understand consciously what is being said to them and what is happening around them. “Our memories,” writes Schank, “are structured in a way that allows us to learn from our experiences. They can reorganize to reflect new generalizations—in a way a kind of automatic categorization scheme—that can be used to process new experiences on the basis of old ones. In short, our memories dynamically adjust to reflect our experiences. A dynamic memory is one that can change its own organization when new experiences demand it. A dynamic memory can learn.” This is the main point of Dynamic Memory: A Theory of Reminding and Learning in Computers and People (Cambridge, 1982).
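Schank’s description suggests a data structure that reindexes itself as experiences accumulate: new episodes remind the system of old ones, and shared features get promoted into generalizations. The sketch below is a loose illustration of that idea only, with an invented DynamicMemory class and made-up episodes; it is not the MOP machinery Schank actually proposes in the book.

```python
from collections import defaultdict

# A loose illustration of a self-reorganizing ("dynamic") memory:
# episodes are stored as feature sets, and features shared with past
# episodes are promoted into a table of generalizations. The class
# and the episodes are invented for this example.

class DynamicMemory:
    def __init__(self):
        self.episodes = []                       # raw experiences
        self.generalizations = defaultdict(int)  # feature -> times shared

    def store(self, episode: set) -> set:
        """Store an episode; return the features it is 'reminded of'."""
        reminded_of = set()
        for past in self.episodes:
            reminded_of |= episode & past
        self.episodes.append(episode)
        # Reorganize: repeatedly shared features become generalizations
        # that could index and interpret later experiences.
        for feature in reminded_of:
            self.generalizations[feature] += 1
        return reminded_of

mem = DynamicMemory()
mem.store({"restaurant", "menu", "paid"})
shared = mem.store({"restaurant", "menu", "left tip"})
print(shared)  # the overlap that triggers "reminding"
```

The point of the toy is the shape of the claim, not its scale: a memory that rewrites its own indexes as it goes is, in Schank’s terms, one that can learn.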

Note the title, which tells us two significant facts about the book. First, Schank is talking about theory—and as we discover, the theory is his own. He has not based the book on experimental psychology, research by cognitive scientists, or neurological studies. Rather, he asks, “How do I remember?” and then builds his theory accordingly. Although he had students answer questionnaires on how they remember, this is a personal inquiry. I’m not convinced that everybody’s memory works the way his does; mine doesn’t. My husband’s does, perhaps because of his mathematical training. This would not be a problem, except that in science the persuasiveness of the person often makes theory equal fact with little to support it (witness Darwin).

The second thing the title tells us is that Schank is linking computers and people naturally. Computers don’t remember the way people do—yet. But if we become accustomed to thinking of computers and people as synonyms, we will no longer know to ask the right questions about such an assumption.

Most of Schank’s book is quite technical, and it would be difficult to find in a bookstore. I include it here because he is a leader in the newest area of AI research. His book, as well as his essay in The Handbook of Human Intelligence (a hefty volume), is worth the effort.

Judas Iscariot

“Jesus said, ‘Friend, do what you have come for.’ ”

—Matt. 26:50

Call him a miscalculation.
Everyone has a few bad rolls.
Maybe Jesus was looking the wrong way
at the right moment, or the right way
at the wrong moment.
Maybe.
Call him a misunderstanding.
Christ’s kingdom somehow soured,
His clauses hobbled.
The magic lost its bite.
Call him a mistake.
While most of the pots bear the heat
and blast, this one cracks.
The fertilized fruit tree
sometimes decays.
The fruit thuds to the ground,
rots and steams. The roots curl.
Call him a miss,
near miss, bullseye, or dead center.
All of the above.
None.
Call him whatever you like.
Jesus called him, “Friend,”
and warned him to finish
his call.

—Mark R. Littleton

Books That Are Fun To Read

Not all the books on AI are work to read or hard to find. Some of them are sheer fun and for that reason have captured best-seller status. They are not necessarily central to the philosophical issues and intellectual controversies surrounding the field. Yet they provide insight into the minds and attitudes of the people who actually make the systems that all of us, if the predictions are accurate, will soon have in our homes. (Business Week in a recent issue quoted an expert as saying that microcomputers will be as common as toilets in just a few years.)

The Soul of a New Machine, by Tracy Kidder (originally published by Little, Brown, now in paper with Avon and soon to be a film), won the 1982 Pulitzer Prize. It tells about the making of a new computer, manufactured by Data General in Massachusetts. Not only is it a fascinating adventure, but it provides great insight into how a computer mesmerizes those who work with it. A computer becomes “a face, a person to me—a person in a thousand different ways,” one of the computer hackers tells Kidder. (A hacker is someone addicted to hands-on programming, someone who can’t stay away from the keyboard.)

A leader of the project explained it to Kidder this way: “I loved writing programs. I could control the machine. I could make it express my own thoughts. It was an expansion of the mind to have a computer.… It really is like a drug, I think.… It was great for me to learn that priestly language. I could talk to God, just like IBM.” These are the people against whom Weizenbaum warns.

The mystery and addiction that surround computers, and to which The Soul of a New Machine testifies, have been explored and exploited by some of our most talented novelists today. We can read excerpts from them in a marvelous, intriguing book, The Mind’s I: Fantasies and Reflections on Self and Soul (originally published by Basic Books in 1981, now in paper from Bantam). This collection gathers philosophical essays, excerpts from science fiction, and papers from scientists, all touching on the theme of AI. Douglas R. Hofstadter, who wrote the Pulitzer Prize-winning Gödel, Escher, Bach in 1980, and Daniel C. Dennett, author of Brainstorms, “composed and arranged” the book. Hofstadter is a computer scientist, Dennett a philosopher. Here we find Jorge Luis Borges and Stanislaw Lem, as well as A. M. Turing, the father of the computer. Each selection ends with “reflections” by Dennett or Hofstadter. Both men are pro-AI.

The most pertinent essay is from John R. Searle, “Minds, Brains, and Programs.” He effectively shows the problems in “strong” AI. (He differentiates those who claim that computers are tools from those who declare that computers will be human; throughout this review I have been referring to strong AI.) Searle explains that here we find dualism, “not the traditional Cartesian variety that claims there are two sorts of substances, but it is Cartesian in the sense that it insists that what is specifically mental about the mind has no intrinsic connection with the actual properties of the brain.” This is important to note, because AI separates the mind from the brain, the program from the machine, which splits intelligence from its biological moorings.

What this will bring is the subject of Algeny (Viking, 1983), the final book I will mention. Author Jeremy Rifkin does not focus on computers but on bioengineering and the end of the age of Darwin. Taking his cue from Alvin Toffler in The Third Wave, Rifkin asserts that the Industrial Age is over; the language of the biotechnical age is the computer. He foresees the computer and life sciences coming together. Soon we will be programming in living tissues. (This, of course, is why the assumption of dualism is central to AI.) Intelligent machines will make possible sweeping changes in our civilization.

Rifkin’s book is must reading in conjunction with those on AI. I wish, though, that he had a better grasp of AI research. He seems to draw the battle lines too late with bioengineering (which is undoubtedly only one result of having intelligent machines). We need to focus on the cause and not the effect.

Where does this leave Christians? Certainly we can appreciate those who warn us about the possible results of AI. Yet, to put it simply, Weizenbaum and his supporters don’t know what to do. The humanists have been defeated; they have no spiritual resources. It is time we focused on the future, on the issues. Reading these books is a good place to start.

Reviewed by Cheryl Forbes.
