You say you can deliver the Straight Dope on any topic. Try this one. Who am I? or Who is the knower? or What is consciousness? (All the same question, roughly.) If consciousness (which we all experience intimately) is merely an epiphenomenon of the mind, which is an epiphenomenon of the brain, then there must be a physical mechanism in the brain that accounts for it. But then the same question can be (and must be) asked again: What submechanism within the broader mechanism is responsible for consciousness? –Jeremy Fields, Evanston

Whoa. Let’s see if I can get through this pup in 600 words or less.

Though small minds might consider it a thumb sucker, inquiry into consciousness has been one of the central debates in the field of artificial intelligence. In 1950, when “thinking machines” first seemed a real possibility, computer pioneer Alan Turing reasoned that since consciousness is subjective and thus inscrutable, the only way we can know if a computer is intelligent is to ask it questions. If its answers can’t be distinguished from those of a human, the computer has to be considered intelligent.

Some people took this a step further. A machine that passed the Turing test, they argued, wouldn’t merely simulate thought, it would honest-to-God think.

No way, said the skeptics. The best-known argument, formulated in 1980 by the philosopher John Searle, went like this: Suppose I’m locked in a black box with two slots in it marked “Input” and “Output.” Pieces of paper with black squiggles on them are periodically shoved through the Input slot. My job is to look up the squiggles in a rule book I’ve been given and shove pieces of paper marked with other black squiggles through the Output slot as the rule book directs.

Unbeknownst to me, the black squiggles are Chinese characters. Outside the black box, scientists have been inputting questions in Chinese, and I’ve been sending back Chinese responses. My answers have convinced the scientists that the black box understands Chinese. But I don’t understand Chinese at all! So how can a computer, which operates in the same way, be said to understand Chinese–by extension, to think?
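The rule-book procedure Searle describes is just symbol lookup, which is easy to sketch in code. Here’s a minimal illustration (the table entries and phrases are my own invented examples, not Searle’s): the program maps input squiggles to output squiggles without any representation of what either string means.

```python
# A minimal sketch of the Chinese Room as pure symbol manipulation.
# The "rule book" is a lookup table pairing input squiggles with output
# squiggles; nothing in the program knows what any of them mean.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "Fine, thanks."
    "天空是什么颜色？": "天空是蓝色的。",  # "What color is the sky?" -> "The sky is blue."
}

def chinese_room(input_slip: str) -> str:
    """Look the squiggles up in the rule book and pass back the result."""
    # Default reply for unrecognized squiggles: "Sorry, I don't understand."
    return RULE_BOOK.get(input_slip, "对不起，我不明白。")

print(chinese_room("你好吗？"))  # a fluent-looking reply, zero comprehension
```

The point of the thought experiment survives the sketch: the function returns correct-looking Chinese whether or not anyone (or anything) involved understands Chinese.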

Ah, said proponents. You don’t understand Chinese. But the system as a whole (you + the rule book + the box) does.

Nonsense, replied Searle. Suppose I memorize the rule book and dispense with the black box. Now I constitute the whole of the system. People hand me symbols; I respond with other symbols based on the rules. I appear to understand Chinese, but I don’t. I merely display a facility in Chinese syntax. Chinese semantics, the essence of thought, eludes me. Just so with computers.

You don’t get it, the other side responded. Is not the human brain a machine? Does it not consist of zillions of neurons, no one of which can be said to think? Yet the brain as a whole has thoughts, understands Chinese, etc. Could not this machine be replicated?

Of course, said Searle. Artificial intelligence may be possible. It’s just not likely to arise from computers as currently understood.

That’s the gut issue, you see. A key assumption among some AI researchers has been that computers and the human brain work in similar ways, and that with the rapid improvement of computer technology it’s only a matter of time until we’re able to produce artificial consciousness. In his book The Age of Spiritual Machines: When Computers Exceed Human Intelligence, computer scientist Ray Kurzweil predicts that soon we’ll be able to copy our brains into computers and thereby attain immortality.

To which the Searles of the world say: Yeah, right. There’s strong evidence that computers and the brain are fundamentally different. For example, AI researcher Douglas Lenat has been trying to teach a computer common sense. He estimates this will require 100 million discrete chunks of information, and 15 years into the project, he guesses it will take his team another 25 years to enter all the necessary data. Yet a normal human being learns it all effortlessly in childhood.

Nobody really knows how consciousness arises, but it seems evident there’s more to it than computer programs. Some believe the brain needs to be installed in a body. I venture to say that some breadth of sensory input and the ability to interact with your environment in complex ways–in short, to learn–are also required.

We’re nowhere near producing machines that can do this now, and claims to the contrary are just hype. Sure, maybe the computer Deep Blue can beat a human at chess, but all that means is that a team of programmers toiling for years can build a machine that surpasses a human at a single task. As Searle points out in a review of Kurzweil’s book, do we freak out because a pocket calculator can outdo us at math?

Art accompanying story in printed newspaper (not available in this archive): illustration/Slug Signorino.