Drivers here actually stop at yellow lights. Ticket takers at the ballpark hold up the line by talking with each customer. So far, this is Roger Schank’s only complaint about Chicago: “It’s slow!”
No one is likely to accuse Schank of such a vice. In addition to teaching computers how to think like human beings, he plans to fix our schools. A onetime “’60s hippie” who’s still a little shaggy around the edges, he came to Northwestern last summer, bringing with him about $1 million in Department of Defense contracts–plus the design for an institute that, by enabling business and the university to cooperate without compromising each other, may make military funding superfluous to his research. He has two books completed and about to be published: one technical (That Reminds Me of a Story) and one popular (The Food and Wine Lover’s Guide to the Mind).
Twenty years, 14 books, and 67 articles ago, at the age of 23, he became an assistant professor of linguistics and computer science at Stanford–“even then,” says friend and colleague Jerome Feldman, “he was not in any way retiring.” Then he went to Yale for 15 years, directing the artificial intelligence (AI) project there and starting four software companies. Now he has moved to Northwestern University, where he holds three faculty appointments: as John Evans Professor of Electrical Engineering and Computer Science, as professor of psychology, and as professor of education and social policy. He’s also chairman of the brand-new Institute for Learning Sciences–a business-university hybrid of his design housed at the Northwestern University/Evanston Research Park and funded by Andersen Consulting to the tune of $2.5 million a year. In October Chicago Computing’s Jonathan Day heralded Schank’s arrival as “one of the most exciting things to hit the Chicago computing community since the IBM PC.” If everyone else seems a little slow to him, perhaps it’s understandable.
“He’s getting three professorships–that’s almost unheard-of in the academic world–and in unrelated fields,” sniffs Harvey Newquist, publisher of the Arizona-based industry newsletter AI Trends and no buddy of Schank’s. “Basically, either Andersen’s got a guy who’s spread way too thin, or else Northwestern has.”
Don’t count on it. A generation ago, perhaps, computer science had little to do with psychology or education. Now, thanks in part to Schank’s AI research, they have a good deal in common.
Say “artificial intelligence” and most people think of chess-playing computers. World champion Garry Kasparov can still thrash the best electronic chess wizard, but most players can’t even come close. For years AI research focused on chess (and on teaching computers to prove mathematical theorems), but chess holds little professional interest for Schank. Most chess programs rely on “brute force”–the computer’s ability to quickly examine thousands of different possible future positions–and brute memory of standard variations, rather than the intuitive search patterns of human chess masters. There is a sense in which a standard chess computer, even a very good one, is just a powerful adding machine–and we don’t call a calculator “intelligent” just because it can total up a column of figures faster than we can.
Playing chess and proving theorems, to put it bluntly, are rather limited tasks compared to the real-world kind of intelligence people display, for instance, in simply managing an evening at a restaurant. Does this mean machines can’t think? Not necessarily, says Schank with characteristic sharpness: first we have to figure out whether people can think. (The psychology-department connection.)
“People are fooled by politicians, doctors, generals, lawyers, businessmen, and experts of every kind into thinking that they [the experts] are intelligent,” writes Schank (with coauthor Peter Childers) in The Cognitive Computer. “If a computer could be programmed so that its responses were indistinguishable from those of a human, we would probably believe that the machine was as intelligent as a human.”
This is a variation on the “imitation game” proposed by British computer scientist Alan Turing in 1950. Essentially, Turing said that the question “Can a machine think?” is meaningless unless interpreted very concretely. All we can tell is whether a machine’s output is distinguishable from a human’s; if it is not, then the machine must be thinking, because we don’t get anything more than output from people either.
What do you ask people to find out if they are intelligent? One thing you don’t do–provided that you are intelligent–is give them standardized tests. (The education-department connection.) “It is probable,” writes Schank, “that we could create a machine that would get consistently high scores on . . . the SAT, a test that supposedly predicts performance in college, but if you sent the machine to college it would be very disappointing. A machine could get 800 on a verbal SAT and still not have anything interesting to say.” Instead of giving tests, he says, you ask people what they’ve learned from their past experiences, you trade stories, you tell jokes, you ask them to explain themselves, and–at an almost unconscious level most of the time–you judge from the responses.
Plenty of perfectly intelligent people can’t play chess and can’t prove the Pythagorean theorem. But you might wonder about someone who went to Burger King and sat down at a table expecting a waiter to bring a menu. You would wonder even more if they had been to a fast-food joint before. People get along “intelligently” in such situations not because they reason them out anew every time, but because they have a “script”–a pattern of routine expectations derived from similar experiences before. (Schank developed the path-breaking “script” idea in a book coauthored with Yale psychologist Robert Abelson in 1977.) Because you’re familiar with a “restaurant script,” you can read “John went to a restaurant. He asked the waitress for coq au vin. He paid the check and left” and understand, without being told, that John ordered from a menu and that he ate the coq au vin. By contrast, Schank points out, a structurally similar story makes no sense if we have no script for it: “John went to the park. He asked the midget for a mouse. He picked up the box and left.”
“This story is difficult to understand,” writes Schank, “because we know of no standard situation in which midgets, parks, mice, and boxes relate. . . . If there existed a standard mouse-buying script in which mice typically were purchased from midgets in the park, who wrapped them in boxes, then we could connect the parts of the story.”
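To see what a script buys a program, here is a minimal sketch, assuming a toy representation of my own invention: the event names and the simple ordered list are illustrative only, and Schank’s actual script appliers, such as the Yale program SAM, were far more elaborate.

```python
# A toy "restaurant script": the stereotyped sequence of events a
# story understander expects, in order. (Illustrative only; real
# script-based programs used much richer representations.)
RESTAURANT_SCRIPT = [
    "enter restaurant",
    "read menu",
    "order food",
    "eat food",
    "pay check",
    "leave restaurant",
]

def fill_in_story(mentioned_events):
    """Given the events a story actually states, infer the ones the
    script says must have happened in between."""
    return [
        (event, "stated" if event in mentioned_events else "inferred from script")
        for event in RESTAURANT_SCRIPT
    ]

# "John went to a restaurant. He asked the waitress for coq au vin.
#  He paid the check and left."
story = ["enter restaurant", "order food", "pay check", "leave restaurant"]
for event, status in fill_in_story(story):
    print(f"{event:20} ({status})")
# The reader, human or program, concludes that John read a menu and
# ate the coq au vin, though the story never says so.
```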
Of course, scripts can change. Schank perhaps betrays his age by using the example of a diner who, encountering a fast-food place for the first time, has to realize that there is no menu, that one pays first, and so on. That first visit–the “explanation failure,” when your old script failed and you had to scramble to adapt it or build a new one–sticks out in memory as thousands of later visits, now routine, do not.
What does all this–which only seems obvious after someone has pointed it out–have to do with artificial intelligence? Quite a bit. AI turns out to be a kind of two-faced discipline: you need to understand very clearly how people think in order to program a computer to do so–and at the same time, you can check up on your understanding by seeing whether the computer in fact does what you meant it to do. CYRUS, a program Schank and student Janet Kolodner developed at Yale, understood enough of the diplomatic life and travels of then-secretary of state Cyrus Vance to be able to figure out that Vance’s wife and the wife of Israeli premier Menachem Begin had probably met at a state dinner–just from its fund of knowledge about their schedules and about the scripts of such meetings (a state banquet at which spouses are present, for example). This kind of reasoning is much closer to what people “intelligently” do every day than calculating the consequences of a chess move at 720,000 positions a second.
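The flavor of that inference can be caricatured in a few lines of code. This is a toy of my own devising, not CYRUS’s real memory organization; the event list and the “spouses present” script fact are invented stand-ins.

```python
# Toy knowledge base: events with participants, plus one piece of
# "script" knowledge: state dinners are the kind of event spouses
# attend. All names and structures are illustrative, not CYRUS's.
events = [
    {"type": "state dinner", "attendees": {"Cyrus Vance", "Menachem Begin"}},
    {"type": "negotiation",  "attendees": {"Cyrus Vance", "Menachem Begin"}},
]
SPOUSES_PRESENT = {"state dinner"}  # script fact about event types

def wives_probably_met(person_a, person_b):
    """The spouses probably met if both people attended some event
    whose script says spouses come along."""
    return any(
        e["type"] in SPOUSES_PRESENT and {person_a, person_b} <= e["attendees"]
        for e in events
    )

print(wives_probably_met("Cyrus Vance", "Menachem Begin"))  # True
```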
Not every attempt to reproduce human intelligence in silicon has been as straightforward. Schank has long been interested in stories (a script is just a kind of stereotyped story) and in how much of human conversation is in fact the exchange of stories we remind one another of. One of his students at Yale, James Meehan, spent considerable time devising a program called “TALE-SPIN.” Given some basic information and story-telling rules, the program was supposed to be able to create a reasonable facsimile of an animal fable. An early result was: “One day Joe Bear was hungry. He asked his friend Irving Bird where some honey was. Irving told him there was a beehive in the oak tree. Joe threatened to hit Irving if he didn’t tell him where some honey was.”
The program had produced a recognizable narrative. Irving had even answered Joe’s question about where the honey was–but the program didn’t know that the question had been answered. Meehan proceeded to add enough information about beehives so that the computer could figure out that Irving had told Joe something useful. The next version went: “One day Joe Bear was hungry. He asked his friend Irving Bird where some honey was. Irving told him there was a beehive in the oak tree. Joe walked to the oak tree. He ate the beehive.”
Oops! Schank writes, “Since the program contained a conceptual representation for source of food we merely told it that beehive was one. . . . but we forgot to mention that source as container is different from source as object. Finding a refrigerator will do when you are hungry, if you know to look inside it, and not to eat it.”
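In programming terms, the missing distinction looks roughly like this. The sketch below is a reconstruction for illustration, not Meehan’s code; the table of food sources and the plan_meal function are invented.

```python
# Two ways something can be a "source of food": you eat the thing
# itself (source as object), or you look inside it (source as
# container). TALE-SPIN's bug, in miniature: "beehive" was marked
# as a food source with no note of which kind it was.
FOOD_SOURCES = {
    "berry":        {"kind": "object"},  # eat it directly
    "beehive":      {"kind": "container", "contains": "honey"},
    "refrigerator": {"kind": "container", "contains": "leftovers"},
}

def plan_meal(thing):
    info = FOOD_SOURCES[thing]
    if info["kind"] == "object":
        return f"eat the {thing}"
    # source as container: the food is inside, so open it first
    return f"open the {thing} and eat the {info['contains']}"

print(plan_meal("beehive"))       # open the beehive and eat the honey
print(plan_meal("refrigerator"))  # not: "eat the refrigerator"
```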
As the stories grew more complicated, with extra characters and more abstruse concepts, the bugs in the program got harder to fix (and the “misspun tales” became funnier). But the conclusion to be drawn is the same. AI researchers have an experimental tool–the computer–that gives them a clear advantage over philosophers and psychologists who for millennia have thought about how people think.
The problem has always been that we’re people, and we take too much about ourselves for granted. Because the computer takes nothing for granted, it produces strange results when we fail to make our assumptions explicit. And making these “obvious” assumptions explicit is much harder work than it seems. That’s why Schank likes Edison’s quip about genius being 1 percent inspiration and 99 percent perspiration: “I don’t think we need more ideas. What we need is more execution of ideas. And that turns out to be really tough.”
I asked Schank if there was now a program like TALE-SPIN that could turn out good animal fables on demand. “Oh, no,” he said. “In general, the goal of our research is never to achieve anything in that sense.” In other words, we don’t need a mechanical Aesop so much as we need the lessons learned in trying to create one. Besides, a later student of Schank’s, Natalie Dehn, wrote a thesis arguing that the idea behind TALE-SPIN was too simplistic. “Humans tell stories based on past experiences,” says Schank now, “and there didn’t seem to be much point in having a story-telling program with no past experiences.”
But stories remain central to Schank’s research. Another part of what you mean when you say people are intelligent is that they tell relevant stories. If you told a coworker about the terrible delay that morning on the Howard el, you would find it reasonable if she responded by telling you about CTA service during the blizzard of ’79, but not if she told you about the cute thing her grandchild said last night.
In fact, Schank is happy to define intelligence as “the ability to say the right thing at the right time.” (If you think of the right retort two hours or two days later, “That’s not as intelligent.”) The amazing thing, he says, is not so much that we know so many different stories as that we are able to tell the right one at the right time.
Here’s the education-department connection again: This is also Schank’s theory of good teaching, whether done person-to-person or computer-to-person. “We remember physics teachers telling their stories visually with great drama,” he writes in a recent paper on education and training. “We remember history teachers telling good stories from history. We remember English teachers telling good stories about former students’ writing problems, or about the lives of famous authors. We often have trouble remembering anything else but the stories. The stuff we were quizzed on has long since vanished.
“A good teacher tells good stories. He tells them in a way that we can remember them, and he tells them at a point where we might be able to understand their significance.” A key question, then–both for understanding how people think and for getting a computer program to do the same–is about “reminding”: how do we mentally label our stories so that the right one will “come to mind” at the right time?
“Psychologists unfortunately have not studied spontaneous remindings,” writes sometime Schank collaborator Robert Abelson in his foreword to Schank’s book Dynamic Memory. “That reminds me . . .” does not lend itself to a series of carefully controlled and statistically tabulated experiments–the kind psychologists favor. This didn’t stop Schank. He got plenty of examples from friends and colleagues, enough to work on and enough to impress even the nonspecialist with the subtlety and intricacy of something we all do every day.
One student, for instance, was reading about how the standard QWERTY keyboard was invented for the purpose of slowing 19th-century typists down and keeping them from jamming the keys, and how nevertheless some people now try to claim it makes typing easier. The student was reminded, oddly enough, of a story in which the bride cut off one perfectly good end of the ham she was preparing for dinner. She claimed that was just how it was always done; it turned out to be a family custom dating back to her grandmother, who had never had a big enough pan!
What’s the connection? Explanation failure, says Schank. One part of our everyday scripts is that things we do customarily will, if examined, turn out to make sense. In both the QWERTY and ham stories, they didn’t. The ham story stood out in memory because it violated a script, just like that first visit to a fast-food restaurant.
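One crude way to picture the indexing problem: stories get filed under abstract labels describing the expectation failure, not under surface features like keyboards or hams. In the toy sketch below the labels and the matching rule are my inventions; settling on the right labeling scheme is precisely the hard research question.

```python
# Stories indexed by abstract features of the expectation failure,
# not by their surface content. The labels here are invented for
# illustration only.
story_memory = [
    {"story": "bride cuts end off ham; custom traces to grandmother's small pan",
     "index": {"custom examined", "original reason gone", "practice persists"}},
    {"story": "grandchild said something cute last night",
     "index": {"family", "charming remark"}},
]

def remind(current_indices):
    """Return stored stories whose index labels overlap the current
    episode's labels."""
    return [m["story"] for m in story_memory if m["index"] & current_indices]

# Reading about QWERTY: designed to slow typists down, yet defended
# today as if it made typing easier.
qwerty = {"custom examined", "original reason gone", "practice persists"}
print(remind(qwerty))  # the ham story pops up; the grandchild story doesn't
```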
Schank has already found ways to plug this abstract research into the day-to-day business problems of Andersen Consulting, which does an enormous amount of in-firm training and whose clients do even more. “There are certain concepts I’m interested in,” he says. “I can find them at Arthur Andersen, I can find them anywhere. One is what I call video case-based teaching. As you solve a problem, you run into things, difficulties that other people have run into. You want to ask an expert, ‘What about that?’ Doctors do this–they consult specialists. Lawyers do it all the time. Why shouldn’t everybody be able to?” Because there aren’t enough (genuine) experts to go around. So, Schank says, “you want everyone on a video disc–a library of everyone and every problem they’ve ever had. Then every time something goes wrong, up pops somebody who says, ‘I had that problem once, shall I tell you about it?’
“Andersen has a professional selling school out in Saint Charles, where real experts come in and tell war stories. We’re pretty close to a format, a labeling scheme–not necessarily the ‘correct’ one–and then we’ll be collecting and labeling those stories” for just such an AI-based video library.
Ultimately, Andersen sales trainees should wind up with software far different from the usual deadly monotony of “programmed” learning–more like having all the company’s top salespeople ready to respond to the latest dilemma with relevant stories from their own experience. The end result of AI is a computer you can talk with and learn something from.
Artificial intelligence is not a subject that rests comfortably in the ivory tower–it needs to be applied and tested on real-world problems. Having gone through two past career stages–professor, then professor-entrepreneur–Schank has been able to devise an institutional setup where he can do his research under university auspices, and apply it under business auspices, without (he hopes) treading on anybody’s toes.
Schank began his career as a more or less ordinary professor. (With emphasis on the “less”: he showed up at Stanford, not yet having finished his linguistics PhD at Texas, saying that he had some good new ideas and that they should hire him. They did.) Later on, at Yale, he tried his hand at being a professor-entrepreneur, founding Cognitive Systems, Inc., in 1979 and three other related small software firms after that.
He learned a lot about business–“now I can go in and see a CEO and have a conversation we both understand.” But he didn’t care for running a small business, and he came to loathe venture capitalists. “In the small-business world, the goal is what big company you can sell the company to. I could not believe that I was working with investors whose only interest was in how and when we would sell the company. I’d only just started it, for Christ’s sake!” (Schank was president of Cognitive Systems until 1983 and chairman of the board until 1988, and he still holds stock in it; since he left it has shown its first profit.)
Schank had an idea for a new kind of institute that would mediate between businesses and researchers in his field, but he knew he couldn’t bring it off in New Haven. Yale, he says, was too self-satisfied and too exclusively a humanities university to give him the flexibility he was looking for: “They’re not too sure about the 20th century.”
Still, Schank’s coming to Chicago was not the result of an academic raid on Yale by Northwestern. Andersen Consulting provided the funds and picked Northwestern, and Schank agreed to come. Andersen–the number-three U.S. company in computer-systems integration, corporate brother to the accounting and auditing firm of the same name–wanted a stronger computer-science presence in its headquarters town, preferably at the school where Arthur Andersen himself had once taught. Northwestern wanted to upgrade its offerings in computer science and cognitive science. Andersen turned out to be the first big company Schank approached that was willing to put up serious money, so the fit was perfect.
Schank seems pleased with his new university home. “The president of Yale is always a Yale graduate,” he says. “The president of Northwestern is not awestruck by its traditions.” (Schank chuckles that Yale even awarded him an honorary master’s degree, apparently because of some subliminal suspicion that an outsider really should have Yale credentials to teach there.) “Northwestern has added faculty slots for us, which Yale chose not to do. This is not a major research university in my field,” but, he adds with characteristic modesty, “we can fix that.” New on the faculty thanks to Schank are computer scientists Christopher Riesbeck and Lawrence Birnbaum from Yale, Gregg Collins from the University of Illinois, and Paul Cooper from the University of Rochester; psychologist Andrew Ortony from the U. of I.; and researcher on human understanding and computers Alan Collins from BBN Laboratories in Cambridge, Massachusetts.
Northwestern vice president for research and graduate-school dean David Cohen oversees the institute and is pleased. “He’s attracted a cohort [about 18] of outstanding graduate students in computer science, of national quality, that we couldn’t have attracted before.”
Schank’s Institute for Learning Sciences (a part of Northwestern) is a business-university hybrid, but one designed by a professor who knows he doesn’t want to be an entrepreneur. The faculty are paid by Northwestern (roughly $1 million a year) and have the usual teaching and recruiting duties. Schank does a good deal more recruiting than teaching. Besides thesis advising and teaching a graduate-student seminar, he’s scheduled to teach one undergraduate course for one quarter every other year. He also maintains his maverick status by refusing to read any memos not directed personally to him.
The institute’s location reflects its nature–it rents the entire third floor at 1890 Maple, an attractive new building in the burgeoning Northwestern University/Evanston Research Park. Its members research problems facing the big businesses that choose to pitch in and support it. Andersen is the founding sponsor; it will contribute $12.5 million over five years, and ten of its programmers work there. More corporate backers are expected soon.
It’s a good bet they’ll be big names, because you can’t buy in for less than $500,000 a year for three years. “I want to be on the big companies’ side,” says the ex-entrepreneur. The software they help finance “will ramify through the system. I can help them, and they want to help education.”
Schank’s experience with big businesses has not always been so positive. He had a summer job during college, transferring data from one computer printout to another for an insurance company; when he wrote a computer program to do the job faster and better, “I was told to shut up, and I received an unsatisfactory job rating at the end of the summer.”
More recently, Schank and his colleagues devised an intelligent financial-adviser program that would converse with bank clients in normal English about their financial needs. “It worked,” he says. “But the bank decided it didn’t want to put keyboards in the hands of the public.”
Yet Schank has considered the alternative. And if small businesses seem to exist primarily to be swallowed by big ones, then he is content to be on the side of the swallowers rather than the swallowees.
What exactly does Andersen Consulting get for its money and manpower? Nothing very specific, according to both Andersen managing director of research and development Bruce Johnson (“You take the best research people you can find, and you enable them”) and Schank (“I just walked around and picked [research topics]”). In general, Andersen’s interests dovetail with Schank’s research interests–the company trains thousands of its own people every year and would like to do the job better–but there is no specific software assignment, as some news reports have implied.
Still, there is reason to expect that some kind of useful software will emerge from Schank’s labors. When it does, the Andersen programmers, who will have been intimately involved with it from day one, will take it back to the company with them–so that when bugs pop up they’ll know what to do. Had Schank instead written the program for the company and dropped it in their laps, it might have been hard for them to get the hang of it.
This more efficient way of transferring technology also bypasses a typical conflict facing other business-university hookups. The university is (or should be) ideologically committed to free availability of information, publication in professional journals, etc. But the business customer or underwriter naturally feels it should have some proprietary right to the information it has paid to develop. The institute’s system, which Schank devised–“I think it’s the ultimate in tech transfer”–means that publication of a program wouldn’t limit Andersen’s head start with it. And the firm will have the extra advantage of a bevy of programmers who know its workings inside out.
Until now, most of the grants for Schank’s research have come from the U.S. Department of Defense. At first it wasn’t easy for him to accept that. “I was once a ’60s hippie. The first military contract I got, I went ‘Ohhhh.’ And I rationalized that every dollar they spent on me was one less they could spend on bullets. I don’t think that was actually true.”
Now that he is a famous researcher and not a hippie, Schank contends that DOD is “the least bureaucratic of all research-funding agencies in the country. I love it.” In contrast, says Schank, funding officials at the National Science Foundation tend to be academics on loan and are often interested in putting down applicants who may be more academically successful than they. “Military people are different. Say they were in the Air Force and were seen to be a little too smart, so the Air Force sent them to school. Now they have no particular ax to grind–they’re genuinely excited by research, they don’t get hung up with peer review.”
In that case, why bother setting up an institute whose nonuniversity funding will come mostly from big business? “Because I think peace is breaking out,” Schank says. Without a communist bogeyman to frighten taxpayers with, research funds from the military may become scarcer. “And I have to go where the money is. It’s been that way since the 15th century. The researcher has to have a patron, be it the Medicis or whoever.”
But Schank does have a goal beyond finding the most reliable patron he can. Like other educational reformers, he believes that learning should be customized to the individual student’s pace; unlike them, he expects to develop the tools to provide that–cheaply–to every student. A computer program that can tell appropriate stories to a company’s trainees is likely to resemble the programs needed to tell appropriate stories to students. Schank describes today’s run-of-the-mill “educational software” as a “disaster.” It tends to be either the same old dull workbooks in electronic form, or the same old trivial video games (“shoot down the verb”) in “instructional” form–both tied to a stultifying series of standardized tests.
Instead of this, says Schank in his 1988 book The Creative Attitude, “children must learn to think for themselves, to ask their own questions, not to constantly be answering the questions of someone else. There is no room in programmed learning”–or in most schooling–“for children to explore, to question, to create, or to fail in an interesting way.”
Truly educational software, says Schank, would have two parts. One would be a simulation (not unlike a flight simulator program): “Instead of having to memorize the immense amount of abstract knowledge involved in basic chemistry, for example, a student could create reactions and manipulate different chemicals graphically on the computer. The program would teach the student why certain things combine in certain ways, and allow him to learn from his mistakes. The student . . . would learn, rather than merely repeat, the principles of chemistry.” Better yet: “Most children want desperately to make something blow up in a chemistry lab. . . . There are good reasons for prohibiting them from doing this. But a computer-graphics experiment that taught a student all the principles involved in an explosive reaction and then allowed him to set one off would harm nobody if it blew up.”
Packaged with the simulation would also be a “Socratic Teaching Tool,” a computer tutor that would observe what the student did with the simulation and that would be “able to make suggestions, answer questions, and by intervening with the simulation itself, cause situations to present themselves to the student that might be particularly useful at a certain time”–including, say, a story about an explosion that failed to go off for some reason.
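A bare skeleton of that two-part design might look like the following. Everything here is hypothetical: the class names, the one-entry “reaction table,” and the tutor’s rules are invented to show the shape of the architecture, not the institute’s software.

```python
# Two cooperating parts: a simulation the student manipulates freely,
# and a tutor that watches each outcome and decides when to interject
# a question or a relevant story. All rules below are invented examples.
class ChemistrySimulation:
    def mix(self, a, b):
        # A toy reaction table standing in for a real simulation.
        reactions = {frozenset(["glycerin", "permanganate"]): "ignition"}
        return reactions.get(frozenset([a, b]), "nothing happens")

class SocraticTutor:
    def observe(self, outcome):
        if outcome == "nothing happens":
            return ("Nothing? Let me tell you about an explosion that "
                    "failed to go off. What was missing in that case?")
        if outcome == "ignition":
            return "It worked! Why do you think an oxidizer was needed?"
        return None

sim, tutor = ChemistrySimulation(), SocraticTutor()
for a, b in [("glycerin", "water"), ("glycerin", "permanganate")]:
    outcome = sim.mix(a, b)
    print(f"mix {a} + {b}: {outcome}")
    print(f"  tutor: {tutor.observe(outcome)}")
```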
The question, says Schank, is how to produce such programs. The education bureaucrats are not likely to be interested or to have the money if they are, and the venture capitalists–according to Schank–can’t reason well enough to see the merit in investing in something like this. The Institute for Learning Sciences is his end run around both.
His strategy is that big business will support the research needed to develop captivating, individualized training software. If Schank succeeds, the software will eventually be so appealing that parents will demand it for their children. By that time he expects to be able to go to the schools and offer them, say, an entire second-grade curriculum (or as much of it as it makes sense to turn over to the computer)–because the hard programming problems of such software will have already been worked out under Andersen’s and other companies’ auspices.
Schank has written an entire book on creativity, but he doesn’t expect his ideal school system to produce nothing but unorthodox question askers. He’ll settle for much less. “The electorate should be intelligent enough to vote on something more than who looks best on TV. People should be able to ask a few critical questions, be willing to point out that the emperor has no clothes.”
In short, Schank has had it with mindless rule following, and he has never been one to suffer fools gladly. “Now that I’ve moved, I have an apartment in New York to rent. The agent sends me checks, I pay her fee, it’s all been arranged, the apartment is rented. And now she sent my last check back because it wasn’t certified! Evidently somewhere there’s a company rule, the check has to be certified before the deal. But here the deal’s already been made! The schools just didn’t turn out someone who could point out this silliness to the boss.” It would be a delicious irony if well-programmed computers–in the popular mind the ultimate in mindless rule following–turned out to be able to promote that most human and creative activity of rule breaking when necessary.