
Arguments addressed to the Ford Foundation that this was not the way to go had no effect. Foot-dragging had little more. (“Whose bread I eat, his songs I sing.”) At least, I thought, we should be making strong efforts to convert area studies into genuine social science. To do that, we should establish committees for comparative research on important topics, drawing together the appropriate area specialists. That would impel them to conceptualize what they were doing at a higher theoretical level.
We never found the funding to do much of this, and I was never certain how much understanding and support I had from the council staff. When I made my periodic speech on this topic, heads nodded in agreement, but no visible action followed. I am sure they were more realistic than I about how movable our funders were.
As I look at the scene today, however, I see more comparative analysis among cultures, much of it sponsored by SSRC, and I am correspondingly more positive about the long-run effects of the area studies programs. I don’t think I can claim any personal credit for these newer developments.
My service on the Board of the SSRC had little apparent effect on the direction of development of the social sciences, and probably was not highly productive. But it was educational to me, and extremely pleasant. In addition to its bimonthly meetings in New York, each autumn the council held a larger gathering at Skytop, a resort in the Poconos, where I had the opportunity to become good friends and engage in stimulating conversation with a great many leading social scientists.
In my activities with the Ford Foundation and with SSRC, I had my first experiences of seeking to influence large affairs, where it is never clear whether one’s efforts have any result but where a result even of size epsilon could be important. (Epsilon times infinity can be a large number.)

Chapter 11
Mazes Without Minotaurs
In my 1956 paper, “Rational Choice and the Structure of the Environment,” I wove around the metaphor of the maze a formal model of how an organism (a person?) could meet a multiplicity of needs and wants at a satisfactory level and survive without drawing upon superhuman powers of intelligence and computation. The model provided a practicable design for a creature of bounded rationality, as all we creatures are.
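The skeleton of that model is simple enough to sketch in a few lines of modern code. What follows is an illustration only, not the formalism of the 1956 paper: the number of doors per room, the density of goals, and the depth of lookahead are all assumptions invented for the example.

    import random

    # Illustrative assumptions, not values from the 1956 paper.
    BRANCHES = 4       # exit doors leading out of each room
    GOAL_RATE = 0.05   # chance that any given room satisfies the current need
    LOOKAHEAD = 5      # how many rooms deep the creature can see through a door

    def goals_visible_through_door():
        """Peer LOOKAHEAD rooms deep through one door; count goals in view."""
        return sum(random.random() < GOAL_RATE for _ in range(LOOKAHEAD))

    def rooms_searched_until_satisfied():
        """Walk room to room until some door shows a goal; return rooms walked.

        The creature never optimizes over the whole maze: it accepts the
        first path that is good enough -- the essence of satisficing.
        """
        rooms = 0
        while True:
            if any(goals_visible_through_door() > 0 for _ in range(BRANCHES)):
                return rooms     # a goal is in sight; the search ends
            rooms += 1           # nothing in view; move on to the next room

    random.seed(0)
    trials = [rooms_searched_until_satisfied() for _ in range(10_000)]
    print("average rooms searched:", sum(trials) / len(trials))

Even with these modest parameters the average search is short, which is the point of the model: a creature with limited vision and no grand plan can still satisfy its needs reliably.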
I was so pleased with the paper’s account of rationality that a year later I found myself writing a short story, “The Apple,” fashioned after it. The maze of my story, unlike the labyrinth of the Cretan myth, provides no heroics, no Theseus to seek out the fearsome Minotaur at its center and then escape by following the thread given him by Ariadne. Its central figure is not Theseus but Hugo, an ordinary man. The story describes Hugo’s life, much like every human life, as a search through a maze. In doing so it strips the mathematical wrappings from the technical paper that provided its metaphor.
Some light is thrown on my preoccupation with mazes, and hence my urge to write “The Apple,” by a conversation I had with the writer Jorge Luis Borges when I was in Buenos Aires in 1970.
A Conversation with Jorge Luis Borges
In December of 1970, Dorothea and I visited Argentina, where I was to give some lectures on management. In my correspondence about arrangements, I did something I have never done before nor since: I asked for an audience with a celebrity. For a decade, I had admired the stories of Jorge Borges (I didn’t then know his poetry), and had been struck by the role that mazes played in them. I wanted to know why. I wrote to him (in English, since I knew he was fluent in it):
My profession is that of social scientist, and I seek to understand human behavior by means of mathematical models (or, more recently, with simulation models programmed for computers). In 1956, I published an article which described life as a search through the corridors of a labyrinth, greatly branching and populated by a large number of goals to attain.
A few years later I stumbled upon Ficciones,* in particular the story “La Biblioteca de Babel,” to discover that you too conceive of life as a search through the labyrinth. I asked if ever there had occurred a comparable transmigration, from the inert body of a mathematical model to the live flesh of literature.

* Ed Feigenbaum brought the book to my attention during the academic year 1960–61, when I was at RAND.
(I did not admit to Borges, then or later, that in 1956 I had also tried to manage a transmigration of the soul of my mathematical model into a short story. You will see the result of that attempt later in the chapter.)
I met Borges in his beautiful high-ceilinged baroque office in the Biblioteca Nacional. We had several hours of conversation (in English), of which I reproduce here only the portion relevant to labyrinths.
BORGES: But I’d like to know why you are interested in having this conversation.
SIMON: I want to know how it was that the labyrinth entered into your field of vision, into your concepts, so that you incorporated it in your stories.
BORGES: I remember having seen an engraving of the labyrinth in a French book when I was a boy. It was a circular building without doors but with many windows. I used to gaze at this engraving and think that if I brought a loupe close to it, it would reveal the Minotaur.
SIMON: Did you see it?
BORGES: Actually, my eyesight was never good enough. Soon I discovered something of the complexity of life, as if it were a game. In this case I am not referring to chess. Perhaps I can express it with a poem:
I Have Become Too Old for Love

My love
has made me old.
But never so old as not to see the vast night
that envelops us.
Something hid deep in love and passion
still amazes me.
Here there is a play on words. In English, the word for “labyrinth” is maze and for “surprise,” amazement. There is a clear semantic connotation as well.
This is the form in which I perceive life: a continual amazement; a continual bifurcation of the labyrinth.
SIMON: What is the connection between the labyrinth of the Minotaur and your labyrinth, which calls for continual choice? Does the analogy go beyond the general concept?
BORGES: When I write, I don’t think in terms of teaching. I think that my stories, in some way, are given to me, and my task is to narrate them. I neither search for implicit connotations nor start out with abstract ideas; I am not one who plays with symbols. But if there is some transcendental explanation of one of my stories, it is not for me to discover it; that is the task of the critics and the readers.
I write for the tale itself, simply out of interest in the characters and thoughts that perhaps will also interest others. The critics and scholars have attributed all sorts of intentions to me; that this or that story should evidence some specific political or religious ideology, even a metaphysical one. Perhaps the intention is, in me, subconscious and not at a conscious level. Nor do I try to raise it to this level.
I suppose this can be an illusion, but I believe that those sorts of things are proper to the explanatory writing of thinkers, and I am not a thinker, except in the measure that all men are.
SIMON: Without doubt there are clear differences among the distinct labyrinths that appear in your works. Clearly in that of “The Library of Babel” you start from an abstraction.
BORGES: Not true! I can tell you how this story spewed out. I worked in a small public library on the west side of Buenos Aires. I worked nine years in this library with a miserable wage, and the people who worked there were very disagreeable. They were ignorant people, stupid really. And this gave me nightmares.
One day I said to myself that my entire life was buried in this library. Why not invent a universe represented by an interminable library? A library where one can find all the books that have been written. At the same time, I read something about permutations and combinations, and saw in this library possibilities little less than infinite. And this is an example of a story where I know the origin of this theme.
The concept of this library evokes in me my deepest, most intrinsic, pleasures. I felt truly happy when writing about it. And it was not merely an intellectual happiness. One feels this kind of bliss.
SIMON: And why are you attracted so strongly to the idea of the Minotaur?
BORGES: It is curious. That idea does not attract me so much as another name attributed to this mythological being. I encountered the name of Asterión in a dictionary. It held connotations of a heavenly body or stars. It is an image that I always thought readers could enjoy.
SIMON: I find definitely that the concept of the labyrinth has a unity, truly conceptual, in your writings, notwithstanding the very interesting differences in the specific hues that are given it by every story or narration.
BORGES: In truth, I believe that this unity arises because all of my stories that speak of the labyrinth respond to a particular state of my spirit that carries me precisely to this theme.
SIMON: As to your ideas on combinatorial analysis, what were your sources?
BORGES: I read a very interesting book. It was Bertrand Russell’s Introduction to Mathematical Philosophy. Then I was much interested by a book called The World and the Individual [Josiah Royce (1899)], which presented a very singular specimen of this theme. It presented the case of a map of England drawn on the very terrain of the island. And assume that the map itself is somewhere within the whole map. And within the first, the map of the map, and so on. That gives an idea of the infinite. From my father I inherited the taste for these forms of reasoning. He used to take me aside to converse or to ask me questions about my beliefs. On one occasion he took an orange and said to me, “In your opinion, is the flavor inside the orange?” I said, “Yes.” Then he asked me, “Good, then you think the orange is continuously tasting itself?”
SIMON: I would suppose that the resolution of these questions would lead you to a deep solipsism.
BORGES: In fact, my father didn’t send me to the philosophical sources. He only presented me with concrete problems. Much later he showed me a history of philosophy where I encountered the origins of all of these questions. In the same way my father taught me to play chess, although actually I have always been a poor player, and he a very good one. Moreover, my father transmitted to me the taste for poetry. His bookshelves were filled with such authors as Keats, Shelley, and other poets. And he also recited them from memory. Even today when I repeat verses of FitzGerald’s Omar Khayyam and some others, my mother says that she seems to be listening to my father.
SIMON: Someone told me that you read Don Quixote the first time in English.
BORGES: Yes, that’s true.
SIMON: That’s curious, because I first read it in Spanish. When I encountered it in English, the humor of Quixote lost all its subtlety.
BORGES: That’s true. The experience with translations is often like that.
Then Borges asked me about my work, and I began to talk about computers and the implications of a belief in the possibility of computer simulation of human thought for free will:
SIMON: This is the form in which I conceive free will: It resides in the fact that I am that which acts when I take a given action. And the fact that something has caused this behavior in no manner makes me (the I who acts) unfree.
So when we reach a bifurcation in the road of the labyrinth, “something” chooses which branch to take. And the reason for my researches, and the reason why labyrinths have fascinated me, has been my desire to observe people as they encounter bifurcations and try to understand why they take the road to the right or to the left.
BORGES: It seems to me that these sorts of things happen continually in my stories . . . but if I did not write these stories in specific terms, all would be artificial. That is to say, if I write these stories it is because I have to, or because I need them. Because if not, I could invent other stories, and these stories would have no meaning for me, or perhaps for the reader. Because the reader will feel that they are artificial literary exercises.
So Borges denied that there was an abstract model underlying “The Library of Babel” or “The Garden of Paths That Fork.” He wrote stories; he did not instantiate models. He was a teller of tales.
A Short Story
In my one attempt at story writing, I did start with a model, in fact, the model of a maze that I had just described in my 1956 paper on “Rational Choice and the Structure of the Environment.” No doubt that can be detected in the finished product. Nevertheless, I too had to write it. You may take it as philosophy, as a story, or as an artificial literary exercise. For those of you who don’t like equations, it will at least provide a relatively painless introduction to my theories of decision.
THE APPLE: A STORY OF A MAZE
There once was a man named Hugo who lived in a castle with innumerable rooms. Since the rooms were windowless, and since he had lived there since his birth, the castle was the only world he knew. His mother, who had died when he was very young, had told him of another world “outside,” lighted by a single large lamp that was turned on and off at intervals of ten or twelve hours. She had not seen the outside world herself, but stories about it had been handed down from generation to generation. Hugo was never certain whether his ancestors had really lived in, or viewed, this world, or whether the stories had been invented in some remote time to entertain the children of the castle. At any rate, he knew of it only through his mother’s tales.
The rooms of the castle were rectangular and very long; it took Hugo almost ten minutes of brisk walking to go from end to end of one of them. The walls at each end of each room were pierced by four or five doors. These doors were provided with locks so that they could be opened from one side, but not the other. The doors on the west end of a room opened into the room, while those on the east end opened out into adjoining rooms. When Hugo entered a room and the door shut behind him, he could not again return on the path along which he had come, but could only go on through one of the eastern doors to other rooms beyond.
At one time, Hugo became curious as to whether the rooms might be arranged in cycles, so that he could return to a room by a circuitous path, if not directly. It was not easy to decide, for many of the rooms looked much alike. For a time, he dropped a few crumbs of bread in each room through which he passed, and watched for evidences of his return to one of these. He never saw any of the bread crumbs again, but he was not certain but what they had been eaten by the mice that lived in the castle with him.
After the death of his mother, Hugo lived alone in the castle. Perhaps it seems strange that he or his ancestors had not long since died of hunger in this isolated life. Most of the rooms were quite bare, containing only a chair or two and a sofa. These Hugo found comfortable enough when he wanted to rest in his wanderings or to sleep. But from time to time, he entered a room where he found, on a small table covered with a white linen cloth, food for a quite adequate and pleasant meal.
Those of us who are accustomed to a wide range of foods, gathered for the pleasure of our table from the whole world, might not have been entirely satisfied with the fare. But for a person of simple tastes (and Hugo had not developed elaborate ones), the fruits and green vegetables, the breads, and the smoked and dried meats that Hugo found in these occasional rooms provided an adequate and satisfying diet. Since Hugo knew no other world, it caused him no surprise that the arrangements of the castle provided for his weariness and hunger. He had never asked his mother who it was that placed the food on the tables.
The rooms stocked with food were not very numerous. Had his education in mathematics not been deficient, Hugo could have estimated their relative number. For the connecting doors between the rooms were of clear glass, and peering through one of them, Hugo could see through a series of five or six doors far beyond. If any of the rooms in this range of vision had dining tables in them, he could see them from where he stood.
When Hugo had not eaten for some time, and was hungry, he would stand, in turn, before each of the four or five exit doors, and peer through them to see if food was visible. If it was not (as usually happened), he would open one of the doors, walk rapidly through the next room, and reinspect the situation from the new viewing points now available to him. Usually, within an hour or two of activity, he would finally see on his horizon a room with a table; whereupon he walked rapidly toward it, assured of his dinner within another hour. He had never been in real danger of starvation. Only once had he been forced to continue his explorations as long as four hours before a dining room became visible.
Since life in the castle was not very strenuous, and since the meals that were spread before Hugo from time to time were generous, he seldom took more than two meals a day. If, in the course of a stroll when he was not actually hungry, he came upon a dining room, he simply passed through it, seldom pausing to pick up even a snack. Sometimes he would search for a dining room before retiring, so as to be assured of a prompt breakfast when he awoke.
As a result of this generosity of nature (or of the castle’s arrangements, however these were brought about), the search for food occupied only a small part of Hugo’s time. The rest he spent in sleep and in idle wandering. The walls of most rooms were lined with attractive murals. Fortunately for him, he found these pictures and his own thoughts sufficiently pleasant and of sufficient interest to guard him from boredom, and he had become so accustomed to his solitary life that he was not bothered by loneliness.
Hugo kept a simple diary. He discovered that his time was spent about as follows: sleep occupied eight or ten hours of each twenty-four; his search for a dining room, about three hours; eating his meals, two hours. The remaining ten hours were devoted to idle wandering, to inspection of the castle’s decorations, and to daydreaming in the comfortable chairs with which the rooms were provided.
In this existence, Hugo had little need for personal possessions, other than the clothes he wore. But his mother had given him a small knapsack that he carried with him, containing a comb, a razor and strop, and a few other useful articles, and a single book, the Bible. The Bible, which was the only book he had known or even seen, had been his primer under his mother’s tutelage, and continued to provide him with an enjoyable and instructive activity, even though a large part of the “world outside” it talked about was almost meaningless to him.
You might suppose that the murals on the castle’s walls would have helped him to understand this world outside, and to learn the meaning of such simple words as “tree.” But the pictures were of little help, at least in any ordinary way, for the designs the castle’s muralists had painted on its walls were entirely abstract, and no object as prosaic as a tree (or recognizable as such to an inhabitant of the outside world) ever appeared in them.
The murals helped Hugo in another way, however. The long hours spent in examining them developed in him a considerable capacity for understanding and appreciating abstract relations, and it must be supposed that he read the creation myths and the parables of the Bible in much the same way, the concrete objects taking on for him an abstract symbolic meaning. That is to say, his way of understanding the Bible was just the reverse of the way in which it was written. The authors of these stories had found in them a means for conveying to humble people, in terms of their daily experiences, profound truths about the meaning of the world. Hugo, deprived of these experiences, but experienced in abstraction, could usually translate the stories directly back to the propositions they sought to communicate.
I do not mean to imply that Hugo completely understood all that he read. The story of the Garden of Eden was particularly puzzling to him. What attraction did the Tree of Knowledge possess that led Eve to such wanton recklessness, to risk her Edenic existence for an apple? If he did not know what a tree was, he was familiar enough with apples, for he had often found these on the linen-covered dining tables, and his mother had taught him their name. Hugo found apples pleasant enough in taste, but no more so than the many other things that were provided for his hunger. Perhaps in this case, the very fact of his actual experience interfered with his powers of abstraction and made this particular story more difficult to understand than the others. He did, in time, learn the answer, but experience and not abstraction led him to it.
On the afternoon of a winter day (as judged, of course, by the events and calendars of the outside world), Hugo, who had been relaxing in an armchair, felt the initial stirrings of hunger. In his accustomed way, he arose, walked to the east end of the room, and peered through the glass in search of a table. Seeing none, he opened the second door, walked through the next room, and repeated his surveillance. This time he saw, five rooms beyond the fourth door, the table for which he was searching. In less than an hour he had arrived in the dining room ready to enjoy the meal that was waiting for him there.
But on this occasion, Hugo did something he never had done before. Before sitting down to his meal, he scanned the table to see what kind of bread had been provided. He saw in the middle of the table, surrounded with sausages and cheese, a freshly baked half loaf of dark rye bread. And as this met his eye, there came unaccountably to his nostrils (or more likely to his brain, since his nostrils could have had nothing to do with it) the odor of French bread baked with white flour, and accompanying this imagined odor, he felt a faint distaste for the meal before him.
If Hugo, at this critical moment in his life, had stopped short and pondered, the vague movements of his imagination might have quieted themselves, and his life could have gone on as before. But Hugo, though he had spent much of his life in reflection, had never before had occasion to deliberate deeply about a course of practical conduct, and he did not deliberate now. Without pausing further, he turned from the table, walked around it, and marched on quickly to the next set of doors.
No table was visible through the glass. He pulled open one of the doors, and resumed his rapid walk. At the end of the third room he saw again, through an exit door, a distant table. He peered hard at it to see if he could identify the food that lay on it, but the distance was too great. He walked, almost ran, toward it, and was delighted to find on entering the last room that a loaf of white French bread was included in the collection of items spread before him. He ate his dinner with great gusto, and soon afterward fell asleep.
Hugo’s subsequent development (or discovery) of his tastes and preferences was a very gradual matter, and for a time caused him no serious inconvenience. Although not every table was provided with French bread or with ripe olives (he soon began to develop a taste for ripe olives), a great number of them were. Besides, he did not insist on eating these delicacies at every meal. To be sure, the amount of time he spent daily in the search for food increased, but this meant merely that he could substitute a more serious purpose for some of his idle wanderings, which perhaps even increased slightly his pleasure in life.
But several major happenings foretold a more difficult future. On one occasion, Hugo passed by four tables in turn, because the food did not please him, and then, famished with hunger, hurried on for three hours more until he found a fifth, which was no different from the other four except that his hunger was now greater. For several days after this experience, Hugo was less particular in his diet.
At about the same time, Hugo discovered that his preferences were now extending also to the pictures on the castle’s walls. Twice, he found himself turning away from a door after a brief inspection of the room beyond, because the colors or designs of the murals did not please him. A few weeks later, he saw a distant dining room at a time when he was moderately hungry, but formed a dislike for the decorations of the rooms through which he must pass to reach it. In the second room he turned aside and peered through the other glass doors (those not leading to the table) to see whether there might be another meal prepared for him that could be reached through more pleasant surroundings. He was disappointed, and proceeded on his original path, but on later occasions he turned aside often with nothing more than a hope that his new path would provide him with a meal.
Now Hugo’s diary took on a very different appearance than before. First of all, almost all of his waking time was now occupied in the search for his preferred foods, a search that was further impeded by his distaste for certain rooms. Second, his diary now included more than an enumeration of the paths he took. It was punctuated with the feelings of pleasure and annoyance, of hunger and satiety, that accompanied him on his journeys. If he could have added these feelings together, he could have abbreviated the diary to a simple quarter-hourly log of the level of his satisfaction. This level was certainly now subject to violent fluctuations, and these fluctuations, in turn, sharpened his awareness of it.
Hugo felt himself helpless to blunt these sharpening prongs of perception whose prick he was now beginning to feel. Perhaps it is reading too much into his thoughts to say that “he felt himself helpless.” More probably, the idea did not even occur to him that his tastes and preferences might be matters within his control (and who is to say whether in fact they were?).
But if Hugo did nothing to curb his desires, he did begin to consider seriously how he was to satisfy them. He began a search for clues that would tell him, when he looked through a series of doors and saw a distant table, what kind of food he would find on it. He developed a theory that rooms decorated in green were more likely to lead to white bread than other rooms, while the color blue was a significant sign that he was approaching some ripe olives.
Hugo even began to keep simple records to test his predictions. He also developed a sort of profit-and-loss statement that told him how much time he was spending searching for food (and with what result) and how much his tastes in decoration were costing him in the efficiency of his search. (In spite of the propitiousness of green and blue for good meals, he really preferred the cheerfulness of red and yellow.)
To a certain extent, these scientific studies were successful, and served to reduce temporarily the increasing pressures on his time. But the trend revealed by the profit-and-loss statements was not reassuring. Each month, the time devoted to finding the best meals increased, and he could not persuade himself that his satisfaction was increasing correspondingly.
As Hugo became gradually more perceptive about his surroundings, and more reflective in his choices, he began also to observe himself, something that he had almost never done in the past. He found that his tastes in decoration were slowly changing, so that he actually began to prefer the green and blue colors that his experience had taught him were most likely to lead him to particularly desired foods. He even thought he detected a reverse effect: that his aversion for highly symmetrical murals, which seemed always to be present in rooms stocked with caviar, was spoiling his taste for that delicacy. But this sentiment was so weak that it might have been merely a construct of his imagination.
This gradual adaptation of his eyes to his stomach served somewhat to quiet Hugo’s anxiety, for he realized that it made his task easier. In retrospect, he wondered whether his initial preferences for certain kinds of murals had not developed unconsciously from eating particularly delicious meals in rooms similarly decorated.
Hugo’s researches, and the gradual reconciliation of his conflicting tastes, could only have postponed, not prevented, disaster if the growth of his demands had continued. At the time of which we are speaking, he had reached a truly deplorable state. As soon as he awoke each day, he seated himself at the table he had discovered before retiring. But however delicious the meal he had provided himself (even if the eggs were boiled to just the proper firmness, and the bread toasted to an even brown), he was unable to enjoy his meal without distraction. He would open his notebook on the table and proceed to calculate frantically what his objectives should be for the day. How recently had he eaten caviar? Was this a good day to search for peach pie, or, since he had eaten rather well the previous day, should he hold out for fresh strawberries, which were always difficult to find?
Having worked out a tentative menu, he would consult his notebook to see what his past experiences had been as to the time required to find these particular foods. He would often discover that he could not possibly expect to locate the foods he had listed in less than ten or fifteen hours of exploration. On occasions when he was especially keenly driven by the search for pleasure, he planned menus that he could not hope to realize unless he were willing to forgo meals for a week. Then he would cross off his list the items, or combinations of items, most difficult to find, but only with a keen feeling of disappointment, even a dull anger at the niggardliness of the castle’s arrangements.
Again before he retired, Hugo always opened his notebook and recorded carefully the results of his day’s labor. He made careful notes of new clues he had observed that might help him in his future explorations, and he checked the day’s experiences against the hypotheses he had already formed. Finally, he made a score-card of the day’s success, assigning 10 or 15 points each to the foods he particularly liked (and a bonus of 5 points if he had not eaten them recently), and angrily marking down a large negative score if hunger had forced him to stop before a table that was not particularly appetizing. He compared the day’s score with those he had made during the previous week or month.
A period of two or three months followed during which Hugo became almost wild with frustration and rage. His daily scores were actually declining. Fewer and fewer of the relatively abundant items of his diet seemed to him to deserve a high point rating, and negative scores began to appear more and more frequently. The goals he had set himself forced him to walk distances of twenty or thirty miles each day. Although he often found himself exhausted at the end of his travels, his sleep refreshed him very little, for it was disturbed by nightmarish visions of impossible feasts that disappeared before his eyes at the very moment when he picked up knife and fork to enjoy them. He began to lose weight, and because he now begrudged the time required to care for his appearance, his haggardness was further emphasized by a stubbly beard and unkempt hair.
Midway one day that had been particularly unsuccessful, Hugo, almost at the point of physical collapse, stumbled into an armchair in the room through which he was walking, and fell into a light sleep. This time, unaccountably, he was troubled by no dreams of food. But a clear picture came to him of an earlier day, some two years past, when he had been sitting, awake, in a similar chair. Perhaps some resemblance between the sharply angular murals of the room he was in and the designs of that earlier room had brought the memory back to him. Whatever the reason, his recollection was extremely vivid. He even recaptured in this dream the warm feeling of comfort and the pleasant play of his thoughts that had been present on that previous occasion. Nothing of any consequence happened in the dream, but it filled Hugo with a feeling of well-being he had not experienced for many months. An observer would have noticed that the furrows on his forehead, half hidden by scraggly hair, gradually smoothed themselves as he dozed, and that the nervous jerks of his limbs disappeared in a complete relaxation. He slept for nine or ten hours.
When Hugo awoke, the dream was still clear in his mind. For a few moments, indeed, his present worries did not return to him. He remained seated in the chair admiring the designs on the wall opposite: bold, plunging lines of deep orange and sienna, their advance checked by sharp purple angles. Then his eye was caught by the white page of his notebook, lying at his feet where it had slipped from his sleeping fingers. A pain struck deep within him as though a bolt had been hurled from the orange and purple pattern of the wall. Sorrow, equally deep inside him, followed pain, and broke forth in two sobs that echoed down the hall.
For the next few days, Hugo had no heart for the frantic pursuit in which he had been engaged. His life returned very much to its earlier pattern. He rested, and he wandered idly. He accepted whatever food came his way, and indeed, was hardly aware of what he ate. The pain and sorrow he had felt after his dream were diffused to a vague and indefinable sadness, a sadness that was a constant but not harsh reminder of the terror he had passed through.
It was not long, however, before he felt the first stirrings of reviving desire, and began again in a cautious way to choose and select. He could not bear to open the notebook (though he did not discard it), but sometimes found himself thinking at breakfast of delicacies he would like to eat later in the day. One morning, for example, it occurred to him that it had been a long time since he had tasted Camembert. He searched his memory for the kinds of clues that might help him find it, and passed two tables that day because he saw no cheese on them. Although his search was unsuccessful, his disappointment was slight and did not last long.
More and more, he discovered that after he had had a series of successful days, his desires would rise and push him into more careful planning and more energetic activity. But when he failed to carry out his plans, his failure moderated his ambitions and he was satisfied in attaining more modest goals. If Camembert was hard to obtain, at least ripe olives were reasonably plentiful and afforded him some satisfaction.
Only this distinguished his new life from that of his boyhood: then he had never been pressed for time, and his leisure had never been interrupted by thoughts of uncompleted tasks. What he should do from moment to moment had presented no problem. The periodic feelings of hunger and fatigue, and the sight of a distant dining room had been his only guides to purposeful activity.
Now he felt the burden of choice: choice for the present and for the future. While the largest part of his mind was enjoying its leisure (playing with his thoughts or examining the murals), another small part of it was holding the half-suppressed memory of aspirations to be satisfied, of plans to be made, of the need for rationing his leisure to leave time for his work. It would not be fair to call him unhappy, nor accurate to say that he was satisfied, for the rising and falling tides of his aspirations always kept a close synchrony with the level of the attainable and the possible. Above all, he realized that he would never again be free from care.
These thoughts were passing through Hugo’s mind one afternoon during a period of leisure he had permitted himself. He now had time again for occasional reading, and he was leafing the pages of his Bible, half reading, half dreaming. As he turned a page, a line of the text called his mind to attention: “. . . and when the woman saw that the tree was good for food, and that it was pleasant to the eyes . . .”
This time no recollection of apples seen or tasted impeded the abstraction of his thought. The meaning was perfectly clear, no more obscure than in the other stories he enjoyed in this book. The meaning, he knew now, lay not in the apple, but in him.
Of course, we have no way of knowing for sure what that meaning, so clear to Hugo, was. We can only conjecture, empathizing with the trials of his journey, interpreting them in the light of our own experiences. My own conjecture is that Hugo found a meaning not very different from the one I have arrived at, journeying through the maze of my own life. If it were not so, my experience would have falsified my theory, the model from which “The Apple” was drawn.

Chapter 12
Roots of Artificial Intelligence
The most important years of my life as a scientist were 1955 and 1956, when the maze branched in a most unexpected way. During the preceding twenty years, my principal research had dealt with organizations and how the people who manage them make decisions. My empirical work had carried me into real-world organizations to observe them and occasionally to carry out experiments on them. My theorizing used ordinary language or the sorts of mathematics then commonly employed in economics. Although I was somewhat interdisciplinary in outlook, I still fit rather comfortably the label of political scientist or economist and was generally regarded as one or both of these.
All of this changed radically in the last months of 1955. While I did not immediately drop all of my concerns with administration and economics, the focus of my attention and efforts turned sharply to the psychology of human problem solving, specifically, to discovering the symbolic processes that people use in thinking. Henceforth, I studied these processes in the psychological laboratory and wrote my theories in the peculiar formal languages that are used to program computers. Soon I was transformed professionally into a cognitive psychologist and computer scientist, almost abandoning my earlier professional identity.
This sudden and permanent change came about because Al Newell, Cliff Shaw, and I caught a glimpse of a revolutionary use for the electronic computers that were just then making their first public appearance. We seized the opportunity we saw to use the computer as a general processor for symbols (hence for thoughts) rather than just a speedy engine for arithmetic. By the end of 1955 we had invented list-processing languages for programming computers and had used them to create the Logic Theorist, the first computer program that solved non-numerical problems by selective search. It is for these two achievements that we are commonly adjudged to be the parents of artificial intelligence.
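The Logic Theorist itself is described in the chapters that follow; here it may help to show, schematically, what “selective search” means. The sketch below is a generic best-first search skeleton, not the Logic Theorist’s actual program; the functions is_goal, successors, and promise stand in for whatever a particular problem supplies.

    import heapq
    from itertools import count

    def selective_search(start, is_goal, successors, promise):
        """Best-first search: always expand the most promising state next.

        `promise` scores a state (lower is better).  Unpromising states
        sink in the queue and most are never examined at all, which is
        what makes the search selective rather than exhaustive.
        """
        tie = count()                   # breaks ties between equal scores
        frontier = [(promise(start), next(tie), start)]
        seen = {start}
        while frontier:
            _, _, state = heapq.heappop(frontier)
            if is_goal(state):
                return state
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    heapq.heappush(frontier, (promise(nxt), next(tie), nxt))
        return None

For the Logic Theorist, one may loosely imagine the states as logic expressions, the successors as applications of rules of inference, and the measure of promise as a rough match between an expression and the theorem to be proved; those particulars are only suggested, not reproduced, by this skeleton.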
Put less technically, if more boastfully, we invented a computer program capable of thinking non-numerically, and thereby solved the venerable mind/body problem, explaining how a system composed of matter can have the properties of mind. With that, we opened the way to automating a wide range of tasks that had previously required human intelligence, and we provided a new method, computer simulation, for studying thought. We also acquired considerable notoriety, and attracted critics who knew in their hearts that machines could not think and wanted to warn the world against our pretensions.
In this and the following chapters I will give a blow-by-blow account of our research during 1955 and 1956, and place it in the intellectual atmosphere, the zeitgeist, within which it took place. I will present, of course, an egocentric view of that atmosphere, emphasizing how it influenced the ideas of our research group.
To understand how the Logic Theorist came about, we have to roam over several disciplines, including psychology, logic, and economics, and examine their views of the world just before our effort got under way. Those world views both shaped our own outlook and defined the constraints and assumptions we had to modify in order to go forward.
Cognitive Psychology before 1945
On the American side of the Atlantic Ocean, there was a great gap in research on human thinking from the time of William James almost down to World War II. American psychology was dominated by behaviorism, the stimulus-response connection (S-R), the nonsense syllable, and the rat. Cognitive processes (what went on between the ears after the stimulus was received and before the response was given) were hardly mentioned, and the word mind was reserved for philosophers, not to be uttered by respectable psychologists.
My footnotes in Administrative Behavior show that William James and Edward C. Tolman were my principal sources among American psychologists. Tolman was the farthest from the dominant behaviorists (except for immigrant Gestalt psychologists from Europe). In his principal book, Purposive Behavior in Animals and Men (1932), he treated humans (and rats) as goal-seeking, hence decision-making, organisms, whose behavior was molded by the environment. But although well respected, Tolman remained at the edge of mainstream American psychology.

In Europe, psychologists were less preoccupied with rigor and were willing to use data from verbal protocols, paying more attention to complex behavior. In Remembering (1932), the English psychologist Frederic C. Bartlett examined how information is represented “inside the head” and modified by the processes that store and retrieve it. The Würzburg school in Germany, and Otto Selz and his followers, had similar concerns for the processes of complex thought and the ways in which the information used in thought is organized and stored in the head.
This point of view, carried forward by the Gestaltists Max Wertheimer and Karl Duncker, was hardly known in the United States until after World War II. Similarly, before the war, Jean Piaget’s work on the development of thought in children was familiar to some American educational psychologists (and to me through Harold Guetzkow) but to hardly any American experimental psychologists.
Apart from Tolman, one other viewpoint in prewar American psychology departs from behaviorism: the “standard” viewpoint of physiological psychology, well expressed by Edwin G. Boring in the preface to The Physical Dimensions of Consciousness:
[T]he simple basic fact in psychology is a correlation of a dependent variable upon an independent one. Ernst Mach made this point and B. F. Skinner took it up about the time this book was being written. He created the concept of “empty organism” (my phrase, not his), a system of correlation between stimulus and response with nothing (no C.N.S., the “Conceptual Nervous System,” his phrase, not mine) in between. This book does not go along with Skinner . . . , but rather argues that these correlations are early steps in scientific discovery and need to be filled, for the inquiring mind dislikes action at a distance, discontinuities that remain “unexplained.” Thus my text undertook to assess the amount of neurological filling available in 1932, how much fact there was ready to relieve the psychophysiological vacuum. [Boring 1933, pp. vi–vii]
This last sentence of Boring separates his psychophysiological viewpoint (typified also by Karl Lashley) from both behaviorism and our own approach. He assumes that: (1) the “empty organism” is to be filled with explanatory mechanisms, an assumption accepted by all psychologists except radical behaviorists like Skinner; but also (2) the explanatory mechanisms are to be neurological.
Al, Cliff, and I did not share this second assumption, not because of in-principle opposition to reductionism but because we believed that complex behavior can be reduced to neural processes only in successive steps, not in a single leap. Physics, chemistry, biochemistry, and molecular biology accept in principle that the most complex events can be reduced to the laws of quantum physics, but they carry out the reduction in stages, inserting four or five layers of theory between gross biological phenomena and the submicroscopic events of elementary particles. Analogously for psychology, a theory at the level of symbols, located midway between complex thought processes and neurons, is essential.
In agreement with Boring and in contrast to our view, almost all American psychologists who were not behaviorists identified explanation in psychology with neurophysiology. This confounding continued into the postwar period with Donald Hebb’s influential The Organization of Behavior, and the confusion has today been inherited by those cognitive scientists who espouse parallel connectionist networks (“neural nets”) to model the human mind.
Since information processing theories of cognition represent a specific layer of explanation lying between behavior (above) and neurology (below), they resonate most strongly with theories that admit constructs of this kind. Would the forerunners of our own work, principally Selz, the Gestaltists and their allies, be pleased to be labeled “information-processing psychologists,” and would they accept our operationalizing their vague (in our eyes) concepts? With or without their consent, we acknowledge our debt to them.
The Influence of Formal Logic
To build a successful scientific theory, we must have a language that can express what we know. For a long time, cognitive psychology lacked a clear and operational language. Advances in formal logic brought about by Giuseppe Peano, Gottlob Frege, and Alfred North Whitehead and Bertrand Russell around the turn of the century provided it.
The relation of formal logic to psychology is often misunderstood. Both logicians and psychologists agree nowadays that logic is not to be confused with human thinking.* For the logician, inference has objective, formal standards of validity that can exist only in Plato’s heaven of ideas and not in human heads. For the psychologist, human thinking frequently is not rigorous or correct, does not follow the path of step-by-step deductionin short, is not usually “logical.”
* An influential coterie of contemporary artificial intelligence researchers, including Nils Nilsson, John McCarthy, and others, believe that formal logic provides the appropriate language for A.I. programs, and that problem solving is a process of proving theorems. They are horribly wrong on both counts, but this is not the place to pursue my quarrel with them, beyond the comments in the next paragraphs.

How, then, could formal logic help start psychology off in a new direction?
By example, it demonstrated that manipulating symbols is as concrete as sawing pine boards in a carpentry shop; symbols can be copied, compared, rearranged, and chunked just as definitely as boards can be sawed, planed, measured, and glued. Symbols are the stuff of thought, but symbols are patterns of matter. The mind/body problem arises because of the apparent radical incongruity of “ideas,” the material of thought, with the tangible biological substances of the brain. Formal logic, treating symbols as material patterns (for example, patterns of ink on paper), showed that ideas, at least some ideas, can be represented by symbols, and that these symbols can be altered in meaningful ways by precisely defined processes.
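A few lines of list manipulation (my own illustration; the particular symbols and operations are arbitrary) make the carpentry analogy literal:

    # Symbols as material patterns: each operation below is as concrete
    # as an operation on pine boards -- copy, compare, rearrange, chunk.
    premise = ["all", "men", "are", "mortal"]

    copied     = list(premise)               # copy a symbol structure
    same       = (copied == premise)         # compare two structures
    rearranged = premise[2:] + premise[:2]   # rearrange its parts
    chunked    = [premise[:2], premise[2:]]  # chunk it into sub-patterns

    print(copied, same, rearranged, chunked)

Nothing mystical happens at any step; the “ideas” are patterns, and the processes that transform them are as definite as sawing and gluing.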
Even a metaphorical use of the similarities between symbol manipulation and thinking liberated my concept of thinking. Influenced by Rudolf Carnap’s lectures at the University of Chicago and his books, and by my study of Whitehead and Russell’s Principia Mathematica, I very early used this metaphor explicitly as the framework for my thinking about administrative decision making: “Any rational decision may be viewed as a conclusion reached from certain premises. . . . The behavior of a rational person can be controlled, therefore, if the value and factual premises upon which he bases his decisions are specified for him” (Simon 1944, p. 19).
Exploiting this new idea in psychology requires enlarging symbol manipulation to embrace much more than deductive logic. Symbols can be used for everyday thinking, for metaphorical thinking, even for “illogical” thinking. This crucial generalization began to emerge at about the time of World War II, though it took the appearance of the modern computer to perfect it.
Parallel to the growth of logic, economics, in close alliance with statistical decision theory, constructed new formal theories of “economic man’s” decision making. Although economic man was patently too rational to fit the human form, the concept nudged economics toward explicit concern with reasoning about action. But the economist’s concern only for reasoning that was logical, deductive, and correct somewhat delayed recognition of the common interests of economics and psychology.
In striving to handle symbols rigorously and objectivelyas objectslogicians gradually became more explicit about their manipulation. When, in 1936, Alan Turing, an English logician, defined the processor now known as a Turing machine, he completed this drive toward formalization by showing how to manipulate symbols by machine. I did not become aware of Turing’s work until later, but I did glean some of the same ideas from Claude Shannon’s Master’s thesis (1938), which showed how to implement the logic of Boolean algebra with electrical switching circuits.
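Shannon’s correspondence between circuits and Boolean algebra can be suggested in modern code rather than relays (an illustration of the idea, not of his circuits; the gate set and the compound example are my choices):

    # A switch-like element is just a function of 0/1 inputs.
    def AND(a, b): return a & b      # two switches in series
    def OR(a, b):  return a | b      # two switches in parallel
    def NOT(a):    return 1 - a      # a normally closed contact

    def xor(a, b):
        """A compound circuit wired from the three basic elements."""
        return OR(AND(a, NOT(b)), AND(NOT(a), b))

    # Exhaustively checking the truth table verifies the wiring.
    assert [xor(a, b) for a in (0, 1) for b in (0, 1)] == [0, 1, 1, 0]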
Finally, at this time the belief was also growing that mathematics could be used in biology, psychology, and sociology as it had been used in the physical sciences. Alfred Lotka’s Elements of Physical Biology (1924), embodying this view, foreshadowed some of the central concepts of cybernetics. My former teacher Nicholas Rashevsky was another pioneer in this sphere. Although there had been little use of mathematics in psychology before the war, a few psychologists (including Clark L. Hull) had begun to show strong interest in its potential.
The Postwar Setting for Machine Intelligence
The developments I have been tracing came to public notice at the end of World War II under the general rubric of cybernetics, a term that Norbert Wiener devised to embrace such elements as information theory, feedback systems (servomechanism theory, control theory), and the electronic computer (Wiener 1948). In other countries, particularly behind the Iron Curtain, the term cybernetics was used even more broadly to encompass, in addition, game theory, mathematical economics and statistical decision theory, and management science and operations research. Wiener presented, in the first chapter of Cybernetics, his version of the history of these developments. The computer as symbolic machine played a minor role in the early development of cybernetics; its main underpinnings were feedback and information theory, while the computer was simply “the biggest mechanism.”
World War II did not produce the cybernetic developments; more likely, in fact, it delayed them slightly. Their ties with formal logic had been evident earlier, as I mentioned, in Shannon’s Master’s thesis, as well as in a closely parallel paper by Walter Pitts and Warren McCulloch (1943) that provided a Boolean analysis of nerve networks, and in a paper by Arturo Rosenblueth, Norbert Wiener, and Julian Bigelow (1943) that provided a cybernetic account of behavior, purpose, and teleology. Most of the prominent figures in these developments had early in their careers been deeply immersed in modern symbolic logic. Wiener had been a student of Russell’s in Cambridge; much of von Neumann’s work in the 1920s and 1930s had been in logic; Shannon’s and Pitts and McCulloch’s use of logic has just been mentioned.
Work that too far anticipates its appropriate zeitgeist tends to be ignored, while work that fits the contemporary zeitgeist is recognized promptly. Von Neumann’s contributions to game theory in the 1920s were known to few persons before 1945; Lotka was read by a few biologists; a few logicians were aware of the rapid strides of logic and the esoteric discoveries of Kurt Gödel, Alan Turing, Alonzo Church, and Emil Post. All this changed in the early postwar years.
My own experiences in hearing lectures by Carnap, Rashevsky, and Schultz document this dramatic shift in the climate of ideas. Through these teachers, I learned of Lotka, of recent developments in statistical decision theory, of Gödel, but not immediately of Church or Turing. I found a few other teachers and fellow students who shared this vague sense of the zeitgeist. My dissertation reflected the intellectual climate of about 1940 to 1942.
The “invisible college” operated with some efficiency, then as now; news of the new contributions, published in widely scattered journals and books, spread rapidly. My attention was called to most of them either before publication or shortly thereafter. Similarly, the decision-making approach of my dissertation rapidly became known in economics and operations research.
Biology and the behavioral sciences did not long stay aloof from cybernetics. Its feedback notions soon were being used, particularly by the physiologically inclined, and most enthusiastically in Great Britain (Ashby 1952; Walter 1953). The computer-as-brain metaphor suggested itself almost immediately, followed almost as immediately by warnings against taking too literally the analogy between the neurological organization of the brain and the wiring of the computer (von Neumann 1958). Turing was one of the first to see the more fruitful analogy at a different level, the abstract level of symbol processing.
Feedback concepts had considerable, but relatively unspecific, impact on psychology, but the influence of the Shannon-Weaver information theory was clear and precise (Miller and Frick 1949). W. E. Hick proposed and tested a relation between response time and amount of information contained in the response, while others sought to measure the information capacity of the human sensory and motor channels. Limits on the applicability of information theory to psychology gradually became clear, and by the early 1960s the theory had become only a specialized tool for the analysis of variability.
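Hick’s relation is conventionally written as shown below. The formula is the standard later statement of the Hick-Hyman law, supplied here for concreteness rather than quoted from Hick’s paper:

    \mathrm{RT} = a + b \log_2(n + 1)

where n is the number of equally likely alternatives, a and b are empirically fitted constants, and the +1 allows for the subject’s uncertainty about whether any signal will occur at all.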
In the early postwar years, the European work on thought processes was just beginning to reach the United States through translation and migration. The translations of Duncker’s On Problem Solving and Wertheimer’s Productive Thinking appeared in 1945; Humphrey, in Thinking (1951), provided the first extensive English-language discussion of Selz’s research and theories; Katona’s Organizing and Memorizing and a number of Maier’s papers on problem solving had appeared by 1940 but were little noticed until the end of the war.
Information theory, statistical decision theory, and game theory had roused new interest in concept formation, and had suggested new research methods and theoretical ideas. Carl Hovland’s “A ‘Communication Analysis’ of Concept Learning” was published in 1952; while in the critical year 1956 there appeared both George Miller’s “Magical Number Seven” paper, setting forth a theory of the capacity limits of short-term memory, and Bruner, Goodnow, and Austin’s A Study of Thinking, a book that brought to the study of concept formation notions of strategy borrowed from game theory.
The war had produced a vast increase of research in human skills and performance (“human factors” research). Because much of this work was concerned with the human members of complex man-machine systems (pilots, gunners, radar personnel), the researchers could observe the analogies between human information processing and the behaviors of servomechanisms and computers. The title of Donald Broadbent’s “A Mechanical Model for Human Attention and Immediate Memory” (1954) illustrates the main emphasis in this line of inquiry. The research in human factors and in concept formation provided a bridge, also, between psychology and the newly emerging field of computer science, and a major route through which ideas drawn from the latter field began to be legitimized in the former.
A long-standing concern with formalization in linguistics received a new impetus with Zellig Harris’s Methods in Structural Linguistics (1951). The work of Noam Chomsky, which was to redirect much of linguistics from a preoccupation with structure to a concern with processing (that is, with generative grammars), was just beginning (see Chomsky 1955). I do not know to what extent these developments in linguistics derived from the zeitgeist I have described. Some connections with logic are clear (for example, Chomsky 1956). But early efforts in the mechanical translation of languages lay outside the mainstream of linguistics (see Locke and Booth 1955 for the history).
Digital Computers Enter the Scene
Digital computers developed rapidly in the early postwar era. While logicians understood that they were universal machines (Turing machines), others viewed them primarily as arithmetic engines, working with numbers rather than with general symbols.

The use of mechanisms (robots, not computers) to illustrate psychological theories and enforce operationality had a long history, predating the computer. Boring (1946) surveys that history in his article “Mind and Mechanism.” And the developments in cybernetics produced a new pulse of mechanical robot building (Walter 1953; Ashby 1952), with which I had some contact at RAND beginning in 1952.
But all of these efforts were rather separate from simulation on the computer, which tended not toward activating mechanical beasts but toward programming game playing and other symbolic activities. The first checker program was coded in 1952 (Strachey), and other game-playing schemes are described in Bowden (1953). And Turing (1950), in a justly famous discussion, “Computing Machinery and Intelligence,” had put the problem of simulation in a highly sophisticated form, proposing the Turing Test to determine the ability of a computer to give answers to questions indistinguishable from human answers. The stage was set for the appearance of artificial intelligence.

Chapter 13
Climbing the Mountain:
Artificial Intelligence Achieved
The completely new turn that my life took in 1955 was the unanticipated result of my work in the Systems Research Laboratory at RAND and my contact there with computers. I have described in earlier chapters my experiences with the punched-card predecessors of modern computers, first at Chicago in 1938 for producing the statistical tables of the Municipal Year Book, then in Los Angeles at the IBM Service Bureau in 1941. During World War II, I heard vague rumors about more powerful computers being built at Aberdeen Proving Ground in Maryland for ballistic calculations. Soon after the war, these rumors became more concrete, and by 1945, when I first met John von Neumann, I was at least vaguely aware of modern digital computers.
My awareness and knowledge grew rapidly when I read, in 1949 or 1950, Edmund Berkeley’s Giant Brains (1949), an excellent account of the new machines. Berkeley was also selling a little toy do-it-yourself computer (the “Geniac”), constructed of nothing more than batteries and wires, which could be “programmed” by rewiring to do a variety of tasks. Buying one, I got some hands-on feel for the way computers did their work.
In 1950, I made an optimistic speech to business executives on the prospective use of computers in business, mentioning both linear programming and game theory as keys to sophisticated applications. A brief quotation from the talk, later published in the journal Advanced Management, will convey its flavor:
In describing these developments, which are now engaging the efforts of . . . scientists in a dozen locations, I do not wish to create undue anxiety in the minds of executives about their prospective obsolescence. Before today’s business executive and management engineer take their place with the mastodons, quite a span of years if not generations is likely to elapse. Nevertheless, I think it is fair to regard these researches, along with the social-psychological researches mentioned earlier, as portents that we are in time going to have theory in management: theory of the kind that predicts reality, and not the kind that is contrasted with practice. When that time comes, managers will be those who can handle and apply that theory. [Simon 1950, p. 4]
Computers were within my sphere of attention, but only computers used as number crunchers. In spite of the “giant brain” metaphor, there is little suggestion in this 1950 talk that the most important application of computers might lie in imitating intelligence symbolically, not numerically. Arriving at that insight was the critical step that was required for genuine artificial intelligence to emerge. Both Al Newell and I arrived at it in the early 1950s, but by somewhat different routes.
Allen Newell
I must say something about my closest partner in the venture, who remains a close associate and friend to the present day. Although this volume is speckled with vignettes of various of my associates, I can provide only a scanty one of Al. Our paths have meshed so intricately over such a long time that telling about our collaboration and friendship would require writing another book. I will, however, say a bit about the young Al Newell, as I first encountered him.
When I first met Al at RAND in 1952, he was twenty-five years old, and fully qualified for tenure at any university: full of imagination and technique, which is what it takes to make a scientist. I suspect he was ready for tenure soon after birth, Athena springing fully armed from the brow of Zeus. His energy was prodigious, he was completely dedicated to his science, and he had an unerring instinct for important (and difficult) problems. If these remarks suggest that he was not only bright but brash, they are not misleading.
His earliest and probably most important education as a cognitive psychologist came when, an undergraduate physics major at Stanford, he took several courses from the distinguished mathematician George Polya, who recorded many of his ideas about problem solving in a widely used book called How to Solve It (1945). Polya introduced Al to the word heuristic and to the idea behind that word. A year as a graduate student in the rarefied atmosphere of the Mathematics Department at Princeton convinced Al that his interests lay in applied, rather than pure, mathematics, and that for the immediate future he wanted to be involved in hands-on research rather than graduate studies. He then accepted a position at RAND, where I found him.
If imagination and technique make a scientist, we must also add dollars. I learned many things in the postdoctoral training I took with Al, few more important than how to position the decimal point in a research proposal. My first lesson came from the Systems Research Lab, a grandiose project if there ever was one outside physics and space science. Al and his three colleagues simply took it for granted that it was reasonable for the air force to build an entire simulated air defense station and to staff it for years with an air force unit, enlisted men and officers. It was, indeed, reasonable, but I am not sure that would have occurred to me before I saw it happen.
Thinking big has characterized Al’s whole research career, not thinking big for bigness’ sake, but thinking as big as the task invites. Al learned about research funding through his early association with physicists, and it is a lesson that we behavioral scientists still need to study with him. (He has been teaching us this lesson at Carnegie Mellon University, with the funding of research in cognitive science and artificial intelligence and, more recently, with the computer networking of our campus.)
From our earliest collaboration, Al has kept atrocious working hours. By this I don’t mean that he is more of a workaholic than I amperhaps a dead heatbut that he works at the wrong time of day. From the start, he preferred sessions that began at eight in the evening and stretched almost to dawn. I would have done most of my day’s work by ten that morning, and by ten in the evening was ready to sleep, and not always able not to.
Perhaps his greatest pleasure (at least as judged by his behavior) is an “emergency” that requires him to stay up all night or two consecutive nights to meet a deadline. I recall his euphoria on our visit to March Air Force Base in 1954, when the air exercise extended over a whole weekend, twenty-four hours per day.
Some of these memories are frivolous, but high spirits, good humor, and hard work have characterized my relations with Al from the beginning. We have not been closely associated in joint research projects since the mid-1960s, certainly since our book Human Problem Solving appeared in 1972. But, however much our working paths have diverged, we still find, whenever we are together, that remarkable community of beliefs, attitudes, and values that has marked our association from the first ten minutes of meeting in February 1952.
In describing our style of work during the years, especially from 1955 to the early 1960s, when we met almost daily, I will paraphrase an interview I gave Pamela McCorduck about 1974, when she was writing Machines Who Think. We worked mostly by talking together. Al probably talked more than I; that is certainly the case now, and I think it has always been so. But we ran those conversations with the explicit rule that one could talk nonsensically and vaguely, without criticism, unless one intended to talk accurately and sensibly. We could try out ideas that were half-baked or quarter-baked or not baked at all, and just talk and listen and try them again.
Aside from talking shop, Al and I have frequently shared our personal concerns and problems. And after Lee Bach left Pittsburgh, Dorothea and I designated the Newells in our wills as guardians of our children, an indication of the closeness and trust we felt toward them. But mainly, we talk about our research, except sometimes when dining or socializing with Noël, Al’s wife, and Dorothea.
Whatever hobbies and recreations we have outside our work, we have pursued separately. My own guess is that, when together, we would not resist taking up again those issues that are central to the lives of both of us, our science. The content of our talk would vary little, whether climbing a mountain together or talking in Al’s study or my living room.
The Research Gets under Way
At RAND’s Systems Research Laboratory I became fascinated by the method that Al and J. C. (Cliff) Shaw, an outstandingly talented and sophisticated systems programmer, had devised for using a card-programmed calculator to produce imitations of radar maps for air-defense simulation. In this application the computer was generating not numbers but locations, points on a two-dimensional map. Computers, then, could be general symbol systems, capable of processing symbols of any kind, numerical or not.
This insight, which dawned only gradually, led Al and me even more gradually to the idea that the computer could provide the formalism we were seeking: that we could use the computer to simulate all sorts of information processes and use computer languages as formal descriptions of those processes.
In the summer of 1954, I taught myself to program the 701, IBM’s first stored-program computer, and computers were much on my mind. Driving together to March Air Force Base for the air exercises I have mentioned, Al and I had a long discussion of the possibility of simulating human thinking by computer.
My earlier thoughts about chess programming, during the 1952 RAND summer seminar, had produced only a verbal description of a program. Now Al initiated in earnest an effort to program a digital computer to learn to play good chess. His work led to a published paper on the subject, but not immediately to a running program. He was stimulated by the work of Oliver Selfridge (1955) and Gerry Dinneen (1955), which showed how computers were to become truly non-numerical processors. Selfridge and Dinneen had produced a (primitive) program for recognizing patterns. A seminar that Selfridge gave at RAND about this work impelled Al to begin his chess project. Al and I discussed the project on numerous occasions, but my role in it was wholly consultative.
In the first published description of his plan for a chess program, Al said: “This is the means to understanding more about the kinds of computers, mechanisms, and programs that are necessary to handle ultracomplicated problems” (Newell 1955, p. 101). The research was thus identified as artificial intelligence (though that name was not used); but the task to be examined was one of great psychological interest and had already been studied extensively by Adriaan de Groot (1946) in the Netherlands.
The initial approach also established the precedents, followed in all of our subsequent work, that artificial intelligence was to borrow from psychology, and psychology from artificial intelligence. Thus, Al’s programmatic description of his chess-learning proposal, like my 1952 sketch (see chapter 10), used aspiration values and notions of “satisfactory solution” in evaluating chess moves, and discussed the necessity for “rules of thumb” (later called heuristics) to reduce to manageable size the enormous search space: the space containing all the branches in the tree of possible chess moves and replies. By using heuristics and settling for satisfactory moves, the search space could be explored very selectively, avoiding any attempt at an impossible exhaustive search. Always following this course, the research kept one eye on its potential for psychology.
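The flavor of such satisficing search is easy to convey in a modern language. The sketch below is mine, written long after the fact in Python; the move list, evaluator, and aspiration value are illustrative stand-ins, not anything from Al’s 1955 proposal:

    # A sketch of satisficing move selection (illustrative, not the 1955
    # program): candidates come from rules of thumb, and search stops at
    # the first move whose value meets the aspiration level, so only a
    # small corner of the game tree is ever examined.
    def choose_move(candidate_moves, evaluate, aspiration):
        for move in candidate_moves:       # heuristically ordered candidates
            if evaluate(move) >= aspiration:
                return move                # satisfactory; stop searching
        return None                        # none found; relax the aspiration

    # Hypothetical values: accept the first move worth a pawn or more.
    values = {"Qxf7+": 9, "Nc3": 0, "a3": -1}
    print(choose_move(["Qxf7+", "Nc3", "a3"], values.get, 1))   # Qxf7+

The point is not the trivial code but the economics: by cutting off search at “good enough,” the effort expended grows with the quality demanded, not with the size of the full game tree.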
Al soon added to the team Cliff Shaw, who had worked with him on the radar map problem. When John von Neumann designed a powerful computer for RAND (which, over his protests, was named JOHNNIAC), Cliff played a major role in constructing the programming systems for it. We judged that JOHNNIAC, with a 4,096-word high-speed store supplemented by a drum with about 10,000 words of usable capacity, would be large enough and fast enough to meet the needs of a chess program. Besides, nothing larger or faster existed.
Al had come to RAND without taking his Ph.D. He wanted sooner or later to acquire this union card, but he didn’t want to interrupt his exciting research. Without great difficulty, I convinced my colleagues at GSIA that it was wholly appropriate for a business school to award a doctorate for a thesis in what we would now call artificial intelligence. (We ultimately awarded about a dozen such degrees before computer science became a separate discipline at Carnegie.)
After considering this and other alternatives, Al decided to move his base of operations and his chess research to Carnegie Tech, and the RAND Corporation agreed to keep him on the payroll, making him their Pittsburgh outpost. With Noël and their new son, Paul, Al was installed in Pittsburgh in the spring of 1955 and work on the chess project got under way.* I wanted to spend as much time as I could working with Al on the chess problem, and stated my intention to continue working on human problem solving and the theory of “Hugo-like” maze models (see “The Apple” in chapter 11).
We agreed to meet each Saturday, roaming on these occasions over a wide range of topics, particularly problem solving and the chess language Al was trying to devise. Al tended to supply ideas starting from the language and computer end, I starting from human problem solving and what we knew of the heuristics there. This is one of the role specializations that, subject to strong qualifications, we mildly adhered to for some years. In the course of these discussions, we considered illustrative problems from areas other than chess, including Euclidean geometry, Katona-type matchstick problems, and symbolic logic (Principia Mathematica was on my bookshelf).
Since the program we were planning to build was to run on JOHNNIAC, in Santa Monica, Cliff and Al communicated by teletype, an early semiautomated computer network that ran up phone bills of $500 a month, a sum that didn’t faze Al (or RAND) but seemed colossal to me.
During the third week of October 1955, Al and I attended the meetings of the Institute of Management Science in New York. I arrived a day early for a morning meeting with Bernie Berelson at the Ford Foundation. On that afternoon, a beautiful sunny day, I decided to take a walk along the Hudson on Morningside Heights. I believe I had an appointment on the Columbia campus late in the afternoon, probably with Paul Lazarsfeld or Bob Merton.
As I walked I pondered how one solves geometry problems. The example I had in mind had to do with angles inscribed in circles and semicircles (I think there were several cases, depending on whether one line fell inside or outside the others). Suddenly I had a clear conviction that we could program a machine to solve such problems. I jotted some notes on a piece of paper and thought hard about it for a few minutes, the conviction remaining very strong. I think the conviction arose from the fact that I could see the heuristic I was using and how it cut down the search space.

  • This chronicle of the autumn of 1955 is based on a memorandum I wrote on July 9, 1957, soon after the events described, and does not rely on my memory of long-ago events.
That evening Al and I met in the hotel room of Merrill Flood, the operations research specialist who had first invited me to RAND, and after discussion we agreed to try to program a geometry machine before Christmas, two months away. We both felt strongly that we had an excellent chance to succeed. I have a clear picture of that room, and of where each of us was sitting.
Considerable attention was meanwhile being given to the programming language required for the project. On the basis of their previous experience, Cliff and Al knew that it would be difficult to write our programs directly in the machine language of the computer. In artificial intelligence programs, you cannot predict what data structures the system will need to build and store, or how these structures will interact and be modified in the course of the computation. The utmost flexibility is required, and information stored in memory must be indexed in ways that will make it accessible whenever needed.
These are also requirements, of course, for human memory. We cannot assume that the Lord has assigned us specific locations in memory for storing Latin verbs, and others for algebra. The storage must be dynamic, reassignable. And if human memory is any guide to artificial intelligence, the memory should be associative: Each symbol should lead to other symbols that are linked to it, and these links should be acquired through learning.
Moreover, there seem to be two kinds of associations: simple and directed. Presented with the simple stimulus dog, most (English-speaking) subjects respond cat. But presented with superordinate, dog, they respond animal; and presented with subordinate, dog, they respond dachshund, or collie. A computer language for A.I. would have to handle both simple and directed associations. In the language we built, the directed associations were called descriptions.
We needed a higher-level language, congenial to the human programmer, which would do automatically much of the “housekeeping” in the computer and which would be translated automatically by the computer itself into machine language. And memory structures would have to be highly modifiable. Although all three of us participated in its design, Al and Cliff took primary responsibility for constructing such an information-processing language (IPL), or list-processing language. This task was a major preoccupation during 1955 and into the spring of 1956. My role consisted primarily of comparing proposed language designs with the analogous human functions. We had, for example, numerous discussions of how to handle descriptions, particularly how to avoid limiting their generality. We agreed on the need for flexibility and the avoidance of dimensionalization.
A memo written by Al on April 2, 1956, marks a major breakthrough: the use of an association memory in the form of “list structures” to make search dimensionless. The idea had a dual source in machine technology and in the idea of human association nets, and extended to both networks of lists and lists of descriptions. Al and Cliff solved the implementation problems soon thereafter.
The Logic Theorist Is Conceived
After the Management Science meeting in New York, Al and I worked on geometry every Saturday. Facing the question of how to do the diagrams, we saw that we still had to deal with some basic problems of perception. Around the first of November, symbolic logic began to emerge as an alternative, specifically because it involved no diagrams.
The first note I have in writing, dated November 15, 1955, consists of an analysis of the proof of theorem 2.15 of Principia. During this time I was reviving my skill in logic by studying the proofs in chapter 2 of that treatise. By the beginning of December I was beginning to have pretty clear ideas about some pieces of the heuristic (for example, working backward in proofs by substitution). I was doing most of the actual work on the proofs, supplemented by our Saturday discussions. Al, after a burst of activity in October or November, was somewhat bogged down by studying for his written doctoral examinations.
Al’s notes pick up from about December 6, by which time we had most of the pieces but little of the organization of the program. During the subsequent week we conferred frequentlyalmost dailyfor short periods and I worked almost every night on the proofs. On Thursday, December 15 (having felt I was getting increasingly close during the week), I succeeded in simulating by hand (in literal imitation of a computer program) the first proof, using a program reasonably close to that published in our Institute of Radio Engineers paper the following September. During the subsequent several days, Al and I worked hard to sharpen the procedure and put it in a form that we agreed was programmable on the computer.
I don’t want to create an impression of specialization (Did Hillary or Tenzing touch the summit of Everest first?). Most of the actual paper-and-pencil work on developing the strategy of the LT program was done by me, just as most of the actual work on the language was done by Al and Cliff. We were in the closest communication during the whole period and, through long association, had developed an extraordinary capacity to communicate even our subtleties to one another; the whole product must be regarded as joint and inseparable. No one or two of us had much chance of completing it.
I have always celebrated December 15, 1955, as the birthday of heuristic problem solving by computer, the moment when we knew how to demonstrate that a computer could use heuristic search methods to find solutions to difficult problems. According to Ed Feigenbaum, who was a graduate student in a course I was then teaching in GSIA, I reacted to this achievement by walking into class and announcing, “Over the Christmas holiday, Al Newell and I invented a thinking machine.” (If, indeed, I did say that, I should have included Cliff Shaw among the inventors.) Of course, LT wasn’t running on the computer yet, but we knew precisely how to write the program.
We were not slow in broadcasting our success. In a letter to Adriaan de Groot, on January 3, 1956, I reported:
You will be interested to learn, I think, that Allen Newell and I have made substantial progress on the chess-playing machine, except that at the moment it is not a chess-playing machine but a machine that searches out and discovers proofs for theorems in symbolic logic. The reason for the temporary shift in subject matter is that we found the human eye and the portions of the central nervous system most closely connected with it to be doing too much of the work, at the subconscious level, in chess-playing, and we found this aspect of human mental process (the perceptual) the most difficult to simulate. Hence, we turned to a problem-solving field that is less “visual” in its content.
Two weeks ago, we hit upon a procedure that seems to do the trick, and although the details of the machine coding are not yet worked out, there seem to be no more difficulties of a conceptual nature to be overcome. By using a human (myself) to simulate the machine, operating by rule and without discretion, this simulated machine has now discovered and worked out proofs for the first twenty-five or so theorems in Principia Mathematica. The processes it goes through would look very human to you, and corroborate in many respects the data you obtained in your chess studies.
While awaiting completion of the computer implementation of LT, Al and I wrote out the rules for the components of the program (subroutines) in English on index cards, and also made up cards for the contents of the memories (the axioms of logic). At the GSIA building on a dark winter evening in January 1956, we assembled my wife and three children together with some graduate students. To each member of the group, we gave one of the cards, so that each person became, in effect, a component of the LT computer program: a subroutine that performed some special function, or a component of its memory. It was the task of each participant to execute his or her subroutine, or to provide the contents of his or her memory, whenever called by the routine at the next level above that was then in control.
So we were able to simulate the behavior of LT with a computer constructed of human components. Here was nature imitating art imitating nature. The actors were no more responsible for what they were doing than the slave boy in Plato’s Meno, but they were successful in proving the theorems given them. Our children were then nine, eleven, and thirteen. The occasion remains vivid in their memories.
Our success was announced publicly, but briefly, in the unpublished RAND Report-850, “Current Developments in Information Processing” (May 1, 1956), a paper read by Newell in Washington, D.C., on May 2, 1956. It was not until August 9, 1956, that the Logic Theorist, programmed on JOHNNIAC in IPL-II, produced its first complete proof of a theorem (Theorem 2.01 of Principia), and September 1956 that a formal description of the scheme, in Information Processing Language I (IPL-I), was published.
We did not wait long after the first computer proof had been found by JOHNNIAC before communicating to Bertrand Russell the news of our success. Here is our first letter, and his reply:
October 2, 1956

Earl Russell
41 Queen’s Road
Richmond, Surrey, England

Dear Earl Russell:
Mr. Newell and I thought you might like to see the enclosed report of our work in simulating certain human problem-solving processes with the aid of an electronic computer. We took as our subject matter Chapter 2 of Principia, and sought to specify a program that would discover proofs for the theorems, similar to the proofs given there. We denied ourselves devices like the deduction theorem and systematic decision procedures of an algorithmic sort; for our aim was to simulate as closely as possible the processes employed by humans when systematic procedures are unavailable and the solution of the problem involves genuine “discovery.”
The program described in the paper has now been translated into computer language for the “Johnniac” computer in Santa Monica, and Johnniac produced its first proof about two months ago. We have also simulated the program extensively by hand, and find that the proofs it produces resemble closely those in Principia. At present, we are engaged in extending the program in the direction of learning (of methods as well as theorems) and self-programming.
Very truly yours,
Herbert A. Simon, Head
Industrial Management Department

2 November, 1956

Dear Mr. Simon,
Thank you for your letter of October 2 and for the very interesting enclosure. I am delighted to know that Principia Mathematica can now be done by machinery. I wish Whitehead and I had known of this possibility before we both wasted ten years doing it by hand. I am quite willing to believe that everything in deductive logic can be done by a machine.
Yours very truly,
Bertrand Russell
A year later, we wrote Russell again to report the results of experiments in learning with LT.
September 9, 1957

Dear Mr. Russell:
The enclosed reprints will indicate our further progress in simulating human problem-solving processes with a computer. In work subsequent to that reported in the reprints, we have accumulated some interesting experience about the effects of simple learning programs superimposed on the basic performance program. For example, we obtain rather striking improvements in problem-solving ability by inducing the machine to remember and use the fact that particular theorems have in the past proved useful to it in connection with particular proof methods.
The proofs the Logic Theorist has discovered have generally been pretty close to those in Principia, but in one case it created a beautifully simple proof to replace a far more complex one in the book. In the case of proposition *2.85, p ∨ q .⊃. p ∨ r :⊃: p .∨. q ⊃ r, it noted that this can be derived by an application of syllogism from p ∨ q .⊃. p ∨ r :⊃: ~q .∨. p ∨ r, using the associative law, *1.5, to rearrange terms on the right-hand side. This new expression, in turn, can be obtained by modus ponens in *2.05 directly from *1.3.
Q.E.D. The machine required something less than five minutes to find the proof. Since the machine’s proof is both straightforward and unobvious, we were much struck by its virtuosity in this instance.

You may also be interested in the evidence on page 229 of our paper that the learned man and the wise man are not always the same person. Of course this has been known for a long time, but it is nice to have such definite evidence to bring against the pedant. In general, the machine’s problem solving is much more elegant when it works with a selected list of strategic theorems than when it tries to remember and use all the previous theorems in the book. The contrasting Figures 7 and 8 are typical of the differences. I am not sure that these facts should be made known to schoolboys.
Sincerely yours,
Herbert A. Simon
Professor of Administration

21 September, 1957

Dear Professor Simon,
Thank you very much for your letter of September 9, and for the enclosure. I am delighted by your example of the superiority of your machine to Whitehead and me. I quite appreciate your reasons for thinking that the facts should be concealed from schoolboys. How can one expect them to learn to do sums when they know that machines can do them better? I am also delighted by your exact demonstration of the old saw that wisdom is not the same thing as erudition.
Yours sincerely,
Bertrand Russell
When the same novel proof of LT that Bertrand Russell enjoyed was sent to Stephen Kleene, the distinguished editor of the Journal of Symbolic Logic, in a paper co-authored by the Logic Theorist, he rejected the paper as not representing a new result. Since the methods of Principia Mathematica, he said, were now outmoded, it was no accomplishment to prove a theorem using that system. It would be rude to suggest that the difference between Kleene’s and Russell’s responses was further proof of the difference between learning and wisdom. In any event, we lost this opportunity to boast to logicians about LT’s prowess.
The letters to Russell show that from the beginning we were interested in simulating human problem solving, and not simply in demonstrating how computers could solve hard problems. Further, they show that we were quite aware of simpler ways of proving these theorems outside the system of Principia, and that we used that system only by way of example; any system would have served as well. Later, the logician Hao Wang and others, taking computational efficiency as their only criterion, designed faster computer proof procedures using truth tables or the method of natural deduction, and denigrated the Logic Theorist as primitive. They simply misunderstood the objectives of the research on LT.
Finally, the letters show that from the very beginning we were aware of the possibilities of machine learning and of automatic programming (and, indeed, we pursued some of these possibilities in the following years).
We were prompt in spreading the word of our success to our scientific colleagues. I have mentioned the brief talk on LT that Al gave in Washington in May. A summer workshop in artificial intelligence had been organized at Dartmouth for June 1956, by John McCarthy, Marvin Minsky, Nat Rochester, and Claude Shannon. Participants in the workshop included most of the persons who were thinking actively about artificial intelligence at that time: the four organizers, along with Oliver Selfridge, Ray Solomonoff, Trenchard More, and Herbert Gelernter. Al and I spent about a week there.
Although many ideas for programs to solve problems, recognize patterns, or play games were in the air, the two concrete schemes brought to the conference were our Logic Theorist and a program for proving theorems in the propositional calculus by the method of natural deduction, devised by Trenchard More (1957). More had constructed a flow diagram and had hand-simulated his scheme. The description of LT later presented in September in Cambridge was distributed to the group, and early debugging outputs were exhibited. We also had long discussions with John McCarthy at Dartmouth about the list-processing languages we had invented to program LT.
Marvin Minsky’s well-known essay “Heuristic Aspects of the Artificial Intelligence Problem,” although not published until several years later as Steps Toward Artificial Intelligence, was first drafted as a technical report late in 1956. It reflects very well the general body of knowledge in artificial intelligence that was pooled at the Dartmouth conference.*

  • Minsky’s essay was considerably modified prior to publication. It is the earliest, unpublished, version of 1956 that best reflects his interpretations at the time of the Dartmouth conference.

Prior to the workshop, several persons and groups (including Minsky and ourselves) had given some attention to proving theorems in geometry. This was an important topic of conversation at Dartmouth, giving impetus to the successful work of Herb Gelernter and Nat Rochester on this problem, which followed shortly thereafter.

Next, we attended a meeting of the Institute of Radio Engineers (IRE, the predecessor of the Institute of Electrical and Electronics Engineers) at M.I.T. in September 1956, where a session was to be devoted to reporting the results of the Dartmouth conference. Someone proposed that John McCarthy should give a summary report of all the work. Since our own work represented the only tangible (programmed) example of artificial intelligence presented at Dartmouth, and since the work predated the conference, Al and I dissented loudly; after negotiation, in the course of a long walk with the chairman, Walter Rosenblith, it was agreed that Al and I would give a paper on our work, and John a summary paper on the conference.
Thus, very soon after LT was actually operative, its capabilities and structure were known to virtually everyone interested at that time in the potential of computers for intelligent behavior, and to much of the wider computer community as well. Whoever missed the news had further opportunities to pick it up from our paper in the Proceedings of the Western Joint Computer Conference (1957), and soon from many other sources. Our light was hidden under no bushel. The broader story of its dissemination and impact will be taken up in the next chapter.
In reporting our work, we had to solve the problem of the public perception of the collaboration between Al and me. Although, formally, Al came to Carnegie as my student, his actual role was colleague and collaborator. He was my partner, not my protégé. But I was a well-known social scientist, while he was officially a graduate student, eleven years younger than I, with few publications. Both to be fair and to make a lasting collaboration possible, the parity between us had to be recognized. We followed a deliberate strategy to accomplish this.
Our names nearly always appeared alphabetically in our joint publications. Al’s name thus came first. (The exceptions were a few invited lectures I gave, not reporting new research.) People could interpret this as Al being the senior partner or as an alphabetical listing, but not as my being senior. We generally took turns attending public meetings. Since we were interchangeable parts, there was no sense in both of us going. Prior to 1965, Al was our sole spokesman abroad.
When I made references to our work, I was careful to mention both of us. When people sent me their manuscripts for criticism in which our names were mentioned in reverse order, I put them back in alphabetical order. But it turned out not to be a major problem, mainly because everyone who met Al discovered that he was a big boy; he wasn’t anyone’s protégé.
We also had to be careful that Cliff Shaw’s major role in our discoveries was properly recognized. This was made more difficult by Cliff’s taciturnity and great modesty. We made sure that he was included as co-author of the important early papers. We were embarrassed that he did not share the Turing Award with us in 1975, and we insisted that his partnership in inventing list-processing languages be acknowledged in the citation that accompanied the award.
The Discovery of List-processing Languages
Writing and testing the Logic Theorist was only half of what we had accomplished in 1956. We had also invented a whole new class of computer-programming languages known as list-processing languages, the general nature of which has already been briefly indicated. These languages were the direct ancestors of John McCarthy’s LISP, which has been the standard A.I. language for thirty years, and they embodied most of the ideas of what is now called object-oriented programming.
The basic idea is that, whenever a piece of information is stored in memory, additional information should be stored with it telling where to find the next (associated) piece of information. In this way the entire memory could be organized like a long string of beads, but with the individual beads of the string stored in arbitrary locations. “Nextness” was not determined by physical propinquity but by an address, or pointer, stored with each item, showing where the associated item was located. Then a bead could be added to a string or omitted from a string simply by changing a pair of addresses, without disturbing the rest of the memory. One could store as many Latin verbs as desired without preassigning storage for them. Each string of beads was called a list. Generalizing, any item on a list, any bead, could be the name of another list, with a pointer to its first bead. Now the structure of memory was no longer restricted to simple strings, but could include great branching treelike structures.
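In a modern language, the bead scheme takes only a few lines. This sketch is my later illustration of the idea, not IPL itself; Python references stand in for the machine addresses that IPL stored explicitly:

    # Each bead holds a symbol plus a link (an "address") to the next
    # bead; splicing a bead in or out rewrites a pair of links and
    # leaves the rest of memory untouched.
    class Bead:
        def __init__(self, symbol, next=None):
            self.symbol = symbol
            self.next = next                  # pointer to the associated item

    def insert_after(bead, symbol):
        bead.next = Bead(symbol, bead.next)   # splice in by one rewrite

    def delete_after(bead):
        if bead.next is not None:
            bead.next = bead.next.next        # route the link around the bead

    # No storage need be preassigned for Latin verbs:
    verbs = Bead("amo", Bead("amas", Bead("amant")))
    insert_after(verbs.next, "amat")          # amo, amas, amat, amant

Since a bead’s symbol may itself name another list, the same mechanism yields the branching, treelike structures just described.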
But a further generalization was required to represent the directed associations that the experiments of N. Ach (1905) and others had shown to exist in human memory. This was accomplished by the description list (nowadays known more commonly as a property list). The odd beads on a description list held the names of the attributes, the even beads next to them, the values of those attributes.
Thus, the attribute “color,” stored as seventh bead, say, on a description list, could be followed by the value “red,” as eighth bead. Again, a value might itself be a list structure of arbitrary complexity. So description lists could be associated with objects, forming schemas describing those objects. Schemas are now widely used in programming under such labels as scripts, frames, objects, and semantic nets.
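A description list is equally easy to mimic; here a modern dictionary compresses the alternating attribute and value beads. The sketch and its names are mine, merely echoing the examples in the text:

    # A description (property) list as a dictionary: attributes cue
    # values, and a value may itself be a list or a whole schema.
    dog = {
        "superordinate": "animal",              # a directed association
        "subordinate": ["dachshund", "collie"],
    }

    def find_value(schema, attribute):
        # Cue with (attribute, object); retrieve the directed associate.
        return schema.get(attribute)

    print(find_value(dog, "superordinate"))     # animal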
Finally, a list-processing language requires processes that can insert and delete items, find the next item on a list, find the value of an attribute of an object, assign such a value, create objects, and so on. About a dozen such processes are more or less essential to manipulate list structures. That’s basically all there is to list processing. You may say that it’s quite enough.
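Continuing my sketch (the names are mine, not the IPL instruction set), a minimal repertoire built on the beads and schemas above might read:

    # A sampler of the dozen-odd essential primitives, resting on the
    # Bead class and dictionary schemas sketched earlier.
    def create_list(*symbols):              # build a list from symbols
        head = None
        for s in reversed(symbols):
            head = Bead(s, head)
        return head

    def next_item(bead):                    # find the next item on a list
        return bead.next

    def assign_value(schema, attribute, value):
        schema[attribute] = value           # set an attribute's value

    def create_object():
        return {}                           # a fresh, empty schema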
From the beginning, the new artificial intelligence community accepted list processing as the programming tool for A.I. That is still largely true today, more than thirty years later. The rest of the programming profession, however, did not greet the innovation with open arms. They observed that list processing threw half the memory away (a heavy cost with the very small computer memories then available); that was unavoidable, in order to store the “next” address with each list item. Moreover, list-processing programs had to be executed by interpreters, which slowed execution time by a factor of nearly ten. To conventional programmers these languages seemed ridiculous, if not suicidal.
A few years of experience showed why the condemnation was egregiously wrong. Within a decade, the value, or even necessity, of list processing had become evident in many kinds of programming environments, and chapter 2 of Knuth’s influential Fundamental Algorithms (1968) gave list processing wide credibility outside the A.I. community. Today list-processing ideas are commonplace in sophisticated programming.
The languages our group developed were called IPLs (Information Processing Languages). IPL-II, the first one that actually ran on a computer, was the language of the Logic Theorist on JOHNNIAC. Of these languages, IPL-V was the most widely used, on the IBM 650, IBM 704, and many other computers. It is now almost a dead language, although at least one version is still active (on a PC!). It was largely superseded by LISP, designed by John McCarthy, and today there are other competitors as well.
The IPLs were precocious in other respects besides being the first list-processing languages. They appeared only slightly later than FORTRAN, thus were pioneers among higher-level programming languages. They also anticipated later ideas of so-called structured programming, a set of heuristics for constructing computer programs that would be debuggable and intelligible as well as efficient.
An interesting, but probably undecidable, historical question is whether IPL-V contributed significantly to the development of structured programming. The IPL-V manual is quite explicit in its advocacy of top-down programming and independent closed subroutines, two of the central features of structured programming. The manual warns against instructions in one subroutine that refer to another, a no-no for structured programming, and characterizes processes solely in terms of inputs and outputs, a desideratum of structured programming.

Since list processing was in low regard among mainstream systems programmers during the early years of computing, however, they probably reinvented structured programming without much awareness that it was already preached and practiced in the A.I. community.

So we come to the end of the second panel, the winter of 1956-57, with list-processing languages operating on our computers and the Logic Theorist searching out the proofs of Principia Mathematica. Our project had achieved these two major successes remarkably quickly, launching the related disciplines of artificial intelligence and information-processing psychology.
For me, there was no question of turning back to my earlier interesting, but much less exciting, research on organizational decision making. In computer programming languages, I found tools that classical mathematical languages had not provided for exploring the processes of human thinking and for attaining accuracy and rigor in the behavioral and social sciences. I had passed a crucial branch point in the maze and was now committed to a future in cognitive psychology and computer science.
The rest of the story, at least the research part of it, tells how our research group moved forward from that success, and tells about its impact, first in helping create the new discipline of computer science, then in producing a major revolution in cognitive psychology, and finally in introducing essential new ideas into economics and engineering design, not to mention epistemology.

THE THIRD PANEL
VIEW FROM THE MOUNTAIN

Chapter 14
Exploring the Plain
The mountain pass that Al, Cliff, and I crossed with the realization of the Logic Theorist opened up vast views. We saw before us the whole domain of artificial intelligence and the whole domain of human cognition. Numerous scientists have shared with us the adventure of exploring these lands, both at Carnegie and at many other institutions. This chapter recounts some of the explorations of our own research group, at first the three of us, soon a growing circle of graduate students and faculty colleagues, during the years from 1956 to 1978.
No detailed master plan mapped our research from the initial success of LT through the 1970s. We followed something like the flexible tactics advocated by the British military expert Liddell Hart, and gratefully borrowed by the Germans for the 1940 blitzkrieg: Push across the front; when you find a soft spot, wherever it may be, pour your reserves through and keep going. Research, groping through the uncertain and the unforeseeable, must be flexible to grasp and exploit every sign of progress. Let me try to describe how we carried out that strategy.
The Research Strategy
The initial motives that led us to the Logic Theorist, in the GSIA environment, influenced our research strategy. We were interested both in business applications that exploited the newfound powers of computers for symbolic processing, and in extensions that would contribute to understanding human thinking. Nowadays, we usually call the former artificial intelligence and the latter cognitive science.
Along both fronts, our strategy was to select promising tasks that call for intelligence and to write and test programs capable of handling them. In the cognitive science part of our research, we also ran experiments with human subjects doing the same tasks to see how closely the computer simulations paralleled human behavior. As we gained a reasonable understanding of each task, we moved on to another. The precise order in which we took up tasks was partly a matter of chance, although we tended to start with those that seemed simple and progress toward the more complex.
Eventually, we wanted to model the whole (cognitive) human being, a goal that would keep us busy indefinitely.
The artificial intelligence side was the easier to relate to the goals of a business school. Among the graduate students, Fred Tonge was soon building a program that used heuristic search to balance assembly lines (to find the best arrangement of workers, tasks, and work stations), while Geoffrey Clarkson constructed an expert system (as we would call it today) for choosing stock portfolios for bank trust accounts. There was no deep reason for selecting these two particular tasks: they just came to attention, were practically and theoretically interesting, and seemed doable. Doability and significance are always good bases for choosing research problems. We want a problem whose answer has interest and value, but only if we have some ideas for approaching it.
The significance of a problem for cognitive science could be judged by how much attention psychologists paid to it and whether it illuminated important human capabilities. Doability depended on the current state of the programming art and whether we had any good ideas for experiments. Problem solving was an obvious research candidate, both because we had already made a start with LT, and because it is a critically important human mental activity. A substantial fraction of cognitive science research over the three and a half decades since 1956 has been directed toward understanding human problem solving.
Rote verbal learning was another obvious candidate, as it had been studied more than any other subject by American psychologists. The standard experiments were modeled on the way people traditionally learn foreign language vocabulary (paired-associate learning) or memorize poetry (learning by serial anticipation). Apart from experimental work we might do ourselves, there were thousands of experiments already in the published literature that we could use to test our simulation models.
Extrapolating letter series was chosen as yet another task for several reasons. It was an example of serial behavior, whose importance to psychology had been emphasized by Lashley. The task had been selected by designers of intelligence tests as a good measure of significant mental abilities. And, as it was a law-discovering task, it could open the path toward illuminating the processes needed for scientific discovery and other creative activities.
A more speculative venture, undertaken as a doctoral thesis by Bob Lindsay, was to explore how computers might understand information, taking genealogical charts as the objects to be understood. In another important dissertation task, Ross Quillian was to design semantic memory structures that could display some of the characteristics of human associative memory.
Modeling Verbal Learning:
EPAM
The research on LT had borrowed more from psychology to advance artificial intelligence than from artificial intelligence to advance psychology. Psychology was very much on the research group’s mind, however. Almost from the beginning of 1956, we began thinking about elementary perceptual and memorizing processes, motivated partly by the perceptual questions raised by geometry and partly by a growing interest in learning processes for LT. Besides, I was trying to learn Greek at the time, and was introspecting about how I learned.
My first file memorandum sketching a possible approach to verbal learning is dated February 18, 1956. The name applied to the scheme, EPA-MINONDAS, betrays its interactions with my Greek studies. In particular, I developed ideas about the usefulness of redundancy for memory. This provided, in turn, a motivation for our later investigations of learning by hindsight. During this period, Al and I had numerous conversations about discrimination nets and sorting schemes that later were reflected in EPAM.
The notion of simulating behavior in classical verbal learning tasks emerged while I was searching the psychological journals for clues about associative memories. Eleanor Gibson’s classic paper on stimulus and response generalization (Gibson 1940) was an important source of ideas that Ed Feigenbaum subsequently incorporated in EPAM.
The General Problem Solver
Serious attempts to interpret LT as a psychological theory of problem solving got under way in the autumn of 1956. We had learned that O. K. Moore and S. B. Anderson had used problems in logic (disguised as “decoding” exercises) to study problem solving at Yale University (Moore and Anderson 1954a and 1954b). Their formal system was close enough to Whitehead and Russell’s to suggest using their task to compare human behavior with the behavior of LT. To that end, Peter Houts, a graduate student, began to tape-record thinking-aloud sessions with subjects doing the Moore-Anderson logic task.
The first tapes, transcribed in the spring of 1957, made clear that LT did not fit at all well the detail of human behavior revealed by the protocols. The problem was discussed at a research seminar on organizational behavior that met on the Carnegie campus in the summer of 1957. During the week of the discussions, both Al and I, apparently independently, found in a particular thinking-aloud protocol clear evidence that means-ends analysis was the subject’s principal problem-solving tool.
Means-ends analysis is accomplished by comparing the problem goal with the present situation and noticing one or more differences between them; for example: I am here, I want to be there; I am five miles from my goal. The observed difference jogs memory for an action that might reduce or eliminate it (take a bike or an automobile; walk). The action is taken, a new situation is observed, and, if the goal has still not been reached, the whole process is repeated.
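The loop is simple enough to sketch directly. This is my schematic of means-ends analysis, not the actual GPS code or its table of connections; the travel example follows the text:

    # Means-ends analysis in miniature: detect a difference between
    # state and goal, apply an action remembered as relevant to that
    # difference, and repeat until no difference remains.
    def means_ends(state, goal, detect_difference, actions, limit=100):
        for _ in range(limit):
            difference = detect_difference(state, goal)
            if difference is None:
                return state                    # goal reached
            if difference not in actions:
                return None                     # stuck: nothing relevant known
            state = actions[difference](state)  # reduce the difference
        return None

    # I am here, I want to be there; distance is the difference.
    detect = lambda here, there: "too far" if here != there else None
    actions = {"too far": lambda here: here + 1}   # walk a mile closer
    print(means_ends(0, 5, detect, actions))       # 5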
During the summer and autumn of 1957, we gradually converged to a program embodying the newly discovered means-ends analysis. Because the reasoning processes in the program were independent of the particular topic on which it was reasoning, we christened it the General Problem Solver. The general flow diagram of GPS was produced before the end of October 1957, and the planning method (a scheme for simplifying search by abstracting the problem) was sketched a few days later (Newell, Shaw, and Simon 1962). Thirty years of subsequent research has confirmed that means-ends analysis, as embodied in GPS, is a key component of human problem-solving skill.
Talking to Psychologists
Our first published attempt to communicate directly with psychologists was “Elements of a Theory of Human Problem Solving,” in the July 1958 issue of Psychological Review (Newell, Shaw, and Simon 1958a). Written more than a year earlier, this paper was based on the experience with LT, emphasizing the broad resemblances between that program and human problem solving without detailed comparisons of behavior.
In the paper, we described behavior in terms of programs of “primitive information processes,” which could be executed on computers. The theory could be tested by matching the computer simulation to actual behavior revealed in thinking-aloud protocols. In our paper this information-processing theory of thinking was compared with neurological, associationist, and Gestalt explanations.
Rather than emphasizing the novelty of our theory and proclaiming a new “school” in psychology, we tried to show the continuity of our approach with the work of both our associationist and Gestalt predecessors. This paper was thus the first explicit and deliberate exposition of information-processing psychology, but without using that or any other trademark name.
GPS (including the planning method) was first publicly described (but not named) in “The Processes of Creative Thinking,” a paper read on May 14, 1958, at a University of Colorado symposium and in June at a RAND summer seminar; it was not published until 1962 (Newell, Shaw, and Simon 1962). This paper contained the first fragment of informal comparison of a computer trace (hand-simulated) with a human thinking-aloud protocol. The suggestion that computers could simulate even creative activity created a stir at the Colorado meeting, not unmixed with a large quantity of skepticism (a reasonable reaction).
Chess:
The Drosophila of A.I.
Work on a chess program, nearly dormant since 1955, was resumed toward the end of 1957.* Meanwhile, our awareness of Adriaan de Groot’s Het Denken van den Schaker [The chess player’s thinking] (1946) led to friendship and a collaboration with de Groot and his colleagues in Amsterdam that has continued down to the present (although not without some deep differences between us in methodological and conceptual outlook).
Chess has become a standard tool in cognitive science and artificial intelligence research (a standard “organism,” like Drosophila or Neurospora in genetics). Powerful programs, now at grandmaster strength, employ the speed and power of modern computers, sometimes analyzing fifty million or more possibilities before they make a move. Although they also use extensive chess knowledge, such programs belong to A.I., not to cognitive science.
Our own research on chess has been aimed at understanding human chess players, who at most may analyze a hundred branches in a difficult position. Our NSS program (1958), like Al Newell’s 1954 proposal, reasoned about positions in terms of goals, and did only a little analysis. (It also played poor chess, to the temporary delight of Hubert Dreyfus, the well-known professional critic of artificial intelligence.)

  • A history of early computer chess programs, and a description of the Newell-Shaw-Simon (NSS) program, can be found in “Chess-playing Programs and the Problem of Complexity” (1958, particularly pp. 322–31).
Working with my son, Pete, and subsequently with George Baylor, a graduate student, I constructed another chess program, MATER (Baylor and Simon 1966), that, using highly selective and humanoid methods of analysis, was formidable in finding deep mating combinations but useless in other aspects of chess playing. In its special domain, it demonstrated the power of selective heuristics to avoid extensive search, supporting the claims I had made in the appendix to my “Behavioral Model” paper (1955a).
With another student, Michael Barenfeld, I worked with some of these same ideas to simulate the eye movements of an expert chess player scanning a novel chess position, thereby refuting arguments advanced by Gestalt psychologists that computers could not model the intuitive, “grasp-it-all-at-once” processes of human experts. We showed that human eye movements, which appear to indicate a grasp of the whole position, could be produced quite simply, using commonplace chess heuristics and without any special holistic Gestalt processes.
The research on perception led to work with Kevin Gilmartin, and ultimately to studies on expertise in chess with Bill Chase. These studies of perception solved many of the problems that had confronted us in 1955 and that had diverted us from chess to logic as our first task of computer simulation.
The RAND Summer Seminar, 1958
By the spring of 1958 we had carried out extensive experiments with the Logic Theorist; the General Problem Solver had been conceived and hand-simulated; the Newell-Shaw-Simon chess program was running; the EPAM program was under construction; and human protocol data were being gathered and analyzed. Research was also now progressing at a lively pace at M.I.T. (with Marvin Minsky and John McCarthy), but with emphasis upon artificial intelligence rather than cognitive science.
Communication was still weak between our efforts and other main lines of development in information-processing psychology. Among the most important of these were the burgeoning field of psycholinguistics, whose leading representative within psychology was George A. Miller, then at Harvard; work in concept formation deriving from information-theoretic points of view (by, among others, Carl Hovland at Yale, Jerry Bruner and colleagues at Harvard); and research focusing on “vigilance,” attention, and the processes of short-term memory, Donald Broadbent’s laboratory in England being an important example.
The Ford Foundation had given the Social Science Research Council a small sum of money to be spent on cognitive psychology. Responsibility for spending it was assigned to a committee consisting of Hovland and Miller, who then co-opted me. The committee in turn asked Newell and me to organize a summer seminar in 1958 at the RAND Corporation in Santa Monica, aimed at acquainting a wider circle of social scientists with computer simulation and its use in psychology. Lectures and seminars were conducted principally by Newell and me, but also by Hovland, Miller, Minsky, and Shaw. Programming instruction was provided in IPL-IV, then running on the RAND computer, and the curriculum was built mainly around the principal programs already constructed or under construction, especially LT, GPS, the NSS chess program, and EPAM.
In addition to the group that organized the seminar, the participants included a number of persons who subsequently played a significant role in developing computer simulation methods, relating those methods to classical psychological approaches and naturalizing them on university campuses.* Dan Berlyne, in Structure and Direction in Thinking (1965), examined the relations among information-processing psychology, Piagetian psychology, and Hullian learning theory. Bob Abelson’s “hot cognition” research (1963) was one of the earliest attempts to bring motivation and emotion within the scope of the information-processing paradigm. Bob Abelson, Jim Coleman, and Bill McPhee were among the principals who organized the Simulmatics Corporation, which sought to import simulation methods into social psychology.
Bert Green led the team that produced the BASEBALL program (Green et al. 1961), an important early application of artificial intelligence ideas to information retrieval. He also directed the RAND summer seminars in 1962 and 1963. Hovland, with his student E. B. Hunt, constructed information-processing models of concept formation (see Hunt 1962). Don Taylor extended the information-processing approach to problems of motivation, and wrote several expository reviews on problem solving (1960). Roger Shepard made the case for information processing to the (then still very skeptical) verbal learning psychologists (1963).
Many other activities can be traced to the 1958 seminar or received an impetus from it. Perhaps most important, in terms of its subsequent influence

  • The effects of this seminar were further reinforced by additional workshops of the same kind held at RAND in the summers of 1962 and 1963. A fourth was organized at Carnegie Mellon in 1972.

in psychology, was Plans and the Structure of Behavior, a book Miller wrote in collaboration with Galanter and Pribram during the year 1958–59. As the authors were firmly rooted in the psychology establishment and, prior to writing that book, in behaviorism, the book gained wide attention for information-processing psychology as a radical alternative to the prevailing behaviorist paradigm.
The writing of the book caused a quarrel with George Miller, which strained my relations with him for a short while but which was settled wholly amicably and has not prevented us from being good friends ever since. I will let George tell the story in his own words:
The next year [i.e., after the RAND seminar] I spent at the Stanford Center for Advanced Study in the Behavioral Sciences, and Eugene Galanter and Karl Pribram were there. And I’d come along with all this material from this summer seminar. We began meeting together, and our discussions got rather interesting, so we decided we should record them; and the first thing we knew we’d written a book. We showed it to Newell and Simon, who hated it. So I rewrote it, toned it down, and put some scholarship into it, and it is now the book Plans and the Structure of Behavior (1960).
Newell and Simon felt that we had stolen their ideas and not gotten them right. It was a very emotional thing. Since then I’ve discovered the good thing about Herb is that he can be shouting at you one minute, and the next minute have a drink with you. You just don’t back off with Herb Simon; otherwise he’ll bully the hell out of you. His aspect is different from any other person’s I ever knew. I had to put the scholarship into the book, so they would no longer claim that those were their ideas. As far as I was concerned they were old familiar ideas; the fact that they had thought of it for themselves didn’t mean that nobody ever thought of it before. [From an interview with Bernard J. Baars, in Baars 1986, p. 213]
The references and footnotes in Plans give a good picture of the role that the RAND seminar and the previous work of the RAND-Carnegie group played in its conception (there were some eighteen references to it, twice as many as to any other work), as well as the influence of the other channels of information-processing psychology, particularly the linguistic and “human factor” channels. The book also gives an excellent picture of the “prehistorical” zeitgeist that complements the account I have given here.

Stating the Theory: Human Problem Solving
From an early date, Al and I had decided to write a treatise on human problem solving based on our research with human subjects and computer simulation. The project may have been launched as early as 1958 (the evidence is inconclusive); the published volume appeared in 1972.
Human Problem Solving begins by introducing information processing, computer simulation, and problem solving by heuristic search. Then it describes our empirical investigations in three task domains (logic, cryptarithmetic, and chess) and concludes with an exposition of the theory of problem solving we inferred from the evidence.
LT and list-processing languages were used to illustrate the introductory discussions. GPS provided the machine for simulating human behavior in the Moore-Anderson logic tasks. The NSS chess program and other chess-playing programs were compared with evidence on human chess playing.
The cryptarithmetic puzzles led us to an important new insight. In a cryptarithmetic puzzle, the subject is given a pseudo-arithmetic problem, for example: SEND + MORE = MONEY. The problem is solved by substituting digits for letters in such a way that the result is a correct sum. For example, 9567 + 1085 = 10652 (where S has been replaced by 9, E by 5, N by 6, D by 7, and so on) is a solution to the problem just given. The general method of means-ends analysis, the workhorse of GPS, seems to apply to these problems as to the others.
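For the curious reader, the puzzle’s constraints can be made concrete with a minimal brute-force sketch in Python. This is my illustration only (the function name and the approach are mine); it is emphatically not the selective, means-ends procedure that human subjects or our simulations used, since it simply tries digit assignments until one satisfies the sum:

```python
from itertools import permutations

# Brute-force sketch for SEND + MORE = MONEY (illustration only).
# Human solvers, and our models of them, search far more selectively;
# this merely makes the puzzle's constraints explicit.
def solve_send_more_money():
    letters = "SENDMORY"  # the eight distinct letters in the puzzle
    for digits in permutations(range(10), len(letters)):
        assignment = dict(zip(letters, digits))
        if assignment["S"] == 0 or assignment["M"] == 0:
            continue  # no leading zeros allowed
        def value(word):
            n = 0
            for ch in word:
                n = 10 * n + assignment[ch]
            return n
        if value("SEND") + value("MORE") == value("MONEY"):
            return assignment
    return None

print(solve_send_more_money())
# {'S': 9, 'E': 5, 'N': 6, 'D': 7, 'M': 1, 'O': 0, 'R': 8, 'Y': 2}
```

With the no-leading-zero constraint the solution is unique, which is part of what makes the puzzle such a clean laboratory task.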
The basic GPS control structure that determined what step a subject would take next did not predict our protocols well, however. In the protocols we saw something that looked like a production system, a form of organization already well known in computer science that had not been applied to psychological systems (except to the extent that any primitive stimulus-response, S-R, connection can be regarded as a production).
In a production system, each elementary instruction has an if-then form: If conditions C are satisfied, then take action A. Whenever the conditions of a production are satisfied, the action is taken. When the conditions of several productions are satisfied simultaneously, these conflicts are resolved by priority rules. (For example, the productions may be listed in order of priority.)
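The control structure just described is simple enough to sketch in a few lines of Python. The following is my minimal illustration of the general idea, assuming list order as the priority rule; it is not a reconstruction of any notation we actually used:

```python
# A minimal production-system interpreter (illustrative sketch).
# Each production is a (condition, action) pair of functions over a
# working-memory state; when several conditions hold at once, the
# earliest production in the list wins (list order as priority rule).
def run(productions, state, max_cycles=100):
    for _ in range(max_cycles):
        for condition, action in productions:
            if condition(state):
                action(state)   # fire the highest-priority matching rule
                break           # then re-scan from the top
        else:
            break               # quiescence: no condition is satisfied
    return state

# Toy working memory: count down from 3, recording each step.
def counting_down(state):
    return state["n"] > 0

def step(state):
    state["trace"].append(state["n"])
    state["n"] -= 1

print(run([(counting_down, step)], {"n": 3, "trace": []}))
# {'n': 0, 'trace': [3, 2, 1]}
```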
Since the mid-1960s, when we introduced productions into psychological theory, they have been widely adopted, both to explain how human experts make “intuitive” decisions by recognizing familiar cues directly, and as the basis for so-called expert systems in artificial intelligence. Experts, human and computer, do much of their problem solving not by searching selectively but simply by recognizing the relevant cues in situations similar to those they have experienced before.
Production systems were important for the shift in cognitive science and artificial intelligence in the 1960s from systems like GPS, which relied on general problem-solving skills, to systems that relied on large stores of specific knowledge. Of course, most expert systems (human and computer) rely on both.
We might trace the production system idea, as applied in psychology, back along several paths. First, a production, Condition → Action, bears some resemblance to the stimulus-response connection, S → R, of behaviorist psychology. The stimuli are the conditions that trigger the response. When we examine details, we find many differences, but the analogy is close enough so that causation cannot be dismissed out of hand.
Second, we can find early examples, in the literature on problem solving and decision making, of references to intuitive solution, that is, solution by recognition. The following passage is from a talk I gave on August 26, 1957:
One can train a man so that he has at his disposal a list or repertoire of the possible actions that could be taken under the circumstances. . . . a person who is new at the game does not have immediately at his disposal a set of possible actions to consider, but has to construct them on the spot, . . . a time-consuming and difficult mental task. . . . [T]he decision maker of experience has at his disposal a checklist of things to watch out for before finally accepting a decision. . . .
A large part of the difference between the experienced decision maker and the novice in these situations is not any particular intangible like “judgment” or “intuition.” If one could open the lid, so to speak, and see what was in the head of the experienced decision maker, one would find that he had . . . at his disposal repertoires of possible actions; that he had checklists of things to think about before he acted; and that he had mechanisms in his mind to evoke these, and bring these to his conscious attention when the situations for decision arose. Most of what we do to get people ready to act in situations of encounter consists in drilling these lists into them sufficiently deeply so that they will be evoked quickly at the time of the decision.
But we can find the idea of production systems, or a similar idea, in the psychological literature much earlier. The idea of condition-action pairs was central to Otto Selz’s theory of problem solving. In 1924 he wrote:
Intellectual processes are not a system of diffuse reproductions, as association psychology thought, but rather, like a system of body movements, particularly of reflexes, they are a system of specific reactions in which there is as a rule an unambiguous relation between specific conditions of elicitation and both general and special intellectual operations.
In computer science, production systems, derived from formal logic (see, for example, Post 1941), were applied to the design of so-called string-processing languages and to systems programming tasks in the early or mid-1960s. At Carnegie Tech, production systems quickly infiltrated thesis projects. One student, Tom Williams, used a production system language in a 1965 dissertation, and another, Steve Coles, used one in a 1969 thesis (descriptions of both of these systems can be found in Simon and Siklóssy 1972). Meanwhile, Al and I were using production systems by 1965 to analyze chess and cryptarithmetic protocols.
It appears, then, that the idea that expert “intuition” is to be explained by recognition mechanisms was already abroad in 1924 (and probably much earlier), and that production systems were being used for cognitive simulation as early as 1965. And these ideas seem not to have a single line of antecedents, but several different ones.
Representation and Meaning
In most of the work incorporated in Human Problem Solving, Al and I collaborated closely. But around 1960, as we each began to work with our own graduate students, our paths began to diverge toward many separate projects. For me, the work with doctoral students (Ed Feigenbaum on EPAM, Bob Lindsay on inference and natural language understanding, and Ken Kotovsky on letter series extrapolation) was among the first instances of that divergence.
Meanwhile, I was becoming less directly involved in the GPS research, which Al continued with his student George Ernst. Nevertheless, during the 1960s, Al and I continued a frequent and close discussion of all the issues on which we were working, as indicated by the acknowledgments in our respective papers. We continued to have common concerns, but we began to develop somewhat different strategies for pursuing them: I tended to build models for specific tasks, testing them against human data; Al devoted more attention to general issues in the design of complex systems. It is easy to make the distinction sharper than it was, however, as Al worked on protocols in the cryptarithmetic, logic, and chess tasks, and my efforts extended the information-processing theory to account for motivation and emotion, perception, and creativity.
The specialization of efforts can be seen in the bibliography of Human Problem Solving. In the years 1956 to 1962, there are no entries in which Al Newell is sole author, but nineteen co-authored by some combination of Newell, Shaw, and Simon. Similarly, from 1957 to 1961, there are no entries in which I am sole author. From 1963 on, I begin to appear frequently as co-author with students and with other faculty colleagues. Newell appears as co-author with Ernst during this same period, and as sole author of a sequence of well-known papers on the architecture of intelligent systems. Toward the end of the decade, his publications also deal with analyses of protocols that were later incorporated in Human Problem Solving.
I spent the year 1960–61 in Santa Monica, on leave at RAND, where I worked on two main projects in addition to my continuing collaboration with Al. One was to begin a major revision of EPAM, the system that Ed Feigenbaum had built for his dissertation. The other was a study of automatic programming (getting the computer to write its own programs), which eventuated in a system I called The Heuristic Compiler (HC). HC generated programs automatically, using a GPS-like mechanism. While it never went beyond a toy, HC provided a stock of ideas that were later drawn on by others investigating automatic generation of programs.
Questions of Representation and Learning
Problems in the real world are sometimes presented in the form of natural language statements (problems in textbooks), sometimes in the form of visible situations (the road in front of our car), sometimes in a combination of natural language text and pictures and diagrams (a scientific article). The steps that translate a problem from the form in which it is presented to an internal form on which the available problem-solving processes can operate are a crucial initial component of every problem-solving activity.
An explanation of problem solving is grossly incomplete if it does not account for what goes on in understanding the problem, or, what amounts to the same thing, in forming an internal representation of it. In a program like GPS (indeed, in all the early problem-solving programs) the internal representation had to be provided to the problem solver by the user, thereby bypassing this important part of the problem-solving process.
Creating the internal problem representation requires a semantics, that is, information on what the representation denotes in the outside world. A semantics is needed both when problem solving begins and, subsequently, when changes in the external situation need to be known by the solver. This requirement is bypassed in problem-solving systems that operate wholly internally (that work out solutions in their “heads”), as most A.I. problem solvers do today; but it becomes critical in robots that interact with a real-world physical environment (for example, autonomous vehicles).
Creating a problem representation from descriptive statements or pictures is also a form of learning. The information that lies in human memories is produced by transforming information acquired from outside. LT had learning capabilities, for it could store the theorems it had proved and then use these to help prove later theorems. EPAM was primarily a learning system, for it not only stored away response symbols but grew a discrimination net that allowed it to sort or recognize stimuli and thereby gain access to the appropriate responses in memory.
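To suggest how a discrimination net of the EPAM kind can both recognize and grow, here is a toy sketch in Python. It is a drastic simplification under my own assumptions (equal-length letter strings, one test per node, nonsense-syllable pairs of the kind used in verbal-learning experiments), not the actual EPAM mechanism:

```python
# Toy discrimination net in the spirit of EPAM (a simplification).
# Internal nodes test one letter position of a stimulus; leaves hold
# a stored stimulus-response pair. When a new stimulus collides with
# a stored one, the leaf grows a test at the first differing position.
class Net:
    def __init__(self, stimulus=None, response=None):
        self.test_pos = None      # position tested at an internal node
        self.branches = {}        # letter -> child Net
        self.stimulus = stimulus  # stored pair (leaves only)
        self.response = response

    def sort(self, stimulus):
        node = self
        while node.test_pos is not None:
            child = node.branches.get(stimulus[node.test_pos])
            if child is None:
                return node       # no branch yet for this letter
            node = child
        return node

    def learn(self, stimulus, response):
        node = self.sort(stimulus)
        if node.test_pos is not None:
            # Missing branch at an internal node: hang a fresh leaf.
            node.branches[stimulus[node.test_pos]] = Net(stimulus, response)
        elif node.stimulus is None or node.stimulus == stimulus:
            node.stimulus, node.response = stimulus, response
        else:
            # Collision: grow a test where old and new stimuli differ.
            pos = next(i for i, (a, b)
                       in enumerate(zip(node.stimulus, stimulus)) if a != b)
            node.branches = {node.stimulus[pos]: Net(node.stimulus, node.response),
                             stimulus[pos]: Net(stimulus, response)}
            node.test_pos = pos
            node.stimulus = node.response = None

net = Net()
net.learn("DAX", "jig")
net.learn("DAK", "zuk")          # collides with DAX; the net grows
print(net.sort("DAK").response)  # zuk
print(net.sort("DAX").response)  # jig
```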
Learning is crucial to a system like the human mind that cannot be changed directly by opening the cover of the box and inserting a new program. Human memory can be altered only by learning. Reflecting on the limitations of the first generation of intelligent programs, we concluded that much of our research should focus on semantics, representation, and learning.
During the 1960s, my students, colleagues, and I focused on these topics, and some of our work was published in a 1972 book titled Representation and Meaning (Simon and Siklóssy 1972). Our studies aimed at finding basic mechanisms for understanding, not at achieving detailed matches with human data. They were closer to artificial intelligence than to cognitive science, but paid attention to both.
Thomas Williams (1965) and Donald Williams (1969) (no relation) explained how an information-processing system can use external information to learn to perform a task, thereby contributing to all three topics of understanding, representation, and learning. They dealt with two quite different kinds of external information: Thomas Williams with instructions of the sort one finds in Hoyle’s book of games; Donald Williams with examples of items on intelligence tests. How does one learn the rules of poker from Hoyle, or the requirements of a test from some illustrative items?
Steve Coles (1969) and Laurent Siklóssy (1968) showed how to extract meaning from combinations of pictures and natural language sentences that described the pictures. Coles used information from the pictures to remove syntactic ambiguity from natural language, while Siklóssy’s system learned to produce natural language sentences that described corresponding pictures (“The dog chases the cat” from a drawing of that event). These programs cast important new light on semantics and on how the meanings of symbols can be learned.
Finally, Harry Pople (1969) described a problem-solving system that used two different kinds of internal representations: one described situations with explicit propositions, the other represented them by modeling them. His work addressed an issue that is still very much alive in artificial intelligence: the respective roles of logical reasoning and selective search using mental models in problem solving.
To my considerable surprise and chagrin, Representation and Meaning made no splash at all, and these fine pieces of research appear to have had little impact on later work. They all introduced important new ideas about semantics and opened up paths that had to be re-explored a decade later. Siklóssy’s theory of language learning is, in my view, still in advance of anything else that has been done on that topic. I can only think that this work was too far ahead of its time, that it provided answers to questions that other researchers had not yet begun to ask.
Another Carnegie Tech dissertation written under my supervision during this same period and responding to some of the same questions was Ross Quillian’s Semantic Memory (1966). It had a very different fate from the others. Quillian proposed a network model of memory that spread activation through memory to resolve ambiguities in the meanings of words occurring in sentences. The presence of other words in the same sentences provided a context that could activate the relevant lexical alternatives. In the context of “bird,” flicker would be interpreted as a large woodpecker; in the context of “light,” as a fluctuation in intensity.
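The flavor of the mechanism can be conveyed with a small sketch in Python. The toy network, the decay rate, and the sense labels below are my invented placeholders, not Quillian’s actual data structures; the point is only that activation spreading from a context word favors one sense of an ambiguous word over another:

```python
# Toy spreading-activation disambiguation (illustrative sketch).
# Activation spreads outward from a context word with decay; the
# sense of the ambiguous word that accumulates more activation wins.
network = {
    "bird": ["woodpecker", "wing", "nest"],
    "woodpecker": ["flicker/woodpecker", "bird"],
    "light": ["intensity", "lamp"],
    "intensity": ["flicker/fluctuation", "light"],
    "flicker/woodpecker": ["woodpecker"],
    "flicker/fluctuation": ["intensity"],
}

def spread(source, decay=0.5, depth=3):
    activation = {source: 1.0}
    frontier = [source]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for neighbor in network.get(node, []):
                gain = activation[node] * decay
                if gain > activation.get(neighbor, 0.0):
                    activation[neighbor] = gain
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return activation

def disambiguate(context, senses):
    act = spread(context)
    return max(senses, key=lambda s: act.get(s, 0.0))

senses = ["flicker/woodpecker", "flicker/fluctuation"]
print(disambiguate("bird", senses))   # flicker/woodpecker
print(disambiguate("light", senses))  # flicker/fluctuation
```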
Quillian published his dissertation in 1968 in a volume of theses (all from M.I.T. except Quillian’s) edited by Marvin Minsky, titled Semantic Information Processing. That volume, which contained interesting work by Bobrow, Raphael, Evans, and others, also bearing on semantic questions, was well received and did not lack attention in computer science. When Quillian and Collins carried out some experiments testing the network theory empirically, it began to receive attention in psychology also, and Quillian’s system was a direct forerunner of subsequent models that use a spreading activation mechanism. The stark contrast between the success of Quillian in arousing interest and the failure of the others remains a great mystery to me and, except for the case of Quillian, a major disappointment. I have always felt that I somehow let these colleagues down.
The response to the experimental and psychological research of this same period was more satisfactory. Dan Bobrow at M.I.T. had built an early system for solving algebra word problems, using syntactical knowledge almost exclusively; that is to say, the system solved problems “mechanically” without understanding what they were about. With a Harvard undergraduate who came to work with me for a summer, Jeffrey Paige, I ran some experiments on high school students solving algebra problems, to see how closely their behavior fit Bobrow’s program.
We discovered that our subjects divided into two groups: one interpreted the problems syntactically, as Bobrow’s program had, but the other used the real-world meanings of the problems. When the problem spoke of a board being cut in two parts, the “syntactic” subjects parsed the sentence; the “semantic” subjects imagined a rectangular figure with a line drawn across it to represent the cut, or two figures after the cut had been made. This paper has become over the years a standard item in the problem-solving literature. It is today evident that successful problem solvers frequently use diagrams to mediate between words and their inference processes.
Similarly, the experimental work with Kotovsky on letter series (Simon and Kotovsky 1963; Kotovsky and Simon 1973) gradually caught on, as did the research on chess perception with Barenfeld (Simon and Barenfeld 1969) and Gilmartin (Simon and Gilmartin 1973). But it was a series of papers with Chase (Chase and Simon 1973a and 1973b) on the memory of experts and novices for chess positions that received massive attention from psychologists. It also established a tradition of expert/novice experiments that persists to the present time and has had major impact both on psychology and on expert systems for artificial intelligence.
Shortly after the chess experiments got under way, John R. Hayes and I built a system that could understand problem instructions expressed in natural language and then encode them into an internal representation suitable for a General Problem Solver. This UNDERSTAND system, being farther from the current concerns of experimental psychology than the chess perception research, was a slower starter, but has gradually caught on as a (partial) theory of the understanding of natural language.
I have given only the briefest descriptions of the contents of these research projects, focusing mainly on the issues they addressed and their reception. Anyone interested primarily in substance should consult Human Problem Solving (1972), Representation and Meaning (1972), and Models of Thought, volume 1 (1979), the latter containing a collection of about thirty of my psychology papers of this period, including most of those I have mentioned. For a shorter and much less technical overview, I recommend chapters 3 and 4 of the second edition of The Sciences of the Artificial (1981).
Missionary Efforts
From the account thus far, it can be seen that the discipline of psychology was not unduly eager to embrace the new information-processing paradigm. There had been too big a leap from Hullian S-R psychology (not to mention Skinnerian behaviorism) to computer simulation. The use of thinking-aloud protocols as data was sometimes misunderstood as an attempt to revive introspection. Even the work of Hebb, which had prepared psychology for a more cognitive approach, helped only a little, for his physiological interpretations of processes left little room for a separate information-processing level of explanation.
Because, at the outset of our plunge into psychology, I had only marginal status as a psychologist (through my work on social and organizational psychology), and since Al had none at all, it was of great importance that two prominent psychologists, George A. Miller and Carl Hovland, were early attracted to the information-processing viewpoint; they arrived at it, in fact, from information theory and wartime human-factors research, just as Broadbent had in England. After we joined forces with them in the 1958 workshop at RAND, psychologists generally exhibited cautious interest toward the information-processing approach rather than rejecting it out of hand.
Our cause was aided by the parallel development of research on short-term memory and chronometric studies of perception, which also gradually adopted the information-processing label. (Donald Broadbent [1954] is an important example of this work.) That line of research, although it flew the same banner of information processing that ours did, differed from ours in several respects.
First, it was strongly in the tradition of experimental psychology, with theory playing a distinctly subordinate role to experimentation. Second, it tended to focus on simpler and more traditional laboratory tasks of perception and choice, rather than the complex problem-solving tasks that we often employed.
Third, it was not particularly committed, or committed at all, to computer simulation as a way of formalizing and testing theories. Finally, it relied on more standard experimental designs, using speed and accuracy of response as its principal data. It made little or no use of thinking-aloud protocols as data.
What the cognitive or information-processing approaches to psychology had in common, and what distinguished them from behaviorism, was a willingness to consider what lay between the ears and to use words such as mental without blushing. Both were interested not only in the phenomena of thought but in its mechanisms and processes as well. Between them, these two varieties of information-processing research continued to gain adherents, until today the information-processing label has become positively faddish. Everyone is an information-processing psychologist.
The existence of the two varieties of cognitive psychology helps explain why the experimental side of my psychological research (for example, the work with Bill Chase) caught on much more rapidly than did the computer models (such as the EPAM theory of verbal learning, which still suffers from neglect in spite of the wide range of experimental data it explains).
As for our work in computer science, GPS had an enormous success, but the other systems I have built or helped to build, like the Heuristic Compiler and UNDERSTAND, have had less impact. One possible reason is that the problems I have worked on have tended to be rather specific, and have required attention to psychological detail rather than to more general principles of system architecture.
The Signs of Recognition
I hope that readers will not detect in this account any hint of self-pity. Any claim that the world has neglected our research would fly spectacularly in the face of the facts, and could even be a symptom of acute paranoia. My account aims, rather, at understanding why and at what times particular meteors fall from heaven with a terrific crash while others slip silently and unnoticed into the sea.
There were many tangible evidences of recognition of my work during this period, although it is not always easy to know just what was being recognized: whether the research on organization and administration, on bounded rationality, on artificial intelligence, or on cognitive psychology. Fairly early, I was invited to give lectures or series of lectures on various university campuses: by New York University in 1959, to speak on management; by Princeton, to give the Vanuxem Lectures in 1961; by Harvard for the William James Lectures on cognitive psychology in 1963; and by M.I.T., for the Compton Lectures on artificial intelligence in 1968. The NYU Lectures produced my book The New Science of Management Decision (1960, 1965, 1969), and the Compton Lectures, The Sciences of the Artificial (1969, 1981). The Vanuxem and James lectures were not published, for their content was too closely interwoven with the joint work with Al that eventuated in Human Problem Solving.
Honorary degrees and awards also began to come my way: degrees from Yale and Case (1963) and Chicago (1964) were the earliest. In 1959, I had been elected to the American Academy of Arts and Sciences (the “Boston Academy”) and the American Philosophical Society (the “Philadelphia Academy”), probably as much as anything for the visibility gained through my activity in the Social Science Research Council.
In 1969, the American Psychological Association awarded me its Distinguished Scientific Contributions Award (which troubled me somewhat, for it should have been a joint award with Al, who had to wait until 1987 for his). In 1975, Al and I received the Turing Award from the Association for Computing Machinery, probably delayed some years while the association decided what to do about joint awards. Long before the 1970s ended, Al and I had been fully legitimized in the cognitive science and artificial intelligence communities.
I soon learned that one wins awards mainly for winning awards: an example of what Bob Merton calls the Matthew Effect. It is akin also to the phenomenon known in politics as “availability,” or name recognition. Once one becomes sufficiently well known, one’s name surfaces automatically as soon as an award committee assembles.

Two decades of research took artificial intelligence and cognitive psychology a long distance across the plain that we saw from the vantage point of the Logic Theorist, and our university was a major center of the research contributing to this advance.
The initial research strategy never changed in any essential way. We identified significant intellectual tasks that people performed, either for recreation (puzzles and games) or as part of their daily occupations (investment decisions, pattern discovery, verbal learning). If we thought our understanding had reached the point where there was a good chance of writing computer programs to perform the tasks, graduate students and faculty members undertook to construct these programs and to test them for effectiveness or for conformity to evidence of human performance. As we came to understand the simpler domains of the mind, we undertook increasingly complex ones. The boundaries of the explored land advanced steadily, but there was always ample terra incognita beyond, and the year 1978, with which this chapter ends, gave no signs that it would all be explored soon.

Chapter 15
Personal Threads in the Warp
Beyond my boyhood and college experiences and the first years of marriage, I have so far said relatively little about my personal and family life. Without aspiring to the same frankness I have shown in describing my professional life, I will pick up the thread of my personal life, well aware of the artificiality inherent in separating the two. My professional activities were not unwarmed by emotion; a large part of the pleasure I have had in life has come from them. Nor has this been a purely solitary or intellectual pleasure. Most of my research and all of my politicking have been collegial, social affairs. I have worked with people I liked (and occasionally with some I did not like), and enjoyed deeply the associations and friendships. And, for both my wife and myself, social life has been confounded with professional life, as most of our friends, our hosts, and our guests have been part of academe, often Dorothea’s or my departmental or office colleagues and their families. There has been no definable boundary between the sociability of work and of leisure.
As with most families, our life has gone through phases closely synchronized with the growth of our children. During the years in Chicago (1937–39) and Berkeley, we were two. In the second Chicago period, at Illinois Tech, we grew to be five. By about 1961, the children were mostly away at school, and we were two again. So we have remained, except for visits with children and grandchildren. I will try to characterize our life in these various conditions.

Family Life
You have already had some account of my interest in girls and women. If the statistics of the Kinsey Report are at least approximately correct, I would have to judge the strength of my libido and my response to it as lying somewhere close to the middle of the distribution. I cannot remember a time when I was not interested in girls and, later, women.*
For more than half a century, my marriage to Dorothea has brought deep companionship and love. Meeting her, on the eve of my twenty-first birthday, was no more, or less, a matter of love at first sight than had been my first meetings with some other beautiful women, but in this case a good beginning progressed to a full and satisfying relationship and a wedding six months later, on Christmas Day, 1937. Dorothea and I have been either very clever or very lucky ever since.
In spite of the expert status that fifty years of the institution should bestow, I have little to say about marriage that has not been said better by competent marriage counselors. In retrospect it seems easy, but the road was not without buried mines that had to be detected and removed. Dorothea and I started out with strong shared interests in political science and liberal politics. I was a little worried (I am not joking) that she did not know calculus, but she promised to remedy that, a promise she fulfilled only many years later. I wanted my wife to share all my interests, which included mathematical social science, but that did not wholly work out.
The interests we did share, soon supplemented by those of our social life and then of raising children, were quite enough. We shared liberal political views and activities; before the children arrived, we wrote and published a number of papers together on municipal government. When we married, Dorothea had been pursuing a Master’s degree in political science. We decided that each of us would take time off from our jobs to complete our degrees, and we each finished our requirements while we were living in California.
Although I would not pass muster as a feminist by present-day standards, I certainly did not have my father’s hangup about a wife’s contributing to the family budget. The problem of sharing housework when we were both employed was solved by eating out often and hiring a housekeeper. The rules under which we launched our undertaking would now be regarded as

  • I will soon be able to echo the comment of Justice O. W. Holmes, Jr., eyeing the young secretaries descending the steps of the Supreme Court building as they modestly clutched their skirts against a whipping breeze: “Oh, to be eighty again.”

old-fashioned, but at the time were progressive, although perhaps not avant-garde.
It was assumed that my career took priority. We agreed that climate or other geographical considerations were not important in choosing or changing a job. I would go wherever I could do my best work, and Dorothea would go with me. That may seem a major concession, especially since she was a native Californian, but she has never had a particular yen to return there. Nor did it seem a major decision at the time, or even a decision at all, that her employment would have to fit itself to mine.
Dorothea would work as long as she liked, but as we began to raise a family she would probably manage our household for some years. Again, my recollection is that this was more an implicit assumption than an explicit decision requiring discussion. After Kathie’s birth, Dorothea did continue working for two years, but then became a full-time mother and housewife, albeit with a heavy schedule of volunteer activities in the League of Women Voters and several other organizations, until the children were grown. At that time, she went back to school to prepare herself for a new career in educational research, and as a result, we again had the pleasure of working and publishing together, this time in cognitive psychology.
Under the rules by which we were playing house, the size of our family would have more impact on Dorothea’s life than on mine. Therefore, she had the majority vote in making the decisions, though I don’t recall that we disagreed on them. We waited until near the end of our graduate studies to have Kathie, in 1942, and Peter and Barbara arrived two and four years later, respectively. In that era, it appeared that all academic families had three children, so we conformed to the mode.
In that era also, the role of fathers in the birthing process was minimal. I believe I was at the movies when Kathie was born, having been assured at the hospital that nothing was imminent. But I dutifully sat in the waiting room while Pete and Barbara were making their way into the world. (In the case of Pete, I brought along a book on vector analysis to while away the time. I don’t recall what my reading matter was when Barb was born.) I learned to change diapers and took my turns on the night shift, but otherwise Dorothea took responsibility for the infants.
By disposition I was the disciplinarian of the family, and differences in our attitudes toward discipline were sometimes a major cause of stress between Dorothea and me, particularly as it began to appear that Pete, impatient of authority from the beginning, was going to have a stormy youth. In any event, the now-grown children tell me that I was a stern father, but reassure me that I am not now a hostile figure in their memories of their youth.

Although a disciplinarian, I probably was not a consistent one, and that for two reasons. First, I have a deep respect for independence of mind and spirit. My graduate students would be willing, I think, to attest to that. When Pete was having his greatest battles with me and with much of the world, I secretly admired his stubborn pluck. I certainly never crushed his spirit, although life might sometimes have been briefly quieter if I had been able to.
The second reason for my inconsistency in discipline is that I have been for most of my adult life a workaholic, toiling (more accurately, enjoying my work) for sixty to eighty hours a week, sometimes more. I enjoyed my children, but was never very good at entertaining them for long periods of time beyond reading to them in the evenings. I had too little patience to listen to them at length or to play many games at their level. Nor had my own father felt responsible for entertaining me much as a young boy, although I did spend many hours watching what he was doing. But his activities (gardening, carpentry, fishing) were infinitely more watchable than mine of writing papers.
So I was not a stellar father. Moreover, I had strong doubts that I knew what was best for my children, or that I could predict the consequences of one way of treating them over another. Because my own childhood had been handled with a good deal of laissez-faire, my convictions about a father’s role were that children should not be overguided or overprotected. The thought of Little League baseball, with its cheering, involved parents, sends chills down my spine. In my boyhood, baseball was self-organized, played in a vacant lot or alley. Adults were not welcome.
In recent years I have felt much better about my fatherhood. I seem not to have done permanent harm. Kathie, Pete, and Barb, now in their forties, are progressing through interesting, challenging lives. They are warm and affectionate, evidently holding no deep grudges against their parents. On visits, we have good times together. Their basic values are ones I can respect (that is, they are much like mine). Dorothea and I have shared many of their problems with them, and they, with us. And for more than twenty years now, we have been watching the same process unrolling with our grandchildren, who now number six. As one grandchild is now married, we may even be great-grandparents soon. I note with satisfaction that none of the children or grandchildren seems to have been unduly damaged or intimidated by my notoriety in the world.
Then there is the matter of money, a topic that must command some attention in life even from two who can live as cheaply as one. Throughout our married life, income has fortunately never been a serious problem. As has often been observed, there is no difficulty in maintaining solvency if you spend no more than you earn. In particular, laziness and lack of imagination are great aids to solvency. Buying a vacation house or a sailboat or a second car, or even remodeling one’s residence, calls for initiative and energy. Dorothea and I have always seemed to be too busy with other activities to find much time for those things.
During two periods, however, we were a bit pinched on my academic salary. When we bought a house in Chicago in 1948 and had to do it all over again in Pittsburgh in 1949, we had essentially no savings, and it was lucky that my parents could take our second mortgage. Later, when Kathie was in college and Pete and Barbara in private schools, the budget stretched tight, but I was able to supplement my salary from RAND and other consulting. In recent years, I have been in the fortunate position of being able to accept or reject opportunities for consulting and lecturing on the basis of their professional interest and without regard for income.
I allow myself a total of about four days a year to attend to investments and other financial matters. I haven’t checked the record in detail, but I have the strong impression that we have done approximately as well as if we had put our savings in an index fund, perhaps better. But it is easy to delude oneself in these matters unless one carries out the calculations. My training as an economist has been of help in one important way: It convinced me that the only information that is of value in a financial market is information that other people don’t have.
This means that I don’t have to pay daily, or even monthly, attention to the stock market, since it tells me nothing about whether I should buy or sell. Hence, the turnover on my investments is very low, to the discouragement of the brokers with whom I do business (and from whom, also, I never take tips or advice). Nothing that I learned while serving on the Finance Committee of the Carnegie Mellon Board of Trustees has shown me that this strategy is wrong. It satisfices; perhaps it even optimizes.
This is not to imply that I dismiss money as unimportant. Dorothea and I have always had enough for our needs, and usually more. If that had not been the case, I would undoubtedly sing a different tune.
Moreover, making a lot of money (I mean a lot) might be a fun game. It just is not a game I have had occasion to play, and I might have some qualms about its zero-sum aspects, as it is usually played. Apart from the game aspect, I have a hard time imagining why people want the stuff. Such trouble!
One writes about a life in terms of Big Issues and Critical Events. But on rereading these pages, I see how little they convey of our moment-to-moment existence, that is, of most of the hours of my life. Gertrude Stein called it her “daily living”; infrastructure is a trendy word for it, and it is as fundamental to a life as to a society. The moment-to-moment life Dorothea and I have led has been simple. We love our home, and we try to make it attractive and comfortable to ourselves, but have never tried to make it a work of art. We like good food, but eat simply. We entertain little, and that for friendship and not to maintain a social position. We enjoy each other’s company (and solitude) sufficiently that we don’t go out very much. We like good music and good art, but don’t often exert much effort to experience it. The television and stereo are rarely used, the piano (amateurishly) more often. In recent years, we don’t even get to the movies frequently. But we are voracious readers with many common tastes.
Sticks in the mud, you might say.
Recreations and Diversions
If we have ever exerted effort in entertaining ourselves, it has been in our travels, domestic and foreign. In the early days of our marriage, we several times took hiking vacations, which I have described in the first panel. We did not camp with our children, however (probably because of laziness about the logistics), but, usually in the summers, took long auto trips with them: three or four to the West Coast, a trip to Maine and Quebec, trips to Atlanta and Ensenada, and many others. Those trips were good fun (for all of us, I think) and have given me my best memories of being a father.
Our home emptied out rather early. When Pete had serious problems with us and with the local schools, we found a school for him in New England that guided him with patience and success. When Barbara, after an especially happy year in the Santa Monica High School, was desolate at the prospect of returning to a less congenial Pittsburgh school, we found a private school for her, also in New England. Kathie, meanwhile, had gone to college, first in Ann Arbor, then in New York. By 1961, Dorothea and I were mostly alone in our large house.
The glue that holds us together is made of a habit of mutual love and affection, joint professional activities, and a shared curiosity about almost everything around us. We can gossip, not too maliciously, about our neighbors and our friends, about our children and relatives, about our respective daily activities. The encyclopedia, the atlas, the dictionary, and the world almanac frequently arrive at our dinner table to answer a question or settle a dispute. My Washington, or Dorothea’s Pittsburgh, committees, science, politics, religion, art, funny or ridiculous things we read: all are grist for our mill. And, if all else fails, we sometimes bicker or retire to our respective workrooms.
Since the 1960s, our travels have most often taken us abroad, and have been as much urban as rural. Usually they have been associated with my professional meetings or lectures, but with more recreational than professional intent, at least until the trip to China in 1972. In chapters 20 and 22, I will give an account of some of the more notable of these trips, especially those to Scandinavia, China, and the Soviet Union. Our many vivid shared experiences provide an increasingly pleasant source of reminiscence, a disease of old age that we do not attempt to resist.
Then, of course, there are the thousands of hours that have been taken up in hobbies. My principal ones have been hiking, piano, chess, painting and drawing, and acquiring foreign languages. As a boy I enjoyed, but was not good at, most sports, and in some other life would have pursued skiing, sailing, and tennis much farther than I did. Of my five main hobbies, only hiking can be sociable (unless you count chess, which provides a curious kind of sociability), and it is the only one I have shared much with Dorothea. She has had her own occupations, especially weaving and other crafts, and gardening.
Time is the tyrant. One cannot be loyal to two occupations any more than one can to two lovers. Whenever I found that one of my hobbies was seriously taking attention from my research, I dropped it. That happened to chess, and then painting, in the late 1950s. In both cases, I found that I was aspiring to professional competence, which obviously would have required an unlimited commitment. It was time to call a halt. It probably says something about my competitiveness that I often found myself getting serious about activities that were begun as diversions.
I spent two years in somewhat serious chess play as a high school student. The city recreation department provided chess-playing facilities in the evening. There I met Arpad Elo, the man responsible for the universally used chess rating system that tells whether a player is a master, an expert, or only an A player. (I never got beyond an A rating.) One evening I played Elo and lost as usual, playing White in the Giuoco Piano. When I got home, I reanalyzed the game and found that I could have beaten him easily if I had made the correct aggressive move with my Bishop, on the seventh move, I believe. The next evening I pointed this out to him. “Oh,” he said, “but our game was last night.”
Wisely, I gave up serious chess in college. Three of us did play a consultation game against Edward Lasker when he visited the campus, and beat him. Many years later, I showed him the score of the game, and he ruefully pointed out where he had made his blunder. Chess remained on the back burner until we began work on the NSS program. Then I began to play regularly at the Pittsburgh Chess Club to raise the level of my sophistication, using the research as my excuse.
Soon I was playing in the city tournament, with a rating of 1,853 that was rising fairly rapidly. I even beat the strongest player in the city at that time. (He was overconfident against a weaker player and tried for a win when he should have been satisfied with a draw.) Then I began to feel the juices of competitiveness rising within me, and dropped my chess at once. I could not have maintained the pace unless I devoted at least one or two days a week to the game.
With drawing and painting, I never thought I had any real talent, and my color-blindness certainly did not encourage me to cultivate these arts. At Christmastime, I think it was in 1958, and possibly partly to distract myself from personal worries (discussed in the next section), I began making collages and then switched to oil paint. A little later both Dorothea and I took some drawing lessons, and discovered that the ability to represent the world is at least as much a matter of skill as of talent (like everything else).
I was fascinated by painting. First, there were some puzzles to solve: What kind of a palette can a color-blind painter use without ending up with a totally confused canvas (since the green spots and the red spots you have laid down may be wholly indistinguishable)? But beyond this challenge, painting began to grip me and to occupy my thoughts when I might have been thinking about my research. I found it a terribly demanding and satisfying activity. After a year or two, I tapered off and have not gone back to it, but I am tempted from time to time.
My drawing I have kept up a little, when I travel. I sometimes carry a sketchpad, though never a camera, and am now surrounded (in a “hideaway” office on campus to which I can retreat) by sketches of mine that remind me of a favorite Japanese inn, of Hongkong, of Tianjin, of the Swiss Alps, of the Santa Monica beach, and of Panther Hollow in nearby Schenley Park. I have no illusions about the quality of the drawing, but it is great for evoking memories.
Hiking is not a competitive sport, so it poses no serious problems of loyalties. I do not aspire to become a world-class hiker, and would be scared out of my wits if I developed aspirations to climb serious mountains.
It is harder to explain why I have not been more serious about my pianistic skills. As a child, I was reluctant to practice and, to avoid my teacher’s censure, had to develop a considerable skill in sight reading. The advantage of this is that I can leave the piano for long periods of time and return to it still able to maneuver through Mozart sonatas, Bach preludes, and the like, with some approximation to accuracy and nearly up to tempo. My playing is no worse, and no better, than it was in my seventeenth year, and I feel badly about that only intermittently. Oddly, I never commit a piece to memory, no matter how often I play it.
I have devoted a little time, but only a little, to musical composition, and have engaged in research on computer programs capable of doing musical analysis. (You can find a report on one of them in volume 2 of Models of Thought.) Here, as in painting, I know that my aspirations could rise rapidly, and I have been correspondingly wary about becoming involved.
I count reading in foreign languages among my hobbies, for I have probably spent more time in that than in the other four combined. But I have already given some account of my language interests. And while we are on that topic, why haven’t I mentioned reading in general? It is, of course, much more than a hobby. It is one of life’s main occupations. As with eating, so with reading, I am nearly omnivorous. But my stomach for words is hardier than my stomach for rich foods, so I do not ration myself.
I am often complimented on the range of my interests. But you can see from this account that I control them severely. Moreover, the fact that understanding human thinking is my reigning passion has a curious consequence. For I can rationalize any activity I engage in as simply another form of research on cognition (and perhaps emotion as well). I have published on chess, on musical pattern, on the Chinese language, and on many other topics I simply stumbled into. I have also encouraged and followed the work of others in computer drawing (notably Duane Palyka and Harold Cohen), although I have not published anything myself. In a way, I can always view my hobbies as part of my research. It’s the best of all possible worlds.
Love and Marriage
Love has played not a small part in my life. By the broadest, Stendhalian or Proustian, definition of love, I have never been in love with a girl or woman who was not beautiful, nor wholly out of love with one who was.* The criterion of beauty is not necessarily classical: A face must be interesting, not just flawless.
And beauty has difficulty showing itself unless the face and eyes are lit by intelligence. I have always felt that, since faces have greater variety than do bodies, beauty resides somewhat more in the former than in the latter. When I encounter a woman who matches my idea of beauty, I am immediately stirred. But the attraction can wear off in minutes if there is no intelligence.
Love is as important in marriage as in the maneuvers leading up to it.

  • Alistair Cooke quotes from Charlie Chaplin’s autobiography: “Procreation is nature’s principal occupation, and every man, whether he be young or old, when meeting any woman, measures the potentiality of sex between them” (Cooke 1977, p. 26).

But my admission of susceptibility to beauty might raise questions about how well I have adapted to monogamy.
There are two mind/body problems. The classical one asks how a physical system can have thoughts. It has been answered definitively by the appearance of electronic computers that think. The second mind/body problem, quite different, is that of sacred and profane love. Its solution is not so clear as that of the first, and attempts to solve it make up a large part of what we call literature. As a young man I had to try to shape my personal answer.
For some years, I thought that body could be detached from mind for some purposes: that sexual attraction might be a precondition for love, but was certainly not synonymous with it (I still believe that), and that sexual acts with a loved one or with another had no implications for one’s love (I no longer believe that). I think I arrived at these conclusions in response to my body’s wants and not as a result of reading tracts on free love.
With what I now regard as great good fortune, I never fully acted on these earlier principles, and have always held strictly to the law, if not exactly to the spirit, of monogamy. Drunk, and occasionally sober, I have sometimes made advances to other women, but two things have prevented matters from progressing far. The first is my sense of vanity. To be attracted to a woman and find that I was not attractive to her would prick my pride. One sure safeguard is never to make the test, to move with great tentativeness and wait for a response. That is not a very powerful strategy in either love or war. But, in love, it does protect vanity and promote monogamy.
My second defense against profligacy I like to think of as a form of honesty. I have never been able to tell a woman I loved her when I felt only sexual attraction. Don Juan instructs us that most women are not very vulnerable to attacks that do not promise love, and preferably exclusive love. For more than fifty years I have been deeply in love with my wife, and I have been unwilling to say that I was not. None of the standard gambits for philandering were open to me. I could not say that I needed a woman because my wife misunderstood me. Sometimes Dorothea did misunderstand me, and I her, but crying on another shoulder did not seem a way to solve that problem while love remained.
A good logician, examining the sophistry of the preceding paragraphs, will see that I am not yet out of danger. I have not denied (and cannot deny) the possibility of being genuinely in love with two women at once, and this has occasionally happened to me.
About six or seven years after my marriage, a young woman enrolled in one of my classes; I’ll call her Karen. She was a couple of years younger than I and possessed of a remarkable, poised, aristocratic beauty. Bright and imaginative, she was educated in the arts and humanities but not in the sciences. She had most of the attitudes that go with such an education, including a slight distrust of technology and some mild tendencies to mysticism. It was a delight to watch her beautiful and intelligent face during class sessions. We became friends, but on such proper terms that I was never able to decide whether she was sexually attracted to me. I soon learned that she was married and that her husband was away in the army. There were suggestions that the marriage was not successful.
Our conversations were on such topics as city planning and the arts, not on love. After the school year ended, I saw her only infrequently, and never under circumstances that could have led to intimacy. A few times she came to my office at ICMA. She was often late for appointments, sometimes very late, which put me into a tizzy of expectation. I had no doubt that I was in love with her, and no doubt that I was at least as much in love with Dorothea. I resolved to do nothing about the situation, a resolution that was easy to keep, as Karen, though a warm person, never hinted that she wanted more than friendship. Nor did I.
On one occasion, perhaps more, she came to our home (with a suitor in tow). Dorothea liked her and, I think, was not jealous; at least there was no sign of jealousy except one afternoon when I arrived home quite late because of a meeting with Karen. I did not think of her often when I was with Dorothea, but on my travels, especially during the summer of 1948 that I spent in Washington, helping to organize the Economic Cooperation Administration, I sometimes dreamed of each of them (never both together). My contacts with Karen became infrequent, and were mostly exchanges of brief notes. I was aware of a divorce, a new marriage, and a second divorce. I usually knew her whereabouts. And I did not forget her.
In the summer of 1958, I was riding an emotional crest. Our initial forays into artificial intelligence had succeeded and were beginning to be recognized. The RAND summer seminar that Al and I had organized had just concluded triumphantly. The last evening, before departing for the Los Angeles Airport, I took a walk along the Santa Monica beach and was swept up in a tide of people who were moving in a dense mass toward the newly restored amusement pier that was just opening that day. I was carried along with them as far as the pier. Then, having to turn back and move against the tide, I was oppressed by a great depth of loneliness and emptiness in my isolation from that mass of happy, chattering humanity. But by the time I boarded the plane, my euphoria had returned.
A few weeks later, I was off to lecture to executive groups in two cities, an activity that keeps my adrenaline flowing. I drove from the first to the second meeting on a beautiful sunny summer afternoon, singing aloud much of the way, not something I often do. My second lecture was scheduled for the following day. I had a pleasant dinner and a good night’s sleep. After the next morning’s meeting, a participant introduced himself and said he had been asked to convey greetings to me, from Karen! She would be arriving that afternoon and hoped to see me.
Once at Rockmarsh (see chapter 2), I had grasped the wire of the electric fence while standing in the stream. The jolt I had just received, amplified by the euphoria I had been feeling for some weeks, hit me as hard. With difficulty, I fixed my attention on the business of the day.
Karen arrived toward the end of the afternoon (on time), and the three of us drove back to New York together, she driving the car and I trying to convey my feelings while being discreet in the presence of our companion, a task I found very hard. The same evening, I flew home to Pittsburgh, having plunged over the brink of an emotional precipice.
Having told Karen of my love for her, during the next few months I tried desperately to find a way to meet her on more than just a friendly basis, without denying my commitment to Dorothea. Karen acknowledged that she found me attractive (thus assuaging my vanity), but would not enter into an affair with a married man (divorce was never discussed).
Soon, my feelings of guilt at carrying on this negotiation were depressing me so seriously that I had to level with my wife. The friendly but Platonic conversations with Karen were no longer supportable for me. Both women observed that I was asking for the moon, which I was, and Karen opined that I was probably in love with an imaginary woman, which I may have been, though I did not think so. The two even got together once without me, and psychoanalyzed my problem to their satisfaction.
I did the obvious thing (obvious if you are not in love): I stopped seeing Karen, with just a couple of lapses, each of which was followed by such painful depression that it strengthened my resolve. The salt slowly dissolved from the wound. Today I can recount this tale in bittersweet terms, a sentimental old man retrieving his lost youth.
If I did not acknowledge the importance of this episode, I would be falsifying my life. Moreover, the experience added an important corollary to my theory of love: You can love two or more women at once (denying that would be denying my own emotions), but you cannot be loyal to more than one. The dichotomy of profane and sacred love is not enough. Unless accompanied by loyalty and commitment, love, even love that goes far beyond sexual attraction, provides an insubstantial base for marriage or for a satisfying continuing association.
The budget of time from which we never escape imposes priorities on us, priorities of values and priorities of people. Commitment in marriage means that the needs of one other person must hold a special priority in our life. That person must be able to count on us as we count on him or her, and the needs of two persons cannot share the same urgency. It is this combination of love and commitment that has made my fifty-three years with Dorothea so central to the meaning of my life and, I hope, of hers. It took the experience I have recounted to make me understand that. I am embarrassed to be such a slow learner; I am not sorry to have had the experience.
There is a sermon hidden somewhere here, with a moral about the new concept of “relationship” that developed and spread in our society in the 1960s and 1970s. But I guess I will simply leave it to the new generations to work out their own definitions of loyalty. Perhaps they will discover something that I missed.

Chapter 16
Creating a University Environment for Cognitive Science and A.I.
Upon my return to the Graduate School of Industrial Administration after my 1960-61 leave of absence at the RAND Corporation, Lee Bach informed me that, for reasons of health, he was going to resign the deanship. That was very unpleasant news; it was hard to think of GSIA operating under another dean. The national and international success of the school had been so spectacular that Lee’s successor would be fortunate to hold the ground gained. It would be like becoming the heir of a Roman emperor who had just made a large addition to the empire.
I was the obvious heir, but the excitement of my cognitive science research combined with my previous administrative experience, including the year as acting dean, had convinced me that I did not want to spend my days in deanly duties. Meanwhile, Dick Cyert, now about forty-one, had been exerting a good deal of leadership within the school. For several years he had been head of the undergraduate Industrial Management Department (the job I held when I came to Carnegie Tech), and he had formed a group of the younger faculty to build an innovative management game for use in the Master of Science curriculum. He and Jim March were finishing their book, A Behavioral Theory of the Firm (1963), and he had other good research to his credit.
Bill Cooper soon started a faculty drive in support of Dick’s candidacy. Lee Bach was somewhat negative, believing that we should look for an outside candidate; to him, Dick just didn’t seem distinguished enough for the deanship of a school of GSIA’s prominence. But faculty support for Dick built up rapidly.
I had generally positive views toward Dick, and had been the principal mover in appointing him as department head. I had only one reservation: He seemed to enjoy power too much, a worrisome trait in a leader. (Leaders should exercise power, but enjoying it is another, and more dangerous, matter.) After a long and frank conversation with Dick, I retained my concern about his attitude toward power, but was persuaded that he also had a strong attachment to the goals of GSIA: the power would be used responsibly, not just to advance his career. I also concluded that since he now understood my concerns, we could get along well, and we did.
Migration from GSIA
Dick was appointed dean in 1962, serving in that position until he became president of the university in 1972. I had (and have) serious reservations about the direction in which GSIA moved during this decade, but many factors were involved besides the actions of the dean. His most serious shortcoming was that he was bedazzled by mathematics and formal methods. As a result, senior faculty tolerance decreased for nonquantitative research, and for empirical work not grounded in formal theory.
Notwithstanding Dick’s previous involvement with the behavioral theory of the firm, this research was one of the first victims of the new bias. The mathematically inclined faculty we were recruiting had little taste or talent for empirical research that did not start (and sometimes end) with formal model building.
Over time, a coalition of neoclassical economists and operations research specialists came to dominate the GSIA senior policy committee, making decisions that produced a growing imbalance in the composition of the faculty. Although I had never thought I lacked sympathy with mathematical approaches to the social sciences, I soon found myself frequently in a minority position when I took stands against what I regarded as excessive formalism and shallow mathematical pyrotechnics. The situation became worse as a strict neoclassical orthodoxy began to gain ascendancy among the economists. It began, oddly enough, with Jack Muth.
Jack, as a graduate student, had been a valuable member of the Holt-Modigliani-Muth-Simon (HMMS) team in the dynamic programming research. He was (and is) very bright, and an excellent applied mathematician. In our project, he investigated techniques for predicting future sales and, generally, for dealing with uncertainty. Shortly after completing his dissertation, which was related to the project, Jack published in Econometrica in 1961 a novel suggestion for handling uncertainty in economics. He clearly deserves a Nobel for it, even though I do not think it describes the real world correctly. Sometimes an idea that is not literally correct can have great scientific importance. To economists his idea is known today as “rational expectations.” I will explain it here only roughly; a detailed account would take us deep into technical matters that are irrelevant to the story.
The theory of rational expectations offered a direct challenge to theories of bounded rationality, for it assumed a rationality in economic actors beyond any limits that had previously been considered even in neoclassical theory. The name of the theory reveals its general idea: It claims that people’s rationality extends even to their expectations about an uncertain future, such expectations being derived, in fact, from a valid model of the economy, shared by all decision makers.
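Roughly, then, and in the notation the hypothesis later acquired in textbooks (my sketch here, not Muth’s own 1961 formulation), rational expectations says that the forecast people hold coincides with the mathematical expectation computed from the true model of the economy:

$$p^{e}_{t+1} = E\left[\,p_{t+1} \mid I_{t}\,\right]$$

where $p^{e}_{t+1}$ is the price agents expect to prevail next period, $I_{t}$ is the information available to them at time $t$, and the expectation $E$ is taken over the actual model generating the economy. On this hypothesis, people’s forecasts deviate from the model’s own predictions only by unpredictable random errors.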
Jack’s proposal was at first not much noticed by the economics profession, but a decade later it caught the attention of a new young assistant professor at GSIA, Robert Lucas, who had just completed his doctorate at the University of Chicago.* Beginning in 1971, Lucas and Tom Sargent, who was also with us for a short time, brought the theory of rational expectations into national and international prominence. It is not without irony that bounded rationality and rational expectations, two of the major proposals after Keynes for the revision of economic theory (game theory is a third), though entirely antithetical to each other, were engendered in and flourished in the same small business school at almost the same time.
*The Cowles Commission had migrated from Chicago to Yale, and the Economics Department at Chicago, under the influence of Milton Friedman, had become ultra-orthodox in its adherence to the neoclassical faith, and completely intolerant of alternative religions. Bob Lucas was a product of the new Chicago School.
Not only did they flourish, but they were represented, along with Keynesian theory, in a four-man team that worked closely and amicably together for several years on a joint research project. The HMMS research team harbored simultaneously two Keynesians (Modigliani and Holt), the prophet of bounded rationality (Simon), and the inventor of rational expectations (Muth): the previous orthodoxy, a heresy, and a new orthodoxy.
The rational expectationists, and the neoclassical mathematical economists generally, gradually made GSIA less and less congenial to me. To oppose the trend and secure more tolerance for other points of view, I would have had to devote most of my time to the politics of GSIA, which was not where my interests then lay. It is not clear whether I would have won the struggle had I undertaken it.
Amid these controversies, I slowly retreated from GSIA, beginning shortly after Dick Cyert became dean, although I have worked hard (and relatively successfully) to make sure that there is room elsewhere on the campus for economists of other persuasions: in the School of Urban and Public Affairs and, subsequently, in the Department of Social and Decision Sciences within the College of Humanities and Social Science. Eventually, around 1970, I moved my office to the Psychology Department, but continued to participate in GSIA policy meetings and retained the position of associate dean (“without portfolio,” I was fond of saying). Actually, to say that I retreated from GSIA is only partly correct; I was also drawn to the Psychology Department and the burgeoning new activity around the computer by the shift in my own research interests.*
Politics on the Campus
Throughout my career, I have devoted much time to the politics of science, both inside Illinois Tech and Carnegie Mellon and at the national level. Perhaps this is a good place to explain how Carnegie Institute of Technology became Carnegie Mellon University, because this happened in 1967, between the time when Dick Cyert became dean of GSIA and when he assumed the presidency of the university. During the presidency of Guyford Stever, a merger was arranged between CIT and the Mellon Institute, a nonprofit industrial research organization in Pittsburgh that had been endowed by Andrew Mellon. The two institutions, and the names of their major benefactors, were merged.
I do not know whose decision it was for us to become a university. The professionals at the Mellon Institute were scientists, principally chemists, who would have been quite at home in an Institute of Technology. Somehow the merger was seized on as an opportunity to proclaim a broader mission for Carnegie Mellon by dubbing it a university.
Organization theorists will be interested to know that the change in name has not been without consequence. It has supported arguments such as, “We are now a university; universities have Philosophy Departments, therefore CMU ought to have a Philosophy Department.” What’s in a name? A great deal, it would appear.
Campus politics and administration need to be guided by two goals: excellence and innovation. Money does not guarantee excellence. Although university salaries and faculty quality are correlated, the correlation is far from perfect. Insisting on excellence (on the university’s getting what it pays for, and more if possible) at the time of critical personnel decisions (hiring, reappointment, promotion, tenure) can turn a mediocre faculty into a first-rate one.
*The defeat of bounded rationality and organization theory in GSIA was still a real blow to me. I have always liked the quote from General Stilwell, who, when driven with his troops from Burma, pushed aside excuses with, “I say we took a hell of a beating.”

When making tenure decisions, members of a faculty are inclined to sacrifice quality to humaneness, particularly when close associates and friends are being judged. Acting humanely is an admirable human trait, but it is easy to misconstrue what is at issue. A faculty tenure committee is not determining how many people will be employed in the society, but which people will be employed in a particular university. Retaining a faculty member who is less able than others who could be recruited is as inhumane to the (possibly unknown) replacement as it may be humane to the incumbent. Faculty members who are denied tenure don’t go on the breadline. They move to other universities or other occupations. Universities achieve high quality when they keep these facts in view.
Innovating means not simply generating ideas but disseminating them. Ideas can be disseminated by talking and writing, and the dissemination can be greatly facilitated by building institutional homes for them. At Carnegie, we have had considerable success in generating new ideas, in creating organizations to nurture them, and in propagating them through the wider educational and scientific communities. The first innovative activity I was involved in at Carnegie was founding GSIA; that organization and its worldwide influence on business education have already been described. The second was building a psychology department that has been an international leader in developing and diffusing computer simulation and information-processing psychology. The third was introducing computers at Carnegie Tech and building there one of the world’s earliest and leading computer science departments.
A fourth effort at innovation, still developing, is reconstructing design as a scientific activity and reintroducing design into the engineering curriculum. A fifth is strengthening effective education at Carnegie, by emphasizing problem solving and the blending of liberal with professional values and approaches. The institution building associated with these innovations has largely occupied the part of my life that has been devoted to university policies and politics.
This activity is not at all separate from the main stream of my research, for the Carnegie campus provided the intellectual environment where innovative ideas could be developed and then communicated to the rest of the world. Behavioral theories of economics, bounded rationality among them, gained their visibility through the joint activities of our research group in GSIA during the 1950s. The Psychology Department provided the platform for launching the cognitive revolution in psychology. A sequence of organizations, culminating in the Computer Science Department, provided the corresponding platform for artificial intelligence.

The New Cognitive Psychology
The new research on cognitive psychology that was described in chapters 13 and 14 was launched from GSIA in 1956. Within a year, Lee Gregg in the Psychology Department began to take part, but no other interest was shown by that department. Lee, seeing the promise of the new approach, moved rapidly to it from the behaviorist empiricism of traditional experimental psychology in which he had been trained at the University of Wisconsin.
GSIA had had connections with psychology, in social and organizational psychology, and Harold Guetzkow had a joint appointment in GSIA and the Psychology Department. Because I was a Fellow of the Division of Social Psychology of the American Psychological Association (on the strength of my research on organizations), I also had at least minimal legitimacy in psychology. I began to propagandize for more participation of the Psychology Department in the cognitive revolution we had started.
Some GSIA funds were used to hire young experimental psychologists who we thought might be seduced into the new directions, but that plan was not very successful. The traditions of the discipline and concerns about a successful career in psychology were too strong to allow untenured psychology faculty to join the revolt. By the time I went to RAND on my sabbatical, in 1960, I was beginning to doubt that we could accomplish the revolution from the foreign territory of GSIA, without a firm base also in the Psychology Department. I resolved to do something about it when I returned to Pittsburgh.
As I assessed the situation in the autumn of 1961, there had been little progress, and Haller Gilmer, chairman of the Psychology Department, unpersuaded by my particular vision of the future, was unwilling to promise there would be more. I decided to use some of my brownie points with the administration to bring about a rapid change. My method was abrupt, justified in my mind by the importance I attached to the goal. The depths of my convictions on matters important to me had not gone unnoticed by my colleagues. In his autobiography, Leland Hazard, the retired general counsel of Pittsburgh Plate Glass who taught very effectively in GSIA for many years, mentions an incident that occurred in 1960:
At the time of the School’s [GSIA’s] tenth anniversary we held a symposium called “Management and the Corporation, 1980.” There were a dozen participants of national and international prominence. Barbara Ward (Lady Jackson) was seated next to me and Herb Simon was across the semicircle. “He has the face of a fanatic,” Barbara Ward said to me. Before I could reply the television lights came on. [Hazard 1982, p. 29.]
Whatever my face may reveal (it isn’t a poker face), I do act with coldness and calculation when important goals are at stake, even with a certain disrespect for the norms of politeness. I suppose that is as good a definition of fanaticism as any. I do not enjoy hurting people, but I do not always act to optimize human relations. When it looks like the effective thing to do, I can lose my temper, or appear to.*
*Very likely, the kind of calculated anger I sometimes exhibit is less forgivable than the spontaneous, uncontrolled kind. Many years ago, a friend said to me, “The great thing about my mother is that she never struck us except in anger.” I suspect that this is the normal reaction, that loss of temper is a better excuse for aggressive behavior than is calculated severity.
In the case of psychology, I thought a great deal was at stake, and after a tense luncheon session with Haller (I did the shouting; he was calm), I wrote him the following memorandum summarizing our conversation:
November 2, 1961

Dear Haller:
I have given further thought to my course of action on the matters we discussed yesterday. I shall presently talk to President Warner, preferably after a new Dean has been found for GSIA, as follows:
1. I can fruitfully carry on my work at Carnegie only if there is on campus a strong graduate psychology program with emphasis on the area of cognition and simulation of cognitive processes. Since the local resources (financial and environmental) cannot be expected to support a very general graduate psychology program of the first quality, this implies more specialization and focus in the department than now prevails. Apart from my own personal requirements, a specialized program of this sort is the only kind that makes sense on this campus in relation to the activities of GSIA and the new program on systems and communications sciences [computer science].
2. While we have made some progress in this direction in the past five years, we have made it only because GSIA was willing to supply the financial resources, and there is little evidence that the other resources of the Psychology Department have been oriented toward this goal. I have had the feeling that nothing happened except when I pushed, a feeling further confirmed by lack of movement during the year I was away from the campus.
3. To reach the goal will require vigorous leadership in the Psychology Department from a chairman who is thoroughly sold on the objective. Because of what I perceive as a drift in the department over the past two or three years, and because of your own statements of the limits of what you can do, I no longer have confidence that you will provide that leadership. I do not wish to continue exerting the pressure I have had to exert in the past to keep the department turned in what seems to me the only promising direction for development of its graduate work.
4. The integral relation of the behavioral science programs in Psychology and GSIA needs to be further emphasized by placing formal responsibility for the administration of the graduate programs in GSIA. The psychology graduate program cannot be satisfactorily supervised by the Dean of Graduate Studies in Engineering and Science, and the present semi-formal arrangement is too ambiguous to members of the Psychology Department.
These are not conclusions I have reached hastily, for I have examined these questions many times in the past year. I would have raised them with you earlier this fall had Lee not decided to retire from the Deanship.
Herb
The deanship of Humanities and Social Science, the division in which the Department of Psychology resided, was also vacant, but was filled at just this moment by the appointment of Jack Coleman, an economist from GSIA. Two days after I sent this memo, I also wrote to Jack, indicating my intention to appeal to Carnegie’s president if needed, to bring about the changes in the Psychology Department that I thought were necessary. It closed with “very best wishes for success in your new assignment, and apologies for precipitating your first administrative crisis within the first ten minutes of your welcoming ceremony.”
After I had sent these memoranda, I met with Keck Moyer and Lee Gregg, who were exercising active leadership in the Psychology Department, and reassured them that there would be ample room in the department for first-rate faculty in areas other than my brand of cognitive psychology. Believing that our goals were not in conflict, they agreed to major changes in the department.
Haller had already decided to resign,* and we persuaded Bert Green of M.I.T., who was already involved with artificial intelligence, to head the department. During Bert’s five years at Carnegie, he and Al Newell secured a research grant from the National Institute of Mental Health which provided the major support for our cognitive science research over the twenty years during which it was renewed. It was a broad grant that enabled us gradually to build a cohesive group of information-processing psychologists in the department. But even with generous funding the path was not smooth, because it was not at first easy to recruit young psychologists willing and technically competent to take this new route.
*Haller stayed on as a member of the department, and I was very pleased that we could become friends again not too long after these events. Perhaps his recognition that I was riding the zeitgeist took the personal edge off our encounter. After he retired from Carnegie Mellon, at around age seventy, he went on to have a very productive career for a decade at Virginia Polytechnic Institute, helping to develop their programs in industrial psychology and working with other institutions in Virginia as well.
In 1965, the department initiated an annual spring symposium in cognition, which continues to the present day. The symposium brought many distinguished visitors to the department, where they could see what was going on and interact with our local talent. The published proceedings also gave growing visibility to our research program, about half the papers being authored by our faculty.
Nevertheless, progress was agonizingly slow as long as our little island was still surrounded by a great national sea of almost pure behaviorism, nearly the same problem that orthodox economics had posed for the behavioral theory of the firm in GSIA. But in this case, the historical trend was on our side and we gradually won out.
Progress was also slowed somewhat by the Student Troubles, of which I will give an account in chapter 18. For several years they required much faculty attention, and gave aid and comfort to competing views about the proper role of psychology in the university. The situation was only fully stabilized about 1973, when Lee Gregg took over the chairmanship of the Psychology Department, which he held until his premature death in 1980.
Computer Science
Establishing a computer science program at Carnegie was much easier than introducing cognitive psychology, because we were simply filling a vacuum rather than pushing against entrenched ideas. Soon after 1956, when the IBM 650 and Alan Perlis arrived on campus, faculty and students in four departments (GSIA, electrical engineering, mathematics, and psychology) began to take a strong interest in computing. About 1961, a steering committee was set up with representatives of these departments, under the rubric of Systems and Communications Sciences (S&CS).
Various members of the S&CS committee were offering, in their respective departments, courses that we would now regard as computer science courses, and because we had worked hard to maintain the permeability of departmental and college boundaries at Carnegie, students from many departments took these courses. The S&CS committee next decided to construct and administer a comprehensive exam at the doctoral level in computer sciences (in S&CS). Any department that wished could incorporate this exam as part of its examinations for the doctorate, and all four departments represented on the committee did so.
Soon, we were awarding degrees that were essentially computer science doctorates in the four departments. The university’s Committee on Graduate Studies learned of this several years after the fact, but by then it was too late to do anything but give it a blessing. In that way, we became one of the first universities in the country, indeed in the world, to train students in computer science at the doctoral level.
By 1965, the desire was widespread to take the next stepto establish a separate Computer Science Department. It was created that year, with Alan Perlis as its first, and extremely effective, chairman. From the beginning, Carnegie Mellon, M.I.T., and Stanford were regarded as having the three leading computer science programs in the nation, a rank we continue to hold.
The Computer Science Department kept close ties with the departments that had formed the S&CS committee, and there have always been joint appointments of faculty among them. At present, four faculty members hold joint appointments in psychology and computer science. Computer science remained in the College of Science until 1987, when it became a separate college.
Engineering Design
One cannot inhabit engineering schools for several decades without acquiring views about engineering education. I formed such views very early during my tenure at Illinois Tech, but probably mainly inherited them from my father. I was even moderately active in the Society for the Promotion of Engineering Education (now the American Society for Engineering Education). My initial views were that engineering education needed less vocationalism and more science.
With my experience in GSIA and a wider view of the world, I began to see things a little differently, and began to see, too, the similarities in education for various professions, especially engineering, business, and medicine. Our goal in GSIA was to balance a professional with a scientific orientation.
As I began to understand the trends in the stronger engineering schools, I saw that the same thing was happening to them as had happened to the New Model business education: science was replacing professional skills in the curriculum. I looked a little further, and saw the same thing going on in medicine. More and more, business schools were becoming schools of operations research, engineering schools were becoming schools of applied physics and math, and medical schools were becoming schools of biochemistry and molecular biology. Professional skills were disappearing from the curricula, and professionals possessing those skills were disappearing from the faculties.
The distinction between the scientific and the professional is largely a distinction between analysis and synthesis. Professionals not only analyze (understand) situations, they act on them after finding appropriate strategies (synthesis). In business, they design products and marketing channels, organize manufacturing processes, and find new financial instruments; in engineering, they design structures and devices and processes; in medicine, they design and prescribe treatments and perform operations. But analysis had driven synthesis from all these curricula.
This had happened for a good reason. Analysis is at the heart of science; it is rigorous; it can be taught. Synthesis processes are much less systematic; they are generally thought to be judgmental and intuitive, taught as “studio” subjects, at the drawing board or in clinical rounds or through unstructured business cases. They did not fit the general norms of what is properly considered academic. As a result, they were gradually squeezed out of professional schools to enhance respectability in the eyes of academic colleagues.
The discovery of artificial intelligence changed this situation radically. Artificial intelligence programs generally carry out design, or synthesis. Programs were designing electrical motors, generators, and transformers as early as 1956 and, by 1961, selecting investment portfolios. Such computer programs destroyed the mystery of intuition and synthesis, for their processes were completely open to examination. We could now understand, in whatever rigorous detail pleased us, just what a design process was. Understanding it, we could teach it, at the same level of rigor that we taught analysis.
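To make that claim concrete, here is a minimal generate-and-test sketch, my own toy illustration in Python with invented numbers, not one of the historical 1950s programs: it “designs” a transformer winding by searching a space of candidate parameters against a specification, with every step of the synthesis open to inspection.

# Toy design-as-search: choose winding turns and wire gauge for a
# transformer so that turns ratio and current capacity meet a spec.
# All numbers are hypothetical; the point is that "synthesis" here is
# an explicit, inspectable search, not ineffable intuition.
SPEC = {"ratio": 10.0, "max_current_a": 2.0}   # desired turns ratio; load current
WIRE_AMPACITY = {18: 2.3, 20: 1.5, 22: 0.92}   # AWG gauge -> rough ampacity (amps)

def penalty(primary, secondary, gauge):
    """Score a candidate design; 0 means the spec is met exactly."""
    ratio_error = abs(primary / secondary - SPEC["ratio"])
    feasible = WIRE_AMPACITY[gauge] >= SPEC["max_current_a"]
    return ratio_error + (0.0 if feasible else 100.0)  # heavy penalty if infeasible

# Generate candidates, test each, keep the best: the designer's whole
# "intuition" is laid out in these few lines.
candidates = ((p, s, g) for p in range(50, 501, 50)
                        for s in range(5, 51, 5)
                        for g in WIRE_AMPACITY)
best = min(candidates, key=lambda d: penalty(*d))
print("chosen design (primary turns, secondary turns, AWG):", best)

Real design programs add heuristics to prune such searches, but the anatomy is the same: a generator of alternatives, a test against requirements, and a rule for selecting among the survivors.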
As I gradually came to understand both the dilemma of the professional schools and the solution being offered by A.I., I began to urge that Carnegie Tech restore design and designers (or theorists of design) to its Engineering College. In the early 1960s the message fell on deaf ears. The scientists then in the Engineering College neither understood engineering nor believed it could be taught. They educated engineers by giving them a lot of physics and math, hoping that their students would later be able to design safe bridges or airplanes.
In 1968, I was invited to give the prestigious Karl Taylor Compton Lectures at M.I.T. I titled my lectures “The Sciences of the Artificial,” and devoted one of them to the science of design, setting forth the view I have just sketched and filling it out with a prescription (a design!) for a curriculum in design. The curriculum was motivated by my description, in the preceding lecture, of what our research had taught us about human thought processes, including design processes. There was no immediate seismic response to the lectures, but, in their published form, they began to attract more and more notice, in this country and abroad.
Gradually, Carnegie was able to recruit to the engineering departments a few faculty members who shared this view of design. Gary Powers and Steve Director were among the first. They came together in a Design Research Center, whose activities have burgeoned into a large network of research studies on synthesis processes of many kinds.
The research, in turn, is beginning to reflect back on curriculum, so that Carnegie Mellon is today a recognized leader in restoring professional skills (design skills) to engineering education. Of course, we are not bringing back the drawing board. We are teaching not just an art of design but a science of design. The main vehicle is the study of expert systems and other artificial intelligence systems that do design, thereby revealing its anatomy and physiology.
These developments have afforded me great satisfaction, particularly because, aside from providing the initial propaganda for them, I have not had to be very actively involved in bringing them about. They are now firmly rooted in the soil of the Engineering College and are proceeding under their own momentum. If one must be a reformer, that’s the best kind of reform.
New Presidents for Old
John Christian (Jake) Warner, who had assumed the presidency of Carnegie Tech just after my arrival there, retired from office in 1965. There was little faculty participation in the choice of his successor, and I recall only one meeting on the subject at which I was present. The new president was Guyford Stever, who left in 1972 to become director of the National Science Foundation.
In the 1972 presidential search, coming just a few years after the Student Troubles and the concessions to faculty and student democracy that they had brought about, there was much more active faculty participation, and even the students were brought into the process to a limited extent.
A few months before the 1972 search began, I had been invited by Stanford University to join its Board of Trustees. At that time many boards were co-opting an academic member or two from other campuses. I was both flattered and tempted by the invitation, but finally decided that if I was going to spend substantial time thinking about issues of educational policy in universities, I would prefer doing it for Carnegie, where I might have some influence, rather than for one of our leading competitors.
I was an obvious possibility for the Carnegie presidency. When I told the chairman of the trustee search committee, after a week or so of deliberation, that I would not be a candidate, he invited me to join his committee, which I did. I also told him I had declined a board membership at another university, but that I would not reject an invitation to join the Carnegie Board. After he had consulted his board colleagues, his response was positive, but it was agreed that nothing would be done until a new president had been selected. The announcement of my appointment to the search committee specified that I was serving as an individual, not as a faculty representative.
Dick Cyert was the other obvious inside candidate for the presidency, and by his skillful and assiduous campaigning, he soon gained rather solid faculty support. Although some of the science faculty thought that the president should be a natural scientist, Dick was able to allay their worries. The trustees’ committee was also inclined to look for a scientist or an engineer from outside, but in the end, the faculty committee won over the trustees to their preference for Dick.
I was not soon persuaded to support Dick, because, as I have already said, I was unhappy with the way GSIA was going, and blamed at least part of the problem on Dick’s policies. Finally, I concurred with the others; there were no spectacularly good alternatives. It was no secret to Dick that I had been almost the last to climb on his bandwagon, but he harbored no visible resentment. But I thought it would be unfair for me to accept the board membership that had been promised if he were opposed to, or even mildly uncomfortable with, the idea. On the contrary, he responded positively. The invitation was extended, and I accepted.
It was of course anomalous for me to be simultaneously a tenured faculty member of the university and a member of its Board of Trustees, and not as a representative of the faculty. If it made anyone uncomfortable, I never knew about it. I have always tried to remember, at any given moment, which hat I was wearing, and not to wear them both at once. For a number of years I avoided committees of the board that dealt with internal academic affairs, devoting most of my effort to the Finance Committee and its subcommittee on investments. During these years, the university completely changed its way of handling its endowment, gradually entrusting it to a small set of money managers. An enormous amount of time went into fashioning the new arrangements and selecting the managers. Later, I served also on the Audit Committee.
My membership on the board was useful in two other ways. First, it enabled me, from time to time, to interpret the university to fellow trustees. Few members of the Carnegie Board were very close to the university, and the knowledge that many of them had of it was based largely on memories of undergraduate student days (at Carnegie or elsewhere). On appropriate occasions, I could remind them of other important aspects of the university’s operations. I could even remind them that one-third to one-half of the university’s revenues were raised by the entrepreneurial activities of the faculty, far more than was coming in from gifts and endowment income. They needed to have a realistic understanding of what a research university is like, for that was more and more what Carnegie was becoming.
Second, my board membership enabled me to maintain an open relation with Dick Cyert. We fell into the custom of meeting periodically at breakfast, our conversations roaming over the whole range of university affairs. This relation was tenable only if I could avoid strong advocacy of my own hobbies and the university activities with which I was most closely associated. I did not want to become an influence broker. As long as I looked at the whole of Carnegie first, and its parts second, I could be useful. I think I have usually been able to do this, but I would be surprised if there have not been some lapses.
All of this was possible because Dick Cyert was a strong president. No one could imagine that he was an easy target of persuasion, nor did he and I always agree on policies. So we remained close friends over the eighteen years of his presidency. And there have been no doubts on campus about who was in charge.
Since a major reason I did not seek the presidency myself was to reserve time for my research, I involved myself only selectively in university administrative affairs, which have accounted for only a small part of my work week. The university went along very well under Dick’s direction without my intervention. Dick did not feel obliged to keep me especially informedany more than any other trusteeabout what was going on, except on matters he wanted to discuss with me.
On the other hand, I am not so naïve as to suggest that my faculty colleagues, much less the deans, were unaware of my dual role. I am sure that I was often treated more tenderly than I otherwise would have been, that I was kept better informed, that my views and agreement on proposed policies were sought. Sometimes I was used as a channel to bring problems to Dick’s attention. I recognized that I had more clout than I would have had without board membership, but I tried to use it responsibly and without adding more confusion to the organization structure than was already there. (Carnegie has never had a neat organization chart, and Dick has never restricted his contacts to “channels.”)
Finally, it is impossible to unconfound the influence I enjoyed by virtue of being a trustee from the growing influence I derived from my national and international scientific reputation. There are many ways in which sacred cows become sacred.
Why I Am Not a College President
In 1961 I was the obvious successor to Lee Bach in the deanship of GSIA, and from there the presidency would not have been a very long step. I made two specific career choices in favor of research over administration. These were among the half dozen most important occasions in my life when I had to opt for the left or the right path of the branching maze. I declined to be considered for the deanship of GSIA when Lee resigned (I would almost certainly have been appointed); and I declined to be considered for the university presidency when Guy Stever resigned eleven years later.
Although I now imagine that I always wanted a research career, the documentary evidence does not bear this memory out. While I was a high school student, I thought I had a serious interest in the law (fed by Uncle Harold and my debating experience?). I took a vocational interest test, and when I scored nearly off the scale on introversion, the counselor allowed as how law might not be my vocation. Whether the test judged me correctly is an interesting question. I do find it difficult to take the initiative in cultivating other people.
The vocational interest test by no means decided matters. As late as 1942, I weighed the prospects of a career in the civil service, and even had thoughts of a political career. The latter I ruled out because (1) I was not a veteran, and (2) I was Jewish. Even though, as I explained earlier, I sometimes enjoy an underdog role, those two strikes against me in politics were too much. No attractive civil service opportunity presented itself before I was caught up in my academic career.
Why did I, after many years of administrative responsibility (directing the measurement research project at Berkeley, chairing departments at Illinois Tech and Carnegie Tech, and serving as associate dean of GSIA), decide not to stand for the deanship? A year as acting dean, while not quite the same thing, convinced me that it was not what I wanted. It was too disciplined a life; there would be no opportunity to pursue intriguing ideas that presented themselves; and I felt a distaste for getting my satisfactions mainly from stimulating the contributions of others, for needing to cultivate people to get their cooperation or their money, and especially for having to initiate such contacts. Perhaps the vocational interest test had been right. After weighing the possibility seriously, I decided I did not want to be a candidate.

The decision on the presidency was easier, both because success was less certain and because I could simply re-evoke the feelings of the previous decision. Success was less certain because Dick Cyert then had ten years of deanly experience that I had rejected, and because my sharp tongue and fierce infighting in behalf of cognitive psychology and A.I. had made me less than beloved by parts of the faculty.
However that may be, I did not seriously consider taking on the contest. I equivocated only long enough to assure myself a strong position in the university’s decision processes. I have never regretted the decision, especially in view of Dick’s stellar performance on the job, a performance made possible by a “deviousness” that our colleague Leland Hazard admiringly attributed to him, and that I surely did not possess.
Perhaps I had cast the die even earlier. When I was first listed in Who’s Who, at about the time I came to Pittsburgh, I made sure that my political affiliation (Democratic) and my religion (Unitarian) were placed in the public record. In moving from public administration to business administration, I did not want to be tempted to compromise my liberalism. That overt inflexibility is perhaps not wholly conducive to success in an administrative position that must mediate among half a dozen constituencies, including conservative business ones.
In fact, the close association with the business community that is essential for effective performance as president of a university such as Carnegie Mellon would have been uncomfortable for me. I find it nearly as easy to associate with businesspeople as with academicians, although, since I am not good at small talk and lack an interest in golf or sports, conversation sometimes languishes. But when the talk turns to current affairs and politics, I cannot conceal my liberal views, different from the views held by most businesspeople.
Perhaps the most serious problem I have in hobnobbing with the rich is that, however attractive their other qualities, intellectual and personal, they are (in my experience) nearly uniformly humorless about money. They believe that it is very important, and they usually behave (I don’t know what goes on inside) as if they possessed it by right and not by the grace of God or fortune. Somehow, Dorothea and I lack a proper respect for wealth, even our own. We both think the income tax is too low.
I once phoned a very wealthy man, with whom I was on warm first-name terms, to ask him to donate a company product worth $400 to scientists in a Third World country. Without a moment’s hesitation he replied, “I’ll split it with you.” I regard this man, whom I like very much, as intelligent, interesting, and possessed of enlightened social views, not as liberal as mine, but far from reactionary. What struck me about his response was its automaticity, a knee-jerk reaction. Even trifling amounts of money were not to be disbursed casually.
In view of my attitudes, should I be embarrassed that GSIA, the school that housed my most important research and educational contributions, was founded by William Larimer Mellon, who built the Gulf Oil Company; that it resides in a university created by the multimillionaire Andrew Carnegie; and that the chair I have held for a quarter of a century was endowed by the very wealthy banker Richard King Mellon? Not at all. Giving money away is often the best thing you can do with it, and I do not object to being its beneficiary for good causes.
Liberal-professional Education
My adventures in teaching at Illinois Tech illustrate my experimental attitude toward instruction (see chapter 7). I have never confused teaching with delivering orderly lectures (to be assembled into a textbook) “covering” the subject matter. Nor did that confusion exist in the minds of the teachers that President Robert Doherty, Jake Warner’s predecessor, had assembled at Carnegie Tech beginning in 1936. Carnegie had pioneered in two important movements in engineering education: providing a substantial liberal arts component within the engineering curriculum, and shifting emphasis from teaching subject matter to teaching problem-solving skills.
Carnegie Tech was one of the engineering schools that led the way toward allocating about one-quarter of the undergraduate curriculum to nonengineering, nonscience subjects. It also undertook some pioneering steps to make that quarter of the curriculum something more than a hodgepodge of electives.
Carnegie Tech, under President Doherty, also introduced the Carnegie Plan, a statement of its fundamental educational objectives, implemented by courses specifically designed with those objectives in view. A succinct statement of the goals of the Carnegie Plan can be found in Doherty’s paper “Education for Professional Responsibility,” from which I quote:
Three changes in professional education are needed. First, a new philosophy and new outlook which will comprehend the human and social as well as the technical. Second, the development in all professional men of genuine competence in the professional way of thought, a way of thought which embodies an analytical and creative power that is as effective in the human and social realm as that developed in engineering. . . .
Third, the development of the ability to learn from experience so that in the unfolding future they can continue to expand their fundamental knowledge, deepen their understanding, and improve their power as professional men and women and as leading citizens. [Doherty 1948, pp. 76-77]
This can be read as pious sentiment. What made it more was Doherty’s rethinking of the curriculum and teaching methods to subordinate subject-matter coverage to instruction in problem-solving skills. In these efforts, he was aided by his provost, Elliott Dunlap Smith; by Dick Teare, who became engineering dean; and by many others.
Doherty retired in 1950, Smith in 1958, and their influence on the university has gradually been diluted, but not wholly forgotten by those of us who were young faculty members during those years: Erwin Steinberg in English, Ted Fenton in History, and Dick Cyert among them. For that reason, faculty sophistication about educational philosophy and practice remains higher at Carnegie Mellon than at most other universities with which I am acquainted.
When I came to Carnegie, the school did not offer degrees in the social sciences and humanities (with the partial exception of Margaret Morrison Carnegie College for Women). History, English, languages, and psychology were “service” departments, and their faculty members, somewhat second-class citizens.
During the Stever administration (1965-72), Margaret Morrison College was combined with the College of Humanities and Social Science, we renamed ourselves a university (upon merging with the Mellon Institute), and we began offering undergraduate degrees in several liberal arts subjects.
I had mixed feelings about these changes, because I had mixed (read negative) feelings about the viability of contemporary liberal arts education, especially in the humanities. Particularly, I was not impressed by the demeaning attitude of these fields toward “vocationalism.” They sometimes seemed to propose uselessness as an essential criterion for proper liberal studies, all the explicit emphasis being on knowledge, not skill.
Of course, practice is another matter. If there is any place in a university where skill is the name of the game, the language departments could perhaps claim it. But language teachers prefer to imagine that learning to read, write, understand, and speak is just an unfortunately necessary preliminary to immersion in literature, history, and culture. In actual fact they spend almost all their time teaching these preliminaries, but that is just one of life’s misfortunes. (English teachers generally have the same problem rationalizing their preoccupation with grammar and spelling.)
As far as Carnegie Mellon was concerned, I thought that the basic philosophy of the Carnegie Plan might be transported into the new social science and humanities curricula under the banner “liberal-professional education,” buttressed by some thoughts about what that phrase might mean in practice. We could have no comparative advantage in the liberal subjects in competition with the Ivy League schools unless we offered something different and arguably better than they did. If we had no comparative advantage, we would not achieve quality; and if we did not achieve quality, we should not be in the business.
But to extend the application of the Carnegie Plan to the liberal arts, people had to be convinced that there was no conflict between “liberal,” properly interpreted, and “professional,” properly interpreted.
Liberally educated people are skilled people; and the skills of well-educated professional people are infused with liberal values and knowledge. Those of us who were infected with the Carnegie Plan have had considerable, if far from complete, success in promoting the idea of liberal-professional education on our campus, especially in the College of Humanities and Social Science.
The whole story is complex and has not yet reached its denouement. Its essence can be conveyed in the following passages from a talk I gave to the faculty in 1977, which created quite a stir. At that time, Dean Pat Crecine, of the College of Humanities and Social Science, and his Associate Dean, Lee Gregg, were just instituting a core curriculum in the College, with less than 100 percent support of the faculty, although many had been won over. On April 5, I gave a well-attended lecture on the subject of liberal education, aimed at provoking serious discussion of important educational issues on the campus. I think I succeeded. I began with a definition of liberal education that challenged its frequent disparagement of skill and the tension between “liberal” and “professional”:
There is remarkable agreement that liberal education, both etymologically and in every other way, means education for a free man, a free person. Disagreement only begins when you ask what kind of education a person needs in order to become and remain free.
. . . Its charter describes Yale, for example, as a school “wherein youth may be instructed in the arts and sciences who through the blessing of Almighty God may be fitted for public employment both in Church and civil state.” . . . [From] classical times, to prepare someone to be a free person was to prepare him or her to take a station in society. If that station involved performing as a citizen then it was preparation for citizenship. If that station involved productive work then the free person’s education included training for appropriate employment. [Simon 1977a, pg. 1]
Next I presented an illustrative examination (ten questions) as an operational definition of liberal education, and provided alternative answers to one of the questions. “Notice,” I said, “that we’re not simply examining knowledge: whether you have read your Homer and your Virgil. We’re examining skills. To answer the question you have to apply the skills of poetry writing, or the skills of computer programming, or some other skills.”
I suggested that we must draw on the social sciences to design a proper education: first, to understand the motivations of our students; second, to “capture a large part of the out-of-class time for the educational process, perhaps even insinuate in the dinner conversation some things that are relevant to becoming an educated person.” To do this, the school must appeal differently to students whose motivations are intellectual, to those whose motives are professional, to those whose motives are social, to those who are conventional, and to those “not elsewhere classified.” “The third thing we need from the social sciences . . . is an empirically based theory of knowledge and skills and their mutual relation.”
How can we motivate our students to acquire a liberal education? I argued for a core curriculum, to provide common topics of conversation outside the classroom. Coverage was not important; it was, in fact, impossible. The core must be constructed by sampling. It must aim at developing skills as well as knowledge: problem-solving skills, skills of cross-examining experts, and skills of perceiving and appreciating.
Then came the punch line: “A faculty that is not liberally educated can’t provide liberal education. American college faculties, including our own, aren’t liberally educated. . . . How many of the licensees in your own field, if you walk up to them with a question, will say, ‘Oh, I can’t answer that; that isn’t my period’?” I concluded by proposing that the university establish a program of liberal education for the faculty, requiring every CMU faculty member to pass the comprehensive examinations on the common core of that program within the next four years. This, I said, was the way to prepare ourselves to provide liberal-professional education to our students.
As a consequence of my talk, my history colleagues invited me to join a team that was teaching the freshman core course in their subject. Of course I had to accept the invitation (and was quite delighted to do so). The course focused on the French Revolution (examples, not coverage, was the watchword), and included an exercise requiring students to test hypotheses they found in standard works on the Revolution against computerized files (in English) of the Cahiers de Doléances submitted to Versailles by the French provincial assemblies in 1789. Did the Cahiers support the claims of the books or didn’t they? I thus spent an instructive semester teaching and learning history, more learning than teaching. I have not repeated the experience, but mainly from laziness and not because it would not be fun to do it (or something similar in English literature or the French novel) again.
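In spirit, the students’ exercise was simple searching and tallying over a text corpus. A minimal modern sketch, in Python, with a purely hypothetical file layout and search pattern standing in for the actual computerized files and coding scheme:

# A sketch of the hypothesis-testing exercise, in modern form.
# The directory "cahiers/" and the taxation pattern are hypothetical.
import glob
import re

def grievance_frequency(pattern, corpus_glob="cahiers/*.txt"):
    """Count how many documents in the corpus mention a grievance pattern."""
    matching, total = 0, 0
    for path in glob.glob(corpus_glob):
        total += 1
        with open(path, encoding="utf-8") as f:
            text = f.read().lower()
        if re.search(pattern, text):
            matching += 1
    return matching, total

# Hypothesis drawn from a standard work: taxation was the dominant grievance.
hits, n = grievance_frequency(r"\btax(es|ation)?\b")
print(f"{hits} of {n} cahiers mention taxation")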
My proposed school for teachers has not yet been established at Carnegie Mellon. But I am patient, and realize that social reform cannot be accomplished in an instant.
In looking back on my talk, I find myself speculating on the origins of my educational views. The strong belief in liberal-professional education undoubtedly derives from my experiences in engineering schools, both Illinois Tech and Carnegie Tech, combined with a view that the humanities, in their traditional forms, have had an exaggerated role in liberal education. I have never reconciled myself to the view that professional education need be narrowly vocational, or that skill is a dirty word. Nor do I believe that the contemporary humanities have demonstrated a special competence to interpret the human condition.
The importance I attach to the university as a social system and to a core curriculum for enriching the educational experience undoubtedly stems from my experiences in the College at the University of Chicago, with its enlightening survey courses addressing every domain of knowledge. Since this was not yet the Chicago of Hutchins’s and Adler’s Great Books, I did not become committed to a specific canon. Such a commitment would make the idea of sampling unacceptable, hence would make a core unworkable.
And although it peeks through only a little in this talk, my view that the social sciences (and especially cognitive science) have an essential contribution to make to university education stems directly from my research in recent years on learning and problem-solving processes. Contemporary cognitive science provides knowledge that is vital for the improvement of educational processes. It also reveals the commonality of human thought processes across the most diverse fields, giving us reason to believe that effective communication can be established and maintained among the many specialized cultures that make up professional, intellectual, and artistic society today.

Chapter 17
On Being Argumentative
My account of my life in the university, both in GSIA and afterward, makes clear that I have not avoided controversy; indeed, I have often been embroiled in it. I like to think that it was not a love of battle that was responsible for this, but that confrontation was necessary to achieve some of the goals I thought important for the university.
If controversy occurs in the life of the university, it also occurs in research. A good deal of my research has been directed at unseating established positions, first in public administration, next in economics, and then in psychology. In one sense, that is to be expected. Research is supposed to produce something new, and new is different from old. Nevertheless, much research, including very important discoveries, some of them revolutionary in the Kuhnian sense, does not contradict the old, but builds upon it. Even if it ultimately undermines the old order, its subversive consequences are not always obvious at the outset.
I have usually announced my revolutionary intent. The published “essential portion” of my dissertation was sedate enough: “Decision Making and Administrative Organization,” which appeared in 1944 in the Public Administration Review. But it was followed in the same journal in 1946 by “The Proverbs of Administration,” which asserted that the basic principles of the classical theory of administration (those of Luther Gulick and L. Urwick, 1937) were not principles at all, but proverbs, full of wisdom but always occurring in mutually contradictory pairs.
The message was mostly critical; I showed that classical theory was in bad shape but proposed only very general remedies: “We need more research to establish when and under what circumstances which proverb is valid.” It was only in the mid-1950s that I had an opportunity to begin validating empirically the alternative theory I sketched in Administrative Behavior.
The “Proverbs” article got plenty of attention, not all of it favorable. Urwick never quite forgave me this attack on his life work, but Gulick was quite friendly in later years. Presumably he made allowance for the hubris of a young man. Hubris, arrogance, or whatever, that article secured my instant and permanent visibility in public administration. It is still frequently cited.
In chapter 4, I described the behavioral movement in political science, spearheaded by Charles Merriam’s department at Chicago, and the volume edited by Herbert Storing, Essays on the Scientific Study of Politics, which devoted individual chapters to flaying the chief behaviorists, including me.
Replying to this attack would have itself required a book almost as long as Administrative Behavior, one that I was never tempted to write. It seemed to me that Administrative Behavior was its own best defense, and my judgment seems to have been vindicated, as I do not see that its reputation has suffered over the years.
It is true that I am still accused of “positivism” as though that were some kind of felony, or at least a venial sin; and there still seems to be widespread lack of understanding of why one cannot logically deduce an “ought” without including at least one “ought” among the premises. But I think these difficulties have little or no connection with the Storing book. They arise from the general tendency today to use positivist as a pejorative term without any clear notion of what positivists believe.
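The logical point can be set out schematically (the particular premises here are only an illustration):

$$\frac{\text{Smoking causes cancer (an ``is'')} \qquad \text{One ought to avoid what causes cancer (an ``ought'')}}{\text{Therefore, one ought not to smoke (an ``ought'')}}$$

Strike the normative premise and the conclusion is no longer derivable, however much descriptive knowledge is piled onto the other side.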
In economics, combat was slower in getting under way. My initial forays into economics consisted of a few articles (on tax incidence and technological change) that kept well within the neoclassical framework, and then some papers suggesting the need to recognize the limits on rationality in order to create a more lifelike picture of the business firm. In none of these papers did I challenge the foundations of economic theory strongly, or its macroeconomic applications, although the raw materials for such a challenge were certainly provided in Administrative Behavior and my subsequent pieces on the firm.
The first hostilities took the form of counterattacks from the opponents of bounded rationality: from Edward S. Mason (1952) and Fritz Machlup (1946), the former claiming that my revisions of the theory of the firm were not very relevant to economic theory, the latter that people, whatever the appearances, really maximized. But the blame for war is not easily assigned, here or elsewhere. Certainly my colleagues in economics at Carnegie soon knew of my skepticism. Franco Modigliani, while remaining a close friend during his Pittsburgh years and ever since, never mistook me for an ally in matters of economic theory. And Jack Muth, in his announcement of rational expectations in 1961, explicitly labeled his theory a reply to my doctrine of bounded rationality. Lunchtime debate with my colleagues and disputes about personnel decisions in the economics faculty undoubtedly contributed to the gradual escalation of my conflict with the profession. By the time I returned to a concern with economics in the 1970s, the war was open and declared.
Here again the task was to go beyond skepticism about the foundations of neoclassical economics and to provide an alternative. You can’t beat something with nothing. Administrative Behavior was a beginning, which I followed up in the early 1950s with papers on organizational equilibrium, the theory of the employment relation, and a behavioral model of rational choice. All three papers met economic theory on its own ground, and all three were published in mainstream economics journals.
Mason’s objection, echoed by others in the profession, that the revision was simply irrelevant to the main concern of economics with industries and the economy as a whole, induced me to broaden the attack to challenge the evidence supporting neoclassical macroeconomics. This shift is visible in my economics papers written after the mid-1970s. Here I developed the distinction between procedural and substantive theories of rationality; insisted on the need for a computational theory of economic decision making; challenged the evidence usually cited for believing that demand and supply are equilibrated at the margin in real markets; and showed that auxiliary assumptions, quite independent of the central assumptions of optimization, accounted for the ability of economists to explain real-world phenomena. These papers are overtly combative. Perhaps my stridency will abate as economics embraces more of the heresy (as it appears now to be doing).
My work in cognitive psychology with Al Newell and Cliff Shaw, starting with the Logic Theorist in 1956, accented the positive. We had specific, concrete proposals for both methodology (computer simulation and thinking-aloud protocols) and substance (problem solving using heuristic search conducted by a physical symbol system). Of course, these proposals flew in the face of the predominant behaviorism, especially in its strong Skinnerian form; but we set forth our theory and the evidence for it without much overt challenge to the prevailing religion. In fact, in our Psychological Review paper (Newell, Shaw, and Simon 1958a), we claimed explicitly to be natural descendants of both the behaviorists and the Gestaltists, and to provide a reconciliation of these contending schools.
Disagreement does not have to be announced in order to be perceived. Psychologists, as soon as they stopped ignoring us (and we were not ignored for very long), recognized the revolutionary import of what we were saying. But the dispute in psychology proceeded very differently from those in public administration and economics, due, no doubt, to psychology’s stronger empirical orientation. To be sure, the field has its share of philosophical pronunciamentos and methodological discourses, but these have to take place within bounds defined by the steady accumulation of empirical findings. However it turns out in the long run (and the empirical evidence will decide that), the revolution achieved a great success in the 1970s.
Neither Allen Newell nor I have devoted much time or energy to explicit replies to the critics, whether behaviorist, Gestalt, or phenomenological. We have adopted the policy (was it the anarchist Bakunin’s? or Sorel’s?) of “propaganda of the deed, not propaganda of the word.” The best rhetoric comes from building and testing models and running experiments. Let philosophers weave webs of words; such webs break easily.
A number of the most important psychological models that I have constructed were conceived while I thought through the implications of critical attacks on the information-processing paradigm. For example, when Ulric Neisser asserted, in a paper he published in 1963, the impossibility of computers responding to emotions or entertaining multiple goals, I took the challenge seriously and was able to produce the model of motivational and emotional controls of cognition that is contained in my 1967 paper bearing that title.
Similarly, the work that Barenfeld and I did on chess perception, published in 1969, was aimed at refuting the claims of Tichomirov and Poznyanskaya (1965) that a computer scanning a chess board was incapable of grasping chess relations as Gestalts, as human masters could. And a major motivation for my continuing interest in models of scientific discovery has been to show that, contrary to the claims of phenomenologists, programs could be designed to discover laws and invent new concepts. Constructive proofs like these were far more effective in answering criticism than rhetoric, however eloquent, would have been.
Mention of computer simulation and philosophers raises the most contentious issue of all: whether one should speak of computers as thinking. Cognitive psychologists, even those sympathetic to simulation, often avoid the issue by talking of “the computer metaphor.” No one would deny metaphorical thoughts to computers. But the question, “Can computers really (not just metaphorically) think?” raises the warmest passions in philosophers, and sometimes in the common person in the street as well.
There is a knock-down argument that is supposed to settle the question instantly. It goes like this: Computers are machines; machines cannot think; hence computers cannot think. It is not regarded as an adequate riposte to say: Human beings are also (biological) machines; therefore, if machines cannot think, human beings cannot think.
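Set out in elementary logic (a sketch; the predicate letters are my own choosing), the argument and the riposte share a single valid form, so the whole dispute is over the premises:

$$\forall x\,[C(x)\rightarrow M(x)],\qquad \forall x\,[M(x)\rightarrow \lnot T(x)]\;\;\therefore\;\;\forall x\,[C(x)\rightarrow \lnot T(x)]$$

Read $C$ as “is a computer,” $M$ as “is a machine,” and $T$ as “can think”; substitute “is a human being” for $C$ and the identical inference yields the riposte. What is contested is never the validity of the inference, but the premise that machines cannot think and, for the riposte, the premise that human beings are machines.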
Argumentation, unlike formal logic, appeals to premises that are not always made explicit but are presumed to be already stored in the memory of the reader; it evokes these beliefs and draws inferences from them. That is the structure of the argument I have just given. Most readers already “know” that machines cannot think and that computers are machines. Those premises don’t require evidence.
In fact, most readers will accept the statement, “Computers can’t think,” without any explicit argument at all, because the requisite premises will be evoked from memory as soon as the conclusion is stated. On the other hand, the riposte fails because many readers do not believe that human beings are machines; or, if they believe it, do so with substantial qualifications and restrictions.
In a debate between two protagonists, that party has a decided advantage whose case rests on beliefs solidly established in the minds of most readers. In arguing that machines think, we are in the same fix as Darwin when he argued that man shares common ancestors with monkeys, or Galileo when he argued that the Earth spins on its axis. Wearied by the prospect of having to establish our nonintuitive premises as well as our conclusions, Al Newell and I have usually let a steadily accumulating collection of programs (our own and others’) demonstrate the thinking of computers. The computers, speaking for themselves (figuratively and literally), will in time convince all but the most hardened fundamentalists that they think.
In amplification of these views, I reproduce here a letter I wrote to my daughter Barbara well over a decade ago.
May 21, 1977
Dear Bar:
You ask about Weizenbaum and all that. It’s a long and complicated story, on at least three different levels. First, there is the whole set of questions about what artificial intelligence has achieved and will achieve by way of imitating human thought and other human processes. In principle at least, that is an empirical question that ought to be answered dispassionately after looking at the facts.
Second, there are the questions about what the accomplishments of artificial intelligence, however great or meager, mean for human society and people generally. Those are also empirical questions, but their answer depends partly on the answers to the first set of questions.

Third, there are the questions about how people feel about intelligent machines and their relations to people. These are questions of emotion and value that are not susceptible to demonstration and proof.*
Now it would be nice if we could settle the first and second sets of questions, the factual ones, more or less independently of the third, the emotional ones. But it has not worked out that way. From the very beginning of A.I., back in the 1950s, it has generated in the hearts of some people strong fears, anxiety, and even anger. In this respect, A.I. has had much the same kind of effect as Darwin’s announcement of the Theory of Evolution. Both aroused in some people anxieties about their own uniqueness, value, and worth. Thus, very early on, an engineer named Mortimer Taube (I think) wrote an angry letter to Science about the first GPS piece that Al and I published there, and an even angrier book. He was followed by Richard Bellman . . . Bellman was followed by the humanistic philosopher Hubert Dreyfus (a brother of Bellman’s research associate), and Dreyfus by Weizenbaum. There have, of course, been many others, but those are the ones who have gotten most attention.
In general, I have not answered these attacks. You don’t get very far arguing with a man about his religion, and these are essentially religious issues to the Dreyfuses and Weizenbaums of the world. Weizenbaum’s book, for example, vacillates among the positions that (1) the claims for A.I. are much overstated, (2) there is a danger that these claims will be realized, and (3) it is immoral for people to try to realize them (he sometimes uses the word “obscene,” and compares such people with the Nazis). Now I understand some of the reasons why Joe is upset (he was himself a refugee from the Nazis), but not why he has fixated on computers as the objects of all of his anxieties. In any event, I see no point in arguing with him.
