Amos liked to say that if you are asked to do anything—go to a party, give a speech, lift a finger—you should never answer right away, even if you are sure that you want to do it. Wait a day, Amos said, and you’ll be amazed how many of those invitations
you would have accepted yesterday you’ll refuse after you have had a day to think it over. A corollary to his rule for dealing with demands upon his time was his approach to situations from which he wished to extract himself. A human being who finds himself
stuck at some boring meeting or cocktail party often finds it difficult to invent an excuse to flee. Amos’s rule, whenever he wanted to leave any gathering, was to just get up and leave. Just start walking and you’ll be surprised how creative you will become
and how fast you’ll find the words for your excuse, he said. His attitude to the clutter of daily life was of a piece with his strategy for dealing with social demands. Unless you are kicking yourself once a month for throwing something away, you are not throwing
enough away, he said. Everything that didn’t seem to Amos obviously important he chucked, and thus what he saved acquired the interest of objects that have survived a pitiless culling. One unlikely survivor is a single scrap of paper with a few badly typed
words on it, drawn from conversations he had with Danny in the spring of 1972 as they neared the end of their time in Eugene. For some reason Amos saved it:

People predict by making up stories
People predict very little and explain everything
People live under uncertainty whether they like it or not
People believe they can tell the future if they work hard enough
People accept any explanation as long as it fits the facts
The handwriting was on the wall, it was just the ink that was invisible
People often work hard to obtain information they already have
And avoid new knowledge
Man is a deterministic device thrown into a probabilistic Universe
In this match, surprises are expected
Everything that has already happened must have been inevitable

At first glance it resembles
a poem. What it was, in fact, was early fodder for his and Danny’s next article, which would also be their first attempt to present their thinking in a way that might directly influence the world outside of their discipline. Before returning to Israel,
they had decided to write a paper about how people made predictions. The difference between a judgment and a prediction wasn’t as obvious to everyone as it was to Amos and Danny. To their way of thinking, a judgment (“he looks like a good Israeli army officer”)
implies a prediction (“he will make a good Israeli army officer”), just as a prediction implies some judgment—without a judgment, how would you predict? In their minds, there was a distinction: A prediction is a judgment that involves uncertainty. “Adolf Hitler
is an eloquent speaker” is a judgment you can’t do much about. “Adolf Hitler will become chancellor of Germany” is, at least until January 30, 1933, a prediction of an uncertain event that eventually will be proven either right or wrong. The title of their
next paper was “On the Psychology of Prediction.” “In making predictions and judgments under uncertainty,” they wrote, “people do not appear to follow the calculus of chance or the statistical theory of prediction. Instead, they rely on a limited number of
heuristics which sometimes yield reasonable judgments and sometimes lead to severe and systematic error.” Viewed in hindsight, the paper looks to have more or less started with Danny’s experience in the Israeli army. The people in charge of vetting Israeli
youth hadn’t been able to predict which of them would make good officers, and the people in charge of officer training school hadn’t been able to predict who among the group they were sent would succeed in combat, or even in the routine day-to-day business
of leading troops. Danny and Amos had once had a fun evening trying to predict the future occupations of their friends’ small children, and had surprised themselves by the ease, and the confidence, with which they had done it. Now they sought to test how people
predicted—or, rather, to dramatize how people used what they now called the representativeness heuristic to predict. To do this, however, they needed to give them something to predict. They decided to ask their subjects to predict the future of a student,
identified only by some personality traits, who would go on to graduate school. Of the then nine major courses of graduate study in the United States, which would he pursue? They began by asking their subjects to estimate the percentage of students in each
course of study. Here were their average guesses:

Business: 15 percent
Computer Science: 7 percent
Engineering: 9 percent
Humanities and Education: 20 percent
Law: 9 percent
Library Science: 3 percent
Medicine: 8 percent
Physical and Life Sciences: 12 percent
Social Science and Social Work: 17 percent

For anyone trying to predict which area of study any given person was in, those percentages should serve as a base rate. That is, if you knew nothing at all about a particular student, but knew that 15 percent of
all graduate students were pursuing degrees in business administration, and were asked to predict the likelihood that the student in question was in business school, you should guess “15 percent.” Here was a useful way of thinking about base rates: They were
what you would predict if you had no information at all. Now Danny and Amos sought to dramatize what happened when you gave people some information. But what kind of information? Danny spent a day inside the Oregon Research Institute stewing over the question—and
became so engrossed by his task that he stayed up all night creating what at the time seemed like the stereotype of a graduate student in computer science. He named him “Tom W.”

Tom W. is of high intelligence, although lacking in true creativity. He has a need for order and clarity, and for neat and tidy systems in which every detail finds its appropriate place. His writing is rather dull and mechanical, occasionally enlivened by somewhat corny puns and by flashes of imagination of the sci-fi type. He has a strong drive for competence. He seems to have little feel and little sympathy for other people and does not enjoy interacting with others. Self-centered, he nonetheless has a deep moral sense.

They would ask one group of subjects—they called it the “similarity”
group—to estimate how “similar” Tom was to the graduate students in each of the nine fields. That was simply to determine which field of study was most “representative” of Tom W. Then they would hand a second group—what they called the “prediction” group—this
additional information:

The preceding personality sketch of Tom W. was written during Tom’s senior year in high school by a psychologist, on the basis of projective tests. Tom W. is currently a graduate student. Please rank the following nine fields of graduate specialization in order of the likelihood that Tom W. is now a graduate student in each of these fields.

They would not only give their subjects the sketch but inform them that it was a far from reliable description of Tom W.: it had been written by a psychologist, for a start, and the assessment had been made years earlier. What Amos and Danny suspected—because they had tested it first on themselves—is that people would essentially leap from the similarity judgment (“that
guy sounds like a computer scientist!”) to some prediction (“that guy must be a computer scientist!”) and ignore both the base rate (only 7 percent of all graduate students were computer scientists) and the dubious reliability of the character sketch. The
first person to arrive for work on the morning Danny finished his sketch was an Oregon researcher named Robyn Dawes. Dawes was trained in statistics and legendary for the rigor of his mind. Danny handed him the sketch of Tom W. “He read it over and he had
a sly smile, as if he had figured it out,” said Danny. “And he said, ‘Computer scientist!’ After that I wasn’t worried about how the Oregon students would fare.” The Oregon students presented with the problem simply ignored all objective data and went with
their gut sense, and predicted with great certainty that Tom W. was a computer scientist. Having established that people would allow a stereotype to warp their judgment, Amos and Danny then wondered: If people are willing to make irrational predictions based
on that sort of information, what kind of predictions might they make if we give them totally irrelevant information? As they played with this idea—that they might increase people’s confidence in their predictions by giving them any information, however useless—the laughter to be heard from the other side of the closed door must have grown only more raucous. In the end, Danny created another character. This one he named “****”:

**** is a 30 year old man. He is married with no children. A man of high ability and high motivation, he promises to be quite successful in his field. He is well liked by his colleagues.

Then they ran another experiment. It was a version of the book bag and poker chips experiment that Amos and Danny had argued about in Danny’s seminar at Hebrew
University. They told their subjects that they had picked a person from a pool of 100 people, 70 of whom were engineers and 30 of whom were lawyers. Then they asked them: What is the likelihood that the selected person is a lawyer? The subjects correctly judged
it to be 30 percent. And if you told them that you were doing the same thing, but from a pool that had 70 lawyers in it and 30 engineers, they said, correctly, that there was a 70 percent chance the person you’d plucked from it was a lawyer. But if you told
them you had picked not just some nameless person but a guy named ****, and read them Danny’s description of ****—which contained no information whatsoever to help you guess what **** did for a living—they guessed there was an equal chance that **** was a
lawyer or an engineer, no matter which pool he had emerged from. “Evidently, people respond differently when given no specific evidence and when given worthless evidence,” wrote Danny and Amos. “When no specific evidence is given, the prior probabilities are
properly utilized; when worthless specific evidence is given, prior probabilities are ignored.”* There was much more to “On the Psychology of Prediction”—for instance, they showed that the very factors that caused people to become more confident in their predictions
also led those predictions to be less accurate. And in the end it returned to the problem that had interested Danny since he had first signed on to help the Israeli army rethink how it selected and trained incoming recruits:

The instructors in a flight school adopted a policy of consistent positive reinforcement recommended by psychologists. They verbally reinforced each successful execution of a flight maneuver. After some experience with this training approach, the instructors claimed that contrary to psychological doctrine, high praise for good execution of complex maneuvers typically results in a decrement of performance on the next try. What should the psychologist say in response?

The subjects to whom they posed this question offered all sorts of advice. They surmised
that the instructors’ praise didn’t work because it led the pilots to become overconfident. They suggested that the instructors didn’t know what they were talking about. No one saw what Danny saw: that the pilots would have tended to do better after an especially
poor maneuver, or worse after an especially great one, if no one had said anything at all. Man’s inability to see the power of regression to the mean leaves him blind to the nature of the world around him. We are exposed to a lifetime schedule in which we
are most often rewarded for punishing others, and punished for rewarding them. When they wrote their first papers, Danny and Amos had no particular audience in mind. Their readers would be the handful of academics who happened to subscribe to the highly specialized
psychology trade journals in which they published. By the summer of 1972, they had spent the better part of three years uncovering the ways in which people judged and predicted—but the examples that they had used to illustrate their ideas were all drawn directly
from psychology, or from the strange, artificial-seeming tests that they had given high school and college students. Yet they were certain that their insights applied anywhere in the world that people were judging probabilities and making decisions. They sensed
that they needed to find a broader audience. “The next phase of the project will be devoted primarily to the extension and application of this work to other high-level professional activities, e.g., economic planning, technological forecasting, political decision
making, medical diagnosis, and the evaluation of legal evidence,” they wrote in a research proposal. They hoped, they wrote, that the decisions made by experts in these fields could be “significantly improved by making these experts aware of their own biases,
and by the development of methods to reduce and counteract the sources of bias in judgment.” They wanted to turn the real world into a laboratory. It was no longer just students who would be their lab rats but also doctors and judges and politicians. The question
was: How to do it? They couldn’t help but sense, during their year in Eugene, a growing interest in their work. “That was the year it was really clear we were onto something,” recalled Danny. “People started treating us with respect.” Irv Biederman, then a
visiting associate professor of psychology at Stanford University, heard Danny give a talk about heuristics and biases on the Stanford campus in early 1972. “I remember I came home from the talk and told my wife, ‘This is going to win a Nobel Prize in economics,’”
recalled Biederman. “I was so absolutely convinced. This was a psychological theory about economic man. I thought, What could be better? Here is why you get all these irrationalities and errors. They come from the inner workings of the human mind.” Biederman
had been friends with Amos at the University of Michigan and was now a member of the faculty at the State University of New York at Buffalo. The Amos he knew was consumed by possibly important but probably insolvable and certainly obscure problems about measurement.
“I wouldn’t have invited Amos to Buffalo to talk about that,” he said—as no one would have understood it or cared about it. But this new work Amos was apparently doing with Danny Kahneman was breathtaking. It confirmed Biederman’s sense that “most advances
in science come not from eureka moments but from ‘hmmm, that’s funny.’” He persuaded Amos to pass through Buffalo in the summer of 1972, on his way from Oregon to Israel. Over the course of a week, Amos gave five different talks about his work with Danny,
each aimed at a different group of academics. Each time, the room was jammed—and fifteen years later, in 1987, when Biederman left Buffalo for the University of Minnesota, people were still talking about Amos’s talks. Amos devoted a talk to each of the heuristics
he and Danny had discovered, and another to prediction. But the talk that lingered in Biederman’s mind was the fifth and final one. “Historical Interpretation: Judgment Under Uncertainty,” Amos had called it. With a flick of the wrist, he showed a roomful
of professional historians just how much of human experience could be reexamined in a fresh, new way, if seen through the lens he had created with Danny.

In the course of our personal and professional lives, we often run into situations that appear puzzling at first blush. We cannot see for the life of us why Mr. X acted in a particular way, we cannot understand how the experimental results came out the way they did, etc. Typically, however, within a very short time we come up with an explanation, a hypothesis, or an interpretation of the facts that renders them understandable, coherent, or natural. The same phenomenon is observed in perception. People are very good at detecting patterns and trends even in random data. In contrast to our skill in inventing scenarios, explanations, and interpretations, our ability to assess their likelihood, or to evaluate them critically, is grossly inadequate. Once we have adopted a particular hypothesis or interpretation, we grossly exaggerate the likelihood of that hypothesis, and find it very difficult to see things any other way.

Amos was polite about it. He did not say, as he often said, “It is amazing how dull history books are, given how much of what’s in them must be invented.” What he did say was perhaps even more shocking to his
audience: Like other human beings, historians were prone to the cognitive biases that he and Danny had described. “Historical judgment,” he said, was “part of a broader class of processes involving intuitive interpretation of data.” Historical judgments were
subject to bias. As an example, Amos talked about research then being conducted by one of his graduate students at Hebrew University, Baruch Fischhoff. When Richard Nixon announced his surprising intention to visit China and Russia, Fischhoff asked people
to assign odds to a list of possible outcomes—say, that Nixon would meet Chairman Mao at least once, that the United States and the Soviet Union would create a joint space program, that a group of Soviet Jews would be arrested for attempting to speak with
Nixon, and so on. After the trip, Fischhoff went back and asked the same people to recall the odds they had assigned to each outcome. Their memories were badly distorted: they all believed that they had assigned higher probabilities to what actually happened than they had. That is, once they knew the outcome, they thought it had been far more predictable than they had found it
to be before, when they had tried to predict it. A few years after Amos described the work to his Buffalo audience, Fischhoff named the phenomenon “hindsight bias.”† In his talk to the historians, Amos described their occupational hazard: the tendency to take
whatever facts they had observed (neglecting the many facts that they did not or could not observe) and make them fit neatly into a confident-sounding story:

All too often, we find ourselves unable to predict what will happen; yet after the fact we explain what did happen with a great deal of confidence. This “ability” to explain that which we cannot predict, even in the absence of any additional information, represents an important, though subtle, flaw in our reasoning. It leads us to believe that there is a less uncertain world than there actually is, and that we are less bright than we actually might be. For if we can explain tomorrow what we cannot predict today, without any added information except the knowledge of the actual outcome, then this outcome must have been determined in advance and we should have been able to predict it. The fact that we couldn’t is taken as an indication of our limited intelligence rather than of the uncertainty that is in the world. All too often, we feel like kicking ourselves for failing to foresee that which later appears inevitable. For all we know, the handwriting might have been on the wall all along. The question is: was the ink visible?

It wasn’t just sports announcers and political pundits who radically revised their narratives,
or shifted focus, so that their stories seemed to fit whatever had just happened in a game or an election. Historians imposed false order upon random events, too, probably without even realizing what they were doing. Amos had a phrase for this. “Creeping determinism,”
he called it—and jotted in his notes one of its many costs: “He who sees the past as surprise-free is bound to have a future full of surprises.” A false view of what has happened in the past makes it harder to see what might occur in the future. The historians
in his audience of course prided themselves on their “ability” to construct, out of fragments of some past reality, explanatory narratives of events which made them seem, in retrospect, almost predictable. The only question that remained, once the historian
had explained how and why some event had occurred, was why the people in his narrative had not seen what the historian could now see. “All the historians attended Amos’s talk,” recalled Biederman, “and they left ashen-faced.” After he had heard Amos explain
how the mind arranged historical facts in ways that made past events feel a lot less uncertain, and a lot more predictable, than they actually were, Biederman felt certain that Amos and Danny’s work could infect any discipline in which experts were required
to judge the odds of an uncertain situation—which is to say, great swaths of human activity. And yet the ideas that Danny and Amos were generating were still very much confined to academia. Some professors, most of them professors of psychology, had heard
of them. And no one else. It was not at all clear how two guys working in relative obscurity at Hebrew University could spread the word of their discoveries to people outside their field. In the early months of 1973, after their return to Israel from Eugene,
Amos and Danny set to work on a long article summarizing their findings. They wanted to gather in one place the chief insights of the four papers they had already written and allow readers to decide what to make of them. “We decided to present the work for
what it was: a psychological investigation,” said Danny. “We’d leave the big implications to others.” He and Amos both agreed that the journal Science offered them the best hope of reaching people in fields outside of psychology. Their article was less written
than it was constructed. (“A sentence was a good day,” said Danny). As they were building it, they stumbled upon what they saw as a clear path for their ideas to enter everyday human life. They had been gripped by “The Decision to Seed Hurricanes,” a paper
coauthored by Stanford professor Ron Howard. Howard was one of the founders of a new field called decision analysis. Its idea was to force decision makers to assign probabilities to various outcomes: to make explicit the thinking that went into their decisions
before they made them. How to deal with killer hurricanes was one example of a problem that policy makers might use decision analysts to help address. Hurricane Camille had just wiped out a large tract of the Mississippi Gulf Coast and obviously might have
done a lot more damage—say, if it had hit New Orleans or Miami. Meteorologists thought they now had a technique—dumping silver iodide into the storm—to reduce the force of a hurricane, and possibly even alter its path. Seeding a hurricane wasn’t a simple matter,
however. The moment the government intervened in the storm, it was implicated in whatever damage that storm inflicted. The public, and the courts of law, were unlikely to give the government credit for what had not happened, for who could say with certainty
what would have happened if the government had not intervened? Instead, society would hold its leaders responsible for whatever damage the storm inflicted, wherever it hit. Howard’s paper explored how the government might decide what to do—and that involved
estimating the odds of various outcomes. But the way the decision analysts elicited probabilities from the minds of the hurricane experts was, in Danny and Amos’s eyes, bizarre. The analysts would present the hurricane seeding experts inside government with
a wheel of fortune on which, say, a third of the slots were painted red. They’d ask: “Would you rather bet on the red sector of this wheel or bet that the seeded hurricane will cause more than $30 billion of property damage?” If the hurricane authority said
he would rather bet on red, he was saying that he thought the chance the hurricane would cause more than $30 billion of property damage was less than 33 percent. And so the decision analysts would show him another wheel, with, say, 20 percent of the slots
painted red. They did this until the percentage of red slots matched up with the authority’s sense of the odds that the hurricane would cause more than $30 billion of property damage. They just assumed that the hurricane seeding experts had an ability to correctly
assess the odds of highly uncertain events. Danny and Amos had already shown that people’s ability to judge probabilities was distorted by various mechanisms used by the mind when it faced uncertainty. They believed that they could use their new understanding of
the systematic errors in people’s judgment to improve that judgment—and, thus, to improve people’s decision making. For instance, any person’s assessment of probabilities of a killer storm making landfall in 1973 was bound to be warped by the ease with which
they recalled the fresh experience of Hurricane Camille. But how, exactly, was that judgment warped? “We thought decision analysis would conquer the world and we would help,” said Danny. The leading decision analysts were clustered around Ron Howard in Menlo Park,
California, at a place called the Stanford Research Institute. In the fall of 1973 Danny and Amos flew to meet with them. But before they could figure out exactly how they were going to bring their ideas about uncertainty into the real world, uncertainty intervened.
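The wheel-of-fortune procedure the decision analysts used is, in effect, a bisection search over reference probabilities: each bet the expert accepts or declines halves the interval in which his subjective probability must lie. A minimal sketch of that logic, with the expert represented by a hypothetical stand-in function (nothing here comes from Howard's actual protocol beyond the bet-against-the-wheel idea):

```python
def elicit_probability(prefers_wheel, lo=0.0, hi=1.0, iters=20):
    """Bisect until the wheel's red fraction matches the expert's felt odds.

    prefers_wheel(p) -> True if the expert would rather bet on a wheel
    whose red sector covers fraction p of the slots than on the uncertain
    event itself, i.e. the expert feels P(event) < p.
    """
    for _ in range(iters):
        mid = (lo + hi) / 2
        if prefers_wheel(mid):
            hi = mid  # expert's subjective probability lies below mid
        else:
            lo = mid  # expert's subjective probability is at least mid
    return (lo + hi) / 2

# Example: a hypothetical expert whose unstated subjective probability
# of the $30 billion damage outcome is 0.22.
expert = lambda p: p > 0.22
print(round(elicit_probability(expert), 3))  # → 0.22
```

The sketch only recovers whatever number is already in the expert's head; it says nothing about whether that number is well calibrated, which was exactly Danny and Amos's objection.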
On October 6, the armies of Egypt and Syria—with troops and planes and money from as many as nine other Arab countries—launched an attack on Israel. Israeli intelligence analysts had dramatically misjudged the odds of an attack of any sort, much less a coordinated
one. The army was caught off guard. On the Golan Heights, a hundred or so Israeli tanks faced fourteen hundred Syrian tanks. Along the Suez Canal, a garrison of five hundred Israeli troops and three tanks was quickly overrun by two thousand Egyptian tanks
and one hundred thousand Egyptian soldiers. On a cool, cloudless, perfect morning in Menlo Park, Amos and Danny heard the news of the shocking Israeli losses. They raced to the airport for the first flight back home, so that they might fight in yet another
war. * By the time they were finished with the project, they had dreamed up an array of hysterically bland characters for people to evaluate and judge to be more likely lawyers or engineers. Paul, for example. “Paul is 36 years old, married, with 2 children.
He is relaxed and comfortable with himself and with others. An excellent member of a team, he is constructive and not opinionated. He enjoys all aspects of his work, and in particular, the satisfaction of finding clean solutions to complex problems.” † In
a brief memoir, Fischhoff later recalled how his idea had first come to him in Danny’s seminar: “We read Paul Meehl’s (1973) ‘Why I Do Not Attend Case Conferences.’ One of his many insights concerned clinicians’ exaggerated feeling of having known all along
how cases were going to turn out.” The conversation about Meehl’s idea led Fischhoff to think about the way Israelis were always pretending to have foreseen essentially unforeseeable political events. Fischhoff thought, “If we’re so prescient, why aren’t we
running the world?” Then he set out to see exactly how prescient people who thought themselves prescient actually were.