Lars Christensen

Thinking, fast and slow by Daniel Kahneman


I finished this book in October 2023. I recommend this book 3/10.


You should read this book if you want to learn about human flaws from a Nobel Prize winner. The book is well written but a long read. You will learn how the brain operates as two systems, how easily we can be manipulated, and that you are not really in control: you cannot convince your brain that 2+2 is not 4, for example.


Get your copy here.


My notes and thoughts:

  • P26. This picture is unremarkable: two horizontal lines of different lengths with fins appended, pointing in different directions. The bottom line is obviously longer than the one above it. That is what we all see, and we naturally believe what we see. If you have already encountered this image, however, you recognize it as the famous Müller-Lyer illusion. As you can easily confirm by measuring them with a ruler, the horizontal lines are in fact identical in length. Now that you have measured the lines, you—your System 2, the conscious being you call "I"—have a new belief: you know that the lines are equally long. If asked about their length, you will say what you know. But you still see the bottom line as longer. You have chosen to believe the measurement, but cannot prevent System 1 from doing its thing; you cannot decide to see the lines as equal, although you know they are. To resist the illusion, there is only one thing you can do: you must learn to mistrust your impressions of the length of lines when fins are attached to them. To implement that rule, you must be able to recognize the illusory pattern and recall what you know about it. If you can do this, you will never again be fooled by the Müller-Lyer illusion. But you will still see one line as longer than the other.

  • P48. System 1 is impulsive and intuitive; System 2 is capable of reasoning, and it is cautious, but at least for some people, it is also lazy. We recognize related differences among individuals: some people are more like their System 2; others are closer to their System 1. You can see this in people you know: some will jump quickly to their first conclusion, whereas others will think a bit more before judging.

  • P52. If you have recently seen or heard the word EAT, you are temporarily more likely to complete the word fragment SO_P as SOUP than as SOAP. The opposite would happen, of course, if you had just seen WASH. We call this a priming effect and say that the idea of EAT primes the idea of SOUP and that WASH primes SOAP. Priming effects take many forms. If the idea of EAT is currently on your mind (whether or not you are conscious of it), you will be quicker than usual to recognize the word SOUP when it is spoken in a whisper or presented in a blurry font. And, of course, you are primed not only for the idea of soup but also for a multitude of food-related ideas, including fork, hungry, fat, diet, and cookie. If for your most recent meal you sat at a wobbly restaurant table, you will be primed for wobbly as well. Furthermore, the primed ideas have some ability to prime other ideas, although more weakly. Like ripples on a pond, activation spreads through a small part of the vast network of associated ideas. The mapping of these ripples is now one of the most exciting pursuits in psychological research.

  • P65. You read this correctly: performance was better with the bad font. Cognitive strain, whatever its source, mobilizes System 2, which is more likely to reject the intuitive answer suggested by System 1.

  • P66. A study conducted in Switzerland found that investors believe that stocks with fluent names like Emmi, Swissfirst, and Comet will earn higher returns than those with clunky labels like Geberit and Ypsomed.

  • P77. "How many animals of each kind did Moses take into the ark?" The number of people who detect what is wrong with this question is so small that it has been dubbed the "Moses illusion." Moses took no animals into the ark; Noah did. Like the incident of the wincing soup eater, the Moses illusion is readily explained by norm theory. The idea of animals going into the art sets up a biblical context, and Moses is not abnormal in the context. You did not positively expect him, but the mention of his name is not surprising. It also helps that Moses and Noah have the same vowel sound and number of syllables. As with the triads that produce cognitive ease, you unconsciously detect associative coherence between "Moses" and "ark" and so quickly accept the question. Replace Moses with George W. Bush in this sentece and you will have a poor political joke, but no illusion.

  • P83. Early in my career as a professor, I graded students' essay exams in the conventional way. I would pick up one test booklet at a time and read all that student's essays in immediate succession, grading them as I went. I would then compute the total and go on to the next student. I eventually noticed that my evaluations of the essays in each booklet were strikingly homogeneous. I began to suspect that my grading exhibited a halo effect and that the first question I scored had a disproportionate effect on the overall grade. The mechanism was simple: if I had given a high score to the first essay, I gave the student the benefit of the doubt whenever I encountered a vague or ambiguous statement later on. This seemed reasonable. Surely, a student who had done so well on the first essay would not make a foolish mistake in the second one! But there was a serious problem with my way of doing things. If a student had written two essays, one strong and one weak, I would end up with different final grades depending on which essay I read first. I adopted a new procedure. Instead of reading the booklets in sequence, I read and scored all the student's answers to the first question, then went on to the next one. I made sure to write all the scores on the inside back page of the booklet so that I would not be biased when I read the second essay. Soon after switching to the new method, I made a disconcerting observation: my confidence in my grading was now much lower than it had been.

  • P94. A candidate's political future can range from the low of "She will be defeated in the primary" to a high of "She will someday be president of the United States." Here, we encounter a new aptitude for System 1. An underlying scale of intensity allows matching across diverse dimensions. If crimes were colors, murder would be a deeper shade of red than theft. If crimes were expressed in music, mass murder would be played fortissimo while accumulating unpaid parking tickets would be a faint pianissimo. And, of course, you have similar feelings about the intensity of punishments. In classic experiments, people adjusted the loudness of a sound to the severity of crimes; other people adjusted loudness to the severity of legal punishments. If you heard two notes, one for the crime and one for the punishment, you would feel a sense of injustice if one tone was much louder than the other.

  • P101. A survey of German students is one of the best examples of substitution. The survey that the young participants completed included the following two questions:

    • How happy are you these days?

    • How many dates did you have last month?

The experimenters were interested in the correlations between the two answers. Would students who reported more dates say that they were happier than those with fewer dates? Surprisingly, no: the correlation between the answers was about zero. Evidently, dating was not what came first to the students' minds when they were asked to assess their happiness. Another group of students saw the same two questions, but in reverse order:

  • How many dates did you have last month?

  • How happy are you these days?

The results this time were completely different. In this sequence, the correlation between the number of dates and reported happiness was about as high as correlations between psychological measures can get. What happened? The explanation is straightforward, and it is a good example of substitution. Dating was apparently not the center of these students' lives, but when they were asked to think about their romantic life, they certainly had an emotional reaction.

  • P117. How many good years should you wait before concluding that an investment adviser is unusually skilled? How many successful acquisitions should be needed for a board of directors to believe that the CEO has an extraordinary flair for such deals? The simple answer to these questions is that if you follow your intuition, you will more often than not err by misclassifying a random event as systematic. We are far too willing to reject the belief that much of what we see in life is random.

  • P126. We see the same strategy at work in the negotiation over the price of a home when the seller makes the first move by setting the list price. As in many other games, moving first is an advantage in single-issue negotiations—for example, when price is the only issue to be settled between a buyer and a seller. As you may have experienced when negotiating for the first time in a bazaar, the initial anchor has a powerful effect. My advice to students when I taught negotiations was that if you think the other side has made an outrageous proposal, you should not come back with an equally outrageous counteroffer, creating a gap that will be difficult to bridge in further negotiations. Instead, you should make a scene, storm out, or threaten to do so, and make it clear—to yourself as well as the other side—that you will not continue the negotiation with that number on the table.

  • P144. In today's world, terrorists are the most significant practitioners of the art of inducing availability cascades. With a few horrible exceptions, such as 9/11, the number of casualties from terror attacks is very small relative to other causes of death. Even in countries that have been targets of intensive terror campaigns, such as Israel, the weekly number of casualties almost never came close to the number of traffic deaths. The difference is in the availability of the two risks, the ease and the frequency with which they come to mind. Gruesome images, endlessly repeated in the media, cause everyone to be on edge. As I know from experience, it is difficult to reason oneself into a state of complete calm. Terrorism speaks directly to System 1.

  • P150. One of the easy answers is an automatic assessment of representativeness—routine in understanding language. The (false) statement that "Elvis Presley's parents wanted him to be a dentist" is mildly funny because the discrepancy between the images of Presley and a dentist is detected automatically. System 1 generates an impression of similarity without intending to do so. The representativeness heuristic is involved when someone says, "She will win the election; you can see she is a winner" or "He won't go far as an academic; too many tattoos." We rely on representativeness when we judge the potential leadership of a candidate for office by the shape of his chin or the forcefulness of his speeches. Although it is common, prediction by representativeness is not statistically optimal. Michael Lewis's best-selling Moneyball is a story about the inefficiency of this mode of prediction. Professional baseball scouts traditionally forecast the success of possible players in part by their build and look. The hero of Lewis's book is Billy Beane, the manager of the Oakland A's, who made the unpopular decision to overrule his scouts and to select players based on the statistics of past performance. The players the A's picked were inexpensive because other teams had rejected them for not looking the part. The team soon achieved excellent results at a low cost.

  • P151. One sin of representativeness is an excessive willingness to predict the occurrence of unlikely (low base-rate) events. Here is an example: you see a person reading The New York Times on the New York subway. Which of the following is a better bet about the reading stranger?

    • She is a Ph.D.

    • She does not have a college degree.

Representativeness would tell you to bet on the PhD, but this is not necessarily wise. You should seriously consider the second alternative because many more non-graduates than PhDs ride in New York subways. And if you must guess whether a woman who is described as "a shy poetry lover" studies Chinese literature or business administration, you should opt for the latter option. Even if every female student of Chinese literature is shy and loves poetry, it is almost certain that there are more bashful poetry lovers in the much larger population of business students.
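
To make the base-rate arithmetic concrete, here is a small sketch with made-up numbers; the rider shares and reading rates below are purely illustrative assumptions, not figures from the book. Even if a PhD is far more likely to read the Times, the much larger non-graduate group still supplies more Times readers.

```python
# Hypothetical numbers, chosen only to illustrate why base rates dominate.
riders = 1_000_000                            # assumed subway population
phd_rate, no_degree_rate = 0.02, 0.40         # assumed shares of riders
p_times_phd, p_times_no_degree = 0.40, 0.05   # assumed Times-reading rates

times_readers_phd = riders * phd_rate * p_times_phd                     # 8,000
times_readers_no_degree = riders * no_degree_rate * p_times_no_degree   # 20,000

print(times_readers_phd, times_readers_no_degree)
# Even with an 8x higher reading rate, the non-graduate group still
# contains far more Times readers, so the second bet is the better one.
```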

  • P175. But the inference he had drawn about the efficacy of reward and punishment was completely off the mark. What he had observed is known as regression to the mean, which in that case was due to random fluctuations in the quality of performance. Naturally, he praised only a cadet whose performance was far better than average. But the cadet was probably just lucky on that particular attempt and, therefore, likely to deteriorate regardless of whether or not he was praised. Similarly, the instructor would shout into a cadet's earphones only when the cadet's performance was unusually bad and, therefore, likely to improve regardless of what the instructor did. The instructor had attached a causal interpretation to the inevitable fluctuations of a random process.
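
The flight-instructor story can be reproduced with a toy simulation. This is only a sketch under the assumption that each landing score is a fixed skill plus independent random noise; nothing here comes from the book's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: each cadet has a fixed skill; every landing adds independent noise.
n = 10_000
skill = rng.normal(0, 1, n)
attempt1 = skill + rng.normal(0, 1, n)
attempt2 = skill + rng.normal(0, 1, n)

worst = attempt1 < np.percentile(attempt1, 10)  # landings that drew shouting
best = attempt1 > np.percentile(attempt1, 90)   # landings that drew praise

print(attempt1[worst].mean(), "->", attempt2[worst].mean())  # bad group improves
print(attempt1[best].mean(), "->", attempt2[best].mean())    # good group gets worse
# Both shifts happen with no praise or punishment at all: pure regression to the mean.
```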

  • P183. "Depressed children treated with an energy drink improve significantly over a three-month period." I made up this newspaper headline, but the fact it reports is true: if you treated a group of depressed children for some time with an energy drink, they would show a clinically significant improvement. It is also the case that depressed children who spend some time standing on their heads or hugging a cat for twenty minutes a day will also show improvement. Most readers of such headlines will automatically infer that the energy drink or the cat hugging caused an improvement, but this conclusion is completely unjustified. Depressed children are an extreme group, they are more depressed than most other children—and extreme groups regress to the mean: depressed children will get somewhat better over time even if they hug no cats and drink no Red Bull. In order to conclude that an energy drink—or any other treatment—is effective, you must compare a group of patients who receive this treatment to a "control" group that receives no treatment. The control group is expected to improve by regression alone, and the aim of the experiment is to determine whether the treated patients improve more than regression can explain.

  • P207. Because luck plays a large role, the quality of leadership and management practices cannot be inferred reliably from observations of success. And even if you had perfect foreknowledge that a CEO has brilliant vision and extraordinary competence, you still would be unable to predict how the company will perform with much better accuracy than the flip of a coin. On average, the gap in corporate profitability and stock returns between the outstanding firms and the less successful firms studied in Built to Last shrank to almost nothing in the period following the study. The average profitability of the companies identified in the famous In Search of Excellence dropped sharply as well within a short time. A study of Fortune's Most Admired Companies finds that over a twenty-year period, the firms with the worst ratings went on to earn much higher stock returns than the most admired firms. You are probably tempted to think of causal explanations for these observations: perhaps the successful firms became complacent, and the less successful firms tried harder. But this is the wrong way to think about what happened. The average gap must shrink because the original gap was due in good part to luck, which contributed both to the success of the top firms and to the lagging performance of the rest. We have already encountered this statistical fact of life: regression to the mean.

  • P214. Mutual funds are run by highly experienced and hardworking professionals who buy and sell stocks to achieve the best possible results for their clients. Nevertheless, the evidence from more than fifty years of research is conclusive: for a large majority of fund managers, the selection of stocks is more like rolling dice than like playing poker. Typically, at least two out of every three mutual funds do not achieve their own benchmark in any given year. More importantly, the year-to-year correlation between the outcomes of mutual funds is very small, barely higher than zero. The successful funds in any given year are mostly lucky; they have a good roll of the dice. There is general agreement among researchers that nearly all stock pickers, whether they know it or not—and few of them do—are playing a game of chance. The subjective experience of traders is that they are making sensible, educated guesses in a situation of great uncertainty. In highly efficient markets, however, educated guesses are no more accurate than blind guesses.

  • P219. Tetlock interviewed 284 people who made their living "commenting or offering advice on political and economic trends." He asked them to assess the probabilities that certain events would occur in the not too distant future, both in areas of the world in which they specialized and in regions about which they had less knowledge. Would Gorbachev be ousted in a coup? Would the United States go to war in the Persian Gulf? Which country would become the next big emerging market? In all, Tetlock gathered more than 80,000 predictions. He also asked the experts how they reached their conclusions, how they reacted when proved wrong, and how they evaluated evidence that did not support their positions. Respondents were asked to rate the probabilities of three alternative outcomes in every case: the persistence of the status quo, more of something such as political freedom or economic growth, or less of that thing. The results were devastating. The experts performed worse than they would have if they had simply assigned equal probabilities to each of the three potential outcomes. In other words, people who spend their time, and earn their living, studying a particular topic produce poorer predictions than dart-throwing monkeys who would have distributed their choices evenly over the options. Even in the region they knew best, experts were not significantly better than nonspecialists.

  • P232. Suppose that you need to hire a sales representative for your firm. If you are serious about hiring the best possible person for the job, this is what you should do. First, select a few traits that are prerequisites for success in this position. Don't overdo it—six is a good number. The traits you choose should be as independent as possible from each other, and you should feel that you can assess them reliably by asking a few factual questions. Next, make a list of those questions for each trait and think about how you would score it, say on a 1-5 scale. You should have an idea of what you will call "very weak" or "very strong." The preparations should take you half an hour or so, a small investment that can make a significant difference in the quality of the people you hire. To avoid halo effects, you must collect the information on one trait at a time, scoring each before you move on to the next one. Do not skip around. To evaluate each candidate, add up the six scores. Because you are in charge of the final decision, you should not do a "close your eyes." Firmly resolve that you will hire the candidate whose final score is the highest, even if there is another one whom you like better—try to resist your wish to invent broken legs to change the ranking. A vast amount of research offers a promise: you are much more likely to find the best candidate if you use this procedure than if you do what people normally do in such a situation, which is to go into the interview unprepared and make choices by an overall intuitive judgment such as "I looked into his eyes and liked what I saw."
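
As a rough sketch of that procedure in code (the trait names and candidate scores below are hypothetical placeholders, not Kahneman's list), the whole method boils down to scoring six traits independently on a 1-5 scale and committing to the highest total:

```python
# Six independent traits, scored 1-5 each; hire the highest total.
TRAITS = ["technical skill", "reliability", "communication",
          "persistence", "product knowledge", "integrity"]  # hypothetical list

def total_score(scores: dict) -> int:
    """Sum the six 1-5 trait scores; no holistic adjustment afterwards."""
    assert set(scores) == set(TRAITS), "score every trait, one at a time"
    assert all(1 <= s <= 5 for s in scores.values()), "use the 1-5 scale"
    return sum(scores.values())

candidates = {  # hypothetical interview scores
    "candidate A": dict(zip(TRAITS, [4, 3, 5, 4, 3, 4])),
    "candidate B": dict(zip(TRAITS, [5, 2, 3, 3, 4, 3])),
}

# Commit to the highest total, even if intuition prefers someone else.
best = max(candidates, key=lambda name: total_score(candidates[name]))
print(best, total_score(candidates[best]))
```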

  • P253. The project was my initiative, and it was, therefore, my responsibility to ensure that it made sense and that major problems were properly discussed by the team, but I failed that test. My problem was no longer the planning fallacy. I was cured of that fallacy as soon as I heard Seymour's statistical summary. If pressed, I would have said that our earlier estimates had been absurdly optimistic. If pressed further, I would have admitted that we had started the project on faulty premises and that we should at least consider seriously the option of declaring defeat and going home. But nobody pressed me, and there was no discussion; we tacitly agreed to go on without an explicit forecast of how long the effort would last. This was easy to do because we had not made such a forecast to begin with. If we had had a reasonable baseline prediction when we started, we would not have gone into it, but we had already invested a great deal of effort—an instance of the sunk-cost fallacy, which we will look at more closely in the next part of the book. It would have been embarrassing for us—especially for me—to give up at that point, and there seemed to be no immediate reason to do so. It is easier to change directions in a crisis, but this was not a crisis, only some new facts about people we did not know. The outside view was much easier to ignore than bad news in our own effort. I can best describe our state as a form of lethargy—an unwillingness to think about what had happened. So we carried on. There was no further attempt at rational planning for the rest of the time I spent as a member of the team—a particularly troubling omission for the team dedicated to teaching rationality. I hope I am wiser today, and I have acquired a habit of looking for the outside view. But it will never be the natural thing to do.

  • P262. President Truman famously asked for a "one-armed economist" who would take a clear stand; he was sick and tired of economists who kept saying, "On the other hand..."

  • P264. He labels his proposal the premortem. The procedure is simple: when the organization has almost come to an important decision but has not formally committed itself, Klein proposes gathering for a brief session a group of individuals who are knowledgeable about the decision. The premise of the session is a short speech: "Imagine that we are a year into the future. We implemented the plan as it now exists. The outcome was a disaster. Please take 5 to 10 minutes to write a brief history of that disaster." Gary Klein's idea of the premortem usually evokes immediate enthusiasm. After I described it casually at a session in Davos, someone behind me muttered, "It was worth coming to Davos just for this!" (I later noticed that the speaker was the CEO of a major international corporation.)

  • P283. Consider your reaction to the next question: you are offered a gamble on the toss of a coin.

    • If the coin shows tails, you lose $100.

    • If the coin shows heads, you win $150.

    • Is this gamble attractive? Would you accept it?

To make this choice, you must balance the psychological benefit of getting $150 against the psychological cost of losing $100. How do you feel about it? Although the expected value of the gamble is obviously positive because you stand to gain more than you can lose, you probably dislike it—most people do. The rejection of this gamble is an act of System 2, but the critical inputs are emotional responses that are generated by System 1. For most people, the fear of losing $100 is more intense than the hope of gaining $150. We concluded from many such observations that "losses loom larger than gains" and that people are loss averse. You can measure the extent of your aversion to losses by asking yourself a question: What is the smallest gain that I need to balance an equal chance to lose $100? For many people, the answer is about $200, twice as much as the loss. The "loss aversion ratio" has been estimated in several experiments and is usually in the range of 1.5 to 2.5. This is an average, of course; some people are much more loss-averse than others. Professional risk-takers in the financial markets are more tolerant of losses, probably because they do not respond emotionally to every fluctuation. When participants in the experiment were instructed to "think like a trader," they became less loss averse, and their emotional reaction to losses was sharply reduced.
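
The arithmetic behind that feeling is easy to write down. Here is a minimal sketch, assuming a loss-aversion ratio of 2.0 picked from the 1.5-2.5 range quoted above:

```python
# Coin-toss gamble: win $150 on heads, lose $100 on tails.
p = 0.5
gain, loss = 150, 100

expected_value = p * gain - p * loss   # +25: objectively favorable
loss_aversion = 2.0                    # assumed ratio; losses weigh twice as much
subjective_value = p * gain - p * loss_aversion * loss   # -25: feels like a bad bet

print(expected_value, subjective_value)
```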

  • P299. Prospect Theory: People don't like giving up something they have in their possession.

  • P309. It is not okay for a store owner to lower an employee's wages in bad times from $9 to $7. But people are more likely to be okay with him hiring a new employee at $7.

  • P321. We are willing to pay a high price for a guarantee, like settling a court case.

  • P333. We tend to overestimate the likelihood of rare bad events, like earthquakes in California.

  • P352. People don't value the future enough. They are not willing to cut their losses; look at countries and cities that overspend on bridges and buildings.

  • P373. The evidence comes from a comparison of the rate of organ donation in European countries, which reveals startling differences between neighboring and culturally similar countries. An article noted that the rate of organ donation was close to 100% in Austria but only 12% in Germany, 86% in Sweden but only 4% in Denmark. The enormous difference is a framing effect, caused by the format of the critical question: the low-donation countries used an opt-in form, while the high-donation countries used an opt-out form.

  • P385. We are willing to suffer longer if the pain eases off at the end than if the session ends at peak pain.

  • P390. People will watch a live concert through their phone's camera because their brain fears forgetting the event in the future.

  • P397. Ending quotes.

    • "The easiest way to increase happiness is to control your use of time. Can you find more time to do the things you enjoy doing?"

    • "Beyond the satiation level of income, you can buy more pleasurable experiences, but you will lose some of your ability to enjoy the less expensive ones."
