Tuesday, December 25, 2012

Thinking, Fast and Slow, Daniel Kahneman


Kahneman's work covers behavioural economics, specifically prospect theory (how decisions are made when outcomes are probabilistic) and the effects of cognitive biases on choice. It explains the process by which people reach conclusions and make decisions, and why those choices are often wrong. The book gives many examples of errors of judgement and choice in analytical situations, mainly the result of cognitive biases, and covers research by Kahneman and others from the 1970s to the present. Prior to the work described here, one of the assumptions economists and social scientists made in their research was that people are rational, and departures from rationality were believed to be products of emotion. Kahneman's research claims instead that departures from rationality stem from flaws in our cognitive machinery, i.e. cognitive biases. His work describes how the mind works in light of recent developments in psychology. The mind is subject to the influence of heuristics, intuition and biases, and its functioning can be explained by three models:
  •  A model of the mind consisting of two components:
    • System 1: Fast, automatic thinking: intuition or learned expertise
    • System 2: Slow, engaged thinking: deliberate, algorithmic, measured
This model explains how and why humans reach erroneous conclusions even when presented with simple mathematical choices. The book describes 10-15 heuristics and biases that lead System 1 astray.
  • Two economic models of human behaviour: Econs (rational, selfish, invariant in tastes) and Humans (real people). Modern economic theory and modelling are based on Econs, which explains why economic models to date are flawed.
  • The experiencing self and the remembering self: Two ways in which humans evaluate events; decisions go wrong because the remembering self misassesses past experiences.
The work uses these models to show how modern economic models fail and how human decision making goes astray when evaluating decisions involving risk.

Part I: This section describes the systems.

  • The two systems:
    • System 1: Operates quickly, with no effort and no sense of voluntary control
    • System 2: Deliberate, requires attention, can program System 1 for a specific task
    • The division of labour maximizes performance and minimizes effort.
  • Attention/Effort
    • It takes effort for System 2 to get engaged.
    • Law of least effort: A person will engage the system that allows the task to be performed with least effort.
    • Experts in any field are able to solve problems in their field using System 1.
  • Lazy control
    • System 2 is engaged less often than it should be, because of "laziness"
    • Cognitive load: Load placed on the mind because of System 2 being engaged in one task.
    • Ego depletion: self-control exerted on one task depletes a shared resource, so System 1 takes over on the next task
    • The nervous system consumes more glucose than most other parts of the body; effortful (System 2) thinking is costly
    • Unless explicit effort is made, an individual will favor using System 1 without engaging System 2
    • System 2 can be divided into 2 components:
      • Intelligence: IQ
      • Rationality: Immunity to bias
  • Association
    • Association (ideas or suggestion) affects System 1's perceptions/decisions
    • Priming affects System 1's perception/decision
  • Cognitive ease/Cognitive strain
    • Measures of an individual's current state that predict the likelihood of using System 1 vs. System 2
    • When in a state of cognitive ease, System 1 predominates
    • Cognitive ease can be brought on by association, priming
    • Cognitive strain can be brought on by processing difficulties (e.g. a hard-to-read font)
  • Norms, causes
    • Past events cause System 1 to build up norms, i.e. stereotypes and perceptions of normal behavior
    • The mind has a need to assign causality to events
    • System 1 is incapable of drawing correct conclusions about causality - it does not have the ability to think statistically
  • How conclusions are reached by System 1
    • Confirmation bias: a search for evidence that confirms existing beliefs rather than attempts to refute them
    • Halo effect: Tendency to reach erroneous conclusions in one dimension based on liking a person for another dimension
    • Jumping to conclusions from limited evidence (WYSIATI - What You See Is All There Is): leads to base-rate neglect, framing effects and overconfidence
  • How judgments happen in System 1 when inadequate information is provided
    • Neglect of information, use of basic assessments
  • How questions are answered:
    • Substitution: when faced with a difficult question, individuals use a heuristic to substitute a simpler question that can be answered, and answer that instead
    • Affect heuristic: Likes and dislikes determine beliefs about the world

Part II: Heuristics and Biases: This section lists a number of biases/heuristics/intuitive conclusions which cause System 1 to reach erroneous conclusions.

  • Law of small numbers:
    • Even researchers choose sample sizes that are too small; a small sample exaggerates the effect of outliers
    • System 1 believes it sees order where only randomness exists
    • Causal explanations of chance events are invariably wrong
    • Solution: when conducting experiments, use adequate samples and decorrelate results by averaging (see the sketch below)
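
A minimal simulation sketch of the law of small numbers (assuming a hypothetical trait that occurs in exactly 50% of a population), showing how often small samples produce "extreme" results by chance alone:

    import random
    random.seed(1)

    def extreme_share(sample_size, trials=10_000):
        """Fraction of random samples whose observed rate of a 50% trait
        falls outside 30-70% purely by chance."""
        extremes = 0
        for _ in range(trials):
            hits = sum(random.random() < 0.5 for _ in range(sample_size))
            rate = hits / sample_size
            if rate < 0.3 or rate > 0.7:
                extremes += 1
        return extremes / trials

    for n in (10, 50, 200):
        print(n, extreme_share(n))
    # n=10 gives an "extreme" result in roughly 10% of samples; n=200 almost never does.
    # Small samples exaggerate outliers even though only randomness is at work.
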
  • Anchors
    • Providing an anchor when asking a question influences the response, e.g. "Would you contribute $100 to this cause? If not, how much?"
  • Availability
    • Availability of the memory of events, can influence perception of frequency of the events
    • Difficulty in recalling a large number of events can lower the perceived frequency, even if the absolute number is higher
  • Impact of availability
    • "The emotional tail wags the rational dog"
    • The availability bias constructs a world that is simpler than reality
    • Availability cascade: an emotional reaction to an available event is amplified (e.g. by media coverage), and the resulting bias flows into public policy
  • Representation bias:
    • Stereotypes are used without examining the bias involved or statistics about their accuracy
    • Base-rate information tends to be neglected when information about the specific instance is available
    • Remedy: apply Bayesian reasoning - anchor on the base rate and update on the evidence (worked example below)
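
A worked Bayes update along the lines of the book's cab problem (the specific numbers here are illustrative): the base rate anchors the judgment, and the witness evidence only updates it:

    # 15% of cabs are Blue (base rate); a witness identifies colors correctly 80% of the time.
    prior_blue = 0.15
    p_report_blue_given_blue = 0.80
    p_report_blue_given_green = 0.20

    # Bayes' rule: P(Blue | witness says Blue)
    posterior_blue = (prior_blue * p_report_blue_given_blue) / (
        prior_blue * p_report_blue_given_blue
        + (1 - prior_blue) * p_report_blue_given_green
    )
    print(round(posterior_blue, 2))  # ~0.41, far below the intuitive answer of "80%"
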
  • Representation bias with varying degrees of information
    • System 1 often judges a more specific (smaller) category to be more probable than a broader one that contains it, because the specific description is more representative (the conjunction fallacy)
  • Causes vs Statistics
    • Base rates are ignored, even causal statistics may not change deeply held beliefs
  • Regression to mean
    • Regression to the mean is often interpreted as a causal event
    • Regression and correlation are related concepts: wherever correlation is not perfect, there will be regression to the mean (see the simulation sketch below)
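
A small simulation sketch of regression to the mean, assuming a toy model in which an observed score is stable skill plus transient luck (all numbers invented for illustration):

    import random
    from statistics import mean
    random.seed(2)

    # Toy model: observed score = stable skill + transient luck.
    people = []
    for _ in range(100_000):
        skill = random.gauss(100, 10)
        day1 = skill + random.gauss(0, 10)
        day2 = skill + random.gauss(0, 10)
        people.append((day1, day2))

    top = [(d1, d2) for d1, d2 in people if d1 > 120]   # extreme performers on day 1
    print(mean(d1 for d1, _ in top))   # roughly 126
    print(mean(d2 for _, d2 in top))   # roughly 113: closer to the mean, with no cause at work
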
  • Taming intuitive predictions
    • Use the correlation between evidence and outcome to obtain a prediction that lies between the intuitive prediction and the base rate (formula sketch below)
    • Unbiased predictions will not flag extreme cases unless a lot of information is available
    • In some domains, such as venture capital, this may be detrimental because the goal is precisely to find the extreme cases
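
A sketch of the corrective procedure described here, with illustrative GPA numbers: start from the base rate (the average outcome) and move toward the intuitive prediction only in proportion to the correlation between the evidence and the outcome:

    def tamed_prediction(baseline, intuitive, correlation):
        """Move from the baseline (average outcome / base rate) toward the
        intuitive prediction in proportion to the estimated correlation
        between the evidence and the outcome."""
        return baseline + correlation * (intuitive - baseline)

    # Hypothetical example: the average GPA is 3.0, an impressive interview suggests 3.8,
    # but interviews correlate only ~0.3 with later GPA.
    print(tamed_prediction(3.0, 3.8, 0.3))   # 3.24 -- a regressed, less extreme prediction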

Part III: Overconfidence: Other reasons System 1 makes mistakes

  • Illusion of understanding
    • The mind creates an illusion of understanding by believing WYSIATI
    • Hindsight bias creates the illusion that past outcomes were predictable and the right decisions obvious
    • Outcome bias distorts the evaluation of decisions based on their results
    • The halo effect distorts the evaluation of leaders and their decisions based on organizational outcomes
  • Illusion of validity: A cognitive illusion
    • The illusion of skill/validity
    • Supported by a powerful professional culture
    • Hedgehogs and foxes: hedgehogs fit events to a single framework and predict based on that
    • Media favors appearance of hedgehogs in debates
  • Intuition vs Formulas
    • System 1 is influenced by several factors (priming etc. above)
    • The result is that statistical prediction will generally  outperform human expert prediction (Meehl, Clinical vs. Statistical prediction)
    • Human experts try to be clever and think outside the box, adding complexity that reduces the validity of their judgments
    • When predictability is poor, the inconsistency generated by System 1 destroys predictive validity
    • Broken-leg rule: the occurrence of rare, decisive events (the "broken leg") is when human judgment can justifiably override the formula
    • Combining predictors by simply averaging them (equal weights) often does as well as or better than a fitted multiple regression (see the sketch below)
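
A minimal sketch of the equal-weight idea (Dawes's "improper linear models"), with invented candidate data: standardize each predictor and simply add them, instead of fitting regression weights:

    from statistics import mean, pstdev

    def equal_weight_scores(rows):
        """Standardize each predictor column and sum with equal weights."""
        cols = list(zip(*rows))
        zcols = []
        for col in cols:
            m, s = mean(col), pstdev(col)
            zcols.append([(x - m) / s for x in col])
        return [sum(zs) for zs in zip(*zcols)]

    # Hypothetical candidates scored on (structured interview, work sample, aptitude test):
    candidates = [(3, 70, 0.6), (4, 55, 0.8), (2, 90, 0.5)]
    print(equal_weight_scores(candidates))   # rank candidates by this combined score
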
  • When can we trust expert intuition
    • The other school of thought, Naturalistic Decision Making, seeks to understand how expert intuition works (Gary Klein, Sources of Power)
    • Intuition: System 1 performs rapid pattern recognition, with System 2 executing a deliberate check to make sure the recognized solution will work
    • Requirements:
      • An environment that is regular enough to be predictable
      • Prolonged practice at learning those regularities
      • E.g. chess players can rapidly and intuitively recognize a position as weak or strong, but this takes roughly 6 years of practice at 5 hrs/day
  • The outside view
    • Inside view vs. outside view: knowledge about the individual case makes an insider feel no need for the statistics of similar cases
    • Exhibited as a belief in the uniqueness of the case
    • Planning fallacy: plans and forecasts are unrealistically close to the best case; the remedy is to take the outside view
  • The engine of capitalism
    • Irrational optimism: Optimistic bias plays a dominant role in risk taking
    • Overconfidence in one's own forecasts: an effect of System 1 and WYSIATI
    • Remedy: conduct a premortem before committing: assume the decision has been implemented and the outcome was a disaster, then write a brief history of how that disaster came about

Part IV: Choice: What influences human choice

  • Bernoulli's errors
    • Humans vs. Econs
      • Econs: Rational, Selfish, Maximize utility, Tastes do not change
    • Utility theory (Bernoulli)
      • Prior to Bernoulli, gambles were evaluated by their expected monetary value
      • Bernoulli realized that people dislike risk, and explained this by the diminishing marginal value of wealth
      • He assigned a utility to each level of wealth, with the increase in utility shrinking as wealth grows
      • Diminishing returns
      • Explains insurance: risk is transferred from the poorer person (for whom a loss costs more utility) to the richer person (for whom it costs less) - see the sketch below
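
A minimal sketch of Bernoulli's idea using logarithmic utility (the form Bernoulli himself proposed; the wealth figures are invented): with diminishing marginal utility, a sure amount is preferred to a gamble with the same expected value:

    import math

    def utility(wealth):
        return math.log(wealth)   # logarithmic utility: each extra dollar adds less

    # 50/50 gamble between ending wealth of $1M and $4M vs. a sure $2.5M (same expected value).
    expected_utility_gamble = 0.5 * utility(1_000_000) + 0.5 * utility(4_000_000)
    utility_sure_thing = utility(2_500_000)
    print(expected_utility_gamble)   # ~14.51
    print(utility_sure_thing)        # ~14.73 -> the sure thing wins: risk aversion
    # The same concavity explains insurance: a fixed loss costs a poorer person more utility
    # than it costs a richer insurer.
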
  • Prospect theory:
    • Utility theory has a flaw: Utility is not absolute, it depends on the reference point
    • Differences in utility depend on direction: a loss of $500 has greater negative utility than a gain of $500 has positive utility
    • It also depends on the direction of change: e.g. $5M carries a different utility if it is reached by an increase from $1M than by a decrease from $10M
    • Taking this into account yields different predictions about how willing a poor or rich person is to take a risk
    • Conclusion: If all options are bad, people tend to prefer gambling/risk taking, else they  avoid risk
    • Prospect theory
      • How financial decisions are made:
      • Evaluation is relative to a reference point, usually the status quo
      • Diminishing sensitivity applies to the evaluation of changes
      • Loss aversion
      • Plotting psychological value against gains/losses gives an S-shaped curve, but not a symmetric one: the loss side is steeper (see the sketch below)
      • Problems: does not account for regret or disappointment
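
A sketch of the prospect-theory value function; the functional form and parameter values (about 0.88 for curvature and 2.25 for loss aversion) are the commonly cited Tversky-Kahneman estimates, not numbers given in this summary:

    def prospect_value(x, alpha=0.88, lam=2.25):
        """S-shaped value function over gains/losses relative to the reference point:
        concave for gains, convex and steeper (loss aversion) for losses."""
        if x >= 0:
            return x ** alpha
        return -lam * ((-x) ** alpha)

    print(prospect_value(500))    # ~237
    print(prospect_value(-500))   # ~-533: the loss looms roughly twice as large as the gain
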
  • Endowment effect
    • Decisions are impacted by whether a good is meant for exchange or for use
    • The psychological value of a good held for use, such as a mug one already owns, raises the price demanded to sell it above the price one would pay to acquire it
  • Bad events
    • Loss aversion is with respect to a reference point
    • Not achieving a goal may be a loss, exceeding a goal may be a gain
    • Impacts negotiations, where parties fight harder to avoid losses than to make gains
    • In a negotiation, because concessions are felt as losses, both parties feel they have given up more than they gained
    • The asymmetry between gains and losses shapes judgments of fairness, e.g. whether customers keep buying products whose prices have risen
    • Fairness: It is considered unfair to impose losses on a customer, relative to his reference point
    • Reference points cause a sense of entitlement
  • Fourfold pattern
    • Overweighting of small-probability events
    • Decision weights are not identical to probabilities
    • => Expectation (weighting outcomes by their probabilities) does not describe actual choices
    • Decisions are made using decision weights, not probabilities
    • Decision weight equals probability only at p = 0 and p = 1; for every other value the weight differs from the probability (small probabilities are overweighted, moderate and high probabilities are underweighted)
    • Probabilities near 0 trigger the possibility effect; probabilities near 1 trigger the certainty effect
    • Fourfold pattern: gain/loss crossed with high/low probability
    • High-probability gains and low-probability losses produce risk aversion (accepting unfavorable settlements, buying insurance); low-probability gains and high-probability losses produce risk seeking (lottery tickets, fighting on in the face of a likely loss) - a weighting-function sketch follows below
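
A sketch of a probability weighting function; the functional form and the 0.61 parameter come from Tversky and Kahneman's later (cumulative prospect theory) estimates and are used here only to illustrate the over/underweighting behind the fourfold pattern:

    def decision_weight(p, gamma=0.61):
        """Probability weighting: w(p) = p^g / (p^g + (1-p)^g)^(1/g)."""
        num = p ** gamma
        return num / ((num + (1 - p) ** gamma) ** (1 / gamma))

    for p in (0.01, 0.10, 0.50, 0.90, 0.99):
        print(p, round(decision_weight(p), 3))
    # 0.01 -> ~0.055 (overweighted), 0.90 -> ~0.71 (underweighted):
    # rare events get too much decision weight, near-certain ones too little.
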
  • Rare events
    • People overestimate the probabilities of unlikely events
    • People overweight unlikely events
    • Vivid or alternative descriptions of events influence decision weights (1 in 1000 vs. 0.1%)
  • Risk policies
    • People tend to be risk averse in gains and risk taking in losses
    • Broad framing (the grouping of several decision problems into a single problem) can result in better decisions than narrow framing (separately deciding each problem).
    • Samuelson's problem: aversion to a single favorable gamble even though one would accept several hundred instances of the same gamble
    • Since a life will consist of several such small gambles, it pays to take the small gambles
      • Gambles must be independent experiments
      • Gambles must not be excessive
      • Gambles must not be long shots
    • Loss aversion + narrow framing leads to bad (excessively risk-averse) decisions
    • E.g. individual managers are risk averse because they frame their decisions one at a time; a CEO frames the whole portfolio of decisions broadly and favors taking the risks, expecting that statistically enough of them will pay off (see the simulation sketch below)
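
A simulation sketch of broad vs. narrow framing, using the gamble usually cited for Samuelson's problem (win $200 / lose $100 on a fair coin): a single play loses half the time, while a bundle of 100 independent plays almost never ends up behind:

    import random
    random.seed(3)

    def net_outcome(n_plays):
        """Net result of n independent 50/50 gambles: win $200 or lose $100."""
        return sum(200 if random.random() < 0.5 else -100 for _ in range(n_plays))

    trials = 20_000
    single_loss_rate = sum(net_outcome(1) < 0 for _ in range(trials)) / trials
    bundle_loss_rate = sum(net_outcome(100) < 0 for _ in range(trials)) / trials
    print(single_loss_rate)   # ~0.50: the narrow frame sees a 50% chance of losing
    print(bundle_loss_rate)   # well under 1%: the broad frame almost never loses overall
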
  • Keeping score
    • Disposition effect: a product of narrow framing, e.g. the tendency to sell winning stocks rather than losing ones, because of the pain of acknowledging and closing out a loss.
    • Sunk-cost fallacy: the tendency to throw good money after bad in the hope of salvaging a failing project
    • Regret/blame: people react more strongly to a bad outcome produced by action than to the same outcome produced by inaction
    • There is an aversion to trading increased risk for any other advantage, even when the advantage far outweighs the added risk
    • Regret and hindsight bias sting most when only a moderate amount of thought has gone into a decision
    • Think deeply and anticipate regret, or think little.
  • Reversals
    • Preference reversals: Preference can change when two choices are compared jointly vs. if they are presented singly
    • Frames and Reality
    • The same outlay causes stronger negative feelings when framed as a loss than when framed as a cost
    • Framing a decision can change the decision: gallons per mile vs. miles per gallon (arithmetic below)
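
The mpg-framing arithmetic, with illustrative numbers: over the same distance, improving a gas guzzler from 12 to 14 mpg saves more fuel than improving an efficient car from 30 to 40 mpg, even though the mpg frame suggests the opposite:

    miles = 10_000

    def gallons_used(mpg):
        return miles / mpg

    print(gallons_used(12) - gallons_used(14))   # ~119 gallons saved (12 -> 14 mpg)
    print(gallons_used(30) - gallons_used(40))   # ~83 gallons saved (30 -> 40 mpg)
    # Gallons per mile is the frame that tracks the fuel (and money) actually saved.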

Part V: Two selves: How memories are assessed

  • Two selves
    • Experienced utility vs. Decision utility
    • Experiencing self vs. Remembering self
    • The experiencing self registers satisfaction over the whole experience as it happens, while the remembering self retains only selected parts of it
    • Peak-end rule: an experience is remembered roughly by the average of its most intense moment and its end (sketch below)
    • Duration neglect: Durations of experiences are often forgotten while intensity is not
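
A small sketch of the peak-end rule and duration neglect, assuming (as the rule suggests) that the remembered value of an episode is roughly the average of its worst moment and its end; the discomfort ratings are invented:

    from statistics import mean

    # Per-minute discomfort ratings for two hypothetical episodes (0 = none, 10 = worst).
    short_episode = [6, 7, 8]          # ends at its most painful point
    long_episode = [6, 7, 8, 5, 4]     # same start, plus a milder tail

    def remembered(ratings):
        """Peak-end rule: memory tracks the average of the worst moment and the end,
        largely ignoring duration."""
        return mean([max(ratings), ratings[-1]])

    print(sum(short_episode), remembered(short_episode))   # total 21, remembered ~8
    print(sum(long_episode), remembered(long_episode))     # total 30, remembered ~6
    # More total discomfort, yet a better memory -- duration neglect in action.
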
  • Life as a story
    • Duration neglect, peak end rule and the remembering self impact decisions
  • Experienced well being/Thinking about life
    • Measures of happiness tend to reflect the remembering self, not the experiencing self
    • Affective forecasting: The effect of recent significant memories on opinion
    • Focusing illusion: Nothing is as important as you think when you are thinking about it
  • Conclusions
    • System1/System2, Econs/Humans, Experiencing self/Remembering self
