http://www.amazon.com/dp/0300164629

The author’s research investigates why some people are systematically less prone to failures of rationality.

Chapters

Inside George W. Bush’s mind: hints at what intelligence tests miss

Bush’s IQ estimated to be 120, but even his supporters concede that there is something subpar about many aspects of his thinking. Only confusing because we conflate intelligence and IQ, as if IQ captured all cognitive strengths/weaknesses.

Bush shows poor rationality. We now know enough about rationality to begin to conceive of a Rationality Quotient in the same vein as IQ. Important to be able to nail it down so we can talk about it usefully.

Dysrationalia: separating rationality and intelligence

Many examples of smart people being excruciatingly dumb. Only seems strange because we are confused about what we mean by smart/dumb in the first place. Motte-and-bailey between IQ and general cognitive effectiveness.

Cattell/Horn/Carroll theory of intelligence is roughly the consensus view. Performance across a wide range of mental tasks relies on only a few attributes, of which two are dominant.

Gf - fluid intelligence - abstract reasoning, especially in novel domains - intelligence-as-process.

Gc - crystallized intelligence - accumulated declarative knowledge - intelligence-as-knowledge.

Rephrase as ‘smart but acting foolishly’. Intelligent but irrational.

Rational - caricatured by Spock. Modern cogsci has a more precise definition.

Instrumental rationality - behaving in a way that maximizes your values given your resources. ie winning

Epistemic rationality - how well beliefs map to the structure of the world. Seems to need something about correctly updating on evidence, otherwise you can score well by just having no beliefs.

Dysrationalia as an intuition pump - given that the prevailing view is that all mental abilities should correlate highly with IQ, and that failing to do so is labeled a disorder, describe ‘smart but acting foolishly’ as a similar disorder. The author doesn’t actually back this view, just uses it as a way to lead people into thinking of intelligence and rationality as separate, because people keep forgetting that IQ does not encompass all mental ability.

The reflective mind, the algorithmic mind, and the autonomous mind

Not claiming that IQ does not measure something important/useful. Just that it doesn’t measure everything.

Main strength of Type 2 processing is its ability to create simulations. So an important capability is the ability to keep those simulations apart from reality - not confusing what is with what might be. “It is the mark of an educated mind to be able to entertain a thought without accepting it.”

Representational abuse - “this banana is a phone” - how do we separate such consensual fantasies from reality?

Cognitive decoupling - creating copies of representations and holding them separate. Metabolically expensive, but just as well - otherwise we might get lost in simulation. Certainly some people struggle to pay attention without drifting into daydreams.

Primary representations have high priority, so eg we close our eyes to free up resources for simulation.

We only bother to measure capabilities which have high variance between people - eg face recognition is pretty much the same for almost everyone, so we don’t test it.

Large part of what IQ measures is likely cognitive decoupling, eg IQ correlates highly with executive function and working memory.

Existing distinction between optimal/maximal performance situations (narrow task, participant instructed to maximize some known measure) and typical performance situations (open-ended, no explicit measure given). IQ tests are administered under maximal performance conditions.

Another existing distinction - cognitive abilities vs cognitive styles / thinking dispositions. The former is largely what IQ tests measure. The latter is broadly scattered, but generally assesses differences in how people form+update beliefs and decompose+prioritize goals. IQ shows low-to-zero correlation with performance on various thinking disposition tests.

While IQ is more-is-better, thinking dispositions tend to have a sweet spot, eg you don’t want to be so reflective that you never actually act. However, in the modern environment most people are way below the sweet spot.

Many typical-performance tests show performance that correlates strongly with certain thinking dispositions even when controlling for IQ, ie certain thinking dispositions are not explained away by IQ and have value in addition to it. Eg a representative sample of US citizens showed a negative correlation between scores on a rationality test and number of arrests, evictions, bounced checks etc, even after controlling for cognitive ability.

Tripartite model: autonomous mind, algorithmic mind, reflective mind. Autonomous maps to System 1. Algorithmic maps to Gf. Reflective maps to rationality.

Reflective mind initiates inhibition. Algorithmic mind sustains it. No evidence for this yet. Maybe later?

Reflective mind initiates decoupling. Algorithmic mind sustains it. Again, no evidence yet.

IQ tests measure the ability to inhibit or decouple but not the tendency to inhibit or decouple. Expensive operations that need to be kicked in at critical moments.

Autonomous mindware - Encapsulated Knowledge Bases, Tightly Compiled Learned Information. These don’t appear to be standard terms, and they are not explained here.

Algorithmic mindware - strategies and production systems. eg long division?

Reflective mindware - beliefs, goals and general knowledge. Includes Gc, but the broad sampling used in Gc tests means that specific important concepts (eg statistical thinking) are not measured, so Gc is only very loosely correlated with rationality.

Cutting intelligence down to size

Discusses strategies for downsizing focus on intelligence to make space for rationality. Skipping much of this - interesting but repetitive.

Flynn effect - IQ increasing over time. Flynn puzzled as to why there is no corresponding increase in real-world intelligence, eg the pace of invention. Again, caused by confusion between IQ and broad-view intelligence.

Quotes from early scientists arguing that low-IQ individuals are barely human, eg can’t really feel pain. Tendency to conflate IQ scores with personal worth.

Folk psychology already has some support for the distinction, eg a survey of students shows default conflation between the two, but prompting about the difference produces ‘smart but foolish’-type distinctions.

Overall argument is that there is a motte-and-bailey going on with definitions of intelligence. Wants to concede the IQ motte and carve RQ out of the bailey.

Why intelligent people doing foolish things is no surprise

Why irrational? Cognitive misers. Mindware gap.

Cognitive misers - default to low metabolic cost heuristics to an extent that is not adaptive in current environment.

Evolutionary pressures: fast reaction > accurate reaction, false positive > false negative (for threats), reproductive fitness > goals or happiness. Also just a pile of kludges.

Mindware gap - cultural progress in effective cognitive tools but many of these tools not widely possessed (or even valued). Eg college level stats would make one wildly rich in 18th century Europe. Similarly for modern finance/economics.

Contaminated mindware - some mindware is actively harmful.

Hard to decide exactly when someone is being perfectly rational, but easy to measure when they are clearly wrong, so research tends to focus on classifying errors. Argues that education might be better off focusing on fixing known-bad thinking than on trying to define+measure what should be learned.

The cognitive miser: ways to avoid thinking

Fully disjunctive reasoning - examining all possible world states. Most people are capable of such reasoning but don’t do it unless prompted, leading to mistakes. IQ is only weakly correlated with spontaneously carrying out disjunctive reasoning.
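I believe the example the book uses here is Levesque’s puzzle (reconstructing from memory, so treat the details as mine): Jack is looking at Anne, Anne is looking at George; Jack is married, George is not. Is a married person looking at an unmarried person? A sketch of the full disjunctive answer:

```python
# Levesque's puzzle, reconstructed from memory: most people answer "cannot
# be determined" because they never enumerate the two cases for Anne.
for anne_married in (True, False):
    if anne_married:
        looker, target = "Anne (married)", "George (unmarried)"
    else:
        looker, target = "Jack (married)", "Anne (unmarried)"
    print(f"Anne married={anne_married}: {looker} is looking at {target}")
# In both branches a married person is looking at an unmarried person,
# so the answer is a determinate "yes" - but only if you check the disjuncts.
```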

Attribute substitution - subconsciously replacing assessment of an attribute with assessment of some correlated attribute that is easier to assess. A reasonable heuristic, but can lead to clearly wrong decisions, eg scope insensitivity or violating the dominance principle or basic laws of probability.

Eg substituting vividness, salience or accessibility for probability.

Cognitive misers are vulnerable to exploitation. Not enough for heuristics to be right 98% of the time when the remaining 2% are being actively exploited.

‘Laundry of life’ - most decisions have little effect compared to the small number of major decisions that change your life. Not totally sold on this - little, repeated decisions such as eating healthily or studying every day also add up to huge impacts. Non-laundry decisions tend to be unique/unfamiliar, which means that learned heuristics are more likely to fail.

Anchoring/adjustment.

Status quo bias / default heuristic.

Need to recognize when the decision environment is hostile to heuristics eg if available information/cues are being filtered your heuristics can be heavily influenced.

Framing and the cognitive miser

Strength of framing, priming and anchoring effects is a threat to personal autonomy. Same free will problem I felt was ignored in Nudge. I really like this way of framing dysrationalia as loss of free will - vivid scenario + loss aversion. Cognitive misers let others direct their attention for them.

Eg of framing - tax deductions have high approval, but a tax penalty for the complement group would not pass, even though the two are the same thing. Anecdotally, I related this idea to some family members and was surprised when they insisted that the two are not the same! Seemed to be a matter of mental accounting - if the tax deductions are ‘paid for’ by eg tax raises on large companies, it feels like there is no effect on the complement group.

Equality heuristic. Eg subjects argue for equally dividing profits between partners in a law firm. A separate group argues for equally dividing expenses. But equal expenses imply unequal profits, and vice-versa. Framing the appeal in terms of fairness reversed subjects’ usual political/economic views.
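To see why the two ‘fair’ splits conflict, a quick sketch with made-up numbers (mine, not the book’s):

```python
# Two law partners with unequal billings sharing fixed expenses
# (hypothetical numbers of my own, just to show the arithmetic).
billings = {"A": 600_000, "B": 400_000}
expenses = 200_000

# Split profits equally: each takes home the same amount...
profit_each = (sum(billings.values()) - expenses) / 2            # 400k each
# ...which implicitly splits the expenses unequally:
implied_expense = {p: b - profit_each for p, b in billings.items()}  # A: 200k, B: 0

# Split expenses equally: each pays 100k, so profits come out unequal:
profits = {p: b - expenses / 2 for p, b in billings.items()}     # A: 500k, B: 300k

print(profit_each, implied_expense, profits)
```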

Kahneman - “the basic principle of framing is the passive acceptance of the formulation given.” Subjects presented with both framings tend to realize the equivalence and aim for consistency, suggesting that the habit of generating multiple framings may help counter the bias. Note that Tetlock’s foxes often considered multiple framings of the same question at once - this seems like a very important habit.

Consider the example of credit card companies lobbying for the difference between card and cash prices to be framed as a cash discount rather than a card surcharge - intensely aware of how the framing would affect their success.

Politics is largely a fight to choose the framing, eg ‘tax relief’ - a framing in which taxation is suffering to be relieved.

In between-subjects framing experiments, susceptibility shows no correlation with IQ. In within-subjects experiments, a small correlation with IQ.

Major point of the book - intelligent people perform better only when you tell them what to do (in the domain of rational decision making). High IQ people are more able to make use of particular cognitive tools when cued, but no more likely to spontaneously self-cue. This is really interesting from the point of view of making people more rational - could we make tools for specific domains that cue the user at the appropriate time?

Myside processing: heads I win - tails I win too!

Myside bias - evaluating evidence, generating evidence, and testing hypotheses in a manner biased toward one’s own prior opinions and attitudes. Appears to encompass confirmation bias? Cognitively demanding to switch perspectives, ie to compute ‘if the world were this way, what would I expect to see’.

Overconfidence related to confirmation bias - failure to think of reasons one might be wrong.

Wonder how this relates to depression - where self-evaluations are usually negatively distorted. Do depressed people suffer from eg planning fallacy?

Also see biased assessment of bias - subjects uniformly rated themselves as less biased than their peers! Explains the phenomenon I notice where people will be well aware of the literature on bias but will fiercely defend their own mistakes. Do we need a buddy system? Pronin suggests this is caused by being deceived by conscious introspection. Can we defeat this by imagining the same decisions in a stranger? Would we be better at evaluating their bias than our own? TODO try this out!

Illusion of control in traders correlates negatively with income/bonuses.

Egocentrism in processing - leads to overconfidence in how others will perceive/understand our words/actions. Reminds me of the Smarties experiment - I guess theory of mind still isn’t great even in adults. Often responsible for bad UI - designers can’t imagine that the user will struggle to understand.

No correlation between myside bias and IQ.

A different pitfall of the cognitive miser: thinking a lot, but losing

Inability to initiate or sustain inhibition of the autonomous mind.

Trolley problem as example of inhibition - most people fail to override emotional response when considering pushing someone, but are not consciously aware of this and so vigorously defend it. Leads to many convoluted rationalizations trying to establish consistent moral framework.

Can also occur when cold, eg the jellybean experiment - distributions rigged so that the low-probability option has a greater absolute number of winning beans than the high-probability option. >30% of subjects were lured into picking the low-probability option even though they were given the exact numbers.
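If I’m remembering the classic numbers correctly (they aren’t quoted here):

```python
# Ratio bias, with the oft-quoted numbers (from memory, not from the book):
large_bowl = 9 / 100   # more winning beans, but only a 9% chance
small_bowl = 1 / 10    # fewer winning beans, 10% chance
print(large_bowl < small_bowl)  # True: the small bowl strictly dominates
```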

I vaguely recall an example given to a room full of traders, where two sets of choices are presented. There is no consistent utility function that would lead to choosing differently in each case, but it was still compelling. Presented as the cognitive equivalent of an optical illusion - even knowing the correct answer rationally, it was not possible to stop the feeling.

Toy logic problems test the ability to separate reasoning from prior belief. Such unnatural de-contextualisation is increasingly important, eg jury trials, ‘the customer is always right’, anti-discrimination law. The last one is particularly interesting - to even have the ability to make non-discriminatory decisions one has to be able to suspend autonomous responses which rely on stereotypes - not a matter of choosing to discriminate so much as struggling not to.

Willpower as the ability to inhibit/control Type 1 responses.

Hyperbolic discounting.
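The book notes don’t expand on this, so a minimal sketch of my own (numbers mine) showing the characteristic preference reversal, with value v = A / (1 + kd):

```python
# Hyperbolic discounting v = A / (1 + k*d) produces preference reversals;
# exponential discounting never does, because the ratio of two exponentially
# discounted values is constant over time.
def value(amount, delay_days, k=1.0):
    return amount / (1 + k * delay_days)

# Judged today, take the smaller-sooner reward:
print(value(55, 0), value(100, 1))    # 55.0 vs 50.0
# Push both options 30 days out and the preference flips:
print(value(55, 30), value(100, 31))  # ~1.77 vs ~3.13
```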

Bundling rule - succumbing today = succumbing every day - maps individual temptations onto their long-term effect. TODO try this.

Mindware gaps

Debates about rationality have focused on purely cognitive strategies, obscuring the possibility that the ultimate standard of rationality might be the decision to make use of superior tools.

Mindware gap - even when thinking carefully, one may not have the appropriate rules/knowledge/strategies to make a good decision, eg before probability theory it was very hard to reason about risk.

Eg facilitated communication - hard to see flaws without exposure to scientific thinking, especially idea of using control group.

Eg convictions over multiple cot deaths - the argument was missing the concept of dependence between random variables.
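Presumably the Sally Clark-style cases; the numbers below are the widely quoted ones, from memory rather than from the book:

```python
# The infamous "1 in 73 million" figure: squaring the per-child probability
# of a cot death assumes the two deaths are independent.
p_one_death = 1 / 8543           # quoted probability for an affluent family
naive_both = p_one_death ** 2    # ~1 in 73 million
print(1 / naive_both)            # ~72,982,849
# Shared genetic and environmental risk factors mean a second death in the
# same family is far more likely than independence implies, so the squaring
# step (and hence the headline figure) is invalid.
```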

Bayes rule. Likelihood ratio - P(data | hypothesis) / P(data | ~hypothesis). Posterior odds = prior odds * LR (where odds = P(H) / P(~H)). I haven’t seen this formulation before. Can think of LR as the shift in likelihood, independent of prior odds - easier to share / pool with others.
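A minimal sketch of the odds-form update, with hypothetical numbers of my own - a diagnostic test with 90% sensitivity, 80% specificity, and a 10% base rate:

```python
# Odds-form Bayes: posterior_odds = prior_odds * LR, LR = P(D|H) / P(D|~H).
def posterior_odds(prior_h, p_d_given_h, p_d_given_not_h):
    prior_odds = prior_h / (1 - prior_h)
    likelihood_ratio = p_d_given_h / p_d_given_not_h
    return prior_odds * likelihood_ratio

def odds_to_prob(odds):
    return odds / (1 + odds)

# 10% base rate, 90% sensitivity, 80% specificity (so P(D|~H) = 0.2):
post = posterior_odds(0.10, 0.90, 0.20)
print(post, odds_to_prob(post))  # odds 0.5 -> probability ~0.33
```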

Common failure (eg in the facilitated communication case) is the failure to consider P(data | ~hypothesis). In an experiment where subjects were asked to pick which cards contained the data needed to make a diagnostic decision, ~half failed to pick P(D|~H). Is this enough to explain confirmation bias? A belief update rule that is one-sided in this way might tend to confirm itself, because it can’t see cases where the data is better explained by ~H. Might be interesting to play around with.
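Playing around with it: a toy construction of my own (not from the book), where the data is likely under both hypotheses - as in facilitated communication, where ‘the patient produced an answer’ is near-certain either way - and the one-sided updater fills the missing P(D|~H) with 1 - P(D|H):

```python
# One-sided vs proper odds updating on data that is likely under BOTH
# hypotheses. The substitution of 1 - P(D|H) for the ignored P(D|~H) is
# my own formalization of the failure, not the book's.
p_d_h, p_d_not_h = 0.9, 0.85
proper = one_sided = 0.1 / 0.9   # both start at sceptical prior odds

for _ in range(20):                    # twenty confirming sessions
    proper *= p_d_h / p_d_not_h        # true LR ~1.06: belief barely moves
    one_sided *= p_d_h / (1 - p_d_h)   # imagined LR of 9: belief explodes

print(proper)     # ~0.35 -> still odds-against the hypothesis
print(one_sided)  # ~1.3e18 -> effectively certain
```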

Falsification, eg the 2-4-6 task. Genius intervention that improves performance - present two categories, eg RED vs BLUE, rather than a single rule with pass/fail. Confirming one falsifies the other, cueing subjects to try to put triplets in both groups.
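A sketch of why pure positive testing fails on the 2-4-6 task (the ‘counting up by 2’ hypothesis is the classic one; the code framing is mine):

```python
# 2-4-6 task: the hidden rule accepts ANY increasing triple, but subjects
# who only test instances of their own narrower hypothesis never find out.
hidden_rule = lambda t: t[0] < t[1] < t[2]
my_hypothesis = lambda t: t[1] - t[0] == t[2] - t[1] == 2  # "counting up by 2"

positive_tests = [(2, 4, 6), (10, 12, 14), (1, 3, 5)]  # all get "yes": no information
negative_test = (1, 2, 3)                              # outside my hypothesis
for t in positive_tests + [negative_test]:
    print(t, "rule:", hidden_rule(t), "hypothesis:", my_hypothesis(t))
# (1, 2, 3) gets a "yes" from the rule but a "no" from my hypothesis -
# only tests the hypothesis REJECTS can falsify it.
```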

Base rate neglect.

Conjunction error.

^ all declarative knowledge. Let’s talk about strategic knowledge too.

Belief identification - thinking disposition measuring whether changing beliefs to get to truth is more important than defending current beliefs.

Reflectivity/impulsivity - measured by the Matching Familiar Figures Test - long delay + low error vs short delay + high error.

Suppose we found that low belief identification is correlated with higher income. It would be tempting to take this as support for the idea that low belief identification makes people more effective. But it is also plausible that belief identification is correlated with religiosity, which (in the US) is correlated with low income. Going to have to be very careful about this sort of evidence when I try to figure out whether rationality is low-hanging fruit for intelligence augmentation.

Other strategic dispositions: typical intellectual engagement, need for closure, belief perseverance, confirmation bias, overconfidence, openness to experience, faith in intuition, counterfactual thinking, categorical thinking, superstitious thinking, dogmatism. These are strategic rather than declarative in that there is no correct setting - their worth depends on the environment. That said, they are largely tuned for the evolutionary environment.

Some correlation between rational mindware and intelligence (likely due to educational choices), but less than one would hope, eg less than half of subjects with above-median SAT scores correctly applied base rates in a diagnostic example.

In most professions, people are trained in the jargon and skills necessary to understand the profession, but are not necessarily given training in making the kinds of decisions that members of the profession have to make.

We expect doctors, lawyers, engineers etc to make good decisions about likelihood and risk, but we don’t educate them on the subject at all.

Contaminated mindware

Albania imploded because more than half of the population, and sums amounting to half of GDP, were invested in Ponzi schemes. …wow. Explains this as contaminated mindware - irrational beliefs about economics.

Similarly for the rash of child abuse cases ‘revealed’ through ‘recovered memory’. Bad mindware was along the lines of ‘patient is always right’ - no search for independent confirmation of claims of eg satanic abuse.

Contaminated mindware often spread through enticing narrative. Often complex - more likely to appeal to high intelligence. Evidence given doesn’t support this very strongly - single case of investment fraud, seems explainable by class segregation. But certainly intelligence doesn’t seem to be a defense either.

Knowledge projection - when current beliefs are mostly true, filtering new evidence through current beliefs leads to learning faster, by discarding more false information. The argument is that because belief revision has large downstream effects on other beliefs, it must be computationally expensive - not worth doing unless we see evidence that is totally implausible under the current belief structure. A reasonable heuristic, but when strong beliefs are untrue, the same filtering makes one faster at learning more untrue things. Hence eg holocaust deniers - isolated on ‘islands of false belief’.

Folk theory - ‘beliefs are like possessions’ ie consciously/deliberately acquired to serve our interests. Vs memetics where beliefs are selected for virality/persistence rather than accuracy/usefulness. Question flips from how people acquire beliefs to how beliefs acquire people.

Deal breaker memes - mindware that actively sabotages mental environment to protect/isolate itself - eg faith > evidence. Prevents mixing with other memes.

Falsifiability as defense against parasitic mindware - don’t trust ideas that resist evaluation.

Belief traps - beliefs which, if true, are too expensive to test. Be suspicious.

How many ways can thinking go wrong: a taxonomy of irrational thinking tendencies and their relation to intelligence

Regardless of lack of consensus on exact model, seems clear that only a few cognitive features underlie intelligence.

Not so clear for rationality. Proposes a taxonomy of errors.

Adds serial associative cognition to the tripartite model - Type 2 thinking which does not involve simulation/decoupling. No examples?

Considers the 4-card falsification problem. No autonomous mind response would be useful here, and subjects talking aloud reveal some kind of Type 2 thinking. Or maybe they are just rationalizing? SAC is Type 2 but does not engage in simulation (eg of what the world would look like in the counterfactual). Focal bias - only considers the presented model. Suggests that this is another example of the cognitive miser - if Type 1 can’t help, engage in cheap Type 2.
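For reference, the classic letters/numbers version of the 4-card task, supplied from memory: cards show A, K, 4, 7; the rule is ‘if a card has a vowel on one side, it has an even number on the other’. Only cards that could falsify the rule are worth turning:

```python
# Wason selection task: the vowel (might hide an odd number) and the odd
# number (might hide a vowel) are the only informative cards to turn.
cards = ["A", "K", "4", "7"]

def could_falsify(face: str) -> bool:
    if face.isalpha():
        return face in "AEIOU"   # consonants can't violate the rule
    return int(face) % 2 == 1    # even numbers can't violate it either

print([c for c in cards if could_falsify(c)])  # ['A', '7'] - most people pick A and 4
```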

Implies that other functions of the reflective mind are to interrupt SAC or to refocus it.

Rough taxonomy:

Don’t really understand the explanation of ‘self’/egocentrism given here - seems to be roughly the tendency towards constructing a myside focal bias. Ah, explained later as tendency to construct models which support self-esteem/self-worth.

Where does the idea about belief preservation / islands of false belief fit into this taxonomy? Seems like ‘failure to activate alternate hypotheses’, but as a result of the cognitive miser rather than a mindware gap.

Many of the examples in the book fall into multiple categories.

Adds a sixth category - Mr Spock Syndrome. Brain damage or mental disorders can interfere with valence judgments. Can’t make decisions effectively if you can’t correctly attach values to them. Some weak evidence that this is present in some otherwise healthy people.

Correlation between categories and IQ as judged by fairly narrow evidence:

Where ~0 is used, it’s because the author doesn’t give exact numbers but just says eg ‘virtually none’ or ‘very small’. TODO look up references.

The social benefits of increasing human rationality - and meliorating irrationality

Argues that if we could increase everyone’s IQ overnight, little would change - we would just make the wrong decisions faster. Vs Hive Mind. Fight!

Lots of attention on intelligence, very little on rationality. Irrationality is clearly costly though, eg estimates that more is spent on pseudo-scientific medicine than on scientific medicine.

Mindware seems teachable, eg evidence that disjunctive reasoning can be taught and is not strongly limited by intelligence. Strategic mindware such as ‘think of the opposite’ is not taxing and so is likely widely learnable. Similarly, pointers in other books to evidence that reappraisal can be taught, which is more or less the same as ‘consider alternative hypotheses’.

Probabilistic / statistical thinking also teachable, although less so. Somewhat counterintuitive. Similarly for scientific principles around establishing causality.

Implementation intentions - making conscious, verbal declarations of ‘when X occurs, I will do Y’ can place a trigger in the autonomous mind. Mental bundling also promising.

Strategies for affective forecasting that focus attention on the outside view, eg ‘ask someone else in the same situation, don’t trust imagination’.

Falsifiability can help inoculate against contaminated mindware. Even concept of memetics itself is useful - being able to think about memes makes it easier to accept the idea that some of your beliefs might be parasites.

Can also alter environment to remove opportunities for failure. Nudge-style policies to prevent specific errors. Pre-commitment (eg not having tempting food in the house) works similarly at an individual level.

No known conceptual or theoretical limits to constructing an RQ test. Just a matter of time and money.

Many results emerging on areas where rationality can be taught. But no widespread attempt to teach them.

Thoughts

Fairly compelling evidence towards the end that rationality is sufficiently separate from IQ to be studied/measured independently. The promise of an RQ test is, uh, promising but I wish the author had more directly confronted the problem that we can’t measure instrumental rationality, only obvious deviations from it.

Similarly compelling evidence that tendency to initiate decoupling or inhibition is separate from ability to sustain it. It’s not a distinction I had even thought to make before.

The tripartite model doesn’t actually seem to add anything over dual process theory. It’s not clear to me which features belong in the reflective vs cognitive mind or what tests you would use to justify the distinction, whereas Type 1 vs Type 2 processes have clear physiological markers. Any model at this stage is going to be an approximation, of course, but the tripartite model is not giving me any extra insight or predictive power. If IQ does indeed break down to mostly working memory and executive function, I don’t see any reason to group those together in one mind while grouping other similar traits in another mind.

The taxonomy of rationality failures is potentially useful. I keep coming across more and more biases - having a systematic way to organize and understand them would help. Some cursory thought doesn’t produce anything that doesn’t fit the taxonomy.

The list of references on teaching rationality is a valuable starting point, but it doesn’t seem to include any negative results so I will have to search from scratch too to avoid that bias.

The idea of decoupling/simulation as a concrete operation is one that I had missed so far. I’m not sure how well supported it is - a quick search for the term mostly leads back to this book. Similarly, not sure how much evidence there is that it is a more expensive operation than other Type 2 thought. It certainly has a useful place in the taxonomy though - there are a lot of biases that fall under failure to simulate other possible worlds.

Though it was a sidenote in the book, I really like the framing of dysrationalia as surrendering free will. It’s vivid, clearly negative and framed as a loss - a good meme to combat the emotionless Spock.

Overall, the book seems to be largely an advocacy piece. It’s well-written and interesting, but I worry about the strength of the argument (eg missing negative results) and I feel like I could have learned the same information from a much smaller book if the advocacy was toned down. This may just be a symptom of reading pop science books which have to explain everything from scratch - perhaps I should be pulling in more textbooks and literature reviews.