http://smile.amazon.com/dp/B009JU6UPG

This book is written in a very motivational-anecdote-you-too-can-change-your-life style which immediately rings alarm bells for me. But it’s on CFAR’s reading list so I stuck with it and ended up reading several other books by the same authors on how to change behavior and on how to craft sticky ideas. Those other books explain the writing style here - it’s carefully structured to be persuasive, memorable and actionable.

That style of writing makes sense from a paternalistic point of view - the authors are genuinely trying to convey sound advice in a way that will maximize its impact. It’s just not very convincing for a reader who is starting from a position of active doubt. In all three books they often criticize some existing wisdom as completely flawed and give their own replacement instead, but justify it with only a single anecdote or experiment.

The books are clearly carefully researched and built on long experience in the field, and choosing not to drown the reader in the fine details of that research makes for a more effective educational tool. But it’s frustrating for a reader who is trying to be a little more epistemically careful than average.

Villains

Typical talk about bias, overconfidence and introspection. Rather than an exhaustive list of biases though, they focus on four major villains of decision making: narrow framing, confirmation bias, short-term emotion, and overconfidence.

The classic ‘make a list of pros and cons’ approach does not address any of these villains.

The book instead presents the WRAP process (which has a handy poster you can put on your wall): Widen your options, Reality-test your assumptions, Attain distance before deciding, and Prepare to be wrong.

I didn’t realize on first reading, but the mapping between the villains and the process is a little forced.

The process is aimed at decisions where you have time to think, gather evidence, consider alternatives etc. Decisions that occur in a matter of seconds or minutes are the domain of intuition, and that’s a totally different story.

You are still human, so this process won’t magically prevent you from blundering. But small improvements can make a big difference, especially in competitive fields where the difference between average and world-class might be a 25% vs a 35% success rate. The authors’ example is baseball batting, which is a weird choice coming right after the explanation that such short-term calls are not the subject of this book. A better example might be poker, where even world-class players still lose the majority of hands. They just win slightly more of the hands that matter.
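To make the ‘small edges compound’ point concrete, here’s a minimal Python sketch with made-up numbers - the 48%/52% win rates and even-money stakes are my own assumptions, not from the book:

```python
import random

def session(win_rate, hands=1000, stake=1.0):
    """Net result of many even-money gambles at a fixed win rate."""
    return sum(stake if random.random() < win_rate else -stake
               for _ in range(hands))

random.seed(0)
# An 'average' player winning 48% of hands bleeds money, while a
# 'world-class' player winning 52% profits steadily. The edge per hand
# is only +/-0.04 units, but over 1000 hands that compounds to roughly
# -/+40 units.
print(session(0.48))
print(session(0.52))
```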

Finally, the fifth villain is failing to start the decision process, or even to notice that there is a decision to be made. They introduce ‘tripwires’ - pre-determined signals that kick you out of autopilot and into the decision-making process eg a threshold weight which, when crossed, triggers you to rein in your diet. A common example is people staying in jobs they hate because there is never a single point where they are forced to decide whether to leave. After reading this book I started using snoozed emails to send decision points into the future eg in three months I’ll receive an email from myself asking if I’m still happy about my move to Berlin, and prompting me to apply the WRAP process to deciding if I need to change anything.

Widen your options

Look out for “whether or not…” decisions - there are never only two options.

Consider opportunity costs. Instead of “do X or not do X” think “do X or (not do X and do something else with those resources)”. Instead of “should I buy the fancy $1000 stereo or the basic $700 stereo” think “should I buy the fancy $1000 stereo or the basic $700 stereo plus $300 of albums”. Econs think this way by default, but humans benefit from a nudge.

You will never encounter a car salesman who says, “Hey, why not buy the entry-level model and use the savings to take your family on vacation?”

Imagine the ‘vanishing options genie’ appears in a puff of smoke, and declares that if you pick any of the options you are currently considering he will make your head explode. Are you sure there are no other options? Anecdotally, this is hilariously effective, and it would have been worth reading the whole book just for this idea.

I’m mildly surprised by how effectively people can be nudged out of narrow framing. Most cognitive biases are very difficult to overcome, but this one seems to hardly put up a fight at all. I wonder if there is some fundamental reason for this.

Multitrack - explore multiple options in parallel. “This AND that” rather than “this OR that”.

What’s the right number of options? Too many creates choice overload / paralysis by analysis. But 2 is certainly much better than 1, and 3 is a nice round number.

Options need to be meaningfully distinct, not just tweaks (the vanishing options genie can help here), and they need to be real competitors rather than just strawman opposition to the favourite.

As a personal example of multitracking, I’m currently paying the bills with consulting, working on my own research projects and applying to part-time grad school programs. A few months ago, I was thinking of those as exclusive options and trying to decide which one to commit years of my life to, without taking time to explore each.

The classic case would be to confront the policymaker with the choices of nuclear war, present policy, or surrender.

Another cause of narrow framing is being locked into a single mindset - ‘prevention focus’ or ‘promotion focus’. These lead to focusing on avoiding cons or pursuing pros, respectively. Need to actively shift between both when generating options eg in response to recession, some companies reacted defensively by cutting costs and reducing risks, others reacted offensively by betting big and investing in new markets, but the most successful companies were those that did both. There isn’t a clear suggestion on how to do this, but I imagine that just dividing options into prevention vs promotion would make it clear whether or not you have neglected one side.

Look for bright spots - find places, people or practices that are successful and figure out how to replicate them eg rather than “how do we fix malnutrition in poor villages” think “which villages are not suffering from malnutrition and what are they doing differently”. I guess the advantage here is that, in addition to generating options you might not have thought of, the option already has some field testing. There’s no guidance though on how to separate causation from correlation, or how to figure out if the solution is reproducible.

Playlists - list of heuristics / categories for generating options eg “can you personify the product”, “is there a key color for the brand”. Useful triggers for widening the search. A beautiful example of this is How to solve it, which I often turn to when I’m stumped by an analytic problem.

Ladder up - look for solutions to similar problems, starting with those that are near-identical and progressively abstracting more. For example, trying to solve waiting times in a school cafeteria - are there situations when the waiting times are lower, how do other school cafeterias handle lines, how do other businesses handle lines, how are large crowds managed in general, how are other flow-control problems handled etc.

Reality-test your assumptions

Seek out disagreement, especially for high-risk decisions - eg create a murder board.

For each option, consider “what would have to be true for this option to be the correct answer” or “what evidence would persuade me to change my mind”. Refocuses the conversation from competing sides to facts that can be researched, and primes people to notice when the evidence in question pops up. Allows people to disagree without being disagreeable. Reminds me for some reason of the use of bots by the Rust team to refocus the interaction from ‘me vs you’ to ‘us vs the bot’. Rather than arguing the merits of the code, you are now working together to see if this objective bar can be met.

Ask disconfirming questions eg rather than “do you get lots of holiday time” ask “how many holiday days have you taken in the last year, where did you go, how much notice did you give, were any holidays canceled on you” etc. Page 923 of On Being a Happy, Healthy, and Ethical Member of an Unhappy, Unhealthy, and Unethical Profession has many great examples of such questions eg “who were the last three associates to leave the firm, what are they doing now, how can I contact them”.

Ask questions that have concrete, factual answers and do not rely on a shared reference frame (eg how much autonomy is a lot?). And decide in advance what answers you want to hear, so you can’t later talk yourself into thinking it isn’t that bad. Also ask broad, open-ended questions to uncover problems you might not have predicted eg “if there was one thing you could change about this company, what would it be”.

Consider the opposite. List key assumptions / beliefs and find low-risk ways to test them.

Consider both the inside view and the outside view. The outside view provides base rates, which then anchor your assessment of the inside evidence eg you might naively put your chances of business success at 50%, but once you find out that 99% of similar startups fail, you may be forced to realise that you don’t have sufficient evidence about your own uniqueness to justify updating very far from that base rate.
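A minimal sketch of why the base rate dominates, working in odds form - the 1% base rate is from the example above, while the 10:1 likelihood ratio is an invented stand-in for ‘strong evidence of your own uniqueness’:

```python
def posterior(base_rate, likelihood_ratio):
    """Combine an outside-view base rate with inside-view evidence:
    posterior odds = prior odds * likelihood ratio."""
    prior_odds = base_rate / (1 - base_rate)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# 1% of similar startups succeed. Even evidence that is 10x more likely
# if you'd succeed than if you'd fail only lifts the estimate to ~9% -
# nowhere near the naive 50%.
print(posterior(0.01, 10))  # ~0.09
```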

A simple way to obtain an outside view is to find someone with more experience than you and ask them for a base rate. Beware that asking for a specific prediction about your case might trigger the same inside view distortion that you are trying to avoid. Good questions: “what are the important variables in a case like this”, “what kind of evidence can tip the verdict”, “what percentage of cases like this get settled before trial”. Bad questions: “do you think I can win this case”, “what would you do if you were me”.

Superforecasting goes into a lot more detail on how to balance inside and outside views.

Break out of filter bubbles. Seek out multiple independent points of view and don’t contaminate them by sharing data between them. See how many of those views turn up the same points.

Sample texture. Eg rather than only looking at the percentage of good reviews, read a few random reviews to see if the pros and cons apply to you. Eg rather than only looking at defect rates in a factory, visit some of the workers and see how the defects happen.

(The section from inside/outside view to here is summarized as ‘zooming in and out’.)

Ooch - find cheap, fast ways to test theories and get feedback. For some reason I have violent aesthetic objections to this word. Is there an alternative? Eg before committing years to studying for a particular career, find a way to intern or shadow someone for a few weeks. Eg hire several candidates for trial periods rather than relying on interviews to pick one.

Counters overconfidence (refers to Tetlock’s work on prediction).

Also works as a way to overcome fear - can inch your way into scary decisions, at each step finding that your fears are not confirmed.

Also good for dealing with disagreement. If you are convinced someone is wrong, let them try an experiment and agree beforehand on the criteria for success. One of you will be surprised. And it may even take less time and energy than arguing to consensus.

Doesn’t work for decisions that require commitment though eg a three-week internship can’t tell you if you have the grit for a 7-year medical degree.

Attain distance before deciding

Short-term emotions lead to short-term thinking. Emotion is what assigns values to outcomes, but we overweight short-term emotions compared to long-term emotions eg being too scared to ask someone out.

10/10/10 - how will you feel about this decision ten minutes from now? How about ten months? Ten years? Eg asking someone out - in ten minutes you might be happy or embarrassed, in ten years you might be married or might have totally forgotten about the whole thing. Explicitly considering different time-frames helps correctly weigh the impact of short-term emotions. The main value here seems to be as a heuristic for dealing with fear - it’s easy to otherwise systematically fail to take options which gamble short-term downsides against long-term upsides. Simply ignoring fear does not distinguish the situations where the fear is sensible.

Another similar idea I learned at Jane St is to collect similar decisions into a category and think about the whole collection at once. So the decision is not based on ‘what will happen if I take this leap’ but ‘what will my life be like if I habitually make these kinds of leaps’ (and when it comes to parkour these are decisions about literal leaps!). It’s similar to the idea of bundling habits, so instead of thinking ‘should I sleep in today’ you think ‘should I habitually sleep in for the rest of my life’, which makes the correct decision somewhat clearer.

Yet another similar idea I read somewhere recently is to explicitly imagine the worst case outcome and then figure out how you would respond. For example, when lead climbing I might look at a particular section and feel afraid, but then realize that the worst thing that could realistically happen is I sprain an ankle, and if it came to that I would only be out of training for a month or two and would still be glad that I tried. It’s oddly soothing.

Mere-exposure effect. Things that seem obvious or true may just be familiar. I could rant for hours about folk truths in programming. Loss aversion. Status quo bias.

Useful questions to counter these - “what would our successors do?”, “what would I advise my best friend to do in this situation”. Attains distance by simulating a person who does not have our emotional attachments to the status quo?

Figure out your core priorities and how to weight them. Don’t be distracted by pros/cons that aren’t priorities. Agonizing over decisions is often a sign of unclear priorities.

In organizations, make sure everyone knows what the core priorities are. Company value statements are useful to the extent that they prioritize values and aid decisions about tradeoffs. “Courage, integrity and love” does not help anyone make decisions. “First, do no harm”, on the other hand, is crystal clear about priorities (you could argue about whether it is the correct priority, but that’s a different matter).

Seems obvious, but most people have not explicitly thought about their core priorities and are instead driven by day-to-day nudges.

If forensic analysts confiscated your calendar, email records and browsing history for the past six months, what would they conclude are your core priorities?

Can’t make time for everything, so also make a ‘stop doing’ list, containing everything you do that is dominated by opportunity costs. Figure out how to remove those things from your life, to make room for the core priorities.

Suggests setting a one-hour timer and, when it goes off, asking if what you are doing right now is what you most need/want to be doing. Comparing this to eg Pomodoro reminds me of productivity vs effectiveness - the difference between doing more things and doing the right things.
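The timer itself is trivial to automate - a minimal sketch, assuming you keep a terminal open all day (swap the print for a desktop notification if you have one):

```python
import time

# An hourly nudge out of autopilot. Unlike a Pomodoro timer ('are you
# working?'), this is a priorities check ('are you doing the right thing?').
while True:
    time.sleep(60 * 60)
    print("Is what you are doing right now what you most need/want to be doing?")
```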

Prepare to be wrong

Bookend the future - in addition to point estimates, also explicitly consider worst- and best-case scenarios. Each estimate taps a different pool of knowledge / a different kind of simulation.

Prospective hindsight - imagine the event in consideration has already happened, and weave a story about how/why. We are better at generating stories about concrete scenarios than we are at pulling probabilities out of the air.

Premortem - “imagine we are X months into the future and this plan has failed horribly - explain why” - helpful counter to the planning fallacy, generates risks that can now be warded off.

Preparade - a similar practice - are you prepared to deal with unexpected levels of success? Can you get enough parts supplied? Can you ship that many units? Can prevent unexpected success from overrunning you and becoming failure.

Safety factors - the premortem and preparade help with foreseeable problems - safety factors guard against unforeseeable problems.

Vaccinate against disappointment eg ‘realistic job previews’ in call centres that are up-front about the level of stress and abuse lead to higher retention.

Mental simulation - rehearse possible scenarios and your responses in your head eg different ways a difficult conversation might go.

Tripwires

No-one ever sits down and decides to live their life without ever traveling. But often the decision to travel right now simply never happens.

Tripwires are forced decision points to kick you out of autopilot eg Zappos offered employees $1000 to quit - which forces them to stop and consider their future right now rather than putting it off indefinitely. (Also provides psychological benefits of committing to the decision rather than being in state of wavering forever.)

I’m trying to figure out how you would set a tripwire for traveling. Perhaps something like “if I haven’t made the trip three years from now, I immediately quit my job and do it, no matter what”.

When a decision is made based on current evidence, set a tripwire to trigger reexamining the evidence at some fixed point in the future. Eg Kodak stuck with print because customers weren’t happy with the quality of digital pictures - they should have set up a yearly survey with a trigger such as “if the survey indicates that 10% of customers are happy with digital, we re-examine the space”.

Deadlines are also a kind of tripwire, forcing you to commit to some action rather than eternally putting it off.

Similarly, employee reviews are a kind of tripwire on self-examination. If you get a terrible review, it’s time to look at your work and see if there is anything you can do to improve.

Partitions - eg dividing food into small, sealed portions so that opening each portion acts as a tripwire nudging you to think about whether to keep eating. If a bucket of popcorn came as 20 individual packets, you would never eat all 20. Investment rounds, personal budgets etc.

Tripwires can help make safe spaces for risky experiments eg “I’ll launch my own business, but I won’t invest more than $10k until I have my first paying customer”. Capping your risk allows you to stop worrying about it and focus on the experiment itself.

Can give tripwires to other people to prime them (eg “if you see someone using our product in a way we might not have anticipated, come tell us about it”) or to make them feel safe expressing intuition (eg “if a nurse is worried about a patient, they SHOULD call the rapid-response team”). These don’t seem to quite match the previous definition of a tripwire. Is there a better name?

(I’ve skipped the chapter on ‘trusting the process’, the recommended reading, case studies for practice, examples of common obstacles and ways to overcome them, and a ton of end notes.)

Thoughts

I’m not sure how to evaluate things like this. The advice all seems self-evident. Unfortunately, most advice does, even contradictory advice. There is probably a lot of evidence and experience behind the advice, but it’s not presented here so I don’t know which parts are well-supported and which parts are extrapolation or guesswork.

A lot of the advice is low risk/commitment though, so I can at least pick out the things that I want to try or that I have found useful in the past.

I’ve also been thinking a lot about autopilot and nudges vs core priorities, especially after watching Designing For Agency. The one-hour alarm suggestion is a start, but I want to figure out how to ensure the success of priorities which require planning and preparation, or which don’t have a regular rhythm to them. It’s easy to figure out what kind of system might lead to spending more time reading books and less time reading twitter. It’s much harder to figure out what kind of system might lead to spending more time trying new activities, because that’s the kind of thing that happens irregularly and requires some kind of premeditation to pick an activity and figure out how to arrange it.

I dunno. Research continues…