Author's perspective - not scientific consensus!
Trying to explain the actions of another person is not so different from trying to explain your own actions. In both cases, we are trying to infer the internals based on observable clues. For our own thoughts we have somewhat more observations, but on the other hand we are significantly more biased towards flattering, rational and/or coherent explanations of ourselves. I keep expecting someone to coin ‘naive introspection’ as a counterpart of naive realism.
I argue that we should draw a radically different moral. That interpretation of the motivations of real people is no different from the interpretations of fictional characters. There is no ‘deep’ truth about the real Anna’s motivations, any more than there is for the fictional Anna. No amount of therapy, clever experiment, or neuroscientific measurement can recover true motives, not because the search is too difficult, but because there is nothing to search for.
I quote this in its entirety, because I don’t think I can fairly paraphrase it at this early point.
Rather than deep planning or motives, think of your mind as improvising as it goes along. Like case law - we create opinions on the fly and then try to be consistent - to stay in character. This sounds like it is heading towards the logic of appropriateness - making decisions via identity / via the question “what would someone like me do in this situation”.
Hindsight bias and post-hoc rationalization make it very difficult to accurately recall past expectations, motivations and reasoning. But there may not have been any to recall in the first place. We are so good at spinning up plausible stories on the fly that it feels as if we are exploring a great web of reasoning and motivation, but it’s an illusion.
Perception illusion as analog - we can only see detail in a tiny fraction of our field of vision, but anything we attend to is swept by that fraction, creating the illusion that we can see in detail across a much wider field.
Give test subjects shots of adrenaline and then have a confederate annoy them. If they don’t know they have been given adrenaline, they attribute the physical sensations to annoyance at the confederate. If they know they’ve been given adrenaline, this effect is reduced. That is, when trying to figure out what they are feeling, the missing information causes them to mistakenly attribute to emotion feelings that they would not otherwise have perceived.
Commitment effect. Get people to make a choice, change their choice without them noticing, and they defend the new choice.
Decisions based on heuristics, justified on-the-fly afterwards. Heuristics very sensitive to framing etc.
“Be true to yourself” bugs me for these reasons - it relies on the idea that there is some hidden immutable ‘true self’ that you just need to uncover. In reality, it appears that our selves are more of an ongoing narrative imposed over past actions and events.
Self-discovery => self-writing.
Easterlin paradox - richer nations are not happier on average, but richer people within each nation are. This is heavily disputed. I have another set of notes coming up on how ridiculously hard it is to justify anything that claims to measure happiness within a single population and experiment, let alone comparing data from different surveys across different countries and timespans.
Maybe what this is really telling us is that we don’t have an absolute sense of how happy we are, so we come up with a number on the fly based on whatever comparisons come to mind easily. This would explain why framing and current mood have such a big impact on reported numbers.
Certainly for senses we can easily measure in the lab, like colour or weight, human perception is much better at dealing with comparisons than with absolute measures. The illusions demonstrating that colour perception is relative work because our visual system adjusts for illumination, to more correctly perceive the actual absolute colour of the material. Similarly, it’s hard to directly perceive the size of something in your visual field because the visual system is using environmental cues to figure out its actual absolute size. These both seem like terrible examples for arguing that perception is fundamentally relative rather than absolute - they are caused by postprocessing that’s trying to remove relative effects and determine the underlying absolute property.
By changing the options presented, you can change people’s risk/reward sensitivity. A given person might generally choose one of the middle options presented, rather than having a fixed sensitivity against which all the options are compared. The author discusses experiments they ran with presumably small or simulated losses and generalises to major life decisions. But say I give people a series of options wagering their house against small amounts of cash on a coin toss - no one would take any of these options, regardless of the framing. So there must be a point at which absolute values of risk start to be felt, but the author talks as if decisions are always totally relative.
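A minimal sketch of the distinction being argued here - a toy model of my own, not anything from the book. A purely relative chooser picks the middle option from whatever menu is presented, so shifting the menu shifts the choice; an absolute chooser with a fixed risk threshold is unmoved by the framing:

```python
def relative_choice(options):
    """Pick the middle option by risk - choice depends entirely on the menu."""
    ranked = sorted(options, key=lambda o: o[1])  # options: (label, risk) pairs
    return ranked[len(ranked) // 2][0]

def absolute_choice(options, max_risk=0.5):
    """Pick the riskiest option under a fixed threshold - framing-insensitive."""
    safe = [o for o in options if o[1] <= max_risk]
    return max(safe, key=lambda o: o[1])[0] if safe else None

low_menu = [("A", 0.1), ("B", 0.2), ("C", 0.3)]
high_menu = [("B", 0.2), ("C", 0.3), ("D", 0.8)]

# Adding a risky decoy (D) shifts the relative chooser from B to C...
print(relative_choice(low_menu))   # B
print(relative_choice(high_menu))  # C
# ...while the absolute chooser sticks with C on both menus.
print(absolute_choice(low_menu))   # C
print(absolute_choice(high_menu))  # C
```

The house-wager objection above is, in this toy, the claim that real people behave like `absolute_choice` once the stakes get large enough, even if `relative_choice` fits the small-stakes lab data.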
People who are given a bigger pile of cash are willing to pay more to avoid electric shocks.
You don’t normally walk around shops continually assessing how much money is in your wallet and thinking: well, that chocolate bar does seem a bit expensive, but on the other hand I’ve got 20 pounds in my wallet, so I’ll buy it.
Of course people are more price-sensitive when they have less money! And this makes perfect sense: eg if my salary doubles then suddenly my time-money tradeoff has changed and I should be more willing to spend money to save time. The interesting thing here is that the perceived value of money can be distorted by proximity, not that people don’t have an absolute value attached to money.
I really enjoyed the thesis and I would love to read a carefully written text on the subject. Unfortunately, this isn’t it. The discussion is irritatingly sloppy - possibly because it’s transcribed video rather than edited writing. Either way, it’s pretty low value reading.