https://www.lesserwrong.com/posts/dhj9dhiwhq3DX6W8z/hero-licensing
Plenty well-covered elsewhere:
https://www.scottaaronson.com/blog/?p=3535
http://slatestarcodex.com/2017/11/30/book-review-inadequate-equilibria/
http://www.overcomingbias.com/2017/11/why-be-contrarian.html
But still useful to write notes, for the sake of having to understand the material clearly enough to write it down.
Subject is ‘when can I expect to be better at X than whatever standard answer society produces’.
Standard economics ideas:
- Efficient in dimension X relative to observer O - does X better than observer O ever could
- Exploitable in dimension X relative to observer O - O can gain by out-performing the system at X
- Can be inefficient but inexploitable eg can’t short the housing crash
- Systems that are efficient usually get that way by being structured so that they are exploitable in a way that corrects the system eg price aggregation in stock markets
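A minimal sketch of that last point (my own toy model, not from the book): the market below becomes efficient precisely *because* it is exploitable, since anyone who can see the mispricing profits by trading against it.

```python
# Toy model (my own, not from the book): a market becomes efficient *because*
# it is exploitable - whoever spots the mispricing profits by correcting it.
true_value = 100.0
price = 60.0  # starts mispriced

for step in range(50):
    edge = true_value - price
    if abs(edge) < 0.01:
        break
    # An observer who can see the mispricing buys (or sells), capturing some
    # of the edge; their trades push the price toward the true value.
    price += 0.5 * edge
    print(f"step {step}: price={price:.2f}, remaining edge={true_value - price:.2f}")

# Once no edge remains, the market is efficient relative to this observer:
# there is nothing left for them to exploit.
```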
New ideas:
- Inadequate in dimension X/Y relative to observer O - Y is being converted into X at some rate, and O can see how to get a better rate (adequate if O cannot; a rough formalization follows this list)
- Can be inadequate but inexploitable eg schools could be much better at turning time into education, but many incentives in place preventing improvement or competition
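A rough way to write these definitions down (my notation, not the book's): let r_S be the rate at which the system converts resource Y into value X, and r_O the best rate that observer O can see how to achieve.

```latex
% My notation, not the book's - a sketch of the definitions above.
\begin{align*}
\text{inadequate w.r.t. } O &\iff r_O > r_S && \text{(O can see value being left on the table)}\\
\text{exploitable by } O &\iff O \text{ can personally capture part of the gap } r_O - r_S\\
\text{inadequate but inexploitable} &\iff r_O > r_S \text{, yet } O \text{ has no way to capture the gap}
\end{align*}
```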
Many reasons for inadequate systems. Proposed rough categorization:
- Decision-makers who don’t benefit from making good decisions
- eg central bank of Japan - decision-makers incentivized not to act because acting makes them personally responsible for any failures
- Asymmetric information
- eg lemon markets
- Coordination problems - broken in multiple places, so no single actor can fix them alone; only fixable ‘if everyone would just...’ move at once (toy model after this list)
- eg academic publishing
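A toy model (my own construction, not from the book) of why such systems are inadequate but inexploitable: everyone would be better off under convention B, but nobody gains by switching alone, so the worse convention A stays put.

```python
# Toy model (my own illustration, not from the book) of an inadequate but
# inexploitable equilibrium: everyone would be better off under convention B,
# but no individual gains by switching unilaterally, so all-A is stable.

def payoff(my_choice, others_on_B, n_others):
    """Payoff = value of my convention, scaled by the fraction of others who share it."""
    share_B = others_on_B / n_others
    if my_choice == "B":
        return 10 * share_B          # B is worth 10, but only with other B-users
    else:
        return 6 * (1 - share_B)     # A is worth 6, with other A-users

n = 20
choices = ["A"] * n  # the entrenched, worse convention

# Best-response check from the all-A equilibrium:
for i in range(n):
    others_on_B = sum(1 for j, c in enumerate(choices) if j != i and c == "B")
    stay = payoff(choices[i], others_on_B, n - 1)
    switch = payoff("B", others_on_B, n - 1)
    assert switch <= stay  # switching alone never pays, so nobody moves

print("All-A payoff per person:", payoff("A", 0, n - 1))      # 6.0
print("All-B payoff per person:", payoff("B", n - 1, n - 1))  # 10.0
# The system is inadequate (all-B is better for everyone) but inexploitable
# (no individual can profit by deviating); it is only fixable 'if everyone would just'.
```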
Many, many examples follow.
Much discussion on how to determine if you are rational enough to be able to judge existing debate. Doesn’t seem to reach a satisfying conclusion.
Between Scylla and Charybdis:
- over-confidence and Dunning-Kruger
- ‘modest epistemology’ - excessive use of outside view to argue against being able to outperform anything ever
Reference class forecasting is very sensitive to choice of reference class.
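A small illustration with invented numbers: the same project gets very different outside-view forecasts depending on which reference class you choose, and each class is individually defensible.

```python
# Invented numbers, purely for illustration: three defensible reference
# classes for the same hypothetical project give three different base rates.
reference_classes = {
    "all new startups":                    0.10,
    "startups with a repeat founder":      0.30,
    "startups attempting this exact idea": 0.02,
}

for name, base_rate in reference_classes.items():
    print(f"Outside-view P(success) as one of '{name}': {base_rate:.0%}")

# The 'outside view' doesn't remove judgement; it relocates it into the
# choice of reference class.
```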
Pick your battles:
- Easy - agree with expert consensus
- Medium - study expert debate and figure out which expert to believe
- Hard - improve on best answer that society can offer you, after years of work
- Expect success in proportion to how inadequate society is in that area and to your own expertise/experience
In order of decreasing time spent, pay attention to:
- Object-level considerations
- Reliability of authorities and adequacy of systems vs your own reliability/adequacy in this area
- Your own ability to judge debate and evidence
- Whether someone else is better at judging debate and evidence than you are
Modest epistemology is essentially about avoiding arguments that a crank could use to justify themselves. But any epistemological rule can be abused in bad faith, so this is a poor meta-heuristic.
Everyone thinks that they are better than average, but some people actually are. The former observation is not by itself a refutation of any particular claim to be among the latter.
Theorizes that modest epistemology may be motivated by ‘status regulation’ - making sure that people don’t steal status by attempting projects whose predicted success is way in excess of their current status (hence the concern about crank-proof reasoning) - or by ‘anxious underconfidence’ - ‘if you try something more ambitious, you could fail and have everyone think you were stupid to try.’
Don’t assume you can’t do something when it’s very cheap to test your ability to do it. Don’t assume other people will judge you poorly when it’s cheap to test that belief.
…the number one characteristic of top forecasters, who show the ability to persistently outperform professional analysts and even small prediction markets: they believe that outperformance in forecasting is possible, and work to improve their performance.
Inadequacy is an easy, fully general argument, which makes it a foot-gun:
…have some common sense; pay more attention to observation than to theory in cases where you’re lucky enough to have both and they happen to conflict; put yourself and your skills on trial in every accessible instance where you’re likely to get an answer within the next minute or the next week; and update hard on single pieces of evidence if you don’t already have twenty others.
Bet on everything where you can or will find out the answer. Even if you’re only testing yourself against one other person, it’s a way of calibrating yourself to avoid both overconfidence and underconfidence, which will serve you in good stead emotionally when you try to do inadequacy reasoning.
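A minimal sketch of what ‘bet on everything’ can look like in practice (claims and numbers invented for illustration): log a probability for each claim you will find out about, then score yourself once the answers are in, so over- and underconfidence show up as numbers rather than feelings.

```python
# Minimal sketch of the 'bet on everything' habit: record a probability for
# each claim you'd be willing to bet on, then score yourself once resolved.
# (Claims and numbers are invented for illustration.)

predictions = [
    # (claim, my probability that it's true, what actually happened)
    ("I can reproduce the paper's result in a week", 0.80, False),
    ("The expert consensus on X holds up",           0.95, True),
    ("My fix ships without a rollback",              0.70, True),
]

# Brier score: mean squared error between stated probability and outcome.
brier = sum((p - (1.0 if outcome else 0.0)) ** 2
            for _, p, outcome in predictions) / len(predictions)

print(f"Brier score: {brier:.3f} (0 = perfect, 0.25 = always saying 50%)")
# A score persistently worse than 0.25 means your stated confidence is
# actively misleading; comparing buckets of predictions by stated probability
# shows whether you lean overconfident or underconfident.
```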