https://smile.amazon.com/gp/product/B004GUU2PS
Fascinating premise, but desperately in need of a good editor.
- Our goals come from genes and memes that use us as vehicles for replication
- Frames this as a reverse AI safety problem - how do we escape our programming?
Implications:
- arguing that something is (or was) adaptive is not an argument that it is rational
- thin rationality - optimally pursuing one's existing goals - is not sufficient to escape enslavement to the replicators
- need to judge goals themselves
But judging goals itself requires goals to judge by - so judgment becomes recursive and reflective
- many different goals, often in conflict with each other
- different timescales, different weights
- argues for ‘rational integration’, but it's never really clear what this means
Not as simple as System 1 (S1) vs System 2 (S2)
- often S1 goals were adaptive for gene replication in the EEA (environment of evolutionary adaptedness) while S2 goals serve the vehicle in the modern environment, so it's often better to suppress S1
- but memes can also infect S2, e.g. the suicide bomber, whose memetic goal overrides the genetic fear of death
- memetic goals can often be more self-destructive than genetic goals, because they don’t need the vehicle to survive in order to replicate
Ultimately, the book doesn’t really cast any light on how to choose goals, but the reverse-AI-safety framing is provocative.