https://smile.amazon.com/gp/product/B005DXR5ZC/

Bought after watching the accompanying talk.

Published 2011. Author is physics professor at Oxford, big in quantum computing.

Vast majority of progress in understanding and controlling the world happened in the last few hundred years. Not for lack of trying - for many, many thousands of years before that people tried and failed. What changed?

Scientific revolution. How does science work?

Inductivism doesn’t work. Hidden conditionals. Reference class problem. Science is able to correctly predict events that have never been experienced before eg first nuclear bomb test.

Empiricism doesn’t work. Theories cannot be derived solely from evidence because observation is inherently theory-laden. Even believing your senses requires a coherent theory of how your senses interact with reality.

Justificationism - the idea that beliefs need to be justified by reference to some authoritative source. Inductivism, empiricism etc are disguised justificationism - they tell you what you have to do in order to be allowed to believe something.

Fallibilism - there exist no justifiable means of assigning truth or probability to any theory.

Fallibilism is inherently open to change. Justificationism seeks to secure ideas against change.

The Enlightenment was a rebellion against authority as a source of justified belief.

Testable predictions are not enough. ‘Winter happens because the gods are sad’ is falsifiable by traveling, because winter should happen everywhere at once. But once falsified by observing summer in Australia, it could be corrected to ‘winter happens because the gods are sad and drive the summer away to the other side of the world’. Manages to be more or less the same theory while predicting very different observations, because the details of the theory aren’t really connected in any fundamental way to the prediction.

Science is a process of discovering ever better explanations. If a theory can be easily modified to fit many different observations, it doesn’t really explain the observations, only predict them. Experimental testing doesn’t lead to increasing accuracy with these sorts of theories. Bad explanations - testable in any given form, but so easy to vary that falsification never pins them down.

Really similar to recent criticism in social psychology. So many different observations could be interpreted as fitting the original hypothesis that it breaks single-hypothesis-testing.

Real explanation - tilting of the planet and heating from the sun - is much harder to vary. Partly because each part of the explanation plays a crucial role in determining the outcome eg if the Earth were a different shape we would see different kinds of seasons. Partly because it has more reach - the explanation naturally applies to all spheres and all sources of radiation, so trying to alter it to predict winter everywhere on Earth at once would require some special exception.

‘Science is what we’ve learned about how not to fool ourselves.’ Good explanations make it hard to fool yourself, because it’s difficult to modify them to remove predictions you don’t like.

General principles for an error-correcting process: tolerance of dissent, tradition of criticism, openness to change, distrust of dogmatism and authority, aspiration to progress (which entails believing progress is possible and desirable).

Does computability help? Cog sci vs psych. Computability useful for removing hidden sources of complexity (eg gods, which are able to take arbitrary actions for arbitrary unpredictable reasons, but because they have human motivations and we have built-in hardware for thinking about humans, they seem like a simple explanation).

Observation of most phenomena requires special instruments, as well as theories of how those instruments work. Ever-increasing chains of theories connect human experiences to reality. Only works because error-correcting processes keep those chains solid.

Intelligence is the only source of explanations with reach. Evolution can produce knowledge embodied in the behavior of organisms, but it will never adapt an organism to an environment wildly different from its current environment. Evolution won’t take life to the moon. The fringes of this claim are debatable, but the core point seems solid.

Humans as ‘universal constructors’ - any given physical transformation is either forbidden by the laws of physics, or accomplishable by humans given enough time and knowledge.

Humans as ‘universal explainers’? Are there limits on the knowledge that we can obtain? No, because we can bootstrap bigger brains or more thumbs or whatever it is that we need.

Think this argument is papering over the assumption that bootstrapping is possible from every point on the curve after the Enlightenment. If we posit parallel Earths with more lead in the air, can we imagine a world where humans are sufficiently intelligent to reach the Enlightenment but not intelligent enough to ever conceive of the computer?

Similarly, construction of a given system may depend on being able to master the complexity, and even with unlimited intelligence there may be fundamental limits to measurement and computation that create a complexity threshold. Laws of physics might allow the existence of a system but forbid its construction.

Spaceship Earth doesn’t work as a metaphor. Natural world is not a cushy life-support system - death and extinction are not uncommon. Even prehistoric hunter-gatherers carried a large body of knowledge about the world, which they used to survive a hostile environment. The parts of the world that are cushy for human existence are the parts that were modified by human knowledge. Prior to the application of knowledge the world was not a spaceship, it was a pile of dangerous raw materials. With enough knowledge, any part of the galaxy could be made comfortable.

Anthropic principle doesn’t work. Hides the introduction of a measure over all possible universes. Imagine all the universes where your brain happens to exist long enough to ponder its own existence. In what percentage of those is the brain then immediately destroyed eg by the end of a very short-lived universe? Only carefully chosen measures avoid predicting that we exist in a favorable universe, but one that stays favorable only for the next few seconds. Similar for the argument that we must live in a simulated universe. From a Bayesian point of view, these arguments are just pulling a prior out of nowhere and then conditioning on it. The resulting claim is a totally direct consequence of the unjustified prior.
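
A toy Bayes calculation (my own numbers, nothing like this is in the book) makes the point concrete: when both hypotheses predict the observation equally well, the posterior is just the prior handed back to you.

```python
# Two exhaustive hypotheses A and B (eg simulated vs not-simulated universe).
# The observation 'we exist and are pondering the question' is predicted by
# both, so the likelihoods are equal and the answer is whatever prior you
# started with.

def posterior(prior_a, likelihood_a, likelihood_b):
    """P(A | observation) by Bayes' rule, for two exhaustive hypotheses."""
    prior_b = 1 - prior_a
    evidence = prior_a * likelihood_a + prior_b * likelihood_b
    return prior_a * likelihood_a / evidence

for prior in (0.01, 0.5, 0.99):
    print(prior, posterior(prior, likelihood_a=1.0, likelihood_b=1.0))
# prints 0.01, 0.5, 0.99 - the 'conclusion' is exactly the assumed prior
```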

Weird paradox. If you had a process that could predict the next major scientific discovery, you could use that process to just make the discovery. So scientific progress is governed by the laws of reality, but is not predictable from them. Seems a bit too absolute. You could maybe predict things about the discovery without having all the details, similar to how we can predict the course of the solar system without knowing the positions of every atom. So this doesn’t seem specific to science, more just an example of the rule that the best model of a system is the system itself, or at least an identical copy.

Hofstadter’s domino. Abstract explanations sometimes more powerful / more explanatory than reductive explanations. Makes intuitive sense to a programmer - the best explanation for ‘why was this error printed’ is usually not in terms of electrons and silicon, but in terms of the intentions encoded in the program encoded in the patterns in the electrons, and that explanation would be a good explanation regardless of whether the computer underneath runs on electricity or spring-loaded dominos.
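
A throwaway rendering of the ‘why was this error printed’ point (my own example, not the book’s): the useful explanation is the intention encoded in the program, and it stays true whatever the machine underneath is made of.

```python
# The explanation for the printed error is 'the program checks the balance
# before allowing a withdrawal', not anything about electrons - and it would
# be equally true on a computer built from spring-loaded dominos.

def withdraw(balance, amount):
    if amount > balance:
        return "error: insufficient funds"  # the intention encoded in the program
    return balance - amount

print(withdraw(10, 25))  # error: insufficient funds
```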

Similarly, a theory that involves conscious intelligences making decisions and acting on the world is not contradicted by the fact that the laws of physics are deterministic, and is no less true than an explanation in terms of electrons and neurons. Minds are real, and can be the causes of events, even though they are implemented in atoms.

Universality in written language, number systems, movable type, the Jacquard loom. Non-universal systems, like pictograms or Roman numerals, require new symbols or materials for every new element. Universal systems can build new elements by combining or customizing existing elements. All universal systems are digital - enables error-correction, without which copying, combining and transforming is bounded by decay.
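
A small sketch of the digital point (mine, not the book’s): an analog signal copied repeatedly drifts without bound, while a digital copy that snaps each symbol back to the nearest allowed level after every generation can be recopied more or less forever.

```python
import random

LEVELS = [0.0, 1.0]   # the discrete symbols of a digital system
NOISE = 0.05          # per-copy noise, well below the spacing between symbols

def noisy_copy(signal):
    return [x + random.gauss(0, NOISE) for x in signal]

def error_correct(signal):
    # snap each value back to the nearest allowed symbol
    return [min(LEVELS, key=lambda level: abs(x - level)) for x in signal]

message = [1.0, 0.0, 1.0, 1.0, 0.0]
analog, digital = message, message
for _ in range(1000):
    analog = noisy_copy(analog)                   # errors accumulate
    digital = error_correct(noisy_copy(digital))  # errors are removed each generation

print(analog)   # drifted far from the original after 1000 copies
print(digital)  # still exactly the original message (with overwhelming probability)
```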

DNA as universal system - reached a point where the DNA system stopped changing, but organisms continued to evolve. Reach is surprising, because DNA stopped changing before the evolution of multi-cellular creatures but can still describe them. Biology is not my strong point - do we still believe that DNA alone encodes the entire organism?

Can also form universal computers from DNA. How many digital error-correcting systems can’t be used to build a universal computer? Is it really that unusual a property? Suspect that error-correction is the hard part, and once you have that it’s easy to be a computing substrate by accident.

Speculates that limited AI is not possible - that being intelligent requires being a universal explainer, like humans. I’m not really clear on what a ‘universal explainer’ is. Seems to require creativity at least, for theory generation.

Hilbert’s infinite hotel. Garbage disposal - hand all bags to the next person up, twice as fast as the person before you. Each person does a finite amount of work at a finite speed. All the garbage disappears into infinity. Can’t get it back either - if we run the same algorithm in reverse there is no good reason why we should get garbage back as opposed to nothing, or any arbitrary object.

In the infinite hotel we can solve the halting problem. Implies that the laws of physics determine what is computable or provable.
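
For contrast, the standard diagonal argument for why no ordinary finite computer can decide halting, sketched in Python. `claims_to_halt` is a hypothetical decider, not a real function - as I read it, the infinite hotel only gets around this by doing infinitely many steps in finite time.

```python
def claims_to_halt(program, arg):
    # Stand-in for a supposed halting decider: returns True if program(arg)
    # would eventually halt. Any concrete implementation substituted here is
    # defeated by `contrary` below.
    raise NotImplementedError("no such decider can exist")

def contrary(program):
    # Loops forever exactly when the decider says program(program) halts.
    if claims_to_halt(program, program):
        while True:
            pass
    return "halted"

# Feeding `contrary` to itself is the contradiction: if the decider says it
# halts, it loops forever; if the decider says it loops, it halts immediately.
```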

Usefulness of mathematical truth is theory-laden, because we can’t otherwise determine whether the rules of reality correspond to the rules of the abstract system we built. Knowledge of mathematical truth is also theory-laden, because it depends on our theories of what computers or brains are doing. ‘Proof theory is actually a branch of computer science, not of mathematics’. Formal proofs are justificationism - mathematics is really about good explanations, and proofs are just our means of error-correction.

Unreasonable effectiveness of mathematics is even more fundamental than it appears - we appear to live in a universe where the laws of physics allow themselves to be computed. We can imagine a universe where the laws of physics are non-computable and scientific progress is bounded. Have to think a bit about the difference between non-computable and probabilistic. We can’t build a computer that predicts individual quantum events, but we seem to be doing fine without it.

Some surprising assertions about how we have nothing to fear from any advanced intelligence because they will be universal explainers too. We seem to have plenty to fear from other universal explainers on our planet, so this isn’t backed up by observation so far.

Intro to quantum mechanics. Fungibility of particles and quantization of energy levels are important, because they make it much easier for configurations to collide. Entanglement (interacting with other particles) usually prevents configurations from colliding, which is why interference is unlikely on a macro scale.
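
A minimal numpy sketch of that last point (my own, using the usual two-path interferometer picture): once the two paths are entangled with orthogonal environment states, their amplitudes can no longer be added coherently and the interference disappears.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # 50/50 beamsplitter

def phase(phi):
    return np.diag([1, np.exp(1j * phi)])     # relative phase between the two paths

def p_detector(phi, which_path_recorded):
    psi = H @ np.array([1, 0])                # split the particle over two paths
    psi = phase(phi) @ psi
    if which_path_recorded:
        # Each path is tagged by an orthogonal environment state, so the path
        # amplitudes contribute separately: probabilities add, amplitudes don't.
        out = H @ np.diag(psi)
        return float(np.sum(np.abs(out[0, :]) ** 2))
    else:
        out = H @ psi                         # paths recombine coherently
        return float(np.abs(out[0]) ** 2)

for phi in (0.0, np.pi / 2, np.pi):
    print(round(phi, 2), round(p_detector(phi, False), 3), round(p_detector(phi, True), 3))
# Without which-path entanglement the detector probability swings between 1 and 0
# as the phase varies (interference); with it, the probability is stuck at 0.5.
```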

Like Yudkowsky, author rejects quantum collapse in favor of many worlds. Argues that the vagueness of what exactly entails an observation and the culture of promoting confusion (“if you think you’ve understood quantum mechanics then you don’t”) is bad epistemology.

Postmodernism, deconstructionism etc use the impossibility of definite proof of anything to deny the existence of good and bad explanations. Ironically, this produced an authority-based community.

Behaviorism similarly denies explanations. Doesn’t work, because inductivism doesn’t work. Can’t predict under which conditions prior observations will continue to hold without a model.

Explanationless science more prone to error, because without an explanatory model can’t distinguish results from mistakes.

Error-correcting processes not just in science - evolution, open markets, democracy.

Governance as error-correcting process. Rather than focusing on justifying who should rule, focus on systems that will limit and reverse bad decisions and protect and encourage good decisions. From this point of view, democracy is good not because it represents the will of the people, but because it is better at peacefully removing bad leaders than previous systems.

Compromise policies are bad for progress, because their failure does not cast any light on the original policies that they were compromised from, all of which will still be campaigned for by their original proponents. Plurality systems that produce majority governments avoid this failure. But still get poor feedback because the effects of most policies take more than a single term to observe. Do we actually observe majority governments making better decisions than coalition governments?

Normal to be cynical about political agreement, but only because discourse focuses on debate. There has been a definite trend among democratic countries towards a shared system of values that would have been unusual not far in the past eg abolition of slavery, universal human rights, suffrage.

Whole chapter on objective beauty that doesn’t convince me at all.

Species can evolve themselves into extinction, because natural selection optimizes for the relative survival of a gene in a population, not the overall success of the individual or the population. Similarly, human selection of memes doesn’t necessarily lead to progress in explanations.

Societies are built of memes, but are also the environment in which memes are selected. Static society - captured by memes that defend themselves against competition by banning criticism, demonizing change, tabooing progress, disabling creativity etc. Dynamic society - captured by metamemes that promote generation and selection of better and better memes - memes compete on accuracy, usefulness etc. Progress in the former is incredibly slow on human timescales, so we should try to create a good environment for the latter.

Speculation on the evolution of creativity. Static societies encourage conforming to complex customs, taboos, rituals etc. Learning these requires explanations, because simple mimicry falls into the inductivism trap. So a static society would create evolutionary pressure for the ability to generate and test explanations. But that creativity is being used to accurately conform, explaining how we managed to evolve creativity and yet not display it for such a long period.

Seems unnecessary to invoke staticness. Mentioned earlier that prehistoric humans still had a huge body of knowledge about their local environment. Learning that efficiently would require explanation ability - it’s usable as an error-correcting device for transmitted knowledge.

Against Diamond’s theory of Eurasian dominance based on geography, climate, available plants and animals etc. Argues that it was the transition to a dynamic society that led to the knowledge needed to take advantage of local resources.

Thoughts

Much of it is thought-provoking. Some of it seems obviously wrong. But the key ideas are really well argued: