http://www.amazon.com/dp/0691148902

Looks at how new technology is affecting collaboration in science, and argues that this could be a paradigm shift on the scale of the creation of scientific publishing, if we take advantage of it.

Chapters

Reinventing discovery

Gowers' Polymath Project. Open-forum attack on a complex unsolved problem. Solved in 37 days, after 800 comments from 27 people, totaling 170k words.

There is a common phenomenon on the internet where a bunch of people get really excited about a common cause and get together to work on a project. Months later, they have produced 17 different versions of the logo and no actual work. Since this seems to be the default result, it’s worth keeping an eye out for what makes successful projects different. I suspect that one necessary feature is that there is a coordinator who is already doing work on which other people can join in, rather than just an excited group.

Many different perspectives on the problem. More eyes on bugs. Creating a Fox.

Tools create shared short-term memory. Why short-term?

Cognitive tools - tools that add new abilities or capacity to human thinking. See mindware for non-tech examples.

The scientific process was one such tool - a collection of hard-earned lessons on how to produce useful (predictive) theories about the world. Argues that networked tools could produce a similar paradigm shift.

Breaks down into:

Incentives are an important part of collaboration, eg GenBank initially had many leeches. The Bermuda Agreement solved the coordination problem for the Human Genome Project - Leviathan won't give you funding unless you agree to share. No such agreement yet for eg the flu genome.

Current incentives are strongly against sharing. Can’t rely on individual action solving the problem.

Eg Wikipedia was long ignored by scientists, because they are under strong pressure to publish-or-perish and Wikipedia doesn’t help towards that goal.

Amplifying collective intelligence

Online tools make us smarter

Kasparov vs the World (~50k people, ~5k people per move). Unexpectedly close match.

Many beginners, but also published opinions from strong (but not Kasparov -strong) players.

Smart Chess - Krush and team built a public move tree. Helped coordinate, reduced duplicated effort, provided a reference point for discussion. The reference point is interesting - having a canonical name/URL for a particular sequence.

Between moves 10 and 50 the World also played Krush's recommended move, but that recommendation was drawing from many inputs. Krush acting as coordination point / gardener.
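
Rough sketch of the move-tree idea (my reconstruction in Python, not the actual Smart Chess tool) - keying each node by its move sequence gives every line of play a canonical name that discussion can point at:

```python
# Sketch of a shared move tree where every line of play has a canonical
# name, so discussion can reference "e4 c5 Nf3" unambiguously.
# Hypothetical structure, not the real Smart Chess tool.

class MoveTree:
    def __init__(self):
        # Map canonical key (space-joined move sequence) -> analysis notes.
        self.nodes = {"": []}

    def key(self, moves):
        return " ".join(moves)

    def add_line(self, moves, note):
        """Record a line of play, creating any missing ancestors."""
        for i in range(1, len(moves) + 1):
            self.nodes.setdefault(self.key(moves[:i]), [])
        self.nodes[self.key(moves)].append(note)

    def children(self, moves):
        """All analysed continuations one ply deeper than `moves`."""
        prefix, depth = self.key(moves), len(moves) + 1
        return [k for k in self.nodes
                if len(k.split()) == depth and k.startswith(prefix)]

tree = MoveTree()
tree.add_line(["e4", "c5", "Nf3"], "main line, well analysed")
tree.add_line(["e4", "c5", "Nc3"], "closed Sicilian, less explored")
print(tree.children(["e4", "c5"]))  # two continuations, each with a stable name
```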

The previous Karpov vs the World match did not allow for such coordination. The World got crushed.

Collective brain is a poor metaphor - brain is very specific hardware/organisation/structure - no reason to believe that any collab cognition should follow the same structure.

Not the same as Wisdom of Crowds, where the average result is good but not as good as the best individuals'. These are examples where the collab effort was substantially better than any individual, on complex, creative, open-ended problems.

Plenty of examples of collab efforts being stupider than individuals. Need to figure out what precisely makes the difference to make this a useful tool.

Early tools - most successes (eg Wikipedia, Linux, the invention of blogs) did not come from experts in the field. Note again that Linus' job is mostly coordination and filtering.

Random thought - creating a structure where the workers are exploring/chunking options so leader can make informed decision.

Restructuring expert attention

InnoCentive - ‘craigslist for science’. Allows placing bounties on solution of scientific/industrial problems.

Makes connections between needs and skills. Networking enabling further specialization / increasing the effectiveness of specialization.

Restructuring expert attention - given need for cognition, find valuable outlet to connect it to.

Right expert to right problem at right time, eg Krush's move 10 steered the game into an area she had previously analysed in a coaching session.

Microexpertise - Krush not better than Kasparov overall, but better on this particular move. Many microexperts combined to play well.

Chess structure is embarrassingly parallel - more search power can make up for worse value heuristics - does a chess team actually need variety or just parallelisation? Would 10 Krushes perform worse than 10 different players at the same level? How much of the value here is different knowledge vs just being able to explore more options? Certainly some, but how much?
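
Toy framing of the question (purely my extrapolation) - with a fixed heuristic, identical players just partition the search space, while different players bring different judgments to the same space:

```python
# Toy framing of the 10-Krushes question (illustrative only): identical
# workers with a shared heuristic add search breadth; different players
# cover the same moves but with varied judgment.
from concurrent.futures import ThreadPoolExecutor
import hashlib

candidate_moves = [f"move_{i}" for i in range(40)]

def evaluate(move, player):
    # Stand-in for a deep search; `player` seeds that player's judgment.
    digest = hashlib.md5(f"{player}:{move}".encode()).hexdigest()
    return int(digest, 16) % 100

def best_move(moves, player):
    return max(moves, key=lambda m: evaluate(m, player))

# Ten identical "Krushes": one shared heuristic, search space split ten ways.
chunks = [candidate_moves[i::10] for i in range(10)]
with ThreadPoolExecutor() as pool:
    local_bests = list(pool.map(lambda c: best_move(c, "krush"), chunks))
print(best_move(local_bests, "krush"))  # merge: same answer as one Krush, found faster

# Ten different players at the same strength: full overlap, varied judgment.
print({best_move(candidate_moves, f"player_{i}") for i in range(10)})
```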

Designed serendipity - create structure that routinely makes such matches between problems and micro-expertise.

Imagine a future where people only work in large collectives instead of on individual problems. Are there opportunity costs to focusing everyone on the same problem? Are they outweighed by faster/better solutions? What is the ideal tradeoff?

Right now probably so far from the ideal that any change would be an improvement, so not directly relevant, but an interesting question anyway.

Eg of serendipity - Einstein couldn't figure out the geometry for General Relativity. Grossmann pointed him at Riemannian geometry, where many of the problems had already been solved.

Successful scientists tend to collect unsolved problems, keeping a lookout for chance encounters with new tools. Want to optimize this process. Critical mass! Filtering noise vs signal is also important in this metaphor - don't want to absorb too many neutrons.

Need diversity of knowledge/expertise/ideas but enough common ground to communicate.

Architecture of attention - task is to focus people on tasks where they have comparative advantage.

Historical strategies for collaboration:

Large formal groups restrict serendipity. Communication limited to hierarchy. Top-down allocation of tasks.

Dynamic division of labor - no fixed responsibility, no pre-assigned tasks - to each what they can do best, with continual update and reallocation. Does this only work for accretive problems? Eg adding more theorems to the maths pool is one thing, having 20 plumbers installing pipes at random is another. But the description of the modern construction industry seen in Checklist Manifesto is very dynamic. There is top-down C&C of goals and responsibilities but decision-making power is pushed to the edges. Same change seen in 20th century militaries. But there is no provision for random volunteers walking in and installing a light fixture. In Smart Chess, who was allowed to update the move tree? Is protection of shared resources / the right to filter info an important decision point?

Division of labor vs division of insight. Assembly lines work by controlling and making uniform the interface to individual labor. Doesn’t work if you don’t know in advance what work will be required. Dynamic division of labor important when you cannot plan the division of labor in advance. Again, modern construction does divide labor dynamically to some extent, but seems to have other important differences. Have to figure out what they are.

Contrast to CERN physicists - huge formal collaboration with fixed roles. Because it’s infrastructure? How would you handle maintenance, certification, trust etc? Is the existence of a platonic solution the dividing line?

Markets. (Soviet planners ask ‘who is in charge of delivering bread to London’). Prices aggregate knowledge. Dynamic division of labor enabled by using price as signal for supply and demand. Limited structure - not suitable for aggregating complex knowledge. Although maybe suitable for focusing attention eg InnoCentive.

Structure dictates what kind of info can be transmitted, eg info about the housing bubble was not reflected in prices because of the technical difficulty/danger of shorting.

Also reminds me that transaction costs are interesting to consider - they limit the granularity of the minimum contribution.

Collaboration != committee. Members are volunteers - not compelled to be involved in areas where they are not already incentivised. Can also route around problematic members. Aha - can't do this in construction. If some idiot is putting pipes everywhere you can't just mute them. Degree of interference dictates how safe it is to allow members without control. 50% noise in a forum => moderate. 50% noise in construction => building falls down.

Is the ‘online’ part important? Couldn’t put 50k chess voters in one place, and couldn’t get them to communicate effectively if you did. Interface shapes interaction - allowing control over group dynamics.

Offline interactions have high bandwidth but also high latency (have to go find the person) and limits number of interactees (can only meet people in local network).

Compared to offline interactions, Polymath interactions were better edited/presented, easier to skip and could be returned to again and again. Same arguments for making one good lecture recording instead of suffering through many mediocre live lectures.

Scaling up forces specialization - at some level of complexity can’t fit the whole problem in one head so need ways to distribute it effectively.

Patterns of online collaboration

Linux. Encouraged a large community of contributors from the beginning. The result runs much of the world.

Open source architecture - access, modify, share designs.

Most software projects aim for a single, canonical version, but architecture example made me realize that it’s possible to use similar techniques without any central authority at all (eg small scripts for extending Photoshop/Blender/similar could work like this). The authority is mainly there to act as a ratchet on the value of the solution in cases where many forked efforts would not be able to solve the complex problem.

Idea: authority/control/ownership can be an expense. Don’t default to it but consider cost/benefit in each case. Many granularities possible too. Construction controls access to the building - can’t cheaply clone a building so makes sense to protect the shared resource. Typical open source project also controls access to the official ‘building’ but allows clones to do whatever they want. Similarly for eg the authority to bill the client in construction.

Linux, Wikipedia etc are about scaling complexity in breadth/quantity rather than depth/quality. Most Wikipedia articles could be better written by a single expert, but no single expert could match the scale. Somewhat debatable for Linux - much of the kernel is written by experts already, and additionally contains much knowledge learned painfully over many years.

Open Source is:

Linux almost forked because Linus was a bottleneck on review/merge. Solved by delegating to lieutenants who each specialize in some areas. Whole codebase rewritten at great effort to be more modular. Allowed complexity to grow beyond comprehension of single person.

Polymath was monolithic/linear. Wouldn’t scale.

Wiki-novels don’t work because modular structure (chapters -> pages) does not match important structure of problem (arcs, characters, relationships, setting). Argues that collab novels could potentially work if tools matched the structure.

Issue trackers. Structure limits scope of conversation to single issue at time so not overwhelmed by total volume. Tags used to direct attention. Automation of routine tasks eg bors.

Reuse allows crystallized knowledge/work to accumulate rather than being started from scratch each time. Standing on the shoulders of giants. Compare to printing press - before that point knowledge was actively decaying all the time.

Dynamic division of labor - the writer doesn’t even need to know that I used their code - can collaborate without coordination. Passive collaboration.

Scientific publications don't allow for fine-grained reuse / contribution, eg taking a paper, improving the argument, adding data, republishing - valuable effort but not rewarded (actually likely punished as plagiarism).

MathWorks competition - a coding competition where submissions are automatically scored and published. Rapid feedback - the community learns faster.
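
Roughly the shape of such a harness (hypothetical scorer and test cases, nothing like the real MathWorks code):

```python
# Minimal sketch of an automatic scoring harness with a public
# leaderboard (hypothetical, not the actual contest machinery).
test_cases = [([3, 1, 2], [1, 2, 3]), ([5, 4], [4, 5])]
leaderboard = []  # (score, author) pairs, published for everyone to read

def score(submission):
    """Fraction of test cases the submitted function gets right."""
    passed = sum(submission(inp) == expected for inp, expected in test_cases)
    return passed / len(test_cases)

def submit(author, submission):
    leaderboard.append((score(submission), author))
    leaderboard.sort(reverse=True)  # instant public feedback

submit("alice", sorted)              # a correct entry
submit("bob", lambda xs: xs[::-1])   # fast but wrong on some cases
print(leaderboard)  # community immediately sees which approach to build on
```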

Micro-contributions - most MathWorks contributions only change one line, but the cumulative effect is dramatic.

Task tracking becomes a limit on contribution size. Hammering in one nail in a building does not actually help - still have to check that the work is done, can’t track on that level. Can only benefit from contributions that you can keep track of.

Similarly, DVCS reduces overhead of accepting contributions. Prior to that, dealing with many contributors required even more coordination around when to make changes to avoid painful merges.

Gowers asked polymathees to contribute only one idea per post - more modular, better reuse.

Scores in MathWorks act to automatically direct attention to the most promising ideas. Similar principle to market prices. Krush/Linus acted similarly as expert scoring systems - deciding which work to keep and which to discard.

More sophisticated mechanism might direct people to where their expertise is most valuable eg ‘this section of code is new and hasn’t seen micro-opt yet’ vs ‘this section is highly optimized but hasn’t seen many different approaches’. Could also add noise to attention signal to prevent getting stuck in local optima.
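
Quick sketch of what that noisy attention signal could look like (my extrapolation, not from the book) - softmax over section scores, with temperature supplying the exploration noise:

```python
# Sketch of directing contributor attention by score, with noise so the
# whole crowd doesn't pile onto one local optimum (my extrapolation).
import math, random

random.seed(42)
# Hypothetical "promise" scores for sections of a shared codebase.
sections = {"parser": 9.0, "cache": 7.5, "new_io_layer": 3.0}

def pick_section(scores, temperature=2.0):
    """Softmax sampling: high scores attract most attention, but the
    temperature keeps some probability on unexplored sections."""
    weights = {s: math.exp(v / temperature) for s, v in scores.items()}
    total = sum(weights.values())
    r, acc = random.uniform(0, total), 0.0
    for section, w in weights.items():
        acc += w
        if r <= acc:
            return section
    return section  # fallback for floating-point edge cases

assignments = [pick_section(sections) for _ in range(1000)]
print({s: assignments.count(s) for s in sections})  # mostly parser, some exploration
```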

Need to convert individual insight into collaborative insight to actually ratchet progress. The ability to do this imposes limits on the power of collective intelligence.

The limits and potentials of collective intelligence

Experiment where subjects effectively shared confidence rather than information - not effectively making use of collective knowledge. Group discussion focused on the information they had in common rather than the unique information each possessed. Since the pro information was common to everyone and the con information was distributed in unique pieces, each member came away with a much more pro view than they would have if information had been pooled well.

Collective intelligence works best on problems where verifying solutions is easier than producing solutions - otherwise reuse isn't worth it, especially once you add the transaction cost of reuse.
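
The asymmetry in miniature, via the standard subset-sum example (my illustration):

```python
# Verify/produce asymmetry: any volunteer's proposed subset can be
# checked in linear time, while finding one may take exponential search.
from itertools import combinations

def verify(numbers, subset, target):
    """Cheap: O(len(subset)) check of a proposed solution."""
    return set(subset) <= set(numbers) and sum(subset) == target

def produce(numbers, target):
    """Expensive: brute-force search over all 2^n subsets."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

numbers, target = [3, 34, 4, 12, 5, 2], 9
solution = produce(numbers, target)                  # slow step, worth farming out
print(solution, verify(numbers, solution, target))   # fast step, easy to keep central
```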

The group failed to pool info, but even if it had, it would still have had unresolvable differences of opinion. Collective intelligence works when the group has shared praxis - a common body of knowledge and values. High inferential distance adds transaction cost to reuse.

If the students were each interested in voting to effectively achieve their own values, and were collaborating in good faith, they would still have the shared goal of predictive accuracy. Tetlock's foxes were more effective at predicting outcomes in groups, without requiring that they share the same desired outcome.

Might make a stronger claim - that disagreements over values only interfere to the extent that they interfere with C&C over limited resources. In the architecture example, the pool of designs can still become valuable even if the members have many different design goals.

Important in economics/politics - good faith cooperation in eg thinktanks can still benefit all participants even if they fail to agree on fundamental values. Just difficult to produce when there is so much signalling pressure. Secret think-tanks? Private negotiations?

Think the point about shared praxis is accurate, just that this is a poor example in an important category, so worth contesting :)

Other problems - groupthink, civic breakdown, trolls.

Think of scientific revolution as establishing a shared praxis on how a theory should be judged, what makes a good contribution, when work is accepted/trusted.

Breaks down when there is no way to score contributions / direct attention, eg the wars over string theory, because there is not yet any way to falsify it.

Long imaginary narrative about potential future which I am too lazy to transcribe.

Networked science

All the worlds knowledge

Swanson - no credentials / training in medicine but made valuable connections by trawling Medline for unnoticed patterns. Medline acting as cognitive prosthesis. Reminds me of the main character in Blindsight, who translates discoveries made by AIs into a form understandable by humans. Doesn’t actually understand the science on either side, but is very good at pattern-matching.

Google Flu Trends - as accurate as existing methods but with much less time lag. (Many caveats). Superforecasting notes that Google searches for 'housing bubble' track the event quite accurately, making the we-couldn't-possibly-have-seen-it-coming excuses of the CDO issuers somewhat hard to believe.
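
Toy version of the proxy-signal idea (synthetic numbers, purely illustrative) - regress official counts on search volume, then use current searches as a low-lag estimate:

```python
# Toy Flu-Trends-style proxy (synthetic data, clearly not the real model):
# fit case counts against search-term frequency, then use this week's
# searches as a low-lag estimate of this week's cases.
import numpy as np

searches = np.array([120, 150, 310, 480, 500, 260])  # weekly query volume
cases    = np.array([ 40,  55, 120, 190, 205, 100])  # lagged official counts

# Least-squares fit: cases ~ a * searches + b
a, b = np.polyfit(searches, cases, 1)
this_week_searches = 420
print(f"estimated cases now: {a * this_week_searches + b:.0f}")
```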

Trend is far more data, far more tools to make sense of it.

Sloan Digital Sky Survey - robot telescope with wide-angle lens trying to map entire universe. Have to make sense of this vast amount of raw data and condense it down into human-scale maps.

Algorithmic searches for eg dwarf galaxies, twin black holes.

Previous survey efforts produced photographic plates - could not share easily. SDSS is free online - reduce barrier to entry for asking interesting questions.

Projects like this are question-limited rather than data-limited.

Most scientists would prefer to share data, but there are strong negative incentives. Moloch whose blood is citation counts! Even SDSS has an exclusivity period.

Unrelated to current section, but suddenly struck by the contrast between normal vision of the future where scientists become tiny depressing cogs in a vast machine, and the vision here where they are microexperts whose unique talents are carefully aimed at matching problems. Not necessarily any actual difference day-to-day, just a different way of looking at value.

Unhitching observation from analysis. Rise of informatics.

Big projects are more likely to be open.

Data-driven intelligence - routine mechanical work applied at huge scale - broader than ML in that it doesn't require learning/inference, just massive-scale machine labor.

Freestyle chess is dominated by human-computer teams. Same for weather prediction, surprisingly.

Describes process of sequencing genomes. Not much brains, but a lot of brawn.

Data web. Currently not much open, not much linking. Slowly growing though.

Combining/linking data increases the number of questions which can be asked/answered non-linearly.
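
Tiny illustration with hypothetical catalogs - neither table alone can answer 'are the brightest objects also the reddest?', but the join can:

```python
# Hypothetical catalogs from two surveys. Neither answers the question
# alone; linking them on a shared object ID makes it answerable.
brightness = {"obj1": 14.2, "obj2": 18.9, "obj3": 15.1}   # survey A
colour     = {"obj1": 0.9,  "obj2": 0.3,  "obj4": 1.1}    # survey B

linked = {oid: (brightness[oid], colour[oid])
          for oid in brightness.keys() & colour.keys()}
print(linked)  # a new question becomes askable only after the join
```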

Change in the nature of explanation. May not always be some simple underlying theory. May be just a massive pile of correlations with good predictive power. Eg statistical translation still beating explicit linguistic models. Maybe the end for Occam's razor. Rationality: A-Z has a lot to say on the subject, from the point of view of how an AI would form theories about the world. Well worth reading again.

Democratizing science

Hanny's Voorwerp. Volunteer on Galaxy Zoo finds a strange blob. Asks in the forum - no one knows. Photo is verified. Other telescopes pointed in the same direction to confirm it really exists. Still unsolved.

Green pea galaxies. Originally a running joke in the forums. Some explained away but a few remain mysteries. Forum members learned spectral analysis to investigate. Professionals came and went but most of the work in the discovery was done by amateur volunteers.

Argued that when it comes to rationality large effect sizes on small groups are more important for the world right now than small effect sizes on large groups. Would that continue to be true if large-scale amateur science like this became more common?

FoldIt - crowdsourced protein folding. OS community for scripting interface. Tracking score lends weight to advice of high-scoring players. Taking advantage of microexpertise in 3d spatial reasoning. Biological understanding condensed into game interface so barrier to entry is low.

Open Dinosaur Project - crowdsourcing extracting data from paleo papers into structured db.

Galaxy Zoo classifications now used as training set for machine classifier.
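
Roughly what that pipeline looks like (hypothetical features and labels, sklearn as a stand-in for whatever they actually use):

```python
# Rough shape of the crowd-labels -> classifier pipeline (hypothetical
# features and labels; sklearn as a stand-in, not the actual system).
from collections import Counter
from sklearn.linear_model import LogisticRegression

# Several volunteers label each galaxy; majority vote gives the target.
votes = {
    "g1": ["spiral", "spiral", "elliptical"],
    "g2": ["elliptical", "elliptical", "elliptical"],
    "g3": ["spiral", "spiral", "spiral"],
}
labels = {g: Counter(v).most_common(1)[0][0] for g, v in votes.items()}

# Hypothetical per-galaxy features (eg ellipticity, colour index).
features = {"g1": [0.2, 0.9], "g2": [0.8, 0.3], "g3": [0.1, 1.0]}

X = [features[g] for g in votes]
y = [labels[g] for g in votes]
model = LogisticRegression().fit(X, y)
print(model.predict([[0.15, 0.95]]))  # classify new galaxies automatically
```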

Community building - rapid and helpful feedback is vital for maintaining motivation.

Cognitive surplus. New tools overcoming barriers + costs, letting more minds get involved in large, complex projects.

Managing large-scale collab without money - different structure+incentives from eg corporations.

Ideological opposition to Polio eradication. Measles outbreak in UK after bogus MMR-autism link.

Politically, science policy is treated as yet another interest to be placated rather than a source of actual knowledge.

Both education and markets have improved scientific literacy (markets by making benefits widely available - turning scientific discovery into a passive good).

Can we do more?

Open Access - very hard to become informed through expensive paywall. Last year I read several hundred CS papers at work. If they were priced like eg medical papers, that would have been ~$5000 which is 10% of median US household income.

Especially problematic for data-based intelligence - no affordable options for crawling.

arXiv, PLoS enable construction of new bridging institutes - can link directly to source.

In 2009 Elsevier made $1.1b profit on $3.2b revenue. For context, in 2009 the NSF asked for $6.85b

Much lobbying against Open Access eg want to make it illegal for funding bodies to require Open Access.

Singh libel case. BCA evidence torn apart in a day by a group of bloggers. Also acted as source of expert info for Singh’s legal team. ‘Wiki litigation’.

Tao’s blog has 10k readers.

Vision of science as a two-way medium instead of an untrusted loudspeaker. Certainly can't rely on the media to convey science accurately.

Ingenuity gap - the gap between society's problems and its capacity to solve them. Need to bridge it before we lose.

The challenge of doing science in the open

Fear of not being credited or being falsely accused of plagiarism.

Discoveries used to be closely guarded. Publish-or-perish incentivized sharing of discoveries by rewarding it with prestige and career advance. Scientific revolution could not have happened otherwise.

Bottleneck now is sharing ideas + data. Need to figure out how to provide similar incentives to share.

Qwiki failed - no incentive to be the first to contribute. Similar result for open peer review. Huge selective pressure to focus only on personal reputation.

Why do the programming equivalents work? Maybe reputation in OS comes from public display of knowledge, rather than discovery?

But Amazon reviews work. Sad that we have a situation where reviewing important scientific discoveries is less incentivised than reviewing Pokemon cartoons.

Existing collab projects are conservative - downstream goal is still publication. For the scientists at least - the volunteers are in it for the joy of discovery, or the need for cognition, or maybe just to climb the leaderboard.

Influences my thinking on direction - need to bear community influences in mind - eg are there areas of research that don't fit well into discrete publications?

This is why best examples of collective intelligence come from outside science - publish-or-perish produces too many negative incentives.

Similar disincentives socially - no respect for coordination or infrastructure work.

Patents, NDAs. Bayh-Dole act. Has an influence, but overweighted in discourse - not nearly as strong as publish-or-perish.

The open science imperative

Coordination problems, network effects.

Similar problems in early days of journals - solved by funders insisting on publication.

Compulsion - beginning to see Open Data policies as well as Open Access.

Easier upstream - funders don’t have to fear losing customers - publishers do.

Need social pressure to follow - otherwise will get bare minimum data dump and nothing else.

Incentives - reputation economy. Give public credit for data, code and infrastructure.

Climate Research Unit hack - working in public could lead to chilling effect on scientific thought, for fear of being used out of context.

More info = more confirmation bias. Fewer gatekeepers = fewer defenses against memetic misinformation. But we already have this problem - pseudoscience spreads all over the place while actual science is hard to access.

Increasing complexity of all fields may make open/collab science the only option.

Call to action.

Can we solve coord problem in other communities? Eg good quantified self research could be done by a small group with little funding, maybe act as a seed.

Thoughts

Most discussion I’ve seen on the subject of collective intelligence has been fuzzy daydreaming. This book benefits from being very concrete. It’s a cautious extrapolation from current trends and past examples.

I’m most interested in the section on collaboration. It’s far from being complete, but it introduced some good ideas and triggered more. Now thinking of collaborative work as consisting of:

Constant ongoing process. Have to pay attention to efficiency of each section eg modular software is easier to decompose into human-sized subproblems.

Most of the rest of the book consists of examples and ideas I’ve seen before, but seeing them all gathered in one place is valuable. Makes me realize that the traditional academic career is not the only path to contributing to science.

So here is an idea - find a narrow field, with low barriers to entry, that is currently under-served, where participants have some intrinsic motivation to get results but no active incentive to hoard them. Would it be possible to seed some interesting collaborative effort by working in the open and making it easy to get involved? Certainly works for open-source software, and there is a lot to learn from looking at successes and failures there. The rationality movement and quantified self seem like two good examples of semi-scientific efforts that have arisen outside of the normal scientific structure.