Mass-Collaborative Science

From P2P Foundation


Description

See:

"The Common Good," a new essay in Nature by consulting editor Philip Ball, explores the growing use of collaborative methods to build and evaluate scientific efforts.

Typology

By Jamais Cascio:

"Ball covers three broad categories of mass-collaborative science.

The first I would characterize as Mass Analysis, in which large numbers of people examine a set of data to find mistakes or hidden details. His best example is the NASA Clickworkers project, which used a large group of volunteers to examine maps of Mars and identify craters. It turned out that the collective crater-identification ability of volunteers given a small amount of training was as good as that of the best experts in the field. Ball links this directly to James Surowiecki's book, The Wisdom of Crowds, which argues that the collective decision-making power of large groups can be surprisingly good. WorldChanging's Nicole Boyer has mentioned The Wisdom of Crowds in a couple of her essays, most notably this week's The Wisdom of Google's Experiment. The ability of groups to act collectively to analyze and generate information is one of the drivers of collaborative efforts such as Wikipedia -- no individual contributor is an expert on everything, but the collected knowledge of the mass of authors is hard to beat.

The second model of collaborative science he discusses is Mass Evaluation, in which large numbers of people have the opportunity to vet articles and arguments by researchers. This approach is less quantitative and more subjective than collaborative analysis, but it can still produce high-quality results. Ball cites Slashdot and Kuro5hin as examples, with the mass of participants on those sites evaluating posts and comments and pushing the best material to the top. In the world of science, articles submitted to journals are regularly vetted by reviewers, but the set of evaluators for any given article is usually small. Ball cites the physics pre-print server arXiv as an exemplar of a countervailing trend -- open evaluation. ArXiv allows anyone to contribute articles, and lets participants evaluate them -- a true "peer review."

The third model Ball discusses is perhaps the most controversial -- Collaborative Research, in which research already in progress is opened up so that labs anywhere in the world can contribute experiments. The deeply networked nature of modern laboratories, and the brief down-time all labs have between projects, make this concept quite feasible. Moreover, such distributed-collaborative research spreads new ideas and discoveries even faster, ultimately accelerating the scientific process. Yale's Yochai Benkler, author of the well-known Coase's Penguin, or Linux and the Nature of the Firm, argues in a recent article in Science (pay access only) that such a method could be revolutionary. He calls it "peer production"; we've called it "open source" science, and have been talking about the idea since we started WorldChanging." (http://www.worldchanging.com/archives/001090.html)
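The "Mass Analysis" result Cascio describes -- minimally trained volunteers collectively matching expert crater counts -- follows from a simple statistical fact: a majority vote over many independent, better-than-chance judgments is far more reliable than any single judgment. The sketch below is purely illustrative (the probability 0.65, the crowd size 101, and the function name are assumptions, not figures from the Clickworkers project):

```python
# Illustrative sketch: majority voting over many noisy, independent
# labelers -- the "Mass Analysis" / wisdom-of-crowds idea in miniature.
# Each volunteer labels a candidate crater correctly with probability p;
# a simple majority of the crowd is right far more often than one person.
import random

def majority_vote_accuracy(p, n_voters, trials=10_000, seed=42):
    """Estimate how often a simple majority of n_voters independent
    labelers (each correct with probability p) gets the right answer."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        # Count how many of the n_voters voted for the true label.
        votes = sum(1 for _ in range(n_voters) if rng.random() < p)
        if votes > n_voters / 2:
            correct += 1
    return correct / trials

if __name__ == "__main__":
    # One mildly trained volunteer vs. a crowd of 101 such volunteers.
    print(f"single volunteer: {majority_vote_accuracy(0.65, 1):.2f}")
    print(f"crowd of 101:     {majority_vote_accuracy(0.65, 101):.2f}")
```

With these toy numbers the single volunteer is right about 65% of the time, while the crowd's majority verdict is right almost always -- the same aggregation effect, in spirit, that let Clickworkers volunteers collectively rival expert performance.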


More Information

  1. Science 2.0
  2. Open Notebook Science