MICHAEL Crichton wrote the Oscar-winning science fiction adventure Jurassic Park. But screenwriting was not his first career: he studied medicine at Harvard, and later in life became deeply concerned about environmentalism, science, and the difficulty of sorting fact from fiction. In a lecture to the Commonwealth Club in San Francisco in September 2003 he said:
“The greatest challenge facing mankind is the challenge of distinguishing reality from fantasy, truth from propaganda. Perceiving the truth has always been a challenge to mankind, but in the information age (or as I think of it, the disinformation age) it takes on a special urgency and importance.”
Certainly we are bombarded every day with information from the internet, radio and television, and making sense of it can be difficult.
Scientists are meant to know the difference between fact and fiction, and as a first check on the reliability of a source of information they will often ask if it has been “peer-reviewed”. Peer review means that research is conducted and its findings presented to a standard that other scientists working within that field consider acceptable. This is normally achieved through publication in a scientific journal and involves the editor of the journal asking other scientists to comment on the validity, significance and originality of the work before publication. In short, the system of peer review means scientific research is subject to independent scrutiny, but it doesn’t guarantee the truth of the research finding.
In theory, rebuttals play a role equal to, or even more important than, peer review in guaranteeing the integrity of science. By rebuttals I mean articles, also published in peer-reviewed journals, that show by means of contrary evidence and argument that an earlier claim was false. By pointing out flaws in scientific papers that have passed peer review, rebuttals, at least theoretically, enable scientific research programs to self-correct. But in reality most rebuttals are totally ignored, and so fashionable ideas often persist even after they have been disproven.
Consider, for example, a paper published in 2006 by the marine biologist Boris Worm and coworkers in the prestigious peer-reviewed journal Science. The study was based on a meta-analysis of published fisheries data and predicted the collapse of the world’s fisheries by 2048. Publication of the article by Worm et al. was accompanied by a media release entitled “Accelerated loss of ocean species threatens human well-being”, with the subtitle “Current trend projects collapse of all currently fished seafoods before 2050”.
Not surprisingly, given the importance of the finding, the article attracted widespread attention in the mainstream media and also within the scientific community. But not everyone agreed with the methodology used in the Worm study. Eleven rebuttals soon appeared, many within the same journal Science, and within months of the original article.
Ray Hilborn, a Professor in the School of Aquatic and Fishery Sciences, University of Washington, described the projection that all the world’s wild fisheries would have collapsed by 2048 as “fallacious and inappropriate to appear in a scientific journal”. Professor Hilborn objected, in particular, to the use of catch data as an indication of fish abundance. You see, rather than estimating the actual abundance of a particular species in a particular part of the ocean, the Worm study defined fisheries “collapse” as a drop in the catch of that species below 10 percent of the maximum catch ever recorded for that locality. Professor Hilborn explained that the number of fish caught may not in fact be an accurate reflection of abundance, because a low catch could be the result of management restrictions, for example the introduction of a fishery exclusion zone or even a ban on catching that species of fish. So, Professor Hilborn explained, a healthy, well-managed stock may in fact be classified as collapsed using the Worm et al. criteria.
Other rebuttals made similar comments, with Steven Murawski, the Director of Scientific Programs and Chief Scientific Advisor at the National Marine Fisheries Service in the USA, explaining that:
“A variety of biological, economic, and social factors and management decisions determine catches; low catches may occur even when stocks are high (e.g., due to low fish prices or the effects of restrictive management practices), and vice versa. The inadequacy of Worm et al.’s abundance proxy is illustrated by the time series of data for [the fish species] Georges Bank haddock (Melanogrammus aeglefinus).
“The highest catch for haddock occurred in 1965 at 150,362 tons. This catch occurred during a period of intense domestic and international fishing. In 2003, haddock catch was 12,576 tons, or 8% of the time series maximum. Under the Worm et al. definition, the stock would be categorized as collapsed in 2003. However, stock assessment data estimate the total magnitude of the spawning biomass in 2003 to be 91% of that in 1965. Comparing the estimate of spawning stock biomass in 2003 to the level producing maximum sustainable yield, the stock was not even being overfished in 2003.”
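The mismatch Murawski describes can be sketched numerically. The following is a minimal illustration using only the figures quoted above and the 10 percent threshold from the Worm et al. definition; the function name and structure are illustrative, not taken from the original study:

```python
def is_collapsed(catch, max_catch, threshold=0.10):
    """Worm et al. classify a stock as 'collapsed' when the catch
    falls below 10 percent of the maximum recorded catch."""
    return catch < threshold * max_catch

max_catch_1965 = 150_362   # tons, highest Georges Bank haddock catch (1965)
catch_2003 = 12_576        # tons, haddock catch in 2003 (8% of the maximum)

# By the catch-based proxy, the stock looks collapsed in 2003...
print(is_collapsed(catch_2003, max_catch_1965))   # prints True

# ...yet the stock assessment put 2003 spawning biomass at 91 percent
# of the 1965 level, so by a direct abundance measure it was healthy.
spawning_biomass_ratio = 0.91
print(spawning_biomass_ratio < 0.10)              # prints False
```

The example shows how a proxy built on catch alone can classify a stock as collapsed even when an independent measure of abundance indicates the opposite, which is precisely the flaw the rebuttals identified.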
Franz Hölker and coworkers wrote that the Worm study extrapolated far beyond the range of available observations, and that using a similar rationale it would be possible to predict 100 percent unemployment in Germany by 2056.
Dr Hölker agreed with Dr Worm that there had been an increase in the number of overexploited or depleted fish stocks, from about 10 percent in the mid-1970s to around 25 percent in the early 1990s, but suggested that the situation had stabilized since the early 1990s.
Given the very significant errors in the Worm study documented in these many rebuttals, it would seem reasonable to conclude that the original research paper, as published in the peer-reviewed journal Science, would have been quickly and totally discredited. But this was not the case at all. Dr Worm became quite famous, and his paper claiming the imminent collapse of the world’s fisheries was cited in other peer-reviewed publications on average 72.3 times each year for the three years after its publication, while the 11 rebuttals were cited on average just 1.5 times per year.
In reality, the rebuttals scarcely altered the scientific perception of the original article.
In a comprehensive study of this and six other high-profile original articles and their rebuttals, Jeannette Banobi, Trevor Branch and Ray Hilborn found that, at least in marine biology and fishery science, rebuttals are for the most part ignored.
They found that original articles were cited on average 17 times more often than rebuttals and that annual citation numbers were unaffected by rebuttals. On the occasions when rebuttals were cited, the citing papers on average held neutral views of the original article, and, remarkably, 8 percent actually treated the rebuttal as agreeing with the original article.
Dr Banobi and coworkers commented that:
“We had anticipated that as time passed, citations of the original articles would become more negative, and these articles would be less cited than other articles published in the same journal and year. In fact, support for the original articles remained undiminished over time and perhaps even increased, and we found no evidence of a decline in citations for any of the original articles following publication of the rebuttals…
“Thus the pattern we observed follows most closely the hypothesis of competing research programs espoused by Lakatos (1978): in practice, research programs producing and supporting the views in the original papers remained unswayed by the publication of rebuttals, thus significant changes in these ideas will tend to occur only if these research programs decay and dwindle over time while rival research programs (sponsored by the rebuttal authors) gain strength.”
But there is no guarantee that a rival research program will gain strength. In fact, given the increasing politicization and shrinking funding base of many scientific disciplines, it is perhaps more likely that popular research programs, however flawed, will become entrenched, and with them mistaken but popular ideas about the natural environment. And this can only hinder our capacity to respond in a timely and effective manner to environmental issues.
As Michael Crichton said in his 2003 lecture:
“What constitutes responsible [Environmental] action is immensely difficult, and the consequences of our actions are often difficult to know in advance. I think our past record of environmental action is discouraging, to put it mildly, because even our best intended efforts often go awry. But I think we do not recognize our past failures, and face them squarely.”
Indeed, it is naive to assume that scientific communities learn from obvious mistakes. And as past failures become more entrenched, it can only become increasingly difficult to distinguish truth from propaganda, including in the peer-reviewed literature.
Worm, B. et al. 2006. Impacts of Biodiversity Loss on Ocean Ecosystem Services. Science, Vol. 314, pp. 787-790.
Hilborn, R. 2007. The Projection from B. Worm et al. Science, Vol. 316, pp. 1281-1282.
Murawski, S. et al. 2007. Biodiversity Loss in the Ocean: How Bad Is It? Science, Vol. 316, p. 1281.
Hölker, F. et al. 2007. Comment on “Impacts of Biodiversity Loss on Ocean Ecosystem Services”. Science, Vol. 316, p. 1285.
Banobi, J. et al. 2011. Do Rebuttals Affect Future Science? Ecosphere, Vol. 2, pp. 1-11. http://www.esajournals.org/doi/abs/10.1890/ES10-00142.1