The official Global Burden of Disease Study 2010 (GBD 2010) report was released today to a gathering of enthusiastic scientists, clinicians, and public health officials at the Royal Society in London (research that I mentioned this morning in a post about University of Washington scholar Christopher Murray, seen in the photo). The project has been underway since 2007, and its results have been as eagerly anticipated by public health officials and practitioners as the Human Genome Project (HGP) was by geneticists and biologists. Yet however anticipated the release may be, a hesitancy to embrace the project openly remains.
Emma Veitch, the Acting Deputy Editor of PLoS Medicine and a Consulting Editor on PLoS ONE, writes that the “key differences between the two [HGP and GBD] projects is their approach to transparency and data sharing.” The HGP was open, and excitedly shared its findings. The GBD, by contrast, is reserved and cautious. For a project of this size, carrying such vital information for the betterment of public health, what is keeping Murray and his colleagues from joyously revealing the full spectrum of their findings? As with a lot of science, it can be summed up in a single word: uncertainty.
Veitch continues by noting that the GBD’s “data displays, however fun and nifty, do not equate to the type of transparency that is needed for full external scrutiny and reanalysis,” because, she says, research of this type is “subject to huge uncertainty.” Uncertainty about what, though?
Veitch recognizes that the results from a project of this sort are only “estimates,” and as such are subject to the test of time and further research. What confuses me, then, is that I thought all science was subject to these same conditions. If so, we must find a way to bridge the gap between basic research and policy creation.
This situation reminds me of the precautionary principle. The principle is an imperative: action should be taken, regardless of scientific uncertainty or lack of consensus, if not acting (or not disseminating the information) means the problem persists and continues to put others (i.e., the public) at risk. An argument against this is that acting without total certainty may end up doing equal harm. But we will never know either way. What results from this standoff between certainty and uncertainty? Policy gridlock.
We see this in climate science, resource management, education reform, and other issues at the science-policy interface. In fact, this is a very human problem: we have limited foresight, and our models of reality only get us so far in ensuring that a course of action is right and beneficial. The only way we continue forward is by using the precautionary principle (which, seen in this light, is a very intuitive notion). There is more at stake here than just wanting to make sure the findings are sound.
Murray, his team of researchers, and the entire community of scientists want more confidence in their work before sending it out to the world because, once again, this is ultimately an issue of how science is understood, not only as a practice but as an epistemology. Science is perceived as an enterprise (a community) that, when it speaks, has its facts straight about the subject at hand. This expectation is perpetuated when scientists balk at sharing work that is not fully girded with certainty. Thus, this is really an issue of cultural gridlock.
I argue that the only way to move forward, to shift this paradigm, is for scientists to chip away at this misperception by taking risks and informing the public and, perhaps more crucially, research funders that science simply is, and always will be, a work in progress. This should not, however, keep us from acting on the information we have.