Wednesday, May 9, 2012

Null results are useful results: frog declines and publication bias

An article in the latest issue of CFN (The Canadian Field-Naturalist) investigated Western Chorus Frog populations over time near Ottawa, Canada.  The authors conclude that the Chorus Frog may have declined a bit in this area, but just how much it declined is difficult to determine due to insufficient historical data:

"The lack of historical data makes it difficult to assess the current status of the Western Chorus Frog in western Ottawa. The species may have declined, remained approximately the same (by shifting to different breeding sites), or even increased its distribution (by colonizing additional sites)." (from the article Abstract)

This is an example of a null result in an ecological study.  I argue that: 1) null results are important, and 2) null results are seldom published by bigshot journals.

1) Null results are important.  Seburn & Gunson's study (2011) did not simply provide a shrug of the shoulders as to whether Chorus Frogs are declining near Ottawa.  Despite not finding Chorus Frogs at several historically occupied locations, they did find Chorus Frogs at many newly documented sites around Ottawa, which suggests there has not been a major population collapse here as there has been in other areas.  This is important because the data provided in this paper allow further research into Chorus Frog declines through meta-analyses (i.e., big review studies that combine data from lots of other studies).  Individual investigations published in journals provide the raw material for meta-analyses of large-scale trends (Stewart, 2010).  Are frogs declining more in some climates than others?  Is there a relationship between habitat fragmentation and frog declines?  What role does soil pH play (Seburn & Gunson note that declines seem to have been more common in areas with acidic soil)?  These types of questions can only be answered by meta-analyses that draw on multiple studies such as the one by Seburn & Gunson.
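To make the meta-analysis idea concrete, here is a minimal sketch of the standard fixed-effect approach: each study's effect size is weighted by the inverse of its variance, so precise studies count for more.  The effect sizes and variances below are invented for illustration; they are not from Seburn & Gunson or any real frog dataset.

```python
# Illustrative fixed-effect meta-analysis: pool effect sizes from
# several HYPOTHETICAL frog-decline studies by inverse-variance weighting.

def fixed_effect_meta(effects, variances):
    """Return the pooled effect and its variance.  Each study is
    weighted by 1/variance, so more precise studies count for more."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var

# Made-up effect sizes from three studies (negative = decline);
# note how the precise null-ish study (variance 0.01) pulls the
# pooled estimate toward zero.
effects = [-0.40, -0.10, 0.05]
variances = [0.04, 0.01, 0.09]
pooled, pooled_var = fixed_effect_meta(effects, variances)
print(pooled, pooled_var)
```

This is exactly why unpublished null results matter: leave the null-ish studies out of the inputs and the pooled estimate of decline is exaggerated.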

2) Null results are seldom published by bigshot journals.  Bigshot journals tend to publish grand, splashy conclusions that challenge our way of thinking.  Null results tend not to fall into this category.  Thus high-impact journals tend to publish "groundbreaking" studies with large effect sizes, while other journals publish the continued research on those topics, with smaller effect sizes and null results (Barto & Rillig, 2012).  Studies reporting null or weak effects are often of better scientific quality (e.g., larger sample sizes) than studies reporting grand effects (Barto & Rillig, 2012), suggesting the types of results published by high-impact journals are more prone to bias than those published by other journals.

This brings us to the "file drawer problem".  When researchers do their analyses and find a null result, often they toss this result into their file drawer (or computer folder nowadays) and never get around to writing it up for publication (Rosenthal, 1979).  The reasons are understandable.  Publishing an article takes a lot of work, and depending on a researcher's career status it may not be worth their while to publish a null result knowing that: a) it will likely not be published in a high-impact journal, and b) job prospects often depend on publishing in high-impact journals.  So individuals' incentives lead to a publication bias whereby research is less likely to be published if it produces a null result, regardless of the quality of the science.  This bias leads to the body of scientific research not accurately representing reality, since an entire segment of results is less frequently published.
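Rosenthal's (1979) paper actually quantifies the file drawer's threat with a "fail-safe N": how many unpublished zero-effect studies would have to be sitting in file drawers before a set of published significant findings lost its combined significance (at one-tailed p = .05, critical Z = 1.645)?  A small sketch of that arithmetic, using invented Z-scores:

```python
# Rosenthal's fail-safe N: number of unpublished null (Z = 0) studies
# needed to drag the Stouffer combined Z of k published studies down
# to the critical value.  Derivation: combined Z = sum(Z)/sqrt(k + X);
# setting this equal to z_crit and solving for X gives the formula below.

def fail_safe_n(z_scores, z_crit=1.645):  # 1.645 = one-tailed p = .05
    k = len(z_scores)
    sum_z = sum(z_scores)
    return (sum_z ** 2) / (z_crit ** 2) - k

# Five HYPOTHETICAL published studies, each individually significant
published = [2.0, 2.3, 1.8, 2.5, 2.1]
print(fail_safe_n(published))
```

If the fail-safe N is small relative to the number of labs working on a question, the published "effect" could plausibly be a file-drawer artifact; if it is huge, the finding is robust to unpublished nulls.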

While I don't have data on CFN's recent performance at publishing null results, a study of ecological journals during 1989-1995 found that we published more null results than average (Csada et al., 1996).  I'm glad we routinely publish null results.  They aren't sexy, but they contribute to our understanding of nature.  Isn't that supposed to be our goal?

REFERENCES:
Barto, E.K., & Rillig, M.C. 2012. Dissemination biases in ecology: effect sizes matter more than quality. Oikos 121:228-235.
Csada, R.D., James, P.C., & Espie, R.H.M. 1996. The "file drawer problem" of non-significant results: does it apply to biological research? Oikos 76:591-593.
Rosenthal, R. 1979. The "file drawer problem" and tolerance for null results. Psychological Bulletin 86:638-641.
Seburn, D., & Gunson, K. 2011. Has the Western Chorus Frog (Pseudacris triseriata) Declined in Western Ottawa, Ontario? The Canadian Field-Naturalist 125:220-226.
Stewart, G. 2010. Meta-analysis in applied ecology. Biology Letters 6:78-81.
