The following is an invited guest post by Dr. Matthew Chew. The commentary on bias in research published in Nature by Daniel Sarewitz has attracted many comments on the Nature site and on Ecolog*. We felt that the issue deserved further discussion; here, Dr. Chew presents his take.

Matthew K. Chew, Ph.D.
Arizona State University School of Life Sciences
Center for Biology and Society
http://asu.academia.edu/MattChew
Recently in Nature (485:149) my ASU colleague Daniel Sarewitz warned that research is being distorted by a ‘powerful cultural belief’ that ‘progress in science means the continual production of positive findings,’ generating a ‘pervasive bias’ due to ‘a lack of incentives to report negative results, replicate experiments or recognize inconsistencies, ambiguities and uncertainties.’ He judged the resulting flow of false, weak or too-narrowly positive findings to be effectively useless. Sarewitz went on to suggest that bias toward positive results introduced ‘systematic error’ and was likely to prevail ‘in any field that seeks to predict the behaviour of complex systems [including] economics, ecology, environmental science [and] epidemiology.’ Lumping various “e-” disciplines into a single culture may give their practitioners false comfort that Sarewitz is tarring research with too broad a brush. Nevertheless, ecologists would be rash to invoke insular exceptionalism as a pretext for dismissing his concerns.
What would constitute a ‘positive’ bias in ecology? Well, what hypothesis do ecologists want to support? Very rarely can one find an ecologist whose personal motivations exclude management goals of any kind. In the simplest terms, ecologists often want to save things. Leopold, Carson, Cousteau, Ehrlich, Attenborough, Wilson and others taught us well. Most of us knew before studying ecology how important it is to save things. With experience we’re likely to settle for demonstrating why a particular something could do with a bit of saving; but one way or another, ecologists want to help nature reveal what’s going wrong. It is axiomatic to ecologists that things are going wrong. Making that obvious to everyone is a primary motivation. No one will fret over a non-problem.
If Sarewitz and I are both correct, ecological bias is a double-edged sword. On one edge its influence renders ecological findings (especially predictions) suspect. On the other, that very bias provides a collective identity that draws ecologists together. During his stirring 2008 keynote address to the Ecological Society of America, Lord Robert May declared “This is us, not some natural event.” There was no mistaking the responsibilities being ‘us’ (humans: part of the problem) gave ‘us’ (ecologists) for finding a solution. Is being biased for good, important reasons acceptable if it actually renders our findings useless? My answer is a sympathetic, unequivocal no. So far, so good. But we have a more difficult bias to contemplate.
Having one foot in ecology and the other in history, I find that historians and ecologists face similar challenges: ad hoc methods; indirectly observed, inferred and partial data; actively uncooperative or poorly bounded objects of study; and the need to concoct a coherent narrative from disparate elements. We shouldn’t wonder at that, because ecology is a firmly rooted successor to Enlightenment natural history— the history, that is, of everything but people. Charles Elton’s 1927 definition of ecology as ‘scientific natural history’ might have been glib, but it profoundly bracketed the aspirations and prospects of the field.
History is not science. Historians embrace contingency, while scientists strive to eliminate it. Historians describe succeeding conditions, but do not view particular sequences as inevitable, repeatable, or typical. They hypothesize unique events based on unique actions under unique conditions. There are no laws of history unless teleological commitments (religious, ethical or philosophical) are deployed to interpret events. Historians simplify their stories by emphasizing important actors and events, but not to generate predictive models. Scientists seek to predict conditions or events under the deterministic assumption that given identical circumstances, what happened once will happen again. Replicating results makes science ahistorical. Matter and energy don’t grow or learn or make decisions. Mathematical rectitude is expected.
Many objects ecologists inherited from natural history—continents, climates, soils, species, organisms—are products of historical contingency. While subject to physical laws, they ‘lack’ interchangeability, even with themselves at other times. Some can learn and make decisions. They never quite repeat the past.
The crux of the problem: To make natural history scientific, ecologists must construct ahistorical objects, subsuming a mess of contingent exceptions under statistical approximations that apply in general but never in particular. The only alternative is treating historically contingent populations, ‘communities’ and ecosystems as ahistorical objects. Scientific simplifications—models—are meant to facilitate prospective manipulation. Our ‘science’ bias drives us to de-nature the natural objects and phenomena we seek to save, in order to analyse them by methods we are taught to deem appropriately scientific. Ecology is sometimes dismissed as a ‘soft’ science, but society demands hard facts. Appeasing society by de-naturing and de-historicizing natural history to make ecology harder leaves…what?
Ecology’s most intractable problems tend to bubble up repeatedly. Contingency is no exception. In his 1999 Oikos (84:177-192) review ‘Are there general laws in ecology?’ John H. Lawton concluded contingencies were relatively manageable in micro- and macro-scale studies, but the [community level] ‘middle ground is a mess’. He offered no solution beyond implying that community ecologists need to tolerate—even revel in—messiness. That seems an explicit admission that community-level studies are natural histories. It does not license sloppy research or reverting to qualitative romanticism, but it does require leaving ample room for ‘unknown unknowns’. Since conservation concerns and interventions often involve community scale processes, conservation must incorporate flexibility even in identifying basic goals and objectives. Rather than apologizing for ‘soft’ forecasting on a case-by-case basis as the ‘best available’ science, perhaps it’s time for ecologists to recognize contingency as one of our most robust, general, even positive findings.
* Search ‘sarewitz’ on Ecolog here.
10 thoughts on “The Positively Biased Life”
Great piece! My father was an ecology professor with two books on writing for the life sciences. The point was to write things that were NOT “useless” or, worse yet, were useful but incomprehensible (too much jargon, qualifiers, etc.).
I do think the author exaggerates how historians study the past as a bunch of “unique” events without anything “typical.” He is correct that we do have a sense of contingency – one that is difficult to impart to students until they have immersed themselves in the field.
Example: All of human (world) history changed in the matter of five minutes on a day in June 1942 – the Battle of Midway. Historians have studied the decision-making under fire on both sides, the intrusion of weather (so unpredictable), and all the rest. It came down to a 20-second decision by a junior commander flying above the entire Japanese fleet: he had them in sight, but his orders were to do something else. Seeing that the Japanese lacked fighter cover (they had killed all the other American planes), he ordered his men down to bomb before the Japanese could launch their other fighters. Within five minutes, Japan lost control of the Pacific and the war was going to be won by the USA. THAT is a predictive statement based on a study of naval warfare, control of seas and supplies, and the small odds (probability) of the Japanese finding any way around their coming strangulation by the US Navy and allied forces. Not deterministic, by any means, but so it is with much of natural science too.
I’m sending this around to historian friends who also enjoy science. More than you think!
Well put. The historical perspective adds fascinating and essential context to the issue. Coming from a molecular biology/human health background and having been recently introduced to academic ecology, I’ve been struck by an apparent conservation focus in the field. Now I’ve an idea where that comes from. Also, pointing at ecology’s inheritance of the natural history tradition (“the history, that is, of everything but people”) makes me think of passages from a really excellent Greenwire profile of Peter Kareiva (chief scientist of the Nature Conservancy):
I’m also reminded of a recent paper about biases in paper citation and meta-analysis in ecology (blogged here: “Are Ecologists Too Credulous?” by Mike Fowler at Nature Blogs).
I guess the take-home message is that scientists are human, too. Though we seem to be somewhat in denial about it.
Sandra, many ecologists aren’t conservation-focused. I wouldn’t venture to try and quantify the proportion who aren’t, but it’s substantial.
“Wait two years” (and see!). How true! I research and teach business and economic history. The “consensus” of economists was that the housing bubble of 1997-2007 could not possibly take down the whole economy. People ACTED on this “expert” consensus. I thought it was nuts (I’m writing a book on history of housing bubbles) and refused to buy the McMansions that other people were buying based on the consensus. “Wait a few years,” I said to skeptics. Lo and behold, the whole world has changed.
The historical element in any field introduces what philosophers call “fallibilism” – a sense that we might (just _might_) be wrong even though we think we are right. Or, as Lord Keynes once said in response to a man who accused him of changing his views on a topic:
Keynes: “Sir, when the facts change, I change my mind. What do you do?”
Jonathan Bean suggested that the naval battle for control of the WWII Pacific Theater pivoted around a single command decision. It also pivoted around several decisions made in late 1941 to deploy each of the US Pacific Fleet’s three aircraft carriers on various missions ahead of the Japanese attack on Pearl Harbor. Both are retrospective analyses. Many decisions contributed to the outcome of the war; some were necessary to it, but calling one sufficient (and predictive) seems like a stretch. It’s interesting, though, that we feel compelled (despite all precedent) to reduce complex phenomena to deterministic sequences. Quite often that seems directed at assigning blame or taking credit—both ‘positive’ results.
Sandra Chung’s anecdote about Peter Kareiva reminds me that Peter was one of the first to blog positively about a controversial commentary I co-authored for Nature last year (474:154-155). Perhaps that will prove to have been a pivot point, too; but many careers’ worth of work went into it.
That said, if anyone comes up with a better analysis, I will happily change my mind. It’s happened before.
Matt is correct. The problem with blog replies is that one cannot get into such fine detail. My key point was to agree that contingency and complexity are important.
Also, while prospective predictions from history are based on historical immersion and analogies (good or bad), I would not fall into the trap of overemphasizing the uniqueness of events. That overemphasis has led to bad decision-making, as with the housing bubble. “This time is different” is sometimes true, but when dealing with the flashing red lights of “bubble” deviations from “normal” pricing models, one ought to beware the hubris of “new” “positive” results based on fancy models untested by historical financial data. (I’m looking at you, James Glassman, co-author of _Dow 36,000_!)
I strongly object to your and Sarewitz’s use of the term “bias”. Frankly, it’s just silly to lump together every single obstacle to scientific progress as “bias”. Sarewitz uses bias to cover everything from the financial incentives drug companies have to fudge the results of drug trials, to the fact that experiments in model systems sometimes give different results than in other systems (you don’t say!), to any research that doesn’t have direct application to solving some human problem (!)* And now you’ve extended “bias” even farther, to include such things as statistical generalizations about community ecology!
Yes, science is done by fallible humans. Yes, individual scientists, and science as a whole (one of many important distinctions you and Sarewitz gloss over) can fail to get the right answer and go off track for all sorts of reasons. It’s always been thus. Tell me something I didn’t know. Despite all that, scientists seem to have gained a lot of perfectly objective, reliable, usable knowledge about how the world works since the 17th century. Including ecological knowledge. Are you claiming otherwise? If not, what’s your point? Lumping together all this stuff that you and Sarewitz lump together as “bias” amounts to saying “Science has a problem: it doesn’t always work perfectly.”
On my own blog, I write a fair bit about specific ways in which ecologists, or scientists generally, can go off track, and specific ways to prevent that from happening. Notably, different problems have different solutions, which have *nothing whatsoever to do with one another*. In contrast, you and Sarewitz are just lumping together everything not nailed down, calling it “bias” (a *very* loaded term), and pretending that you’ve made some kind of deep and hugely important point. Sorry, but I don’t see it. At all.
It’s ironic that a post that so casually lumps together so much different stuff under the heading of “bias” would take ecologists to task for failing to attend sufficiently to contingent, system-specific details.
*For anyone who doesn’t believe me: yes, Sarewitz really does explicitly equate fundamental research and biased research. Here’s the quote: “A biased scientific result is no different from a useless one. Neither can be turned into a real-world application.” Which is a transparent fallacy. Compare: “A biased scientific result is no different from an applied one. Neither increases our fundamental understanding of nature.” Or, “A human being is no different than a rock. Neither can fly.”
I’m inclined to agree with Jeremy Fox, although I’m not quite as miffed as he is. As scientists, pretty much everything we do is wrong. We are constrained by current (and thus soon to be out of date) techniques and lines of thinking. This may sound rather negative, but I believe our principal aim is to make our understanding of the world less wrong. If one views a particular piece of research as a complete description of a process and pattern, then yes, bias (to use your term) in that research is hugely problematic. But I don’t think we should, or do, look at things that way.
Aristotle was wrong, Newton was wrong, de Lamarck was wrong, Darwin was wrong. Yes all these individuals have made a massive contribution to our understanding of the world. My PhD work involved re-doing some early work by Metschnikoff on the crustacean Daphnia. This particular aspect of his work rested on some flawed assumptions which I showed to be false. Am I glad he published this incomplete view? You bet I am. You can also be sure that we are all making assumptions that will give scientists of the future something to work on. In short this quotation is, as Jeremy writes, completely false: “A biased scientific result is no different from a useless one. Neither can be turned into a real-world application.”
Also, for the record, the term “real-world application” makes me hold my head in my hands. Much of the work we do is not applied, and yet I think all of us are trying to understand the real world as best we can.