Science seeks the truth but frequently does not arrive there, at least at first. The consequences can be mistaken beliefs and misguided practices that become so entrenched that they are almost regarded as laws of nature. The practical effects of science gone wrong can range from bad medicine to bad policy to microfinance dollars misdirected.
Jonah Lehrer, writing in The New Yorker last December (yes, I recognize that is not a peer-reviewed journal), cites some of the problems of science, and I borrow heavily from his piece. Scientific journals – not just People magazine – like exciting stories. This biases them toward publishing surprising findings with statistically significant positive results. One study showed that in 97% of all published studies, the authors found the result they had hypothesized – an outcome attributable both to authors’ and journals’ preference for publishing only positive (not “null”) results. This creates a skew away from objectivity. John Ioannidis, of Stanford, examined the 49 most cited articles in three major medical journals. Most of these were widely heralded randomized controlled trials. Forty-five showed positive results.
These positive biases can set off scientific feeding frenzies to find more and more nuance related to new, hot scientific “findings.” Alas, these follow-ups, too, tend to reinforce questionable results, for identical reasons. Statistical analyses suggest that small follow-up studies, rather than reporting results randomly distributed around a true mean as statistics tells us they should be, overwhelmingly report results that are more positive. Even authors of highly original scientific claims find that they cannot easily secure a publication outlet for their own subsequent studies that weaken or contradict those claims.
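The skew described above is easy to demonstrate with a toy simulation (the numbers here are hypothetical for illustration, not drawn from any study cited in this piece): if only studies whose measured effect clears an “exciting” threshold get published, the average published effect overstates the true one, even though the studies themselves are unbiased.

```python
import random

random.seed(0)

TRUE_EFFECT = 0.2   # the real underlying effect size (assumed for illustration)
NOISE = 1.0         # sampling noise in each small study
N_STUDIES = 10_000

all_results = []
published = []
for _ in range(N_STUDIES):
    # Each study measures the true effect plus random sampling noise.
    observed = random.gauss(TRUE_EFFECT, NOISE)
    all_results.append(observed)
    # Only clearly positive, "exciting" results make it into print.
    if observed > 0.5:
        published.append(observed)

mean_all = sum(all_results) / len(all_results)
mean_pub = sum(published) / len(published)
print(f"true effect:               {TRUE_EFFECT}")
print(f"mean of all studies:       {mean_all:.2f}")
print(f"mean of published studies: {mean_pub:.2f}")
```

The mean across all simulated studies sits close to the true effect, while the mean of the “published” subset is several times larger – which is exactly why small follow-up studies drawn from the literature keep confirming inflated findings.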
How can these biases be so prevalent? For one, science is less objective than we may believe. As an example: Does acupuncture work? Over a thirty-year period, forty-seven studies in Japan and China showed that it did — every time. During the same period, studies in the West showed positive effects of acupuncture only half the time. Why? Because our expectations shape our perceptions. And this phenomenon holds in medicine, genetics, psychology, and presumably every other field of study. So why not microfinance studies?
One way these insidious influences creep into scientific outcomes is through measurement error. Even in randomized controlled trials, frequently considered the gold standard in microfinance evaluation, there may still be a need to measure subjective “outcomes” like “made a good business investment” or “benefited from training.” Evaluators may see what they want. Another problem is experiments that are simply not well designed.
As a practical matter, perhaps the biggest risk of making key go/no-go decisions on the basis of scientific evidence alone is that mistakes are rarely noticed, and more rarely undone. Of the 49 studies Ioannidis looked at (remember, these were the most cited in medicine), only thirty-four were replicated, and of those, two-fifths were contradicted or showed markedly smaller effects. Worse, he looked at nearly 450 claims about the effects of various genes on disease. Of these, only one was consistently replicable – less than ¼ percent of the sample.
Unfortunately, these results often become scientific lore, difficult or impossible to dislodge and continuing to influence clinical practice even after they have been disproven or weakened.
Add to this problems with methodology. David Roodman reported that the Journal of Money, Credit and Banking found, upon trying to replicate the results it had published, that errors were commonplace, even if they didn’t always materially affect an article’s ultimate findings. The context for Roodman’s report is itself a rather testy, public debate about math and causality that he and Jonathan Morduch found themselves in with Pitt and Khandker when they tried to replicate the latter’s microfinance results using the same data and methods.
What is the sum of all these observations?
1. The endeavor of doing science and publishing scientific results is fraught with mistakes and biases, despite all efforts to avoid them.
2. Mistaken ideas get extended, developed, and can take on the status of scientific fact.
3. Policy decisions flow from these results (which can lead to bad medicine or bad microfinance).
4. Once mistakes achieve such status, they are undone far more rarely than we’d hope.
Given this situation, can we afford to place all our eggs in the basket of scientific experiment? Isn’t it wise to listen to voices of the field, even if we know they might be shading the truth to bring benefits to themselves?
I’m no science-basher. I am a scientist. But the lives of desperately poor people are at stake. If we continue to provide support for methods of microfinance that aren’t completely effective, our losses will be modest compared to discontinuing support for ideas that are helping the poor lead more productive lives, with more dignity, even if science tells us they shouldn’t be.