From the New York Times:
"But the primary problem is that nutrition policy has long relied on a very weak kind of science: epidemiological, or “observational,” studies in which researchers follow large groups of people over many years. But even the most rigorous epidemiological studies suffer from a fundamental limitation. At best they can show only association, not causation. Epidemiological data can be used to suggest hypotheses but not to prove them."
I remember being in a discussion once about the safety of GMO foods when someone remarked that epidemiological studies were unreliable. I was really perplexed that someone could throw an entire field like epidemiology under the bus. They were most likely thinking about some of the claims made in an article I have referred to here recently:
Deming, data and observational studies: A process out of control and needing fixing. 2011 Royal Statistical Society. (link)
“Any claim coming from an observational study is most likely to be wrong.” Startling, but true. Coffee causes pancreatic cancer. Type A personality causes heart attacks. Trans-fat is a killer. Women who eat breakfast cereal give birth to more boys. All these claims come from observational studies; yet when the studies are carefully examined, the claimed links appear to be incorrect. What is going wrong? Some have suggested that the scientific method is failing, that nature itself is playing tricks on us. But it is our way of studying nature that is broken and that urgently needs mending, say S. Stanley Young and Alan Karr; and they propose a strategy to fix it."
This criticism of observational studies is interesting. I have discussed this before in relation to wellness program evaluation studies and generally concluded that those criticisms sort of look like straw-man arguments trying to hold observational studies to the gold standard of the randomized clinical trial. Obviously you would expect results from an RCT to be more reliable and replicable. Does this mean that we refuse to ask interesting questions or analyze important policy decisions just because an RCT is too expensive or impractical, when there are quasi-experimental approaches that could be used to advance our knowledge and understanding of important issues? Or in these cases are we just better off basing our understanding of the world on circumstantial and anecdotal evidence alone?
Jayson Lusk has blogged about this recently:
"What about the health impacts of meat consumption? It is true that many observational, epidemiological studies show a correlation between red meat eating and adverse health outcomes (interestingly there is a fair amount of overlap on the authors of the dietary studies and the environmental studies on meat eating). But, this is a pretty weak form of evidence, and much of this work reminds me of the kinds of regression analyses done in the 1980s and 90s in economics before the so-called 'credibility revolution.'"
So, perhaps some bad practices associated with p-hacking and a lack of identification have given epidemiological research, and observational studies in general, a worse reputation than they deserve. And if Jayson is correct, a lot of these poor study designs lacking identification may have misled us about our diets and healthy food choices.
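The mechanism Young and Karr point to is largely multiple testing: screen enough exposures against enough outcomes and some "significant" associations will turn up by chance alone. A minimal sketch of this (my own illustration, not from any of the studies above; all names hypothetical, standard library only) correlates many unrelated "food exposures" with a pure-noise health outcome and counts how many clear p < 0.05:

```python
# Illustration of multiple testing: with a 0.05 threshold, roughly 5% of
# purely unrelated exposures will look "significant" by chance.
import math
import random

random.seed(42)

def corr_pvalue(x, y):
    """Approximate two-sided p-value for a Pearson correlation,
    using the Fisher z-transform and a normal approximation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    r = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)
    z = 0.5 * math.log((1 + r) / (1 - r)) * math.sqrt(n - 3)
    # two-sided tail probability under the standard normal
    return 1 - math.erf(abs(z) / math.sqrt(2))

n_subjects, n_foods = 200, 100
outcome = [random.gauss(0, 1) for _ in range(n_subjects)]  # noise, no real effect

hits = 0
for _ in range(n_foods):
    food = [random.gauss(0, 1) for _ in range(n_subjects)]  # unrelated exposure
    if corr_pvalue(food, outcome) < 0.05:
        hits += 1

print(f"{hits} of {n_foods} unrelated 'foods' significant at p < 0.05")
```

Run it and a handful of spurious "coffee causes X"-style findings fall out of pure noise, which is exactly why a single unadjusted observational result is weak evidence.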