Tuesday, July 11, 2017

The Credibility Revolution in Econometrics

Previously I wrote about how graduate training (and experience) can provide a foundation for understanding statistics, experimental design, and interpretation of research. I think this is common across many master's and doctoral level programs. But some programs approach this a little differently than others. Because of the credibility revolution in economics, there is a special concern for identification and robustness. And even within the discipline, there is concern that this has not been given enough emphasis in modern textbooks and curricula (see here and here). However, this may not be well understood or appreciated by those outside the discipline.

What is the credibility revolution and what does it mean in terms of how we do research?

I like to look at this through the lens of applied economists working in the field:

Economist Jayson Lusk puts it well:

"Fortunately economics (at least applied microeconomics) has undergone a bit of credibility revolution.  If you attend a research seminar in virtually any economist department these days, you're almost certain to hear questions like, "what is your identification strategy?" or "how did you deal with endogeneity or selection?"  In short, the question is: how do we know the effects you're reporting are causal effects and not just correlations."

Healthcare Economist Austin Frakt has a similar take:

"A “research design” is a characterization of the logic that connects the data to the causal inferences the researcher asserts they support. It is essentially an argument as to why someone ought to believe the results. It addresses all reasonable concerns pertaining to such issues as selection bias, reverse causation, and omitted variables bias. In the case of a randomized controlled trial with no significant contamination of or attrition from treatment or control group there is little room for doubt about the causal effects of treatment so there’s hardly any argument necessary. But in the case of a natural experiment or an observational study causal inferences must be supported with substantial justification of how they are identified. Essentially one must explain how a random experiment effectively exists where no one explicitly created one."

How do we get substantial justification? Angrist and Pischke give a good example in their text Mostly Harmless Econometrics, in their discussion of fixed effects and lagged dependent variables:

"One answer, as always is to check the robustness of your findings using alternative identifying assumptions. That means you would like to find broadly similar results using plausible alternative models." 

To someone trained in the physical or experimental sciences, this might look like data mining. But Marc Bellemare makes a strong case that it is not:

"Unlike experimental data, which often allow for a simple comparison of means between treatment and control groups, observational data require one to slice the data in many different ways to make sure that a given finding is not spurious, and that the researchers have not cherry-picked their findings and reported the one specification in which what they wanted to find turned out to be there. As such, all those tables of robustness checks are there to do the exact opposite of data mining."

That's what the credibility revolution is all about.

See also: 

Do Both! (by Marc Bellemare)
Applied Econometrics
Econometrics, Multiple Testing, and Researcher Degrees of Freedom