Friday, July 21, 2017

Regression as a variance based weighted average treatment effect

In Mostly Harmless Econometrics, Angrist and Pischke discuss regression in the context of matching. Specifically, they show that regression provides a variance based weighted average of covariate specific differences in outcomes between treatment and control groups, while matching gives us an average of those differences weighted by the empirical distribution of covariates (see more here). I wanted to roughly sketch this logic out below.

Matching

δATE = E[y1i | Xi, Di = 1] − E[y0i | Xi, Di = 0]

This gives us the average difference in mean outcomes between treatment and control groups, assuming (y1i, y0i) ⊥ Di, i.e. as in a randomized controlled experiment where potential outcomes are independent of treatment status.

We represent the matching estimator empirically by:

Σx δx P(Xi = x), where δx is the difference in mean outcomes between treatment and control units at a particular value of X, i.e. the difference in outcomes for a particular combination of covariates. This assumes (y1, y0) ⊥ Di | Xi, i.e. conditional independence, so identification is achieved through a selection on observables framework.


The average differences δx are weighted by the distribution of covariates via the term P(Xi = x).

Regression

We can represent a regression parameter using the basic formula taught to most undergraduates:

Single Variable: β = cov(y,D)/v(D)
Multivariable:  βk = cov(y,D*)/v(D*)

where D* is the residual from a regression of D on all other covariates. More generally, E(X'X)^-1 E(X'y) is a vector whose kth element is cov(y,x*)/v(x*), where x* is the residual from a regression of that particular 'x' on all other covariates.
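
As a quick check of this 'regression anatomy' result, here is a minimal sketch in R. The simulated data generating process and the variable names below (x, d, y, d_star) are made up purely for illustration:

# regression anatomy: the coefficient on d from a multiple regression
# equals cov(y, d*)/v(d*), where d* is the residual from regressing d on x
set.seed(123)
x <- rnorm(100)
d <- rbinom(100, 1, plogis(x))      # treatment status related to x
y <- 1 + 2*x + 0.5*d + rnorm(100)   # outcome with a treatment effect of 0.5

d_star <- resid(lm(d ~ x))          # residualize d with respect to x
cov(y, d_star)/var(d_star)          # regression anatomy formula
coef(lm(y ~ x + d))["d"]            # same value from the multiple regression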

We can then represent the estimated treatment effect from regression as:

δR = cov(y,D*)/v(D*) = E[(Di − E[Di|Xi]) yi] / E[(Di − E[Di|Xi])^2], assuming (y1, y0) ⊥ Di | Xi

Again regression and matching rely on similar identification strategies based on selection on observables/conditional independence.

Let E[yi | Di, Xi] = E[yi | Di = 0, Xi] + δx Di

Then with more algebra we get: δR = cov(y,D*)/v(D*) = E[σ^2D(Xi) δx] / E[σ^2D(Xi)]

where σ^2D(Xi) is the conditional variance of treatment D given X, i.e. σ^2D(Xi) = E[(Di − E[Di|Xi])^2 | Xi].
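
For completeness, here is a rough sketch of the intermediate algebra, following the argument in Mostly Harmless Econometrics. The key steps are iterated expectations and the fact that E[yi | Di = 0, Xi] is a function of Xi alone, so it is uncorrelated with Di − E[Di|Xi]:

\delta_R = \frac{E[(D_i - E[D_i \mid X_i])\, y_i]}{E[(D_i - E[D_i \mid X_i])^2]}
         = \frac{E[(D_i - E[D_i \mid X_i])\,(E[y_i \mid D_i = 0, X_i] + \delta_{X_i} D_i)]}{E[\sigma^2_D(X_i)]}
         = \frac{E[\delta_{X_i}\, E[(D_i - E[D_i \mid X_i]) D_i \mid X_i]]}{E[\sigma^2_D(X_i)]}
         = \frac{E[\sigma^2_D(X_i)\, \delta_{X_i}]}{E[\sigma^2_D(X_i)]}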

While the algebra is cumbersome and notation heavy, we can see that the way most people are familiar with viewing a regression estimate, cov(y,D*)/v(D*), is equivalent to the expectation E[σ^2D(Xi) δx] / E[σ^2D(Xi)], and that this term weights our covariate specific differences between treatment and control, δx, by the conditional variance of D.

Hence, regression gives us a variance based weighted average treatment effect, whereas matching provides a distribution weighted average treatment effect.

So what does this mean in practical terms? Angrist and Pischke explain that regression puts more weight on covariate cells where the conditional variance of treatment status is greatest, i.e. where there are roughly equal numbers of treated and control units. They also note that the choice of weighting matters little when δx varies minimally across covariate combinations.

In his post The cardinal sin of matching, Chris Blattman puts it this way:

"For causal inference, the most important difference between regression and matching is what observations count the most. A regression tries to minimize the squared errors, so observations on the margins get a lot of weight. Matching puts the emphasis on observations that have similar X’s, and so those observations on the margin might get no weight at all....Matching might make sense if there are observations in your data that have no business being compared to one another, and in that way produce a better estimate" 

Below is a very simple contrived example (the data are generated in the R code at the end of this post). Those in the treatment group tend to have higher outcome values, so a straight comparison between treatment and control groups will overestimate the treatment effect due to selection bias:

E[Yi | Di = 1] − E[Yi | Di = 0] = E[Y1i − Y0i | Di = 1] + {E[Y0i | Di = 1] − E[Y0i | Di = 0]}

However, if we estimate differences based on an exact matching scheme, we get a much smaller estimate of .67. If we run a regression using all of the data we get .75. If we consider the naive difference of 3.78 to be biased upward, then both matching and regression have substantially reduced it, and depending on the application the difference between .67 and .75 may not be of great consequence. Of course, if we run the regression using only the matched observations, we get exactly the same result as matching (see the R code below). This is not so different from the method of trimming based on propensity scores suggested in Angrist and Pischke.


Both methods rely on the same assumptions for identification, so no one can argue the superiority of one method over the other with regard to identification of causal effects.

Matching has the advantage of being nonparametric, alleviating concerns about functional form. However, there are lots of considerations to work through in matching (e.g. 1:1 vs. 1:many matching, optimal caliper width, the variance/bias tradeoff, kernel selection, etc.). While all of these choices might lead to better estimates, I wonder if they don't sometimes lead to a garden of forking paths.

See also: 

For a neater set of notes related to this post, see:

Matt Bogard. "Regression and Matching (3).pdf" Econometrics, Statistics, Financial Data Modeling (2017). Available at: http://works.bepress.com/matt_bogard/37/

Using R MatchIt for Propensity Score Matching

R Code:

# generate demo data
x <- c(4,5,6,7,8,9,10,11,12,1,2,3,4,5,6,7,8,9)
d <- c(1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0)
y <- c(6,7,8,8,9,11,12,13,14,2,3,4,5,6,7,8,9,10)

summary(lm(y~x+d)) # regression controlling for x
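
To tie the code back to the numbers quoted above, here is a sketch extending the demo. The cell-level calculations, the object names (cells, delta, p_d, v_d, p_x, m), and the factor(x) specification are my own additions for illustration:

mean(y[d == 1]) - mean(y[d == 0])   # naive difference in means (~3.78), biased upward by selection

# exact matching on x: differences at values of x with both treated and control units
cells <- intersect(x[d == 1], x[d == 0])
delta <- sapply(cells, function(v) mean(y[d == 1 & x == v]) - mean(y[d == 0 & x == v]))
mean(delta)                         # matching estimate (~0.67); each matched cell has equal mass here

# regression using only the matched observations reproduces the matching estimate
m <- x %in% cells
summary(lm(y[m] ~ x[m] + d[m]))

# variance based weighting: weight each cell's difference by the conditional variance of d
# (cells without overlap have zero conditional variance, so they can be dropped)
p_d <- sapply(cells, function(v) mean(d[x == v]))   # P(D=1 | X=x) within matched cells
v_d <- p_d * (1 - p_d)                              # conditional variance of D given X=x
p_x <- sapply(cells, function(v) mean(x == v))      # P(X=x)
sum(v_d * p_x * delta)/sum(v_d * p_x)               # variance weighted average
coef(lm(y ~ factor(x) + d))["d"]                    # matches the saturated regression coefficient

In this contrived example every matched cell has exactly one treated and one control unit, so the distribution-weighted and variance-weighted averages coincide at .67; with unbalanced cells they would generally differ. The full-data regression with x entered linearly gives .75 rather than .67 because it is not saturated in x, so the weighting result above holds only approximately for that specification.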

Wednesday, July 12, 2017

Instrumental Variables and LATE

Often in program evaluation we are interested in estimating the average treatment effect (ATE).  This is in theory the effect of treatment on a randomly selected person from the population. This can be estimated in the context of a randomized controlled trial (RCT) by a comparison of means between treated and untreated participants.

However, sometimes in a randomized experiment some members selected for treatment may not actually receive it (if participation is voluntary, as in the Oregon Medicaid expansion, for example). In this case, researchers will sometimes compare outcomes between those assigned to treatment and those assigned to control, regardless of the treatment actually received. This 'as assigned' or 'as randomized' analysis is referred to as an intent-to-treat (ITT) analysis. With perfect compliance, ITT = ATE.

As discussed previously, using treatment assignment as an instrumental variable (IV) for treatment received is another approach to estimating treatment effects. The resulting estimate is referred to as a local average treatment effect (LATE).

What is LATE and how does it give us an unbiased estimate of causal effects?

In simplest terms, LATE is the ATE for the sub-population of compliers in an RCT (or other natural experiment where an instrument is used).

In a randomized controlled trial you can characterize participants as follows (see this reference from egap.org for a really great primer on this):

Never Takers: those that refuse treatment regardless of treatment/control assignment.

Always Takers: those that get the treatment even if they are assigned to the control group.

Defiers: those that get the treatment when assigned to the control group and do not receive treatment when assigned to the treatment group (these people violate an IV assumption referred to as monotonicity).

Compliers: those that comply, i.e. receive treatment if assigned to the treatment group but do not receive treatment when assigned to the control group.

The outcomes for never takers are the same regardless of treatment assignment, so they in effect cancel out in an IV analysis. As discussed by Angrist and Pischke in Mastering Metrics, the always takers are prime suspects for creating bias in non-compliance scenarios. These folks are typically the more motivated participants and likely have higher potential outcomes, or potentially a greater benefit from treatment, than other participants. The compliers are the participants who receive treatment only as a result of random assignment. The estimated treatment effect for these folks is often very relevant, and in an IV framework it gives us an unbiased causal estimate of the treatment effect. This is what is referred to as a local average treatment effect, or LATE.

How do we estimate LATE with IVs?

One way to describe the LATE estimate is as the ITT effect divided by the share of compliers. This can also be done in a regression context. Let D be an indicator equal to 1 if treatment is received and 0 otherwise, and let Z be our (0,1) indicator for the original randomization, i.e. our instrumental variable. We first regress:

D = β0 + β1 Z + e  

This captures all of the variation in our treatment that is related to our instrument Z, i.e. random assignment; this is the 'quasi-experimental' variation. β1 is also an estimate of the rate of compliance: it picks up only the variation in treatment D that is related to Z and leaves all of the variation and unobservable factors related to self-selection (i.e. bias) in the residual term. You can think of this as a filtering process. We can represent β1 as COV(D,Z)/V(Z).

Then, to relate changes in Z to changes in our target Y we estimate β2  or COV(Y,Z)/V(Z).

Y = β0 + β2 Z + e
Our instrumental variable estimator then becomes:
βIV = β2 / β1, or (Z'Z)^-1 Z'Y / (Z'Z)^-1 Z'D, or COV(Y,Z)/COV(D,Z)

The last term rescales the reduced form relationship between Z and Y by the 'quasi-experimental' variation in D induced by Z. We can also view this through a 2SLS modeling strategy:


Stage 1: Regress D on Z, i.e. D = β0 + β1 Z + e, and obtain the fitted values D*

Stage 2: Regress Y on D*, i.e. Y = β0 + βIV D* + e
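
Here is a minimal simulated sketch in R of these ideas. The data generating process, the 60% complier share, and the true complier treatment effect of 2 are made-up values for illustration, and for simplicity the only non-compliers are never takers:

# simulated non-compliance: random assignment z, treatment received d
set.seed(42)
n <- 10000
z <- rbinom(n, 1, 0.5)                   # random assignment (the instrument)
complier <- rbinom(n, 1, 0.6)            # complier status (unobserved in practice)
d <- ifelse(complier == 1, z, 0)         # never takers never receive treatment
y <- 1 + 2*d + 0.5*complier + rnorm(n)   # compliers also have higher baseline outcomes

coef(lm(y ~ d))["d"]                     # naive as-treated comparison, biased upward
itt <- coef(lm(y ~ z))["z"]              # intent-to-treat (reduced form)
fs  <- coef(lm(d ~ z))["z"]              # first stage = compliance rate (~0.6)
itt/fs                                   # Wald/IV estimate of LATE (~2)

# 2SLS by hand: regress y on the fitted values from the first stage
d_hat <- fitted(lm(d ~ z))
coef(lm(y ~ d_hat))["d_hat"]             # same point estimate as the Wald ratio

The second-stage standard errors from lm are not correct for 2SLS; in practice something like ivreg from the AER package handles that, but the point estimates above are enough to illustrate the logic.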

 As described in Mostly Harmless Econometrics, "Intuitively, conditional on covariates, 2SLS retains only the variation in s [D  in our example above] that is generated by quasi-experimental variation- that is generated by the instrument z"

Regardless of how you want to interpret βIV, we can see that it teases out only the variation in our treatment D that is unrelated to selection bias and relates it to Y, giving us a less biased estimate of the treatment effect of D.

The causal path can be represented as:

Z → D → Y

There are lots of other ways to think about how to interpret IVs. Ultimately they provide us with an estimate of the LATE, which can be interpreted as the average causal effect of treatment for those participants whose enrollment status is determined completely by Z (the treatment assignment), i.e. the compliers, and this is often a very relevant effect of interest.

Marc Bellemare has some really good posts related to this; see here, here, and here.


Tuesday, July 11, 2017

The Credibility Revolution in Econometrics

Previously I wrote about how graduate training (and experience) can provide a foundation for understanding statistics, experimental design, and interpretation of research. I think this is common across many master's and doctoral level programs. But some programs approach this a little differently than others. Because of the credibility revolution in economics, there is a special concern for identification and robustness. And even within the discipline, there is concern that this has not been given enough emphasis in modern textbooks and curricula (see here and here). However, this may not be well understood or appreciated by those outside the discipline.

What is the credibility revolution and what does it mean in terms of how we do research?

I like to look at this through the lens of applied economists working in the field:

Economist Jayson Lusk puts it well:

"Fortunately economics (at least applied microeconomics) has undergone a bit of credibility revolution.  If you attend a research seminar in virtually any economist department these days, you're almost certain to hear questions like, "what is your identification strategy?" or "how did you deal with endogeneity or selection?"  In short, the question is: how do we know the effects you're reporting are causal effects and not just correlations."

Healthcare Economist Austin Frakt has a similar take:

"A “research design” is a characterization of the logic that connects the data to the causal inferences the researcher asserts they support. It is essentially an argument as to why someone ought to believe the results. It addresses all reasonable concerns pertaining to such issues as selection bias, reverse causation, and omitted variables bias. In the case of a randomized controlled trial with no significant contamination of or attrition from treatment or control group there is little room for doubt about the causal effects of treatment so there’s hardly any argument necessary. But in the case of a natural experiment or an observational study causal inferences must be supported with substantial justification of how they are identified. Essentially one must explain how a random experiment effectively exists where no one explicitly created one."

 How do we get substantial justification? Angrist and Pischke give a good example in their text Mostly Harmless Econometrics in their discussion of fixed effects and lagged dependent variables:

"One answer, as always is to check the robustness of your findings using alternative identifying assumptions. That means you would like to find broadly similar results using plausible alternative models." 

To someone trained in the physical or experimental sciences, this might appear to be data mining. But Marc Bellemare makes a strong case that it is not!

"Unlike experimental data, which often allow for a simple comparison of means between treatment and control groups, observational data require one to slice the data in many different ways to make sure that a given finding is not spurious, and that the researchers have not cherry-picked their findings and reported the one specification in which what they wanted to find turned out to be there. As such, all those tables of robustness checks are there to do the exact opposite of data mining."

That's what the credibility revolution is all about.

See also: 

Do Both! (by Marc Bellemare)
Applied Econometrics
Econometrics, Multiple Testing, and Researcher Degrees of Freedom








Monday, July 10, 2017

The Value of Graduate Education....and Experience

What are some of the additional benefits of graduate study? What if you just skipped the time, money, and energy spent in graduate school and went straight to writing code?

This made me think of a Talking Biotech podcast with Kevin Folta discussing the movie Food Evolution. Toward the end they discussed some critiques of the film, including a common critique of research in general: bias due to conflicts of interest. Kevin states:

"I've trained for 30 years to be able to understand statistics and experimental design and interpretation...I'll decide based on the quality of the data and the experimental design....that's what we do."

Besides taking on the criticisms of science, this emphasized two important points.

1) Graduate study teaches you to understand statistics, experimental design, and interpretation, and this requires a new way of thinking. At the undergraduate level I learned some basics that were quite useful for empirical work. In graduate school I learned what is analogous to a new language. The properties of estimators, proofs, and theorems taught in graduate statistics courses suddenly made the things I had learned before make better sense. This background helped me translate and interpret other people's work and learn from it, and to learn new methodologies or extend others. But it was the seminars and applied research that made it come to life: learning to 'do science' through new ways of thinking about how to solve problems with statistics and experimental design. And interpretation, as Kevin says.

2) Graduate study is an extendable framework. Learning and doing statistics is a career-long process, one that recognizes the gulf between textbook and applied econometrics.