Gupta (2011) describes ITT:

*"ITT analysis includes every subject who is randomized according to randomized treatment assignment. It ignores noncompliance, protocol deviations, withdrawal, and anything that happens after randomization. ITT analysis is usually described as 'once randomized, always analyzed.'"*

In Mastering Metrics, Angrist and Pischke describe intent-to-treat analysis:

*"In randomized trials with imperfect compliance, when treatment assignment differs from treatment delivered, effects of random assignment...are called intention-to-treat (ITT) effects. An ITT analysis captures the causal effect of being assigned to treatment."*

*While treatment assignment is random, non-compliance is not! Therefore if instead of using intent to treat comparisons we compared those actually treated to those untreated (sometimes termed 'as treated' analysis) we would get biased results. When there is non-compliance, there is the likelihood that a relationship exists between potential outcomes and the actual treatment received. While the ITT approach gives an unbiased causal estimate of the treatment effect, it is often a diluted effect because of non-compliance issues and can provide an underestimate of the true effect (Angrist, 2006).*

Angrist and Pischke discuss how instrumental variables can be used in the context of an RCT with non-compliance issues:

*"Instrumental variable methods allow us to capture the causal effect of treatment on the treated in spite of the nonrandom compliance decisions made by participants in experiments....Use of randomly assigned intent to treat as an instrumental variable for treatment delivered eliminates this source of selection bias."*

The purpose of this post is to build intuition related to how an instrumental variable (IV) approach differs from ITT, and how it is not biased by selection related to non-compliance issues in the same way that an 'as treated' analysis would be.

My goal is to demonstrate with a rather simple data set how IVs tease out the biases from non-compliance and give us only the impact of treatment on the compliers, also known as the local average treatment effect (LATE).

A great example of IV and ITT applied to health care can be found in Finkelstein et al. (2013 & 2014) - see The Oregon Medicaid Experiment, Applied Econometrics, and Causal Inference.

For another post walking through the basic mechanics of instrumental variables (IV) estimation using a toy data set see: A Toy IV Application.

**Key Assumptions**

Depending on how you frame it, there are about five key things (assumptions if we want to call them that) we need to think about when leveraging instrumental variables - in humble language:

**1) SUTVA** - you can look that up, but basically it means no interactions or spillovers between the treatments and controls - my getting treated does not make a control case have a better or worse outcome as a result.

**2) Random Assignment** - that is the whole context of the discussion above - the instrument (Z), or treatment assignment, must be random.

**3) The Exclusion Restriction** - treatment assignment impacts the outcome only through the treatment itself. There is nothing about being in the randomly assigned treatment group that would, in and of itself, cause your outcome to be higher or lower other than actually receiving the treatment. Treatment assignment is ignorable. This is often represented as Z -> D -> Y, where Z is the instrument or random assignment, D is an indicator for actually receiving the treatment, and Y is the outcome.

**4) Non-zero causal effect of Z on D** - being assigned to the treatment group is highly correlated with actually receiving the treatment, i.e. when Z = 1 then D is usually 1 as well. (If these were perfectly correlated, that would imply perfect compliance.)

**5) Monotonicity** - we'll just call this an assumption of 'no defiers.' It means that there are no cases that always do the opposite of what their treatment assignment indicates, i.e. no one for whom D = 0 whenever Z = 1 AND D = 1 whenever Z = 0. Stated differently, we can't have cases that always get the treatment when assigned to the control group and never receive treatment when assigned to the treatment group.

**Types of Non-Compliance**

Given these assumptions, with monotonicity we end up with three different groups of people in our study:

**Never Takers:** those that refuse treatment regardless of treatment/control assignment.

**Always Takers:** those that get the treatment even if they are assigned to the control group.

**Compliers:** those that comply, i.e. receive treatment if assigned to the treatment group but do not receive treatment when assigned to the control group.

The compliers are characterized as participants that receive treatment only as a result of random assignment. The estimated treatment effect for these folks is often very desirable and in an IV framework can give us an unbiased causal estimate of the treatment effect. But how does this work?

**Discussion**

I have to first recommend a great post over at egap.org titled '10 Things to Know About Local Average Treatment Effects.' Most of my post is based on those well-thought-out examples.

Just to level set, the context of this discussion going forward is an RCT with the outcome measured as Y and treatment assignment used as the instrument Z. (This can be extended to other scenarios using other types of instruments.) Actual receipt of treatment, or treatment status, is indicated by D, with D = 1 indicating receipt of treatment. So an ITT analysis would simply be a comparison of outcomes for folks randomly assigned to treatment (Z = 1) vs. those that were controls (Z = 0), regardless of compliance or non-compliance (determined by D). An 'as treated' analysis would be a comparison of everyone that received the treatment (D = 1) vs. those that did not (D = 0), regardless of randomization. This is a biased analysis. The IV or local average treatment effect (LATE) estimate is the difference in outcomes for compliers.

Going back to the original article by Angrist, Imbens, and Rubin (1996), they discuss IVs, LATEs, and the types of noncompliance as they relate to the assumptions we previously discussed. They explain that the treatment status (D) of the always takers and never takers is invariant (uncorrelated) to random assignment Z: no matter what Z is, they are going to do what they are going to do. But we also know that Z (by the definition of compliance and assumption 4) is correlated with actual treatment received, D, for the compliers.

Let's consider an RCT with one-sided non-compliance. In this case the controls are not able to receive the treatment by nature of the design, so there are no 'always takers' in this discussion. Below is a table summarizing a scenario like this with 100 people randomly assigned to treatment (Z = 1) and 100 controls (Z = 0). (This can be extended to include always takers; the egap.org post I mentioned before walks through that scenario.)

| Group | Z | D | N | Average weight lost (Y) |
|---|---|---|---|---|
| Never takers | 1 | 0 | 20 | 5 |
| Compliers | 1 | 1 | 80 | 25 |
| Never takers | 0 | 0 | 20 | 5 |
| Compliers | 0 | 0 | 80 | 20 |

For storytelling purposes, let's assume the 'treatment' is a weight loss program. We've got some really unmotivated folks (never takers) in both the treatment and control group that just don't comply with the treatment. Let's say on average they all end up losing 5 pounds (Y = 5) regardless of the group they are in. On the other hand, we have more conscientious folks (compliers) that will participate if randomly assigned to treatment, losing 25 pounds on average. But they are motivated and healthy, so even in the absence of treatment their potential outcomes (weight loss) are pretty favorable: they are bound to lose 20 pounds even without the program.
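The example can be laid out as a small dataset, one row per person. Here is a minimal sketch in Python (the post's own code is in R; see the gist linked at the end) that builds the data with 20 never takers and 80 compliers per arm of 100:

```python
# Toy data matching the example: never takers lose 5 lbs no matter what;
# compliers lose 25 lbs if treated and 20 lbs if untreated. One-sided
# non-compliance: controls cannot receive treatment, so D = 1 only for
# compliers assigned to treatment (Z = 1). Each row is (Z, D, Y).
rows = (
    [(1, 0, 5.0)] * 20 +   # Z = 1, never takers (refuse treatment)
    [(1, 1, 25.0)] * 80 +  # Z = 1, compliers (take treatment)
    [(0, 0, 5.0)] * 20 +   # Z = 0, never takers
    [(0, 0, 20.0)] * 80    # Z = 0, compliers (no treatment available)
)

# Average weight lost by randomly assigned group
mean_z1 = sum(y for z, d, y in rows if z == 1) / 100
mean_z0 = sum(y for z, d, y in rows if z == 0) / 100
print(mean_z1, mean_z0)  # 21.0 17.0
```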

As discussed before, we can see how when there is non-compliance, there is the likelihood that a relationship exists between potential outcomes and the actual treatment received.

If we ignore treatment assignment and just compare the average weight lost (Y) for those that received treatment to those that did not, we could run the following regression:

Y = β0 + β1 D + e

with β1 = 10 (see the R code that generates this data and these results)

We could calculate this by hand as: 25 - [(2/3)*20 + (1/3)*5] = 25 - 15 = 10

We know that non-compliance biases this estimate.
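Assuming the toy data laid out above, the 'as treated' contrast can be checked directly with a short Python sketch (the post's own code is in R):

```python
# Rebuild the toy data: (Z, D, Y) per person
rows = ([(1, 0, 5.0)] * 20 + [(1, 1, 25.0)] * 80 +
        [(0, 0, 5.0)] * 20 + [(0, 0, 20.0)] * 80)

# 'As treated': compare everyone with D = 1 to everyone with D = 0,
# ignoring random assignment Z entirely
treated = [y for z, d, y in rows if d == 1]    # 80 treated compliers
untreated = [y for z, d, y in rows if d == 0]  # 40 never takers + 80 untreated compliers
as_treated = sum(treated) / len(treated) - sum(untreated) / len(untreated)
print(as_treated)  # 10.0
```

The untreated group mixes never takers and compliers, which is exactly the selection problem: its mean of 15 is dragged around by who ends up in it, not by the treatment.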

The ITT estimate can be estimated as:

Y = β0 + β1 Z + e

with β1 = 4

We can see from the data this is simply the difference in means between the treatment and control group: [.2*5 + .8*25] - [.2*5 + .8*20] = 21 - 17 = 4

We know from the discussion above and can see from the data that this is greatly diluted by noncompliance. But because of randomization this is an unbiased estimate.
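In the same Python sketch on the toy data, the ITT estimate is just a difference in means by assignment:

```python
# Toy data: (Z, D, Y) per person
rows = ([(1, 0, 5.0)] * 20 + [(1, 1, 25.0)] * 80 +
        [(0, 0, 5.0)] * 20 + [(0, 0, 20.0)] * 80)

# ITT: compare by random assignment Z, regardless of actual treatment D
itt = (sum(y for z, d, y in rows if z == 1) / 100 -
       sum(y for z, d, y in rows if z == 0) / 100)
print(itt)  # 4.0
```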

Finally, the IV or local average treatment effect (LATE) estimate is the difference in outcomes for compliers.

Because our example above is contrived, the outcomes for the compliers are made explicit in the table above. If you knew exactly who the compliers were, the math would be straightforward:

LATE = 25 - 20 = 5

You can also get the LATE by dividing the ITT effect by the share of compliers:

4/.8 = 5

In a previous post, I've described how an IV estimate teases out only that variation in our treatment D that is unrelated to selection bias and relates it to Y giving us an estimate for the treatment effect of D that is less biased.

We can view this through the lens of a 2SLS modeling strategy:

Stage 1: Regress D on Z and keep the fitted values D*

D = β0 + β1 Z + e

β1 only picks up the variation in D that is related to Z (i.e. *quasi-experimental variation*) and leaves all of the variation in D related to non-compliance and selection in the residual term. You can think of this as working like a filtering process.

Stage 2: Regress Y on D*

Y = β0 + βIV D* + e

The second stage relates changes in D* (the *quasi-experimental variation* in D driven by Z) to changes in our target Y.

We can see (from the R code below) that our estimate βIV = 5.
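The two stages can also be carried out by hand. Here is a minimal Python sketch of 2SLS on the toy data (point estimate only; unlike ivreg below, it takes no care with standard errors):

```python
# Toy data: (Z, D, Y) per person
rows = ([(1, 0, 5.0)] * 20 + [(1, 1, 25.0)] * 80 +
        [(0, 0, 5.0)] * 20 + [(0, 0, 20.0)] * 80)
Z = [z for z, d, y in rows]
D = [d for z, d, y in rows]
Y = [y for z, d, y in rows]

def ols(x, y):
    """Slope and intercept from a simple one-regressor OLS fit."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) /
             sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Stage 1: regress D on Z and keep the fitted values D*
b1, a1 = ols(Z, D)
D_star = [a1 + b1 * z for z in Z]

# Stage 2: regress Y on the fitted values D*
beta_iv, _ = ols(D_star, Y)
print(round(beta_iv, 6))  # 5.0
```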

We can also get the same result (and correct standard errors) by using the ivreg function from the AER package in R:

`summary(ivreg(y ~ D | Z, data = df))`

**R Code:** https://gist.github.com/BioSciEconomist/a72fae6e01053fdb6d13c9a80d8e39f9

**References:**

Angrist, J. D., Imbens, G. W., & Rubin, D. B. (1996). Identification of causal effects using instrumental variables. Journal of the American Statistical Association, 91(434), 444-455. www.jstor.org/stable/2291629

Angrist, J. D. (2006). Instrumental variables methods in experimental criminological research: What, why and how. Journal of Experimental Criminology, 2(1), 23-44. https://doi.org/10.1007/s11292-005-5126-x

Baicker, K., et al. (2013). The Oregon Experiment--Effects of Medicaid on clinical outcomes. New England Journal of Medicine, 368, 1713-1722. http://www.nejm.org/doi/full/10.1056/NEJMsa1212321

Taubman, S. L., Allen, H. L., Wright, B. J., Baicker, K., & Finkelstein, A. N. (2014). Medicaid increases emergency-department use: Evidence from Oregon's Health Insurance Experiment. Science, published online 2 January 2014. https://doi.org/10.1126/science.1246183

Gupta, S. K. (2011). Intention-to-treat concept: A review. Perspectives in Clinical Research, 2(3), 109-112. http://doi.org/10.4103/2229-3485.83221
