*"ITT analysis includes every subject who is randomized according to randomized treatment assignment. It ignores noncompliance, protocol deviations, withdrawal, and anything that happens after randomization. ITT analysis is usually described as 'once randomized, always analyzed'."*

"ITT analysis avoids overoptimistic estimates of the efficacy of an intervention resulting from the removal of non-compliers by accepting that noncompliance and protocol deviations are likely to occur in actual clinical practice" (Gupta, 2011)


In Mastering Metrics, Angrist and Pischke describe intent-to-treat analysis:

*"In randomized trials with imperfect compliance, when treatment assignment differs from treatment delivered, effects of random assignment...are called intention-to-treat (ITT) effects. An ITT analysis captures the causal effect of being assigned to treatment."*

While treatment assignment is random, non-compliance is not! They point out that non-compliance in an ITT framework creates selection bias. However, this can be handled:

*"ITT effects divided by the difference in compliance rates between treatment and control groups capture the causal effect*"

Say what? Well, how does that work?

Let's look at this. Suppose we have a randomized trial of a treatment with outcome Y, where Z = 1 if assigned to the treatment group and 0 if assigned to control. An intent-to-treat estimate (omitting other covariates) could be obtained with the following regression:

Y = b0 + b1*Z + e (1)

*ITT or 'reduced form'*

where b1 = COV(Y, Z) / V(Z)
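As a concrete illustration, here is a minimal simulation (all numbers hypothetical: a true treatment effect of 2.0, roughly 50% compliance in the treatment arm) computing the ITT estimate directly from the covariance formula above:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical trial: true effect of treatment delivered = 2.0,
# compliance among the assigned is imperfect and correlated with U
Z = rng.integers(0, 2, n)                # random assignment (0/1)
U = rng.normal(size=n)                   # unobserved confounder
comply = (U + rng.normal(size=n)) > 0    # ~50% comply, related to U
D = Z * comply                           # treatment actually received
Y = 2.0 * D + U + rng.normal(size=n)     # outcome

# ITT / 'reduced form': b1 = COV(Y, Z) / V(Z)
b1 = np.cov(Y, Z)[0, 1] / np.cov(Z, Z)[0, 1]
print(f"ITT estimate: {b1:.2f}")         # roughly 2.0 effect x 0.5 compliance = 1.0
```

Notice that the ITT estimate is diluted toward roughly half the true effect, because only about half of those assigned to treatment actually received it.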

If we let D = 1 for those in the study that actually received treatment, i.e., compliers, and D = 0 indicate the non-treated, or non-compliers, then the difference in compliance rates between treatment and control groups can be estimated as:

D = b0 + b2*Z + e (2)

*'1st stage'*

where b2 = COV(D, Z) / V(Z)

It turns out, then, as suggested by Angrist and Pischke, that dividing our ITT effect by the difference in compliance rates is precisely the ratio of the reduced form to the first stage estimate. Mathematically, this is an instrumental variables framework.

b(IV) = b1/b2 = COV(Y, Z) / COV(D, Z)
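A sketch of the full calculation, using hypothetical simulated data (assumed numbers: true effect 2.0, ~50% compliance, with compliance correlated with the unobservable U), shows why the ratio works while a naive 'as-treated' comparison does not:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

Z = rng.integers(0, 2, n)                # random assignment
U = rng.normal(size=n)                   # unobserved confounder
comply = (U + rng.normal(size=n)) > 0    # noncompliance related to U
D = Z * comply                           # treatment actually received
Y = 2.0 * D + U + rng.normal(size=n)     # true effect of D is 2.0

cov = lambda a, b: np.cov(a, b)[0, 1]
b1 = cov(Y, Z) / cov(Z, Z)               # reduced form (ITT)
b2 = cov(D, Z) / cov(Z, Z)               # first stage (compliance difference)
b_iv = b1 / b2                           # IV / Wald estimator

# naive 'as treated' comparison, vulnerable to selection bias
naive = Y[D == 1].mean() - Y[D == 0].mean()
print(f"ITT: {b1:.2f}  first stage: {b2:.2f}  IV: {b_iv:.2f}  as-treated: {naive:.2f}")
```

The as-treated comparison is inflated because compliers have higher U on average; dividing the diluted ITT estimate by the first-stage compliance difference recovers the true effect of 2.0.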

The random assignment, or intent-to-treat flag Z, becomes our *instrumental variable* for treatment delivered, D. Angrist and Pischke describe IVs this way:

*“The instrumental variables (IV) method harnesses partial or incomplete random assignment, whether naturally occurring or generated by researchers"*

This is a powerful method of eliminating selection bias:

*"Use of randomly assigned intention to treat as an instrumental variable for treatment delivered eliminates this source of selection bias."*

(For more information and some toy examples showing how this works, see the links below)

In *Intent-to-Treat vs. Non-Intent-to-Treat Analyses under Treatment Non-Adherence in Mental Health Randomized Trials*, there is a nice discussion of ITT and IV methods with applications related to clinical research. The paper gives a good treatment of IV in this context:

*“Instrumental variables are assumed to emulate randomization variables, unrelated to unmeasured confounders influencing the outcome. In the case of randomized trials, the same randomized treatment assignment variable used in defining treatment groups in the ITT analysis is instead used as the instrumental variable in IV analyses. In particular, the instrumental variable is used to obtain for each patient a predicted probability of receiving the experimental treatment. Under the assumptions of the IV approach, these predicted probabilities of receipt of treatment are unrelated to unmeasured confounders in contrast to the vulnerability of the actually observed receipt of treatment to hidden bias. Therefore, these predicted treatment probabilities replace the observed receipt of treatment or treatment adherence in the AT model to yield an estimate of the as-received treatment effect protected against hidden bias when all of the IV assumptions hold.”*
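The two-stage logic in that passage can be sketched directly: use Z to form a predicted probability of receiving treatment for each subject, then regress the outcome on those predictions instead of on observed receipt of treatment. (Same kind of hypothetical simulated trial as before; numbers are assumptions, not from the paper.)

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

Z = rng.integers(0, 2, n)                 # instrument: random assignment
U = rng.normal(size=n)                    # unobserved confounder
D = Z * ((U + rng.normal(size=n)) > 0)    # observed receipt, related to U
Y = 2.0 * D + U + rng.normal(size=n)      # true effect 2.0

# Stage 1: predicted probability of receiving treatment, from Z alone
b2 = np.cov(D, Z)[0, 1] / np.cov(Z, Z)[0, 1]
D_hat = D.mean() + b2 * (Z - Z.mean())    # fitted values from D ~ Z

# Stage 2: regress Y on the predicted (not observed) treatment
b_2sls = np.cov(Y, D_hat)[0, 1] / np.cov(D_hat, D_hat)[0, 1]
print(f"2SLS estimate: {b_2sls:.2f}")
```

Because D_hat is a linear function of Z, the second-stage slope is algebraically identical to the Wald ratio b1/b2, and the predicted probabilities inherit the instrument's independence from U.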

A great example of IV and ITT applied to health care can be found in Finkelstein et al. (2013 & 2014); see The Oregon Medicaid Experiment, Applied Econometrics, and Causal Inference.

Over at the Incidental Economist, there was a nice discussion of ITT in the context of medical research that does a good job of explaining the rationale as well as when departures from ITT make more sense (such as safety and non-inferiority trials).

The regression algebra above can be informative. For example, if compliance were perfect, a simple comparison between treatment and controls as indicated by the treatment indicator Z would yield unbiased treatment effects.

Y = b0 + b1*Z + e

This is simply the ITT estimate where b1 = COV(Y, Z) / V(Z), which is an unbiased estimate of treatment effects when there is no selection bias.

With perfect compliance, the IV will collapse to give us the same result as an ITT estimate. In this case D = Z and the regression

D = b0 + b2*Z + e

will be an identity, so b2 = COV(D, Z) / V(Z) = 1, and the IV estimator gives us b1/1 = b1, which is our ITT estimate.

With imperfect compliance, the denominator departs from 1, allowing us to adjust our ITT estimate in a way that removes selection bias related to unobservables.
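This collapse under perfect compliance is easy to verify in simulation (hypothetical numbers again, with a true effect of 2.0):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

Z = rng.integers(0, 2, n)        # random assignment
D = Z.copy()                     # perfect compliance: D = Z exactly
Y = 2.0 * D + rng.normal(size=n) # true effect 2.0

b1 = np.cov(Y, Z)[0, 1] / np.cov(Z, Z)[0, 1]  # ITT / reduced form
b2 = np.cov(D, Z)[0, 1] / np.cov(Z, Z)[0, 1]  # first stage: exactly 1
print(f"first stage: {b2:.1f}  ITT: {b1:.2f}  IV: {b1 / b2:.2f}")
```

With D identical to Z, the first stage is exactly 1 and the IV estimate coincides with the ITT estimate.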

**See also:**

Instrumental Explanations of Instrumental Variables

A Toy IV Application

Other IV Related Posts

**References:**

Mastering ’Metrics:

The Path from Cause to Effect

Joshua D. Angrist & Jörn-Steffen Pischke

2015

Gupta, S. K. (2011). Intention-to-treat concept: A review. Perspectives in Clinical Research, 2(3), 109–112. http://doi.org/10.4103/2229-3485.83221

Ten Have, T. R., Normand, S.-L. T., Marcus, S. M., Brown, C. H., Lavori, P., & Duan, N. (2008). Intent-to-Treat vs. Non-Intent-to-Treat Analyses under Treatment Non-Adherence in Mental Health Randomized Trials. Psychiatric Annals, 38(12), 772–783. http://doi.org/10.3928/00485713-20081201-10

"The Oregon Experiment--Effects of Medicaid on Clinical Outcomes," by Katherine Baicker, et al. New England Journal of Medicine, 2013; 368:1713-1722. http://www.nejm.org/doi/full/10.1056/NEJMsa1212321

Medicaid Increases Emergency-Department Use: Evidence from Oregon's Health Insurance Experiment. Sarah L. Taubman,Heidi L. Allen, Bill J. Wright, Katherine Baicker, and Amy N. Finkelstein. Science 1246183Published online 2 January 2014 [DOI:10.1126/science.1246183]

Detry MA, Lewis RJ. The Intention-to-Treat Principle: How to Assess the True Effect of Choosing a Medical Treatment. *JAMA*. 2014;312(1):85-86. doi:10.1001/jama.2014.7523
