Monday, July 15, 2013

Quasi-experimental Designs in Financial Aid Research

 The following are some very good articles on regression discontinuity, difference-in-differences, and instrumental variables applications in assessing the impact of financial aid on students' college enrollment decisions:

van der Klaauw, W. (2002). Estimating the effect of financial aid offers on college enrollment: A regression-discontinuity approach. International Economic Review, 43(4), 1249–1287.

 “An important problem faced by colleges and universities, that of evaluating the effect of their financial aid offers on student enrollment decisions, is complicated by the likely endogeneity of the aid offer variable in a student enrollment equation. This article shows how discontinuities in an East Coast college’s aid assignment rule can be exploited to obtain credible estimates of the aid effect without having to rely on arbitrary exclusion restrictions and functional form assumptions. Semiparametric estimates based on a regression–discontinuity (RD) approach affirm the importance of financial aid as an effective instrument in competing with other colleges for students.”
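The core RD idea in the abstract above — comparing enrollment just above and just below an aid-assignment cutoff — can be sketched with simulated data. This is a toy illustration, not van der Klaauw's semiparametric estimator: the cutoff, the 0.15 "effect," and all other numbers are invented for the sketch.

```python
import random

random.seed(0)

CUTOFF = 0.0        # aid is offered when the "ability index" crosses this threshold
TRUE_EFFECT = 0.15  # simulated jump in enrollment probability (illustrative only)

# Simulate students: running variable x, binary enrollment outcome y.
students = []
for _ in range(20000):
    x = random.uniform(-1, 1)
    p = 0.4 + 0.2 * x + (TRUE_EFFECT if x >= CUTOFF else 0.0)
    y = 1 if random.random() < p else 0
    students.append((x, y))

def rd_estimate(data, bandwidth):
    """Difference in mean outcomes just above vs. just below the cutoff."""
    above = [y for x, y in data if CUTOFF <= x < CUTOFF + bandwidth]
    below = [y for x, y in data if CUTOFF - bandwidth <= x < CUTOFF]
    return sum(above) / len(above) - sum(below) / len(below)

print(round(rd_estimate(students, bandwidth=0.1), 3))
```

A naive difference in means like this carries some bias from the slope of the running variable inside the bandwidth, which is why applied work fits local regressions on each side of the cutoff instead.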

 Cellini, S. R. (2008). Causal inference and omitted variable bias in financial aid research: Assessing solutions. The Review of Higher Education, 31(3), 329–354.

 Discusses the shortcomings of multivariate regression estimates when used to estimate treatment effects in the presence of omitted variable and selection bias, in the context of analyzing the effects of financial aid offers on enrollment. Introduces quasi-experimental methods such as instrumental variables, difference-in-differences, and regression discontinuity. The paper argues that these methods have become the new standard (over multivariate regression) for program evaluation in higher education research and advocates using multiple methods to assess the robustness of results in the face of these challenges.

  Goodman, J. (2008). Who merits financial aid? Massachusetts' Adams Scholarship. Journal of Public Economics, 92(10–11), 2121–2131.

  This paper discusses the potential biases involved in simply comparing scholarship winners to losers. Consistent with Cellini (2008), it uses multiple methods to address these issues, including difference-in-differences (DD) and regression discontinuity (RD) estimates. The DD estimates identify treatment effects by comparing changes over time between earlier cohorts of students who would have received the scholarship had it existed and actual winners in the current cohort. The RD estimates (similar to van der Klaauw, 2002), by contrast, use a single contemporary cohort and rest on the identifying assumption that treatment assignment near the eligibility cutoff is as good as random, yielding an average treatment effect of the scholarship for students near the cutoff.
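The DD logic described above reduces to a double subtraction over four cell means: the change over time for the (would-be) eligible group, net of the change for the ineligible group. All of the rates below are made up purely for illustration; they are not Goodman's estimates.

```python
# Mean enrollment rates for four cells: (cohort, eligibility group).
# Values are invented for illustration only.
rates = {
    ("pre",  "would_have_won"):  0.62,  # earlier cohort, above the merit threshold
    ("pre",  "would_have_lost"): 0.55,
    ("post", "winners"):         0.71,  # cohort after the scholarship was introduced
    ("post", "losers"):          0.57,
}

# DD estimate: the eligible group's change over time, minus the ineligible
# group's change over time (which stands in for the common trend).
dd = ((rates[("post", "winners")] - rates[("pre", "would_have_won")])
      - (rates[("post", "losers")] - rates[("pre", "would_have_lost")]))
print(round(dd, 3))  # 0.07
```

The second subtraction is what removes any shared trend (e.g., a statewide rise in enrollment) that would contaminate a simple before/after comparison of winners alone.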



Friday, July 12, 2013

Some informative statements regarding Regression Discontinuity designs


From:
Regression Discontinuity Designs in Economics
David S. Lee and Thomas Lemieux, Journal of Economic Literature, 48 (June 2010): 281–355

RD as a weighted average treatment effect 

“In the presence of heterogeneous treatment effects, the discontinuity gap in an RD design can be interpreted as a weighted average treatment effect across all individuals”

RD ‘as good as’ random assignment

“the notion that the RD design generates local variation in treatment that is ‘as good as randomly assigned’ is helpful because we can apply known results for randomized instruments to the RD design”


Fuzzy RD and ‘intent-to-treat’ analysis 

“the fuzzy RD design can be described by the two equation system… estimating the treatment effect by instrumenting the treatment dummy… In this setting, (the estimate) can be interpreted as an ‘intent-to-treat’ effect”
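The fuzzy RD estimator Lee and Lemieux describe can be sketched as a Wald ratio: the jump in the outcome at the cutoff (the intent-to-treat effect) divided by the jump in treatment take-up (the first stage). This simulation is a toy under invented parameters, not their two-equation system; in particular, the 0.2/0.8 take-up rates and the 0.2 treatment effect are assumptions of the sketch.

```python
import random

random.seed(1)

CUTOFF = 0.0
TRUE_EFFECT = 0.2  # effect of actually receiving aid (illustrative only)

# Fuzzy RD: crossing the cutoff raises the *probability* of treatment
# from 0.2 to 0.8 rather than deterministically from 0 to 1.
data = []
for _ in range(50000):
    x = random.uniform(-1, 1)
    p_treat = 0.8 if x >= CUTOFF else 0.2
    d = 1 if random.random() < p_treat else 0          # treatment actually received
    y = 0.5 + TRUE_EFFECT * d + random.gauss(0, 0.1)   # outcome
    data.append((x, d, y))

def jump(above_vals, below_vals):
    """Mean just above the cutoff minus mean just below it."""
    return sum(above_vals) / len(above_vals) - sum(below_vals) / len(below_vals)

h = 0.1  # bandwidth around the cutoff
above = [(d, y) for x, d, y in data if CUTOFF <= x < CUTOFF + h]
below = [(d, y) for x, d, y in data if CUTOFF - h <= x < CUTOFF]

# Wald / IV estimate: intent-to-treat jump in the outcome,
# rescaled by the first-stage jump in treatment take-up.
itt = jump([y for _, y in above], [y for _, y in below])
first_stage = jump([d for d, _ in above], [d for d, _ in below])
wald = itt / first_stage
print(round(wald, 2))
```

The numerator alone is the “intent-to-treat” effect the quote refers to; dividing by the first stage recovers the effect of treatment on those whose take-up is shifted by crossing the cutoff.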