Monday, July 15, 2013

Quasi-experimental Designs in Financial Aid Research

The following articles apply regression discontinuity, difference-in-differences, and instrumental variable methods to assess the impact of financial aid on students' college enrollment decisions:

van der Klaauw, W. (2002). Estimating the effect of financial aid offers on college enrollment: A regression-discontinuity approach. International Economic Review, 43(4), 1249–1287.

 “An important problem faced by colleges and universities, that of evaluating the effect of their financial aid offers on student enrollment decisions, is complicated by the likely endogeneity of the aid offer variable in a student enrollment equation. This article shows how discontinuities in an East Coast college’s aid assignment rule can be exploited to obtain credible estimates of the aid effect without having to rely on arbitrary exclusion restrictions and functional form assumptions. Semiparametric estimates based on a regression–discontinuity (RD) approach affirm the importance of financial aid as an effective instrument in competing with other colleges for students.”
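The core regression-discontinuity idea the abstract describes (aid assigned when an eligibility index crosses a cutoff) can be illustrated with a minimal sketch on simulated data. The variable names, cutoff, bandwidth, and coefficients below are hypothetical and chosen only for illustration; they are not taken from van der Klaauw (2002).

```python
# Minimal sharp-RD sketch on simulated data (all names and numbers hypothetical).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
index = rng.uniform(-1, 1, n)          # aid-eligibility index, cutoff at 0
offer = (index >= 0).astype(int)       # aid offered once the index crosses the cutoff
# enrollment is smooth in the index except for a jump at the cutoff
enroll = (0.3 + 0.2 * index + 0.15 * offer + rng.normal(0, 0.2, n)) > 0.5
df = pd.DataFrame({"enroll": enroll.astype(int), "index": index, "offer": offer})

# local linear regression within a bandwidth around the cutoff,
# allowing different slopes on each side
bw = 0.25
local = df[df["index"].abs() <= bw]
fit = smf.ols("enroll ~ offer + index + offer:index", data=local).fit()
print(fit.params["offer"])   # estimated jump in enrollment probability at the cutoff
```

The coefficient on the offer indicator is the estimated discontinuity in enrollment at the cutoff, which is the aid effect for students near the threshold.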

Cellini, S. R. (2008). Causal inference and omitted variable bias in financial aid research: Assessing solutions. The Review of Higher Education, 31(3), 329–354.

Discusses the shortcomings of multivariable regression when it is used to estimate the effect of financial aid offers on enrollment in the presence of omitted variable and selection bias. Introduces quasi-experimental alternatives, including instrumental variables, difference-in-differences, and regression discontinuity. The paper argues that these methods have become the new standard (over multivariable regression) for program evaluation in higher education research and advocates using multiple methods to assess the robustness of results.
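Of the three approaches Cellini covers, instrumental variables is the one not illustrated elsewhere in this post, so here is a minimal two-stage least squares sketch on simulated data. The instrument, variable names, and coefficients are hypothetical; note also that second-stage standard errors from this manual approach are not corrected for the generated regressor.

```python
# Minimal manual 2SLS (IV) sketch on simulated data (all names hypothetical).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
ability = rng.normal(size=n)        # unobserved confounder driving both aid and enrollment
distance = rng.uniform(0, 1, n)     # hypothetical instrument: shifts aid, not enrollment directly
aid = 0.5 - 0.4 * distance + 0.3 * ability + rng.normal(0, 0.1, n)
enroll = 0.2 + 0.25 * aid + 0.3 * ability + rng.normal(0, 0.1, n)

# first stage: predict aid from the instrument
X1 = sm.add_constant(distance)
aid_hat = sm.OLS(aid, X1).fit().predict(X1)

# second stage: regress enrollment on predicted aid
X2 = sm.add_constant(aid_hat)
second = sm.OLS(enroll, X2).fit()
print(second.params[1])   # IV estimate of the aid effect (close to the true 0.25)
```

A naive OLS of enrollment on aid in this setup would be biased upward because unobserved ability raises both; the instrument isolates variation in aid that is unrelated to ability.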

Goodman, J. (2008). Who merits financial aid? Massachusetts' Adams Scholarship. Journal of Public Economics, 92(10–11), 2121–2131.

This paper discusses the biases involved in simply comparing scholarship winners to losers. Consistent with Cellini (2008), it uses multiple methods to address these issues, including difference-in-differences and regression discontinuity estimates. The DD estimates identify the treatment effect by comparing enrollment over time between current winners and earlier cohorts of students who would have received the scholarship had the program existed then. The RD estimates (similar to van der Klaauw, 2002), by contrast, use a single contemporary cohort and rely on the identifying assumption that treatment assignment near the eligibility cutoff is as good as random, yielding an average treatment effect of the scholarship for students near the cutoff.
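The difference-in-differences logic described above amounts to interacting an eligibility indicator with a post-program cohort indicator. The sketch below shows the mechanics on simulated data; the group labels and effect sizes are hypothetical and are not meant to reproduce Goodman's estimates.

```python
# Minimal difference-in-differences sketch on simulated cohort data
# (labels and effect sizes hypothetical).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 8000
eligible = rng.integers(0, 2, n)     # would meet the scholarship criteria
post = rng.integers(0, 2, n)         # cohort enrolled after the program began
# enrollment rises by 0.05 only for eligible students in post-program cohorts
p = 0.4 + 0.1 * eligible + 0.02 * post + 0.05 * eligible * post
enroll = rng.binomial(1, p)
df = pd.DataFrame({"enroll": enroll, "eligible": eligible, "post": post})

fit = smf.ols("enroll ~ eligible + post + eligible:post", data=df).fit()
print(fit.params["eligible:post"])   # DD estimate of the scholarship effect
```

The interaction coefficient recovers the treatment effect only under the parallel-trends assumption, i.e., that eligible and ineligible students' enrollment would have evolved similarly absent the program.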


