I recently ran across:
The State of Applied Econometrics - Causality and Policy Evaluation
Susan Athey, Guido Imbens
https://arxiv.org/abs/1607.00699v1
A nice read, although I skipped directly to the section on machine learning. A few of the causality and machine learning comments stood out.
They discuss some known issues with estimating propensity scores via various machine learning algorithms, particularly the sensitivity of results when propensity scores are close to 0 or 1. They discuss trimming weights as one possible remedy, an approach I have seen before in Angrist and Pischke and other work (see below). In fact, in a working paper where I employed gradient boosting to estimate propensity scores for IPTW regression, I trimmed the weights. However, I did not trim them for the stratified matching estimator that I also used. I wish I still had the data, because I would like to see the impact on my previous results.
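For concreteness, below is a rough sketch of that kind of workflow: boosted propensity scores, IPTW weights, and trimming of extreme weights. The data frame df, binary treatment indicator treat, outcome y, and covariates x1-x3 are hypothetical, and capping weights at their 1st and 99th percentiles is just one of several trimming rules people use.

```r
# Sketch: boosted propensity scores, IPTW weights, and weight trimming.
# Assumes a data frame df with a 0/1 treatment 'treat', outcome 'y',
# and covariates x1, x2, x3 (hypothetical names).
library(gbm)

set.seed(123)

# 1. Estimate propensity scores with gradient boosting
ps_fit <- gbm(treat ~ x1 + x2 + x3,
              data = df,
              distribution = "bernoulli",
              n.trees = 1000,
              interaction.depth = 3,
              shrinkage = 0.01)

ps <- predict(ps_fit, newdata = df, n.trees = 1000, type = "response")

# 2. Inverse probability of treatment weights (ATE version)
w <- ifelse(df$treat == 1, 1 / ps, 1 / (1 - ps))

# 3. Trim (truncate) extreme weights at the 1st and 99th percentiles
cutoffs <- quantile(w, probs = c(0.01, 0.99))
w_trim  <- pmin(pmax(w, cutoffs[1]), cutoffs[2])

# 4. IPTW regression using the trimmed weights
iptw_fit <- lm(y ~ treat + x1 + x2 + x3, data = df, weights = w_trim)
summary(iptw_fit)
```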
Another interesting application discussed in the paper is a two (or arguably three) stage LASSO estimation (they also have a good general discussion of penalized regression and regularization in machine learning): first run LASSO to select variables related to the outcome of interest, then run LASSO to select variables related to selection (treatment), and finally run OLS to estimate a causal model that includes the variables selected in the previous LASSO steps.
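As a rough illustration of this idea, something like the following could be done with the glmnet package. The names y (outcome), d (binary treatment/selection indicator), and X (numeric covariate matrix) are assumptions, and using the cross-validated lambda and the union of the selected variables is just one reasonable way to implement the steps described above.

```r
# Sketch of the two-step LASSO selection followed by OLS, using glmnet.
# Assumes an outcome y, a 0/1 treatment indicator d, and a numeric
# covariate matrix X (hypothetical names).
library(glmnet)

set.seed(123)

# Step 1: LASSO of the outcome on the covariates
fit_y <- cv.glmnet(X, y, alpha = 1)
b_y   <- as.vector(coef(fit_y, s = "lambda.min"))[-1]  # drop intercept
sel_y <- which(b_y != 0)

# Step 2: LASSO of the treatment/selection indicator on the covariates
fit_d <- cv.glmnet(X, d, family = "binomial", alpha = 1)
b_d   <- as.vector(coef(fit_d, s = "lambda.min"))[-1]
sel_d <- which(b_d != 0)

# Step 3: OLS including the treatment and the union of selected covariates
selected  <- union(sel_y, sel_d)
ols_dat   <- data.frame(y = y, d = d, X[, selected, drop = FALSE])
final_fit <- lm(y ~ ., data = ols_dat)
summary(final_fit)
```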
The paper covers a range of other topics as well, including decision trees, random forests, distinctions between traditional econometrics and machine learning, and instrumental variables.
Some Additional Notes and References:
Multiple Algorithms (CART/Logistic Regression/Boosting/Random Forests) with PS weights and trimming:
http://econometricsense.blogspot.com/2013/04/propensity-score-weighting-logistic-vs.html
Following Angrist and Pischke, I present results for regressions utilizing data that has been 'screened' by eliminating observations where the estimated propensity score is > .90 or < .10, using the R MatchIt package (a rough sketch of this screening follows below):
http://econometricsense.blogspot.com/2015/03/using-r-matchit-package-for-propensity.html
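A minimal sketch of that screening rule, assuming MatchIt 4.x syntax and a hypothetical data frame df with binary treatment treat, outcome y, and covariates x1-x3; the propensity scores from the matchit call are used only to flag and drop observations outside the [.10, .90] range before running the regression.

```r
# Sketch: screen out observations with extreme propensity scores
# (ps > .90 or ps < .10) before running the outcome regression.
# Assumes a data frame df with binary treatment 'treat', outcome 'y',
# and covariates x1, x2, x3 (hypothetical names); MatchIt 4.x syntax.
library(MatchIt)

m_out <- matchit(treat ~ x1 + x2 + x3,
                 data = df,
                 method = "nearest",
                 distance = "glm")   # logistic-regression propensity scores

ps <- m_out$distance                 # estimated propensity scores

# Keep only observations with .10 <= ps <= .90
df_screened <- df[ps >= 0.10 & ps <= 0.90, ]

screened_fit <- lm(y ~ treat + x1 + x2 + x3, data = df_screened)
summary(screened_fit)
```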
Estimating the Causal Effect of Advising Contacts on Fall to Spring Retention Using Propensity Score Matching and Inverse Probability of Treatment Weighted Regression
Matt Bogard, Western Kentucky University
Abstract
In the fall of 2011, academic advising and residence life staff working for a southeastern university utilized a newly implemented advising software system to identify students based on attrition risk. Advising contacts, appointments, and support services were prioritized based on this new system, and information regarding the characteristics of these interactions was captured in an automated format. The goal of this study was to investigate the impact of this advising initiative on fall to spring retention rates. It is a challenge on college campuses to evaluate interventions that are often independent and decentralized across many university offices and organizations. In this study, propensity score methods were utilized to address issues related to selection bias. The findings indicate that advising contacts associated with the utilization of the new software had a statistically significant impact on fall to spring retention for first year students, on the order of a 3.26 point improvement over comparable students who were not contacted.
Suggested Citation
Matt Bogard. 2013. "Estimating the Causal Effect of Advising Contacts on Fall to Spring Retention Using Propensity Score Matching and Inverse Probability of Treatment Weighted Regression." The SelectedWorks of Matt Bogard. Available at: http://works.bepress.com/matt_bogard/25